# Waveform Optimization and Beam Focusing for Near-field Wireless Power
Transfer with Dynamic Metasurface Antennas and Non-linear Energy Harvesters
Amirhossein Azarbahram, Onel L. A. López, and Matti Latva-Aho
A. Azarbahram, O. López, and M. Latva-Aho are with the Centre for Wireless Communications (CWC), University of Oulu, Finland (e-mail: {amirhossein.azarbahram, onel.alcarazlopez, matti.latva-aho}@oulu.fi). This work is partially supported in Finland by the Finnish Foundation for Technology Promotion, the Academy of Finland (Grants 348515 and 346208 (6G Flagship)), the Finnish-American Research and Innovation Accelerator, and by the European Commission through the Horizon Europe/JU SNS project Hexa-X-II (Grant Agreement no. 101095759).
###### Abstract
Radio frequency (RF) wireless power transfer (WPT) is a promising technology
for future wireless systems. However, the low power transfer efficiency (PTE)
is a critical challenge for practical implementations. One of the main
inefficiency sources is the power consumption and loss introduced by key
components such as the high-power amplifier (HPA) and the rectenna; thus, these
must be carefully considered for PTE optimization. Herein, we consider a near-field
RF-WPT system with the emerging dynamic metasurface antenna (DMA) at the
transmitter and non-linear energy harvesters. We provide a mathematical
framework to calculate the power consumption and harvested power from multi-
tone signal transmissions. Then, we propose an approach relying on alternating
optimization and successive convex approximation for waveform optimization and
beam focusing to minimize power consumption while meeting energy harvesting
requirements. Numerical results show that increasing the number of transmit
tones reduces the power consumption by leveraging the rectifier’s non-linearity.
Moreover, it is demonstrated that increasing the antenna length improves the
performance, while both DMA and fully-digital architectures may be favorable
depending on the setup. Finally, our results verify that the transmitter
focuses energy precisely on devices located in the near-field region, while
forming energy beams toward the devices’ directions in the far-field region.
###### Index Terms:
Radio frequency wireless power transfer, waveform design, beamforming, dynamic
metasurface antennas, near-field channels.
## I Introduction
Future wireless systems will facilitate efficient and eco-friendly
communication across a myriad of low-power devices, fostering a sustainable
society. Achieving this requires uninterrupted connectivity among these
devices and with the underlying infrastructure, all while mitigating
disruptions arising from battery depletion [1, 2, 3]. This is potentially
facilitated by energy harvesting (EH) technologies providing wireless
charging, thus, easing the maintenance of Internet of Things (IoT) devices and
increasing their lifespan. Moreover, EH may lead to improved energy efficiency
and reduced emission footprints across the network [4].
EH receivers may harvest energy from two types of sources: those that exist in
the surrounding environment, and those that are specifically designated for
energy transmission. From a transmission perspective, the latter is supported
by wireless power transfer (WPT) technologies, e.g., based on inductive
coupling, magnetic resonance coupling, laser power beaming, and radio
frequency (RF) radiation. Among them, RF-WPT is promising for charging
multiple users relatively far from the transmitter by exploiting the broadcast
nature of wireless channels. Furthermore, this can be accomplished over the
same infrastructure used for wireless communications. Notably, the most
important challenge toward maturing RF-WPT is related to increasing the
inherently low end-to-end power transfer efficiency (PTE) [4]. Herein, we
focus on RF-WPT, which is referred to as WPT in the following.
### I-A Preliminaries
Figure 1: Block diagram of a typical WPT system. The power consumption and
loss sources are listed under each block.
The end-to-end PTE depends on the performance of the key building blocks,
namely energy transmitter (ET), wireless channel, and energy receiver (ER) as
illustrated in Fig. 1. At first, a signal is generated and amplified using a
direct current (DC) power source at the ET. Then, it is upconverted to the RF
domain and transmitted over the wireless channel. Finally, the ER converts the
received RF signal to DC for EH purposes. Indeed, the end-to-end PTE
comprises: DC-to-RF, RF-to-RF, and RF-to-DC power conversion efficiency, i.e.,
$e=\underbrace{\frac{P^{t}_{rf}}{P^{t}_{dc}}}_{e_{1}}\times\underbrace{\frac{P^{r}_{rf}}{P^{t}_{rf}}}_{e_{2}}\times\underbrace{\frac{P^{r}_{dc}}{P^{r}_{rf}}}_{e_{3}}=\frac{P^{r}_{dc}}{P^{t}_{dc}}.$
(1)
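As a quick numerical illustration, the end-to-end PTE in (1) is simply the product of the three stage efficiencies; the sketch below uses assumed stage values for illustration only.

```python
# Hypothetical stage efficiencies (assumptions, not measured values):
e1 = 0.35   # DC-to-RF conversion efficiency at the transmitter
e2 = 1e-3   # RF-to-RF efficiency of the wireless channel
e3 = 0.20   # RF-to-DC conversion efficiency at the rectenna
e = e1 * e2 * e3                     # end-to-end PTE per (1)
print(f"end-to-end PTE e = {e:.2e}")  # 7.00e-05
```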
In WPT, both transmitter and receiver introduce non-linearities to the signals
affecting the amount of harvested power. Indeed, an appropriately designed
transmit signal leveraging these non-linearities may reduce the power
consumption at the transmitter and/or increase the RF-to-DC conversion
efficiency at the receiver [5, 6]. Specifically, using multiple transmit
frequency tones can lead to high peak-to-average power ratio (PAPR) signals at
the receiver, enhancing the rectifier RF-to-DC conversion efficiency [7].
Meanwhile, the RF signal can be focused towards the ER using energy
beamforming (EB), which affects the transmit/receive waveform, to cope with
the channel inefficiencies captured by $e_{2}$, thereby enhancing the amount
of RF power that can be harvested [4]. Notice that the active transmit
components consume power to operate, while the passive elements introduce
power losses, both of which impact $e_{1}$ and must be considered. Therefore,
$e_{1}$, $e_{2}$, and $e_{3}$ are correlated, suggesting that their joint
optimization may lead to significant gains in terms of end-to-end PTE.
The end-to-end PTE is also affected by the system power consumption. A major
factor in the power consumption is the transmitter’s architecture, which also
determines the beamforming approach.
For example, in a fully-digital structure, each antenna element necessitates a
dedicated RF chain, with its corresponding high-power amplifier (HPA),
consuming a significant amount of power. An HPA aims to amplify the input
signal to compensate for the path loss and fading in a wireless system.
Moreover, the signal amplification in the HPA requires a DC power source,
which accounts for the majority of the power consumption. The significant
drawback of the fully-digital architecture is the high complexity and cost,
making it impractical for applications requiring massive multiple-input
multiple-output (mMIMO) implementations. Alternatively, analog architectures
using, e.g., passive phase shifters, are less expensive but offer limited
degrees of freedom for EB. Thus, a hybrid architecture implementation
combining both approaches is often more appealing in practice. Hybrid
architectures offer a trade-off between complexity (cost) and beamforming
flexibility [8, 9].
Although hybrid beamforming using phase shifters promotes cost reduction, it
still requires complex analog networks for phase shifting. There are emerging
technologies to provide hybrid beamforming capability with an even lower cost
and complexity, e.g., reconfigurable intelligent surface (RIS)-aided systems [10]
and dynamic metasurface antennas (DMAs) [11]. Notice that RIS is an assisting
node that provides the passive beamforming capability using reflective
elements, while DMA is a transceiver consisting of configurable metamaterial
elements and a limited number of RF chains. DMA avoids analog network
implementation challenges and provides hybrid beamforming capability with low
cost and complexity. Each of these architectures may be favorable based on the
system setup. For instance, employing multiple low-cost RISs helps to cover
the blind spots that are prone to weak signal reception in a large area.
However, reflecting surfaces in RIS-assisted systems lack baseband processing
capability to perform channel estimation and send pilot signals. Thus,
acquiring sufficiently accurate channel state information to attain a suitable
passive beamforming gain may impose significant overhead on the system [12]. On the
other hand, DMA is a transceiver and has sufficient baseband processing
capability for channel estimation. However, its implementation requires RF
chains, making it more costly than RIS for large-scale deployments. All
in all, both of these architectures support low-cost transmitter deployments,
while the choice highly depends on the considered system setup. Interestingly,
the authors in [13] utilize a system model comprising both RIS and DMA
structures for uplink MIMO communication, while assuming that the channel is
perfectly known.
### I-B Prior works
There are many works either focusing on EB, waveform optimization, or joint
waveform and beamforming design for fully-digital WPT systems. The authors in
[14] utilize EB to power multiple devices in a MIMO system consisting of radio
stripes, while the deployment problem of this transmit architecture is
investigated in [15]. Furthermore, a low-complexity beamforming design relying
on the statistics of the channel is proposed in [16] to fairly power a set of
single-antenna devices. In the mentioned works, none of the practical system
non-linearities are considered and the focus is on the received RF power at
the devices. In [17], transmit beamforming and RF and DC combining at the
receiver are leveraged to increase the received DC power in a MIMO WPT system.
Although this work considers the rectifier’s non-linearity, the waveform
design is not investigated. Moreover, a low-complexity waveform design for
single-user setups is proposed in [18], while the large-scale multi-antenna
WPT scenario is addressed in [19]. The authors in [20] leverage beamforming
and a multi-sine waveform in a MIMO WPT system to enhance the harvested power.
Notably, the frameworks in [18, 19, 20] consider the rectifier non-linearity.
Interestingly, the authors in [21] perform waveform and beamforming
optimization while considering both main non-linearity sources (HPA and
rectifier) aiming to maximize the harvested DC power in a WPT system.
Although most of the works on WPT systems in the literature focus on
traditional fully-digital architecture, novel low-cost transmitters have also
attained significant attention recently. For instance, DMA is utilized in [22,
23] for a near-field WPT system, while the authors in [24] propose a minimum-
power beamforming design for meeting quality of service requirements of the
users in a simultaneous wireless information and power transfer (SWIPT)
system. However, none of these studies have considered the rectifier non-
linearity and its impact on the harvested DC power. Notably, the joint
waveform and beamforming design problem in RIS-aided WPT and SWIPT systems is
investigated in [25] and [26], respectively. Furthermore, the two latter works
consider the EH non-linearity at the receiver side, thus, taking into account
its impact on the harvested DC power.
### I-C Contributions
All in all, WPT systems have received considerable attention for some time.
Still, more effort is needed to reduce the system power consumption, thus
increasing the end-to-end PTE. For this, low-cost multi-antenna transmitters
like DMA are appealing and may pave the way for charging devices efficiently
in massive IoT deployments. Moreover, multiple studies aimed to enhance the
amount of harvested DC power (with rectifier non-linearity) in far-field WPT
systems or received RF power (without rectifier non-linearity) in near-field
WPT. As mentioned before, the receiver does not perform RF-to-DC conversion
linearly, and the shape of the waveform can be exploited to enhance the
receiver’s RF-to-DC conversion efficiency. Thus, it is imperative to take into account the impact
of the rectifier’s non-linearity in the system. To the best of our knowledge,
no work has yet investigated the radiative near-field power transmission and
the power consumption of a multi-antenna WPT system for meeting the EH
requirements of a multi-user setup while considering the receiver non-
linearity, especially when using low-cost transmitters. Herein, we aim
precisely to fill this research gap. Our main contributions are as follows:
1) We formulate a joint waveform optimization and beam focusing problem for a
multi-user multi-antenna WPT system with both a fully-digital and a DMA
architecture. Due to the huge potential of near-field WPT systems for future
practical WPT applications [3], we present our system model relying on a near-
field wireless channel, which can inherently capture far-field conditions as
well. Notice that there are some previous works [25, 21], which focused on
increasing the amount of harvested power in far-field WPT systems while
considering the receiver non-linearity. However, the literature lacks a
minimum-power waveform and beamforming design (even for far-field channels),
which can deal with meeting the EH requirements. This is a critical gap to
fill since such formulation mimics a practical setup where the EH users inform
their DC power demands and the WPT system must serve them with minimum power
consumption, thus maximum end-to-end PTE. Since the HPA is an active element
incurring most of the power consumption at the transmitter side, we model the
power consumption of a class-B HPA as a function of its output power. Notably,
our problem for fully-digital architecture shares some similarities with the
one discussed in [21] as objectives and constraints are interchanged. However,
the main focus of our work here is on the DMA-assisted system, which
introduces much more complexity to the problem due to the coupling between the
variables and their Lorentzian-constrained phase response. Note that the phase
shift introduced by the metamaterial elements is correlated with their
amplitude, which results in a different beamforming problem than other
architectures, e.g., RIS-assisted systems [26] and phase shifter-based hybrid
beamforming [9]. Mathematically, in those latter architectures, both RIS
passive elements and phase shifters introduce a phase shift to the signal with
constant loss (in most works in the literature, without loss of generality,
this phase-shifting process by the analog network or RIS elements is considered
lossless), while each phase-shifting configuration of the DMA elements
introduces a different propagation loss to the transmit signal. The
complexities associated with our specific problem make the existing
optimization frameworks for WPT systems inapplicable to our system, calling for
novel approaches.
2) We propose a method relying on alternating optimization and successive
convex approximation (SCA) to efficiently solve the waveform and beamforming
optimization problem in the DMA-assisted WPT system. Specifically, we decouple
the optimization problem to maximize the minimum received DC power by tuning
the frequency response of the metamaterial elements, while minimizing the
consumed power for meeting the EH requirements when optimizing the digital
precoders. Generally, a huge complexity is introduced to the waveform
optimization problems by time sampling since the number of samples should be
relatively large to result in a reliable framework [21, 6, 20]. To cope with
this, we reformulate the received DC power of the users based on the spectrum
of the received waveform, which removes the time dependency in the problem.
Then, the metamaterial elements and the digital precoders are alternatively
optimized using SCA. Motivated by the influence of variable initialization on
the SCA performance, we propose a low-complexity initialization algorithm for
the digital precoders and DMA weights by leveraging the channel
characteristics and dedicating RF signals to the ERs. Furthermore, the
complexity of the proposed optimization framework scales with the number of
users, antenna elements, and frequency tones.
3) We illustrate the convergence of the proposed optimization method
numerically and show that the complexity increases with the antenna length,
number of transmit tones, and number of devices. Furthermore, we provide
evidence that increasing the antenna length and the number of transmit tones
reduces power consumption, while it increases with the number of devices and
user distance. Moreover, our findings evince that DMA performs better in terms
of power consumption when the numbers of RF chains and transmit tones are
relatively low, while the fully-digital architecture becomes favorable when
the mentioned parameters are sufficiently large. This also depends on the
specific HPAs’ saturation power, number of devices, and user distance.
Additionally, we verify by simulation that the transmitter can accurately
focus the energy on the receiver location in the near-field region, while
energy beams are only formed toward specific directions in the far-field
region.
The remainder of the paper is structured as follows. Section II introduces the
system model, including the transmit architectures, and signal and power
consumption modeling. The optimization problem for a joint waveform and
beamforming design, together with the proposed solving approach, are
elaborated in Section III. Section IV discusses the proposed initialization
algorithm, Section V presents the numerical results, while Section VI
concludes the paper.
Notations: Bold lower-case letters represent column vectors, while non-boldface
characters refer to scalars, ${\mathbf{a}}\odot{\mathbf{b}}$ denotes the
Hadamard product of $\mathbf{a}$ and $\mathbf{b}$, and $\{x\}$ is a set that
contains $x$. The $l_{2}$-norm of a vector is denoted as $|\cdot|$. The
mathematical expectation is represented by $\mathbb{E}$, and $(\cdot)^{T}$ and
$(\cdot)^{\star}$ indicate the transpose and conjugate of a matrix or vector,
respectively. Furthermore, the real and imaginary parts of a complex number are
denoted by $\Re\{\cdot\}$ and $\Im\{\cdot\}$, respectively. Additionally,
$\lfloor{\cdot}\rfloor$ is the floor operator, and $\langle{\cdot}\rangle$
denotes the phase of a complex number.
## II System Model
We consider a multi-antenna WPT system to charge $M$ single-antenna EH
devices. The received RF power at the ER is transferred into the rectifier
input using a matching network. Then, it is converted to DC power by the
rectifier, while $\bar{P}_{m}$ denotes the EH requirement of the $m$th ER.
As previously mentioned, multi-tone waveforms can be exploited to leverage the
rectifier non-linearity and achieve a better end-to-end PTE. Hence, we
consider multi-tone signals with $N_{f}$ tones at frequencies
$f_{1},f_{2},\cdots,f_{N_{f}}$ for power transmission purposes. Without loss
of generality, we set $f_{n}=f_{1}+(n-1)\Delta_{f},\quad n=1,\ldots,N_{f}$,
where $f_{1}$ and $\Delta_{f}$ are the lowest sub-carrier frequency and the
sub-carrier spacing, respectively.
### II-A Transmit Antenna Architectures
The transmitter is equipped with a uniform planar array (UPA) and $N_{rf}\geq
M$ RF chains. The radiating elements are spaced uniformly, with $N_{h}$ and
$N_{v}$ being the number of elements in the horizontal and vertical direction,
respectively. Thus, the total number of elements is $N=N_{v}\times N_{h}$. Two
types of transmit antenna architectures are considered: 1) Fully-digital
architecture, which requires a dedicated RF chain for each radiating element,
thus $N_{rf}=N$, as shown in Fig. 2a. In a fully-digital architecture, there
is a single-stage beamforming process. Herein, $\omega_{i,n}$ is the complex
weight of the $n$th frequency tone of the $i$th multi-tone waveform. Despite
the high deployment cost and complexity, a fully-digital structure offers the
highest number of degrees of freedom in beamforming.
Figure 2: Transmit antenna architectures. (a) fully-digital architecture
(left) (b) DMA-assisted architecture (right).
2) DMA-assisted architecture, which comprises $N_{v}$ waveguides, each fed by
a dedicated RF chain and composed of $N_{h}$ configurable metamaterial
elements. Therefore, the number of RF chains, and consequently the cost and
complexity, is considerably reduced compared to digital structures, making DMA
suitable for mMIMO applications. Notice that DMA-assisted systems employ a
two-stage beamforming process, i.e., digital beamforming, followed by the
tuning of the amplitude/phase variations introduced by the metamaterial
elements, as illustrated in Fig. 2b. Herein, $q_{i,l}$ is the tunable
frequency response of the $l$th metamaterial element in the $i$th waveguide,
while $h_{i,l}$ is the corresponding waveguide propagation loss, which will be
explained in detail later.
### II-B Channel Model
In wireless communications, the region between the Fresnel and Fraunhofer
distances, denoted respectively as $d_{fs}$ and $d_{fr}$, is the radiative
near-field region, which is referred to as the near-field region in the
following. Specifically, a device at distance $d$ from a transmitter
experiences near-field conditions if
$\sqrt[3]{\frac{D^{4}}{8\lambda_{1}}}=d_{fs}<d<d_{fr}=\frac{2D^{2}}{\lambda_{1}}$,
where $D$ is the antenna diameter, i.e., the largest size of the antenna
aperture, $\lambda_{1}=\frac{c}{f_{1}}$ is the corresponding wavelength to the
system operating frequency, and $c$ is the speed of light. Notice that both
system frequency and antenna form factor influence the region of operation.
Therefore, by moving toward higher frequencies, e.g., millimeter wave and sub-
THz bands, and/or utilizing larger antenna arrays, the far-field communication
assumption of planar wavefronts may no longer be valid. Instead, the wavefronts
impinging on a receive node may be strictly spherical, thus enabling the
transmit signals to be focused on specific spatial points rather than in
spatial directions.
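For concreteness, the following sketch evaluates the near-field boundaries at the carrier used later in Sec. V; the aperture diameter is an assumption for illustration.

```python
import numpy as np

c, f1 = 3e8, 5.18e9          # speed of light, operating frequency (Sec. V)
lam1 = c / f1                # wavelength lambda_1
D = 0.25 * np.sqrt(2)        # assumed diagonal of a 25 cm square aperture [m]
d_fs = (D**4 / (8 * lam1)) ** (1 / 3)   # Fresnel distance
d_fr = 2 * D**2 / lam1                  # Fraunhofer distance
print(f"near-field region: {d_fs:.2f} m < d < {d_fr:.2f} m")
```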
Notice that one of the main applications of WPT systems is in indoor
environments with line-of-sight (LOS) and near-field communication, e.g.,
restaurants, warehouses, and shopping malls. Thus, we employ the near-field
LOS channel model described in [27]. The Cartesian coordinate of the $l$th
radiating element in the $i$th row is
$\mathbf{g}_{i,l}=[x_{i,l},y_{i,l},z_{i,l}]^{T}$. Additionally,
$i=1,2,\ldots,N_{v}$ and $l=1,2,\ldots,N_{h}$. The channel coefficient between
user $m$ and the $l$th element in the $i$th row at the $n$th sub-carrier is
given by
$\gamma_{i,l,m,n}=A_{i,l,m,n}e^{\frac{-j2\pi}{\lambda_{n}}d_{i,l,m}},$ (2)
where ${\frac{2\pi}{\lambda_{n}}d_{i,l,m}}$ is the phase shift caused by the
propagation distance of the $n$th tone, with wavelength $\lambda_{n}$, and
$d_{i,l,m}=|\mathbf{g}_{m}-\mathbf{g}_{i,l}|$ is the Euclidean distance
between the element and the user located at $\mathbf{g}_{m}$. Moreover,
$A_{i,l,m,n}=\sqrt{F(\Theta_{i,l,m})}\frac{\lambda_{n}}{4\pi d_{i,l,m}}$ (3)
is the corresponding channel gain coefficient. Here,
$\Theta_{i,l,m}=(\theta_{i,l,m},\psi_{i,l,m})$ is the elevation-azimuth angle
pair, and $F(\Theta_{i,l,m})$ is the radiation profile of each element. In
addition, we employ the radiation profile as presented in [28], where
$F(\Theta_{i,l,m})=\begin{cases}G_{t}\cos^{\frac{G_{t}}{2}-1}(\theta_{i,l,m}),&\theta_{i,l,m}\in[0,\pi/2],\\ 0,&\text{otherwise},\end{cases}$ (4)
$G_{t}=2(b+1)$ is the transmit antenna gain, and $b$ denotes the boresight
gain, which depends on the specifications of the antenna elements. Note that
the channel coefficient becomes $A_{m}e^{-j\psi_{i,l,m}}$ for far-field
communication, where $A_{m}$ only depends on the distance of the user $m$ from
the transmitter and $\psi_{i,l,m}$ is solely determined by the user direction
and the relative disposition of the antenna elements within the array.
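The channel model in (2)-(4) can be evaluated directly from the geometry. A minimal sketch follows, assuming the array lies in a plane so that the elevation angle is measured from the array normal (taken as the $z$-axis), and with an assumed boresight gain $b$.

```python
import numpy as np

def channel_coeff(g_el, g_user, lam_n, b=2):
    """Near-field LOS coefficient gamma_{i,l,m,n} per (2)-(4)."""
    diff = np.asarray(g_user) - np.asarray(g_el)
    d = np.linalg.norm(diff)                      # distance d_{i,l,m}
    theta = np.arccos(abs(diff[2]) / d)           # elevation angle (assumed geometry)
    Gt = 2 * (b + 1)                              # transmit antenna gain
    F = Gt * np.cos(theta) ** (Gt / 2 - 1)        # radiation profile (4)
    A = np.sqrt(F) * lam_n / (4 * np.pi * d)      # channel gain (3)
    return A * np.exp(-1j * 2 * np.pi * d / lam_n)  # coefficient (2)
```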
### II-C Transmit & Receive Signals
The signal at the input of the $i$th HPA is given by
$x_{i}(t)=\sum_{n=1}^{N_{f}}{\omega_{i,n}e^{j2\pi f_{n}t}},\quad
i=1,\ldots,N_{rf}.$ (5)
The HPA introduces signal distortion and models such as the Rapp model [29]
capture this non-linearity. It is shown in [21] that when the HPA operates in
the non-linear regime, choosing a single-carrier waveform is preferred to a
multi-carrier one. The reason is that a single-carrier waveform is less
deteriorated by the adverse effect of the signal distortion caused by the HPA
when operating in the non-linear regime. On the other hand, when HPAs operate
in the linear regime, thus, not causing amplitude and phase distortion in the
signal, multi-carrier waveforms are preferred since they leverage the
rectifier’s non-linearity and enhance the harvested power performance.
Note that the HPA enters its non-linear regime near the saturation voltage.
Thus, in practice, HPAs with a suitable saturation voltage can be chosen based
on the system setup so that operation in the non-linear regime is avoided.
Since the aim of this work is to design multi-
carrier waveforms for DMA-assisted WPT, we consider HPAs to operate in the
linear regime. Mathematically, the output signal of the HPA is modeled as
$x^{hpa}_{i}(t)=Gx_{i}(t)$, where $G$ is the HPA gain. The rest of the signal
modeling formulation will be presented separately for different transmit
architectures in the following.
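To make the signal model concrete, the sketch below generates the multi-tone input (5) and applies the linear HPA model $x^{hpa}_{i}(t)=Gx_{i}(t)$; the tone weights are placeholders rather than optimized values.

```python
import numpy as np

def hpa_output(t, weights, freqs, G=1.0):
    """Linear-regime HPA output G*x_i(t) for the multi-tone input (5)."""
    x = np.sum(weights * np.exp(1j * 2 * np.pi * freqs * t))
    return G * x

# Example: N_f = 4 tones with assumed equal weights
f1, df, Nf = 5.18e9, 1e6, 4
freqs = f1 + df * np.arange(Nf)
weights = np.ones(Nf) / np.sqrt(Nf)     # placeholder weights
print(hpa_output(0.0, weights, freqs))
```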
#### II-C1 Fully-Digital Architecture
Herein, $N_{rf}=N$, thus, the real transmit signal at the output of the $l$th
element in the $i$th row of the UPA can be expressed as
$x^{FD}_{i,l}(t)=\Re\bigl\{x^{hpa}_{u(i,l)}(t)\bigr\}=G\sum_{n=1}^{N_{f}}\Re\bigl\{\omega_{u(i,l),n}e^{j2\pi f_{n}t}\bigr\},$ (6)
where ${u{(i,l)}}=(i-1)N_{h}+l$. Thereby, the RF signal at the $m$th receiver
when exploiting the fully-digital architecture can be expressed as
$y^{FD}_{m}(t)=\sum_{i=1}^{N_{v}}\sum_{l=1}^{N_{h}}\sum_{n=1}^{N_{f}}\gamma_{i,l,m,n}x^{FD}_{i,l}(t)=G\sum_{i=1}^{N_{v}}\sum_{l=1}^{N_{h}}\sum_{n=1}^{N_{f}}\Re\bigl\{\gamma_{i,l,m,n}\omega_{u(i,l),n}e^{j2\pi f_{n}t}\bigr\}.$ (7)
Furthermore, by defining
$\mathbf{w}_{n}=[\omega_{1,n},\ldots,\omega_{N,n}]^{T}$ and
$\boldsymbol{\gamma}_{m,n}=[\gamma_{m,n,1,1},\gamma_{m,n,1,2},\ldots,\gamma_{m,n,N_{v},N_{h}}]^{T}$,
(7) can be reformulated as
$y^{FD}_{m}(t)=G\sum_{n=1}^{N_{f}}\Re\bigl\{\boldsymbol{\gamma}_{m,n}^{T}\mathbf{w}_{n}e^{j2\pi f_{n}t}\bigr\}.$ (8)
Figure 3: The Lorentzian constrained (the inner circle) and the ideal weights
(outer circle) in the complex plane. The arrows depict the mapping between the
weights.
#### II-C2 DMA-assisted Architecture
In metasurface antennas, the phase and amplitude that can be configured in the
radiating elements are correlated due to the Lorentzian resonance. Herein, we
capture this correlation by [30]
$q_{i,l}\in\mathcal{Q}=\Bigl\{(j+e^{j\phi_{i,l}})/2\ \Big|\ \phi_{i,l}\in[0,2\pi]\Bigr\},\quad\forall i,l,$ (9)
where $\phi_{i,l}$ is the tunable phase of the $l$th metamaterial element in
the $i$th waveguide. As shown in Fig. 3, ideal phase shifting exhibits a
constant unit amplitude, i.e., no losses, while the amplitude of the Lorentzian
weights depends on the configured phase.
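A short sketch of the Lorentzian mapping (9), showing how the achievable amplitude $|q_{i,l}|$ is tied to the configured phase:

```python
import numpy as np

def lorentzian_weight(phi):
    """Metamaterial frequency response (9): q = (j + e^{j*phi}) / 2."""
    return (1j + np.exp(1j * phi)) / 2

for phi in np.linspace(0, 2 * np.pi, 5):
    q = lorentzian_weight(phi)
    print(f"phi={phi:5.2f}  |q|={abs(q):.3f}  angle(q)={np.angle(q):+.3f}")
```

Note, e.g., that $\phi=\pi/2$ gives $|q|=1$ while $\phi=3\pi/2$ gives $|q|=0$: unlike an ideal phase shifter, each phase configuration implies a different loss.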
Herein, microstrip lines are used as waveguides, similar to [31, 27]. The
propagation model of the signal within a microstrip is expressed as
$h_{i,l}=e^{-(l-1)d_{l}(\alpha_{i}+j\beta_{i})}$, where $d_{l}$ is the inter-
element distance, $\alpha_{i}$ represents the waveguide attenuation
coefficient, and $\beta_{i}$ is the propagation constant. The mathematical
model of the DMA is represented in Fig. 2b.
Notice that the number of RF chains in the DMA is reduced to $N_{rf}=N_{v}$.
Hence, the real transmit signal radiated from the $l$th element in the $i$th
microstrip can be expressed as
$x_{i,l}^{DMA}(t)=G\Re\bigl\{h_{i,l}q_{i,l}x_{i}^{hpa}(t)\bigr\}=G\sum_{n=1}^{N_{f}}\Re\bigl\{h_{i,l}q_{i,l}\omega_{i,n}e^{j2\pi f_{n}t}\bigr\}.$ (10)
Furthermore, the RF signal received at the $m$th user in the DMA-assisted
system is given by
$y^{DMA}_{m}(t)=\sum_{i=1}^{N_{v}}\sum_{l=1}^{N_{h}}\sum_{n=1}^{N_{f}}\gamma_{i,l,m,n}x_{i,l}^{DMA}(t)=G\sum_{i=1}^{N_{v}}\sum_{l=1}^{N_{h}}\sum_{n=1}^{N_{f}}\Re\bigl\{\gamma_{i,l,m,n}h_{i,l}q_{i,l}\omega_{i,n}e^{j2\pi f_{n}t}\bigr\}.$ (11)
Finally, we define
$\bar{\mathbf{w}}_{n}=[\underbrace{\omega_{1,n},\ldots,\omega_{1,n}}_{N_{h}},\ldots,\underbrace{\omega_{N_{v},n},\ldots,\omega_{N_{v},n}}_{N_{h}}]^{T}\in\mathbb{C}^{N\times 1},$
$\mathbf{q}=[q_{1,1},\ldots,q_{N_{v},N_{h}}]^{T}$, and
$\mathbf{h}=[h_{1,1},\ldots,h_{N_{v},N_{h}}]^{T}$, and reformulate (11) as
$y^{DMA}_{m}(t)=G\sum_{n=1}^{N_{f}}\Re\bigl\{(\boldsymbol{\gamma}_{m,n}\odot\mathbf{q}\odot\mathbf{h})^{T}\bar{\mathbf{w}}_{n}e^{j2\pi f_{n}t}\bigr\}.$ (12)
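The following sketch assembles the received DMA signal in (12) via Hadamard products, assuming the channel vectors, DMA weights, waveguide losses, and replicated digital weights are given as inputs:

```python
import numpy as np

def y_dma(t, gamma_m, q, h, w_bar, freqs, G=1.0):
    """Received RF signal (12) at user m.

    gamma_m: list of N_f channel vectors (length N each)
    q, h:    DMA weights and waveguide losses (length N each)
    w_bar:   list of N_f replicated digital precoders (length N each)
    """
    y = 0.0
    for n, fn in enumerate(freqs):
        a = gamma_m[n] * q * h                    # Hadamard products in (12)
        y += np.real(a @ w_bar[n] * np.exp(1j * 2 * np.pi * fn * t))
    return G * y
```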
### II-D Rectenna
At the receiver side, the RF signal is converted to DC. This can be modeled by
an antenna equivalent circuit and a single diode rectifier as illustrated in
Fig. 4. The RF signal at the input of the antenna is denoted as $y_{m}(t)$ and
has an average power of $\mathbb{E}\bigl\{y_{m}(t)^{2}\bigr\}$. Let us
denote the input impedance of the rectifier and the impedance of the antenna
equivalent circuit by $R_{in}$ and $R_{ant}$, respectively. Thus, assuming
perfect matching $(R_{in}=R_{ant})$, the input voltage at the rectifier of the
$m$th ER is given by $v_{{in},{m}}(t)=y_{m}(t)\sqrt{R_{ant}}$. Furthermore,
the diode current can be formulated as
$i_{d}(t)=i_{s}\bigl(e^{\frac{v_{d}(t)}{\eta_{0}v_{t}}}-1\bigr),$ (13)
where $i_{s}$ is the reverse bias saturation current, $\eta_{0}$ is the
ideality factor, $v_{t}$ is the thermal voltage, and
$v_{d}(t)=v_{in}(t)-v_{o}(t)$ is the diode voltage. Moreover, $v_{{o},{m}}(t)$
is the output voltage of the $m$th rectifier, which can be approximated
utilizing the Taylor expansion as [20, 19]
$v_{o,m}=\sum_{i\ \mathrm{even},\,i\geq 2}^{n_{0}}K_{i}\mathbb{E}\bigl\{y_{m}(t)^{i}\bigr\},$ (14)
where $K_{i}=\frac{R_{ant}^{i/2}}{i!{(\eta_{0}v_{t})}^{i-1}}$. Herein, we
focus on the low-power regime, for which it was demonstrated in [19, 6] that
truncating the Taylor expansion at $n_{0}=4$ is accurate enough. Therefore,
(14) can be written as
$v_{{o},{m}}=K_{2}\mathbb{E}\bigl{\\{}y_{m}(t)^{2}\bigr{\\}}+K_{4}\mathbb{E}\bigl{\\{}y_{m}(t)^{4}\bigr{\\}}$
(15)
and the DC power at the $m$th receiver is given by
$P_{{dc},m}=\frac{{v^{2}_{{o},{m}}}}{R_{L}},$ (16)
where $R_{L}$ is the load impedance of the rectifier, while $y_{m}(t)$ equals
$y_{m}^{DMA}(t)$ and $y_{m}^{FD}(t)$ in the DMA-assisted and fully-digital
architectures, respectively.
Figure 4: Antenna equivalent circuit (left) and a single diode rectifier
(right) [20].
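Given the second and fourth moments of the received signal, the truncated model (14)-(16) is straightforward to evaluate. Below is a sketch with the rectifier parameters from Sec. V; the impedance values are assumptions for illustration.

```python
R_ant, R_L = 50.0, 50.0      # antenna and load impedance [ohm] (assumed)
v_t, eta0 = 25e-3, 1.05      # thermal voltage and ideality factor (Sec. V)

# K_i = R_ant^{i/2} / (i! (eta0 * v_t)^{i-1})
K2 = R_ant / (2 * eta0 * v_t)
K4 = R_ant**2 / (24 * (eta0 * v_t) ** 3)

def p_dc(Ey2, Ey4):
    """Harvested DC power (16) from the truncated output voltage (15)."""
    v_o = K2 * Ey2 + K4 * Ey4
    return v_o**2 / R_L
```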
### II-E Power Consumption Model
Lowering the system power consumption is desirable for increasing the
end-to-end PTE, and the consumption depends greatly on the HPA.
denote the maximum efficiency and the maximum output power of a class-B HPA as
$\bar{\eta}$ and $P_{max}$, respectively. Hereby, the efficiency of the $i$th
HPA at time $t$ is expressed as
$\eta_{i}(t)={\bar{\eta}}\sqrt{{P_{out,i}(t)}/{P_{max}}}$ [32], where
$P_{out,i}(t)$ is the output power of the HPA. Then, the corresponding power
consumption is
$P_{hpa,i}(t)=\frac{P_{out,i}(t)}{\eta_{i}(t)}=\frac{1}{\bar{\eta}}\sqrt{{P_{max}{P_{out,i}(t)}}}.$
(17)
Notice that each HPA must supply the power radiated by all antenna elements fed
by its corresponding RF chain. Thus,
$P_{out,i}(t)=\sum_{l=1}^{N_{h}}|x^{DMA}_{i,l}(t)|^{2}$ in the DMA-assisted
architecture, while in the fully-digital architecture $|x^{FD}_{i,l}(t)|^{2}$
is the output power of the RF chain connected to the $l$th element in the
$i$th row.
There are also other power consumption sources in the WPT system. For
instance, digital baseband power consumption, which is required to perform the
digital beamforming, and the RF chain circuit power consumption, including the
mixer, local oscillator, and filter. However, the power consumption of these
sources is usually considered fixed and is negligible compared to the HPA.
Thus, without loss of generality, the total power consumption of the system is
given by
$P_{c}=\sum_{i=1}^{N_{rf}}\mathbb{E}\bigl\{P_{hpa,i}(t)\bigr\}+P_{in},$ (18)
where $P_{in}=\sum_{i=1}^{N_{rf}}\sum_{n=1}^{N_{f}}|\omega_{i,n}|^{2}$ is the
total input power.
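A minimal sketch of the consumption model (17)-(18), assuming the HPA output power has been sampled over time:

```python
import numpy as np

def p_hpa(p_out, p_max, eta_bar=np.pi / 4):
    """Class-B HPA consumption (17): grows with the sqrt of the output power."""
    return np.sqrt(p_max * p_out) / eta_bar

def p_total(p_out_samples, p_max, weights):
    """Total consumption (18).
    p_out_samples: [N_rf, T] array of P_out,i(t) samples
    weights:       [N_rf, N_f] complex digital weights"""
    p_amp = np.mean(p_hpa(p_out_samples, p_max), axis=1).sum()  # sum_i E{P_hpa,i}
    p_in = np.sum(np.abs(weights) ** 2)                         # input power P_in
    return p_amp + p_in
```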
## III Joint Waveform & Beamforming Optimization
This section formalizes the optimization problem and describes the utilized
approach when employing the aforementioned transmit architectures.
### III-A Problem Formulation
The goal is to obtain a minimum-power waveform and beamforming design to meet
the EH requirements of the users. Thus, by utilizing (16) and substituting
(17) in (18), the optimization problem can be formulated as
$\operatorname*{minimize}_{\mathcal{V}}\quad\frac{\sqrt{P_{max}}}{\bar{\eta}}\sum_{i=1}^{N_{rf}}\mathbb{E}\bigl\{\sqrt{P_{out,i}(t)}\bigr\}+\sum_{i=1}^{N_{rf}}\sum_{n=1}^{N_{f}}|\omega_{i,n}|^{2}$ (19a)
subject to $v_{o,m}^{2}\geq R_{L}\bar{P}_{m},\quad m=1,\ldots,M,$ (19b)
where $\mathcal{V}$ is the set of optimization variables, which equals
$\{\omega_{i,n},q_{i,l}\}_{\forall i,l,n}$ and $\{\omega_{i,n}\}_{\forall i,n}$
for the DMA-assisted and fully-digital architectures, respectively. Problem
(19) deals with extensive non-linearity since the objective and constraints
are highly non-linear and non-convex functions due to the signal model,
rectenna non-linearity, and the coupling between the digital and analog
beamforming variables in DMA.
### III-B Optimization Framework for DMA-assisted Architecture
One of the challenges making the problem intractable is the coupling between
the optimization variables. To cope with this, we propose using alternating
optimization by first optimizing the digital precoders for fixed
$\{q_{i,l}\}_{\forall i,l}$, followed by optimizing the metamaterial elements’
frequency response for fixed $\{\omega_{i,n}\}_{\forall i,n}$.
#### III-B1 Optimization with fixed $\{q_{i,l}\}_{\forall i,l}$
Let us proceed by rewriting the optimization problem as
$\operatorname*{minimize}_{\{\omega_{i,n}\}_{\forall i,n}}\quad\frac{\sqrt{P_{max}}}{\bar{\eta}}\sum_{i=1}^{N_{v}}\mathbb{E}\biggl\{\sqrt{\sum_{l=1}^{N_{h}}x^{DMA}_{i,l}(t)^{2}}\biggr\}+\sum_{i=1}^{N_{v}}\sum_{n=1}^{N_{f}}|\omega_{i,n}|^{2}$ (20a)
subject to $\bigl(K_{2}\mathbb{E}\bigl\{y_{m}(t)^{2}\bigr\}+K_{4}\mathbb{E}\bigl\{y_{m}(t)^{4}\bigr\}\bigr)^{2}\geq R_{L}\bar{P}_{m},\ \forall m.$ (20b)
The problem is still highly non-linear and non-convex. Interestingly, it can
be easily verified that $v_{o,m}$ is convex and increasing with respect to
${y_{m}(t)}^{2}$, and ${y_{m}(t)}$ is affine with respect to
$\bar{\mathbf{w}}$. Therefore, $v_{o,m}$ is a convex function with respect to
$\bar{\mathbf{w}}$ given a fixed configuration for metamaterial elements [21].
Thus, although (20b) is a non-convex constraint, its left-hand side consists
of a convex function. These properties motivate us to adopt the SCA method
[33] to solve the problem iteratively. Specifically, the objective and
constraints can be approximated by their Taylor expansion.
###### Theorem 1.
By defining
$S_{i}=\sum_{l=1}^{N_{h}}\mathbb{E}\bigl\{x^{DMA}_{i,l}(t)^{2}\bigr\}$, we can
write
$\sum_{i=1}^{N_{rf}}\mathbb{E}\biggl\{\sqrt{\sum_{l=1}^{N_{h}}x^{DMA}_{i,l}(t)^{2}}\biggr\}\leq\sum_{i=1}^{N_{rf}}\tilde{f}\bigl(S_{i},S_{i}^{(0)}\bigr),$ (21)
where
$\tilde{f}\bigl(S_{i},S_{i}^{(0)}\bigr)=\sqrt{S_{i}^{(0)}}+\frac{1}{2\sqrt{S_{i}^{(0)}}}\bigl(S_{i}-S_{i}^{(0)}\bigr),\ \forall i$ (22)
is the first-order Taylor expansion of $\sqrt{S_{i}}$ at point $S_{i}^{(0)}$.
###### Proof.
The inequality can be proved using Jensen’s inequality [34] and the fact that
$\sqrt{S_{i}}$ is concave with respect to $S_{i}$; hence, its first-order
Taylor expansion upper-bounds the function at every point. ∎
Then, we rewrite $v_{o,m}$ considering fixed $q_{i,l}$. For this, we define
$\mathbf{a}_{m,n}=\boldsymbol{\gamma}_{m,n}\odot\mathbf{q}\odot\mathbf{h}$ and
leverage the fact that the average power of a signal equals the power of its
spectrum. Thus, we have [6]
$\mathbb{E}\bigl\{y_{m}(t)^{2}\bigr\}=g_{m,1}(\{\bar{\mathbf{w}}_{n}\}_{\forall n})=\frac{G^{2}}{2}\sum_{n}|\mathbf{a}^{T}_{m,n}\bar{\mathbf{w}}_{n}|^{2},$ (23)
$\mathbb{E}\bigl\{y_{m}(t)^{4}\bigr\}=g_{m,2}(\{\bar{\mathbf{w}}_{n}\}_{\forall n})=\frac{3G^{4}}{8}\sum_{\substack{n_{0},n_{1},n_{2},n_{3}\\ n_{0}+n_{1}=n_{2}+n_{3}}}(\mathbf{a}^{T}_{m,n_{0}}\bar{\mathbf{w}}_{n_{0}})(\mathbf{a}^{T}_{m,n_{1}}\bar{\mathbf{w}}_{n_{1}})(\mathbf{a}^{T}_{m,n_{2}}\bar{\mathbf{w}}_{n_{2}})^{\star}(\mathbf{a}^{T}_{m,n_{3}}\bar{\mathbf{w}}_{n_{3}})^{\star}.$ (24)
By leveraging (23) and (24), we can write
$v_{o,m}\geq K_{2}g_{m,1}(\{\bar{\mathbf{w}}_{n}^{(0)}\}_{\forall n})+K_{4}g_{m,2}(\{\bar{\mathbf{w}}_{n}^{(0)}\}_{\forall n})+\sum_{n}\tilde{g}_{m,n}(\bar{\mathbf{w}}_{n}^{(0)})\bigl(\bar{\mathbf{w}}_{n}-\bar{\mathbf{w}}_{n}^{(0)}\bigr),$ (25)
where
$\tilde{g}_{m,n}(\bar{\mathbf{w}}_{n}^{(0)})=G^{2}K_{2}(\mathbf{a}^{T}_{m,n}\bar{\mathbf{w}}_{n}^{(0)})\mathbf{a}^{T}_{m,n}+\frac{3K_{4}G^{4}}{8}\biggl[4|\mathbf{a}^{T}_{m,n}|^{4}|\bar{\mathbf{w}}_{n}^{(0)}|^{2}(\bar{\mathbf{w}}_{n}^{(0)})^{T}+8\sum_{n_{1}}|\mathbf{a}^{T}_{m,n}|^{2}|\mathbf{a}^{T}_{m,n_{1}}|^{2}|\bar{\mathbf{w}}_{n_{1}}^{(0)}|^{2}(\bar{\mathbf{w}}_{n}^{(0)})^{T}+\sum_{\substack{n_{2},n_{3}\\ n_{2}+n_{3}=2n\\ n_{2}\neq n_{3}}}2(\mathbf{a}^{T}_{m,n_{2}}\bar{\mathbf{w}}_{n_{2}}^{(0)})^{\star}(\mathbf{a}^{T}_{m,n_{3}}\bar{\mathbf{w}}_{n_{3}}^{(0)})^{\star}(\mathbf{a}^{T}_{m,n}\bar{\mathbf{w}}_{n}^{(0)})\mathbf{a}^{T}_{m,n}+2(\mathbf{a}^{T}_{m,n_{2}}\bar{\mathbf{w}}_{n_{2}}^{(0)})(\mathbf{a}^{T}_{m,n_{3}}\bar{\mathbf{w}}_{n_{3}}^{(0)})(\mathbf{a}^{T}_{m,n}\bar{\mathbf{w}}_{n}^{(0)})^{\star}\mathbf{a}^{H}_{m,n}+\sum_{\substack{n_{1},n_{2},n_{3}\\ -n_{1}+n_{2}+n_{3}=n\\ n\neq n_{1}\neq n_{2}\neq n_{3}}}2(\mathbf{a}^{T}_{m,n_{1}}\bar{\mathbf{w}}_{n_{1}}^{(0)})(\mathbf{a}^{T}_{m,n_{2}}\bar{\mathbf{w}}_{n_{2}}^{(0)})^{\star}(\mathbf{a}^{T}_{m,n_{3}}\bar{\mathbf{w}}_{n_{3}}^{(0)})^{\star}\mathbf{a}^{T}_{m,n}+2(\mathbf{a}^{T}_{m,n_{1}}\bar{\mathbf{w}}_{n_{1}}^{(0)})(\mathbf{a}^{T}_{m,n_{2}}\bar{\mathbf{w}}_{n_{2}}^{(0)})(\mathbf{a}^{T}_{m,n_{3}}\bar{\mathbf{w}}_{n_{3}}^{(0)})^{\star}\mathbf{a}^{H}_{m,n}\biggr]$ (26)
is the first-order Taylor coefficient of $v_{o,m}$ at point
$\{\bar{\mathbf{w}}_{n}^{(0)}\}_{\forall n}$.
Similarly, it can be verified that
$\mathbb{E}\bigl\{x^{DMA}_{i,l}(t)^{2}\bigr\}=\frac{G^{2}}{2}\sum_{n}|q_{i,l}h_{i,l}\omega_{i,n}|^{2}.$ (27)
Now, we can reformulate the problem at point
$\{\bar{\mathbf{w}}_{n}^{(0)},S_{i}^{(0)},v_{o,m}^{(0)}\}_{\forall i,n,m}$ as
$\operatorname*{minimize}_{\{\bar{\mathbf{w}}_{n}\}_{\forall n},\{v_{o,m}\}_{\forall m},\{S_{i}\}_{\forall i}}\quad\frac{\sqrt{P_{max}}}{\bar{\eta}}\sum_{i=1}^{N_{rf}}\tilde{f}\bigl(S_{i},S_{i}^{(0)}\bigr)+\sum_{i=1}^{N_{rf}}\sum_{n=1}^{N_{f}}|\omega_{i,n}|^{2}$ (28a)
subject to $\sqrt{R_{L}\bar{P}_{m}}\leq K_{2}g_{m,1}(\{\bar{\mathbf{w}}_{n}^{(0)}\}_{\forall n})+K_{4}g_{m,2}(\{\bar{\mathbf{w}}_{n}^{(0)}\}_{\forall n})+\sum_{n}\tilde{g}_{m,n}(\bar{\mathbf{w}}_{n}^{(0)})\bigl(\bar{\mathbf{w}}_{n}-\bar{\mathbf{w}}_{n}^{(0)}\bigr),\quad\forall m,$ (28b)
$\frac{G^{2}}{2}\sum_{n=1}^{N_{f}}\sum_{l=1}^{N_{h}}\bigl|q_{i,l}h_{i,l}\bar{\mathbf{w}}_{n}[i]\bigr|^{2}\leq S_{i},\quad i=1,\ldots,N_{v},$ (28c)
$\bar{\mathbf{w}}_{n}[(i-1)N_{h}+l]=\bar{\mathbf{w}}_{n}[i],\quad\forall i,\ l=1,\ldots,N_{h}.$ (28d)
Notice that by utilizing (21) and the fact that (20a) consists of two positive
terms, one can verify that (28a) is an upper bound for (20a); thus, minimizing
(28a) also drives down (20a). Moreover, the inequality in (25) ensures that the
solution to this problem is a feasible solution of (20) at each point.
Interestingly, the problem has become convex and can be solved at a given point
by standard convex optimization tools, e.g., CVX [35] (for ease of notation, we
kept the problem in vector form and introduced constraint (28d); the problem
can be easily converted to scalar form, which removes this constraint).
Moreover, the solution can be iteratively updated using the SCA algorithm [33].
#### III-B2 Optimization with fixed $\{\omega_{i,n}\}_{\forall i,n}$
Herein, the non-convex Lorentzian constraint of the metamaterials makes the
problem extremely difficult to solve. To tackle this, we propose decoupling
the problem to first maximize the minimum harvested power when optimizing
$q_{i,l}$. This allows us to leverage the beamforming capability of the
metamaterial elements and provide degrees of freedom to further reduce the
power consumption when optimizing $\omega_{i,n}$ [23]. Hereby, the
optimization problem with fixed $\{\omega_{i,n}\}_{\forall i,n}$ can be
reformulated as
$\operatorname*{maximize}_{\{q_{i,l}\}_{\forall i,l}}\quad\min_{m}\frac{v_{o,m}^{2}}{R_{L}}$ (29a)
subject to $q_{i,l}\in\mathcal{Q},\ \forall i,l,$ (29b)
where (29b) is the non-convex Lorentzian constraint. Next, we cope with the
complexity caused by the metamaterial elements.
###### Theorem 2.
Problem (29) is equivalent to
$\operatorname*{maximize}_{\{q_{i,l}\}_{\forall i,l}}\quad R$ (30a)
subject to $\Re\bigl\{q_{i,l}\bigr\}^{2}+\bigl(\Im\bigl\{q_{i,l}\bigr\}-0.5\bigr)^{2}\leq 0.25,\ \forall i,l,$ (30b)
$R\leq K_{2}\mathbb{E}\bigl\{y_{m}(t)^{2}\bigr\}+K_{4}\mathbb{E}\bigl\{y_{m}(t)^{4}\bigr\},\quad\forall m.$ (30c)
###### Proof.
The proof is provided in Appendix -A. ∎
Notice that (30b) keeps the frequency response of the metamaterials within the
Lorentzian circle, while (30c) ensures that $R$ is below the minimum output
voltage of the devices. Problem (30) is still difficult to solve due to the
non-convex constraint (30c). To cope with this, we define
$\hat{\mathbf{a}}_{m,n}=\boldsymbol{\gamma}_{m,n}\odot\bar{\mathbf{w}}_{n}\odot\mathbf{h}$
and write
$\mathbb{E}\bigl\{y_{m}(t)^{2}\bigr\}=e_{m,1}(\mathbf{q})=\frac{G^{2}}{2}\sum_{n}|\hat{\mathbf{a}}^{T}_{m,n}\mathbf{q}|^{2},$ (31)
$\mathbb{E}\bigl\{y_{m}(t)^{4}\bigr\}=e_{m,2}(\mathbf{q})=\frac{3G^{4}}{8}\sum_{\substack{n_{0},n_{1},n_{2},n_{3}\\ n_{0}+n_{1}=n_{2}+n_{3}}}(\hat{\mathbf{a}}^{T}_{m,n_{0}}\mathbf{q})(\hat{\mathbf{a}}^{T}_{m,n_{1}}\mathbf{q})(\hat{\mathbf{a}}^{T}_{m,n_{2}}\mathbf{q})^{\star}(\hat{\mathbf{a}}^{T}_{m,n_{3}}\mathbf{q})^{\star}.$ (32)
Similar to the case of digital precoders, it can be observed that ${v}_{o,m}$
is convex with respect to $\mathbf{q}$, thus, we can write
$v_{o,m}\geq K_{2}e_{m,1}(\mathbf{q}^{(0)})+K_{4}e_{m,2}(\mathbf{q}^{(0)})+\tilde{e}_{m}(\mathbf{q}^{(0)})\bigl(\mathbf{q}-\mathbf{q}^{(0)}\bigr),$ (33)
where
$\tilde{e}_{m}(\mathbf{q}^{(0)})=G^{2}K_{2}\sum_{n}(\hat{\mathbf{a}}^{T}_{m,n}\mathbf{q}^{(0)})\hat{\mathbf{a}}^{T}_{m,n}+\frac{3K_{4}G^{4}}{8}\sum_{\substack{n_{0},n_{1},n_{2},n_{3}\\ n_{0}+n_{1}=n_{2}+n_{3}}}\biggl[(\hat{\mathbf{a}}^{T}_{m,n_{1}}\mathbf{q}^{(0)})(\hat{\mathbf{a}}^{T}_{m,n_{2}}\mathbf{q}^{(0)})^{\star}(\hat{\mathbf{a}}^{T}_{m,n_{3}}\mathbf{q}^{(0)})^{\star}\hat{\mathbf{a}}^{T}_{m,n_{0}}+(\hat{\mathbf{a}}^{T}_{m,n_{0}}\mathbf{q}^{(0)})(\hat{\mathbf{a}}^{T}_{m,n_{2}}\mathbf{q}^{(0)})^{\star}(\hat{\mathbf{a}}^{T}_{m,n_{3}}\mathbf{q}^{(0)})^{\star}\hat{\mathbf{a}}^{T}_{m,n_{1}}+(\hat{\mathbf{a}}^{T}_{m,n_{0}}\mathbf{q}^{(0)})(\hat{\mathbf{a}}^{T}_{m,n_{1}}\mathbf{q}^{(0)})(\hat{\mathbf{a}}^{T}_{m,n_{3}}\mathbf{q}^{(0)})^{\star}\hat{\mathbf{a}}_{m,n_{2}}^{H}+(\hat{\mathbf{a}}^{T}_{m,n_{0}}\mathbf{q}^{(0)})(\hat{\mathbf{a}}^{T}_{m,n_{1}}\mathbf{q}^{(0)})(\hat{\mathbf{a}}^{T}_{m,n_{2}}\mathbf{q}^{(0)})^{\star}\hat{\mathbf{a}}_{m,n_{3}}^{H}\biggr].$ (34)
Hereby, (30) can be reformulated at point $\mathbf{q}^{(0)}$ as
$\operatorname*{maximize}_{\mathbf{q},R}\quad R$ (35a)
subject to $R\leq K_{2}e_{m,1}(\mathbf{q}^{(0)})+K_{4}e_{m,2}(\mathbf{q}^{(0)})+\tilde{e}_{m}(\mathbf{q}^{(0)})\bigl(\mathbf{q}-\mathbf{q}^{(0)}\bigr),\quad\forall m,$ (35b)
$\Re\bigl\{\mathbf{q}[(i-1)N_{h}+l]\bigr\}^{2}+\bigl(\Im\bigl\{\mathbf{q}[(i-1)N_{h}+l]\bigr\}-0.5\bigr)^{2}\leq 0.25,\quad\forall i,l,$ (35c)
which is convex in the neighborhood of the initial point.
#### III-B3 Alternating Optimization Algorithm
We transformed the original problem into a convex form for both fixed
$\bar{\mathbf{w}}$ and $\mathbf{q}$ in the neighborhood of an initial point.
However, there is still an important challenge, i.e., the initialization of
the variables. Specifically, when using iterative algorithms relying on convex
approximation, e.g., SCA, the starting point must be feasible and the
performance is influenced by it. One can get a feasible point by setting an
extremely large amplitude for the digital weights, but this may lead to poor
performance. To cope with this, we propose a low-complexity initialization
method, which will be explained in the next section. This prevents the initial
consumed power from becoming extremely large, helping the SCA algorithm to
start from a relatively good initial point.
Algorithm 1 illustrates the proposed alternating SCA-based approach for
waveform and beamforming optimization. First, the digital precoders and the
frequency responses of the metamaterial elements are initialized. After that,
digital precoders and metamaterials are optimized in an alternating fashion in
lines 3-17. First, SCA is used to iteratively find a suboptimal solution for
fixed digital precoders. Specifically, $\mathbf{q}$ is updated in each
iteration until convergence in lines 5-8. Then, the obtained $\mathbf{q}$ is
used to run the SCA algorithm for finding suboptimal digital precoders
$\{\bar{\mathbf{w}}_{n}\}_{\forall n}$ through lines 11-15. These two
SCA-based optimizations are repeated until the alternating optimization
converges to a suboptimal solution.
Algorithm 1 Alternating SCA-based waveform and beamforming design for DMA-
assisted WPT (ASCA-DMA).
1:Input: $\{\gamma_{i,l,m,n}\}_{\forall i,l,m,n}$, $\upsilon$ Output: $P_{c}^{\star}$
2:Initialize: $\mathbf{q}^{(0)}$ and $\bar{\mathbf{w}}^{(0)}_{n},\forall n$,
$P_{c}^{\star}=0$
3:repeat
4: $\xi^{\star}_{1}=0$, $\xi^{\star}_{2}=\infty$, $P_{c}\leftarrow
P_{c}^{\star}$
5: repeat
6: $\xi_{1}\leftarrow\xi^{\star}_{1}$, solve (35) to obtain $\mathbf{q}$ and
$\mathbf{q}^{(0)}\leftarrow\mathbf{q}$
7: $\xi^{\star}_{1}\leftarrow$ the objective value in (35a)
8: until $|1-{\xi_{1}^{\star}}/{\xi_{1}}|\leq\upsilon$
9: Compute $v^{(0)}_{o,m},\forall m$ using (15), (23), and (24)
10: Compute $S_{i}^{(0)},\forall i$ using (27)
11: repeat
12: $\xi_{2}\leftarrow\xi^{\star}_{2}$, solve (28) to obtain $v_{o,m}$ and $\bar{\mathbf{w}}_{n},\forall n$
13: $v^{(0)}_{o,m}\leftarrow v_{o,m}$, $\bar{\mathbf{w}}^{(0)}_{n}\leftarrow\bar{\mathbf{w}}_{n},\forall n$, $S_{i}^{(0)}\leftarrow S_{i},\forall i$
14: $\xi^{\star}_{2}\leftarrow$ the objective value in (28a)
15: until $|1-{\xi_{2}^{\star}}/{\xi_{2}}|\leq\upsilon$
16: Compute $P_{c}^{\star}$ using (19a)
17:until $|1-P_{c}^{\star}/P_{c}|\leq\upsilon$
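A skeleton of the alternating loop in Algorithm 1 is sketched below; `solve_q` and `solve_w` stand in for the convex subproblems (35) and (28), which in practice would be handled by a convex solver such as CVX, and are assumptions of this sketch rather than part of the formal framework.

```python
def asca_dma(q0, w0, solve_q, solve_w, consumed_power, tol=1e-6, max_iter=100):
    """Alternating SCA (Algorithm 1): returns DMA weights, precoders, power."""
    q, w, p_prev = q0, w0, float("inf")
    for _ in range(max_iter):
        q = solve_q(q, w)           # inner SCA over metamaterials, problem (35)
        w = solve_w(q, w)           # inner SCA over digital precoders, problem (28)
        p = consumed_power(q, w)    # objective (19a)
        if p_prev < float("inf") and abs(1 - p / p_prev) <= tol:
            break                   # convergence test as in line 17
        p_prev = p
    return q, w, p
```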
### III-C Optimization Framework for Fully-Digital Architecture
Note that the proposed framework for the DMA-assisted system can also be used
for the fully-digital architecture with some slight modifications (it can also
be straightforwardly adapted to a traditional hybrid architecture with a fully
connected network of phase shifters). Notably, alternating optimization is not
needed since the only variables are the digital precoders, and adopting SCA to
find the suboptimal precoders is sufficient. Algorithm 2 illustrates the
SCA-based optimization for the fully-digital architecture. Notice that problem
(28) can be easily modified to match the fully-digital transmitter; thus, the
expressions are not rewritten to avoid repetition.
Algorithm 2 SCA-based waveform and beamforming design for fully-digital WPT
(SCA-FD).
1:Input: $\{\gamma_{i,l,m,n}\}_{\forall i,l,m,n}$, $\upsilon$ Output: $P_{c}^{\star}$
2:Initialize: ${\mathbf{w}}^{(0)}_{n},\forall n$, $P_{c}^{\star}=0$
3:repeat
4: $P_{c}\leftarrow P_{c}^{\star}$, solve (28) to obtain $\{v_{o,m},S_{i},{\mathbf{w}}_{n}\}_{\forall i,m,n}$
5: $v^{(0)}_{o,m}\leftarrow v_{o,m}$, ${\mathbf{w}}^{(0)}_{n}\leftarrow{\mathbf{w}}_{n},\forall n$, $S_{i}^{(0)}\leftarrow S_{i},\forall i$
6: Compute $P_{c}^{\star}$ using (19a)
7:until $|1-P_{c}^{\star}/P_{c}|\leq\upsilon$
### III-D Complexity
The proposed ASCA-DMA algorithm consists of a low-complexity initialization,
followed by alternating optimization, while SCA is used to optimize each set
of variables. Each iteration of SCA attempts to solve a quadratic program [33]
((28) or (35)). Moreover, the complexity of quadratic programs scales with a
polynomial function of the problem size, while the degree of the polynomial
mainly depends on the type of the solver. Let us consider a simple solver
based on the Newton method with $\mathcal{O}(n^{3})$ complexity [33], where
$n$ is the problem size. Let $U_{1}$ and $U_{2}$ denote the worst-case numbers
of iterations required for convergence of the digital weights and the
metamaterial elements’ weights, respectively. Furthermore, $U_{3}$ is the
number of iterations required for convergence of the alternating
optimization. Hereby, the total complexity of the ASCA-DMA algorithm is
$\mathcal{O}(U_{1}U_{2}U_{3}n^{3})$, where $n$ scales with $M$, $N_{v}$,
$N_{h}$, and $N_{f}$. There is also some additional complexity introduced by
the initialization algorithm, which is negligible since the initialization
procedure is low-complexity. Moreover, the SCA-FD has only a single SCA stage
with a complexity $\mathcal{O}(U_{4}n^{3})$, where $U_{4}$ is the required
number of iterations for convergence of SCA in the worst-case.
## IV Initialization Algorithm
Algorithm 3 Initialization of digital precoders and the frequency response of
the metamaterial elements.
1:Input: $\{\gamma_{i,l,m,n}\}_{\forall i,l,m,n}$, $\tau_{s}$, $\varsigma$ Output: $\{\omega_{i,n},q_{i,l}\}_{\forall i,l,n}$
2:Initialize: Compute $z_{m},\forall m$ using (36)
3:Allocate one RF chain to each user: $\mathcal{R}_{m}=\{m\},\forall m$, $\bar{\mathcal{R}}=\{1,\ldots,M\}$, $w_{m}=0,\forall m$
4:$N^{\prime}_{v}=N_{v}-M$, $N^{\prime}_{rf,m}=\lceil z_{m}N^{\prime}_{v}\rceil$, $\mathcal{R}^{\prime}=\{\}$
5:repeat
6: $m^{\star}=\operatorname*{argmax}_{m\notin\mathcal{R}^{\prime}}z_{m}$, $\mathcal{R}^{\prime}\leftarrow\mathcal{R}^{\prime}\cup\{m^{\star}\}$, $i_{c}=M+1$
7: repeat
8: $\mathcal{R}_{m^{\star}}\leftarrow\mathcal{R}_{m^{\star}}\cup\{i_{c}\}$, $i_{c}\leftarrow i_{c}+1$
9: $N^{\prime}_{rf,m^{\star}}\leftarrow N^{\prime}_{rf,m^{\star}}-1$, $N^{\prime}_{v}\leftarrow N^{\prime}_{v}-1$
10: until $N^{\prime}_{rf,m^{\star}}=0$ or $N^{\prime}_{v}=0$
11:until $N^{\prime}_{v}=0$ or $|\mathcal{R}^{\prime}|=M$
12:for $m=1,\ldots,M$ do
13: Compute $q_{i,l},\forall i\in\mathcal{R}_{m},l$ using (37) and (9),
$w_{m}=\tau_{s}$
14: Solve (40) to obtain $\hat{\omega}_{i,n}^{\star},\forall i\in\mathcal{R}_{m},n$
15: repeat
16: $\bar{\omega}_{i,n}=w_{m}$, $\omega_{i,n}=\bar{\omega}_{i,n}e^{j\hat{\omega}_{i,n}^{\star}},\forall i\in\mathcal{R}_{m},n$
17: Compute $P_{dc,m}$ using (15), (16), (23), and (24)
18: $w_{m}\leftarrow\varsigma w_{m}$
19: until $P_{dc,m}\geq\bar{P}_{m}$
20:end for
Algorithm 3 illustrates the proposed initialization algorithm. Since the
initialization algorithm has to be adaptable to multi-user scenarios, we start
by proposing a method to allocate the output signals of the RF chains to the
different users (the allocation applies only to the initialization process; no
such restriction is imposed in the optimization procedure). For this, we
utilize the channel characteristics by naming
$n_{m}^{\star}=\operatorname*{argmax}_{n}|\boldsymbol{\gamma}_{m,n}|$ as the
strongest sub-carrier channel between user $m$ and the transmitter. Then, a
coefficient $z_{m}$ is assigned to user $m$ based on the gain introduced by
its strongest channel, expressed as
$z_{m}=1-\frac{|\boldsymbol{\gamma}_{m,n_{m}^{\star}}|}{\sum_{\bar{m}=1}^{M}|\boldsymbol{\gamma}_{\bar{m},{n_{\bar{m}}^{\star}}}|}.$
(36)
More precisely, the RF chains are dedicated to the users based on this ratio
such that the users with lower channel gains are served by more signals and
vice versa. The allocation procedure is illustrated in lines 2-11 in Algorithm
3. First, an RF chain is allocated to each user, then, the rest of the RF
chains are divided among users based on their $z_{m}$.
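The allocation step (lines 2-11 of Algorithm 3) can be sketched as follows; the input `gamma_norms` collecting $|\boldsymbol{\gamma}_{m,n}|$ is an assumed precomputation:

```python
import numpy as np

def allocate_chains(gamma_norms, N_v):
    """RF-chain allocation per (36): weaker users get more chains.
    gamma_norms: [M, N_f] array with ||gamma_{m,n}|| entries."""
    M = gamma_norms.shape[0]
    best = gamma_norms.max(axis=1)              # strongest sub-carrier per user
    z = 1 - best / best.sum()                   # coefficients z_m in (36)
    extra = np.ceil(z * (N_v - M)).astype(int)  # chains beyond the first one
    alloc = {m: [m] for m in range(M)}          # one chain per user first
    nxt = M
    for m in np.argsort(-z):                    # weakest channels served first
        while extra[m] > 0 and nxt < N_v:
            alloc[m].append(nxt)
            nxt += 1
            extra[m] -= 1
    return alloc
```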
Let us denote $\mathcal{R}_{m}$ as the set of RF chains dedicated to user $m$.
Then, we initialize $q_{i,l},i\in\mathcal{R}_{m}$ to compensate for the phase
shift introduced by both $h_{i,l}$ and $\gamma_{i,l,m,n_{m}^{\star}}$.
Specifically, we need to define $q_{i,l},i\in\mathcal{R}_{m}$ such that
$\phi^{\star}_{i,l}=\operatorname*{argmin}_{\phi_{i,l}}\bigl{\langle}(\frac{j+e^{j\phi_{i,l}}}{2})h_{i,l}\gamma_{i,l,m,n_{m}^{\star}}\bigr{\rangle},\forall
i,l,$ (37)
where $i\in\mathcal{R}_{m}$ and $q_{i,l}$ can be obtained accordingly. Notice
that (37) can be easily solved using a one-dimensional search with negligible
complexity.
The next step is to initialize the amplitude and phase of the digital
precoders. For this, let us proceed by defining the received RF power at the
$m$th user as
$P_{rf,m}=\frac{G^{2}}{2}\sum_{n=1}^{N_{f}}|\mathbf{a}^{T}_{m,n}\bar{\mathbf{w}}_{n}|^{2}=\frac{G^{2}}{2}\sum_{n=1}^{N_{f}}\biggl|\sum_{i=1}^{N_{v}}\sum_{l=1}^{N_{h}}\gamma_{i,l,m,n}q_{i,l}h_{i,l}\omega_{i,n}\biggr|^{2}.$ (38)
Moreover, the output DC power of the rectifier is an increasing function of
the input RF power when operating in the low-power regime below the breakdown
region of the rectifier circuit’s diode [36]. Motivated by this, we aim to
increase the available RF power at each user during the initialization process
using the phases of the digital precoders with low complexity. One way to
increase the available RF power for each user is to facilitate the coherent
reception of the signals at the receiver. This can be done by reducing the
amount of phase shift introduced in the signal. For this, we assume the
initial digital weights to be
$\omega_{i,n}=\bar{\omega}_{i,n}e^{j\hat{\omega}_{i,n}}$, while the dedicated
signals to each user have the same amplitude $\bar{\omega}_{i,n}=w_{m},\forall
i\in\mathcal{R}_{m},n$. Moreover, the ideal phase initialization for
maximizing the received RF power is obtained by solving
$\operatorname*{argmax}_{\hat{\omega}_{i,n}\in[0,2\pi],\forall
i\in\mathcal{R}_{m},n}\sum_{n=1}^{N_{f}}\biggl{|}\sum_{i=1}^{N_{v}}\sum_{l=1}^{N_{h}}\gamma_{i,l,m,n}q_{i,l}h_{i,l}\bar{\omega}_{i,n}e^{j\hat{\omega}_{i,n}}\biggr{|}^{2}.$
(39)
Meanwhile, solving this problem is not straightforward and introduces much
additional complexity to our framework. Notably, the problem can be decoupled
and solved individually for each sub-carrier without any change in the optimal
solution. Still, there is a coupling between the digital weights of different
RF chains in a similar sub-carrier. For this, we further reduce the complexity
by formulating the problem as
$\operatorname*{argmin}_{\hat{\omega}_{i,n}\in[0,2\pi]}\biggl{|}\big{\langle}\sum_{l=1}^{N_{h}}\gamma_{i,l,m,n}q_{i,l}h_{i,l}e^{j\hat{\omega}_{i,n}}\big{\rangle}\biggr{|},\forall
i\in\mathcal{R}_{m},n.$ (40)
Although the reformulated problem may not share the solution of (39), it
attempts to reduce the total phase shift of the received signal. Therefore,
solving (40) yields a suboptimal solution to (39) with much lower complexity,
using a one-dimensional search. Note that utilizing
such an approach is relevant since the goal is to have a reasonable
initialization for the variables, which leads to feeding a feasible initial
point to the optimization algorithm. The initialization procedure is
illustrated through lines 12-20 in Algorithm 3. For each user, the
metamaterials connected to its dedicated RF chains are initialized. Then, the
phases of the corresponding digital precoders are obtained. Finally, the
amplitudes are iteratively increased until the EH requirement is met and a
feasible solution is found. In the fully-digital architecture, the
initialization algorithm follows the same procedure with one difference, i.e.,
each antenna element has a dedicated RF chain, and thus, a dedicated signal.
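The amplitude step of this initialization can be sketched as a simple geometric ramp; `min_harvested_dc`, which evaluates the minimum harvested DC power across users for a common amplitude, is a hypothetical stand-in for the EH model of Section II.

```python
def init_amplitudes(min_harvested_dc, P_dc_min=20e-6, w0=1e-3,
                    step=1.2, max_iter=200):
    """Increase the common digital-weight amplitude until the EH
    requirement is met, yielding a feasible initial point (a sketch)."""
    w = w0
    for _ in range(max_iter):
        if min_harvested_dc(w) >= P_dc_min:
            return w
        w *= step
    raise RuntimeError("no feasible initialization within max_iter steps")
```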
## V Numerical Analysis
In this section, we provide a numerical analysis of the system performance. We
consider an indoor office with a transmitter located at the center of the
ceiling. The operating frequency of the system is $f_{1}=5.18$ GHz, which
matches the characteristics of the utilized rectifier model [36]. The spacing
between the elements in the DMA is $\lambda_{1}/5$, while $\lambda_{1}/2$ is
the distance between two consecutive microstrips. Meanwhile, the inter-element
distance is $\lambda_{1}/2$ in the fully-digital architecture. Thus,
$N_{v}=N_{h}=\lfloor\frac{L}{\lambda_{1}/2}\rfloor$ for the fully-digital
system, and
$N_{v}=\lfloor\frac{L}{\lambda_{1}/2}\rfloor,N_{h}=\lfloor\frac{L}{\lambda_{1}/5}\rfloor$
for the DMA-assisted system [27]. Note that $L$ is the side length of the square arrays. We set the optimization parameters
$\tau_{s}=10^{-3}$, $\varsigma=5$, and $\upsilon=10^{-6}$. Without loss of
generality, we set $G=1$ and $\bar{\eta}=\frac{\pi}{4}$ (in practice, the HPA output power is larger than the input power because $G>1$; however, the proposed framework applies to all values of $G$). Finally, the rectifier
parameters are $v_{t}=25$ mV and $\eta_{0}=1.05$ [36, 21, 20].
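For concreteness, the element and RF-chain counts implied by these spacings can be reproduced with the short sketch below, assuming a free-space wavelength and one RF chain per fully-digital element or per DMA microstrip, as in the system model:

```python
import math

f1 = 5.18e9              # operating frequency (Hz)
lam1 = 3e8 / f1          # wavelength, about 5.79 cm

def rf_chain_counts(L):
    """RF-chain counts for a square array of side L (meters): FD uses one
    chain per lambda/2-spaced element; the DMA uses one chain per
    microstrip, with microstrips spaced lambda/2 apart."""
    n_half = math.floor(L / (lam1 / 2))
    return n_half ** 2, n_half           # (FD, DMA)

print(rf_chain_counts(0.20))             # (36, 6) for a 400 cm^2 array
```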
We utilize the characteristics of the Rogers RO4000 series ceramic laminate to calculate the propagation coefficients of the microstrips. Specifically, we calculate the attenuation and propagation coefficients of an RO4000 LoPro laminate with a thickness of 20.7 mil (0.5258 mm) using the formulation provided in [23], which gives $\alpha=0.356$ m$^{-1}$ and $\beta=202.19$ m$^{-1}$. Based on the rectifier
circuit design and simulations in [36], the rectifier circuit diode enters the
breakdown region when the received RF power is approximately $100\ \mu$W for a
continuous wave ($N_{f}=1$). Moreover, it has been shown that the maximum RF-to-DC conversion efficiency is approximately $20\%$ for the mentioned setup.
Hence, we establish a minimum requirement of $\tilde{P}_{dc}=20\ \mu$W for DC
harvested power. In the figures, FD refers to the fully-digital architecture.
Moreover, $d$ represents the distance between the user and the center of the
transmitter.
Fig. 5 provides the convergence performance of the proposed ASCA-DMA approach
by presenting the amount of power consumption at the end of each iteration of
alternating optimization. It is seen that the objective value gradually
decreases with iterations until convergence, while the number of required
iterations for convergence depends on the setup. For instance, it is shown
that increasing $L$, $N_{f}$, or $M$ can increase the complexity of the
problem, leading to more iterations. Fig. 6 details the algorithm's behavior by showing all optimization iterations, including the SCA iterations for optimizing both the metamaterials and the digital precoders. It is seen that the power consumption increases while optimizing $\mathbf{q}$, since this step raises the minimum harvested power; in turn, this facilitates reducing the power consumption when optimizing the digital precoders. Therefore, the power consumption decreases gradually at the end of each iteration of the alternating optimization (red arrows) until convergence.
Figure 5: The convergence performance of ASCA-DMA for (a) $M=1$, $L=15$ cm,
$N_{f}=1$ (left), (b) $M=1$, $L=25$ cm, $N_{f}=1$ (middle), and (c) $M=2$,
$L=25$ cm, $N_{f}=4$ (right) over iterations with $d=2.5$ m and $P_{max}=100$
W. Figure 6: The convergence performance of ASCA-DMA for $M=1$, $L=25$ cm,
$d=2.5$ m, $N_{f}=1$, and $P_{max}=1$ W over all iterations (alternating and
SCA). The red and green arrows indicate the starting points of optimization
for the metamaterial elements and digital precoders, respectively.
Fig. 7 showcases the power consumption of the system as a function of the antenna length. Note that increasing $L$ reduces the power consumption since the number of elements and the array aperture increase. This leads to a greater beam-focusing capability, thus, delivering more power to the devices
given the same transmit power. However, the performance comparison between DMA
and FD highly depends on the system setup. It is seen that DMA outperforms FD
when $L$ and $P_{max}$ are relatively low. Note that the output of each RF
chain in DMA has to feed all the corresponding elements, while the amount of power consumption scales with the saturation power of the HPA. Thus, when
$P_{max}$ is low, the output power of each RF chain in DMA is multiplied by a
small value, leading to potential performance gains compared to FD. However,
this is only the case when $L$ is low and the number of RF chains is small. On
the other hand, in FD, $N_{rf}$ increases with $L$ at a higher rate compared
to DMA. Thus, it is easier to reduce the amount of output power of the HPAs by
distributing the required output power among them, leading to lower power
consumption compared to DMA. However, the value of $L$ that shifts the
favorable architecture from DMA to FD or vice versa depends on $P_{max}$ and
the transmit power required for meeting the EH requirements. For instance, when $d=2.5$ m and $P_{max}=1$ W, the required transmit power and the power consumption multiplier are both low, so the favorable architecture shifts at a lower $L$. Meanwhile, when $d=6.5$ m and $P_{max}=100$ W, DMA outperforms FD
up to a higher value of $L$ compared to the previous case. Meanwhile, when $d$
is small and $P_{max}$ is sufficiently large, FD outperforms DMA over all $L$
values with the performance gap increasing with $L$. In contrast, when $d$ is
relatively large and $P_{max}$ is small, DMA becomes the favorable choice for
sufficiently large $L$.
Figure 7: The power consumption as a function of $L$ for $N_{f}=4$,
$P_{max}\in\\{1,100\\}$ W, and $d\in\\{2.5,6.5\\}$ m.
Our results in Fig. 8 corroborate that increasing $N_{f}$ reduces $P_{c}$. As
discussed in Section II-C, this is because the HPAs operate in the linear
regime, and higher $N_{f}$ can leverage the rectifier non-linearity and
deliver more DC power to the ERs via waveform optimization. However, as
previously mentioned, the preference for DMA or FD highly depends on the
system parameters. For instance, when $L=10$ cm and $P_{max}=1$ W, both
$N_{rf}$ and saturation power have small values, leading to a lower power
consumption for DMA compared to FD. The reason is that the multiplier of the
HPA output power is low, thus, a lower value is multiplied by the output power
of each RF chain in DMA. Combining this with a small $L$, thus a small
$N_{rf}$ for FD, leads to DMA outperforming FD. On the other hand, different
parameters and system setups may lead to different performances. For example,
it is seen that when $L$ and saturation power are both sufficiently large, the
favorable architecture shifts from DMA to FD. The reason is that the number of
RF chains is much higher for FD in this case, leading to lower output power
for each. Then, combining this with a large value of saturation voltage leads
to smaller HPA outputs in FD, thus, lower power consumption. Meanwhile,
increasing $N_{f}$ affects the performance gap between FD and DMA: it yields larger gains for FD than for DMA since FD transmits relatively more signals. Thus, when DMA outperforms FD for a given $L$ and $P_{max}$, FD may start outperforming DMA once $N_{f}$ is sufficiently increased. On the other hand, when FD already performs better, increasing $N_{f}$ widens the performance gap.
Figure 8: The power consumption as a function of $N_{f}$ for $L=25$ cm,
$P_{max}\in\\{1,100\\}$ W, $L\in\\{10,20\\}$ cm, and $d=2.5$ m.
Fig. 9 illustrates the impact of user distance on performance. Naturally, the power consumption increases with distance since the path loss grows and more transmit power is required to compensate for it and meet the EH requirements. Interestingly, the performance shift between FD and DMA also appears here. When both $P_{max}$ and $L$ are relatively low, DMA outperforms FD, especially over large distances. On the other hand, when $L$ increases, DMA starts performing better beyond a certain $d$ since the
number of HPAs is much larger in FD, and increasing their output power affects
the power consumption considerably. For instance, when $L=20$ cm and
$P_{max}=100$ W, FD starts with better performance than DMA, but the
performance gap decreases as $d$ becomes larger.
Figure 9: The power consumption as a function of user distance for $N_{f}=4$,
$P_{max}\in\\{1,100\\}$ W, and $L\in\\{10,20\\}$ cm.
The impact of the number of users on system performance is illustrated in Fig. 10 for different system parameters. As expected, the power consumption increases with $M$ since more EH requirements must be met. As previously shown, either DMA or FD may be the favorable choice depending on the system setup and the number of devices. For example, when $P_{max}$ and $L$ are both sufficiently large, DMA outperforms FD for a small $M$, but FD becomes favorable when $M$ is relatively large. The reason is that increasing $M$ raises the transmit power required to satisfy the requirements and, as mentioned earlier, a large $L$ leads to relatively smaller HPA output powers in FD compared to DMA.
Some discussions on the complexity of implementing a large-scale FD setup are in order. Notice that 36 RF chains are needed in a 400 cm$^{2}$ FD array but just 6 for a DMA of the same size at $f_{1}=5.18$ GHz. Thus, although FD may
outperform DMA in many setups, as seen earlier, DMA may still be favorable
since it can achieve relatively good performance with a reduced $N_{rf}$ and
complexity.
Figure 10: The power consumption over $M$ for (a) $P_{max}=1$ W (top) and (b)
$P_{max}=100$ W (bottom), while $N_{f}=4$, and $L\in\\{15,25\\}$ cm. The users
are located at $d=4.4$ m.
Fig. 11 provides some insights regarding the beam focusing capability in the
near-field WPT by illustrating the normalized received RF power in each
spatial point of the area. Note that the received signal at each point is
normalized by its path loss to remove the impact of the distance. In Fig. 11a, it is seen that when the device is located in the near-field region, the beam is focused around the device location, while the beam trace fades rapidly beyond the device. This phenomenon can substantially reduce the RF emission footprint in the environment, facilitating the implementation of environmentally friendly WPT systems. Meanwhile, for a device located in the far-field, the beam is formed along the user's direction, as illustrated in Fig. 11b. This may be highly disadvantageous in interference-sensitive applications since the generated beam may cause difficult-to-handle interference to the signals conveying information, e.g., in SWIPT.
Figure 11: The normalized received RF power (W) in the area when the energy
receiver is located at (a) the near-field region (top) and (b) the far-field
region (bottom) in the DMA-assisted system with $L=30$ cm and $N_{f}=1$.
The received RF waveform in DMA-assisted and fully-digital systems is
presented in Fig. 12a and Fig. 12b, respectively. As previously mentioned, high
PAPR waveforms are beneficial for enhancing the performance in terms of DC
harvested power [7]. Moreover, our simulations verify this by showing that the
received signal experiences high peak amplitudes at specific intervals. Recall
that when HPAs are operating in the linear regime, as in our case, the HPA
does not introduce distortion to the signal. Thus, it is beneficial to utilize
multiple tones to leverage the rectifier’s non-linearity. Note that the peak-
to-peak time depends on the characteristics of the EH circuit, mainly the
capacitor.
Figure 12: The received signal at the device using (a) DMA (top) and (b) FD
(bottom) for $M=1$, $L=25$ cm, and $N_{f}=8$.
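To illustrate why multiple in-phase tones produce the high-PAPR waveforms observed in Fig. 12, the sketch below computes the PAPR of $N_{f}$ equal-amplitude tones; the tone frequencies and sampling grid are illustrative choices rather than the simulated setup.

```python
import numpy as np

def papr(Nf, f0=5e4, df=1e3, fs=1e6, T=1e-3):
    """PAPR of Nf equal-amplitude, in-phase tones."""
    t = np.arange(0.0, T, 1.0 / fs)
    x = sum(np.cos(2 * np.pi * (f0 + n * df) * t) for n in range(Nf))
    return np.max(x ** 2) / np.mean(x ** 2)

for Nf in (1, 4, 8):
    print(Nf, round(papr(Nf), 2))   # PAPR grows as 2*Nf for in-phase tones
```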
## VI Conclusion and Future Work
In this paper, we investigated a multi-antenna near-field WPT system with a
DMA as the transmitter to charge multiple non-linear EH devices. Furthermore,
we proposed an optimization framework relying on alternating optimization and
SCA for the joint waveform optimization and beam focusing to minimize the
system power consumption while meeting the EH requirements. Numerical results
showed that both DMA and fully-digital architecture may be the favorable
choice in terms of power consumption depending on the system setups and
parameters such as antenna length, saturation power of the HPAs, number of
users, and user distance. Moreover, we showed that increasing the antenna
length or the number of tones can enhance the performance. Finally, we
verified that the transmitter can focus the energy beams on the spatial points
in the near-field region, while energy beams are formed toward the devices’
direction in the far-field.
As a prospect for future research, we may delve deeper into the signal
generation aspect by analyzing the power consumption based on the number of
tones. Another research direction is to utilize optimization approaches with
lower complexity, e.g., relying on machine learning, to learn online the
input-output relation of the system’s non-linear components while optimizing
the transmit waveform accordingly.
### -A Proof of Theorem 2
We relax (29b) by limiting the values of $q_{i,l}$ to lie within the
Lorentzian circle in the complex plane. By utilizing the fact that (29a) is a
positive and increasing function of the rectifier’s output voltage and using
the epigraph form, the relaxed problem can be written as (30). Note that each
configuration of a metamaterial corresponds to a point on the Lorentzian-
constrained circle. Let $\vec{e}$ be the vector representing the direction and gain of a point on the Lorentzian circle; this direction and gain shape the transmit signal. Meanwhile, the goal of (30) is to increase the minimum output voltage of the ERs. Therefore, when shaping the transmit signal toward different ERs, $\vec{e}$ should clearly be chosen with the largest possible gain in the required direction to improve the signal strength at the receiver. Furthermore, the largest gain a metamaterial element can introduce along a specified direction is attained when the point lies exactly on the Lorentzian-constrained circle in that direction. Hence, although (30b) is a relaxed version of the constraint (29b), writing the final solution of the metamaterials as $a\vec{e},0\leq a\leq 1$, the only choice that yields the largest gain in the desired direction is $a=1$. Thus, the solution of (30) coincides with that of (29), leading to a configuration positioned on the Lorentzian circle and establishing the equivalence between the problems.
## References
* [1] Z. Zhang _et al._ , “6G wireless networks: vision, requirements, architecture, and key technologies,” IEEE Veh. Technol. Mag., vol. 14, no. 3, pp. 28–41, 2019.
* [2] N. H. Mahmood _et al._ , “Six key features of machine type communication in 6G,” in 2nd 6G SUMMIT, pp. 1–5, 2020.
* [3] O. L. A. López _et al._ , “High-Power and Safe RF Wireless Charging: Cautious Deployment and Operation,” 2023.
* [4] O. L. A. López _et al._ , “Massive wireless energy transfer: enabling sustainable IoT toward 6G era,” IEEE Internet Things J., vol. 8, no. 11, pp. 8816–8835, 2021.
* [5] Y. Zeng _et al._ , “Communications and signals design for wireless power transmission,” IEEE Trans Commun, vol. 65, no. 5, pp. 2264–2290, 2017.
* [6] B. Clerckx and E. Bayguzina, “Waveform Design for Wireless Power Transfer,” IEEE Trans. Signal Process., vol. 64, no. 23, pp. 6313–6328, 2016.
* [7] C. R. Valenta _et al._ , “Theoretical Energy-Conversion Efficiency for Energy-Harvesting Circuits Under Power-Optimized Waveform Excitation,” IEEE Trans. Microw. Theory Tech., vol. 63, no. 5, pp. 1758–1767, 2015.
* [8] I. Ahmed _et al._ , “A Survey on Hybrid Beamforming Techniques in 5G: Architecture and System Model Perspectives,” IEEE Commun. Surv. Tutor., vol. 20, no. 4, pp. 3060–3097, 2018.
* [9] X. Gao _et al._ , “Energy-Efficient Hybrid Analog and Digital Precoding for MmWave MIMO Systems With Large Antenna Arrays,” IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 998–1009, 2016.
* [10] Q. Wu _et al._ , “Intelligent Reflecting Surface-Aided Wireless Communications: A Tutorial,” IEEE Trans Commun, vol. 69, no. 5, pp. 3313–3351, 2021.
* [11] N. Shlezinger _et al._ , “Dynamic Metasurface Antennas for 6G Extreme Massive MIMO Communications,” IEEE Wirel. Commun., vol. 28, no. 2, pp. 106–113, 2021.
* [12] B. Zheng _et al._ , “A Survey on Channel Estimation and Practical Passive Beamforming Design for Intelligent Reflecting Surface Aided Wireless Communications,” IEEE Commun. Surv. Tutor., vol. 24, no. 2, pp. 1035–1071, 2022.
* [13] H. Jiang _et al._ , “Hybrid RIS and DMA Assisted Multiuser MIMO Uplink Transmission With Electromagnetic Exposure Constraints,” IEEE J Sel Top Signal Process, vol. 16, no. 5, pp. 1055–1069, 2022.
* [14] O. L. A. López _et al._ , “Massive MIMO with radio stripes for indoor wireless energy transfer,” IEEE Trans. Wirel. Commun., vol. 21, no. 9, pp. 7088–7104, 2022.
* [15] A. Azarbahram _et al._ , “On the Radio Stripe Deployment for Indoor RF Wireless Power Transfer,” 2023.
* [16] O. L. A. López _et al._ , “A Low-Complexity Beamforming Design for Multiuser Wireless Energy Transfer,” IEEE Wireless Commun. Lett., vol. 10, no. 1, pp. 58–62, 2021.
* [17] S. Shen and B. Clerckx, “Beamforming Optimization for MIMO Wireless Power Transfer With Nonlinear Energy Harvesting: RF Combining Versus DC Combining,” IEEE Trans. Wirel. Commun., vol. 20, no. 1, pp. 199–213, 2021.
* [18] B. Clerckx and E. Bayguzina, “Low-Complexity Adaptive Multisine Waveform Design for Wireless Power Transfer,” IEEE Antennas Wirel. Propag. Lett., vol. 16, pp. 2207–2210, 2017.
* [19] Y. Huang and B. Clerckx, “Large-Scale Multiantenna Multisine Wireless Power Transfer,” IEEE Trans. Signal Process., vol. 65, no. 21, pp. 5812–5827, 2017.
* [20] S. Shen and B. Clerckx, “Joint Waveform and Beamforming Optimization for MIMO Wireless Power Transfer,” IEEE Trans Commun, vol. 69, no. 8, pp. 5441–5455, 2021.
* [21] Y. Zhang and B. Clerckx, “Waveform Design for Wireless Power Transfer With Power Amplifier and Energy Harvester Non-Linearities,” IEEE Trans. Signal Process., pp. 1–15, 2023.
* [22] H. Zhang _et al._ , “Near-field wireless power transfer with dynamic metasurface antennas,” in IEEE SPAWC, pp. 1–5, 2022.
* [23] A. Azarbahram _et al._ , “Energy Beamforming for RF Wireless Power Transfer With Dynamic Metasurface Antennas,” IEEE Wireless Commun. Lett., pp. 1–1, 2023.
* [24] O. T. Demir and T. E. Tuncer, “Antenna selection and hybrid beamforming for simultaneous wireless information and power transfer in multi-group multicasting systems,” IEEE Trans. on Wireless Commun., vol. 15, no. 10, pp. 6948–6962, 2016.
* [25] Z. Feng _et al._ , “Waveform and Beamforming Design for Intelligent Reflecting Surface Aided Wireless Power Transfer: Single-User and Multi-User Solutions,” IEEE Trans. Wirel. Commun., vol. 21, no. 7, pp. 5346–5361, 2022.
* [26] Y. Zhao _et al._ , “RIS-Aided SWIPT: Joint Waveform, Active and Passive Beamforming Design Under Nonlinear Harvester Model,” IEEE Trans Commun, vol. 70, no. 2, pp. 1345–1359, 2022.
* [27] H. Zhang _et al._ , “Beam focusing for near-field multiuser MIMO communications,” IEEE Trans. Wirel. Commun., vol. 21, no. 9, pp. 7476–7490, 2022.
* [28] S. W. Ellingson, “Path Loss in Reconfigurable Intelligent Surface-Enabled Channels,” in IEEE PIMRC, pp. 829–835, 2021.
* [29] C. Rapp, “Effects of HPA-nonlinearity on a 4-DPSK/OFDM-signal for a digital sound broadcasting signal,” ESA Special Publication, vol. 332, pp. 179–184, 1991.
* [30] D. R. Smith _et al._ , “Analysis of a Waveguide-Fed Metasurface Antenna,” Phys. Rev. Appl., vol. 8, p. 054048, Nov 2017.
* [31] L. You _et al._ , “Energy Efficiency Maximization of Massive MIMO Communications With Dynamic Metasurface Antennas,” IEEE Trans. Wirel. Commun., vol. 22, no. 1, pp. 393–407, 2023.
* [32] C. Lin and G. Y. Li, “Energy-Efficient Design of Indoor mmWave and Sub-THz Systems With Antenna Arrays,” IEEE Trans. Wirel. Commun., vol. 15, no. 7, pp. 4660–4672, 2016.
* [33] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
* [34] E. J. McShane, “Jensen’s inequality,” 1937.
* [35] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1.” http://cvxr.com/cvx, Mar. 2014.
* [36] B. Clerckx and J. Kim, “On the beneficial roles of fading and transmit diversity in wireless power transfer with nonlinear energy harvesting,” IEEE Trans. Wirel. Commun., vol. 17, no. 11, pp. 7731–7743, 2018.
# Testing High-dimensional Multinomials with Applications to Text Analysis
T. Tony Cai (University of Pennsylvania), Zheng Tracy Ke (Harvard University), and Paxton Turner (Harvard University)
###### Abstract
Motivated by applications in text mining and discrete distribution inference,
we investigate the testing for equality of probability mass functions of $K$
groups of high-dimensional multinomial distributions. A test statistic, which
is shown to have an asymptotic standard normal distribution under the null, is
proposed. The optimal detection boundary is established, and the proposed test
is shown to achieve this optimal detection boundary across the entire
parameter space of interest. The proposed method is demonstrated in simulation
studies and applied to analyze two real-world datasets to examine variation
among consumer reviews of Amazon movies and diversity of statistical paper
abstracts.
> Keywords: authorship attribution, closeness testing, consumer reviews,
> martingale central limit theorem, minimax optimality, topic model
## 1 Introduction
Statistical inference for multinomial data has garnered considerable recent
interest (Diakonikolas and Kane, 2016; Balakrishnan and Wasserman, 2018). One
important application is in text mining, as it is common to model the word
counts in a text document by a multinomial distribution (Blei et al., 2003).
We consider a specific example in marketing, where the study of online
customer ratings and reviews has become a trending topic (Chevalier and
Mayzlin, 2006; Zhu and Zhang, 2010; Leung and Yang, 2020). Customer reviews
are a good proxy to the overall word of mouth (WOM) and can significantly
influence customers’ decisions (Zhu and Zhang, 2010). Many research works aim
to understand the patterns in online reviews and their impacts on sales.
Classical studies only use the numerical ratings but ignore the rich text
reviews because of their unstructured nature. More recent works have revealed
the importance of analyzing text reviews (Chevalier and Mayzlin, 2006),
especially for hedonic products such as books, movies, and hotels. A question
of great interest is to detect the heterogeneity in reviewers’ response
styles. For example, Leung and Yang (2020) discovered that younger travelers,
women, and travelers with less review expertise tend to give more positive
reviews and that guests staying in high-class hotels tend to have more extreme
response styles than those staying in low-class hotels. Knowing such
differences will offer valuable insights for hotel managers and online
rating/review sites.
The aforementioned heterogeneity detection can be cast as a hypothesis test on
multinomial data. Suppose reviews are written on a vocabulary of $p$ distinct
words. Let $X_{i}\in\mathbb{R}^{p}$ denote the word counts in review $i$. We
model that
$X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i}),\qquad 1\leq i\leq n,$ (1.1)
where $N_{i}$ is the total length of review $i$ and
$\Omega_{i}\in\mathbb{R}^{p}$ is a probability mass function (PMF) containing
the population word frequencies. These reviews are divided into $K$ groups by
reviewer characteristics (e.g., age, gender, new/returning customer), product
characteristics (e.g., high-class versus low-class hotels), and numeric
ratings (e.g., from 1 star to 5 stars), where $K$ can be presumably large. We
view $\Omega_{i}$ as representing the ‘true response’ of review $i$. The
“average response” of a group $k$ is defined by a weighted average of the
PMFs:
$\mu_{k}=(n_{k}\bar{N}_{k})^{-1}\sum_{i\in S_{k}}N_{i}\Omega_{i},\qquad 1\leq
k\leq K.$ (1.2)
Here $S_{k}\subset\\{1,2,\ldots,n\\}$ is the index set of group $k$,
$n_{k}=|S_{k}|$ is the total number of reviews in group $k$, and
$\bar{N}_{k}=n_{k}^{-1}\sum_{i\in S_{k}}N_{i}$ is the average length of
reviews in group $k$. We would like to test
$H_{0}:\quad\mu_{1}=\mu_{2}=\ldots=\mu_{K}.$ (1.3)
When the null hypothesis is rejected, it means there exist statistically
significant differences among the group-wise “average responses”.
We call (1.1)-(1.3) the “$K$-sample testing for equality of average PMFs in
multinomials” or “$K$-sample testing for multinomials” for short.
Interestingly, as $K$ varies, this problem includes several well-defined
problems in text mining and discrete distribution inference as special cases.
1. 1.
Global testing for topic models. Topic modeling (Blei et al., 2003) is a
popular text mining tool. In a topic model, each $\Omega_{i}$ in (1.1) is a
convex combination of $M$ topic vectors. Before fitting a topic model to a
corpus, it is often desirable to determine if the corpus indeed contains
multiple topics. This boils down to the global testing problem, which tests
$M=1$ versus $M>1$. Under the null hypothesis, $\Omega_{i}$’s are equal to
each other, and in the alternative hypothesis, $\Omega_{i}$’s can take
continuous values in a high-dimensional simplex. This is a special case of our
problem with $K=n$ and $n_{k}=1$.
2. 2.
Authorship attribution (Mosteller and Wallace, 1963; Kipnis, 2022). In these
applications, the goal is to determine the unknown authorship of an article
from other articles with known authors. A famous example (Mosteller and
Wallace, 2012) is to determine the actual authors of a few Federalist Papers
written by three authors but published under a single pseudonym. It can be
formulated (Mosteller and Wallace, 1963; Kipnis, 2022) as testing the equality
of population word frequencies between the article of interest and the corpus
from a known author, a special case of our problem with $K=2$.
3. 3.
Closeness between discrete distributions (Chan et al., 2014; Bhattacharya and
Valiant, 2015; Balakrishnan and Wasserman, 2019). There has been a surge of
interest in discrete distribution inference. Closeness testing is one of most
studied problems. The data from two discrete distributions are summarized in
two multinomial vectors $\mathrm{Multinomial}(N_{1},\mu)$ and
$\mathrm{Multinomial}(N_{2},\theta)$. The goal is to test $\mu=\theta$. It is
a special case of our testing problem with $K=2$ and $n_{1}=n_{2}=1$.
In this paper, we provide a unified solution to all the aforementioned
problems. The key to our methodology is a flexible statistic called DELVE (DE-
biased and Length-assisted Variability Estimator). It provides a general
similarity measure for comparing groups of discrete distributions such as
count vectors associated with text corpora. Similarity measures (such as the
classical cosine similarity, log-likelihood ratio statistic, and others) are
fundamental in text mining and have been applied to problems in distribution
testing (Kim et al., 2022), computational linguistics (Gomaa et al., 2013),
econometrics (Hansen et al., 2018), and computational biology (Kolodziejczyk
et al., 2015). Our method is a new and flexible similarity measure that is
potentially useful in these areas.
We emphasize that our setting does not require that the $X_{i}$’s in the same
group are drawn from the same distribution. Under the null hypothesis (1.3),
the group-wise means are equal, but the $\Omega_{i}$’s within each group can
still be different from each other. As a result, the null hypothesis is
composite and designing a proper test statistic is non-trivial.
### 1.1 Our results and contributions
The dimensionality of the testing problem is captured by $(n,p,K)$ and
$\bar{N}:=n^{-1}\sum_{i=1}^{n}N_{i}$. We are interested in a high-dimensional
setting where
$n\bar{N}\to\infty,\quad p\to\infty,\quad\mbox{and}\quad
n^{2}\bar{N}^{2}/(Kp)\to\infty.$ (1.4)
In most places of this paper, we use a subscript $n$ to indicate asymptotics,
but our method and theory do apply to the case where $n$ is finite and
$\bar{N}\to\infty$. In text applications, $n\bar{N}$ is the total count of
words in the corpus, and a large $n\bar{N}$ means either there are
sufficiently many documents, or the documents are sufficiently long. Given
that $n\bar{N}\to\infty$, we further allow $(p,K)$ to grow with $n$ at a speed
such that $Kp\ll n^{2}\bar{N}^{2}$. In particular, our settings allow $K$ to
range from $2$ to $n$, so as to cover all the application examples.
We propose a test that enjoys the following properties:
1. (a)
Parameter-free null distribution: We show that the test statistic $\psi\to
N(0,1)$ under $H_{0}$. Even under the null hypothesis (1.3), the model
contains a large number of free parameters because the null hypothesis is only
about the equality of “average” PMFs but still allows $(N_{i},\Omega_{i})$ to
differ within each group. As an appealing property, the null distribution of
$\psi$ does not depend on these individual multinomial parameters; hence, we
can always conveniently obtain the asymptotic $p$-value for our proposed test.
2. (b)
Minimax optimal detection boundary: We define a quantity
$\omega_{n}:=\omega_{n}(\mu_{1},\mu_{2},\ldots,\mu_{K})$ in (3.5) that
measures the difference among the $K$ group-wise mean PMF’s. It satisfies that
$\omega_{n}=0$ if and only if the null hypothesis holds, and it has been
properly normalized so that $\omega_{n}$ is bounded under the alternative
hypothesis (provided some mild regularity conditions hold). We show that the
proposed test has an asymptotic full power if
$\omega_{n}^{4}n^{2}{\bar{N}}^{2}/(Kp)\to\infty.$ We also provide a matching
lower bound by showing that the null hypothesis and the alternative hypothesis
are asymptotically indistinguishable if
$\omega_{n}^{4}n^{2}\bar{N}^{2}/(Kp)\to 0.$ Therefore, the proposed test is
minimax optimal. Furthermore, in the boundary case where
$\omega_{n}^{4}n^{2}\bar{N}^{2}/(Kp)\to c_{0}$ for a constant $c_{0}>0$, for
some special settings, we show that $\psi\to N(0,1)$ under $H_{0}$, and
$\psi\to N(c_{1},1)$, under $H_{1}$, with the constant $c_{1}$ being an
explicit function of $c_{0}$.
To the best of our knowledge, this testing problem for a general $K$ has not
been studied before. The existing works primarily focused on closeness testing
and authorship attribution (see Section 1.2), which are special cases with
$K=2$. In comparison, our test is applicable to any value of $K$, offering a
unified solution to multiple applications. Even for $K=2$, the existing works
do not provide a test statistic that has a tractable null distribution. They
determined the rejection region and calculated $p$-values using either a
(conservative) large-deviation bound or a permutation procedure. Our test is
the first one equipped with a tractable null distribution. Our results about
the optimal detection boundary for a general $K$ are also new to the
literature. By varying $K$ in our theory, we obtain the optimal detection
boundary for different sub-problems. For some of them (e.g., global testing
for topic models, authorship attribution with moderate sparsity), the optimal
detection boundary was not known before; hence, our results help advance the
understanding of the statistical limits of these problems.
### 1.2 Related literature
First, we make a connection to discrete distribution inference. Let
$X\sim\mathrm{Multinomial}(N,\Omega)$ represent a size-$N$ sample from a
discrete distribution with $p$ categories. The one-sample closeness testing
aims to test $H_{0}:\Omega=\mu$, for a given PMF $\mu$. Existing works focus
on finding the minimum separation condition in terms of the $\ell^{1}$-norm or
$\ell^{2}$-norm of $\Omega-\mu$. Balakrishnan and Wasserman (2019) derived the
minimum $\ell^{1}$-separation condition and proposed a truncated chi-square
test to achieve it. Valiant and Valiant (2017) studied the “local critical
radius”, a local separation condition that depends on the “effective sparsity”
of $\mu$, and they proposed a “2/3rd + tail” test to achieve it. In the two-
sample closeness testing problem, given
$X_{1}\sim\mathrm{Multinomial}(N_{1},\Omega_{1})$ and
$X_{2}\sim\mathrm{Multinomial}(N_{2},\Omega_{2})$, it aims to test
$H_{0}:\Omega_{1}=\Omega_{2}$. Again, this literature focuses on finding the
minimum separation condition in terms of the $\ell^{1}$-norm or
$\ell^{2}$-norm of $\Omega_{1}-\Omega_{2}$. When $N_{1}=N_{2}$, Chan et al.
(2014) derived the minimum $\ell^{1}$-separation condition and proposed a
weighted chi-square test to attain it. Bhattacharya and Valiant (2015)
extended their results to the unbalanced case where $N_{1}\neq N_{2}$,
assuming $\|\Omega_{1}-\Omega_{2}\|_{1}\geq p^{-1/12}$. This assumption was
later removed by Diakonikolas and Kane (2016), who established the minimum
$\ell^{1}$-separation condition in full generality. Kim et al. (2022) proposed
a two-sample kernel $U$-statistic and showed that it attains the minimum
$\ell^{2}$-separation condition.
Since the two-sample closeness testing is a special case of our problem with
$K=2$ and $n_{1}=n_{2}=1$, our test is directly applicable. An appealing
property of our test is its tractable asymptotic null distribution of
$N(0,1)$. In contrast, for the chi-square statistic in Chan et al. (2014) or
the $U$-statistic in (Kim et al., 2022), the rejection region is determined by
either an upper bound from concentration inequalities or a permutation
procedure, which may lead to a conservative threshold or need additional
computational costs. Regarding the testing power, we show in Section 4.3 that
our test achieves the minimum $\ell^{2}$-separation condition, i.e., our
method is an optimal “$\ell^{2}$ tester.” Our test can also be turned into an optimal “$\ell^{1}$ tester” (a test that achieves the minimum
$\ell^{1}$-separation condition) by re-weighting terms in the test statistic
(see Section 4.3).
Next, we make a connection to text mining. In this literature, a multinomial
vector $X\sim\mathrm{Multinomial}(N,\Omega)$ represents the word counts for a
document of length $N$ written with a dictionary containing $p$ words. In a
topic model, each $\Omega_{i}$ is a convex combination of $M$ “topic vectors”:
$\Omega_{i}=\sum_{k=1}^{M}w_{i}(k)A_{k}$, where each $A_{k}\in\mathbb{R}^{p}$
is a PMF and the combination coefficient vector $w_{i}\in\mathbb{R}^{M}$ is
called the “topic weight” vector for document $i$. Given a collection of
documents $X_{1},X_{2},\ldots,X_{n}$, the global testing problem aims to test
$M=1$ versus $M>1$. Interestingly, the optimal detection boundary for this
problem has never been rigorously studied. As we have explained, this problem is a special case of our testing problem with $K=n$. Our results (a) provide a
test statistic that has a tractable null distribution and (b) reveal that the
optimal detection boundary is
$\omega^{2}_{n}\asymp(\sqrt{n}\bar{N})^{-1}\sqrt{p}$. Both (a) and (b) are new
results. Comparing our results with those on the estimation of the $A_{k}$’s (Ke and Wang, 2022) suggests that global testing requires a strictly lower signal strength than topic estimation.
For authorship attribution, Kipnis (2022) treats the corpus from a known
author as a single document and tests the null hypothesis that this combined
document and a new document have the same population word frequencies. It is a
two-sample closeness testing problem, except that sparsity is imposed on the
difference of two PMFs. Kipnis (2022) proposed a test which applies an “exact
binomial test” to obtain a $p$-value for each word and combines these
$p$-values using Higher Criticism (Donoho and Jin, 2004). Donoho and Kipnis
(2022) analyzed this test when the number of “useful words” is $o(\sqrt{p})$,
and they derived a sharp phase diagram (a related one-sample setting was
studied in Arias-Castro and Wang (2015)). In Section 4.2, we show that our
test is applicable to this problem and has some nice properties: (a) tractable
null distribution; (b) allows for $s\geq c\sqrt{p}$, where $s$ is the number
of useful words; and (c) does not require documents from the known author to
have identical population word frequencies, making the setting more realistic.
On the other hand, when $s=o(\sqrt{p})$, our test is less powerful than the
one in Kipnis (2022); Donoho and Kipnis (2022), as our test does not utilize
sparsity explicitly. We can further improve our test in this regime by
modifying the DELVE statistic to incorporate sparsity (see the remark in
Section 4.2).
### 1.3 Organization
The rest of this paper is arranged as follows. In Section 2, we introduce the
test statistic and explain the rationale behind it. We then present in Section
3 the main theoretical results, including the asymptotic null distribution,
power analysis, a matching lower bound, the study of two special cases ($K=n$
and $K=2$), and a discussion of the contiguity regime. Section 4 applies our
results to text mining and discrete distribution testing. Simulations are in
Section 5 and real data analysis is in Section 6. The paper is concluded with
a discussion in Section 7. All proofs are in the appendix.
## 2 The DELVE Test
Recall that $X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i})$ for $1\leq i\leq
n$. There is a known partition $\\{1,2,\ldots,n\\}=\cup_{k=1}^{K}S_{k}$. Write
$n_{k}=|S_{k}|$, $\bar{N}_{k}=n_{k}^{-1}\sum_{i\in S_{k}}N_{i}$, and
$\bar{N}=n^{-1}\sum_{i=1}^{n}N_{i}$. In (1.2), we have defined the group-wise
mean PMF $\mu_{k}=(n_{k}\bar{N}_{k})^{-1}\sum_{i\in S_{k}}N_{i}\Omega_{i}$. We
further define the overall mean PMF $\mu\in\mathbb{R}^{p}$ by
$\mu:=\frac{1}{n\bar{N}}\sum_{k=1}^{K}n_{k}\bar{N}_{k}\mu_{k}=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i}.$
(2.1)
We introduce a quantity $\rho^{2}=\rho^{2}(\mu_{1},\ldots,\mu_{K})$ by
$\rho^{2}:=\sum_{k=1}^{K}n_{k}\bar{N}_{k}\|\mu_{k}-\mu\|^{2}.$ (2.2)
This quantity measures the variations across $K$ group-wise mean PMFs. It is
true that the null hypothesis (1.3) holds if and only if $\rho^{2}=0$.
Inspired by this observation, we hope to construct an unbiased estimator of
$\rho^{2}$ and develop it to a test statistic.
We can easily obtain the minimum variance unbiased estimators of $\mu_{k}$ and
$\mu$:
$\hat{\mu}_{k}=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}X_{i},\qquad\mbox{and}\qquad\hat{\mu}=\frac{1}{n\bar{N}}\sum_{k=1}^{K}n_{k}\bar{N}_{k}\hat{\mu}_{k}=\frac{1}{n\bar{N}}\sum_{i=1}^{n}X_{i}.$
(2.3)
For each $1\leq j\leq p$, let $\mu_{kj}$, $\mu_{j}$, $\hat{\mu}_{kj}$ and
$\hat{\mu}_{j}$ represent the $j$th entry of $\mu_{k}$, $\mu$, $\hat{\mu}_{k}$
and $\hat{\mu}$, respectively. A naive estimator of $\rho^{2}$ is
$\widetilde{T}=\sum_{j=1}^{p}\widetilde{T}_{j},\qquad\mbox{where}\quad\widetilde{T}_{j}=\sum_{k=1}^{K}n_{k}\bar{N}_{k}(\hat{\mu}_{kj}-\hat{\mu}_{j})^{2}.$
(2.4)
This estimator is biased. In Section C.1 of the appendix, we show that
$\mathbb{E}[\widetilde{T}_{j}]=\sum_{k=1}^{K}\bigl{[}n_{k}\bar{N}_{k}(\mu_{kj}-\mu_{j})^{2}+\bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\bigr{)}\sum_{i\in
S_{k}}N_{i}\Omega_{ij}(1-\Omega_{ij})\bigr{]}.$ This motivates us to debias
$\widetilde{T}_{j}$ by using an unbiased estimate of
$\Omega_{ij}(1-\Omega_{ij})$. By elementary properties of the multinomial
distribution,
$\mathbb{E}[X_{ij}(N_{i}-X_{ij})]=N_{i}(N_{i}-1)\Omega_{ij}(1-\Omega_{ij})$.
We thereby use $\frac{1}{N_{i}(N_{i}-1)}X_{ij}(N_{i}-X_{ij})$ to estimate
$\Omega_{ij}(1-\Omega_{ij})$. This gives rise to an unbiased estimator of
$\rho^{2}$ as
$T=\sum_{j=1}^{p}T_{j},\quad
T_{j}=\sum_{k=1}^{K}\biggl{[}n_{k}\bar{N}_{k}(\hat{\mu}_{kj}-\hat{\mu}_{j})^{2}-\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}\sum_{i\in
S_{k}}\frac{X_{ij}(N_{i}-X_{ij})}{N_{i}-1}\biggr{]}.$ (2.5)
###### Lemma 2.1.
Under Models (1.1)-(1.2), the estimator in (2.5) satisfies that
$\mathbb{E}[T]=\rho^{2}$.
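For illustration, (2.5) admits a direct implementation; the minimal numpy sketch below assumes the counts are stacked in an $n\times p$ array with integer group labels.

```python
import numpy as np

def delve_T(X, N, labels):
    """Unbiased estimator T of rho^2, per (2.5).
    X: (n, p) counts; N: (n,) lengths with N_i >= 2; labels: (n,) groups."""
    nN = N.sum()                                   # n * bar{N}
    mu_hat = X.sum(axis=0) / nN                    # hat{mu} in (2.3)
    T = 0.0
    for k in np.unique(labels):
        Sk = labels == k
        nkNk = N[Sk].sum()                         # n_k * bar{N}_k
        mu_k_hat = X[Sk].sum(axis=0) / nkNk        # hat{mu}_k in (2.3)
        T += nkNk * np.sum((mu_k_hat - mu_hat) ** 2)
        debias = np.sum(X[Sk] * (N[Sk, None] - X[Sk]) / (N[Sk, None] - 1.0))
        T -= (1.0 / nkNk - 1.0 / nN) * debias
    return T
```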
To use $T$ for hypothesis testing, we need a proper standardization of this
statistic. In Sections A.1-A.2 of the appendix , we study $\mathbb{V}(T)$, the
variance of $T$. Under mild regularity conditions, it can be shown that
$\mathbb{V}(T)=\Theta_{n}\cdot[1+o(1)]$, where
$\displaystyle\Theta_{n}:=4\sum_{k=1}^{K}\sum_{j=1}^{p}n_{k}\bar{N}_{k}(\mu_{kj}-\mu_{j})^{2}\mu_{kj}+2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\frac{N_{i}^{3}}{N_{i}-1}\Omega_{ij}^{2}$
(2.6) $\displaystyle+\frac{2}{n^{2}\bar{N}^{2}}\sum_{1\leq k\neq\ell\leq
K}\sum_{i\in S_{k}}\sum_{m\in
S_{\ell}}\sum_{j=1}^{p}N_{i}N_{m}\Omega_{ij}\Omega_{mj}+2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in
S_{k},m\in S_{k},\\\ i\neq
m\end{subarray}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}N_{i}N_{m}\Omega_{ij}\Omega_{mj}.$
In $\Theta_{n}$, the first term vanishes under the null, so it suffices to
estimate the other three terms in $\Theta_{n}$. By properties of multinomial
distributions, $\mathbb{E}[X_{ij}X_{mj}]=N_{i}N_{m}\Omega_{ij}\Omega_{mj}$,
$\mathbb{E}[X^{2}_{ij}]=N_{i}^{2}\Omega_{ij}^{2}+N_{i}\Omega_{ij}(1-\Omega_{ij})$,
and
$\mathbb{E}[X_{ij}(N_{i}-X_{ij})]=N_{i}(N_{i}-1)\Omega_{ij}(1-\Omega_{ij})$.
It inspires us to estimate $\Omega_{ij}\Omega_{mj}$ by
$\frac{X_{ij}X_{mj}}{N_{i}N_{m}}$ and estimate $\Omega_{ij}^{2}$ by
$\frac{X_{ij}^{2}}{N_{i}^{2}}-\frac{X_{ij}(N_{i}-X_{ij})}{N^{2}_{i}(N_{i}-1)}=\frac{X_{ij}^{2}-X_{ij}}{N_{i}(N_{i}-1)}$.
Define
$\displaystyle V=2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\frac{X_{ij}^{2}-X_{ij}}{N_{i}(N_{i}-1)}+\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}\sum_{j=1}^{p}X_{ij}X_{mj}$ (2.7)
$\displaystyle\qquad+2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k},\\\ i\neq
m\end{subarray}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}X_{ij}X_{mj}.$
(2.8)
The test statistic we propose is as follows (in the rare event $V<0$, we
simply set $\psi=0$):
$\psi=T/\sqrt{V}.$ (2.9)
We call $\psi$ the DE-biased and Length-assisted Variability Estimator (DELVE).
In Section 3.1, we show that under mild regularity conditions, $\psi\to
N(0,1)$ under the null hypothesis. For any fixed $\alpha\in(0,1)$, the
asymptotic level-$\alpha$ DELVE test rejects $H_{0}$ if
$\psi>z_{\alpha},\qquad\mbox{where $z_{\alpha}$ is the $(1-\alpha)$-quantile
of $N(0,1)$}.$ (2.10)
### 2.1 The special cases of $K=n$ and $K=2$
As seen in Section 1, the application examples of $K=n$ and $K=2$ are
particularly intriguing. In these cases, we give more explicit expressions of
our test statistic.
When $K=n$, we have $S_{k}=\\{i\\}$ and $\hat{\mu}_{kj}=N_{i}^{-1}X_{ij}$. The
null hypothesis becomes $H_{0}:\Omega_{1}=\Omega_{2}=\ldots=\Omega_{n}.$ The
statistic in (2.5) reduces to
$T=\sum_{j=1}^{p}\sum_{i=1}^{n}\biggl{[}\frac{(X_{ij}-N_{i}\hat{\mu}_{j})^{2}}{N_{i}}-\Bigl{(}1-\frac{N_{i}}{n\bar{N}}\Bigr{)}\frac{X_{ij}(N_{i}-X_{ij})}{N_{i}(N_{i}-1)}\biggr{]}.$
(2.11)
Moreover, in the variance estimate (2.7), the last term is exactly zero, and
it can be shown that the third term is negligible compared to the first term.
We thereby consider a simpler variance estimator by only retaining the first
term in (2.7):
$V^{*}=2\sum_{i=1}^{n}\sum_{j=1}^{p}\Bigl{(}\frac{1}{N_{i}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\frac{X_{ij}^{2}-X_{ij}}{N_{i}(N_{i}-1)}.$
(2.12)
The simplified DELVE test statistic is $\psi^{*}=T/\sqrt{V^{*}}$.
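Both (2.11) and (2.12) vectorize cleanly; a minimal sketch, with `np` as in the earlier snippet:

```python
def delve_star(X, N):
    """Simplified DELVE statistic psi* = T / sqrt(V*) for K = n,
    implementing (2.11) and (2.12) directly."""
    nN = N.sum()
    mu_hat = X.sum(axis=0) / nN
    Ncol = N[:, None].astype(float)
    T = np.sum((X - Ncol * mu_hat) ** 2 / Ncol
               - (1.0 - Ncol / nN) * X * (Ncol - X) / (Ncol * (Ncol - 1.0)))
    V_star = 2.0 * np.sum((1.0 / Ncol - 1.0 / nN) ** 2
                          * (X ** 2 - X) / (Ncol * (Ncol - 1.0)))
    return T / np.sqrt(V_star)
```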
When $K=2$, we observe two collections of multinomial vectors, denoted by
$\\{X_{i}\\}_{1\leq i\leq n}$ and $\\{G_{i}\\}_{1\leq i\leq m}$. We assume for
$1\leq i\leq n$ and $1\leq j\leq m$,
$X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i}),\qquad
G_{j}\sim\mathrm{Multinomial}(M_{j},\Gamma_{j}).$ (2.13)
Write $\bar{N}=n^{-1}\sum_{i=1}^{n}N_{i}$ and
$\bar{M}=m^{-1}\sum_{i=1}^{m}M_{i}$. The null hypothesis becomes
$H_{0}:\quad\eta=\theta,\qquad\mbox{where
}\eta=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i},\mbox{ and
}\theta=\frac{1}{m\bar{M}}\sum_{i=1}^{m}M_{i}\Gamma_{i},$ (2.14)
where $\theta$ and $\eta$ are the two group-wise mean PMFs. We estimate them
by $\hat{\eta}=(n\bar{N})^{-1}\sum_{i=1}^{n}X_{i}$ and
$\hat{\theta}=(m\bar{M})^{-1}\sum_{i=1}^{m}G_{i}$. The statistic in (2.5) has
an equivalent form as follows:
$T=\frac{n\bar{N}m\bar{M}}{n\bar{N}+m\bar{M}}\biggl{[}\|\hat{\eta}-\hat{\theta}\|^{2}-\sum_{i=1}^{n}\sum_{j=1}^{p}\frac{X_{ij}(N_{i}-X_{ij})}{n^{2}\bar{N}^{2}(N_{i}-1)}-\sum_{i=1}^{m}\sum_{j=1}^{p}\frac{G_{ij}(M_{i}-G_{ij})}{m^{2}\bar{M}^{2}(M_{i}-1)}\biggr{]}.$
(2.15)
The variance estimate (2.7) has an equivalent form as follows:
$\displaystyle V=\frac{4\sum_{i=1}^{n}\sum_{i^{\prime}=1}^{m}\sum_{j=1}^{p}X_{ij}G_{i^{\prime}j}}{(n\bar{N}+m\bar{M})^{2}}+\frac{2m^{2}\bar{M}^{2}\sum_{j=1}^{p}\big{[}\sum_{i=1}^{n}\frac{X_{ij}^{2}-X_{ij}}{N_{i}(N_{i}-1)}+\sum_{1\leq i\neq i^{\prime}\leq n}X_{ij}X_{i^{\prime}j}\big{]}}{n^{2}\bar{N}^{2}(n\bar{N}+m\bar{M})^{2}}$ (2.16)
$\displaystyle\quad+\frac{2n^{2}\bar{N}^{2}\sum_{j=1}^{p}\big{[}\sum_{i=1}^{m}\frac{G_{ij}^{2}-G_{ij}}{M_{i}(M_{i}-1)}+\sum_{1\leq i\neq i^{\prime}\leq m}G_{ij}G_{i^{\prime}j}\big{]}}{m^{2}\bar{M}^{2}(n\bar{N}+m\bar{M})^{2}}.$ (2.17)
The DELVE test statistic is $\psi=T/\sqrt{V}$.
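The two-sample statistic (2.15) likewise admits a compact implementation; the variance (2.16) follows the same pattern and is omitted from this sketch.

```python
def delve_T_two_sample(X, N, G, M):
    """T for K = 2, per (2.15). X: (n, p) and G: (m, p) count arrays;
    N and M are the corresponding length vectors (entries >= 2)."""
    nN, mM = N.sum(), M.sum()
    eta_hat = X.sum(axis=0) / nN
    theta_hat = G.sum(axis=0) / mM
    corr_x = np.sum(X * (N[:, None] - X) / (N[:, None] - 1.0)) / nN ** 2
    corr_g = np.sum(G * (M[:, None] - G) / (M[:, None] - 1.0)) / mM ** 2
    return (nN * mM / (nN + mM)) * (
        np.sum((eta_hat - theta_hat) ** 2) - corr_x - corr_g)
```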
### 2.2 A variant: DELVE+
We introduce a variant of the DELVE test statistic to better suit real data.
Let $\hat{\mu}$, $T$ and $V$ be as in (2.3), (2.5) and (2.7). Define
$\psi^{+}=T/\sqrt{V^{+}},\qquad\mbox{where}\quad
V^{+}=V\cdot\bigl{(}1+\|\hat{\mu}\|_{2}T/\sqrt{V}\bigr{)}.$ (2.18)
We call (2.18) the DELVE+ test statistic. In theory, this modification has
little effect on the key properties of the test. To see this, we note that
$\|\hat{\mu}\|_{2}=o_{\mathbb{P}}(1)$ in high-dimensional settings. Suppose
$T/\sqrt{V}\to N(0,1)$ under $H_{0}$. Since $\|\hat{\mu}\|_{2}\to 0$, it is
seen immediately that $V^{+}/V\to 1$; hence, the asymptotic normality also
holds for $\psi^{+}$. Suppose $T/\sqrt{V}\to\infty$ under the alternative
hypothesis. It follows that $V^{+}\leq 2\max\\{V,\|\hat{\mu}\|_{2}\cdot
T\sqrt{V}\\}$ and
$\psi^{+}\geq\frac{1}{\sqrt{2}}\min\\{T/\sqrt{V},\,\|\hat{\mu}\|_{2}^{-1}(T/\sqrt{V})^{1/2}\\}\to\infty$.
We have proved the following lemma:
###### Lemma 2.2.
As $n\bar{N}\to\infty$, suppose $\|\hat{\mu}\|_{2}\to 0$ in probability. Under
$H_{0}$, if $T/\sqrt{V}\to N(0,1)$, then $T/\sqrt{V^{+}}\to N(0,1)$. Under
$H_{1}$, if $T/\sqrt{V}\to\infty$, then $T/\sqrt{V^{+}}\to\infty$.
In practice, this modification avoids extremely small $p$-values. In some real
datasets, $V$ is very small and leads to an extremely small $p$-value in the
original DELVE test. In DELVE+, as long as $T$ is positive, $\psi^{+}$ is
smaller than $\psi$, so that the $p$-value is adjusted.
In the numerical experiments, we consider both DELVE and DELVE+. For
theoretical analysis, since these two versions have almost identical
theoretical properties, we only focus on the original DELVE test statistic.
## 3 Theoretical Properties
We first present the regularity conditions. For a constant $c_{0}\in(0,1)$, we
assume
$\min_{1\leq i\leq n}N_{i}\geq 2,\qquad\max_{1\leq i\leq
n}\|\Omega_{i}\|_{\infty}\leq 1-c_{0},\qquad\max_{1\leq k\leq
K}\frac{n_{k}\bar{N}_{k}}{n\bar{N}}\leq 1-c_{0}.$ (3.1)
In (3.1), the first condition is mild. The second condition is also mild: note
that $\|\Omega_{i}\|_{1}=1$ for each $i$; this condition excludes those cases
where one of the $p$ categories has an extremely dominating probability in the
PMF $\Omega_{i}$. In the third condition, $n_{k}\bar{N}_{k}$ is the total
number of counts in all multinomials of group $k$, and this condition excludes
the extremely unbalanced case where one group occupies the majority of counts.
Note that in the special case of $K=2$, we relax this condition to allow for
severely unbalanced groups (see Section 3.4).
Recall that $\mu_{k}=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{i}$ is the mean PMF within group $k$. We also define a
‘covariance’ matrix of PMF’s for group $k$ by
$\Sigma_{k}=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{i}\Omega_{i}^{\prime}$. Let
$\alpha_{n}:=\max\left\\{\sum_{k=1}^{K}\frac{\|\mu_{k}\|_{3}^{3}}{n_{k}\bar{N}_{k}},\quad\sum_{k=1}^{K}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}\right\\}\bigg{/}\bigg{(}\sum_{k=1}^{K}\|\mu_{k}\|^{2}\bigg{)}^{2},$
(3.2)
and
$\beta_{n}:=\max\biggl{\\{}\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{N^{2}_{i}}{n_{k}^{2}\bar{N}_{k}^{2}}\|\Omega_{i}\|_{3}^{3},\quad\sum_{k=1}^{K}\|\Sigma_{k}\|_{F}^{2}\bigg{\\}}\bigg{/}(K\|\mu\|^{2}).$
(3.3)
We assume that as $n\bar{N}\to\infty$,
$\alpha_{n}=o(1),\qquad\beta_{n}=o(1),\qquad\mbox{and}\quad\frac{\|\mu\|_{4}^{4}}{K\|\mu\|^{4}}=o(1).$
(3.4)
Here $\alpha_{n}$ and $\beta_{n}$ only depend on group-wise quantities, such
as $\mu_{k}$, $\Sigma_{k}$ and $\sum_{i\in
S_{k}}N^{2}_{i}\|\Omega_{i}\|_{3}^{3}$; hence, a small number of ‘outliers’
(i.e., extremely large entries) in $\Omega$ has little effect on $\alpha_{n}$
and $\beta_{n}$. Furthermore, in a simple case where $\max_{k}n_{k}\leq
C\min_{k}n_{k}$, $\max_{k}\bar{N}_{k}\leq C\min_{k}\bar{N}_{k}$ and
$\|\Omega\|_{\max}=O(1/p)$, it holds that
$\alpha_{n}=O(\max\\{\frac{1}{n\bar{N}},\frac{Kp}{n^{2}\bar{N}^{2}}\\})$,
$\beta_{n}=O(\max\\{\frac{K^{2}}{n^{2}p},\frac{1}{p}\\})$ and
$\frac{\|\mu\|_{4}^{4}}{K\|\mu\|^{4}}=O(\frac{1}{Kp})$. When
$n\bar{N}\to\infty$ and $p\to\infty$, (3.4) reduces to
$n^{2}\bar{N}^{2}/(Kp)\to\infty$. This condition is necessary for successful
testing, because our lower bound in Section 3.3 implies that the two
hypotheses are asymptotically indistinguishable if $n^{2}\bar{N}^{2}/(Kp)\to
0$.
### 3.1 The asymptotic null distribution
Under the null hypothesis, the $K$ group-wise mean PMF’s
$\mu_{1},\mu_{2},\ldots,\mu_{K}$, are equal to each other, but this hypothesis
is still highly composite, as $(N_{i},\Omega_{i})$ are not necessarily the
same within each group. We show that the DELVE test statistic always enjoys a
parameter-free asymptotic null distribution. Let $T$, $\Theta_{n}$ and $V$ be
as in (2.5)-(2.7). The next two theorems are proved in the appendix.
###### Theorem 3.1.
Consider Models (1.1)-(1.2), where the null hypothesis (1.3) holds. Suppose
(3.1) and (3.4) are satisfied. As $n\bar{N}\to\infty$, $T/\sqrt{\Theta_{n}}\to
N(0,1)$ in distribution.
###### Theorem 3.2.
Under the conditions of Theorem 3.1, as $n{\bar{N}}\to\infty$,
$V/\Theta_{n}\to 1$ in probability, and $\psi:=T/\sqrt{V}\to N(0,1)$ in
distribution.
By Theorem 3.2, the asymptotic $p$-value is computed via $1-\Phi(\psi)$, where
$\Phi(\cdot)$ is the cumulative distribution function of the standard normal.
Moreover, for any fixed $\alpha\in(0,1)$, the rejection region of the
asymptotic level-$\alpha$ test is as given in (2.10).
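As a quick sanity check of the null behavior in the case $K=n$, one can simulate under $H_{0}$ and inspect the empirical law of $\psi^{*}$, reusing the `delve_star` sketch from Section 2.1; the sample sizes below are arbitrary.

```python
rng = np.random.default_rng(0)
n, p, Ni, reps = 200, 500, 50, 300
Omega = np.full(p, 1.0 / p)                 # common PMF under H_0
psis = [delve_star(rng.multinomial(Ni, Omega, size=n), np.full(n, Ni))
        for _ in range(reps)]
print(np.mean(psis), np.std(psis))          # roughly 0 and 1, matching N(0,1)
```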
The proofs of Theorems 3.1-3.2 contain two key steps: in the first step, we
decompose $T$ into the sum of mutually uncorrelated terms. We introduce a set
of independent, mean-zero random vectors $\\{Z_{ir}\\}_{1\leq i\leq n,1\leq
r\leq N_{i}}$, where
$Z_{ir}\sim\mathrm{Multinomial}(1,\Omega_{i})-\Omega_{i}$. By properties of
multinomial distributions, $X_{i}=N_{i}\Omega_{i}+\sum_{r=1}^{N_{i}}Z_{ir}$ in
distribution. We plug it into (2.5) to obtain $T=T_{1}+T_{2}+T_{3}+T_{4}$,
where $T_{1}$ is a linear form of $\\{Z_{ir}\\}$, $T_{2}$, $T_{3}$ and $T_{4}$
are quadratic forms of $\\{Z_{ir}\\}$, and the four terms are uncorrelated
with each other (details are contained in Section A of the appendix). In the
second step, we construct a martingale for each term $T_{j}$. This is
accomplished by rearranging the double-index sequence $Z_{ir}$ to a single-
index sequence and then successively adding terms in this sequence to $T_{j}$.
We then apply the martingale central limit theorem (CLT) (Hall and Heyde,
2014) to prove the asymptotic normality of each $T_{j}$. The asymptotic
normality of $T$ follows by identifying the dominating terms in
$T_{1}$-$T_{4}$ (as model parameters change, the dominating terms can be
different) and studying their joint distribution. This step involves extensive
calculations to bound the conditional variance and to verify the Lindeberg
conditions of the martingale CLT, as well as numerous subtle uses of the
Cauchy-Schwarz inequality to simplify the moment bounds.
### 3.2 Power analysis
Under the alternative hypothesis, the PMF’s $\mu_{1},\mu_{2},\ldots,\mu_{K}$
are not the same. In Section 2, we introduce a quantity $\rho^{2}$ (see (2.2))
to capture the total variation in $\mu_{k}$’s, but this quantity is not scale-
free. We define a scaled version of $\rho^{2}$ as
$\omega_{n}^{2}=\omega_{n}^{2}(\mu_{1},\mu_{2},\ldots,\mu_{K}):=\frac{1}{n\bar{N}\|\mu\|^{2}}\sum_{k=1}^{K}n_{k}\bar{N}_{k}\|\mu_{k}-\mu\|^{2}.$
(3.5)
It is seen that $\omega_{n}^{2}\leq\max_{k}\\{\frac{\|\mu_{k}-\mu\|^{2}}{\|\mu\|^{2}}\\}$, which is properly scaled.
###### Theorem 3.3.
Consider Models (1.1)-(1.2), where (3.1) and (3.4) are satisfied. Then,
$\mathbb{E}[T]=n\bar{N}\|\mu\|^{2}\omega_{n}^{2}$, and
$\mathbb{V}(T)=O\bigl{(}\sum_{k=1}^{K}\|\mu_{k}\|^{2}\bigr{)}+\mathbb{E}[T]\cdot
O\bigl{(}\max_{1\leq k\leq K}\|\mu_{k}\|_{\infty}\bigr{)}$.
For the DELVE test to have an asymptotically full power, we need
$\mathbb{E}[T]\gg\sqrt{\mathbb{V}(T)}$. By Theorem 3.3, this is satisfied if
$\mathbb{E}[T]\gg\sqrt{\sum_{k}\|\mu_{k}\|^{2}}$ and
$\mathbb{E}[T]\gg\max_{k}\|\mu_{k}\|_{\infty}$. Between these two
requirements, the latter one is weaker; hence, we only need
$\mathbb{E}[T]\gg\sqrt{\sum_{k=1}^{K}\|\mu_{k}\|^{2}}$. It gives rise to the
following theorem:
###### Theorem 3.4.
Under the conditions of Theorem 3.3, we further assume that under the
alternative hypothesis, as $n\bar{N}\to\infty$,
$\mathrm{SNR}_{n}:=\frac{n\bar{N}\|\mu\|^{2}\omega_{n}^{2}}{\sqrt{\sum_{k=1}^{K}\|\mu_{k}\|^{2}}}\;\;\to\;\;\infty.$
(3.6)
The following statements are true. Under the alternative hypothesis,
$\psi\to\infty$ in probability. For any fixed $\alpha\in(0,1)$, the
level-$\alpha$ DELVE test has an asymptotic level of $\alpha$ and an
asymptotic power of $1$. If we choose $\alpha=\alpha_{n}$ such that
$\alpha_{n}\to 0$ and $1-\Phi(\mathrm{SNR}_{n})=o(\alpha_{n})$, where $\Phi$
is the CDF of $N(0,1)$, then the sum of type I and type II errors of the DELVE
test converges to $0$.
The detection boundary in (3.6) has simpler forms in some special cases. For example, if $\|\mu_{k}\|\asymp\|\mu\|$ for $1\leq k\leq K$, then $\mathrm{SNR}_{n}\asymp n\bar{N}\omega_{n}^{2}\|\mu\|/\sqrt{K}$. If, furthermore, all entries of $\mu$ are of the same order, which implies $\|\mu\|\asymp p^{-1/2}$, then $\mathrm{SNR}_{n}\asymp n^{2}\bar{N}^{2}\omega_{n}^{2}/\sqrt{Kp}$. In this case, the detection boundary simplifies to $\omega_{n}^{4}n^{2}\bar{N}^{2}/(Kp)\to\infty.$
###### Remark 1 (The low-dimensional case $p=O(1)$).
Although we are primarily interested in the high-dimensional setting
$p\to\infty$, it is also worth investigating the case $p=O(1)$. We can show
the same detection boundary for our test, but the asymptotic normality may not
hold, because the variance estimator $V$ in (2.7) is not guaranteed to be
consistent. To fix this issue, we propose a variant of our test by replacing
$V$ with a refined variance estimator $\widetilde{V}$, which is consistent for
a finite $p$. The expression of $\widetilde{V}$ is a little complicated. Due
to space limits, we relegate it to Section E of the appendix.
### 3.3 A matching lower bound
We have seen that the DELVE test successfully separates two hypotheses if
$\mathrm{SNR}_{n}\to\infty$, where $\mathrm{SNR}_{n}$ is as defined in (3.6).
We now present a lower bound to show that the two hypotheses are
asymptotically indistinguishable if $\mathrm{SNR}_{n}\to 0$.
Let $\ell_{i}\in\\{1,2,\ldots,K\\}$ denote the group label of $X_{i}$. Write
$\xi=\\{(N_{i},\Omega_{i},\ell_{i})\\}_{1\leq i\leq n}$. Let $\mu_{k}$,
$\alpha_{n}$, $\beta_{n}$, and $\omega_{n}$ be the same as defined in (1.2),
(3.2), (3.3), and (3.5), respectively. For each given $(n,p,K,\bar{N})$, we
write $\mu_{k}=\mu_{k}(\xi)$ to emphasize its dependence on parameters, and
similarly for $\alpha_{n},\beta_{n},\omega_{n}$. For any $c_{0}\in(0,1)$ and
sequence $\epsilon_{n}$, define
${\cal
Q}_{n}(c_{0},\epsilon_{n}):=\Big{\\{}\xi=\\{(N_{i},\Omega_{i},\ell_{i})\\}_{i=1}^{n}:\,\mbox{(3.1)
holds for
$c_{0}$},\,\,\max(\alpha_{n}(\xi),\beta_{n}(\xi))\leq\epsilon_{n}\Big{\\}}.$
(3.7)
Furthermore, for any sequence $\delta_{n}$, we define a parameter class for
the null hypothesis and a parameter class for the alternative hypothesis:
$\displaystyle{\cal Q}_{0n}^{*}(c_{0},\epsilon_{n})$ $\displaystyle={\cal
Q}_{n}(c_{0},\epsilon_{n})\cap\left\\{\xi:\omega_{n}(\xi)=0\right\\},$ (3.8)
$\displaystyle{\cal Q}_{1n}^{*}(\delta_{n};c_{0},\epsilon_{n})$
$\displaystyle={\cal
Q}_{n}(c_{0},\epsilon_{n})\cap\left\\{\xi:\frac{n\bar{N}\|\mu(\xi)\|^{2}\omega^{2}_{n}(\xi)}{\sqrt{\sum_{k=1}^{K}\|\mu_{k}(\xi)\|^{2}}}\geq\delta_{n}\right\\}.$
(3.9)
###### Theorem 3.5.
Fix a constant $c_{0}\in(0,1)$ and two positive sequences $\epsilon_{n}$ and
$\delta_{n}$ such that $\epsilon_{n}\to 0$ as $n\to\infty$. For any sequence
of $(n,p,K,\bar{N})$ indexed by $n$, we consider Models (1.1)-(1.2) for
$\Omega\in{\cal Q}_{n}(c_{0},\epsilon_{n})$. Let ${\cal
Q}_{0n}^{*}(c_{0},\epsilon_{n})$ and ${\cal
Q}_{1n}^{*}(\delta_{n};c_{0},\epsilon_{n})$ be as in (3.8)-(3.9). If $\delta_{n}\to
0$, then
$\limsup_{n\to\infty}\inf_{\Psi\in\\{0,1\\}}\bigl{\\{}\sup_{\xi\in{\cal
Q}_{0n}^{*}(c_{0},\epsilon_{n})}\mathbb{P}_{\xi}(\Psi=1)+\sup_{\xi\in{\cal
Q}_{1n}^{*}(\delta_{n};c_{0},\epsilon_{n})}\mathbb{P}_{\xi}(\Psi=0)\bigr{\\}}=1.$
By Theorem 3.5, the null and alternative hypotheses are asymptotically
indistinguishable if $\mathrm{SNR}_{n}\to 0$. Combining it with Theorem 3.4,
we conclude that the DELVE test achieves the minimax optimal detection boundary.
### 3.4 The special case of $K=2$
The special case of $K=2$ is found in applications such as closeness testing
and authorship attribution. We study this case more carefully. Given
$\\{X_{i}\\}_{1\leq i\leq n}$ and $\\{G_{i}\\}_{1\leq i\leq m}$, we assume
$X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i}),\qquad
G_{j}\sim\mathrm{Multinomial}(M_{j},\Gamma_{j}).$ (3.10)
Write $\bar{N}=n^{-1}\sum_{i=1}^{n}N_{i}$ and
$\bar{M}=m^{-1}\sum_{i=1}^{m}M_{i}$. The null hypothesis becomes
$H_{0}:\quad\eta=\theta,\qquad\mbox{where
}\eta=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i},\mbox{ and
}\theta=\frac{1}{m\bar{M}}\sum_{i=1}^{m}M_{i}\Gamma_{i},$ (3.11)
where $\theta$ and $\eta$ are the two group-wise mean PMFs. In this case, the
test statistic $\psi$ has a more explicit form as in (2.15)-(2.16).
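As a quick illustration, the plug-in estimates of the group-wise mean PMFs are simply normalized count sums; the sketch below (our notation, not code from the paper) computes $\hat{\eta}=\frac{1}{n\bar{N}}\sum_{i}X_{i}$, which is unbiased for $\eta$ since $\mathbb{E}X_{i}=N_{i}\Omega_{i}$.

```python
import numpy as np

def mean_pmf_estimate(X):
    """X: (n, p) array whose rows are the multinomial count vectors X_i.
    Returns the plug-in estimate of the group-wise mean PMF:
    eta_hat = sum_i X_i / sum_i N_i = (1/(n*Nbar)) * sum_i X_i."""
    return X.sum(axis=0) / X.sum()

# Hypothetical toy group: 3 samples over p = 4 categories
X = np.array([[2, 1, 0, 1], [0, 3, 1, 0], [1, 1, 1, 1]])
eta_hat = mean_pmf_estimate(X)
print(eta_hat, eta_hat.sum())  # estimated PMF; sums to 1
```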
In our previous results for a general $K$, the regularity conditions (e.g.,
(3.1)) impose restrictions on the balance of sample sizes among groups. For
$K=2$, the severely unbalanced setting is interesting (e.g., in authorship
attribution, $n=1$ and $m$ can be large). We relax the regularity conditions
to the following ones:
###### Condition 3.1.
Let $\theta$ and $\eta$ be as in (3.11) and define two matrices
$\Sigma_{1}=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i}\Omega_{i}^{\prime}$
and
$\Sigma_{2}=\frac{1}{m\bar{M}}\sum_{i=1}^{m}M_{i}\Gamma_{i}\Gamma_{i}^{\prime}$.
We assume that the following statements are true: (a) For $1\leq i\leq n$ and
$1\leq j\leq m$, $N_{i}\geq 2$, $\|\Omega_{i}\|_{\infty}\leq 1-c_{0}$,
$M_{j}\geq 2$, and $\|\Gamma_{j}\|_{\infty}\leq 1-c_{0}$, where
$c_{0}\in(0,1)$ is a constant; (b)
$\max\big{\\{}\big{(}\frac{\|\eta\|_{3}^{3}}{n\bar{N}}+\frac{\|\theta\|_{3}^{3}}{m\bar{M}}\big{)},\,\big{(}\frac{\|\eta\|_{2}^{2}}{n^{2}\bar{N}^{2}}+\frac{\|\theta\|_{2}^{2}}{m^{2}\bar{M}^{2}}\big{)}\big{\\}}\big{/}\bigl{\|}\frac{m\bar{M}}{n\bar{N}+m\bar{M}}\eta+\frac{n\bar{N}}{n\bar{N}+m\bar{M}}\theta\bigr{\|}^{4}=o(1)$;
(c)
$\max\big{\\{}\sum_{i}\frac{N_{i}^{2}}{n^{2}\bar{N}^{2}}\|\Omega_{i}\|_{3}^{3},\,\sum_{i}\frac{M_{i}^{2}}{m^{2}\bar{M}^{2}}\|\Gamma_{i}\|_{3}^{3},\,\|\Sigma_{1}\|_{F}^{2}+\|\Sigma_{2}\|_{F}^{2}\big{\\}}\big{/}\|\mu\|^{2}=o(1)$;
and (d) $\|\mu\|_{4}^{4}/\|\mu\|^{4}=o(1)$.
Condition (a) is similar to (3.1), except that we drop the sample size balance
requirement. Conditions (b)-(d) are equivalent to (3.4) but have more explicit
expressions for $K=2$.
###### Theorem 3.6.
In Model (3.10), we test the null hypothesis $H_{0}$: $\eta=\theta$ in (3.11). As
$\min\\{n{\bar{N}},m\bar{M}\\}\to\infty$, suppose Condition 3.1 is satisfied.
Under the alternative hypothesis, we further assume
$\frac{\|\eta-\theta\|^{2}}{\big{(}\frac{1}{n\bar{N}}+\frac{1}{m\bar{M}}\big{)}\max\\{\|\eta\|,\,\|\theta\|\\}}\to\infty.$
(3.12)
Consider the DELVE test statistic $\psi=T/\sqrt{V}$. The following statements
are true. Under the null hypothesis, $\psi\to N(0,1)$ in distribution. Under
the alternative hypothesis, $\psi\to\infty$ in probability. Moreover for any
fixed $\alpha\in(0,1)$, the level-$\alpha$ DELVE test has an asymptotic level
of $\alpha$ and an asymptotic power of $1$.
Compared with the theorems for a general $K$, first, Theorem 3.6 allows the
two groups to be severely unbalanced and reveals that the detection boundary
depends on the harmonic mean of $n\bar{N}$ and $m\bar{M}$. Second, the
detection boundary is expressed using $\|\eta-\theta\|$, which is easier to
interpret.
### 3.5 The special case of $K=n$
The special case of $K=n$ is interesting for two reasons. First, the
application example of global testing in topic models corresponds to $K=n$.
Second, for any $K$, when $\Omega_{i}$’s within each group are assumed to be
the same (e.g., this is the case in closeness testing of discrete
distributions), it suffices to aggregate the counts in each group, i.e., let
$Y_{k}=\sum_{i\in S_{k}}X_{i}$ and operate on $Y_{1},\ldots,Y_{K}$ instead of
the original $X_{i}$’s; this reduces to the case of $K=n$.
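A minimal sketch of this reduction (our own illustration): aggregate the count vectors within each group, then proceed as if $K=n$.

```python
import numpy as np

def aggregate_counts(X, labels, K):
    """X: (n, p) count vectors; labels: group index in {0, ..., K-1}.
    Returns Y: (K, p) with Y_k = sum over {i in S_k} of X_i."""
    Y = np.zeros((K, X.shape[1]), dtype=X.dtype)
    np.add.at(Y, labels, X)   # unbuffered scatter-add of rows into groups
    return Y

X = np.array([[1, 2, 0], [0, 1, 3], [2, 2, 2], [1, 0, 1]])
labels = np.array([0, 0, 1, 1])
print(aggregate_counts(X, labels, K=2))  # [[1 3 3] [3 2 3]]
```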
When $K=n$, the null hypothesis has a simpler form:
$H_{0}:\quad\Omega_{i}=\mu,\qquad 1\leq i\leq n.$ (3.13)
Moreover, under the alternative hypothesis, the quantity $\omega_{n}^{2}$ in
(3.5) simplifies to
$\omega_{n}^{2}=\omega_{n}^{2}(\Omega_{1},\Omega_{2},\ldots,\Omega_{n})=\frac{1}{n\bar{N}\|\mu\|^{2}}\sum_{i=1}^{n}N_{i}\|\Omega_{i}-\mu\|^{2}.$
(3.14)
The DELVE test statistic also has a simplified form as in (2.11)-(2.12). We
can prove the same theoretical results under weaker conditions:
###### Condition 3.2.
We assume that the following statements are true: (a) For a constant
$c_{0}\in(0,1)$, $2\leq N_{i}\leq(1-c_{0})n\bar{N}$ and
$\|\Omega_{i}\|_{\infty}\leq 1-c_{0}$, $1\leq i\leq n$, and
(b)
$\max\big{\\{}\sum_{i}\frac{\|\Omega_{i}\|_{3}^{3}}{N_{i}},\,\sum_{i}\frac{\|\Omega_{i}\|^{2}}{N_{i}^{2}}\big{\\}}\big{/}(\sum_{i}\|\Omega_{i}\|^{2})^{2}=o(1)$,
and $(\sum_{i}\|\Omega_{i}\|_{3}^{3})/(n\|\mu\|^{2})=o(1)$.
When $K=n$, Condition (a) is equivalent to (3.1); and Condition (b) is weaker
than (3.4), as we have dropped the requirement
$\frac{\|\mu\|_{4}^{4}}{K\|\mu\|^{4}}=o(1)$. We obtain weaker conditions for
$K=n$ because the dominant terms in $T$ differ from those for $K<n$.
###### Theorem 3.7.
In Model (1.1), we test the null hypothesis (3.13). As $n\to\infty$, we assume
that Condition 3.2 is satisfied. Under the alternative, we further assume that
$\frac{n\bar{N}\|\mu\|^{2}\omega_{n}^{2}}{\sqrt{\sum_{i=1}^{n}\|\Omega_{i}\|^{2}}}\to\infty.$
(3.15)
Let $T$ and $V^{*}$ be the same as in (2.11)-(2.12). Consider the simplified
DELVE test statistic $\psi^{*}=T/\sqrt{V^{*}}$. The following statements are
true. Under the null hypothesis, $\psi^{*}\to N(0,1)$ in distribution. Under
the alternative hypothesis, $\psi^{*}\to\infty$ in probability. Moreover for
any fixed $\alpha\in(0,1)$, the level-$\alpha$ DELVE test has an asymptotic
level of $\alpha$ and an asymptotic power of $1$.
The detection boundary in (3.15) has a simpler form if
$\sum_{i}\|\Omega_{i}\|^{2}\asymp n\|\mu\|^{2}$. In this case, (3.15) is
equivalent to $\sqrt{n}\bar{N}\|\mu\|\omega_{n}^{2}\to\infty.$ Additionally,
if all entries of $\mu$ are at the same order, then $\|\mu\|\asymp
1/\sqrt{p}$, and (3.15) further reduces to
$\sqrt{n\bar{N}^{2}/p}\cdot\omega_{n}^{2}\to\infty$.
### 3.6 A discussion of the contiguity regime
Our power analysis in Section 3.2 concerns $\mathrm{SNR}_{n}\to\infty$, and
our lower bound in Section 3.3 concerns $\mathrm{SNR}_{n}\to 0$. We now study
the contiguity regime where $\mathrm{SNR}_{n}$ tends to a constant. For
illustration, we consider a special choice of parameters, which allows us to
obtain a simple expression of the testing risk.
Suppose $K=n$ and $N_{i}=N$ for all $1\leq i\leq n$. Consider the pair of
hypotheses:
$H_{0}:\;\;\Omega_{ij}=p^{-1},\qquad\mbox{v.s.}\qquad
H_{1}:\;\;\Omega_{ij}=p^{-1}(1+\beta_{n}\delta_{ij}),$ (3.16)
where $\\{\delta_{ij}\\}_{1\leq i\leq n,1\leq j\leq p}$ satisfy that
$|\delta_{ij}|=1$, $\sum_{j=1}^{p}\delta_{ij}=0$ and
$\sum_{i=1}^{n}\delta_{ij}=0$. Such $\delta_{ij}$ always exist: for example,
we can first partition the dictionary into two halves and then partition all
the documents into two halves; this divides
$\\{1,2,\ldots,p\\}\times\\{1,2,\ldots,n\\}$ into four subsets; we construct
$\delta_{ij}$’s freely on one subset and then specify the $\delta_{ij}$’s on
the other three subsets by symmetry. The $\mathrm{SNR}_{n}$ in (3.6) satisfies
that $\mathrm{SNR}_{n}\asymp(N\sqrt{n}/\sqrt{p})\beta_{n}^{2}$. We thereby set
$\beta_{n}^{2}=\frac{\sqrt{2p}}{N\sqrt{n}}\cdot a,\qquad\mbox{for a constant
}a>0.$ (3.17)
Since $K=n$ here, we consider the simplified DELVE test statistic $\psi^{*}$
as in Section 3.5.
###### Theorem 3.8.
Consider Model (1.1) with $N_{i}=N$. For a constant $a>0$, let the null and
alternative hypotheses be specified as in (3.16)-(3.17). As $n\to\infty$, if
$p=o(N^{2}n)$, then $\psi^{*}\to N(0,1)$ under $H_{0}$ and $\psi^{*}\to
N(a,1)$ under $H_{1}$.
Let $\Phi$ be the cumulative distribution function of the standard normal. By
Theorem 3.8, for any fixed constant $t\in(0,a)$, if we reject the null
hypothesis when $\psi^{*}>t$, then the sum of type I and type II errors
converges to $[1-\Phi(t)]+[1-\Phi(a-t)]$.
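For a concrete sense of the limiting risk (our own arithmetic, applying the display above): taking $a=3$ and the symmetric threshold $t=a/2=1.5$, which minimizes the limit over $t$, gives
$[1-\Phi(1.5)]+[1-\Phi(1.5)]=2\times 0.0668\approx 0.13,$
so in the contiguity regime the two hypotheses remain partially, but not perfectly, separable.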
## 4 Applications
As mentioned in Section 1, our testing problem includes global testing for
topic models, authorship attribution, and closeness testing for discrete
distributions as special examples. In this section, the DELVE test is applied
separately to these three problems.
### 4.1 Global testing for topic models
Topic modeling (Blei et al., 2003) is a popular tool in text mining. It aims
to learn a small number of “topics” from a large corpus. Given $n$ documents
written using a dictionary of $p$ words, let
$X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i})$ denote the word counts of
document $i$, where $N_{i}$ is the length of this document and
$\Omega_{i}\in\mathbb{R}^{p}$ contains the population word frequencies. In a
topic model, there exist $M$ topic vectors
$A_{1},A_{2},\ldots,A_{M}\in\mathbb{R}^{p}$, where each $A_{k}$ is a PMF. Let
$w_{i}\in\mathbb{R}^{M}$ be a nonnegative vector whose entries sum up to $1$,
where $w_{i}(k)$ is the “weight” document $i$ puts on topic $k$. It assumes
$\Omega_{i}=\sum_{k=1}^{M}w_{i}(k)A_{k},\qquad 1\leq i\leq n.$ (4.1)
Under (4.1), the matrix $\Omega=[\Omega_{1},\Omega_{2},\ldots,\Omega_{n}]$
admits a low-rank nonnegative factorization.
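A toy instance of (4.1), written as a sketch for intuition (the dimensions and seed are ours): stacking the $\Omega_{i}$'s as columns gives $\Omega=AW$, so its rank is at most $M$ and each column remains a PMF.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, M = 6, 5, 2
A = rng.dirichlet(np.ones(p), size=M).T   # (p, M): columns are topic PMFs A_k
W = rng.dirichlet(np.ones(M), size=n).T   # (M, n): columns are weights w_i
Omega = A @ W                             # (p, n): Omega_i = sum_k w_i(k) A_k
print(np.linalg.matrix_rank(Omega))       # at most M = 2
print(Omega.sum(axis=0))                  # each column sums to 1
```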
Before fitting a topic model, we would like to know whether the corpus indeed
involves multiple topics. This is the global testing problem: $H_{0}:M=1$ v.s.
$H_{1}:M>1$. When $M=1$, by writing $A_{1}=\mu$, the topic model reduces to
the null hypothesis in (3.13). We can apply the DELVE test by treating each
$X_{i}$ as a separate group (i.e., $K=n$).
###### Corollary 4.1.
Consider Model (1.1) and define a vector $\xi\in\mathbb{R}^{n}$ by
$\xi_{i}=\bar{N}^{-1}N_{i}$. Suppose that $\Omega=\mu{\bf 1}_{n}^{\prime}$
under the null hypothesis, with $\mu=n^{-1}\Omega\xi$, and that $\Omega$
satisfies (4.1) under the alternative hypothesis, with
$r:=\mathrm{rank}(\Omega)\geq 2$. Suppose $\bar{N}/(\min_{i}N_{i})=O(1)$.
Denote by $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}>0$ the singular values
of $\Omega[\mathrm{diag}(\xi)]^{1/2}$, arranged in descending order. We
further assume that under the alternative hypothesis,
$\bar{N}\cdot\frac{\sum_{k=2}^{r}\lambda_{k}^{2}}{\sqrt{\sum_{k=1}^{r}\lambda_{k}^{2}}}\to\infty.$
(4.2)
For any fixed $\alpha\in(0,1)$, the level-$\alpha$ DELVE test has an
asymptotic level $\alpha$ and an asymptotic power $1$.
The least-favorable configuration in the proof of Theorem 3.5 is in fact a
topic model that follows (4.1) with $M=2$. Transferring the argument yields
the following lower bound that confirms the optimality of DELVE for the global
testing of topic models.
###### Corollary 4.2.
Let ${\cal R}_{n,M}(\epsilon_{n},\delta_{n})$ be the collection of
$\\{(N_{i},\Omega_{i})\\}_{i=1}^{n}$ satisfying the following conditions: 1)
$\Omega$ follows the topic model (4.1) with $M$ topics; 2) Condition 3.2 holds
with $o(1)$ replaced by $\leq\epsilon_{n}$; 3)
$\bar{N}(\sum_{k=2}^{r}\lambda_{k}^{2})/(\sum_{k=1}^{r}\lambda_{k}^{2})^{1/2}\geq\delta_{n}$.
If $\epsilon_{n}\to 0$ and $\delta_{n}\to 0$, then
$\limsup_{n\to\infty}\inf_{\Psi\in\\{0,1\\}}\Bigl{\\{}\sup_{{\cal
R}_{n,1}(\epsilon_{n},0)}\mathbb{P}(\Psi=1)+\sup_{\cup_{M\geq 2}{\cal
R}_{n,M}(\delta_{n},\delta_{n})}\mathbb{P}(\Psi=0)\Bigr{\\}}=1.$
The detection boundary (4.2) can be simplified when $M=O(1)$. Following Ke and
Wang (2022), we define $\Sigma_{A}=A^{\prime}H^{-1}A$ and
$\Sigma_{W}=n^{-1}WW^{\prime}$, where $A=[A_{1},A_{2},\ldots,A_{M}]$,
$W=[w_{1},w_{2},\ldots,w_{n}]$ and $H=\mathrm{diag}(A{\bf 1}_{M})$. Ke and
Wang (2022) argued that it is reasonable to assume that eigenvalues of these
two matrices are at the constant order. If this is true, with some mild
additional regularity conditions, each $\lambda_{k}$ is at the order of
$\sqrt{n/p}$. Hence, (4.2) reduces to $\sqrt{n}\bar{N}/\sqrt{p}\to\infty.$ In
comparison, Ke and Wang (2022) showed that a necessary condition for any
estimator $\hat{A}=[\hat{A}_{1},\hat{A}_{2},\ldots,\hat{A}_{M}]$ to achieve
$\frac{1}{M}\sum_{k=1}^{M}\|\hat{A}_{k}-A_{k}\|_{1}=o(1)$ is
$\sqrt{n\bar{N}/p}\to\infty$. We conclude that consistent estimation of topic
vectors requires strictly stronger conditions than successful testing.
### 4.2 Authorship attribution
In authorship attribution, given a corpus from a known author, we want to test
whether a new document is from the same author. It is a special case of our
testing problem with $K=2$. We can directly apply the results in Section 3.4.
However, the setting in Section 3.4 has no sparsity. Kipnis (2022); Donoho and
Kipnis (2022) point out that the number of words with discriminating power is
often much smaller than $p$. To see how our test performs under sparsity, we
consider a sparse model. As in Section 3.4, let
$X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i}),\;1\leq i\leq
n,\quad\mbox{and}\quad G_{i}\sim\mathrm{Multinomial}(M_{i},\Gamma_{i}),\;1\leq
i\leq m.$ (4.3)
Let $\bar{N}$ and $\bar{M}$ be the average of $N_{i}$’s and $M_{i}$’s,
respectively. Write $\eta=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i}$ and
$\theta=\frac{1}{m\bar{M}}\sum_{i=1}^{m}M_{i}\Gamma_{i}$. We assume for some
$\beta_{n}>0$,
$\eta_{j}=\theta_{j},\;\;\mbox{for }j\notin
S,\qquad\mbox{and}\qquad\bigl{|}\sqrt{\eta_{j}}-\sqrt{\theta_{j}}\bigr{|}\geq\beta_{n},\;\;\mbox{for
}j\in S.$ (4.4)
###### Corollary 4.3.
Under the model (4.3)-(4.4), consider testing $H_{0}:S=\emptyset$ v.s.
$H_{1}:S\neq\emptyset$, where Condition 3.1 is satisfied. Let $\eta_{S}$ and
$\theta_{S}$ be the sub-vectors of $\eta$ and $\theta$ restricted to the
coordinates in $S$. Suppose that under the alternative hypothesis,
$\frac{\beta_{n}^{2}\cdot(\|\eta_{S}\|_{1}+\|\theta_{S}\|_{1})}{\big{(}\frac{1}{n\bar{N}}+\frac{1}{m\bar{M}}\big{)}\max\\{\|\eta\|,\,\|\theta\|\\}}\to\infty.$
(4.5)
As $\min\\{n\bar{N},m\bar{M}\\}\to\infty$, the level-$\alpha$ DELVE test has
an asymptotic level $\alpha$ and an asymptotic power $1$. Furthermore, if
$n\bar{N}\asymp m\bar{M}$ and $\min_{j\in S}(\eta_{j}+\theta_{j})\geq cp^{-1}$
for a constant $c>0$, then (4.5) reduces to
$n\bar{N}\beta_{n}^{2}|S|/\sqrt{p}\to\infty$.
Donoho and Kipnis (2022) studied a case where $N=M$, $n=m=1$, $p\to\infty$,
$|S|=p^{1-\vartheta},\qquad\mbox{and}\qquad\beta_{n}=c\cdot
N^{-1/2}\sqrt{\log(p)}.$ (4.6)
When $\vartheta>1/2$ (i.e., $|S|=o(\sqrt{p})$), they derived a phase diagram
for the aforementioned testing problem (under a slightly different setting
where the data distributions are Poisson instead of multinomial). They showed
that when $\vartheta>1/2$ and $c$ is a properly large constant, a Higher-
Criticism-based test has asymptotically full power. Donoho and Kipnis
(2022) did not study the case of $\vartheta\leq 1/2$. By Corollary 4.3, when
$\vartheta\leq 1/2$ (i.e., $|S|\geq C\sqrt{p}$), the DELVE test has
asymptotically full power.
###### Remark 2.
When $\vartheta>1/2$ in (4.6), the DELVE test is powerless. However, this
issue can be resolved by borrowing the idea of the maximum test or the Higher
Criticism test (Donoho and Jin, 2004) from classical multiple testing. For
example, recalling $T_{j}$ in (2.5), we can use $\max_{1\leq j\leq
p}\\{T_{j}/\sqrt{V_{j}}\\}$ as the test statistic, where $V_{j}$ is a proper
estimator of the variance of $T_{j}$. We leave a careful study of this idea to
future work.
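A sketch of this max-type variant (our own code; the per-coordinate variance estimator $V_{j}$ is left abstract here, as the remark does):

```python
import numpy as np

def max_type_statistic(T, V):
    """Max-type variant from Remark 2: max_j T_j / sqrt(V_j).
    T, V: length-p arrays of per-coordinate statistics T_j from (2.5)
    and (positive) variance estimates; intended for sparse alternatives."""
    return np.max(T / np.sqrt(V))
```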
### 4.3 Closeness testing between discrete distributions
Two-sample closeness testing is a subject of intensive study in discrete
distribution inference (Bhattacharya and Valiant, 2015; Chan et al., 2014;
Diakonikolas and Kane, 2016; Kim et al., 2022). It is a special case of our
problem with $K=2$ and $n_{1}=n_{2}=1$. We thereby apply both Theorem 3.6 and
Theorem 3.7.
###### Corollary 4.4.
Let $Y_{1}$ and $Y_{2}$ be two discrete variables taking values on the same
$p$ outcomes. Let $\Omega_{1}\in\mathbb{R}^{p}$ and
$\Omega_{2}\in\mathbb{R}^{p}$ be their corresponding PMFs. Suppose we have
$N_{1}$ samples of $Y_{1}$ and $N_{2}$ samples of $Y_{2}$. The data are
summarized in two multinomial vectors:
$X_{1}\sim\mathrm{Multinomial}(N_{1},\Omega_{1}),X_{2}\sim\mathrm{Multinomial}(N_{2},\Omega_{2}).$
We test $H_{0}:\Omega_{1}=\Omega_{2}.$ Write
$\mu=\frac{1}{N_{1}+N_{2}}(N_{1}\Omega_{1}+N_{2}\Omega_{2})$. Suppose
$\min\\{N_{1},N_{2}\\}\geq 2$,
$\max\\{\|\Omega_{1}\|_{\infty},\|\Omega_{2}\|_{\infty}\\}\leq 1-c_{0}$, for a
constant $c_{0}\in(0,1)$. Suppose
$\frac{1}{(\sum_{k=1}^{2}\|\Omega_{k}\|^{2})^{2}}\max\big{\\{}\sum_{k=1}^{2}\frac{\|\Omega_{k}\|_{3}^{3}}{N_{k}},\sum_{k=1}^{2}\frac{\|\Omega_{k}\|^{2}}{N_{k}^{2}}\big{\\}}=o(1)$,
and $\frac{1}{\|\mu\|^{2}}\sum_{k=1}^{2}\|\Omega_{k}\|_{3}^{3}=o(1)$. We
assume that under the alternative hypothesis,
$\frac{\|\Omega_{1}-\Omega_{2}\|^{2}}{\big{(}N_{1}^{-1}+N_{2}^{-1}\big{)}\max\\{\|\Omega_{1}\|,\,\|\Omega_{2}\|\\}}\to\infty.$
(4.7)
As $\min\\{N_{1},N_{2}\\}\to\infty$, the level-$\alpha$ DELVE test has level
$\alpha$ and power $1$, asymptotically.
We notice that (4.7) matches with the minimum $\ell^{2}$-separation condition
for two-sample closeness testing (Kim et al., 2022, Proposition 4.4).
Therefore, our test is an optimal $\ell^{2}$-tester. Although other optimal
$\ell^{2}$-testers have been proposed (Chan et al., 2014; Bhattacharya and
Valiant, 2015; Diakonikolas and Kane, 2016), they are not equipped with
tractable null distributions.
###### Remark 3.
We can modify the DELVE test to incorporate frequency-dependent weights. Given
any nonnegative vector $w=(w_{1},w_{2},\ldots,w_{p})^{\prime}$, define
$T(w):=\sum_{j=1}^{p}w_{j}T_{j}$ where $T_{j}$ is the same as in (2.5). These
weights $w_{j}$ serve to adjust the contributions of different words. For
example, consider $w_{j}=\bigl{(}\max\\{1/p,\;\hat{\mu}_{j}\\}\bigr{)}^{-1}$.
This kind of weights has been used in discrete distribution inference
(Balakrishnan and Wasserman, 2019; Chan et al., 2014) to turn an optimal
$\ell^{2}$ tester into an optimal $\ell^{1}$ tester. We can similarly study the
power of this modified test, except that we need an additional assumption
$n\bar{N}\gg p$ to guarantee that $\hat{\mu}_{j}$ is a sufficiently accurate
estimator of $\mu_{j}$.
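The weighting itself is a one-liner; the sketch below (our code) shows the numerator $T(w)$ with the example weights $w_{j}=(\max\\{1/p,\hat{\mu}_{j}\\})^{-1}$:

```python
import numpy as np

def weighted_T(T, mu_hat):
    """T: length-p array of the per-coordinate terms T_j from (2.5);
    mu_hat: pooled frequency estimates. Returns T(w) = sum_j w_j T_j with
    w_j = 1 / max(1/p, mu_hat_j), which damps high-frequency words."""
    p = len(T)
    w = 1.0 / np.maximum(1.0 / p, mu_hat)
    return np.sum(w * T)
```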
## 5 Simulations
The proposed DELVE test is computationally efficient and easy to implement. In
this section, we investigate its numerical performance in simulation studies.
Real data analysis will be carried out in Section 6.
Figure 1: Histograms of the DELVE statistic (top three panels) and the DELVE+
statistic (bottom three panels) in Experiments 1.1-1.3. In each plot, the blue
and orange histograms correspond to the null and alternative hypotheses,
respectively; and the green curve is the density of $N(0,1)$.
Experiment 1 (Asymptotic normality). Given
$(n,p,K,N_{\min},N_{\max},\alpha)$, we generate data as follows: first, we
divide $\\{1,\ldots,n\\}$ into $K$ equal-size groups. Next, we draw
$\Omega_{1}^{alt},\ldots,\Omega_{n}^{alt}$ i.i.d. from
$\text{Dirichlet}(p,\alpha\mathbf{1}_{p})$. Third, we draw
$N_{i}\stackrel{{\scriptstyle iid}}{{\sim}}\text{Uniform}[N_{\min},N_{\max}]$
and set $\Omega_{i}^{null}=\mu$, where
$\mu:=\frac{1}{n\bar{N}}\sum_{i}N_{i}\Omega_{i}^{alt}$. Last, we generate
$X_{1},\ldots,X_{n}$ using Model (1.1). We consider three sub-experiments. In
Experiment 1.1, $(n,p,K,N_{\min},N_{\max},\alpha)=(50,100,5,10,20,0.3)$. In
Experiment 1.2, $\alpha$ is changed to $1$, and the other parameters are the
same. When $\alpha=1$, $\Omega_{i}^{alt}$ are drawn from the uniform
distribution of the standard probability simplex; in comparison, $\alpha=0.3$
puts more mass near the boundary of the standard probability simplex. In
Experiment 1.3, we keep all parameters the same as in Experiment 1.1, except
that $(p,K)$ are changed to $(300,50)$. For each sub-experiment, we generate
2000 data sets under the null hypothesis and plot the histogram of the DELVE
test statistic $\psi$ (in blue); similarly, we generate 2000 data sets under
the alternative hypothesis and plot the histogram of $\psi$ (in orange). The
results are shown in the top three panels of Figure 1. In Section 2.2, we
introduced a variant of DELVE, called DELVE+, in which the variance estimator
$V$ is replaced by an adjusted one. DELVE+ has similar theoretical properties
as DELVE but is more suitable for real data. We plot the histograms of the
DELVE+ test statistics on the bottom three panels of Figure 1.
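For reproducibility, here is a sketch of this data-generating process as we read it (treating Uniform$[N_{\min},N_{\max}]$ as uniform over integers; the function and parameter names are ours):

```python
import numpy as np

def generate_experiment1(n, p, K, N_min, N_max, alpha, null=True, seed=0):
    """Draw (X, labels) following the Experiment 1 recipe: Dirichlet
    alternative PMFs, uniform document lengths, and the pooled mean mu
    as the common null PMF."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(K), n // K)               # K equal-size groups
    Omega_alt = rng.dirichlet(alpha * np.ones(p), size=n)  # i.i.d. Dirichlet rows
    N = rng.integers(N_min, N_max + 1, size=n)             # document lengths
    mu = (N[:, None] * Omega_alt).sum(axis=0) / N.sum()    # pooled mean PMF
    Omega = np.tile(mu, (n, 1)) if null else Omega_alt
    X = np.vstack([rng.multinomial(N[i], Omega[i]) for i in range(n)])
    return X, labels

X, labels = generate_experiment1(50, 100, 5, 10, 20, 0.3, null=True)
```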
We have several observations. First, in all sub-experiments, when the null hypothesis
holds, the histograms of both DELVE and DELVE+ fit the standard normal density
reasonably well. This supports our theory in Section 3.1. Second, when $(p,K)$
increase, the finite sample effect becomes slightly more pronounced (c.f.,
Experiment 1.3 versus Experiment 1.1). Third, the tests have power in
differentiating two hypotheses. As $\alpha$ decreases or $K$ increases, the
power increases, and the histograms corresponding to two hypotheses become
further apart. Last, under the alternative hypothesis, DELVE+ has smaller mean
and variance than DELVE. By Lemma 2.2, these two have similar asymptotic
behaviors. The simulation results suggest that they have noticeable finite-
sample differences.
Figure 2: Power diagrams (based on $500$ repetitions) at level $5\%$. The
$x$-axis plots the SNR
$\lambda(\omega_{n})=K^{-1/2}n\bar{N}\|\mu\|\cdot\omega_{n}$.
Experiment 2 (Power curve). Similarly as before, we divide
$\\{1,2,\ldots,n\\}$ into $K$ equal-size groups and draw
$N_{i}\sim\text{Uniform}[N_{\min},N_{\max}]$. In this experiment, the PMF’s
$\Omega_{i}$ are generated in a different way. Under the null hypothesis, we
generate $\mu\sim\text{Dirichlet}(p/2,\alpha\mathbf{1}_{p/2})$ and set
$\Omega_{i}^{null}=\tilde{\mu}$, where $\tilde{\mu}_{j}=\frac{1}{2}\mu_{j}$
for $1\leq j\leq p/2$ and $\tilde{\mu}_{j}=\frac{1}{2}\mu_{p+1-j}$ for
$p/2+1\leq j\leq p$. Under the alternative hypothesis, we draw
$z_{1},\ldots,z_{K}$, $b_{1},\ldots,b_{p/2}\stackrel{{\scriptstyle
iid}}{{\sim}}\text{Rademacher}(1/2)$ and then let
$\Omega_{ij}^{alt}=\tilde{\mu}_{j}(1+\tau_{n}z_{k}b_{j})$, for all $i$ in group $k$ and
$1\leq j\leq p/2$, and $\Omega_{ij}^{alt}=\tilde{\mu}_{j}(1-\tau_{n}z_{k}b_{p+1-j})$ for
$p/2+1\leq j\leq p$, so that each $\Omega_{i}^{alt}$ remains a valid PMF. By applying our theory in Section 3.2 together with some
calculations, we obtain that the signal-to-noise ratio is captured by
$\lambda:=K^{-1/2}n\bar{N}\|\mu\|\tau_{n}.$ We consider three sub-experiments,
Experiment 2.1-2.3, in which the parameter values of
$(n,p,K,N_{\min},N_{\max},\alpha)$ are the same as in Experiments 1.1-1.3. For
each sub-experiment, we consider a grid of 10 equally-spaced values of
$\lambda$. When $\lambda=0$, it corresponds to the null hypothesis; when
$\lambda>0$, it corresponds to the alternative hypothesis. For each $\lambda$,
we generate $500$ data sets and compute the fraction of rejections of the
level-$5\%$ DELVE test. This gives a power curve for the level-$5\%$ DELVE
test, in which the first point corresponding to $\lambda=0$ is the actual
level of the test. The results are shown in the top three panels of Figure
2. We repeat the same experiments for the DELVE+ test, whose results are shown in
the bottom three panels of Figure 2. In all three experiments, the actual
level of our proposed tests is $\leq 5\%$, suggesting that our tests perform
well at controlling the type-I error. As $\lambda$ increases, the power
gradually increases to $1$, suggesting that $\lambda$ is a good metric of the
signal-to-noise ratio. This supports our theory in Section 3.2.
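The power-curve computation is a simple Monte Carlo loop; a sketch, assuming `simulate_psi(lam)` draws one dataset at signal level $\lambda$ and returns the DELVE statistic:

```python
import numpy as np
from scipy.stats import norm

def power_curve(simulate_psi, lambdas, n_reps=500, level=0.05):
    """For each lambda, estimate power as the rejection fraction of the
    one-sided level-5% test; lambda = 0 recovers the empirical level."""
    crit = norm.ppf(1 - level)   # about 1.645 at the 5% level
    return [np.mean([simulate_psi(lam) > crit for _ in range(n_reps)])
            for lam in lambdas]
```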
Figure 3: (Left) Histogram of nonzero DELVE $Z$-scores for all authors in the
dataset. The mean is $4.52$ and the standard deviation is $2.94$. (Right)
Scatter plot of author DELVE scores versus the natural log of the number of
papers with five statisticians identified.
## 6 Real Data Analysis
We apply our proposed methods to two real corpora: one consists of abstracts
of research papers in four statistics journals, and the other consists of
movie reviews on Amazon. For the analysis of real data, we use DELVE+, which
modifies the variance estimator in DELVE and reduces the occurrence of
extremely small $p$-values.
### 6.1 Abstracts of statisticians
We use the data set from Ji and Jin (2016). It contains the bibtex information
of all published papers in four top-tier statistics journals, Annals of
Statistics, Biometrika, Journal of the American Statistical Association, and
Journal of the Royal Statistical Society - Series B, from 2003 to the first
half of 2012. We pre-process the abstracts of papers by tokenization and
stemming and turn each abstract into a word count vector.
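One possible version of this pipeline is sketched below (our choice of tokenizer and stemmer; the paper does not specify its tools):

```python
import re
from collections import Counter
from nltk.stem import PorterStemmer  # assumes nltk is available

stemmer = PorterStemmer()

def abstract_to_counts(text, vocab):
    """Tokenize, stem, and count an abstract over a fixed vocabulary of
    stems, producing one word-count vector X_i."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(stemmer.stem(t) for t in tokens)
    return [counts.get(w, 0) for w in vocab]
```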
We conduct two experiments. In the first one, we fix an author and treat the
collection of his/her co-authored abstracts as a corpus. We apply DELVE+ with
$K=n$, where $n$ is the total number of abstracts written by this author. The
$Z$-score measures the “diversity” or “variability” of this author’s
abstracts. An author with a high $Z$-score possesses either diverse research
interests or a variable writing style. A number of authors have only 1–2
papers in this data set, and the variance estimator $V$ is often negative; we
remove all those authors. In Figure 3 (left panel), we plot the histogram of
$Z$-scores of all retained authors. The mean is $4.52$ and the standard
deviation is $2.94$. In Figure 3 (right panel), we show the scatter plot of
$Z$-score versus the logarithm of the number of abstracts written by each author,
and a few prolific authors who have many papers and a large $Z$-score are
labeled. For example, Peter Hall has the most papers in this dataset (82
papers in total). Hall’s $Z$-score is larger than $20$, implying a huge
diversity in his abstracts. There is also a positive association between
$Z$-score and total papers. It suggests that senior authors have more
diversity in their abstracts, which is as expected.
Figure 4: Pairwise $Z$-score plots for Peter Hall (left) and Jianqing Fan (right). In the cell $(x,y)$, we compare the corpus of an author’s abstracts from time $x$ with the corpus of that author’s abstracts from time $y$. The heatmap shows the value of DELVE+ with $K=2$ for each cell.

Year | Title | Journal
---|---|---
2011 | Nonparametric independence screening in sparse ultra-high-dimensional additive models | JASA
2011 | Penalized composite quasi-likelihood for ultrahigh dimensional variable selection | JRSS-B
2011 | Multiple testing via ${\rm FDR}_{L}$ for large-scale imaging data | Ann. Stat.
2012 | Vast volatility matrix estimation using high-frequency data for portfolio selection | JASA
2012 | A road to classification in high dimensional space: the regularized optimal affine discriminant | JRSS-B
2012 | Variance estimation using refitted cross-validation in ultrahigh dimensional regression | JRSS-B

Year | Title | Journal
---|---|---
2004 | Low order approximations in deconvolution and regression with errors in variables | JRSS-B
2004 | Nonparametric inference about service time distribution from indirect measurements | JRSS-B
2004 | Cross-validation and the estimation of conditional probability densities | JASA
2004 | Nonparametric confidence intervals for receiver operating characteristic curves | Biometrika
2004 | Bump hunting with non-Gaussian kernels | Ann. Stat.
2004 | Attributing a probability to the shape of a probability density | Ann. Stat.

Figure 5: (Left) Jianqing Fan’s papers in the dataset of Ji and Jin (2016)
from 2011 to 2012. (Right) Peter Hall’s papers in the dataset of Ji and Jin
(2016) from 2004.
In the second experiment, we divide the abstracts of each author into groups
by publication year. We divide Peter Hall’s abstracts into 9 groups, and each
group corresponds to one year. We divide Jianqing Fan’s abstracts into 6
groups, with unequal window sizes to make all groups have roughly equal
numbers of abstracts. Our test can be used to detect differences between all
groups, but to see more informative results, we do a pairwise comparison: for
each pair of groups, we apply DELVE+ with $K=2$. This yields a pairwise plot
of $Z$-scores. The plot reveals the temporal patterns of this author in
abstract writing. Figure 4 shows the results for Peter Hall and Jianqing Fan.
There are interesting temporal patterns. For Jianqing Fan (right panel of
Figure 4), the group consisting of his 2011-2012 abstracts has comparably
large $Z$-scores in the pairwise comparison with other groups. To interpret
this, we gathered the titles and abstracts of all his papers in the dataset
and compared the ones before/after 2011. He published six papers in these
journals during 2011-2012, whose titles are listed on the left of Figure 5. We
see that his papers in this period had a strong emphasis on screening and
variable selection: four out of the six papers mention this subject in their
titles and/or abstracts. This shows a departure from his previously published
topics such as covariance estimation (a focus from 2007–2009) and
semiparametric estimation (a focus before 2010). Though Jianqing Fan had
previously published papers on variable selection and screening in these
journals, he had never published so many in such a short time period. For
Peter Hall (left panel of Figure 4), the group of 2004 abstracts has
comparably large $Z$-scores in the pairwise comparison with other groups. We
examined the titles and abstracts of his 6 papers published in 2004 in this
data set. All of his 2004 papers, except the first one, mention bandwidth
selection or smoothing parameters, and in the last 4 papers, bandwidth
selection plays a central role. For instance, Bump hunting with non-Gaussian
kernels (Ann. Stat., 2004) studies the relationship between the number of
modes of a kernel density estimator and its bandwidth parameter. Though Peter
Hall’s 2004 papers concern many nonparametric statistics topics, we find that
bandwidth selection is a theme underlying his research in these journals in
2004.
### 6.2 Amazon movie reviews
Rank | Title | $Z$-Score | Total reviews
---|---|---|---
1 | Prometheus | 34.44 | 813
2 | Expelled: No Intelligence Allowed | 34.17 | 830
3 | V for Vendetta | 32.24 | 815
4 | Sin City | 31.72 | 828
5 | No Country for Old Men | 30.57 | 819
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
16 | John Adams | 20.78 | 857
17 | Cars | 19.98 | 902
18 | Food, Inc. | 17.81 | 876
19 | Jeff Dunham: Arguing with Myself | 4.96 | 860
20 | Jeff Dunham: Spark of Insanity | 4.46 | 877
Figure 6: (Left) Histogram of $Z$-scores for the 500 most-reviewed movies. The
mean is $19.97$ and the standard deviation is $5.07$. (Right) $Z$-scores for
the top 20 most reviewed movies.
Figure 7: Pairwise $Z$-scores for 3 movies. In each cell, we use DELVE+ to
compare reviews associated with a pair of star ratings. For each movie, the
title lists the number of reviews of each rating from 1–5.
We analyze Amazon reviews from the dataset Maurya (2018) that consists of
1,924,471 reviews of 143,007 visual media products (i.e., DVDs, Blu-rays, or
streams). We examine products with the largest number of reviews. Each
product’s review corpus is cleaned and stemming is used to group together
words with the same root. We obtain word counts for each review and a term-
document matrix of a product’s review corpus. In the first experiment, we fix
a movie and apply DELVE+ with $K=n$ to the corpus consisting of all reviews of
this movie. In Figure 6 (left panel), we plot the histogram of $Z$-scores for
the top 500 most reviewed movies. The mean is $19.97$ and the standard
deviation is $5.07$. Compared with the histogram of $Z$-scores for statistics
paper abstracts, there is much larger diversity in movie reviews. In Figure 6
(right panel), we list the 5 movies with the highest $Z$-scores and lowest
$Z$-scores out of the 20 most reviewed movies. Each movie has more than 800
reviews, but some have surprisingly low $Z$-scores. The works by comedian Jeff
Dunham have the lowest $Z$-scores, suggesting strong homogeneity among the
reviews. The 2012 horror film Prometheus has the highest degree of review
diversity among the 20 most reviewed movies. In the second experiment, we
divide the reviews of each movie into 5 groups by star rating. We compare each
pair of groups using DELVE+ with $K=2$, resulting in a pairwise $Z$-score
plot. In Figure 7, we plot this for 3 popular movies. We see a variety of
polarization patterns among the scores. In Harry Potter and the Deathly
Hallows Part I, DELVE+ signifies that the reviews with ratings in the range
2–4 stars are all similar. We see a smooth gradation in how the 1-star reviews
differ from those from 2–4 stars, and similarly for 5-star reviews versus
those from 2–4 stars. Twilight Saga: Eclipse shows three clusters: 1–2 stars,
3–4 stars, and 5 stars, while Night of the Living Dead shows two clusters: 1–2
stars and 3–5 stars.
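The pairwise plots in Figures 4 and 7 come from the same simple loop; a sketch, where `delve_plus_K2(Xa, Xb)` is assumed to return the DELVE+ $Z$-score for the two-group comparison:

```python
import numpy as np

def pairwise_z_matrix(groups, delve_plus_K2):
    """groups: list of (n_g, p) count arrays (e.g., reviews split by star
    rating, or abstracts split by year). Returns the symmetric matrix of
    pairwise K = 2 Z-scores used in the heatmaps."""
    g = len(groups)
    Z = np.full((g, g), np.nan)
    for a in range(g):
        for b in range(a + 1, g):
            Z[a, b] = Z[b, a] = delve_plus_K2(groups[a], groups[b])
    return Z
```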
## 7 Discussions
We examine testing for the equality of group-wise mean PMFs across $K$ groups of
high-dimensional multinomial distributions. The proposed DELVE statistic has a
parameter-free limiting null distribution that allows for computation of $Z$-scores
and $p$-values on real data. DELVE achieves the optimal detection boundary over the
whole range of parameters $(n,p,K,\bar{N})$, including the high-dimensional case
$p\to\infty$, which is very relevant to applications in text mining.
This work leads to interesting questions for future study. So far the focus is
on testing, but one can also consider inference for $\rho^{2}$ from (2.2),
which measures the heterogeneity among the group-wise means. Consistent
variance estimation under the alternative uses a similar strategy, though we
omit the calculations in this paper. Establishing asymptotic normality of
DELVE under the alternative would then lead to asymptotic confidence intervals
for our heterogeneity metric $\rho^{2}$. Based on the plots in Section 5, it
is possible that stronger regularity conditions are needed to obtain a pivotal
distribution under the alternative. As in the two-sample multinomial testing
problems described in Kipnis and Donoho (2021); Kipnis (2022), such as authorship
attribution, we may also consider an alternative where all the group means are
the same except for a small set of “giveaway words”. It is interesting to
develop a procedure for identifying these useful words. As discussed in
Section 4.2, we may modify DELVE by using a version based on the maximum test
or higher criticism. Another extension is to go beyond ‘bag-of-words’ style
analysis and use different types of counts besides raw word frequencies. One
option is to apply a suitably modified DELVE to the counts of multi-grams in
the corpus and another is to combine words with similar meanings into a
‘superword’ and use superword counts as the basis for DELVE. To do this, we
can combine words that are close together in some word embedding. We leave
these interesting tasks for future work.
Acknowledgments The research of T. Tony Cai was supported in part by NSF Grant
DMS-2015259 and NIH grant R01-GM129781. The research of Zheng Tracy Ke was
supported in part by NSF CAREER Grant DMS-1943902.
Notational conventions for the appendix: We write $A\lesssim B$ (respectively,
$A\gtrsim B$) if there exists an absolute constant $C>0$ such that $A\leq
C\cdot B$ (respectively $A\geq C\cdot B$). If both $A\lesssim B$ and
$B\lesssim A$, we write $A\asymp B$. The implicit constant $C$ may vary from
line to line. For sequences $a_{t},b_{t}$ indexed by an integer
$t\in\mathbb{N}$, we write $a_{t}\ll b_{t}$ if $b_{t}/a_{t}\to\infty$ as
$t\to\infty$, and we write $a_{t}\gg b_{t}$ if $a_{t}/b_{t}\to\infty$ as
$t\to\infty$. We may also write $a_{t}=o(b_{t})$ to denote $a_{t}\ll b_{t}$.
In particular, we write $a_{t}=(1+o(1))b_{t}$ if $a_{t}/b_{t}\to 1$ as
$t\to\infty$.
## Appendix A Properties of $T$ and $V$
We recall that
$X_{i}\sim\mathrm{Multinomial}(N_{i},\Omega_{i}),\qquad 1\leq i\leq n.$ (A.1)
For each $1\leq k\leq K$, define
$\mu_{k}=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{i}\;\in\;\mathbb{R}^{p},\qquad\Sigma_{k}=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{i}\Omega_{i}^{\prime}\;\in\;\mathbb{R}^{p\times p}.$ (A.2)
Moreover, let
$\mu=\frac{1}{n\bar{N}}\sum_{k=1}^{K}n_{k}\bar{N}_{k}\mu_{k}=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i}\,,\quad\Sigma=\frac{1}{n\bar{N}}\sum_{k=1}^{K}n_{k}\bar{N}_{k}\Sigma_{k}=\frac{1}{n\bar{N}}\sum_{i=1}^{n}N_{i}\Omega_{i}\Omega_{i}^{\prime}.$
(A.3)
The DELVE test statistic is $\psi=T/\sqrt{V}$, where $T$ is as in (2.5) and
$V$ is as in (2.7). As a preparation for the main proofs, in this section, we
study $T$ and $V$ separately.
### A.1 The decomposition of $T$
It is well-known that a multinomial with the number of trials equal to $N$ can
be equivalently written as the sum of $N$ independent multinomials each with
the number of trials equal to $1$. This inspires us to introduce a set of
independent, mean-zero random vectors:
$\\{Z_{ir}\\}_{1\leq i\leq n,1\leq r\leq N_{i}},\qquad\mbox{with
}Z_{ir}=B_{ir}-\mathbb{E}B_{ir},\;\;\mbox{and}\;\;B_{ir}\sim\mathrm{Multinomial}(1,\Omega_{i}).$
(A.4)
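This equivalence is easy to confirm by simulation; a sketch (our own sanity check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
N, Omega = 1000, np.array([0.2, 0.3, 0.5])

X = rng.multinomial(N, Omega)                 # one Multinomial(N, Omega) draw
B = rng.multinomial(1, Omega, size=N).sum(0)  # sum of N Multinomial(1, Omega) draws
print(X, B)  # equal in distribution (not in realized value); both sum to N
```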
We use them to get a decomposition of $T$ into mutually uncorrelated terms:
###### Lemma A.1.
Let $\\{Z_{ir}\\}_{1\leq i\leq n,1\leq r\leq N_{i}}$ be as in (A.4). For each
$Z_{ir}\in\mathbb{R}^{p}$, let $\\{Z_{ijr}\\}_{1\leq j\leq p}$ denote its $p$
coordinates. Recall that
$\rho^{2}=\sum_{k=1}^{K}n_{k}\bar{N}_{k}\|\mu_{k}-\mu\|^{2}$. For $1\leq j\leq
p$, define
$\displaystyle U_{1j}$ $\displaystyle=$ $\displaystyle
2\sum_{k=1}^{K}\sum_{i\in S_{k}}\sum_{r=1}^{N_{i}}(\mu_{kj}-\mu_{j})Z_{ijr},$
$\displaystyle U_{2j}$ $\displaystyle=$ $\displaystyle\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{1\leq r\neq s\leq
N_{i}}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}\frac{N_{i}}{N_{i}-1}Z_{ijr}Z_{ijs},$
$\displaystyle U_{3j}$ $\displaystyle=$
$\displaystyle-\frac{1}{n\bar{N}}\sum_{1\leq k\neq\ell\leq K}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}Z_{ijr}Z_{mjs},$
$\displaystyle U_{4j}$ $\displaystyle=$
$\displaystyle\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in S_{k}\\\
i\neq
m\end{subarray}}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}Z_{ijr}Z_{mjs}.$
Then, $T=\rho^{2}+\sum_{\kappa=1}^{4}{\bf 1}_{p}^{\prime}U_{\kappa}$.
Moreover, $\mathbb{E}[U_{\kappa}]={\bf 0}_{p}$ and
$\mathbb{E}[U_{\kappa}U_{\zeta}^{\prime}]={\bf 0}_{p\times p}$ for
$1\leq\kappa\neq\zeta\leq 4$.
### A.2 The variance of $T$
By Lemma A.1, the four terms $\\{{\bf
1}_{p}^{\prime}U_{\kappa}\\}_{1\leq\kappa\leq 4}$ are uncorrelated with each
other. Therefore,
$\mathrm{Var}(T)=\mathrm{Var}({\bf 1}_{p}^{\prime}U_{1})+\mathrm{Var}({\bf
1}_{p}^{\prime}U_{2})+\mathrm{Var}({\bf
1}_{p}^{\prime}U_{3})+\mathrm{Var}({\bf 1}_{p}^{\prime}U_{4}).$
It suffices to study the variance of each of these four terms.
###### Lemma A.2.
Let $U_{1}$ be the same as in Lemma A.1. Define
$\displaystyle\Theta_{n1}$
$\displaystyle=4\sum_{k=1}^{K}n_{k}\bar{N}_{k}\bigl{\|}\mathrm{diag}(\mu_{k})^{1/2}(\mu_{k}-\mu)\bigr{\|}^{2}$
(A.5) $\displaystyle L_{n}$
$\displaystyle=4\sum_{k=1}^{K}n_{k}\bar{N}_{k}\bigl{\|}\Sigma_{k}^{1/2}(\mu_{k}-\mu)\bigr{\|}^{2}$
(A.6)
Then $\mathrm{Var}({\bf 1}_{p}^{\prime}U_{1})=\Theta_{n1}-L_{n}$. Furthermore,
if $\max_{1\leq k\leq K}\|\mu_{k}\|_{\infty}=o(1)$, then $\mathrm{Var}({\bf
1}_{p}^{\prime}U_{1})=o(\rho^{2})$.
###### Lemma A.3.
Let $U_{2}$ be the same as in Lemma A.1. Define
$\displaystyle\Theta_{n2}$
$\displaystyle=2\sum_{k=1}^{K}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\sum_{i\in
S_{k}}\frac{N_{i}^{3}}{N_{i}-1}\|\Omega_{i}\|^{2}$ (A.7) $\displaystyle A_{n}$
$\displaystyle=2\sum_{k=1}^{K}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\sum_{i\in
S_{k}}\frac{N_{i}^{3}}{N_{i}-1}\|\Omega_{i}\|_{3}^{3}$ (A.8)
Then
$\Theta_{n2}-A_{n}\leq\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})\leq\Theta_{n2}.$
Furthermore, if
$\displaystyle\max_{1\leq k\leq K}\big{\\{}\frac{\sum_{i\in
S_{k}}N^{2}_{i}\|\Omega_{i}\|_{3}^{3}}{\sum_{i\in
S_{k}}N_{i}^{2}\|\Omega_{i}\|^{2}}\bigr{\\}}=o(1),$ (A.9)
then $\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})=[1+o(1)]\cdot\Theta_{n2}$.
###### Lemma A.4.
Let $U_{3}$ be the same as in Lemma A.1. Define
$\displaystyle\Theta_{n3}$
$\displaystyle=\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}\sum_{j}N_{i}N_{m}\Omega_{ij}\Omega_{mj}$ (A.10)
$\displaystyle B_{n}$
$\displaystyle=2\sum_{k\neq\ell}\frac{n_{k}n_{\ell}\bar{N}_{k}\bar{N}_{\ell}}{n^{2}\bar{N}^{2}}{\bf
1}_{p}^{\prime}(\Sigma_{k}\circ\Sigma_{\ell}){\bf 1}_{p}$ (A.11)
Then
$\Theta_{n3}-B_{n}\leq\mathrm{Var}({\bf
1}_{p}^{\prime}U_{3})\leq\Theta_{n3}+B_{n}.$
###### Lemma A.5.
Let $U_{4}$ be the same as in Lemma A.1. Define
$\displaystyle\Theta_{n4}$
$\displaystyle=2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k}\\\ i\neq
m\end{subarray}}\sum_{j}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}N_{i}N_{m}\Omega_{ij}\Omega_{mj}.$
(A.12) $\displaystyle E_{n}$
$\displaystyle=2\sum_{k}\sum_{\begin{subarray}{c}i\in S_{k},m\in S_{k},\\\
i\neq m\end{subarray}}\sum_{1\leq j,j^{\prime}\leq
p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}N_{i}N_{m}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$
(A.13)
Then
$\Theta_{n4}-E_{n}\leq\mathrm{Var}({\bf
1}_{p}^{\prime}U_{4})\leq\Theta_{n4}+E_{n}$
.
Using Lemmas A.2-A.5, we derive regularity conditions such that the first term
in $\mathrm{Var}({\bf 1}_{p}^{\prime}U_{\kappa})$ is the dominating term.
Observe that $\Theta_{n}=\Theta_{n1}+\Theta_{n2}+\Theta_{n3}+\Theta_{n4}$,
where the quantity $\Theta_{n}$ is defined in (2.6). The following
intermediate result is useful.
###### Lemma A.6.
Suppose that (3.1) holds. Then
$\displaystyle\Theta_{n2}+\Theta_{n3}+\Theta_{n4}\asymp\sum_{k}\|\mu_{k}\|^{2}.$
(A.14)
Moreover, under the null hypothesis, $\Theta_{n}\asymp K\|\mu\|^{2}$.
The next result is useful in proving that our variance estimator $V$ is
asymptotically unbiased.
###### Lemma A.7.
Suppose that (3.1) holds, and recall the definition of $\Theta_{n}$ in (2.6).
Define
$\displaystyle\beta_{n}=\frac{\max\bigg{\\{}\sum_{k}\sum_{i\in
S_{k}}\frac{N^{2}_{i}}{n_{k}^{2}\bar{N}_{k}^{2}}\|\Omega_{i}\|_{3}^{3}\,,\,\,\sum_{k}\|\Sigma_{k}\|_{F}^{2}\bigg{\\}}}{K\|\mu\|^{2}}.$
(A.15)
If $\beta_{n}=o(1)$, then under the null hypothesis,
$\mathrm{Var}(T)=[1+o(1)]\cdot\Theta_{n}$.
We also study the case of $K=2$ more explicitly. In the lemmas below we use
the notation from Section 3.4. First we have an intermediate result analogous
to Lemma A.6 that holds under weaker conditions.
###### Lemma A.8.
Consider $K=2$ and suppose that $\min_{i}N_{i}\geq 2$ and $\min_{i}M_{i}\geq 2$. Then
$\displaystyle\Theta_{n2}+\Theta_{n3}+\Theta_{n4}\asymp\bigg{\|}\frac{m\bar{M}}{n\bar{N}+m\bar{M}}\eta+\frac{n\bar{N}}{n\bar{N}+m\bar{M}}\theta\bigg{\|}^{2}.$
Moreover, under the null hypothesis, $\Theta_{n}\asymp\|\mu\|^{2}$.
The next result is a version of Lemma A.7 for the case $K=2$ that holds under
weaker conditions.
###### Lemma A.9.
Suppose that $\min_{i}N_{i}\geq 2$ and $\min_{i}M_{i}\geq 2$. Define
$\displaystyle\beta_{n}^{(2)}=\frac{\max\bigg{\\{}\sum_{i}\frac{N_{i}^{2}}{n^{2}\bar{N}^{2}}\|\Omega_{i}\|_{3}^{3},\,\,\sum_{i}\frac{M_{i}^{2}}{m^{2}\bar{M}^{2}}\|\Gamma_{i}\|_{3}^{3}\,,\,\,\|\Sigma_{1}\|_{F}^{2}+\|\Sigma_{2}\|_{F}^{2}\bigg{\\}}}{\|\mu\|^{2}}.$
(A.16)
If $\beta_{n}^{(2)}=o(1)$, then under the null hypothesis,
$\mathrm{Var}(T)=[1+o(1)]\cdot\Theta_{n}$.
### A.3 The decomposition of $V$
###### Lemma A.10.
Let $\\{Z_{ir}\\}_{1\leq i\leq n,1\leq r\leq N_{i}}$ be as in (A.4). Recall
that
$\displaystyle V$ $\displaystyle=2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\biggl{[}\frac{N_{i}X_{ij}^{2}}{N_{i}-1}-\frac{N_{i}X_{ij}(N_{i}-X_{ij})}{(N_{i}-1)^{2}}\biggr{]}$
(A.17) $\displaystyle+\frac{2}{n^{2}\bar{N}^{2}}\sum_{1\leq k\neq\ell\leq
K}\sum_{i\in S_{k}}\sum_{m\in
S_{\ell}}\sum_{j=1}^{p}X_{ij}X_{mj}+2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in
S_{k},m\in S_{k},\\\ i\neq
m\end{subarray}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}X_{ij}X_{mj}.$
Define
$\displaystyle\theta_{i}$
$\displaystyle=\big{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\big{)}^{2}\frac{N_{i}^{3}}{N_{i}-1}\quad\text{for
}i\in S_{k}\,\,,\quad\text{and let}\,\,$ $\displaystyle\alpha_{im}$
$\displaystyle=\begin{cases}\frac{2}{n^{2}\bar{N}^{2}}&\quad\text{ if }i\in
S_{k},m\in S_{\ell},k\neq\ell\\\
2\big{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\big{)}^{2}&\quad\text{ if
}i,m\in S_{k}\end{cases}$
If we let
$\displaystyle A_{1}$
$\displaystyle=\sum_{i}\sum_{r=1}^{N_{i}}\sum_{j}\big{[}\frac{4\theta_{i}\Omega_{ij}}{N_{i}}+\sum_{m\in[n]\backslash\\{i\\}}2\alpha_{im}N_{m}\Omega_{mj}\big{]}Z_{ijr},$
(A.18) $\displaystyle A_{2}$ $\displaystyle=\sum_{i}\sum_{r\neq
s\in[N_{i}]}\frac{2\theta_{i}}{N_{i}(N_{i}-1)}\big{(}\sum_{j}Z_{ijr}Z_{ijs}\big{)}$
(A.19) $\displaystyle A_{3}$ $\displaystyle=\sum_{i\neq
m}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}\alpha_{im}\big{(}\sum_{j}Z_{ijr}Z_{mjs}\big{)},$
(A.20)
then these terms are mean zero, are mutually uncorrelated, and satisfy
$\displaystyle V=A_{1}+A_{2}+A_{3}+\Theta_{n2}+\Theta_{n3}+\Theta_{n4}.$
(A.21)
### A.4 Properties of $V$
First we control the variance of $V$.
###### Lemma A.11.
Let $A_{1},A_{2},$ and $A_{3}$ be defined as in Lemma A.10. Then
$\displaystyle\mathrm{Var}(A_{1})$
$\displaystyle\lesssim\frac{1}{n\bar{N}}\|\mu\|_{3}^{3}+\sum_{k}\frac{\|\mu_{k}\|_{3}^{3}}{n_{k}\bar{N}_{k}}\lesssim\sum_{k}\frac{\|\mu_{k}\|_{3}^{3}}{n_{k}\bar{N}_{k}}$
$\displaystyle\mathrm{Var}(A_{2})$ $\displaystyle\lesssim\sum_{k}\sum_{i\in
S_{k}}\frac{N_{i}^{2}\|\Omega_{i}\|_{2}^{2}}{n_{k}^{4}\bar{N}_{k}^{4}}\lesssim\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}$
$\displaystyle\mathrm{Var}(A_{3})$
$\displaystyle\lesssim\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}+\frac{1}{n^{2}\bar{N}^{2}}\|\mu\|^{2}\lesssim\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}.$
Next we show consistency of $V$ under the null, which is crucial in properly
standardizing our test statistic and establishing asymptotic normality.
###### Proposition A.1.
Recall the definition of $\beta_{n}$ in (A.15). Suppose that $\beta_{n}=o(1)$
and that the condition (3.1) holds. If under the null hypothesis we have
$\displaystyle K^{2}\|\mu\|^{4}$
$\displaystyle\gg\sum_{k}\frac{\|\mu\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}\vee\sum_{k}\frac{\|\mu\|_{3}^{3}}{n_{k}\bar{N}_{k}},$
(A.22)
then $V/\mathrm{Var}(T)\to 1$ in probability.
To later control the type II error, we must also show that $V$ does not
dominate the true variance under the alternative. We first state an
intermediate result that is useful throughout.
###### Lemma A.12.
Suppose that, under either the null or alternative,
$\max_{i}\|\Omega_{i}\|_{\infty}\leq 1-c_{0}$ holds for an absolute constant
$c_{0}>0$. Then
$\displaystyle\mathrm{Var}(T)\gtrsim\Theta_{n2}+\Theta_{n3}+\Theta_{n4}.$
(A.23)
###### Proposition A.2.
Suppose that under the alternative (3.1) holds and
$\displaystyle\big{(}\sum_{k}\|\mu_{k}\|^{2}\big{)}^{2}$
$\displaystyle\gg\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}\vee\sum_{k}\frac{\|\mu_{k}\|_{3}^{3}}{n_{k}\bar{N}_{k}}.$
(A.24)
Then $V=O_{\mathbb{P}}(\mathrm{Var}(T))$ under the alternative.
We also require versions of Proposition A.1 and Proposition A.2 that hold
under weaker conditions in the special case $K=2$. We omit the proofs as they
are similar. Below we use the notation of Section 3.4.
###### Proposition A.3.
Suppose that $K=2$ and recall the definition of $\beta_{n}^{(2)}$ in (A.16).
Suppose that $\beta_{n}^{(2)}=o(1)$, $\min_{i}N_{i}\geq 2,\min_{i}M_{i}\geq
2$, and $\max_{i}\|\Omega_{i}\|_{\infty}\leq
1-c_{0},\max_{i}\|\Gamma_{i}\|_{\infty}\leq 1-c_{0}$. If under the null
hypothesis
$\displaystyle\|\mu\|^{4}\gg\max\Big{\\{}\,\big{(}\frac{\|\mu\|_{2}^{2}}{n^{2}\bar{N}^{2}}+\frac{\|\mu\|_{2}^{2}}{m^{2}\bar{M}^{2}}\big{)},\,\big{(}\frac{\|\mu\|_{3}^{3}}{n\bar{N}}+\frac{\|\mu\|_{3}^{3}}{m\bar{M}}\big{)}\Big{\\}},$
(A.25)
then $V/\mathrm{Var}(T)\to 1$ in probability.
Under the alternative we have the following.
###### Proposition A.4.
Suppose that $K=2$, $\min_{i}N_{i}\geq 2,\min_{i}M_{i}\geq 2$, and
$\max_{i}\|\Omega_{i}\|_{\infty}\leq
1-c_{0},\max_{i}\|\Gamma_{i}\|_{\infty}\leq 1-c_{0}$. If under the alternative
$\displaystyle\bigg{\|}\frac{m\bar{M}}{n\bar{N}+m\bar{M}}\eta+\frac{n\bar{N}}{n\bar{N}+m\bar{M}}\theta\bigg{\|}^{4}\gg\max\Big{\\{}\,\big{(}\frac{\|\eta\|_{2}^{2}}{n^{2}\bar{N}^{2}}+\frac{\|\theta\|_{2}^{2}}{m^{2}\bar{M}^{2}}\big{)},\,\big{(}\frac{\|\eta\|_{3}^{3}}{n\bar{N}}+\frac{\|\theta\|_{3}^{3}}{m\bar{M}}\big{)}\Big{\\}},$
(A.26)
then $V=O_{\mathbb{P}}(\mathrm{Var}(T))$.
In the setting of $K=n$, we utilize the variance estimator $V^{*}$. The next
results capture the behavior of $V^{*}$ under the null and alternative. The
proofs are given later in this section.
###### Proposition A.5.
Define
$\displaystyle\beta_{n}^{(n)}=\frac{\sum_{i}\|\Omega_{i}\|_{3}^{3}}{n\|\mu\|^{2}}.$
(A.27)
Suppose that (3.1) holds, $\beta_{n}^{(n)}=o(1)$, and
$\displaystyle
n^{2}\|\mu\|^{4}\gg\sum_{i}\frac{\|\mu\|^{2}}{N_{i}^{2}}\vee\sum_{i}\frac{\|\mu\|_{3}^{3}}{N_{i}}.$
(A.28)
Then $V^{*}/\mathrm{Var}(T)\to 1$ in probability as $n\to\infty$.
###### Proposition A.6.
Suppose that under the alternative (3.1) holds and
$\displaystyle\big{(}\sum_{i}\|\Omega_{i}\|^{2}\big{)}^{2}\gg\sum_{i}\frac{\|\Omega_{i}\|^{2}}{N_{i}^{2}}\vee\sum_{i}\frac{\|\Omega_{i}\|_{3}^{3}}{N_{i}}.$
(A.29)
Then $V^{*}=O_{\mathbb{P}}(\mathrm{Var}(T))$ under the alternative.
### A.5 Proof of Lemma A.1
We first show that $\mathbb{E}[U_{\kappa}]={\bf 0}_{p}$ and
$\mathbb{E}[U_{\kappa}U_{\zeta}^{\prime}]={\bf 0}_{p\times p}$ for
$\kappa\neq\zeta$. Note that $\\{Z_{ir}\\}_{1\leq i\leq n,1\leq r\leq N_{i}}$
are independent mean-zero random vectors. It follows that each $U_{\kappa}$ is
a mean-zero random vector. We then compute $\mathbb{E}[U_{\kappa
j_{1}}U_{\zeta j_{2}}]$ for $\kappa\neq\zeta$ and all $1\leq j_{1},j_{2}\leq
p$. By direct calculations,
$\mathbb{E}[U_{1j_{1}}U_{2j_{2}}]=2\sum_{(k,i,r,s)}\sum_{(k^{\prime},i^{\prime},r^{\prime})}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}(\mu_{k^{\prime}j_{1}}-\mu_{j_{1}})\frac{N_{i}}{N_{i}-1}\mathbb{E}[Z_{ij_{2}r}Z_{ij_{2}s}Z_{i^{\prime}j_{1}r^{\prime}}].$
If $i^{\prime}\neq i$, or if $i^{\prime}=i$ and $r^{\prime}\notin\\{r,s\\}$,
then $Z_{i^{\prime}j_{1}r^{\prime}}$ is independent of
$Z_{ij_{2}r}Z_{ij_{2}s}$, and it follows that
$\mathbb{E}[Z_{ij_{2}r}Z_{ij_{2}s}Z_{i^{\prime}j_{1}r^{\prime}}]=0$. If
$i^{\prime}=i$ and $r=r^{\prime}$, then
$\mathbb{E}[Z_{ij_{2}r}Z_{ij_{2}s}Z_{i^{\prime}j_{1}r^{\prime}}]=\mathbb{E}[Z_{ij_{2}r}Z_{ij_{1}r}]\cdot\mathbb{E}[Z_{ij_{2}s}]$;
since $r\neq s$, we also have
$\mathbb{E}[Z_{ij_{2}r}Z_{ij_{2}s}Z_{i^{\prime}j_{1}r^{\prime}}]=0$. This
proves $\mathbb{E}[U_{1j_{1}}U_{2j_{2}}]=0$. Since this holds for all $1\leq
j_{1},j_{2}\leq p$, we immediately have
$\mathbb{E}[U_{1}U_{2}^{\prime}]={\bf 0}_{p\times p}.$
We can similarly show that $\mathbb{E}[U_{\kappa}U_{\zeta}^{\prime}]={\bf
0}_{p\times p}$, for other $\kappa\neq\zeta$. The proof is omitted.
It remains to prove the desirable decomposition of $T$. Recall that
$T=\sum_{j=1}^{p}T_{j}$. Write $\rho^{2}=\sum_{j=1}^{p}\rho_{j}^{2}$, where
$\rho_{j}^{2}=\sum_{k=1}^{K}n_{k}\bar{N}_{k}(\mu_{kj}-\mu_{j})^{2}$. It
suffices to show that
$T_{j}=\rho_{j}^{2}+U_{1j}+U_{2j}+U_{3j}+U_{4j},\qquad\mbox{for all }1\leq
j\leq p.$ (A.30)
To prove (A.30), we need some preparation. Define
$Y_{ij}:=\frac{X_{ij}}{N_{i}}-\Omega_{ij}=\frac{1}{N_{i}}\sum_{r=1}^{N_{i}}Z_{ijr},\qquad
Q_{ij}:=Y_{ij}^{2}-\mathbb{E}Y^{2}_{ij}=Y_{ij}^{2}-\frac{\Omega_{ij}(1-\Omega_{ij})}{N_{i}}.$
(A.31)
With these notations, $X_{ij}=N_{i}(\Omega_{ij}+Y_{ij})$ and
$N_{i}Y_{ij}^{2}=N_{i}Q_{ij}+\Omega_{ij}(1-\Omega_{ij})$. Moreover, we can use
(A.31) to re-write $Q_{ij}$ as a function of $\\{Z_{ijr}\\}_{1\leq r\leq
N_{i}}$ as follows:
$Q_{ij}=\frac{1}{N^{2}_{i}}\sum_{r=1}^{N_{i}}[Z_{ijr}^{2}-\Omega_{ij}(1-\Omega_{ij})]+\frac{1}{N^{2}_{i}}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}.$
Note that $Z_{ijr}=B_{ijr}-\Omega_{ij}$, where $B_{ijr}$ can only take values
in $\\{0,1\\}$. Hence, $(Z_{ijr}+\Omega_{ij})^{2}=(Z_{ijr}+\Omega_{ij})$
always holds. Re-arranging the terms gives
$Z^{2}_{ijr}-\Omega_{ij}(1-\Omega_{ij})=(1-2\Omega_{ij})Z_{ijr}$. It follows
that
$Q_{ij}=(1-2\Omega_{ij})\frac{Y_{ij}}{N_{i}}+\frac{1}{N^{2}_{i}}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}.$ (A.32)
This is a useful equality which we will use in the proof below.
We now show (A.30). Fix $j$ and write $T_{j}=R_{j}-D_{j}$, where
$R_{j}=\sum_{k=1}^{K}n_{k}\bar{N}_{k}(\hat{\mu}_{kj}-\hat{\mu}_{j})^{2},\quad\mbox{and}\quad
D_{j}=\sum_{k=1}^{K}\sum_{i\in
S_{k}}\xi_{k}\frac{X_{ij}(N_{i}-X_{ij})}{n_{k}\bar{N}_{k}(N_{i}-1)},\quad\mbox{with
}\xi_{k}=1-\frac{n_{k}\bar{N}_{k}}{n\bar{N}}.$
First, we study $D_{j}$. Note that
$X_{ij}(N_{i}-X_{ij})=N^{2}_{i}(\Omega_{ij}+Y_{ij})(1-\Omega_{ij}-Y_{ij})=N^{2}_{i}\Omega_{ij}(1-\Omega_{ij})-N^{2}_{i}Y_{ij}^{2}+N_{i}^{2}(1-2\Omega_{ij})Y_{ij}$,
where $Y_{ij}^{2}=Q_{ij}+N_{i}^{-1}\Omega_{ij}(1-\Omega_{ij})$. It follows
that
$\frac{X_{ij}(N_{i}-X_{ij})}{N_{i}(N_{i}-1)}=\Omega_{ij}(1-\Omega_{ij})-\frac{N_{i}Q_{ij}}{N_{i}-1}+\frac{N_{i}}{N_{i}-1}(1-2\Omega_{ij})Y_{ij}.$
We apply (A.32) to get
$\frac{X_{ij}(N_{i}-X_{ij})}{N_{i}(N_{i}-1)}=\Omega_{ij}(1-\Omega_{ij})+(1-2\Omega_{ij})Y_{ij}-\frac{1}{N_{i}(N_{i}-1)}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}.$ (A.33)
It follows that
$\displaystyle D_{j}$ $\displaystyle=\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}N_{i}}{n_{k}\bar{N}_{k}}\Omega_{ij}(1-\Omega_{ij})+\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}N_{i}}{n_{k}\bar{N}_{k}}(1-2\Omega_{ij})Y_{ij}$ (A.34)
$\displaystyle\qquad-\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}}{n_{k}\bar{N}_{k}(N_{i}-1)}\sum_{1\leq r\neq s\leq
N_{i}}Z_{ijr}Z_{ijs}.$ (A.35)
Next, we study $R_{j}$. Note that
$n_{k}\bar{N}_{k}(\hat{\mu}_{kj}-\hat{\mu}_{j})=\sum_{i\in
S_{k}}(X_{ij}-\bar{N}_{k}\hat{\mu}_{j})$. It follows that
$R_{j}=\sum_{k=1}^{K}\frac{1}{n_{k}\bar{N}_{k}}\biggl{[}\sum_{i\in
S_{k}}(X_{ij}-\bar{N}_{k}\hat{\mu}_{j})\biggr{]}^{2}.$
Recall that $X_{ij}=N_{i}(\Omega_{ij}+Y_{ij})$. By direct calculations,
$\sum_{i\in S_{k}}X_{ij}=n_{k}\bar{N}_{k}\mu_{kj}+\sum_{i\in
S_{k}}N_{i}Y_{ij}$, and
$\hat{\mu}_{j}=\mu_{j}+(n\bar{N})^{-1}\sum_{m=1}^{n}N_{m}Y_{mj}$. We then have
the following decomposition:
$\displaystyle\sum_{i\in S_{k}}(X_{ij}-\bar{N}_{k}\hat{\mu}_{j})$
$\displaystyle=n_{k}\bar{N}_{k}(\mu_{kj}-\mu_{j})+\sum_{i\in
S_{k}}N_{i}Y_{ij}-\frac{n_{k}\bar{N}_{k}}{n\bar{N}}\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}.$
Using this decomposition, we can expand $[\sum_{i\in
S_{k}}(X_{ij}-\bar{N}_{k}\hat{\mu}_{j})]^{2}$ to a total of 6 terms, where 3
are quadratic terms and 3 are cross terms. It yields a decomposition of
$R_{j}$ into 6 terms:
$\displaystyle R_{j}$
$\displaystyle=\sum_{k=1}^{K}n_{k}\bar{N}_{k}(\mu_{kj}-\mu_{j})^{2}+\sum_{k=1}^{K}\frac{1}{n_{k}\bar{N}_{k}}\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}^{2}+\sum_{k=1}^{K}\frac{n_{k}\bar{N}_{k}}{n^{2}\bar{N}^{2}}\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}^{2}$
(A.36)
$\displaystyle\qquad+2\sum_{k=1}^{K}(\mu_{kj}-\mu_{j})\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}-2\sum_{k=1}^{K}\frac{n_{k}\bar{N}_{k}}{n\bar{N}}(\mu_{kj}-\mu_{j})\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}$
(A.37) $\displaystyle\qquad-\frac{2}{n\bar{N}}\sum_{k=1}^{K}\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}$ (A.38)
$\displaystyle\equiv I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}.$ (A.39)
By definition, $\sum_{k=1}^{K}n_{k}\bar{N}_{k}=n\bar{N}$ and
$\sum_{k=1}^{K}n_{k}\bar{N}_{k}\mu_{kj}=n\bar{N}\mu_{j}$. It follows that
$I_{3}=\frac{1}{n\bar{N}}\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}^{2},\qquad
I_{5}=0,\qquad
I_{6}=-\frac{2}{n\bar{N}}\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}^{2}=-2I_{3}.$
It follows that
$R_{j}=I_{1}+I_{2}-I_{3}+I_{4}.$ (A.40)
We further simplify $I_{3}$. Recall that
$\xi_{k}=1-(n\bar{N})^{-1}n_{k}\bar{N}_{k}$. By direct calculations,
$\displaystyle I_{3}$
$\displaystyle=\frac{1}{n\bar{N}}\Bigl{(}\sum_{m=1}^{n}N_{m}Y_{mj}\Bigr{)}^{2}=\frac{1}{n\bar{N}}\biggl{[}\sum_{k=1}^{K}\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}\biggr{]}^{2}$ (A.41)
$\displaystyle=\frac{1}{n\bar{N}}\sum_{k=1}^{K}\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}^{2}+\frac{1}{n\bar{N}}\sum_{1\leq k\neq\ell\leq
K}\Bigl{(}\sum_{i\in S_{k}}N_{i}Y_{ij}\Bigr{)}\Bigl{(}\sum_{m\in
S_{\ell}}N_{m}Y_{mj}\Bigr{)}$ (A.42)
$\displaystyle=\sum_{k=1}^{K}(1-\xi_{k})\frac{1}{n_{k}\bar{N}_{k}}\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}^{2}+\underbrace{\frac{1}{n\bar{N}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}N_{i}N_{m}Y_{ij}Y_{mj}}_{J_{1}}$ (A.43)
$\displaystyle=I_{2}-\sum_{k=1}^{K}\frac{\xi_{k}}{n_{k}\bar{N}_{k}}\Bigl{(}\sum_{i\in
S_{k}}N_{i}Y_{ij}\Bigr{)}^{2}+J_{1}$ (A.44)
$\displaystyle=I_{2}+J_{1}-\sum_{k=1}^{K}\frac{\xi_{k}}{n_{k}\bar{N}_{k}}\Bigl{(}\sum_{i\in
S_{k}}N_{i}^{2}Y_{ij}^{2}\Bigr{)}-\underbrace{\sum_{k=1}^{K}\frac{\xi_{k}}{n_{k}\bar{N}_{k}}\sum_{\begin{subarray}{c}i\in
S_{k},m\in S_{k}\\\ i\neq m\end{subarray}}N_{i}N_{m}Y_{ij}Y_{mj}}_{J_{2}}.$
(A.45)
By (A.31), $N_{i}Y_{ij}^{2}=N_{i}Q_{ij}+\Omega_{ij}(1-\Omega_{ij})$. We further
apply (A.32) to get
$N_{i}^{2}Y_{ij}^{2}=N_{i}(1-2\Omega_{ij})Y_{ij}+\sum_{1\leq r\neq s\leq
N_{i}}Z_{ijr}Z_{ijs}+N_{i}\Omega_{ij}(1-\Omega_{ij}).$
It follows that
$\displaystyle\sum_{k=1}^{K}\frac{\xi_{k}}{n_{k}\bar{N}_{k}}$
$\displaystyle\Bigl{(}\sum_{i\in
S_{k}}N_{i}^{2}Y_{ij}^{2}\Bigr{)}=\underbrace{\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}N_{i}}{n_{k}\bar{N}_{k}}(1-2\Omega_{ij})Y_{ij}}_{J_{3}}$
(A.46) $\displaystyle+\underbrace{\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}}{n_{k}\bar{N}_{k}}\sum_{r\neq
s}Z_{ijr}Z_{ijs}}_{J_{4}}+\underbrace{\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}N_{i}}{n_{k}\bar{N}_{k}}\Omega_{ij}(1-\Omega_{ij})}_{J_{5}}.$
(A.47)
We plug (A.46) into (A.41) to get $I_{3}=I_{2}+J_{1}-J_{2}-J_{3}-J_{4}-J_{5}$.
Further plugging $I_{3}$ into the expression of $R_{j}$ in (A.40), we have
$\displaystyle R_{j}$
$\displaystyle=I_{1}+I_{4}-J_{1}+J_{2}+J_{3}+J_{4}+J_{5},$ (A.48)
where $I_{1}$ and $I_{4}$ are defined in (A.36), $J_{1}$-$J_{2}$ are defined
in (A.41), and $J_{3}$-$J_{5}$ are defined in (A.46).
Finally, we combine the expressions of $D_{j}$ and $R_{j}$. By (A.34) and the
definitions of $J_{1}$-$J_{5}$,
$\displaystyle D_{j}$ $\displaystyle=J_{5}+J_{3}-\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}}{n_{k}\bar{N}_{k}(N_{i}-1)}\sum_{r\neq s}Z_{ijr}Z_{ijs}$
$\displaystyle=J_{5}+J_{3}+J_{4}-\underbrace{\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}N_{i}}{n_{k}\bar{N}_{k}(N_{i}-1)}\sum_{r\neq
s}Z_{ijr}Z_{ijs}}_{J_{6}}.$
Combining it with (A.48) gives
$T_{j}=R_{j}-D_{j}=I_{1}+I_{4}-J_{1}+J_{2}+J_{6}$. We further plug in the
definition of each term. It follows that
$\displaystyle T_{j}$
$\displaystyle=\sum_{k=1}^{K}n_{k}\bar{N}_{k}(\mu_{kj}-\mu_{j})^{2}+2\sum_{k=1}^{K}\sum_{i\in
S_{k}}(\mu_{kj}-\mu_{j})N_{i}Y_{ij}-\frac{1}{n\bar{N}}\sum_{k\neq\ell}\sum_{i\in
S_{k},m\in S_{\ell}}N_{i}N_{m}Y_{ij}Y_{mj}$ (A.49)
$\displaystyle\qquad+\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k}\\\ i\neq
m\end{subarray}}\frac{\xi_{k}}{n_{k}\bar{N}_{k}}N_{i}N_{m}Y_{ij}Y_{mj}+\sum_{k=1}^{K}\sum_{i\in
S_{k}}\frac{\xi_{k}N_{i}}{n_{k}\bar{N}_{k}(N_{i}-1)}\sum_{r\neq
s}Z_{ijr}Z_{ijs}.$ (A.50)
We plug in $Y_{ij}=N_{i}^{-1}\sum_{r=1}^{N_{i}}Z_{ijr}$ and take a sum of
$1\leq j\leq p$. It gives (A.30) immediately. The proof is now complete. ∎
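Remark: for concreteness, the statistic $T=\sum_{j}T_{j}$ with $T_{j}=R_{j}-D_{j}$ as in (A.30) can be computed directly from the count matrix; a minimal Python sketch (the function name and interface are ours, and every $N_{i}\geq 2$ is assumed):

```python
import numpy as np

def T_statistic(X, groups):
    """Compute T = sum_j (R_j - D_j) from raw counts, following (A.30).

    X      : (n, p) integer array, X[i, j] = count of category j in sample i.
    groups : (n,) integer array with group labels in {0, ..., K-1}.
    Requires every row sum N_i >= 2.
    """
    X = np.asarray(X, dtype=float)
    N = X.sum(axis=1)                        # N_i
    total = N.sum()                          # n * Nbar
    mu_hat = X.sum(axis=0) / total           # pooled estimate, mu_hat_j
    R = np.zeros(X.shape[1])
    D = np.zeros(X.shape[1])
    for k in range(int(groups.max()) + 1):
        idx = groups == k
        gtot = N[idx].sum()                  # n_k * Nbar_k
        mu_k = X[idx].sum(axis=0) / gtot     # group estimate, mu_hat_kj
        xi_k = 1.0 - gtot / total
        R += gtot * (mu_k - mu_hat) ** 2
        D += (xi_k * X[idx] * (N[idx, None] - X[idx])
              / (gtot * (N[idx, None] - 1.0))).sum(axis=0)
    return (R - D).sum()
```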
### A.6 Proof of Lemma A.2
Recall that $\\{Z_{ir}\\}_{1\leq i\leq n,1\leq r\leq N_{i}}$ are independent
random vectors. Write
${\bf 1}_{p}^{\prime}U_{1}=2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{r=1}^{N_{i}}(\mu_{k}-\mu)^{\prime}Z_{ir}.$
The covariance matrix of $Z_{ir}$ is
$\mathrm{diag}(\Omega_{i})-\Omega_{i}\Omega_{i}^{\prime}$. It follows that
$\displaystyle\mathrm{Var}({\bf 1}_{p}^{\prime}U_{1})$
$\displaystyle=4\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{r=1}^{N_{i}}(\mu_{k}-\mu)^{\prime}\bigl{[}\mathrm{diag}(\Omega_{i})-\Omega_{i}\Omega_{i}^{\prime}\bigr{]}(\mu_{k}-\mu)$
(A.51)
$\displaystyle=4\sum_{k}(\mu_{k}-\mu)^{\prime}\Bigl{[}\mathrm{diag}\Bigl{(}\sum_{i\in
S_{k}}N_{i}\Omega_{i}\Bigr{)}-\Bigl{(}\sum_{i\in
S_{k}}N_{i}\Omega_{i}\Omega_{i}^{\prime}\Bigr{)}\Bigr{]}(\mu_{k}-\mu)$ (A.52)
$\displaystyle=4\sum_{k}(\mu_{k}-\mu)^{\prime}\Bigl{[}\mathrm{diag}(n_{k}\bar{N}_{k}\mu_{k})-n_{k}\bar{N}_{k}\Sigma_{k}\Bigr{]}(\mu_{k}-\mu)$
(A.53)
$\displaystyle=4\sum_{k}n_{k}\bar{N}_{k}\bigl{\|}\mathrm{diag}(\mu_{k})^{1/2}(\mu_{k}-\mu)\bigr{\|}^{2}-4\sum_{k}n_{k}\bar{N}_{k}\bigl{\|}\Sigma_{k}^{1/2}(\mu_{k}-\mu)\bigr{\|}^{2}.$
(A.54)
This proves the first claim. Furthermore, by (A.51),
$\mathrm{Var}({\bf 1}_{p}^{\prime}U_{1})\leq
4\sum_{k}n_{k}\bar{N}_{k}\bigl{\|}\mathrm{diag}(\mu_{k})^{1/2}(\mu_{k}-\mu)\bigr{\|}^{2}\leq
4\sum_{k}n_{k}\bar{N}_{k}\|\mathrm{diag}(\mu_{k})\|\|\mu_{k}-\mu\|^{2}.$
Note that $\|\mathrm{diag}(\mu_{k})\|=\|\mu_{k}\|_{\infty}$. Therefore, if
$\max_{k}\|\mu_{k}\|_{\infty}=o(1)$, the right hand side above is $o(1)\cdot
4\sum_{k}n_{k}\bar{N}_{k}\|\mu_{k}-\mu\|^{2}=o(\rho^{2})$. This proves the
second claim. ∎
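Remark (a numerical sanity check, not part of the proof): the equality of (A.51) and (A.54) can be verified directly on small examples, with $\mu_{k}$, $\mu$, and $\Sigma_{k}$ as defined in (A.2)-(A.3); a minimal Python sketch:

```python
import numpy as np

# Check that the per-draw covariance form (A.51) of Var(1'U_1)
# equals the closed form (A.54) in terms of mu_k and Sigma_k.
rng = np.random.default_rng(2)
p = 4
Omega = rng.dirichlet(np.ones(p), size=5)      # rows Omega_i
N = rng.integers(2, 8, size=5)                 # draws N_i per sample
groups = np.array([0, 0, 0, 1, 1])
total = N.sum()
mu = (N[:, None] * Omega).sum(axis=0) / total
lhs = rhs = 0.0
for k in range(2):
    idx = np.where(groups == k)[0]
    gtot = N[idx].sum()                        # n_k * Nbar_k
    mu_k = (N[idx, None] * Omega[idx]).sum(axis=0) / gtot
    Sigma_k = sum(N[i] * np.outer(Omega[i], Omega[i]) for i in idx) / gtot
    d = mu_k - mu
    for i in idx:
        cov_i = np.diag(Omega[i]) - np.outer(Omega[i], Omega[i])
        lhs += 4 * N[i] * d @ cov_i @ d        # (A.51), summed over the N_i draws
    rhs += 4 * gtot * ((mu_k * d * d).sum() - d @ Sigma_k @ d)   # (A.54)
assert np.isclose(lhs, rhs)
```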
### A.7 Proof of Lemma A.3
For each $1\leq k\leq K$, define a set of index triplets: ${\cal
M}_{k}=\\{(i,r,s):i\in S_{k},1\leq r<s\leq N_{i}\\}$. Let ${\cal
M}=\cup_{k=1}^{K}{\cal M}_{k}$. Write for short
$\theta_{i}=(\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}})^{2}\frac{N_{i}^{3}}{N_{i}-1}$,
for $i\in S_{k}$. It is seen that
${\bf 1}_{p}^{\prime}U_{2}=2\sum_{(i,r,s)\in{\cal
M}}\frac{\sqrt{\theta_{i}}}{\sqrt{N_{i}(N_{i}-1)}}W_{irs},\qquad\mbox{with}\quad
W_{irs}=\sum_{j=1}^{p}Z_{ijr}Z_{ijs}.$
For $W_{irs}$ and $W_{i^{\prime}r^{\prime}s^{\prime}}$, if $i\neq i^{\prime}$,
or if $i=i^{\prime}$ and $\\{r,s\\}\cap\\{r^{\prime},s^{\prime}\\}=\emptyset$,
then these two variables are independent; if $i=i^{\prime}$, $r=r^{\prime}$
and $s\neq s^{\prime}$, then
$\mathbb{E}[W_{irs}W_{irs^{\prime}}]=\sum_{j,j^{\prime}}\mathbb{E}[Z_{ijr}Z_{ijs}Z_{ij^{\prime}r}Z_{ij^{\prime}s^{\prime}}]=\sum_{j,j^{\prime}}\mathbb{E}[Z_{ijr}Z_{ij^{\prime}r}]\cdot\mathbb{E}[Z_{ijs}]\cdot\mathbb{E}[Z_{ij^{\prime}s^{\prime}}]=0$.
Therefore, $\\{W_{irs}\\}_{(i,r,s)\in{\cal M}}$ is a collection of mutually
uncorrelated variables. It follows that
$\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})=4\sum_{(i,r,s)\in{\cal
M}}\frac{\theta_{i}}{N_{i}(N_{i}-1)}\mathrm{Var}(W_{irs}).$
It remains to calculate the variance of each $W_{irs}$. By direct
calculations,
$\displaystyle\mathrm{Var}(W_{irs})$
$\displaystyle=\sum_{j}\mathbb{E}[Z_{ijr}^{2}Z_{ijs}^{2}]+2\sum_{j<\ell}\mathbb{E}[Z_{ijr}Z_{ijs}Z_{i\ell
r}Z_{i\ell s}]$ (A.55)
$\displaystyle=\sum_{j}[\Omega_{ij}(1-\Omega_{ij})]^{2}+2\sum_{j<\ell}(-\Omega_{ij}\Omega_{i\ell})^{2}$
(A.56)
$\displaystyle=\sum_{j}\Omega_{ij}^{2}-2\sum_{j}\Omega^{3}_{ij}+\Bigl{(}\sum_{j}\Omega^{2}_{ij}\Bigr{)}^{2}$
(A.57)
$\displaystyle=\|\Omega_{i}\|^{2}-2\|\Omega_{i}\|_{3}^{3}+\|\Omega_{i}\|^{4}.$
(A.58)
Since $\max_{ij}\Omega_{ij}\leq 1$, we have
$\displaystyle\|\Omega_{i}\|^{2}-\|\Omega_{i}\|_{3}^{3}\leq\mathrm{Var}(W_{irs})\leq\|\Omega_{i}\|^{2}.$
Therefore,
$\displaystyle\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})$
$\displaystyle=4\sum_{k=1}^{K}\sum_{i\in S_{k}}\sum_{1\leq r<s\leq
N_{i}}\frac{\theta_{i}}{N_{i}(N_{i}-1)}\mathrm{Var}(W_{irs})$
$\displaystyle=2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\theta_{i}\mathrm{Var}(W_{irs})\geq 2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\theta_{i}\big{[}\|\Omega_{i}\|^{2}-\|\Omega_{i}\|_{3}^{3}\big{]}=\Theta_{n2}-A_{n},$
and similarly $\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})\leq\Theta_{n2}$, which
proves the first claim. To prove the second claim, note that
$\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})=\Theta_{n2}+O(A_{n})$. By (A.9) and
the assumption $\min N_{i}\geq 2$, we have
$\displaystyle A_{n}$
$\displaystyle\lesssim\sum_{k}\big{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\big{)}^{2}\sum_{i\in
S_{k}}N_{i}^{2}\|\Omega_{i}\|_{3}^{3}$
$\displaystyle=\sum_{k}\big{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\big{)}^{2}\cdot
o\bigg{(}\sum_{i\in S_{k}}N_{i}^{2}\|\Omega_{i}\|^{2}\bigg{)}=o(\Theta_{n2}),$
which implies that $\mathrm{Var}({\bf 1}_{p}^{\prime}U_{2})=[1+o(1)]\Theta_{n2}$, as
desired.
∎
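Remark (not part of the proof): the closed form (A.58) for $\mathrm{Var}(W_{irs})$ is easy to check by simulation, since distinct draws of a multinomial are independent categorical variables; a minimal Python sketch:

```python
import numpy as np

# Monte Carlo check of (A.55)-(A.58): Var(W_irs) for W_irs = sum_j Z_ijr Z_ijs,
# where Z_i:r = B_i:r - Omega_i and B_i:r ~ Multinomial(1, Omega_i).
rng = np.random.default_rng(1)
omega = np.array([0.5, 0.3, 0.2])
reps = 200_000
B_r = rng.multinomial(1, omega, size=reps)   # draw r
B_s = rng.multinomial(1, omega, size=reps)   # draw s, independent of r
W = ((B_r - omega) * (B_s - omega)).sum(axis=1)
var_exact = (omega**2).sum() - 2 * (omega**3).sum() + (omega**2).sum()**2
print(W.var(), var_exact)   # should agree up to Monte Carlo error (about 0.2044)
```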
### A.8 Proof of Lemma A.4
For each $1\leq k<\ell\leq K$, define a set of index quadruples: ${\cal
J}_{k\ell}=\\{(i,r,m,s):i\in S_{k},m\in S_{\ell},1\leq r\leq N_{i},1\leq s\leq
N_{m}\\}$. Let ${\cal J}=\cup_{(k,\ell):1\leq k<\ell\leq K}{\cal J}_{k\ell}$.
It is seen that
${\bf 1}_{p}^{\prime}U_{3}=-\frac{2}{n\bar{N}}\sum_{(i,r,m,s)\in{\cal
J}}V_{irms},\qquad\mbox{where}\;\;V_{irms}=\sum_{j=1}^{p}Z_{ijr}Z_{mjs}.$
For $V_{irms}$ and $V_{i^{\prime}r^{\prime}m^{\prime}s^{\prime}}$, if
$\\{(i,r),(m,s)\\}\cap\\{(i^{\prime},r^{\prime}),(m^{\prime},s^{\prime})\\}=\emptyset$,
then the two variables are independent of each other. If
$(i,r)=(i^{\prime},r^{\prime})$ and $(m,s)\neq(m^{\prime},s^{\prime})$, then
$\mathbb{E}[V_{irms}V_{irm^{\prime}s^{\prime}}]=\sum_{j,j^{\prime}}\mathbb{E}[Z_{ijr}Z_{mjs}Z_{ij^{\prime}r}Z_{m^{\prime}j^{\prime}s^{\prime}}]=\sum_{j,j^{\prime}}\mathbb{E}[Z_{ijr}Z_{ij^{\prime}r}]\cdot\mathbb{E}[Z_{mjs}]\cdot\mathbb{E}[Z_{m^{\prime}j^{\prime}s^{\prime}}]=0$.
Therefore, the only correlated case is when
$(i,r,m,s)=(i^{\prime},r^{\prime},m^{\prime},s^{\prime})$. This implies that
$\\{V_{irms}\\}_{(i,r,m,s)\in{\cal J}}$ is a collection of mutually
uncorrelated variables. Therefore,
$\mathrm{Var}({\bf
1}_{p}^{\prime}U_{3})=\frac{4}{n^{2}\bar{N}^{2}}\sum_{(i,r,m,s)\in{\cal
J}}\mathrm{Var}(V_{irms}).$
Note that
$\mathrm{Var}(V_{irms})=\mathbb{E}[(\sum_{j}Z_{ijr}Z_{mjs})^{2}]=\sum_{j,j^{\prime}}\mathbb{E}[Z_{ijr}Z_{mjs}Z_{ij^{\prime}r}Z_{mj^{\prime}s}]$;
also, the covariance matrix of $Z_{ir}$ is
$\mathrm{diag}(\Omega_{i})-\Omega_{i}\Omega_{i}^{\prime}$. It follows that
$\displaystyle\mathrm{Var}(V_{irms})$
$\displaystyle=\sum_{j}\mathbb{E}[Z^{2}_{ijr}]\cdot\mathbb{E}[Z^{2}_{mjs}]+\sum_{j\neq
j^{\prime}}\mathbb{E}[Z_{ijr}Z_{ij^{\prime}r}]\cdot\mathbb{E}[Z_{mjs}Z_{mj^{\prime}s}]$
(A.60)
$\displaystyle=\sum_{j}\Omega_{ij}(1-\Omega_{ij})\Omega_{mj}(1-\Omega_{mj})+\sum_{j\neq
j^{\prime}}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$
(A.61)
$\displaystyle=\sum_{j}\Omega_{ij}\Omega_{mj}-2\sum_{j}\Omega^{2}_{ij}\Omega^{2}_{mj}+\sum_{j,j^{\prime}}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}.$
(A.62)
Write for short
$\delta_{im}=-2\sum_{j}\Omega^{2}_{ij}\Omega^{2}_{mj}+\sum_{j,j^{\prime}}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$. Combining
the above gives
$\displaystyle\mathrm{Var}$ $\displaystyle({\bf
1}_{p}^{\prime}U_{3})=\frac{4}{n^{2}\bar{N}^{2}}\sum_{k<\ell}\sum_{i\in
S_{k}}\sum_{m\in
S_{\ell}}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}\Bigl{(}\sum_{j}\Omega_{ij}\Omega_{mj}+\delta_{im}\Bigr{)}$
(A.63) $\displaystyle=\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in
S_{\ell}}\sum_{j}N_{i}N_{m}\Omega_{ij}\Omega_{mj}+\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}N_{i}N_{m}\delta_{im}.$ (A.64)
It is easy to see that
$|\delta_{im}|\leq\sum_{j,j^{\prime}}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$.
Also, by the definition of $\Sigma_{k}$ in (A.2), we have
$\Sigma_{k}(j,j^{\prime})=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{ij}\Omega_{ij^{\prime}}$. Using these results, we
immediately have
$\displaystyle\Bigl{|}\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}N_{i}N_{m}\delta_{im}\Bigr{|}$
$\displaystyle\leq\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in
S_{\ell}}\sum_{j,j^{\prime}}N_{i}N_{m}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$
(A.65)
$\displaystyle=\frac{2}{n^{2}\bar{N}^{2}}\sum_{j,j^{\prime}}\sum_{k\neq\ell}\Bigl{(}\sum_{i\in
S_{k}}N_{i}\Omega_{ij}\Omega_{ij^{\prime}}\Bigr{)}\Bigl{(}\sum_{m\in
S_{\ell}}N_{m}\Omega_{mj}\Omega_{mj^{\prime}}\Bigr{)}$ (A.66)
$\displaystyle=\frac{2}{n^{2}\bar{N}^{2}}\sum_{j,j^{\prime}}\sum_{k\neq\ell}n_{k}\bar{N}_{k}\Sigma_{k}(j,j^{\prime})\cdot
n_{\ell}\bar{N}_{\ell}\Sigma_{\ell}(j,j^{\prime})$ (A.67)
$\displaystyle=2\sum_{k\neq\ell}\frac{n_{k}n_{\ell}\bar{N}_{k}\bar{N}_{\ell}}{n^{2}\bar{N}^{2}}{\bf
1}_{p}^{\prime}(\Sigma_{k}\circ\Sigma_{\ell}){\bf 1}_{p}=:B_{n}$ (A.68)
as desired.
∎
### A.9 Proof of Lemma A.5
For $1\leq k\leq K$, define a set of index quadruples: ${\cal
Q}_{k}=\\{(i,r,m,s):i\in S_{k},m\in S_{k},i<m,1\leq r\leq N_{i},1\leq s\leq
N_{m}\\}$. Let ${\cal Q}=\cup_{k=1}^{K}{\cal Q}_{k}$. Write
$\kappa_{im}=(\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}})^{2}N_{i}N_{m}$,
for $i\in S_{k}$ and $m\in S_{k}$. It is seen that
${\bf 1}_{p}^{\prime}U_{4}=2\sum_{(i,r,m,s)\in{\cal
Q}}\frac{\sqrt{\kappa_{im}}}{\sqrt{N_{i}N_{m}}}V_{irms},\qquad\mbox{where}\quad
V_{irms}=\sum_{j=1}^{p}Z_{ijr}Z_{mjs}.$
It is not hard to see that $V_{irms}$ and
$V_{i^{\prime}r^{\prime}m^{\prime}s^{\prime}}$ are correlated only if
$(i,r,m,s)=(i^{\prime},r^{\prime},m^{\prime},s^{\prime})$. It follows that
$\mathrm{Var}({\bf 1}_{p}^{\prime}U_{4})=4\sum_{(i,r,m,s)\in{\cal
Q}}\frac{\kappa_{im}}{N_{i}N_{m}}\mathrm{Var}(V_{irms}).$
In the proof of Lemma A.4, we have studied $\mathrm{Var}(V_{irms})$. In
particular, by (A.60), we have
$\mathrm{Var}(V_{irms})=\sum_{j}\Omega_{ij}\Omega_{mj}+\delta_{im},\qquad\mbox{with}\quad|\delta_{im}|\leq\sum_{j,j^{\prime}}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}.$
Thus
$\displaystyle\mathrm{Var}({\bf 1}_{p}^{\prime}U_{4})$
$\displaystyle=4\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k}\\\
i<m\end{subarray}}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}\frac{\kappa_{im}}{N_{i}N_{m}}\mathrm{Var}(V_{irms})$
$\displaystyle=4\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k}\\\
i<m\end{subarray}}\kappa_{im}\Bigl{(}\sum_{j}\Omega_{ij}\Omega_{mj}+\delta_{im}\Bigr{)}$
$\displaystyle=2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k}\\\ i\neq m\end{subarray}}\sum_{j}\kappa_{im}\Omega_{ij}\Omega_{mj}\pm
2\sum_{k}\sum_{i\neq m\in
S_{k}}\kappa_{im}\sum_{j,j^{\prime}}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$
$\displaystyle=\Theta_{n4}\pm E_{n}.$ (A.69)
This proves the lemma.
∎
### A.10 Proof of Lemma A.6
By assumption (3.1), $N_{i}^{3}/(N_{i}-1)\asymp N_{i}^{2}$ and
$\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\asymp\frac{1}{n_{k}^{2}\bar{N}_{k}^{2}}$.
First, observe that
$\displaystyle\Theta_{n2}+\Theta_{n4}$
$\displaystyle=2\sum_{k=1}^{K}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\sum_{i\in
S_{k}}\frac{N_{i}^{3}}{N_{i}-1}\|\Omega_{i}\|^{2}$
$\displaystyle\quad+2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k}\\\ i\neq
m\end{subarray}}\sum_{j}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}N_{i}N_{m}\Omega_{ij}\Omega_{mj}$
$\displaystyle\asymp\sum_{k=1}^{K}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}\Bigr{)}^{2}\sum_{j}\sum_{i,m\in
S_{k}}N_{i}\Omega_{ij}\cdot N_{m}\Omega_{mj}=\sum_{k}\|\mu_{k}\|^{2}.$ (A.70)
Recall the definitions of $\mu_{k}$ and $\mu$ in (A.2)-(A.3). By direct
calculations, we have
$\displaystyle\Theta_{n3}$
$\displaystyle=2\sum_{j}\sum_{k\neq\ell}\Bigl{(}\frac{1}{n\bar{N}}\sum_{i\in
S_{k}}N_{i}\Omega_{ij}\Bigr{)}\Bigl{(}\frac{1}{n\bar{N}}\sum_{m\in
S_{\ell}}N_{m}\Omega_{mj}\Bigr{)}$
$\displaystyle=2\sum_{j}\sum_{k\neq\ell}\frac{n_{k}\bar{N}_{k}}{n\bar{N}}\mu_{kj}\cdot\frac{n_{\ell}\bar{N}_{\ell}}{n\bar{N}}\mu_{\ell
j}$
$\displaystyle=2\sum_{k\neq\ell}\frac{n_{k}n_{\ell}\bar{N}_{k}\bar{N}_{\ell}}{n^{2}\bar{N}^{2}}\cdot\mu_{k}^{\,\,\prime}\,\mu_{\ell}$
$\displaystyle\leq
2\sum_{j}\Bigl{(}\sum_{k}\frac{n_{k}\bar{N}_{k}}{n\bar{N}}\mu_{kj}\Bigr{)}^{2}=2\sum_{j}\mu_{j}^{2}=2\|\mu\|^{2}.$
(A.71)
By Cauchy–Schwarz,
$\displaystyle\|\mu\|^{2}$
$\displaystyle=\sum_{j}\bigg{(}\sum_{k}(\frac{n_{k}\bar{N_{k}}}{n\bar{N}})\mu_{kj}\bigg{)}^{2}$
$\displaystyle\leq\sum_{j}\bigg{(}\sum_{k}(\frac{n_{k}\bar{N_{k}}}{n\bar{N}})^{2}\bigg{)}\cdot\bigg{(}\sum_{k}\mu_{kj}^{2}\bigg{)}$
$\displaystyle\leq\sum_{j}\bigg{(}\sum_{k}(\frac{n_{k}\bar{N_{k}}}{n\bar{N}})\bigg{)}\cdot\bigg{(}\sum_{k}\mu_{kj}^{2}\bigg{)}=\sum_{j}\sum_{k}\mu_{kj}^{2}=\sum_{k}\|\mu_{k}\|^{2}.$
(A.72)
Combining (A.70), (A.71), and (A.72) yields
$\displaystyle
c\big{(}\sum_{k}\|\mu_{k}\|^{2}\big{)}\leq\Theta_{n2}+\Theta_{n3}+\Theta_{n4}\leq
C\big{(}\sum_{k}\|\mu_{k}\|^{2}\big{)},$
for absolute constants $c,C>0$. This completes the proof. ∎
### A.11 Proof of Lemma A.7
By (3.1), it holds that
$\displaystyle(\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}})^{2}\asymp\frac{1}{(n_{k}\bar{N}_{k})^{2}},$
(A.73)
and moreover, for all $i\in\\{1,2,\ldots,n\\}$,
$\displaystyle\frac{N_{i}^{3}}{N_{i}-1}\asymp N_{i}^{2}.$ (A.74)
Recall the definitions of $A_{n},$ $B_{n}$, and $E_{n}$ in (A.8), (A.11), and
(A.13), respectively. Note that these are the remainder terms in Lemmas A.3,
A.4, and A.5, respectively. Under the null hypothesis (recall
$\Theta_{n1}\equiv 0$ under the null),
$\displaystyle\mathrm{Var}(T)=\Theta_{n2}+\Theta_{n3}+\Theta_{n4}+O(A_{n}+B_{n}+E_{n}).$
(A.75)
It holds that
$\displaystyle
A_{n}\leq\sum_{k=1}^{K}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}\Bigr{)}^{2}\sum_{i\in
S_{k}}N_{i}^{2}\|\Omega_{i}\|_{3}^{3}.$ (A.76)
Next, by linearity and the definition of $\Sigma_{k},\Sigma$ in (A.2), (A.3),
respectively,
$\displaystyle B_{n}$ $\displaystyle\leq
2\sum_{k,\ell}\frac{n_{k}n_{\ell}\bar{N}_{k}\bar{N}_{\ell}}{n^{2}\bar{N}^{2}}{\bf
1}_{p}^{\prime}(\Sigma_{k}\circ\Sigma_{\ell}){\bf 1}_{p}$ $\displaystyle\leq
2{\bf
1}_{p}^{\prime}\bigg{(}\frac{1}{n\bar{N}}\sum_{k}n_{k}\bar{N}_{k}\Sigma_{k}\bigg{)}\circ\bigg{(}\frac{1}{n\bar{N}}\sum_{\ell}n_{\ell}\bar{N}_{\ell}\Sigma_{\ell}\bigg{)}{\bf
1}_{p}$ $\displaystyle=2{\bf 1}_{p}^{\prime}(\Sigma\circ\Sigma){\bf
1}_{p}=2\|\Sigma\|_{F}^{2}.$
By Cauchy–Schwarz,
$\displaystyle\|\Sigma\|_{F}^{2}$
$\displaystyle=\sum_{j,j^{\prime}}\bigg{(}\sum_{k}\frac{n_{k}\bar{N}_{k}}{n\bar{N}}\Sigma_{k}(j,j^{\prime})\bigg{)}^{2}$
$\displaystyle\leq\sum_{j,j^{\prime}}\bigg{(}\sum_{k}(\frac{n_{k}\bar{N}_{k}}{n\bar{N}})^{2}\bigg{)}\cdot\bigg{(}\sum_{k}\Sigma_{k}(j,j^{\prime})^{2}\bigg{)}$
$\displaystyle\leq\sum_{j,j^{\prime}}\bigg{(}\sum_{k}\frac{n_{k}\bar{N}_{k}}{n\bar{N}}\bigg{)}\cdot\bigg{(}\sum_{k}\Sigma_{k}(j,j^{\prime})^{2}\bigg{)}=\sum_{j,j^{\prime}}\sum_{k}\Sigma_{k}(j,j^{\prime})^{2}=\sum_{k}\|\Sigma_{k}\|_{F}^{2},$
(A.77)
so that $B_{n}\lesssim\sum_{k}\|\Sigma_{k}\|_{F}^{2}$.
Next by the definition of $\Sigma_{k}$ in (A.2), we have
$\Sigma_{k}(j,j^{\prime})=\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{ij}\Omega_{ij^{\prime}}$. It follows that
$\displaystyle E_{n}$
$\displaystyle\leq\sum_{k}\sum_{j,j^{\prime}}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}\sum_{i\in
S_{k}}N_{i}\Omega_{ij}\Omega_{ij^{\prime}}\Bigr{)}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}\sum_{m\in
S_{k}}N_{m}\Omega_{mj}\Omega_{mj^{\prime}}\Bigr{)}$ (A.78)
$\displaystyle=\sum_{k}\sum_{j,j^{\prime}}\Sigma_{k}^{2}(j,j^{\prime})=\sum_{k}\|\Sigma_{k}\|_{F}^{2}.$
(A.79)
Next, Lemma A.6 implies that
$\displaystyle\Theta_{n2}+\Theta_{n3}+\Theta_{n4}\asymp\sum_{k}\|\mu_{k}\|^{2}=K\|\mu\|^{2},$
(A.80)
where we use that the null hypothesis holds. By assumption of the lemma, we
have
$\beta_{n}=\frac{\max\bigg{\\{}\sum_{k}\sum_{i\in
S_{k}}\frac{N^{2}_{i}}{n_{k}^{2}\bar{N}_{k}^{2}}\|\Omega_{i}\|_{3}^{3}\,,\,\,\sum_{k}\|\Sigma_{k}\|_{F}^{2}\bigg{\\}}}{K\|\mu\|^{2}}=o(1).$
Combining this with (A.75), (A.76), (A.77), (A.78), and (A.80) completes the
proof of the first claim. The second claim follows by plugging in
$\mu_{k}=\mu$ for all $k\in\\{1,2,\ldots,K\\}$.
∎
### A.12 Proof of Lemma A.8
By assumption, $N_{i}^{3}/(N_{i}-1)\asymp N_{i}^{2},M_{i}^{3}/(M_{i}-1)\asymp
M_{i}^{2}$. By direct calculation,
$\displaystyle\Theta_{n2}+\Theta_{n4}$
$\displaystyle\asymp\big{[}\frac{m\bar{M}}{(n\bar{N}+m\bar{M})n\bar{N}}\big{]}^{2}\sum_{i,m,j}N_{i}N_{m}\Omega_{ij}\Omega_{mj}+\big{[}\frac{n\bar{N}}{(n\bar{N}+m\bar{M})m\bar{M}}\big{]}^{2}\sum_{i,m,j}M_{i}M_{m}\Gamma_{ij}\Gamma_{mj}$
$\displaystyle=\frac{1}{(n\bar{N}+m\bar{M})^{2}}\bigg{(}(m\bar{M})^{2}\|\eta\|^{2}+(n\bar{N})^{2}\|\theta\|^{2}\bigg{)}.$
(A.81)
Next,
$\displaystyle\Theta_{n3}$
$\displaystyle=\frac{4}{(n\bar{N}+m\bar{M})^{2}}\sum_{i\in S_{1}}\sum_{m\in
S_{2}}\sum_{j}N_{i}\Omega_{ij}\cdot M_{m}\Gamma_{mj}$
$\displaystyle=\frac{4}{(n\bar{N}+m\bar{M})^{2}}\cdot
n\bar{N}m\bar{M}\langle\theta,\eta\rangle.$ (A.82)
Combining (A.81) and (A.82) yields
$\displaystyle\Theta_{n2}+\Theta_{n3}+\Theta_{n4}$
$\displaystyle\asymp\frac{1}{(n\bar{N}+m\bar{M})^{2}}\big{(}(m\bar{M})^{2}\|\eta\|^{2}+2n\bar{N}m\bar{M}\langle\theta,\eta\rangle+(n\bar{N})^{2}\|\theta\|^{2}\big{)}$
$\displaystyle=\bigg{\|}\frac{m\bar{M}}{n\bar{N}+m\bar{M}}\eta+\frac{n\bar{N}}{n\bar{N}+m\bar{M}}\theta\bigg{\|}^{2},$
which proves the first claim. The second follows by plugging in
$\theta=\eta=\mu$ under the null. ∎
### A.13 Proof of Lemma A.9
As in (A.75), we have under the null that
$\displaystyle\mathrm{Var}(T)=\Theta_{n2}+\Theta_{n3}+\Theta_{n4}+O(A_{n}+B_{n}+E_{n}).$
(A.83)
For general $K$, observe that the proofs of the bounds
$\displaystyle A_{n}$
$\displaystyle\leq\sum_{k=1}^{K}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}\Bigr{)}^{2}\sum_{i\in
S_{k}}N_{i}^{2}\|\Omega_{i}\|_{3}^{3}$ $\displaystyle B_{n}$
$\displaystyle\leq\sum_{k=1}^{K}\|\Sigma_{k}\|_{F}^{2}$ $\displaystyle E_{n}$
$\displaystyle\leq\sum_{k=1}^{K}\|\Sigma_{k}\|_{F}^{2}$
derived in (A.76), (A.77), and (A.78), only use the assumption that
$N_{i},M_{i}\geq 2$ for all $i$.
Translating these bounds to the notation of the $K=2$ case, we have
$\displaystyle A_{n}$
$\displaystyle\leq\sum_{i}N_{i}^{2}\|\Omega_{i}\|_{3}^{3}+\sum_{i}M_{i}^{2}\|\Gamma_{i}\|_{3}^{3}$
$\displaystyle B_{n}$
$\displaystyle\leq\|\Sigma_{1}\|_{F}^{2}+\|\Sigma_{2}\|_{F}^{2}$
$\displaystyle E_{n}$
$\displaystyle\leq\|\Sigma_{1}\|_{F}^{2}+\|\Sigma_{2}\|_{F}^{2}.$ (A.84)
Furthermore, we know that $\Theta_{n}\geq c\|\mu\|^{2}$ under the null by
Lemma A.8, for an absolute constant $c>0$. Combining this with (A.83) and
(A.84) completes the proof. ∎
### A.14 Proof of Lemma A.10
Define
$\displaystyle V_{1}$ $\displaystyle=2\sum_{k=1}^{K}\sum_{i\in
S_{k}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}\biggl{[}\frac{N_{i}X_{ij}^{2}}{N_{i}-1}-\frac{N_{i}X_{ij}(N_{i}-X_{ij})}{(N_{i}-1)^{2}}\biggr{]}$
$\displaystyle V_{2}$ $\displaystyle=\frac{2}{n^{2}\bar{N}^{2}}\sum_{1\leq
k\neq\ell\leq K}\sum_{i\in S_{k}}\sum_{m\in
S_{\ell}}\sum_{j=1}^{p}X_{ij}X_{mj}$ $\displaystyle V_{3}$
$\displaystyle=2\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i\in S_{k},m\in
S_{k},\\\ i\neq
m\end{subarray}}\sum_{j=1}^{p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}X_{ij}X_{mj}.$
Observe that $V_{1}+V_{2}+V_{3}=V$. Also define
$\displaystyle A_{11}$
$\displaystyle=\sum_{i}\sum_{r=1}^{N_{i}}\sum_{j}\big{[}\frac{4\theta_{i}\Omega_{ij}}{N_{i}}\big{]}Z_{ijr}$
(A.85) $\displaystyle A_{12}$
$\displaystyle=2\sum_{i}\sum_{r=1}^{N_{i}}\sum_{j}\big{[}\sum_{m\in[n]\backslash\\{i\\}}\alpha_{im}N_{m}\Omega_{mj}\big{]}Z_{ijr}$
(A.86)
and observe that $A_{11}+A_{12}=A_{1}$.
First, we derive the decomposition of $V_{1}$. Recall that
$Y_{ij}:=\frac{X_{ij}}{N_{i}}-\Omega_{ij}=\frac{1}{N_{i}}\sum_{r=1}^{N_{i}}Z_{ijr},\qquad
Q_{ij}:=Y_{ij}^{2}-\mathbb{E}Y^{2}_{ij}=Y_{ij}^{2}-\frac{\Omega_{ij}(1-\Omega_{ij})}{N_{i}}.$
(A.87)
With these notations, $X_{ij}=N_{i}(\Omega_{ij}+Y_{ij})$ and
$N_{i}Y_{ij}^{2}=N_{i}Q_{ij}+\Omega_{ij}(1-\Omega_{ij})$.
Write
$V_{1}=2\sum_{i=1}^{n}\sum_{j=1}^{p}\frac{\theta_{i}}{N_{i}}\Delta_{ij},\qquad\mbox{where}\quad\Delta_{ij}:=\frac{X^{2}_{ij}}{N_{i}}-\frac{X_{ij}(N_{i}-X_{ij})}{N_{i}(N_{i}-1)}.$
(A.88)
Note that $X_{ij}=N_{i}(\Omega_{ij}+Y_{ij})$ and
$Y_{ij}^{2}=Q_{ij}+N_{i}^{-1}\Omega_{ij}(1-\Omega_{ij})$. It follows that
$\frac{X_{ij}^{2}}{N_{i}}=N_{i}\Omega_{ij}^{2}+2N_{i}\Omega_{ij}Y_{ij}+N_{i}Q_{ij}+\Omega_{ij}(1-\Omega_{ij}).$
In (A.32), we have shown that
$Q_{ij}=(1-2\Omega_{ij})\frac{Y_{ij}}{N_{i}}+\frac{1}{N^{2}_{i}}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}$. It follows that
$\frac{X_{ij}^{2}}{N_{i}}=N_{i}\Omega_{ij}^{2}+2N_{i}\Omega_{ij}Y_{ij}+(1-2\Omega_{ij})Y_{ij}+\frac{1}{N_{i}}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}+\Omega_{ij}(1-\Omega_{ij}).$
Additionally, by (A.33),
$\frac{X_{ij}(N_{i}-X_{ij})}{N_{i}(N_{i}-1)}=\Omega_{ij}(1-\Omega_{ij})+(1-2\Omega_{ij})Y_{ij}-\frac{1}{N_{i}(N_{i}-1)}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}.$
Combining the above gives
$\displaystyle\Delta_{ij}=N_{i}\Omega_{ij}^{2}+2N_{i}\Omega_{ij}Y_{ij}+\frac{1}{N_{i}-1}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}$ (A.89)
$\displaystyle=N_{i}\Omega_{ij}^{2}+2\Omega_{ij}\sum_{r=1}^{N_{i}}Z_{ijr}+\frac{1}{N_{i}-1}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}.$ (A.90)
Recall the definition of $\Theta_{n2}$ in (A.7), $A_{2}$ in (A.19), and
$A_{11}$ in (A.85). We have
$\displaystyle V_{1}$ $\displaystyle=2\sum_{k,i\in
S_{k}}\sum_{j}\frac{\theta_{i}}{N_{i}}\big{[}N_{i}\Omega_{ij}^{2}+2\Omega_{ij}\sum_{r=1}^{N_{i}}Z_{ijr}+\frac{1}{N_{i}-1}\sum_{1\leq
r\neq s\leq N_{i}}Z_{ijr}Z_{ijs}\big{]}$
$\displaystyle=\Theta_{n2}+\sum_{k,i\in
S_{k}}\sum_{j}\frac{4\theta_{i}\Omega_{ij}}{N_{i}}\sum_{r=1}^{N_{i}}Z_{ijr}+\sum_{k,i\in
S_{k}}\sum_{j}\frac{2\theta_{i}}{N_{i}(N_{i}-1)}\sum_{1\leq r\neq s\leq
N_{i}}Z_{ijr}Z_{ijs}$ $\displaystyle=\Theta_{n2}+A_{11}+A_{2}.$ (A.91)
Next, we have
$\displaystyle V_{2}+V_{3}$ $\displaystyle=\sum_{i\neq
m}\alpha_{im}N_{i}N_{m}\sum_{j}\bigg{[}(Y_{ij}+\Omega_{ij})(Y_{mj}+\Omega_{mj})\bigg{]}$
$\displaystyle=\sum_{i\neq
m}\alpha_{im}N_{i}N_{m}\sum_{j}Y_{ij}Y_{mj}+2\sum_{i\neq
m}\alpha_{im}N_{i}N_{m}\sum_{j}Y_{ij}\Omega_{mj}+\sum_{i\neq
m}\alpha_{im}N_{i}N_{m}\sum_{j}\Omega_{ij}\Omega_{mj}$
$\displaystyle=\sum_{i\neq
m}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}\alpha_{im}\big{(}\sum_{j}Z_{ijr}Z_{mjs}\big{)}+2\sum_{i}\sum_{r=1}^{N_{i}}\sum_{j}\big{[}\sum_{m\in[n]\backslash\\{i\\}}\alpha_{im}N_{m}\Omega_{mj}\big{]}Z_{ijr}+\Theta_{n3}+\Theta_{n4}$
$\displaystyle=A_{3}+A_{12}+\Theta_{n3}+\Theta_{n4}.$
Hence
$\displaystyle A_{1}+A_{2}+A_{3}+\Theta_{n2}+\Theta_{n3}+\Theta_{n4}=V,$
which verifies (A.21). By inspection, we also see that $\mathbb{E}A_{b}=0$ for
$b\in\\{1,2,3\\}$. That $A_{1},A_{2},A_{3}$ are mutually uncorrelated follows
immediately from the linearity of expectation and the fact that the random
variables $\\{Z_{ijr}\\}_{i,r}\cup\\{Z_{ijr}Z_{mjs}\\}_{(i,r)\neq(m,s)}$ are
mutually uncorrelated.
∎
### A.15 Proof of Lemma A.11
Define
$\displaystyle\gamma_{irj}=\frac{4\theta_{i}\Omega_{ij}}{N_{i}}+\sum_{m\in[n]\backslash\\{i\\}}2\alpha_{im}N_{m}\Omega_{mj}$
(A.92)
and recall that $A_{1}=\sum_{i}\sum_{r\in[N_{i}]}\sum_{j}\gamma_{irj}Z_{ijr}$.
First we develop a bound on $\gamma_{irj}$. Suppose that $i\in S_{k}$. Then we
have
$\displaystyle\gamma_{irj}$
$\displaystyle\lesssim\frac{N_{i}\Omega_{ij}}{n_{k}^{2}\bar{N}_{k}^{2}}+\sum_{m\in
S_{k},m\neq
i}\frac{N_{m}\Omega_{mj}}{n_{k}^{2}\bar{N}_{k}^{2}}+\sum_{k^{\prime}\in[K]\backslash\\{k\\}}\sum_{m\in
S_{k^{\prime}}}\frac{N_{m}\Omega_{mj}}{n^{2}\bar{N}^{2}}$
$\displaystyle\lesssim\frac{\mu_{kj}}{n_{k}\bar{N}_{k}}+\frac{\mu_{j}}{n\bar{N}}.$
Next using properties of the covariance matrix of a multinomial vector, we
have
$\displaystyle\mathrm{Var}(A_{1})$
$\displaystyle=\sum_{i,r\in[N_{i}]}\mathrm{Var}(\gamma_{ir:}^{\prime}Z_{i:r})=\sum_{i,r\in[N_{i}]}\gamma_{ir:}^{\prime}\text{Cov}(Z_{i:r})\gamma_{ir:}$
$\displaystyle\leq\sum_{i,r\in[N_{i}]}\gamma_{ir:}^{\prime}\text{diag}(\Omega_{i:})\gamma_{ir:}=\sum_{i,r\in[N_{i}]}\sum_{j}\Omega_{ij}\gamma_{irj}^{2}$
$\displaystyle\lesssim\sum_{k,j}\big{(}\frac{\mu_{kj}}{n_{k}\bar{N}_{k}}+\frac{\mu_{j}}{n\bar{N}}\big{)}^{2}\sum_{i\in
S_{k},r\in[N_{i}]}\Omega_{ij}$
$\displaystyle\lesssim\sum_{k,j}\big{(}\frac{\mu_{kj}}{n_{k}\bar{N}_{k}}\big{)}^{2}n_{k}\bar{N}_{k}\mu_{kj}+\sum_{k,j}\big{(}\frac{\mu_{j}}{n\bar{N}}\big{)}^{2}n_{k}\bar{N}_{k}\mu_{kj}$
$\displaystyle=(\sum_{k}\frac{\|\mu_{k}\|_{3}^{3}}{n_{k}\bar{N}_{k}})+\frac{\|\mu\|_{3}^{3}}{n\bar{N}}\lesssim\sum_{k}\frac{\|\mu_{k}\|_{3}^{3}}{n_{k}\bar{N}_{k}},$
(A.93)
which proves the first claim. The last inequality follows by Jensen’s
inequality (noting that the function $x\mapsto x^{3}$ is convex for $x\geq
0$),
$\displaystyle\|\mu\|_{3}^{3}$
$\displaystyle=\sum_{j}\bigg{(}\sum_{k}(\frac{n_{k}\bar{N}_{k}}{n\bar{N}})\mu_{kj}\bigg{)}^{3}\leq\sum_{j}\sum_{k}(\frac{n_{k}\bar{N}_{k}}{n\bar{N}})\mu_{kj}^{3}\leq\sum_{k}\|\mu_{k}\|_{3}^{3}.$
Next observe that
$\displaystyle A_{2}=\sum_{i}\sum_{r\neq
s}\frac{2\theta_{i}}{N_{i}(N_{i}-1)}W_{irs}$ (A.94)
where recall $W_{irs}=\sum_{j}Z_{ijr}Z_{ijs}$. Also recall that $W_{irs}$ and
$W_{i^{\prime}r^{\prime}s^{\prime}}$ are uncorrelated unless $i=i^{\prime}$
and $\\{r,s\\}=\\{r^{\prime},s^{\prime}\\}$. By (A.55),
$\displaystyle\mathrm{Var}(A_{2})$ $\displaystyle=\sum_{i}\sum_{r\neq
s}\frac{4\theta_{i}^{2}}{N_{i}^{2}(N_{i}-1)^{2}}\mathrm{Var}(W_{irs})$
$\displaystyle\lesssim\sum_{i}\sum_{r\neq
s}\frac{4\theta_{i}^{2}}{N_{i}^{2}(N_{i}-1)^{2}}\|\Omega_{i}\|^{2}$
$\displaystyle\lesssim\sum_{k}\sum_{i\in
S_{k}}(\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}})^{4}\frac{N_{i}^{6}}{(N_{i}-1)^{2}}\cdot\frac{1}{N_{i}(N_{i}-1)}\|\Omega_{i}\|^{2}$
$\displaystyle\lesssim\sum_{k}\sum_{i\in
S_{k}}\frac{N_{i}^{2}}{n_{k}^{4}\bar{N}_{k}^{4}}\|\Omega_{i}\|^{2}.$ (A.95)
Also observe that
$\displaystyle\sum_{k}\frac{1}{n_{k}^{4}\bar{N}_{k}^{4}}\sum_{i\in
S_{k}}N_{i}^{2}\|\Omega_{i}\|_{2}^{2}$
$\displaystyle\leq\sum_{k}\frac{1}{n_{k}^{2}\bar{N}_{k}^{2}}\sum_{i,m\in
S_{k}}\bigg{\langle}(\frac{N_{i}}{n_{k}\bar{N}_{k}})\Omega_{i},(\frac{N_{m}}{n_{k}\bar{N}_{k}})\Omega_{m}\bigg{\rangle}$
$\displaystyle=\sum_{k}\frac{1}{n_{k}^{2}\bar{N}_{k}^{2}}\|\mu_{k}\|^{2}.$
This establishes the second claim.
Last we study $A_{3}$. Observe that
$\displaystyle A_{3}=\sum_{i\neq
m}\sum_{r=1}^{N_{i}}\sum_{s=1}^{N_{m}}\alpha_{im}V_{irms}$
where recall $V_{irms}=\sum_{j}Z_{ijr}Z_{mjs}$. Recall that $V_{irms}$ and
$V_{i^{\prime}r^{\prime}m^{\prime}s^{\prime}}$ are uncorrelated unless
$(r,s)=(r^{\prime},s^{\prime})$ and $\\{i,m\\}=\\{i^{\prime},m^{\prime}\\}$.
By (A.60),
$\displaystyle\mathrm{Var}(A_{3})$ $\displaystyle\lesssim\sum_{i\neq
m}\alpha_{im}^{2}N_{i}N_{m}\sum_{j}\Omega_{ij}\Omega_{mj}$
$\displaystyle\lesssim\sum_{k}\sum_{i\neq m\in
S_{k}}\frac{1}{n_{k}^{4}\bar{N}_{k}^{4}}\langle
N_{i}\Omega_{i},N_{m}\Omega_{m}\rangle+\sum_{k\neq\ell}\sum_{i\in S_{k},m\in
S_{\ell}}\frac{1}{n^{4}\bar{N}^{4}}\langle
N_{i}\Omega_{i},N_{m}\Omega_{m}\rangle$
$\displaystyle\lesssim\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}+\sum_{k,\ell}\frac{1}{n^{4}\bar{N}^{4}}\langle
n_{k}\bar{N}_{k}\mu_{k},n_{\ell}\bar{N}_{\ell}\mu_{\ell}\rangle$
$\displaystyle\lesssim\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}+\frac{\|\mu\|^{2}}{n^{2}\bar{N}^{2}}\lesssim\sum_{k}\frac{\|\mu_{k}\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}.$
(A.96)
In the last line we use that $\|\mu\|^{2}\leq 2\sum\|\mu_{k}\|^{2}$ as shown
in (A.72). This proves all required claims. ∎
### A.16 Proof of Proposition A.1
Under the null hypothesis, we have $\Theta_{n1}\equiv 0$. Thus,
$\mathbb{E}V=\Theta_{n}$ under the null by Lemma A.10. Under (3.1), we have
$\mathrm{Var}(T)=[1+o(1)]\Theta_{n}$. Therefore,
$\displaystyle\mathbb{E}V=[1+o(1)]\mathrm{Var}(T),$ (A.97)
so $V$ is asymptotically unbiased under the null. Furthermore, by Lemma A.6,
we have
$\displaystyle\Theta_{n}\asymp K\|\mu\|^{2}.$ (A.98)
By Lemma A.11, we have
$\mathrm{Var}(A_{1})\lesssim\sum_{k}\|\mu_{k}\|_{3}^{3}/(n_{k}\bar{N}_{k})$
and
$\mathrm{Var}(A_{2})+\mathrm{Var}(A_{3})\lesssim\sum_{k}\|\mu_{k}\|^{2}/(n_{k}^{2}\bar{N}_{k}^{2})$.
Since $V-\mathbb{E}V=A_{1}+A_{2}+A_{3}$ with the $A_{b}$ mutually
uncorrelated (Lemma A.10), and $\mu_{k}=\mu$ for all $k$ under the null, it
follows that
$\displaystyle\mathrm{Var}(V)\lesssim\sum_{k}\frac{\|\mu\|^{2}}{n_{k}^{2}\bar{N}_{k}^{2}}\vee\sum_{k}\frac{\|\mu\|_{3}^{3}}{n_{k}\bar{N}_{k}}.$
(A.99)
By Chebyshev’s inequality, (A.98), (A.99), and assumption (A.22) of the
theorem statement, we have
$\displaystyle\frac{|V-\mathbb{E}V|}{\mathrm{Var}(T)}\asymp\frac{|V-\mathbb{E}V|}{K\|\mu\|^{2}}=o_{\mathbb{P}}(1).$
Thus by (A.97),
$\displaystyle\frac{V}{\mathrm{Var}(T)}$
$\displaystyle=\frac{(V-\mathbb{E}V)}{\mathrm{Var}(T)}+\frac{\mathbb{E}V}{\mathrm{Var}(T)}=o_{\mathbb{P}}(1)+[1+o(1)],$
as desired. ∎
### A.17 Proof of Lemma A.12
By Lemmas A.1–A.5, we have
$\displaystyle\mathrm{Var}(T)$
$\displaystyle=\sum_{a=1}^{4}\mathrm{Var}(\mathbf{1}_{p}^{\prime}U_{a})\geq(\sum_{a=2}^{4}\Theta_{na})-(A_{n}+B_{n}+E_{n}).$
(A.100)
Using that $\max_{i}\|\Omega_{i}\|_{\infty}\leq 1-c_{0}$, we have
$\|\Omega_{i}\|_{3}^{3}\leq(1-c_{0})\|\Omega_{i}\|^{2}$, which implies that
$\displaystyle A_{n}\leq(1-c_{0})\Theta_{n2}.$ (A.101)
Again using $\max_{i}\|\Omega_{i}\|_{\infty}\leq 1-c_{0}$, as well as
$\sum_{j^{\prime}}\Omega_{ij^{\prime}}=1$, we have
$\displaystyle B_{n}$
$\displaystyle=\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in
S_{\ell}}\sum_{j,j^{\prime}}N_{i}N_{m}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$
$\displaystyle\leq(1-c_{0})\cdot\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in
S_{\ell}}\sum_{j,j^{\prime}}N_{i}N_{m}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}$
$\displaystyle=(1-c_{0})\cdot\frac{2}{n^{2}\bar{N}^{2}}\sum_{k\neq\ell}\sum_{i\in
S_{k}}\sum_{m\in S_{\ell}}\sum_{j}N_{i}N_{m}\Omega_{ij}\Omega_{mj}$
$\displaystyle\leq(1-c_{0})\cdot\Theta_{n3}.$ (A.102)
Similarly, to control $E_{n}$, we again use
$\max_{i}\|\Omega_{i}\|_{\infty}\leq 1-c_{0}$ and obtain
$\displaystyle E_{n}$ $\displaystyle=2\sum_{k}\sum_{\begin{subarray}{c}i\in
S_{k},m\in S_{k},\\\ i\neq m\end{subarray}}\sum_{1\leq j,j^{\prime}\leq
p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}N_{i}N_{m}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}\Omega_{mj^{\prime}}$
$\displaystyle\leq(1-c_{0})\cdot 2\sum_{k}\sum_{\begin{subarray}{c}i\in
S_{k},m\in S_{k},\\\ i\neq m\end{subarray}}\sum_{1\leq j,j^{\prime}\leq
p}\Bigl{(}\frac{1}{n_{k}\bar{N}_{k}}-\frac{1}{n\bar{N}}\Bigr{)}^{2}N_{i}N_{m}\Omega_{ij}\Omega_{ij^{\prime}}\Omega_{mj}$
# Mitigating Inappropriateness in Image Generation:
Can there be Value in Reflecting the World’s Ugliness?
Manuel Brack, Felix Friedrich, Patrick Schramowski, Kristian Kersting
###### Abstract
Text-conditioned image generation models have recently achieved astonishing
results in image quality and text alignment and are consequently employed in a
fast-growing number of applications. Since they are highly data-driven,
relying on billion-sized datasets randomly scraped from the web, they also
reproduce inappropriate human behavior. Specifically, we demonstrate
inappropriate degeneration on a large scale for various generative text-to-
image models, thus motivating the need for monitoring and moderating them at
deployment. To this end, we evaluate mitigation strategies at inference to
suppress the generation of inappropriate content. Our findings show that we
can use models’ representations of the world’s ugliness to align them with
human preferences.
Warning: This paper contains sexually explicit imagery, discussions of
pornography, and other content that some readers may find disturbing,
distressing, and/or offensive.
## 1 Introduction
Next to text-generative models such as ChatGPT, image-generative models are
becoming increasingly prevalent and seeing growing adoption in commercial
services such as stock imagery and graphic design. Due to their large-scale
unsupervised learning, they retain general knowledge implicitly present in the
data and are able to generate high-fidelity images that faithfully interpret
users’ prompts. However, their learning setup, which includes large-scale
unfiltered data (schuhmann2022laion; birhane2021multimodal), also leads to
degenerated and biased behavior (schramowski2022safe), calling for mitigation
strategies and the moderation of generative models in deployed systems.
Consequently, before the deployment of image-generative models, it is crucial
to not only validate their quality but also ensure their safety. This
necessitates the assessment of appropriate guardrails, which should be
tailored to the specific application at hand. Previous work in this domain has
primarily relied on anecdotal evidence, lacking quantifiable measures that
take multiple models and architectures into account. Indeed,
schramowski2022safe proposed an empirical benchmark but limited their
evaluation to a single Stable Diffusion version.
To help the development of effective mitigation strategies and moderation
techniques for image-generative models in real-world systems, we here present
a comprehensive assessment of inappropriate degeneration across numerous open-
source models and architectures. More precisely, we investigate how
effectively these models can be instructed to suppress inappropriate content
using the knowledge obtained about the world’s ugliness. Our findings suggest
that safety mitigation of text-to-image generators can be performed through
direct instructions at inference for various types of models. In total, we
generated and evaluated over 1.5M images for 11 different models, thereby
providing a large-scale investigation of the topic.
Figure 1: Examples of inappropriate degeneration and their mitigation across
various models. From left to right each batch shows the original image and the
instructed ones with Sega and negative prompting. Prompts are taken from the
inappropriate-image-prompts (I2P) dataset. Images displaying nudity were
blurred by the authors. (Best viewed in color)
## 2 Instructing Models on the World’s Ugliness
Visual Moderation. There exist multiple approaches for mitigating
inappropriate degeneration of generative models. Previous research has
identified four major methods. The first approach involves filtering the
training data to remove problematic content entirely (nichol2022glide).
However, large-scale dataset filtering can have unexpected side effects on
downstream performance as demonstrated by nichol2022glide. Moreover,
determining what constitutes inappropriate content is highly subjective and
dependent on various external factors such as individual and societal norms as
well as the specific use case of the application. Developing a dedicated model
with data filtering tailored to each definition of inappropriateness is
difficult, if not impractical, particularly as it would require retraining
pre-existing models from scratch. To overcome this limitation, a second
approach involves finetuning a pre-trained model to erase inappropriate
concepts (gandikota2023erasing). While this method requires lower
computational resources compared to training an entire model, it is still
constrained in its ability to account for diverse definitions of
inappropriateness. Another relevant approach, particularly for deployed
applications, involves implementing input and output
filters (https://www.technologyreview.com/2023/02/24/1069093/). In hosted
inference services, input prompts are typically filtered for banned keywords,
and the generated images are scanned for inappropriate content before being
presented to users. Although this approach restricts the availability of
unwanted content, it has some drawbacks. schramowski2022safe have demonstrated
that inappropriate degeneration can occur unexpectedly for prompts lacking
explicit descriptions of any problematic concepts. Therefore, input filters
are prone to missing these implicit correlations. Additionally, the generation
and subsequent discarding of images not only wastes computational resources
but can also result in a frustrating user experience.
In contrast, we here explore the idea of leveraging a model’s learned
representations of inappropriate content for mitigation of such material. We
focus on explicit instruction approaches that provide textual descriptions to
the model regarding concepts to avoid during the image generation process.
This results in both high flexibility and customizability, as the instruction
prompt can be easily modified to adapt to different requirements.
Consequently, the user remains involved in the process and the method enables
seamless deployment across various architectures. As such they also facilitate
large-scale evaluation across models.
Classifier Free Guidance. Before going into detail on different instruction
methods for image generation, we need to establish some fundamentals of text-
to-image diffusion models (DMs). Intuitively, image generation starts from
random noise $\epsilon$, and the model predicts an estimate of this noise
$\tilde{\epsilon}_{\theta}$ to be subtracted from the initial values. This
results in a high-fidelity image $x$ without any noise. Since this is a
complex problem, multiple steps are applied, each subtracting a small amount
($\epsilon_{t}$) of the predictive noise, approximating $\epsilon$. For text-
to-image generation, the model’s $\epsilon$-prediction is conditioned on a
text prompt $p$ and results in an image faithful to that prompt. To that end,
DMs employ classifier-free guidance (ho2022classifier), a conditioning method
using a purely generative diffusion model, eliminating the need for an
additional pre-trained classifier. The noise estimate
$\tilde{\epsilon}_{\theta}$ uses an unconditioned $\epsilon$-prediction
$\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t})$ which is pushed in the direction
of the conditioned estimate
$\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},\mathbf{c}_{p})$ to yield an image
faithful to prompt $p$.
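To make this combination explicit, here is a minimal sketch of a single classifier-free-guided noise estimate (the function interface and the guidance scale are our own illustrative choices, not taken from the text above):

```python
def cfg_epsilon(eps_model, z_t, t, c_prompt, guidance_scale=7.5):
    """Schematic classifier-free guidance step: push the unconditioned
    noise estimate toward the prompt-conditioned one (ho2022classifier)."""
    eps_uncond = eps_model(z_t, t, cond=None)      # eps_theta(z_t)
    eps_prompt = eps_model(z_t, t, cond=c_prompt)  # eps_theta(z_t, c_p)
    return eps_uncond + guidance_scale * (eps_prompt - eps_uncond)
```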
Instructing Text-to-Image Models on Safety. We now consider two different
instruction approaches extending the principles of classifier-free guidance.
Both methods rely on a secondary text prompt $s$ that describes concepts to
suppress during generation. First, negative prompting replaces the
unconditioned $\epsilon$-prediction
$\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t})$ with one conditioned on $s$:
$\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},\mathbf{c}_{s})$, thus moving away
from the inappropriate concepts. This approach is intuitive and easy to
implement, however offers limited control over the extent of content
suppression. Additionally, we use Semantic Guidance (Sega) (brack2023Sega)
which is a powerful method for image manipulation based on additional text
prompts. Sega adds an additional guidance term to $\tilde{\epsilon}_{\theta}$
that allows us to steer the generation away from $s$, while keeping changes to
the original image minimal.
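The two instruction methods then differ only in which estimates enter the guided combination; a minimal, heavily simplified sketch (interfaces and scales are ours; the actual Sega method additionally applies element-wise thresholding and warm-up, which we omit here):

```python
def negative_prompt_epsilon(eps_model, z_t, t, c_prompt, c_safety,
                            guidance_scale=7.5):
    # Negative prompting: the "unconditioned" branch is replaced by an
    # estimate conditioned on the safety prompt s, so guidance moves
    # away from the described concepts.
    eps_safety = eps_model(z_t, t, cond=c_safety)   # eps_theta(z_t, c_s)
    eps_prompt = eps_model(z_t, t, cond=c_prompt)
    return eps_safety + guidance_scale * (eps_prompt - eps_safety)

def sega_style_epsilon(eps_model, z_t, t, c_prompt, c_safety,
                       guidance_scale=7.5, safety_scale=5.0):
    # Simplified Sega-style safety guidance (brack2023Sega): keep standard
    # classifier-free guidance and add a separate term that steers the
    # generation away from the safety concept, keeping other changes small.
    eps_uncond = eps_model(z_t, t, cond=None)
    eps_prompt = eps_model(z_t, t, cond=c_prompt)
    eps_safety = eps_model(z_t, t, cond=c_safety)
    guided = eps_uncond + guidance_scale * (eps_prompt - eps_uncond)
    return guided - safety_scale * (eps_safety - eps_uncond)
```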
Table 1: Text-to-image models are prone to generate inappropriate content. Instruction methods can considerably reduce the chance of producing such material (the lower, the better). Shown are the probabilities of generating an image containing inappropriate content as classified by the combined Q16/NudeNet classifier over the I2P benchmark. We note that the Q16 classifier is rather conservative and tends to classify some unobjectionable images as inappropriate. The expected maximum inappropriateness values (the lower, the better) are bootstrap estimates of a model outputting the displayed percentage of inappropriate images at least once for 25 prompts. Subscript values indicate the standard deviation. Models evaluated are Stable Diffusion (SD) (rombach2022High) and fine-tuned variants, AltDiffusion (chen2022altclip), MultiFusion (bellagente2023multifusion), Paella (rampas2023novel), and IF by DeepFloyd.

| Model | Base: Sexual Prob | Base: Sexual Exp. Max | Base: All Prob | Base: All Exp. Max | SEGA: Sexual Prob | SEGA: Sexual Exp. Max | SEGA: All Prob | SEGA: All Exp. Max | Neg. Prompt: Sexual Prob | Neg. Prompt: Sexual Exp. Max | Neg. Prompt: All Prob | Neg. Prompt: All Exp. Max |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SD 1.4 | | | | | | | | | | | | |
# Learning behavioral context recognition with multi-stream
temporal convolutional networks
Aaqib Saeed, Tanir Ozcelebi, *Stojan Trajanovski, Johan Lukkien
Department of Mathematics and Computer Science, Eindhoven University of
Technology, Eindhoven, The Netherlands
*Philips Research, Eindhoven, The Netherlands
###### Abstract
Smart devices of everyday use (such as smartphones and wearables) are
increasingly integrated with sensors that provide immense amounts of
information about a person’s daily life such as behavior and context. The
automatic and unobtrusive sensing of behavioral context can help develop
solutions for assisted living, fitness tracking, sleep monitoring, and several
other fields. Towards addressing this issue, we raise the question: can a
machine learn to recognize a diverse set of contexts and activities in real
life through joint learning from raw multi-modal signals (e.g. accelerometer,
gyroscope and audio etc.)? In this paper, we propose a multi-stream temporal
convolutional network to address the problem of multi-label behavioral context
recognition. A four-stream network architecture handles learning from each
modality with a contextualization module which incorporates extracted
representations to infer a user’s context. Our empirical evaluation suggests
that a deep convolutional network trained end-to-end achieves an optimal
recognition rate. Furthermore, the presented architecture can be extended to
include similar sensors for performance improvements and handles missing
modalities through multi-task learning, without any manual feature
engineering, on a highly imbalanced and sparsely labeled dataset.
## Introduction
The problem of context recognition is centered on inferring a person’s
environment, physical state, and the activity performed at any particular
time. Specifically, understanding the user’s current context requires
determining where and with whom the person is, and in what type of activity
the person is involved. Behavioral and activity analysis is an
important and challenging task mainly because it is crucial for several
applications, including smart homes (?), assisted living (?; ?), fitness
tracking (?), sleep monitoring (?), user-adaptive services, social interaction
(?), and industrial settings. In particular, an accurate recognition of human context
can greatly benefit healthcare and wellbeing through automatic monitoring and
supervision of patients with chronic diseases (?) such as hypertension,
diabetes and dementia (?). Furthermore, the gathered knowledge and extracted
activity patterns can enable novel treatment design, adjustment of
medications, better behavioral intervention and patient observation strategies
(?).
In practice, a context detection system must perform unobtrusive monitoring to
be effective in real life. It is important not to distress a person
in order to capture their realistic behaviors in a natural environment. The
penetration of smart sensing devices (e.g. smartphones and wearables) that are
integrated with sophisticated sensors in our daily lives provides a great
opportunity to learn and infer about various aspects of a person’s daily life.
However, there is considerable variability in human behavior in real-world
situations that can cause the system to fail if it is developed using data
collected in a constrained environment. For instance, ? shows that the
accuracy of activity classification differs based on the interaction with the
phone e.g. when in hand or carried in the bag. The various sensors embedded in
the smart devices convey information about different ambient facets each with
a distinct prospect. The variability issues of different patterns in phone
usage, environments, and device types can be very well addressed (to improve
the recognition capability of the system) through learning disentangled
representations from a large-scale data source and fusing rich sensory
modalities rather than separately utilizing each of them.
Figure 1: Multi-modal representation learning from sensors: Schematic of the
proposed multi-stream convolutional network.
Figure 2: Context recognition dataset: Samples from large-scale multi-modal
sensory data collected in-the-wild. Panels: (a) audio (MFCC), (b)
accelerometer, (c) gyroscope. The individual plots within each sub-figure
correspond to the same set of activities/contexts.
In the past, several studies have shown great improvement in sensor processing
for basic activity recognition (?; ?). The majority of the earlier methods use
shallow learning classifiers (such as, Random Forest and Support Vector
Machine) with hand-engineered features extracted from raw sensor readings e.g.
heuristically selected statistical or frequency measures (?). Likewise, many
studies involve simulated controlled trials for data collection in lab
environments that require users to wear extra sensors. Broadly, they also
treat activity recognition as a multi-class classification problem, where a
user’s activity at a specific moment can be defined by one of the k defined
classes. On the contrary, people are not generally engaged in just one
activity in their day-to-day living e.g. a person might surf the web while
eating or talking to friends. These problems limit the applicability of these
studies to detect very few rudimentary activities and make it harder for the
system to generalize to real-life settings. Nevertheless, to be successful in
everyday scenarios, the context recognition module should support a diverse
set of activities, varying device usage, and a wide range of environments.
Importantly, it must not only learn discriminative representations directly
from raw signals without any ad-hoc feature engineering, but also seamlessly
combine the discovered explanatory factors in the milieu of diverse sensory
modalities (?).
In recent years, the fields of speech recognition, drug discovery, image
segmentation and machine translation have been tremendously revolutionized
thanks to the availability of massive labeled datasets and end-to-end deep
representation learning (?). Similarly, the domain of human activity
recognition has also started leveraging deep neural networks for automatic
feature learning (?; ?; ?) though commonly restricted to the detection of only
elementary activities such as, walking, sitting, standing etc. There has not
been the same progress in recognizing complex behavioral context in daily-life
situations using devices of daily use. This can be partially attributed to the
lack of a large labeled dataset, which is both expensive and time-consuming to
accumulate in real-world settings. We believe that large-scale sensory data
can significantly advance context recognition. This issue was recently
addressed in (?; ?), which open-sourced multi-modal data (see Figure 2) of
activities in the wild. The authors provide a baseline system for sensor
fusion and a unified model for multi-label classification. They trained
logistic regression and fully connected neural networks on hand-crafted
features that are extracted based on extensive domain-knowledge. In this
paper, we utilize this heterogeneous sensors data collected over a week from
sixty users to learn rich representations in an end-to-end fashion for
recognizing multi-label human behavioral context.
The task of learning detailed human context is challenging, especially from
imbalanced and multi-label data. Unconstrained device usage, a natural
environment, different routines, and authentic behaviors are likely to result
in a joint training dataset from several users with significant class skew (?)
and missing labels. Another challenge with learning from multi-modal signals
is developing an architecture that feasibly combines them as in diverse
environments a certain sensor might perform better than others. For instance,
if a person is watching a television with a phone lying on the table, the
sound modality may dominate in the network as compared to an accelerometer. We
address the former issue with an instance weighting scheme, following (?) (a
minimal sketch of one such scheme is given below), and the latter through a
unified architecture that can efficiently fuse representations in multiple
ways.
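One common realization of such a scheme is a mask-aware, per-label balanced cross-entropy; the sketch below is our own minimal version and is not necessarily identical to the weighting of the cited work:

```python
import numpy as np

def masked_weighted_bce(y_true, y_prob, mask, eps=1e-7):
    """Multi-label cross-entropy with per-label balancing and missing-label
    masking.

    y_true : (batch, L) binary label matrix; only valid where mask == 1.
    y_prob : (batch, L) predicted label probabilities.
    mask   : (batch, L) 1 where a label was reported, 0 where it is missing.
    """
    labeled = mask.sum(axis=0)                # labeled examples per label
    pos = (y_true * mask).sum(axis=0)         # labeled positives per label
    neg = labeled - pos                       # labeled negatives per label
    # Re-weight so positives and negatives of each label contribute equally,
    # countering the heavy class skew of in-the-wild context labels.
    w_pos = np.where(pos > 0, labeled / (2.0 * np.maximum(pos, 1)), 0.0)
    w_neg = np.where(neg > 0, labeled / (2.0 * np.maximum(neg, 1)), 0.0)
    w = y_true * w_pos + (1.0 - y_true) * w_neg
    ll = (y_true * np.log(y_prob + eps)
          + (1.0 - y_true) * np.log(1.0 - y_prob + eps))
    return -(mask * w * ll).sum() / max(mask.sum(), 1.0)
```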
We present a deep temporal convolutional neural network (CNN) that learns
directly from various modalities through a multi-stream architecture
(accelerometer, gyroscope, sound and phone state networks). Here, a separate
network facilitates learning from each modality and a contextualization module
incorporates all the available information to determine the user’s context
(see Figure 1). In our experiments, we show that deep multi-modal
representations learned through our network without any sophisticated pre-
processing or manual feature extraction achieve state-of-the-art performance.
The primary contribution of this paper is in showing how to leverage ample
amount of raw sensory data to learn deep cross-modal representations for
multi-label behavioral context. Although the methods in the paper are
standard, their application to a large-scale imbalanced and sparsely labeled
smartphone dataset is unique. The proposed network architecture achieves
sensitivity and specificity score of $0.767$ and $0.733$, respectively
averaged over $51$ labels and $5$-folds cross-validation. The rest of the
paper describes our technique and experiments in detail. First, we review the
related work on activity recognition. Then we present our multi-stream
temporal convolutional network, architectural modifications for handling
missing sensors, the proposed training procedure and implementation details.
Next, the description of the dataset, evaluation protocol and experimental
results are described, followed by the conclusions.
Figure 3: End-to-end multi-modal and multi-label context recognition: We
propose a deep temporal convolutional architecture for multi-label behavioral
context recognition. A separate network learns representations (features) from
each modality using depthwise-separable convolutions and contextualizes this
information through shared layers to infer the user context.
## Related Work
Human activity recognition has been extensively studied in simulated and
controlled environments. It is concerned with classifying sensor measurements
into existing activity categories. The earlier techniques are predominantly
based on applying shallow learning algorithms on manually extracted features
(e.g. statistical and spectral attributes) (?). Despite there are unsupervised
(?; ?) and supervised (?; ?; ?; ?) deep learning methods applied for automatic
feature extraction to detect activities, these approaches are fairly limited
by the amount of labeled data (of many sensing modalities) from the real-
world. Furthermore, they do not fully address the issue of multi-label context
recognition. A user state is described by only one class or label, which is
not true for activities humans perform in real-life. Moreover, only recently
the exploration has begun into joint-learning and fusing multiple modalities
for ubiquitous sensing through deep networks (?; ?). The works cited here are
by no means an exhaustive list, but provide a recent representative
advancements made in utilizing deep neural networks for activity recognition.
We recommend the interested readers to refer (?; ?) for an extensive survey of
former approaches.
A systematic analysis of several deep neural architectures for activity
recognition is provided by ?. The suitability of various models trained only
on raw accelerometer signals is investigated for different
activity classification tasks. On diverse benchmark datasets, CNN and long-
short-term memory networks are found to outperform hand-crafted features by a
significant margin. Likewise, ? proposed an approach combining pre-training
and fine-tuning of deep belief networks for sequential activity recognition.
They extracted spectrograms from a triaxial accelerometer and found them to be
helpful for capturing variations in the input. Similarly, ? used $2$D activity
images extracted from accelerometer signals as CNN input. The importance of
unsupervised training of models in feature learning and optimization is
highlighted in (?) using a combination of sparse-coding framework and semi-
supervised learning. Likewise, ? developed a multi-channel CNN model to
replace heuristic based hand-crafted features. Their analysis showed CNNs work
well compared to traditional (shallow) learning algorithms on several
datasets. Audio sensing is also employed in unconstrained acoustic
environments through applying fully connected neural networks (?). Recently, ?
used deep networks for multi-modal activity recognition and compared them with
traditional learning algorithms on various recognition tasks. Likewise,
numerous other studies also successfully utilize deep learning for the
detection of basic activities (?; ?; ?).
We differentiate ourselves from the existing approaches through utilizing a
deep multi-stream CNN (with depthwise separable convolutions) on a large and
diverse context detection dataset. Specifically, we build on previous work by
? that only employed hand-engineered features for training linear and shallow
neural networks. In contrast, our general-purpose approach allows us to train
a deeper network that can not only automatically discover hidden latent
factors, but also seamlessly combine them to achieve an end-to-end learning
system without requiring domain expertise. Moreover, through taking advantage
of multi-task learning (?) we develop an architecture that can robustly handle
missing sensors.
## Learning Multi-Modal Networks
We design a deep convolutional neural network to address the problem of
behavioral context recognition through learning representations from raw
sensory inputs. To deal with cross-modality signals i.e. accelerometer (Acc),
gyroscope (Gyro), audio (MFCC/Aud), and phone state (PS), we use a multi-
stream architecture. The network comprises five main modules as demonstrated
in Figure 3. This section describes each component, presents a strategy to
modify the proposed architecture to handle missing sensors and provides the
implementation details.
### Modality Specific Networks
We present a deep multi-modal convolutional architecture for learning context
representations. We propose to use a series of depthwise-separable
convolutions (DPS-Conv) (?) for processing different components (or channels)
of raw signals. In general, CNNs are also found to be well suited for
processing $1$D sequences due to their ability to learn translation invariant
features, scale separation, and localization of filters across time and space
(?). DPS-Conv consists of two operations, i.e. a depthwise convolution and a
pointwise (or $1$ x $1$) convolution. The depthwise convolution operates
independently over each input channel; it is followed by a $1$ x $1$
convolution that projects the channels estimated by the first step onto a
distinct channel space, producing the desired number of output filters (?).
The intuition of this
formulation falls in line with the classical procedures utilized by domain
experts to extract several features from each signal component independently
(e.g. $x$, $y$ and $z$ constituents of an accelerometer) but pointwise
convolution goes one step further and tries to learn unified factors that may
capture relationships among independent elements. Moreover, separable
convolutions make more efficient use of parameters than their classical
counterpart, and this property has made them a promising candidate for
contemporary architectures that run on smart devices with limited computing
and energy capabilities (?; ?). Formally, in case of $1$D input sequence
$\mathbf{x}$ of length $L$ with $M$ channels, the aforementioned operation can
be formulated as follows (?):
$\text{DepthwiseConv}(\mathbf{x},\mathbf{w})_{i}=\sum_{l}^{L}(\mathbf{x}[i:i+k-1]\odot\mathbf{w})_{l}$
$\text{PointwiseConv}(\mathbf{x},\mathbf{w})_{i}=\sum_{m}^{M}(\mathbf{x}[i:i+k-1]\cdot\mathbf{w})_{m}$
$\text{DepthwiseSeparableConv}(\mathbf{x},\mathbf{w_{d}},\mathbf{w_{p}})_{i}=\text{PointwiseConv}_{i}(\text{DepthwiseConv}_{i}(\mathbf{x}[i:i+k-1],\mathbf{w_{d}}),\mathbf{w_{p}})$
where $\odot$ denotes the element-wise product, $\mathbf{x}[i:j]$ represents a
segment of the complete sequence with adjacent columns from $i$ to $j$, and
$\mathbf{w}$ represents a filter with a receptive field of size $k$.
The proposed network takes four different signals as input, each with its own
disjoint pathway in the earlier layers of the network. Towards the end, these
pathways are merged into shared layers, common across all modalities, which
are described in the next subsection. This network configuration has the
benefit of not just extracting modality-specific (and channel-specific)
features but also feasibly extracting mutual representations through the
shared layers. The Acc and Gyro networks each consist of $2$
temporal convolution layers which act as feature extractors over raw signals
of dimensions $800$ x $3$. The convolution layers have kernel sizes of $64$
and $32$ with a stride of $2$ and each layer has $32$ and $64$ filters,
respectively. We use rectified linear activation in all the layers and apply
depth-wise L$2$-regularization with a rate of $0.0001$. The audio network
takes mel frequency cepstral coefficients (see Section Dataset and Modalities)
of size $420$ x $13$ as input and has a similar architecture, except that the
kernel sizes are set to $8$ and $6$ in the first and second layers,
respectively. Likewise, the discrete attributes indicating PS are fed into a
single layer fully-connected (FC) network with $64$ units and L$1$-penalty is
used on the weights with a rate of $0.0001$. Furthermore, we explore different
mechanisms to get a fixed dimension vector from each modality that can be fed
into a shared network. Specifically, we use: a) global max pooling (GMP), b)
global average pooling (GAP), c) an FC layer, and d) passing the
representations to the shared network without any transformation.
### Shared Network (Contextualization)
Given the concepts extracted from each modality, the shared network generates
a modality-agnostic representation. To achieve this, we fuse the outputs of
the earlier networks either through concatenation or by applying a standard
convolution (only for Acc, Gyro and Aud). We then feed the result into $2$ FC
layers with $2048$ and $1024$ hidden units, respectively. As before, we use
rectified linear non-linearity and L$1$-regularization with a weight decay
coefficient of $0.0001$. The final output layer contains $51$ units (one for
each label) with sigmoid activation. Figure 3 visualizes the sharing of the
network layers, where earlier layers are modality-specific but downstream
layers become more general.
### Missing Sensors
In a real-life setting, a context recognition system may encounter missing
modalities which can limit its inference capability. To make the model robust
against such a situation, we develop a multi-task network (?), where learning
from each sensor is posed as a task. The initial configuration of the model is
the same as before but an additional layer (of $128$ units for Acc, Gyro,
MFCC/Aud and $64$ units for PS) with a separate loss function is added after
only a single shared layer of $1024$ hidden units. Figure 4 provides a high-
level overview of the architecture. We employ joint-training (with a learning
rate of $0.0003$) on all the modalities through aggregating cost functions of
each model in order to obtain a total loss. This architectural configuration
not only allows the model to learn independent and shared factors, but also
enables inference even when some of the sensors are missing. It does so by
averaging (optionally with weights) the probabilities produced by the
individual networks, as sketched below.
Figure 4: Handling Missing Sensors with a Multi-task Network: A variant of the
earlier defined architecture with additional task (modality-specific) layers
and a separate loss function for each modality. It is able to recognize user
context even if only one sensor is producing data and the others are
unavailable.
### Implementation and Training Details
The networks are implemented in Tensorflow (?) and the models are learned from
scratch; initializing the weights with Xavier technique (?). Dropout (?) is
applied on the hidden layers with a probability of $0.2$. We use the Adam
optimizer with a learning rate of $0.0001$ (unless mentioned otherwise) and
use a batch size of $100$. We optimize the model weights for a fixed number of
iterations (i.e. $15000$) with mini-batch stochastic gradient descent and
backpropagation using instance-weighted cross-entropy objective function:
$\mathcal{J}_{C}=\dfrac{1}{NC}\sum_{i=1}^{N}\sum_{c=1}^{C}\Psi_{i,c}\cdot\mathcal{L}_{ce}(\hat{y}_{i,c},y_{i,c})$
$\mathcal{L}_{ce}(\hat{y},y)=-[y\log(\hat{y})+(1-y)\log(1-\hat{y})]$
where $\mathcal{L}_{ce}$ is the binary cross-entropy loss, and $\Psi$ is an
instance-weighting matrix of size $N$ x $C$ (i.e. number of training examples
and total labels, respectively). The instance weights in $\Psi$ are assigned
by inverse class frequency. Likewise, the entries for missing labels are set
to zero, so that such examples contribute nothing to the overall cost.
## Experimental Results
We conduct several experiments to analyze the capability of the proposed
method. First, we provide a brief description of the utilized dataset and
signals. Second, we describe the evaluation approach and metrics used to
determine the model’s performance on a multi-label and imbalanced dataset.
Finally, we discuss our empirical observations, effect of different
modalities’ representation, comparison of various procedures to learn shared
factors and visualization of the internal representation.
### Dataset and Modalities
We choose to learn discriminative representations directly from raw Acc, Gyro,
Aud/MFCC and PS attributes from a smartphone because of their wide
adoptability and ubiquity. For this purpose, we chose to leverage the
ExtraSensory Dataset (?), since it is collected in a natural environment from
users’ personal devices. The experimental setup was not scripted; data
collection was performed while participants were busy with their daily
routines, so as to capture varied activities and context combinations under
in-the-wild conditions. This data
source contains over $300,000$ multi-labeled instances (with classes such as
‘outside’, ‘at a restaurant’, ‘with friends’ from a total of $51$ labels) from
sixty users. The complete data collection protocol is described in (?). Here,
we provide a high-level overview of the signals that we used in this study.
Samples of $20$ seconds duration are collected every minute from the tri-axial
Acc and Gyro at a sampling frequency of $40$ Hz, and mel frequency cepstral
coefficients (MFCCs) are extracted over $46$ ms frames from Aud recorded at
$22,050$ Hz. Likewise, several binary phone state features are collected, such
as those specifying time of day, battery level, ringer mode and Wi-Fi
connection. A few randomly selected samples of these signals are illustrated
in Figure 2.
We seek to process raw sensory values without manual feature engineering.
Thus, the only pre-processing we apply is to transform variable-length inputs
to an identical temporal length. For this purpose, the MFCCs of environmental
audio are repeated (along the time dimension) to obtain equal-size inputs;
this is reasonable for ambient soundscapes, as we are not particularly
interested in inferring a specific sound event. Similarly, the Acc and Gyro
samples of varying sizes are zero-padded, and instances where the MFCC length
is shorter than twenty frames are discarded. Furthermore, we treat Acc, Gyro
and Aud as $m$-channel inputs ($3$, $3$, and $13$ channels, respectively), as
this allows us to efficiently learn independent factors from every sensor
axis, thus maximally utilizing the large-scale dataset.
### Evaluation and Metrics
Our models are evaluated with five-fold cross-validation using the same
divisions of sixty users as in (?), where training and test folds contain $48$
and $12$ users, respectively. For hyper-parameter optimization, we use nested
cross-validation (?) by randomly dividing the training-fold data into training
and validation sets with a ratio of $80$-$20$. After hyper-parameter selection,
we train our models on the complete dataset of training folds (individually,
each time from scratch) and calculate metrics on the testing folds.
Furthermore, as mentioned earlier, the considered dataset is highly imbalanced
with sparse labels. In this case, naive accuracy is misleading, as it does not
take underrepresented classes into account. Similarly, precision and f$1$-score
are also likely to be affected by the class skew due to the involvement of
true positives in the denominator. Hence, we
adopt a metric named balanced accuracy (BA) (?) as used in (?), which
incorporates both recall (or true positive rate) and true negative rate:
$\text{BA}=\frac{\text{Sensitivity}+\text{Specificity}}{2}$. BA can be
interpreted as the average accuracy achieved on either class (positive or
negative in binary classification). It stays identical to traditional accuracy
if a model performs equally well on each class, but drops to random chance
(i.e. $50$%) if a classifier performs poorly on a class with few instances
(?). We calculate BA for each label independently and average the scores
afterwards to obtain a trustworthy measure of the model’s overall performance,
as illustrated below.
### Results and Analysis
#### Analysis of Fusing Multi-Modal Representations:
We quantify the effect of different procedures for getting a fixed dimension
feature vector from each modality-specific network and examine their fusion
through different configurations of the shared network. It is important to
note that we keep the rest of the network’s configuration the same and only
change the layers under consideration. Table 1 provides the averaged metric
scores over $51$ contextual labels and $5$-folds as a result of applying
global (max and average) pooling, using FC layer or simply feeding the
extracted representations to the shared network for further processing. For
the latter, we explore learning mutual representation from Acc, Gyr, and
Aud/MFCC through an additional standard convolution layer and compare its
performance with directly using flattened representations. Our experiments
suggest that global max pooling (GMP) over each modality’s features
outperforms the other techniques, achieving a BA of $0.750$ with a sensitivity
of $0.767$. We believe the reason is that GMP picks up high-level
shift-invariant features that are the most discriminative. Figure 5 presents
per-label metrics for this network on all $51$ labels in the dataset.
Specifically, we notice that the majority of the labels have a BA score in the
range of $70\%$-$80\%$.
Figure 5: Performance metrics per label of the best performing model (with GMP): The scores are averaged over $5$-folds cross-validation.

Table 1: Multi-modal context recognition: The metrics are reported for $5$-folds cross-validation averaged over $51$ class labels. BA stands for balanced accuracy.

Fusion | BA | Sensitivity | Specificity
---|---|---|---
GMP | 0.750 ($\pm$ 0.012) | 0.767 ($\pm$ 0.015) | 0.733 ($\pm$ 0.016)
GAP | 0.748 ($\pm$ 0.009) | 0.753 ($\pm$ 0.012) | 0.742 ($\pm$ 0.015)
FC | 0.744 ($\pm$ 0.009) | 0.735 ($\pm$ 0.014) | 0.753 ($\pm$ 0.008)
Flattened | 0.742 ($\pm$ 0.014) | 0.734 ($\pm$ 0.029) | 0.749 ($\pm$ 0.007)
Conv | 0.738 ($\pm$ 0.011) | 0.725 ($\pm$ 0.022) | 0.752 ($\pm$ 0.022)
#### Comparison of Convolution Variants:
We evaluate the complete multi-stream model by replacing only the DPS-Conv
layers with standard convolutions (Std-Conv) in the modality-specific
networks. As shown in Table 2, we did not observe major performance
differences between the two models. Nevertheless, the model with DPS-Conv
should be preferred because of its lower computational cost (?).
Table 2: Performance evaluation with different convolution layers.

Convolution | BA | Sensitivity | Specificity
---|---|---|---
Std-Conv | 0.751 ($\pm$ 0.011) | 0.750 ($\pm$ 0.017) | 0.751 ($\pm$ 0.007)
DPS-Conv | 0.750 ($\pm$ 0.012) | 0.767 ($\pm$ 0.015) | 0.733 ($\pm$ 0.016)
#### Quantifying Modality Influence:
To examine the effect of different combinations of sensors (or features
learned from them) on the recognition capability of the model, we experimented
with training several networks with modified architectures. Specifically, each
model consisted only of the layers relevant to the signals under
consideration; e.g., to evaluate a model with only Acc, Aud, and PS, we
removed the Gyro network entirely and then trained the remainder end-to-end
from scratch. Table 3 shows the evaluation results, which highlight the
importance of joint learning and fusion of multiple modalities to improve the
detection rate.
Table 3: Effect of different modalities on recognition performance.

Modalities | BA | Sensitivity | Specificity
---|---|---|---
Acc | 0.633 ($\pm$ 0.011) | 0.668 ($\pm$ 0.027) | 0.599 ($\pm$ 0.017)
Gyro | 0.639 ($\pm$ 0.011) | 0.638 ($\pm$ 0.017) | 0.640 ($\pm$ 0.020)
Aud | 0.669 ($\pm$ 0.024) | 0.731 ($\pm$ 0.028) | 0.608 ($\pm$ 0.025)
PS | 0.712 ($\pm$ 0.005) | 0.723 ($\pm$ 0.011) | 0.700 ($\pm$ 0.013)
Acc, Gyro, PS | 0.733 ($\pm$ 0.010) | 0.744 ($\pm$ 0.021) | 0.722 ($\pm$ 0.014)
Acc, Gyro, Aud | 0.708 ($\pm$ 0.010) | 0.722 ($\pm$ 0.027) | 0.693 ($\pm$ 0.012)
Acc, Aud, PS | 0.745 ($\pm$ 0.013) | 0.757 ($\pm$ 0.025) | 0.733 ($\pm$ 0.015)
Gyro, Aud, PS | 0.748 ($\pm$ 0.012) | 0.768 ($\pm$ 0.014) | 0.728 ($\pm$ 0.014)
All | 0.750 ($\pm$ 0.012) | 0.767 ($\pm$ 0.015) | 0.733 ($\pm$ 0.016)
#### Fusion and Effect of Missing Sensors:
We now evaluate the predictive performance of the modified architecture
(presented in Section Missing Sensors) when confronted with various
combinations of missing signals. Table 4 provides experimental results showing
that the proposed multi-task network can handle lost modalities, achieving BA
scores similar to those obtained when separate models are developed for each
modality (see Table 3). This flexibility comes at the price of a slightly
lower BA, but makes the model capable of operating in the face of unavailable
sensors.
Table 4: Assessment of the multi-task network for handling missing modalities. Each row provides averaged metric scores as before, using only the listed modalities to determine the user’s context. SN and SP denote sensitivity and specificity, respectively.

Modalities | BA | SN | SP
---|---|---|---
Acc | 0.634 ($\pm$ 0.008) | 0.652 ($\pm$ 0.027) | 0.616 ($\pm$ 0.013)
Gyro | 0.619 ($\pm$ 0.016) | 0.632 ($\pm$ 0.040) | 0.606 ($\pm$ 0.023)
Aud | 0.656 ($\pm$ 0.026) | 0.670 ($\pm$ 0.046) | 0.641 ($\pm$ 0.015)
PS | 0.688 ($\pm$ 0.009) | 0.709 ($\pm$ 0.015) | 0.667 ($\pm$ 0.012)
Acc, Gyro | 0.646 ($\pm$ 0.009) | 0.670 ($\pm$ 0.028) | 0.622 ($\pm$ 0.018)
Acc, Aud | 0.687 ($\pm$ 0.015) | 0.695 ($\pm$ 0.035) | 0.679 ($\pm$ 0.008)
Acc, PS | 0.708 ($\pm$ 0.007) | 0.713 ($\pm$ 0.015) | 0.702 ($\pm$ 0.012)
Gyro, Aud | 0.687 ($\pm$ 0.020) | 0.699 ($\pm$ 0.045) | 0.676 ($\pm$ 0.015)
Gyro, PS | 0.708 ($\pm$ 0.007) | 0.719 ($\pm$ 0.023) | 0.696 ($\pm$ 0.019)
Aud, PS | 0.708 ($\pm$ 0.013) | 0.717 ($\pm$ 0.027) | 0.698 ($\pm$ 0.010)
Acc, Gyro, Aud | 0.690 ($\pm$ 0.012) | 0.703 ($\pm$ 0.031) | 0.677 ($\pm$ 0.011)
Acc, Gyro, PS | 0.705 ($\pm$ 0.007) | 0.714 ($\pm$ 0.023) | 0.696 ($\pm$ 0.019)
Acc, Aud, PS | 0.721 ($\pm$ 0.007) | 0.729 ($\pm$ 0.019) | 0.712 ($\pm$ 0.011)
Gyro, Aud, PS | 0.721 ($\pm$ 0.011) | 0.730 ($\pm$ 0.030) | 0.711 ($\pm$ 0.017)
All | 0.720 ($\pm$ 0.008) | 0.728 ($\pm$ 0.025) | 0.712 ($\pm$ 0.015)
Figure 6: Assessment of instance-weighting and regularization: We determine
the impact of the cost-sensitive loss function and of regularization (i.e.
weight decay and dropout) on the network’s predictive power. The results
labeled standard use both IW and regularization.
Figure 7: t-SNE embeddings for panels (a) Environment, (b) Body State, (c)
Transportation Mode, and (d) Phone Position: We visualize the mutual features
learned through fusion of multiple modalities (from the last layer) in the
shared network. Four sets of mutually-exclusive labels are identified from the
multi-labeled data for the final visualization of semantically related
clusters extracted through t-SNE.
#### Reliance on Instance Weighting and Regularization:
Our results thus far have been obtained by training a model with the
cross-entropy loss incorporating instance weights to handle class imbalance.
To test the network’s dependence on this cost-sensitive loss function
($\mathcal{J}_{C}$), we examined the performance of a model trained without
it. As expected, the overall BA score drastically drops to random chance
(see Figure 6), with worse performance on positive samples in comparison with
the negative ones. Likewise, we also trained a model without any sort of
regularization i.e. removing dropout, L$1$ and L$2$ penalties from the
network. The average recall rate on the held-out testing folds dropped to
$0.58$, which can be an indication of overfitting the training set. Hence,
incorporating both instance-weighting (IW) and regularization improved
performance significantly in learning from this imbalanced dataset. However,
further work will be necessary to investigate other techniques for managing
(sparse) rare labels such as oversampling and data augmentation in case of
multi-labeled instances.
#### Visualization:
In order to illustrate the semantic relevance of the learned features, we
applied t-SNE (?) to project the high-dimensional data to a $2$D embedding. We
extract the embeddings from the output of the last FC layer of the shared
network (see Figure 3) by feeding in a limited (but randomly selected) subset
of the dataset. Further, as the data under consideration is multi-labeled, we
identified sets of mutually-exclusive labels (e.g. Indoors vs. Outside) that
can be used to color code the data points to visually identify meaningful
clusters. Figure 7 provides a visualization for various sets of labels
suggesting the network is able to disentangle possible factors of variation
that may distinguish a class from the rest in large-scale sensory data.
Furthermore, to gain better insight into the diversity of the features
extracted from each modality, Figure 8 visualizes the feature maps produced by
the first DPS-Conv layer of the modality-specific networks.
Figure 8: Feature Maps from Modality-Specific Networks: Illustration of
randomly selected (learned) features from first layer of convolutional
networks. (a), (b) and (c) represent outputs from Acc, Gyro and Aud models,
respectively.
## Conclusions
In this work, we tackled the problem of multi-label behavioral context
recognition with deep multi-modal convolutional neural networks. We proposed
training an end-to-end model for jointly learning from the low-level sensory
data (accelerometer, gyroscope, audio and phone state) of smart devices
collected in-the-wild. Our empirical results demonstrated various strategies
for feasibly fusing representations learned from different modalities and
quantified their contribution to the predictive performance. We also showed
that an instance-weighted cross-entropy loss (as also leveraged in (?)) and
regularization schemes enable the model to generalize well on a highly
imbalanced (sparsely labeled) dataset. Furthermore, we presented a slight
modification of the proposed network’s architecture to handle missing sensors
by taking advantage of multi-task learning. We believe the proposed
methodology is generic enough to be applied to other related problems of
learning from multivariate time series. Additionally, potential directions for
future work would involve developing techniques to handle imbalanced multi-
label data, optimal sensor selection to reduce computation and battery
consumption, and incorporating other analogous sensors to further improve the
detection rate.
## Acknowledgment

SCOTT (www.scott-project.eu) has received funding from the
Electronic Component Systems for European Leadership Joint Undertaking under
grant agreement No 737422. This Joint Undertaking receives support from the
European Union’s Horizon 2020 research and innovation programme and Austria,
Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands,
Belgium, Norway.
Various icons used in the figures are created by Anuar Zhumaev, Tim Madle,
Korokoro, Gregor Cresnar, Shmidt Sergey, Hea Poh Lin, Natalia Jacquier, Trevor
Dsouza, Adrien Coquet, Alina Oleynik, Llisole, Alena, AdbA Icons, Jeevan
Kumar, Artdabana@Design, lipi, Alex Auda Samora, and Michelle Colonna from the
Noun Project.
## References
* [Abadi et al. 2016] Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. 2016\. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, 265–283.
* [Alsheikh et al. 2016] Alsheikh, M. A.; Selim, A.; Niyato, D.; Doyle, L.; Lin, S.; and Tan, H.-P. 2016\. Deep activity recognition models with triaxial accelerometers. In AAAI Workshop: Artificial Intelligence Applied to Assistive Technologies and Smart Environments.
* [Bai, Kolter, and Koltun 2018] Bai, S.; Kolter, J. Z.; and Koltun, V. 2018\. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
* [Bengio, Courville, and Vincent 2013] Bengio, Y.; Courville, A.; and Vincent, P. 2013\. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence 35(8):1798–1828.
* [Bhattacharya et al. 2014] Bhattacharya, S.; Nurmi, P.; Hammerla, N.; and Plötz, T. 2014\. Using unlabeled data in a sparse-coding framework for human activity recognition. Pervasive and Mobile Computing 15:242–262.
* [Brodersen et al. 2010] Brodersen, K. H.; Ong, C. S.; Stephan, K. E.; and Buhmann, J. M. 2010\. The balanced accuracy and its posterior distribution. In Pattern recognition (ICPR), 2010 20th international conference on, 3121–3124. IEEE.
* [Caruana 1997] Caruana, R. 1997\. Multitask learning. Machine learning 28(1):41–75.
* [Cawley and Talbot 2010] Cawley, G. C., and Talbot, N. L. 2010\. On over-fitting in model selection and subsequent selection bias in performance evaluation. Journal of Machine Learning Research 11(Jul):2079–2107.
* [Chollet 2017] Chollet, F. 2017\. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1251–1258.
* [Figo et al. 2010] Figo, D.; Diniz, P. C.; Ferreira, D. R.; and Cardoso, J. M. 2010\. Preprocessing techniques for context recognition from accelerometer data. Personal and Ubiquitous Computing 14(7):645–662.
* [Glorot and Bengio 2010] Glorot, X., and Bengio, Y. 2010\. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, 249–256.
* [Hammerla, Halloran, and Ploetz 2016] Hammerla, N. Y.; Halloran, S.; and Ploetz, T. 2016\. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv preprint arXiv:1604.08880.
* [Hoseini-Tabatabaei, Gluhak, and Tafazolli 2013] Hoseini-Tabatabaei, S. A.; Gluhak, A.; and Tafazolli, R. 2013\. A survey on smartphone-based systems for opportunistic user context recognition. ACM Computing Surveys (CSUR) 45(3):27.
* [Jiang and Yin 2015] Jiang, W., and Yin, Z. 2015\. Human activity recognition using wearable sensors by deep convolutional neural networks. In Proceedings of the 23rd ACM international conference on Multimedia, 1307–1310. ACM.
* [Kaiser, Gomez, and Chollet 2017] Kaiser, L.; Gomez, A. N.; and Chollet, F. 2017\. Depthwise separable convolutions for neural machine translation. arXiv preprint arXiv:1706.03059.
* [Lane, Georgiev, and Qendro 2015] Lane, N. D.; Georgiev, P.; and Qendro, L. 2015\. Deepear: robust smartphone audio sensing in unconstrained acoustic environments using deep learning. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 283–294. ACM.
* [Lara and Labrador 2013] Lara, O. D., and Labrador, M. A. 2013\. A survey on human activity recognition using wearable sensors. IEEE Communications Surveys and Tutorials 15(3):1192–1209.
* [Lee et al. 2013] Lee, Y.; Min, C.; Hwang, C.; Lee, J.; Hwang, I.; Ju, Y.; Yoo, C.; Moon, M.; Lee, U.; and Song, J. 2013\. Sociophone: Everyday face-to-face interaction monitoring platform using multi-phone sensor fusion. In Proceeding of the 11th annual international conference on Mobile systems, applications, and services, 375–388. ACM.
* [Lin et al. 2012] Lin, M.; Lane, N. D.; Mohammod, M.; Yang, X.; Lu, H.; Cardone, G.; Ali, S.; Doryab, A.; Berke, E.; Campbell, A. T.; et al. 2012\. Bewell+: multi-dimensional wellbeing monitoring with community-guided user feedback and energy optimization. In Proceedings of the conference on Wireless Health, 10. ACM.
* [Lin et al. 2015] Lin, Q.; Zhang, D.; Connelly, K.; Ni, H.; Yu, Z.; and Zhou, X. 2015\. Disorientation detection by mining gps trajectories for cognitively-impaired elders. Pervasive and Mobile Computing 19:71–85.
* [Lorincz et al. 2009] Lorincz, K.; Chen, B.-r.; Challen, G. W.; Chowdhury, A. R.; Patel, S.; Bonato, P.; Welsh, M.; et al. 2009\. Mercury: a wearable sensor network platform for high-fidelity motion analysis. In SenSys, volume 9, 183–196.
* [Miluzzo et al. 2008] Miluzzo, E.; Lane, N. D.; Fodor, K.; Peterson, R.; Lu, H.; Musolesi, M.; Eisenman, S. B.; Zheng, X.; and Campbell, A. T. 2008\. Sensing meets mobile social networks: the design, implementation and evaluation of the cenceme application. In Proceedings of the 6th ACM conference on Embedded network sensor systems, 337–350. ACM.
* [Ordóñez and Roggen 2016] Ordóñez, F. J., and Roggen, D. 2016\. Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 16(1):115.
* [Plötz, Hammerla, and Olivier 2011] Plötz, T.; Hammerla, N. Y.; and Olivier, P. 2011\. Feature learning for activity recognition in ubiquitous computing. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, 1729.
* [Rabbi et al. 2015] Rabbi, M.; Aung, M. H.; Zhang, M.; and Choudhury, T. 2015\. Mybehavior: automatic personalized health feedback from user behaviors and preferences using smartphones. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 707–718. ACM.
* [Radu et al. 2018] Radu, V.; Tong, C.; Bhattacharya, S.; Lane, N. D.; Mascolo, C.; Marina, M. K.; and Kawsar, F. 2018\. Multimodal deep learning for activity and context recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1(4):157.
* [Rashidi and Cook 2009] Rashidi, P., and Cook, D. J. 2009\. Keeping the resident in the loop: Adapting the smart home to the user. IEEE Transactions on systems, man, and cybernetics-part A: systems and humans 39(5):949–959.
* [Rashidi and Mihailidis 2013] Rashidi, P., and Mihailidis, A. 2013\. A survey on ambient-assisted living tools for older adults. IEEE journal of biomedical and health informatics 17(3):579–590.
* [Ronao and Cho 2016] Ronao, C. A., and Cho, S.-B. 2016\. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Systems with Applications 59:235–244.
* [Sandler et al. 2018] Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018\. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4510–4520.
* [Shoaib et al. 2015] Shoaib, M.; Bosch, S.; Incel, O. D.; Scholten, H.; and Havinga, P. J. 2015\. A survey of online activity recognition using mobile phones. Sensors 15(1):2059–2085.
* [Srivastava et al. 2014] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014\. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958.
* [Vaizman, Ellis, and Lanckriet 2017] Vaizman, Y.; Ellis, K.; and Lanckriet, G. 2017\. Recognizing detailed human context in the wild from smartphones and smartwatches. IEEE Pervasive Computing 16(4):62–74.
* [Vaizman, Weibel, and Lanckriet 2018] Vaizman, Y.; Weibel, N.; and Lanckriet, G. 2018\. Context recognition in-the-wild: Unified model for multi-modal sensors and multi-label classification. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1(4):168.
* [van der Maaten and Hinton 2008] van der Maaten, L., and Hinton, G. 2008\. Visualizing data using t-sne. Journal of machine learning research 9(Nov):2579–2605.
* [Yang et al. 2015] Yang, J.; Nguyen, M. N.; San, P. P.; Li, X.; and Krishnaswamy, S. 2015\. Deep convolutional neural networks on multichannel time series for human activity recognition. In IJCAI, 3995–4001.
* [Zeng et al. 2014] Zeng, M.; Nguyen, L. T.; Yu, B.; Mengshoel, O. J.; Zhu, J.; Wu, P.; and Zhang, J. 2014\. Convolutional neural networks for human activity recognition using mobile sensors. In Mobile Computing, Applications and Services (MobiCASE), 2014 6th International Conference on, 197–205. IEEE.
* [Zhang et al. 2017] Zhang, X.; Zhou, X.; Lin, M.; and Sun, J. 2017\. Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083.
# Design and characterisation of an antiproton deceleration beamline for the
PUMA experiment
J. Fischer$^{a}$ (corresponding author,<EMAIL_ADDRESS>), A. Schmidt$^{a}$, N. Azaryan$^{b}$, F. Butin$^{b}$, J. Ferreira Somoza$^{b}$, A. Husson$^{a}$, C. Klink$^{a}$, A. Obertelli$^{a}$, M. Schlaich$^{a}$, A. Sinturel$^{b}$, N. Thaus$^{b}$, and F. Wienholtz$^{a}$

($^{a}$Technische Universität Darmstadt, Institut für Kernphysik, Schloßgartenstraße 9, 64289 Darmstadt, Germany

$^{b}$CERN, 1211 Geneva 23, Switzerland)
## 1 Introduction
The spatial distribution of protons and neutrons at and beyond the nuclear
surface of atomic nuclei challenges nuclear theory. In particular, nuclei with
a neutron excess exhibit a so-called neutron skin, where the neutron density
distribution extends beyond the proton density distribution. The thickness of
the neutron skin is defined as the difference in root-mean-square radii of the
density distributions
$\displaystyle\Delta r_{\mathrm{np}}=\langle
r^{2}_{\mathrm{n}}\rangle^{1/2}-\langle r^{2}_{\mathrm{p}}\rangle^{1/2}.$ (1)
The neutron skin thickness correlates with the slope parameter $L$ of the
nuclear equation of state [1], playing an important role in defining the
relation between the mass and radius of a neutron star [2, 3]. Neutron skin
thicknesses have been investigated with several methods [4, 5, 6, 7, 8],
mostly on stable nuclei, while the challenge lies in determining the radius of
the neutron distribution $\langle r^{2}_{\mathrm{n}}\rangle^{1/2}$ with enough
accuracy and controlled theoretical uncertainties. Information on unstable
nuclei is much more scarce, as illustrated by Ca isotopes: charge radii can be
accessed with precision from the relative measurement of isotope shifts from
laser spectroscopy and anchored to stable nuclei [9], while the interpretation
of the data related to the matter or neutron radius suffers from model
dependence [10, 11]. Nuclei close to or at the neutron drip line can have loosely
bound nucleons, whose wave function extends far beyond the charge
distribution. Such systems are called halo nuclei [12, 13]. Neutron halos have
been so far observed in light nuclei only [14]. Indications for $p$-wave halos
in medium mass nuclei have been reported [15], while more halos are predicted
to exist in uncharted regions of the nuclear landscape [16]. Proton halos have
been predicted as well [17].
Most aforementioned methods to probe neutron skins and halos in stable and
unstable nuclei are sensitive to the nuclear surface where
$\rho\sim\rho_{0}/2$, not further out in the tail of the density distribution,
where the asymmetry is the largest. The antiProton Unstable Matter
Annihilation (PUMA) experiment aims to investigate these phenomena in the tail
of stable and unstable nuclei with low-energy antiprotons as a probe [18, 19].
Antiprotons are uniquely suited for this, as they annihilate with nucleons at
a mean radial position $\sim 2$ fm further out than the half-density radius of
the nucleus [5, 20, 21], probing a region of higher
neutron-to-proton asymmetry. The PUMA experiment will produce antiprotonic
atoms by combining nuclei and antiprotons in a Penning trap. By studying the
pions produced in the annihilation, the PUMA experiment can determine the
neutron-to-proton ratio in the tail of the nuclear density distribution. The
setup is located at the Antimatter Factory at CERN. Stable isotopes are
supplied by an offline ion source [22], and for the investigation of more
neutron-rich and unstable isotopes the setup will be transported to the ISOLDE
facility [23] at CERN.
The ELENA ring at the Antimatter Factory provides bunches of $5\cdot 10^{6}$
to $10^{7}$ antiprotons at 100 keV to up to four experiments every 2 minutes
[24, 25, 26]. To further decelerate the
antiprotons to energies compatible with the PUMA Penning trap, one can use a
thin degrader foil or pulsed drift tubes (PDT) [27, 28, 29, 30, 31, 32].
Employing a foil for deceleration is space-efficient, but the yield is low and
the energy distribution broad [33], compared to a pulsed drift tube, which can
have a transmission of 100% while conserving the width of the energy
distribution. For antiprotons with an initial energy of approximately 100 keV,
trapping efficiencies vary from a few percent [34] to a maximum of 50%,
predicted in [35]. However, for the PUMA experiment, which relies on the
simultaneous trapping of antiprotons and stable and unstable ions, the use of
a foil is unfeasible, since low-energy ions cannot penetrate the foil.
An established method to change the energy of a particle beam is to use a
drift tube, where the potential can be changed rapidly. Here, the drift tube
is set to a potential and is used to decelerate the particles to the desired
energy. If the electrode is switched to a different potential, e.g., ground,
while the particles are still inside and in the field free region of the drift
tube, they are not reaccelerated on exit. Because only the longitudinal and
not the transversal kinetic energy is changed, the divergence angle of the
beam increases by a factor of $\sqrt{E_{\mathrm{in}}/E_{\mathrm{out}}}$, where
$E$ is the kinetic energy of the incoming and outgoing particles,
respectively. This can be compensated by additional ion optical elements or
beam cooling.
Several ion trap experiments use pulsed drift tubes to decelerate nuclei for
trapping [22, 36, 37, 38], often in combination with buffer-gas cooling [39]
to counteract the increase in transversal emittance, some from energies as
high as $60$ keV. The GBAR experiment at CERN is confronted with a problem
similar to that of the PUMA experiment, as it needs to decelerate antiprotons
to $1$ keV [40]. At the PUMA experiment, the antiprotons are decelerated from
100 keV to 4 keV to allow for efficient beam transport, and in a second step
down to 100 eV right in front of the trap.
To limit the annihilation of antiprotons with residual gas molecules, a vacuum
of a few $10^{-10}$ mbar along the beamline and of $10^{-11}$ mbar at its end
is critical.
## 2 Beamline Design
### 2.1 Transfer Line from ELENA to PUMA
Figure 1: Schematic view of LNE51 transfer line to PUMA. Antiprotons are
ejected from ELENA into LNE50, from which LNE51 branches off. The inset shows
the position of LNE51 relative to ELENA and LNE00.
The transfer of 100 keV particles (H- ions or antiprotons) from the ELENA
machine to the PUMA experiment is performed by the so-called LNE51 transfer
line. LNE51 branches off from the LNE50 line (transfer from ELENA to the
adjacent GBAR experiment) using a standard ZDFA-ZDSA switching unit (fast
switch and electrostatic deflector) integrated in LNE50. This equipment is
interlocked with the access safety system of the PUMA zone, preventing any
beam from being sent from the ELENA machine while the area is being accessed.
The sector valve at the interface between the experiment and the LNE51
transfer line is interlocked with the access system to close automatically
when the zone is being accessed. This drastically limits the risk of
contamination of the upstream sections of the ELENA machine in case of an
incident while manipulating the experimental equipment. To satisfy the
integration
constraints and match the beam to the PUMA experiment at the end of the line,
four electrostatic quadrupole/H-V corrector units (ZQNA) are installed, along
with a 37.7° standalone deflector. At the focal point, the beam spot size
(rms) is approximately 2 mm and the horizontal and vertical geometric
emittance ($95\%=6\epsilon_{\mathrm{rms}}$) is 6 mm mrad and 4 mm mrad,
respectively [41]. The layout for LNE51 is shown schematically in Figure 1.
Two SEM grids (Secondary Emission Monitors) [42] are installed in LNE51. They
are standard equipment in the ELENA transfer lines and allow the profile of
the impinging beam, either H- ions or antiprotons, to be extracted. Made from
x-y meshes of 50 $\upmu$m tungsten wires covering the beam acceptance, spaced
by a pitch of 0.5 mm in the central region, they intercept only about 10% of
the beam at each station [43]. These monitors are ultra-high vacuum compatible
and can be baked out at 200°C. As bake-out is required, the vacuum line is
fitted with permanently installed bake-out jackets.
Figure 2: Half-section view of the beamline without the supports. The
antiprotons traverse the beamline from left to right. Ions from the offline
source enter the beamline at the quadrupole bender in the direction into the
page and are deflected to the right. The gate valves separating the sections
are depicted in blue.
### 2.2 The PUMA Antiproton Beamline
Downstream of the handover point (HOP) to PUMA (see Fig. 1 and 2), the
beamline consists of two main sections, that can be isolated by gate valves
type 48236-CE44 from VAT (see Fig. 2). Section 1 includes the pulsed drift
tube itself. It is complemented by a high-voltage (up to -85 kV) as well as a
low-voltage (up to 5 kV) einzel lens (EL) on the injection and ejection sides,
respectively, to focus the antiproton bunches into and out of the pulsed drift
tube.
Section 2 consists of two low-voltage (up to 5 kV) einzel lenses with x-y-
steerers to guide the beam to the entrance of the PUMA Penning trap. In
between these lenses, a quadrupole ion beam bender allows the injection of
ions from an offline ion source setup, perpendicular to the antiproton
beamline. Even though the bender has been designed to allow for simultaneous
injection of ions and antiprotons, it can be removed when it is not needed. A
beam imaging system (BTV), which consists of a phosphor screen and a camera,
completes the section. The BTV can be moved into and out of the beamline, as
its measurement is completely destructive to the beam. In the future, the BTV
will be replaced by a SEM grid.
Figure 3: The field strength at an unshielded triple junction (left) and one
shielded with a guard ring (right) is illustrated here. Blue indicates lower
and red higher electric field strengths.
### 2.3 The Pulsed Drift Tube
The pulsed drift tube (PDT) used for the PUMA experiment is based on the GBAR
design [30]. Although the high-voltage einzel lens in front of the drift tube
counteracts the strong focusing effect of the decelerating electric field, the
drift tube has to accommodate an expansion of the beam. The inner diameter was
thus chosen to be 100 mm, with an outer diameter of 120 mm. At 4 keV, an
antiproton bunch from ELENA has a length of 250 mm ($2\sigma$) [26]. The PUMA
pulsed drift tube has been designed to be 700 mm long. This ensures that the
bunch is in the field-free region of the drift tube when the potential is
changed.
Because of the stringent vacuum requirements ($p<10^{-10}$ mbar), materials
with the lowest possible outgassing rates have to be used. Therefore, the
pulsed drift tube is made from aluminium ($\sim 1\cdot 10^{-13}$ mbar
l/s/cm$^{2}$), which outgasses less than stainless steel ($\sim 3\cdot
10^{-12}$ mbar l/s/cm$^{2}$) [44]. The insulators are made from MACOR®, which
has an outgassing rate of $1.1\cdot 10^{-11}$ mbar l/s/cm$^{2}$ [45].
The walls of the vacuum chambers are coated with a non-evaporable getter (NEG)
to pump the section. Non-evaporable getters are made from an alloy of Zr, V,
Ti, Al and Fe that can be sputtered directly onto the wall of a vacuum chamber
[46]. The coating acts as a pump by absorbing hydrogen and chemically binding other
reactive gases like oxygen. To activate the NEG, the chambers are heated
(200°C to 400°C). Molecules at the surface (mainly carbon, nitrogen and
oxygen) diffuse into the bulk. Hydrogen is released and must be pumped away by
another pump. Therefore, all components, such as vacuum gauges, valves,
feedthroughs, pumps, cables and beam instrumentation, must be bakable at 250°C
at least. The coating of the inside surfaces of the chambers was done at CERN.
The installation of the pulsed drift tube inside the chamber must be done
without touching the coating to prevent damaging it. It is first mounted onto
its support structure before being lowered vertically into the vacuum chamber
and secured with screws. To facilitate individual access to the high- and low-
voltage einzel lens as well as the drift tube, the vacuum chamber is divided
into three parts.
At the intersections of vacuum, conductor and insulator, the electric field is
strongly enhanced due to gaps arising from imperfections on the corners of the
material (see Fig. 3). Special attention has been paid to these so-called
triple junctions to prevent possible discharges. They are shielded by
purpose-built rings that surround the triple junction and thereby lower the
electric field (see Fig. 3). On all components, sharp edges have been avoided,
and the electrodes have been polished to an average surface finish of
$R_{\mathrm{a}}=0.05\,\upmu\mathrm{m}$, which helps to prevent discharges
[47].
#### 2.3.1 Electronics
To avoid reaccelerating the antiprotons as they exit the pulsed drift tube, it
must be discharged from -96 kV to 0 V before the first antiprotons leave the
field-free region of the drift tube. For antiprotons with a kinetic energy of
4 keV, the time available to discharge the drift tube is on the order of 500
ns. Equipment that can withstand high voltages and high peak currents, as well
as a high-voltage switch with a short transient, is needed. The pulsed drift
tube is connected to a high-voltage power supply (Spellman SL130PN60) via a
1 MΩ resistor. In order not to exceed the
voltage rating of the resistors, two Metallux HVR 969 resistors are used,
connected via polished brass cylinders with rounded edges. The value is chosen
as a compromise between the need for a high resistance to decouple the power
supply from the pulsed drift tube while switching, and the need for a low
resistance to minimize the effects of current fluctuations on the voltage
applied to the pulsed drift tube. For the discharge of the tube’s capacitance,
a fast high-voltage switch (Behlke HTS 1501-20-LC2) connects the pulsed drift
tube to ground. To make sure that the switch is not damaged, the pulsed drift
tube is connected to the switch via two 250 Ω Metallux HVR 969 resistors in
series, limiting the current. The high-voltage leads are connected with HN-70
connectors from R.E. Beverly III & Associates. The cables are suspended from
the ceiling to avoid triple junctions at the exposed high-voltage connectors.
The grounded mesh is removed on the load side, and special care is taken to
cover the pointy ends of the grounded mesh. As high-voltage feedthrough, a
HV125R-CE-CU39 from VACOM, rated for up to 125 kV is used.
Using a 1/1000 voltage divider (LeCroy PPE6kV) connected to a Tektronix
MDO3104 oscilloscope, the switching time from -5 kV to ground was measured. As
can be seen in Fig. 4, there is a $\sim$250 ns delay between the trigger
signal (blue) and the voltage on the pulsed drift tube (orange), which has to
be taken into account when triggering the switch. Independent of the voltage
applied to the switch, the transient time $\tau$ to $V_{0}/\mathrm{e}$ is
$\sim$80 ns, which is consistent with the time constant estimated from a
simple RC circuit, using the measured capacitance of the pulsed drift tube
(within the specifications of the switch):
$\displaystyle\tau=RC=500\,\Omega\cdot 170\,\mathrm{pF}=85\,\mathrm{ns}.$ (2)
Figure 4: Switching time while switching from $-5$ kV to ground, measured with
a $1/1000$ voltage divider. The trigger signal is shown in blue and the
voltage on the pulsed drift tube in orange.
#### 2.3.2 Safety Cage
The high-voltage system has unshielded $\sim$ 100 kV connections exposed to
air during operation. Therefore, the safety of the users has to be ensured by
a safety cage according to the ingress protection code level IP3X. Following
the European norm EN 50191, the dimensions of the safety cage are defined so
that any high-voltage point in air is at a distance of more than 74
$\mathrm{c}\mathrm{m}$ from the cage, corresponding to a maximum voltage of
130 kV, the maximum voltage of the high-voltage power supply. The high-voltage
system is interlocked via a coded magnetic switch (Telemecanique XCSDMC7902)
at the sliding door of the cage, which shuts off the power supplies in the
event of unexpected access while the equipment is powered. The safety cage is
further secured with a trapped key system from Allen Bradley (Rockwell) to
prevent unauthorized access: the cage must first be locked before the
high-voltage power supplies can be switched on. To simplify maintenance work,
panels can be
removed from all sides of the cage.
## 3 Vacuum and Conditioning
### 3.1 Baseline Vacuum Pressure
Due to the strict vacuum requirements at the entrance of the PUMA trap,
special attention must be paid to the pressure. After activating the NEG
coating, a pressure of $2\cdot 10^{-11}$ mbar was measured at the end of the
pulsed drift tube section, a factor of 10 better than required. For the
subsequent tests, the NEG coating was not reactivated after venting, to
conserve it for the use with the PUMA trap attached. Without the NEG
activated, the pressure base level is around $1.4\cdot 10^{-10}$ mbar. This is
sufficient to condition and operate the pulsed drift tube.
### 3.2 High-Voltage Conditioning
Surface contamination and imperfections are sources of discharges that degrade
the vacuum and material when high voltage is applied. They also lead to a
leakage current that drains the set potential. This difficulty can be
countered by conditioning the high-voltage parts, which is therefore an
essential step before operating the pulsed drift tube. It was done by a
stepwise increase of the voltage, while keeping the leakage current below the
limit of the power supply and the vacuum better than $5\cdot 10^{-8}$ mbar.
The pulsed drift tube and high-voltage einzel lens were conditioned over
several weeks. The voltage was increased step by step and left in static
operation until the sudden spikes in current, associated with field emission
from imperfections on the electrode, subsided, which took between 12 and 72
hours per voltage step. In addition to the conditioning, modifications to the
setup were made outside the vacuum to reduce the leakage current. These
focussed on increasing the distance from any high-voltage parts to ground, as
well as polishing and rounding pieces in high electric fields. Ultimately, the
leakage current at -96 kV could be lowered from 100 $\upmu$A to 50 $\upmu$A by
polishing and increasing the corner radius of one high-voltage part from 3 mm
to 15 mm. Additionally, the current could be further decreased to 11 $\upmu$A
by increasing the ceiling height of the safety cage by 50 cm to 75 cm. The
leakage current of the high-voltage einzel lens could not be reduced in the
same way. At -85 kV, the 100 $\upmu$A current limit of the power supply is
reached. This means that the design value of -90 kV could not be achieved;
nevertheless, the lens could be used for commissioning. A redesign with
increased distances between high-voltage parts and ground is planned.
### 3.3 Vacuum During Operation
During operation of the pulsed drift tube, the remaining leakage current
inside the vacuum degrades the pressure. To mitigate this, as done by the GBAR
collaboration, the voltage is kept at 0 V for most of the ELENA cycle and is
increased to -96 kV only 9.5 s before a bunch of antiprotons arrives. Ramping
up the voltage only shortly111compared to a repetition time of 120 s for
ELENA. before the bunch arrives has the advantage, that the vacuum is below
$2\cdot 10^{-10}$ mbar most of the time, since there is no leakage current at
0 V. When -96 kV are applied, the pressure reaches a value of $8\cdot
10^{-10}$ mbar and increases to $2\cdot 10^{-9}$ mbar when switching (see Fig.
5).
Figure 5: The pressure in section 1 and in section 2 while switching (three
cycles). The NEG coating was not activated here, to conserve it for use with
the PUMA trap attached.
## 4 Measurement of Beam Properties
### 4.1 Detection System
For the characterization of the system, a vacuum chamber with several
detectors was installed at the end of the beamline. To visualize the beam
spot, a microchannel plate (MCP) by Hamamatsu with a phosphor screen with a
diameter of 40 mm was used. In combination with the camera CS505MU and lens
MVL7000 from Thorlabs, this results in the smallest resolvable feature being
40 $\upmu$m. The device was mounted on a tripod in front of a view port, which
allowed to capture the beam shape. A MagneToF detector by ETP ion detect was
used for two purposes: first, to determine the time of flight (ToF) of the
antiprotons (<1.5 ns multiple ion pulse width), and second, in combination
with an “energy grid”, to determine the kinetic energy distribution of the
decelerated antiprotons. The energy grid consists of a stack of three grids by
ETP ion detect with a diameter of 76.2 mm. The distance between the grids is
15 mm. The grid wires have a diameter of 0.018 mm, a centre-to-centre distance
of 0.25 mm, and a transmission of 92% to 95%. The two outer grids were
grounded, while a blocking voltage was applied to the middle one, with a
ripple of less than 10 mV. The energy grids and the MagneToF detector can be
moved out of the beam axis independently. In addition to those detectors, the
BTV further upstream in the beamline (see Fig. 2) was used for particle
detection and intensity determination.
### 4.2 Pulsed Drift Tube Switching Delay
When antiprotons arrive in the experimental zone, a trigger signal from the
ejection from ELENA is forwarded to the electronics. Relative to the trigger,
a switching time $t_{\mathrm{s}}$ has to be determined, at which the bunch is
fully contained inside the pulsed drift tube, so that the deceleration is
successful for the full antiproton bunch. To determine the ideal value,
$t_{\mathrm{s}}$ has to be scanned while observing the time of flight of the
antiprotons. If $t_{\mathrm{s}}$ is too small, the antiprotons see a grounded
electrode and traverse the pulsed drift tube at full speed, arriving the
earliest and with their initial energy. If $t_{\mathrm{s}}$ is too large, the
antiprotons are decelerated while entering the pulsed drift tube and
reaccelerated when leaving it, thus they arrive later than the ones never
decelerated, but still with their initial energy. When switching at the
correct time, the antiproton bunch is decelerated on entry but is not
reaccelerated on exit. Thus, it arrives later than in the other cases, as the
antiprotons are slower, which can be seen in a simulation of the deceleration
in the pulsed drift tube performed in SIMION® (see top panel of Fig. 6).
The results from the measurement can be seen in the bottom panel of Fig. 6;
they match the behaviour expected from the simulations. When $t_{\mathrm{s}}$
is too small, the antiprotons arrive early. When increasing $t_{\mathrm{s}}$,
the bunch diffuses, as it is partly in the fringe field of the electrode when
the pulsed drift tube is switched. Then, in a window of about 300 ns, the
antiprotons are uniformly decelerated. As $t_{\mathrm{s}}$ increases further,
the bunch diffuses again, because it is only partly inside the pulsed drift
tube when switching.
Figure 6: Simulated (top) and measured (bottom) beam intensity when switching
the pulsed drift tube from $-96$ kV to ground and varying the switch delay
$t_{\mathrm{s}}$. Yellow colours indicate lower and red higher intensity. In
both cases, a successful deceleration to $4$ keV corresponds to a time of
flight of $t_{\mathrm{4keV}}=3.85\,\upmu\mathrm{s}$, with a bunch length
($1\sigma$) of $0.09\,\upmu\mathrm{s}$. On the right, the integrated intensity
from $t_{\mathrm{4keV}}-2\sigma$ to $t_{\mathrm{4keV}}+2\sigma$ is shown;
$t_{\mathrm{s}}$ is chosen to maximise this intensity.
The measurement shows a successful deceleration, and an estimation with the time of flight gives a deceleration to $(4.0\pm 0.5)\,\mathrm{keV}$. A more precise measurement of the
energy distribution was done using the energy grids (see Sec. 4.4).
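As a rough cross-check, the non-relativistic relation $E=mL^{2}/(2t^{2})$ connects the time of flight to the kinetic energy. The sketch below illustrates this; the effective drift length is a hypothetical placeholder chosen for illustration only, since the actual distance from the pulsed drift tube to the detector is not stated here, and the estimate ignores the regions where the antiprotons are still being decelerated or reaccelerated.

```python
from scipy.constants import proton_mass, elementary_charge

# Non-relativistic estimate E = m * (L/t)^2 / 2.
L = 3.37       # m, hypothetical effective drift length to the detector
t = 3.85e-6    # s, measured time of flight after deceleration
v = L / t                                          # mean velocity in m/s
E = 0.5 * proton_mass * v**2 / elementary_charge   # kinetic energy in eV
print(f"E = {E / 1e3:.1f} keV")                    # -> about 4.0 keV
```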
### 4.3 Transmission and Focusing
The intensity of the bunch after the pulsed drift tube, $I$, can be compared to the initial intensity of the bunch, $I_{0}$. The total transmission through the pulsed drift tube is thus defined as $T=I/I_{0}$. $I_{0}$ is determined before
the handover point by pick-ups in the ELENA transfer lines [24]. Besides
showing the beam spot shape, the total intensity on the BTV is proportional to
$I$, as can be seen in Fig. 7. Using the calibration in this plot, $T$ can be
calculated.
Figure 7: The bunch intensity of antiprotons determined by the ELENA detectors
is proportional to the total intensity on the BTV. The transmission to the BTV is $100\%$ when not decelerating the antiprotons. This allows a calibration to be made to determine the transmission through the pulsed drift tube while
decelerating.
$T$ for 100 keV bunches is about 100%. During the experiment, the transmission
of antiprotons decelerated to 4 keV reached ($55\pm 3$)%, while in simulations
a transmission of 100% could be reached. The main sources of transmission losses are a misalignment between the high-voltage einzel lens and the pulsed drift tube, and a high leakage current on the high-voltage einzel lens, which limited its voltage to $-85$ kV. In addition, the parameters
assumed in the simulation for the incoming beam might also play a role. Figure
8 shows the beam profiles recorded by the BTV directly after the last einzel
lens. Using a Gaussian fit, the following parameters could be obtained:
$\sigma_{\mathrm{horiz}}=(3.0\pm 0.1)\,\mathrm{mm},\quad\sigma_{\mathrm{vert}}=(3.8\pm 0.2)\,\mathrm{mm}.$
64% of the antiprotons are within a circle of radius $r=5.6\,\mathrm{mm}$, the smallest aperture of the PUMA Penning
trap. The focal point will have to be optimized at a later point for the
injection into the PUMA trap.
Figure 8: Beam profile after optimizing the LV einzel lenses for deceleration to $4\,\mathrm{keV}$ and focus on the BTV. Fitting a Gaussian to the centre peak yields $\sigma_{\mathrm{horiz}}=3.0\,\mathrm{mm}$, $\sigma_{\mathrm{vert}}=3.8\,\mathrm{mm}$. Yellow indicates a lower and red a higher intensity.
### 4.4 Energy Distribution
The standard deviation of the ions’ energy after deceleration to 4 keV at the
position of the MagneToF detector was simulated to be 101 eV. The kinetic
energy $E$ of the antiprotons was determined by blocking the antiprotons with
the energy grids, and measuring the transmission on the MagneToF. The results
from this can be seen in Fig. 9. In blue, the transmission onto the MagneToF is displayed as a function of the kinetic energy of the antiprotons. Fitting
the cumulative distribution function (CDF) of a normal distribution yields the
mean energy $\mu=(3898\pm 3)$ eV and energy spread $\sigma=(127\pm 4)$ eV. The
energy distribution calculated from the fit is shown in orange. 88% of
decelerated antiprotons are within $\pm$ 200 eV of the central energy, which
is the energy acceptance for successful trapping in the PUMA Penning trap,
according to simulations.
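The following sketch illustrates how such a fit can be carried out. The scan values are hypothetical, and the actual analysis may differ in detail (e.g., in how the grid transmission is normalized); since only antiprotons with energy above the blocking potential pass the middle grid, the transmission follows the survival function ($1-\mathrm{CDF}$) of the energy distribution.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical scan: blocking energy of the middle grid (eV) and the
# relative transmission measured on the MagneToF.
E_block = np.array([3500.0, 3700.0, 3800.0, 3850.0, 3900.0,
                    3950.0, 4000.0, 4100.0, 4300.0])
trans = np.array([1.00, 0.94, 0.78, 0.65, 0.50, 0.34, 0.21, 0.06, 0.00])

# Transmission model: survival function of a normal energy distribution.
def model(e, mu, sigma):
    return norm.sf(e, loc=mu, scale=sigma)

(mu, sigma), cov = curve_fit(model, E_block, trans, p0=(3900.0, 100.0))
mu_err, sig_err = np.sqrt(np.diag(cov))
print(f"mu = ({mu:.0f} +/- {mu_err:.0f}) eV, "
      f"sigma = ({sigma:.0f} +/- {sig_err:.0f}) eV")
```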
Figure 9: The energy distribution of decelerated antiprotons. The data and fitted CDF of a normal distribution are shown in blue, and the probability density function corresponding to the fit in orange. The mean energy is $\mu=(3898\pm 3)\,\mathrm{eV}$ and the standard deviation $\sigma=(127\pm 4)\,\mathrm{eV}$. $88\%$ of decelerated antiprotons are within $\pm 200\,\mathrm{eV}$ of the mean energy, which is the estimated energy acceptance for trapping.
### 4.5 Bunch Length
The length of the antiproton bunch at 4 keV is relevant, because it determines
the losses in the second stage of deceleration to a few 100 eV right in front
of the trap. The simulation predicts an increase in length from 75 ns to 89 ns at the position of the MagneToF, with which 90% of the bunch can be trapped.
A measurement of the bunch length of the decelerated antiprotons with the
MagneToF yields a length ($1\sigma$) of 93$\pm$3 ns, consistent with the
simulation.
## 5 Conclusion
An overview of the design and the characterisation of the low-energy
antiproton beam line of PUMA at ELENA is presented. Design considerations for
high voltage and ultra-high vacuum are discussed, as well as procedures for
high-voltage conditioning and in-vacuum high-voltage operation. The antiproton
beamline is shown to be successful in decelerating antiprotons from 100 keV to
$(3898\pm 3)$ eV, the first step in trapping antiprotons for the PUMA
experiment. The pressure, with the pulsed drift tube not in operation, is
below $2\cdot 10^{-10}$ mbar. With the implemented high-voltage ramping
scheme, the pressure stays below $2\cdot 10^{-10}$ mbar 75% of the cycle time,
also during operation. Currently, a transmission of ($55\pm 3$)% for
antiprotons decelerated to 4 keV can be reached. The beam was focussed to a spot with $\sigma_{\mathrm{horiz}}=(3.0\pm 0.1)\,\mathrm{mm}$ and $\sigma_{\mathrm{vert}}=(3.8\pm 0.2)\,\mathrm{mm}$, demonstrating that it can be focussed into the PUMA Penning trap. The length of the 4 keV antiproton bunch, relevant for the second deceleration from 4 keV to 100 eV, is (93$\pm$3) ns. Further improvement of the beamline is foreseen in the future,
while the current performance already allows for first experiments with PUMA.
## Acknowledgements
We thank the ELENA team and the operators of the Antimatter Factory for
excellent beam during the runs in 2022 and 2023. We thank the technical teams
at CERN and TU Darmstadt for their support. The presented work benefited from
the support of the GBAR collaboration. PUMA is funded by the European Research
Council through the ERC grant PUMA-726276 and the Alexander-von-Humboldt
foundation. The development of PUMA and its implementation at CERN are
supported by the TU Darmstadt and CERN.
## References
* [1] X. Roca-Maza, M. Centelles, X. Viñas and M. Warda “Neutron Skin of 208Pb, Nuclear Symmetry Energy, and the Parity Radius Experiment” In _Phys. Rev. Lett._ 106 American Physical Society, 2011, pp. 252501 DOI: 10.1103/PhysRevLett.106.252501
* [2] K. Hebeler, J. M. Lattimer, C. J. Pethick and A. Schwenk “Equation of State and Neutron Star Properties Constrained by Nuclear Physics and Observation” In _Astrophys. J._ 773.1 The American Astronomical Society, 2013, pp. 11 DOI: 10.1088/0004-637X/773/1/11
* [3] C. J. Horowitz and J. Piekarewicz “Neutron Star Structure and the Neutron Radius of ${}^{208}\mathrm{Pb}$” In _Phys. Rev. Lett._ 86 American Physical Society, 2001, pp. 5647–5650 DOI: 10.1103/PhysRevLett.86.5647
* [4] A. Krasznahorkay et al. “Neutron-skin thickness in neutron-rich isotopes” In _Nucl. Phys. A_ 731, 2004, pp. 224–234 DOI: https://doi.org/10.1016/j.nuclphysa.2003.11.034
* [5] B. Kłos et al. “Neutron density distributions from antiprotonic ${}^{208}\mathrm{Pb}$ and ${}^{209}\mathrm{Bi}$ atoms” In _Phys. Rev. C_ 76 American Physical Society, 2007, pp. 014311 DOI: 10.1103/PhysRevC.76.014311
* [6] J. Zenihiro et al. “Neutron density distributions of ${}^{204,206,208}\mathrm{Pb}$ deduced via proton elastic scattering at ${E}_{p}=295$ MeV” In _Phys. Rev. C_ 82 American Physical Society, 2010, pp. 044611 DOI: 10.1103/PhysRevC.82.044611
* [7] C.. Tarbert et al. “Neutron Skin of ${}^{208}\mathrm{Pb}$ from Coherent Pion Photoproduction” In _Phys. Rev. Lett._ 112 American Physical Society, 2014, pp. 242502 DOI: 10.1103/PhysRevLett.112.242502
* [8] D. Adhikari et al. “Accurate Determination of the Neutron Skin Thickness of ${}^{208}\mathrm{Pb}$ through Parity-Violation in Electron Scattering” In _Phys. Rev. Lett._ 126 American Physical Society, 2021, pp. 172502 DOI: 10.1103/PhysRevLett.126.172502
* [9] R. F. Garcia Ruiz et al. “Unexpectedly large charge radii of neutron-rich calcium isotopes” In _Nat. Phys._ 12.6, 2016, pp. 594–598 DOI: 10.1038/nphys3645
* [10] Tomotsugu Wakasa et al. “Neutron-skin values and matter and neutron radii determined from reaction cross sections of proton scattering on ${}^{12}\mathrm{C}$, ${}^{40,48}\mathrm{Ca}$, ${}^{58}\mathrm{Ni}$, and ${}^{208}\mathrm{Pb}$” In _Phys. Rev. C_ 107 American Physical Society, 2023, pp. 024608 DOI: 10.1103/PhysRevC.107.024608
* [11] M. Enciu et al. “Extended ${p}_{3/2}$ Neutron Orbital and the $N=32$ Shell Closure in ${}^{52}\mathrm{Ca}$” In _Phys. Rev. Lett._ 129 American Physical Society, 2022, pp. 262501 DOI: 10.1103/PhysRevLett.129.262501
* [12] I. Tanihata et al. “Measurements of Interaction Cross Sections and Nuclear Radii in the Light $p$-Shell Region” In _Phys. Rev. Lett._ 55 American Physical Society, 1985, pp. 2676–2679 DOI: 10.1103/PhysRevLett.55.2676
* [13] P. G. Hansen, A. S. Jensen and B. Jonson “Nuclear Halos” In _Annu. Rev. Nucl. Part S_ 45.1, 1995, pp. 591–634 DOI: 10.1146/annurev.ns.45.120195.003111
* [14] Björn Jonson “Light dripline nuclei” In _Phys. Rep._ 389.1, 2004, pp. 1–59 DOI: https://doi.org/10.1016/j.physrep.2003.07.004
* [15] Manju, Jagjit Singh, Shubhchintak and R. Chatterjee “Low-lying dipole strengths for probable p-wave one-neutron halos in the medium mass region” In _Eur. Phys. J. A_ 55.1, 2019, pp. 5 DOI: 10.1140/epja/i2019-12679-4
* [16] V. Rotival, K. Bennaceur and T. Duguet “Halo phenomenon in finite many-fermion systems: Atom-positron complexes and large-scale study of atomic nuclei” In _Phys. Rev. C_ 79 American Physical Society, 2009, pp. 054309 DOI: 10.1103/PhysRevC.79.054309
* [17] Zhongzhou Ren, Baoqiu Chen, Zhongyu Ma and Gongou Xu “One-proton halo in ${}^{26}\mathrm{P}$ and two-proton halo in ${}^{27}\mathrm{S}$” In _Phys. Rev. C_ 53 American Physical Society, 1996, pp. R572–R575 DOI: 10.1103/PhysRevC.53.R572
* [18] Michiharu Wada and Yasunori Yamazaki “Technical developments toward antiprotonic atoms for nuclear structure studies of radioactive nuclei” Low Energy Antiproton Physics (LEAP’03) In _Nucl. Instrum. Methods Phys. Res., Sect. B_ 214, 2004, pp. 196–200 DOI: https://doi.org/10.1016/j.nimb.2003.08.019
* [19] T. Aumann et al. “PUMA, antiProton unstable matter annihilation - PUMA collaboration” In _Eur. Phys. J. A_ 58.5, 2022, pp. 88 DOI: 10.1140/epja/s10050-022-00713-x
* [20] M. Leon and R. Seki “Determination of the neutron halo from antiproton absorption” In _Phys. Lett. B_ 48.3, 1974, pp. 173–175 DOI: https://doi.org/10.1016/0370-2693(74)90001-X
* [21] A.S. Iljinov, V.I. Nazaruk and S.E. Chigrinov “Nuclear absorption of stopped antiprotons: Multipion-nucleus interactions” In _Nucl. Phys. A_ 382.3, 1982, pp. 378–400 DOI: https://doi.org/10.1016/0375-9474(82)90352-9
* [22] M. Schlaich et al. “A multi-reflection time-of-flight mass spectrometer for the offline ion source of the PUMA experiment” In _Int. J. Mass Spectrom._ 495, 2024, pp. 117166 DOI: https://doi.org/10.1016/j.ijms.2023.117166
* [23] María J G Borge and Klaus Blaum “Focus on Exotic Beams at ISOLDE: A Laboratory Portrait” In _J. Phys. G: Nucl. Part. Phys._ 45.1 IOP Publishing, 2017, pp. 010301 DOI: 10.1088/1361-6471/aa990f
* [24] Wolfgang Bartmann et al. “The ELENA facility” In _Philos. Trans. R. Soc., A_ 376.2116, 2018, pp. 20170266 DOI: 10.1098/rsta.2017.0266
* [25] Stephan Maury et al. “ELENA: the extra low energy anti-proton facility at CERN” In _Hyperfine Interact._ 229.1, 2014, pp. 105–115
* [26] Laurette Ponce et al. “ELENA - From Commissioning to Operation” In _JACoW IPAC_ 2022, 2022, pp. 2391–2394 DOI: 10.18429/JACoW-IPAC2022-THOXGD1
* [27] H. Kalinowsky “Deceleration of antiprotons from MeV to keV energies” In _Hyperfine Interact._ 76.1, 1993, pp. 73–80 DOI: 10.1007/BF02316707
* [28] C. Amole et al. “The ALPHA antihydrogen trapping apparatus” In _Nucl. Instrum. Methods Phys. Res., Sect. A_ 735, 2014, pp. 319–340 DOI: https://doi.org/10.1016/j.nima.2013.09.043
* [29] M. Tajima et al. “Antiproton beams with low energy spread for antihydrogen production” In _J. Instrum._ 14.05, 2019, pp. P05009 DOI: 10.1088/1748-0221/14/05/P05009
* [30] A. Husson et al. “A pulsed high-voltage decelerator system to deliver low-energy antiprotons” In _Nucl. Instrum. Methods Phys. Res., Sect. A_ 1002, 2021, pp. 165245 DOI: https://doi.org/10.1016/j.nima.2021.165245
* [31] Claude Amsler et al. “Pulsed production of antihydrogen” In _Commun. Phys._ 4.1, 2021, pp. 19 DOI: 10.1038/s42005-020-00494-z
* [32] B.. Latacz et al. “Ultra-thin polymer foil cryogenic window for antiproton deceleration and storage” In _Rev. Sci. Instrum._ 94.10, 2023, pp. 103310 DOI: 10.1063/5.0167262
* [33] K. Nordlund, M. Hori and D. Sundholm “Large nuclear scattering effects in antiproton transmission through polymer and metal-coated foils” In _Phys. Rev. A_ 106 American Physical Society, 2022, pp. 012803 DOI: 10.1103/PhysRevA.106.012803
* [34] N. Kuroda et al. “Confinement of a Large Number of Antiprotons and Production of an Ultraslow Antiproton Beam” In _Phys. Rev. Lett._ 94 American Physical Society, 2005, pp. 023401 DOI: 10.1103/PhysRevLett.94.023401
* [35] S.S. Fabbri and W. Bertsche “Optimization of Antiproton Capture for Antihydrogen Creation in the ALPHA Experiment” In _Proc. IBIC’19_ , International Beam Instrumentation Conference 8 JACoW Publishing, Geneva, Switzerland, 2019, pp. 637–641 DOI: 10.18429/JACoW-IBIC2019-WEPP040
* [36] F Herfurth et al. “A linear radiofrequency ion trap for accumulation, bunching, and emittance improvement of radioactive ion beams” In _Nucl. Instrum. Methods Phys. Res., Sect. A_ 469.2, 2001, pp. 254–275 DOI: https://doi.org/10.1016/S0168-9002(01)00168-1
* [37] S. Coeck et al. “A pulsed drift cavity to capture 30keV ion bunches at ground potential” In _Nucl. Instrum. Methods Phys. Res., Sect. A_ 572.2, 2007, pp. 585–595 DOI: https://doi.org/10.1016/j.nima.2006.11.054
* [38] J. Grund et al. “First online operation of TRIGA-TRAP” In _Nucl. Instrum. Methods Phys. Res., Sect. A_ 972, 2020, pp. 164013 DOI: https://doi.org/10.1016/j.nima.2020.164013
* [39] G. Savard et al. “A new cooling technique for heavy ions in a Penning trap” In _Phys. Lett. A_ 158.5, 1991, pp. 247–252 DOI: https://doi.org/10.1016/0375-9601(91)91008-2
* [40] P. Pérez et al. “The GBAR antimatter gravity experiment” In _Hyperfine Interact._ 233.1, 2015, pp. 21–27 DOI: 10.1007/s10751-015-1154-8
* [41] M.A. Fraser, D. Barna, W. Bartmann and R. Ostojić “Beam Dynamics Studies of the ELENA Electrostatic Transfer Lines” In _Proc. IPAC’15, Richmond, VA, USA, May 3-8, 2015_ , International Particle Accelerator Conference 6 Geneva, Switzerland: JACoW, 2015, pp. 385–388 DOI: 10.18429/JACoW-IPAC2015-MOPJE044
* [42] M. Martini and H. Schönauer “Emittance measurements in the CERN PS complex”, 1997
* [43] M. McLean, J. Cenede, M. Hori and G. Tranquille “Commissioning of the SEM-Grid Monitors for ELENA” In _Proc. IBIC’21_ , International Beam Instrumentation Conference 10 JACoW Publishing, Geneva, Switzerland, 2021, pp. 223–226 DOI: 10.18429/JACoW-IBIC2021-TUPP14
* [44] C. Benvenuti “Extreme Vacua: Achievements and Expectations” In _Phys. Scr._ 1988.T22, 1988, pp. 48 DOI: 10.1088/0031-8949/1988/T22/006
* [45] Katharina Battes, Christian Day and Volker Hauer “Systematic study of the outgassing behavior of different ceramic materials” In _J. Vac. Sci. Technol., B_ 39.3, 2021, pp. 034202 DOI: 10.1116/6.0000954
* [46] P. Chiggiato and P. Costa Pinto “Ti–Zr–V non-evaporable getter films: From development to large scale production for the Large Hadron Collider” Proceedings of the Eighth International Conference on Atomically Controlled Surfaces, Interfaces and Nanostructures and the Thirteenth International Congress on Thin Films In _Thin Solid Films_ 515.2, 2006, pp. 382–388 DOI: https://doi.org/10.1016/j.tsf.2005.12.218
* [47] D W Williams and W T Williams “Effect of electrode surface finish on electrical breakdown in vacuum” In _J. Phys. D: Appl. Phys._ 5.10, 1972, pp. 1845 DOI: 10.1088/0022-3727/5/10/314
# Measuring Internet Routing from the Most Valuable Points
Thomas Alfroy$^{1}$, Thomas Holterbach$^{1}$, Thomas Krenc$^{2}$, KC Claffy$^{2}$, Cristel Pelsser$^{3}$
$^{1}$University of Strasbourg, $^{2}$CAIDA / UC San Diego, $^{3}$UCLouvain
https://bgproutes.io
###### Abstract
While the increasing number of Vantage Points (VPs) in RIPE RIS and RouteViews
improves our understanding of the Internet, the quadratically increasing
volume of collected data poses a challenge to the scientific and operational
use of the data. The design and implementation of BGP and BGP data collection
systems lead to data archives with enormous redundancy, as there is
substantial overlap in announced routes across many different VPs. Researchers
thus often resort to arbitrary sampling of the data, which we demonstrate
comes at a cost to the accuracy and coverage of previous works. The continued
growth of the Internet, and of these collection systems, exacerbates this
cost. The community needs a better approach to managing and using these data
archives.
We propose MVP, a system that scores VPs according to their level of
redundancy with other VPs, allowing more informed sampling of these data
archives.
Our challenge is that the degree of redundancy between two updates depends on
how we define redundancy, which in turn depends on the analysis objective. Our
key contribution is a general framework and associated algorithms to assess
redundancy between VP observations. We quantify the benefit of our approach
for four canonical BGP routing analyses: AS relationship inference, AS rank
computation, hijack detection, and routing detour detection. MVP improves the
coverage or accuracy (or both) of all these analyses while processing the same
volume of data.
## 1 Introduction
Routing information services such as RIPE RIS [38] and RouteViews (RV) [51]
continuously collect and publish data from more than 2500 Vantage Points
(VPs), each of which is a BGP router that exports its best routes to the
collection platform. These data collection systems are critical to scientific
as well as operational analyses of the global Internet infrastructure. But
these systems face a cost-benefit trade-off [2]. The information-hiding
character of BGP means that improving the visibility of the Internet routing
system requires cultivating many peering relationships with operators willing
to contribute VPs to the platform. However, deployment of new VPs amplifies
the data management requirements caused by the growth of the Internet itself:
the number of unique IP prefixes (e.g., due to de-aggregation or new
assignments) constantly grows [12], as well as the number of unique ASes and
links between them. Even with a constant number of VPs, the volume of routing
data inevitably increases, contributing to a quadratic increase of observed
updates over time (Fig. 1(e)). The situation presents a challenge for users,
who often cannot or do not want to process terabytes of (redundant) data.
Users often resort to sampling the data in arbitrary ways, such as grabbing
all VPs on a single collector.
We design and implement a framework to optimize the use of these data
collection systems, which will also lower the barrier to their use in lower-
resourced circumstances. Our design relies on the principle of redundancy in
BGP data, a delicate concept since even two identical updates from two
different VPs may not be redundant (depending on the use case). We take a deep
dive into a context-specific framework for quantifying redundancy in BGP data,
grounded in operational principles and research use cases. Our resulting
system identifies a set of VPs whose exported routes collectively exhibit a low level of redundancy, enabling users to prioritize the processing of the most valuable BGP updates.
Figure 1: Combining local views can help to map the AS topology. Gray links are not visible from routes collected by VPs. [Panels (a)–(d): AS topology diagrams.]
Figure 2: The number of VPs increases over time and so does the number of collected updates. Both RIS and RV are considered in Fig. 1(f) and 1(g). [Panels: (e) growth in VPs; (f) number of updates per VP and per hour; (g) total number of updates per hour.]
_Contributions._ We make the following contributions.
* •
We perform a comprehensive analysis based on simulations and a survey that
demonstrates the cost-benefit tradeoff of setting up new VPs, and the value of
strategically selecting them to analyze Internet routing. We show that current
approaches used by researchers to select VPs are largely unoptimized,
sacrificing coverage and accuracy of a wide range of measurement studies and
tools (§2-§3).
* •
We characterize redundancy between updates collected by different VPs. We
explore different definitions of redundancy and find that optimizing our
algorithms for a given definition leads to a undesirable overfitting effect
(§4-§5).
* •
We design a system, MVP, that returns a list of the “most valuable” VPs, i.e.,
those that enable users to minimize data redundancy (regardless of how we
define it) and prioritize valuable route updates. MVP relies on new data-
driven algorithms that quantify redundancy between VPs based on the four main
BGP attributes (time, prefix, AS path, and communities) while being robust
against typical biases observed in the Internet routing ecosystem (§6-§7).
* •
We run MVP as a service at https://bgproutes.io. We benchmark MVP and show
that it optimizes (without overfitting) the tradeoff between the volume of
data used and its utility for many objectives (§8-§9).
_Impact on scientific measurement studies._ The value of MVP is its wide
impact. Besides enabling a more systematic sampling of the RIS and RV data
archives, it can consistently, and at no cost for users, improve the accuracy
and coverage of measurement studies as well as monitoring tools fueled by BGP
routes collected by RIS and RV. To measure the impact of MVP, we replicated
the algorithms used in four studies/tools and used MVP to select the VPs from
which they process BGP routes. In all four cases, using MVP improved the
accuracy and coverage while processing the same data volume. We inferred more
AS relationships (+15%), fixed errors in the AS rank dataset, observed more
routing detours (+44%) while characterizing them more accurately, and inferred
more forged-origin hijacks (+35%) with $\approx$4$\times$ fewer incorrect inferences (i.e., false positives).
## 2 Background
RIPE’s Routing Information Service (RIS) [38] and RouteViews (RV) [51] are two
widely-used platforms that collect BGP routes and make them available to the
community. These platforms use BGP speakers (a.k.a. collectors) to peer with
BGP routers in order to collect routes exported by those routers. We call
vantage points (VPs) the BGP routers that export their routes to a collection
platform. As of May 2023, 32% of the RIS and RV VPs [42, 33] are full feeders,
i.e., they send a route for roughly all of the announced IP prefixes on the
Internet ($\approx$941k prefixes [12]). A BGP route mainly carries routing
information in four of its attributes [36]: _(i)_ the timestamp at which the
route was received, _(ii)_ the IP (v4 or v6) prefix that the route announces,
_(iii)_ the AS path used to reach that prefix, and _(iv)_ a set of BGP
communities. Among other uses, researchers leverage the timestamp to find
transient paths [29], the prefix to detect hijacks [44], the AS paths to infer
AS relationships [31], and the communities to measure unnecessary BGP traffic
[28].
Each VP provides its local view, i.e., only the BGP routes it observes. Fig. 2
illustrates the effect of combining local views for inferring the AS topology
from the AS paths in BGP routes. In Fig. 2, every AS runs a single BGP router,
owns one prefix, and announces it in BGP. We configure routing policies based
on the Gao-Rexford model [21], i.e., routing paths follow a valley-free
pattern. Straight (resp. dashed) lines are customer-to-provider (resp. peer-
to-peer) links. With the local view of 1, one can infer all the AS links but
the two peering links 3 4 and 5 6 (Fig. 1(a)). Combining the local views of
1 and 2 does not help to discover more links (Fig. 1(b)). With the local view
of 5, one can infer all the AS links but the two customer-to-provider links 2
4, 4 6 (Fig. 1(c)). Combining the local views of 5 and 6 enables discovery of
the full topology (Fig. 1(d)). However, observe that this last scenario is
unlikely in practice as the location of the VPs is skewed with many more VPs
present in highly-connected or central (e.g., Tier1) ASes [45]. Observe also
that VPs can have a redundant view over the AS topology, e.g., the two VPs in
Fig. 1(b) observe the same set of links.
By May 2023, RIS had 1526 VPs and RV had 1071 VPs, and their number keeps
increasing (Fig. 1(e)). Users can download BGP routes exported by these
platforms at the granularity of the VP (with some limitations [41]) or the
collector. Users can download a RIB dump, i.e., a snapshot of the BGP routes
seen by a VP at a particular time, which (in Jan. 2023) yielded $\approx$941k
routes for a full feeder. Alternatively, users may download every single BGP
update observed by the VPs over time (e.g., using [36]), which currently
results in $\approx$18K updates per hour (median in May 2023) for a single VP
(Fig. 1(f)), and billions of updates per day for all RIS and RV VPs (Fig.
1(g)).
## 3 Problem
Deploying more VPs expands the visibility of the routing system (§3.1), but
also increases collected data volumes raising barriers to its use (§3.2). We
survey researchers and find that they resort to unoptimized sampling, which
they acknowledge can negatively impact the quality of their results (§3.3).
### 3.1 More VPs improves data completeness
A tiny fraction (1.3%) of the 74k ASes participating in the global routing system [12] hosts a VP. This fraction remains low (8.4%) even when focusing on
the 11441 transit ASes (i.e., those with at least one customer). While we
cannot know how much additional topology we might observe from VPs that do not
peer with the public collection systems, we can estimate this gap using
simulations of topologies whose statistical parameters match those of the
known global Internet.
_Methodology._ We created a mini-Internet with 600 ASes, each running a single
BGP router. We generated the AS topology using the Hyperbolic Graph Generator
[3]. We set the average node degree to 6.1, which results in a comparable
degree of connectivity (a.k.a. Beta index) to the one observed in CAIDA’s AS
relationship dataset from December 2022 [16], and use as the degree
distribution a power law with exponent 2.1 (as in [3]). We defined the AS
relationships as follows. The three ASes with the highest degree are Tier1
ASes and are fully meshed. ASes directly connected to a Tier1 are Tier2s. ASes
directly connected to a Tier2 but not to a Tier1 are Tier3s, etc. Two
connected ASes have a peer-to-peer (p2p) relationship if they are on the same
level, and a customer-to-provider (c2p) relationship if not. The routing
policies follow the Gao-Rexford model [21].
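The tier and relationship assignment can be sketched as follows for an arbitrary connected, undirected AS graph (the hyperbolic topology generation itself is omitted; this is our reading of the rules above, not the authors' released code).

```python
import networkx as nx

def assign_tiers_and_relationships(G: nx.Graph):
    """Assign tier levels by BFS from the three highest-degree ASes (Tier1),
    then label edges p2p (same level) or c2p (different levels).
    Assumes a single connected component."""
    degree = dict(G.degree())
    tier1 = sorted(degree, key=degree.get, reverse=True)[:3]
    level = {a: 1 for a in tier1}
    frontier = set(tier1)
    l = 1
    while frontier:
        nxt = {n for a in frontier for n in G[a]} - set(level)
        l += 1
        for n in nxt:
            level[n] = l        # Tier2 = neighbors of Tier1, and so on
        frontier = nxt
    rel = {}
    for a, b in G.edges():
        # Same level: peer-to-peer; otherwise customer-to-provider,
        # with the lower-tier (higher-level) AS as the customer.
        rel[(a, b)] = "p2p" if level[a] == level[b] else "c2p"
    return level, rel
```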
Figure 3: Simulations of a mini Internet with 600 ASes. We make two key
observations: _(i)_ deploying more VPs helps to reveal more AS links, and
_(ii)_ arbitrarily selecting VPs performs poorly compared to selecting them
with greedy specific (a best-case approximation). The line in a box depicts
the median value; the whiskers show the 5 and the 95th percentile.
Fig. 3 shows the proportion of observed AS links as a function of the number
of ASes hosting a VP. We consider three VP deployment strategies: _(i)_
random, which randomly deploys VPs across all the ASes; _(ii)_ distance-based,
which aims to maximize the AS-level distance between the deployed VPs; and
_(iii)_ greedy specific, which approximates the best case for topology
discovery using a greedy approach. We ran every selection strategy twenty
times (with different random seeds). We computed the proportion of observed
links and show separately the p2p and c2p links in Fig. 3.
_Conclusions._ Although we take the results with a grain of salt because the
topology differs (but exhibits similar patterns) from the visible portion of
the actual (unknown) AS topology, we tentatively draw the following four
conclusions.
_(I)_ As expected, for a given VP deployment strategy, more VPs often lead to
more links observed; all links are observed only when all ASes host a VP.
_(II)_ P2p links are harder to observe than c2p links. We find that p2p links
are more visible from VPs at the edge. This result is consistent with the fact
that p2p links are generally not advertised upwards in the Internet hierarchy
when routing policies follow the Gao-Rexford model.
_(III)_ The distance-based deployment strategy performs poorly (even worse
than random) because it overprioritizes isolated VPs at the edge over some
other important VPs in the core.
_(IV)_ When 1.3% of ASes host a VP (same proportion as current RIS and RV
VPs), only $\approx$5% of the p2p links are seen when using the random
deployment strategy.
_Confirmation with real (but private) data._ We contacted a private BGP data
provider (bgp.tools) that collects BGP routes from $\approx$1000 routers and
compared the set of AS links observed from these private feeds against the set
of AS links observed by RIS and RV VPs (in September 2023). We find that the
private data provider saw 192k AS links that none of the RIS and RV VPs
observed, and vice versa, RIS and RV VPs observed 401k links that the private
data provider did not observe. In either case, the lack of VPs leads to
missing routing information. We can thus expect—and hope for—the number of VPs
to keep increasing.
### 3.2 BGP data management is challenging
Deploying more VPs generates more data as each of them collects BGP updates.
Moreover, new IP prefixes advertised in BGP (see [12]) increase the volume of
data collected by every VP as it triggers the propagation of new BGP routes
that many VPs (e.g., the full feeders) observe and send to the collection
platforms. The compound effect—more VPs (Fig. 1(e)) and more updates per VP
(Fig. 1(f))—yields a quadratic increase in updates reaching the collection
platforms (Fig. 1(g)), which challenges both users and data providers [2].
Although several tools can speed up data processing [36, 7, 5], many
measurement studies and monitoring tools use only a sample of data collected
by RIS and RV, either using only a subset of the VPs or a short time window,
or both (we purposely do not cite any paper to preserve the anonymity of the respondents of our survey). While authors do not typically explain why
they do not use all the data, the sampling suggests two (inter-related)
explanations: authors believe the sample is representative and sufficiently
complete; and/or the data volume is not worth trying to manage. We confirm
these explanations with a survey that we conducted involving authors of eleven
research papers.
_Methodology of our survey._ We classified eleven BGP-based studies from top
conferences (SIGCOMM, NSDI, S&P, USENIX Security, NDSS, and IMC) into two categories based on how they used BGP data (a paper may be in both categories). Nine papers used all routes collected from a subset of the VPs
(category $C_{1}$); six papers used a short time frame ($C_{2}$). For each
paper, we asked authors questions regarding their use of BGP data: whether
data volume limited their work, how and why they sampled BGP data sources,
their understanding of the impact on the quality of their results, and if they
would do things differently if they had more resources or time. We did not
receive answers from the authors of three papers. Thus, we have seven
respondents in $C_{1}$ and five in $C_{2}$. We summarize the results here;
details of the survey are in an appendix (§A).
_The volume of BGP data to process is often a limiting factor._ Seven (of
eight) respondents found the BGP data expensive to process. For three
respondents in $C_{1}$, processing time motivated them to use only a subset of
the VPs; three respondents in $C_{2}$ considered the processing time when
choosing a measurement interval. Even a respondent who used a Spark cluster found it prohibitively time-consuming to process the BGP data.
_Respondents in $C_{1}$ selected VPs in an unoptimized fashion._ One
respondent picked geographically distant BGP collectors. Our experiments (Fig.
3) and evaluation (§9) show that this strategy, while intuitive, often fails
to optimize for any given metric (e.g., coverage). Other respondents said they
chose VPs randomly, or those with the highest number of prefixes. Another respondent reported having unintentionally discarded some VPs, leaving an arbitrarily selected set in the study. Two respondents did not remember how they selected
VPs.
### 3.3 Unoptimized sampling negatively impacts the quality of the results
We show the negative effects of an unoptimized sampling using our controlled
simulations as well as our survey.
_Selecting VPs arbitrarily performs poorly._ Our mini-Internet simulation
(§3.1) showed that arbitrary VP selection strategies perform significantly
worse than greedy specific (a best-case approximation) when the goal is to map
the AS topology. For instance, randomly selecting 20 VPs reveals 12% of the
p2p links compared to 56% when selecting them using greedy specific—a
4.7$\times$ improvement factor that we highlight in Fig. 3. Our evaluation
reveals that this performance gap between using an arbitrary VPs selection
strategy and a best-case approach also exists for various other metrics, e.g.,
hijacks or transient paths detection (§9).
_Six respondents in $C_{1}$ acknowledged that using more VPs would improve the
quality of their analysis._ The last respondent was not sure, given the
potential redundancy in the data sources (which he did not analyze). Two of
the six believed it would not significantly change the conclusion of their
measurement studies (e.g., one said that it could help to pinpoint corner
cases). However, six of the seven authors in $C_{1}$ affirmed that they would
have used more VPs if they had more resources and time.
_All five respondents in $C_{2}$ said that extending the duration of their
study would improve the quality of their results._ One respondent thought the
gain would not be significant; another said it could help detect rare routing
events. All respondents in $C_{2}$ would have extended the duration of their
observation window given more time and resources. We experimentally confirm in
§9.2.3 that extending the timeframe of analysis improves the quality of its
results with a case study on routing detour characterization [46].
## 4 Opportunity to Optimize Sampling
We propose a systematic framework to characterize redundancy across BGP routes
collected by the VPs. We use the term redundant to refer to updates with
similar (or identical, depending on the redundancy definition) attribute
values (see definitions below). Thus, two redundant VPs, i.e., that observe
redundant routes, likely provide similar views over routing events such as
hijacks, traffic engineering, etc.
_Methodology._ We characterize redundancy between pairs of VPs by computing
the proportion of redundant updates that they collect using three different,
gradually stricter, definitions of update redundancy. We denote $U_{i}$ the
set of updates observed by VP $i$. Consider a BGP update $u_{t,p}\in U_{i}$
with $t$ the time at which the route was observed and $p$ its prefix.
###### Definition 1 (prefix based)
The update $u_{t_{1},p_{1}}\in U_{1}$ is redundant with the update
$u_{t_{2},p_{2}}\in U_{2}$ if:
* •
$\lvert t_{1}-t_{2}\lvert<5$ minutes, and $p_{1}=p_{2}$.
We chose 5 minutes because it is an approximation of the BGP convergence time
[29]. This first definition might be appropriate for mapping prefixes to their origin AS.
For our second definition, we denote $A_{i}(t,p)$ the set of AS links in the
AS path of the most recent BGP route observed by VP $i$ for prefix $p$ at time
$t$.
###### Definition 2 (prefix and as-path based)
The update $u_{t_{1},p_{1}}$ $\in U_{1}$ is redundant with the update
$u_{t_{2},p_{2}}\in U_{2}$ if:
* •
$\lvert t_{1}-t_{2}\lvert<5$ minutes, and $p_{1}=p_{2}$, and
* •
$A_{1}(t_{1},p_{1})\setminus A_{1}(t_{1}-\epsilon,p_{1})\subset
A_{2}(t_{2},p_{2})\setminus A_{2}(t_{2}-\epsilon,p_{2})$.
The second condition checks whether the changes (operator $\setminus$) in the
AS paths observed by VP $1$ for a given prefix are included (operator
$\subset$) in the set of changes observed by VP $2$ for the same prefix. This
second definition might be appropriate to detect new AS links or transient
paths.
Our third definition follows the same approach but adds BGP communities. We
denote $C_{i}(t,p)$ the set of community values of the most recent BGP route
observed by VP $i$ for prefix $p$ and at time $t$.
###### Definition 3 (prefix, as-path, and community-based)
The update $u_{t_{1},p_{1}}\in U_{1}$ is redundant with update
$u_{t_{2},p_{2}}\in U_{2}$ if:
* •
$\lvert t_{1}-t_{2}\lvert<5$ minutes, and $p_{1}=p_{2}$, and
* •
$A_{1}(t_{1},p_{1})\setminus A_{1}(t_{1}-\epsilon,p_{1})\subset
A_{2}(t_{2},p_{2})\setminus A_{2}(t_{2}-\epsilon,p_{2})$, and
* •
$C_{1}(t_{1},p_{1})\setminus C_{1}(t_{1}-\epsilon,p_{1})\subset
C_{2}(t_{2},p_{2})\setminus C_{2}(t_{2}-\epsilon,p_{2})$.
We note that Def. 2 and 3 are asymmetric because, given two sets $X$ and $Y$ of objects of the same type, $X\subset Y\not\Rightarrow Y\subset X$.
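To make these definitions concrete, the sketch below (our illustration, not the authors' released code) computes the Def. 1 redundancy of one VP with respect to another from lists of (timestamp, prefix) updates:

```python
import bisect
from collections import defaultdict

def redundancy_def1(updates_vp1, updates_vp2, window=300):
    """Fraction of VP1's updates that are redundant (Def. 1) with at least
    one VP2 update: same prefix seen within a 5-minute (300 s) window.
    updates_vpX: list of (timestamp_in_seconds, prefix) tuples."""
    by_prefix = defaultdict(list)
    for t, p in updates_vp2:
        by_prefix[p].append(t)
    for ts in by_prefix.values():
        ts.sort()
    redundant = 0
    for t, p in updates_vp1:
        ts = by_prefix.get(p, [])
        # Is there a VP2 timestamp for this prefix within the window?
        i = bisect.bisect_left(ts, t - window)
        if i < len(ts) and ts[i] < t + window:
            redundant += 1
    return redundant / len(updates_vp1) if updates_vp1 else 0.0
```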
_Redundant pairs of VPs exist._ Fig. 4 (top row) shows the level of redundancy
for the three definitions and between 100 VPs randomly selected and computed
over the updates observed during two hours on August 1, 2022. Observe that we
performed 30 random selections with different seeds and show the median case
(in terms of redundant pairs of VPs). One cell in the matrix indicates the
redundancy of the VP on the ordinate with the VP on the abscissa. We define
the redundancy between VP $1$ and VP $2$ as the proportion of updates observed
by VP $1$ that are redundant with at least one update observed by VP $2$. For
better visibility, we show the most redundant VPs at the top of the figures.
Redundant pairs of VPs exist regardless of the redundancy definition used.
Logically, the stricter the definition, the fewer redundant pairs of VPs. Fig.
4 (left) shows that the VPs can be highly redundant when they are selected
randomly. For instance, with the loose Def. 1, we observe that 74 among the
100 randomly selected VPs have >50% of their updates that are redundant with
the ones observed by two other VPs or more (23 for Def. 2 and 16 for Def. 3).
We observe a similar redundancy level when considering only full feeders.
Figure 4: Redundancy among a subset of 100 existing VPs selected using two
different techniques for three increasingly stricter redundancy definitions.
Randomly selecting VPs (top row) returns significantly more pairs of redundant
VPs.
## 5 Main challenge: prevent overfitting
Our design objective is a general framework that can accommodate different
definitions of redundancy in selecting the set of least redundant VPs.
However, optimizing selection for one objective is likely to overfit, leading
to poor performance for other objectives. Thus, while the three definitions in
§4 enable illustrating the redundancy across current VPs, none of them are
used in the design of MVP. These definitions are too naive to accurately
quantify redundancies between the VPs.
We explore this risk of overfitting to a particular objective using a VPs
selection strategy optimized for one objective: minimizing redundancy. This
selection strategy, which we name greedy specific, iteratively selects (in a
greedy fashion) the VP that minimizes the proportion of redundant updates
across all the updates collected by the selected VPs. We implement three
versions of it, one for each redundancy definition used in §4. Thus, greedy
specific approximates an optimal VP selection when the goal is to minimize
redundancy between VPs according to a specific definition of redundancy.
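A minimal sketch of this baseline follows, assuming an `is_redundant` predicate that implements one of the definitions from §4 and non-empty per-VP feeds; the paper's actual implementation details are not specified here.

```python
def greedy_specific(vps, updates, is_redundant, k):
    """Greedy specific baseline: at each step, add the VP that minimizes the
    proportion of redundant updates among all updates of the selected VPs.
    updates[v]: updates seen by VP v; is_redundant(u, pool): Def. 1, 2, or 3."""
    selected, pool, n_redundant = [], [], 0
    for _ in range(k):
        best, best_cost, best_hits = None, float("inf"), 0
        for v in vps:
            if v in selected:
                continue
            hits = sum(1 for u in updates[v] if is_redundant(u, pool))
            cost = (n_redundant + hits) / (len(pool) + len(updates[v]))
            if cost < best_cost:
                best, best_cost, best_hits = v, cost, hits
        selected.append(best)
        pool.extend(updates[best])
        n_redundant += best_hits
    return selected
```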
_Greedy specific limits redundancy._ We select 100 VPs using greedy specific.
Logically, the selected VPs are less redundant (see Fig. 4, bottom row)
compared to the 100 VPs randomly selected. With the loose Def. 1, only 30 VPs
have >50% of their updates redundant with ones observed by two other VPs or
more. This number drops to 9 with Def. 2 and 5 with Def. 3. This result
highlights that while VPs can be highly redundant, nonredundant pairs of VPs
also exist.
_Greedy specific overfits._ Greedy specific overfits because it optimizes one
particular objective. Thus, it works well for this objective but not for the
others. We confirm this overfitting effect in §9 where we benchmark greedy
specific against MVP on various objectives and show that it performs poorly on
objectives that it does not optimize. Consequently, one would need to design a
greedy specific VPs selection for every possible definition of data
redundancy—which is impractical given that there is an infinite number of
definitions.
## 6 Methodology Overview
MVP samples BGP updates from RIS and RV at the VP granularity. Our method has
four steps that we overview below.
_Step 1 (§ 7.1): Select a large, unbiased set of BGP events that we use to
gauge pairwise redundancy between VPs._ MVP evaluates the redundancy between
two VPs based on a carefully selected set of non-global BGP events (i.e., AS
path changes). Global events are typically seen by all VPs and have the same
impact on every VP view, rendering them less discriminating for this purpose.
We stratify our selection of sampled events across space and time to avoid
bias.
_Step 2 (§ 7.2): Characterize how VPs experience the selected events._ For
every BGP event, MVP quantifies topological features [48] of the ASes involved
as observed by each VP. These features embed information about the four
attributes of a BGP update: time, prefix, AS path, and communities.
_Step 3 (§ 7.3): Compute pairwise redundancy between VPs._ MVP computes the
pairwise Euclidean distance in an $n$-dimensional space, where $n$ is the
number of topological features times the number of events. VP pairs with
similar feature values for many events are close in this space and thus likely
redundant. MVP then computes the average Euclidean distances between each pair
of VPs computed over different and nonoverlapping time periods.
_Step 4 (§ 7.4): Sort and select the least redundant VPs._ MVP relies on a
greedy algorithm that considers both data redundancy and its volume to build a
set of the most valuable VPs. MVP first adds the VP with the lowest average
Euclidean distance to all other VPs, and then greedily adds the VP that
balances minimal redundancy with already selected VPs and minimal additional
data volume that the VP brings.
## 7 Methodology Details
In the following, we consider the set of VPs $V$ that includes all VPs from
RIS and RV. We compute the RIB of VP $v$ at time $t$ using its last RIB dump
before $t$ and subsequent updates until $t$. We use this RIB to construct and
maintain the undirected weighted graph $G_{v}(t)=(N_{v}(t),E_{v}(t))$ from the
AS paths of the best routes observed by $v$ at time $t$, with $N_{v}(t)$ the
set of nodes and $E_{v}(t)\subseteq N_{v}(t)\times N_{v}(t)$ the set of AS links. The
edges are undirected because two identical paths in opposite directions should
not appear as nonredundant. Each edge in $E_{v}(t)$ has a weight in
$\mathbb{Z}^{+}$, which is the number of routes in the RIB that include this edge in their AS path.
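As an illustration, the graph $G_{v}(t)$ can be built as follows (a sketch using networkx; the handling of AS-path prepending is our assumption):

```python
import networkx as nx

def build_as_graph(rib_as_paths):
    """Build the undirected weighted AS graph G_v(t) from a VP's RIB.
    rib_as_paths: iterable of AS paths, e.g. [[64500, 64501, 64502], ...].
    Each edge weight counts the routes whose AS path contains the edge."""
    G = nx.Graph()
    for path in rib_as_paths:
        # Collapse consecutive duplicate ASes (prepending) before
        # extracting links.
        hops = [a for i, a in enumerate(path) if i == 0 or a != path[i - 1]]
        for a, b in zip(hops, hops[1:]):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)
    return G
```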
### 7.1 Select BGP events to assess redundancy
ID | Name | # of ASes | Avg. degree | Description
---|---|---|---|---
1 | Stub | 63310 | 3 | ASes without customers
2 | Transit-1 | 10845 | 27 | Transit ASes with a customer cone size lower than the average
3 | Transit-2 | 704 | 267 | Transit ASes $\notin$ Transit-1
4 | HyperGiant | 15 | 1078 | Top 15 as defined in [8]
5 | Tier1 | 19 | 1817 | Tier1 in the CAIDA dataset [16]
Table 1: MVP balances selected events across 5 AS types.
_MVP uses local and partially visible new-AS-link events._ MVP focuses on BGP
events that trigger a new AS link to appear in the path to reach prefix $p$
from different VPs. A new-AS-link event is a candidate event in $\mathcal{C}$
if at least two and fewer than half of the VPs begin to use the same new AS
link to reach the same prefix within a 10-minute window (to accommodate
typical BGP convergence and path exploration delays [29, 35]). Since the aim
of MVP is to find data unique to individual VPs, we exclude global events
(i.e., seen by most VPs) to focus on local events.
_MVP avoids biases across time and location._ From candidate set
$\mathcal{C}$, MVP builds the final set of events $\mathcal{E}$ by selecting
15 events in each of 500 different and nonoverlapping 10-minute time periods. Adding more periods does not significantly affect the results.
MVP samples time periods randomly within a one-month timeframe to avoid mis-
inferring one larger event (e.g., a route leak that continuously generates new
links for multiple hours) as several smaller AS-link-level events. Inspired by
previous approaches to mitigate the risk of over-sampling core or stub (edge)
ASes [45, 37], our approach classifies ASes into five categories (Table 1) and
selects an equal number of new-AS-link events for every pair of AS categories.
We distinguish two classes of transit providers by customer cone size
(Transit-1 and -2) since they have different topological properties. If an AS
belongs to more than one category, we classify it in the category with the
highest ID. ASes classified in a lower row of Table 1 have a higher degree,
and there are more low-degree ASes than high-degree ASes.
Fig. 5 shows the proportion of selected events for each of the 15 pairs of AS categories (the matrices are symmetric) and for 7500 events selected in January 2023 using two schemes: balanced and random. The random selection (Fig. 5(b))
selects many more events involving Transit-2 ASes (69%) than hypergiants
(11%), while our balanced selection scheme mitigates biases by selecting the
same number of links in every category (Fig. 5(a)). For each time period, MVP selects one event for each of the 15 pairs of AS categories, yielding $15\times 500=7500$ events ($|\mathcal{E}|=7500$) for use in the next step.
(a) Balanced selection. (b) Random selection.
Figure 5: MVP selects the new-AS-link events using a balanced selection scheme
that reduces bias (Fig. 5(a) vs. Fig. 5(b)). The x- and y-axis are the five
categories of ASes (see Table 1).
### 7.2 Quantifying the observation of the VPs
_MVP considers the four main BGP attributes._ MVP computes topological
features on the graphs $G_{v}(t)$ for all VPs. The combination of these
topological features prevents overfitting as the graphs on which they are
computed embed information about the four main BGP attributes (§2). More
concretely, the graphs $G_{v}(t)$ embed information about (i) the time as the
graph is updated over time, (ii) the AS path as it is used to build the AS
graph, (iii) the prefixes as they are used to weight every edge on the graph,
and (iv) the community values as they are strongly correlated with the AS
path. We confirm this correlation by downloading the first RIBs of Jan. 2023
for all VPs and analyzing the correlation between the AS path and the set of
BGP communities. We find that two identical AS paths share the exact same set
of BGP communities in 93% of the cases. We thus do not embed more information
about BGP communities because many of them encode local traffic engineering
decisions [17] that could lead to MVP overfitting. We validate this design
choice in §9.1.
Type | Category | Name | Weighted | Index
---|---|---|---|---
Node-based | Centrality metrics | Closeness centrality | $\checkmark$ | 0
Node-based | Centrality metrics | Harmonic centrality | $\checkmark$ | 1
Node-based | Neighborhood richness | Average neighbor degree | $\checkmark$ | 2
Node-based | Neighborhood richness | Eccentricity | $\checkmark$ | 3
Node-based | Topological pattern | Number of triangles | $\times$ | 4
Node-based | Topological pattern | Clustering | $\checkmark$ | 5
Pair-based | Closeness metrics | Jaccard | $\times$ | 6
Pair-based | Closeness metrics | Adamic Adar | $\times$ | 7
Pair-based | Closeness metrics | Preferential attachment | $\times$ | 8
Table 2: Node-based and pair-based features used by MVP.
_MVP uses 15 diverse topological features (Table 2)._ MVP computes topological
features (extracted from literature [20]) that are either node-based or link-
based. Node-based features are computed for the two ends of a new AS link,
while link-based are computed for the new AS link. MVP uses six node-based
features that we classify into three categories. The first one quantifies how
central and connected a node is in the graph; the second quantifies how
connected are the neighboring nodes; and the third quantifies the topological
patterns (e.g., triangles) that include the node. We classify the three pair-
based features into a single category that measures how close two nodes are
based on their neighboring nodes. Five features rely on edge weights. We omit
other topological features as they are redundant with the selected ones.
_MVP computes the value of the features for each VP and selected event._
Consider the event $e\in\mathcal{E}$ that is the appearance of the AS link
$(e_{AS1},e_{AS2})$ at time $e_{t}$, and the VP $v\in V$. Computation of the
feature values depends on the feature type. We denote $F_{n}$ (resp. $F_{p}$)
the set of node-based (resp. pair-based) features and show how MVP computes
the value of these two types of features for event $e$ and VP $v$.
Node-based features: Consider feature $f_{i}\in F_{n}$ and $f_{i}(x,G_{v}(t))$
its value for node $x$ on the graph $G_{v}(t)$, with $i$ the feature index in
Table 2. MVP computes the following 12-dimensional feature vector.
$T_{\text{node-based}}(v,e)=[f_{0}(e_{AS1},G_{v}(e_{t})),\,f_{0}(e_{AS2},G_{v}(e_{t})),\,\dots,\,f_{5}(e_{AS1},G_{v}(e_{t})),\,f_{5}(e_{AS2},G_{v}(e_{t}))]$
Pair-based features: Consider feature $f_{i}\in F_{p}$ and
$f_{i}(x_{1},x_{2},G_{v}(t))$ its value for the node pair $(x_{1},x_{2})$ on
the graph $G_{v}(t)$, with $i$ the feature index in Table 2. MVP computes the
following 3-dimensional feature vector.
$T_{\text{pair-based}}(v,e)=[f_{6}(e_{AS1},e_{AS2},G_{v}(e_{t})),\,\dots,\,f_{8}(e_{AS1},e_{AS2},G_{v}(e_{t}))]$
The final feature vector used by MVP is $T(v,e)$, a 15-dimensional vector that is the concatenation (denoted $\oplus$) of the node-based and pair-based features:
$T(v,e)=T_{\text{node-based}}(v,e)\oplus T_{\text{pair-based}}(v,e)$
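A sketch of this computation with networkx is shown below; the exact feature implementations (e.g., which features use edge weights, following Table 2) involve assumptions on our part.

```python
import networkx as nx

def feature_vector(G, as1, as2):
    """Sketch of T(v, e) for the new link (as1, as2) on a VP's weighted AS
    graph G: 12 node-based values (6 features x 2 endpoints) followed by
    3 pair-based values (Table 2). Assumes G is connected."""
    node_feats = []
    for n in (as1, as2):
        node_feats += [
            nx.closeness_centrality(G, u=n, distance="weight"),
            nx.harmonic_centrality(G, nbunch=[n], distance="weight")[n],
            nx.average_neighbor_degree(G, nodes=[n], weight="weight")[n],
            nx.eccentricity(G, v=n),          # hop-based here, for simplicity
            nx.triangles(G, n),               # unweighted, as in Table 2
            nx.clustering(G, n, weight="weight"),
        ]
    pair = [(as1, as2)]
    pair_feats = [
        next(nx.jaccard_coefficient(G, pair))[2],
        next(nx.adamic_adar_index(G, pair))[2],
        next(nx.preferential_attachment(G, pair))[2],
    ]
    return node_feats + pair_feats
```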
### 7.3 Redundancy scoring
MVP computes pairwise redundancy between VPs in the following four steps.
_Step 1: Concatenate the feature vectors._ MVP first concatenates the computed
topological feature vectors (15 features) for all the events selected in the
same time period (15 events). We denote $\mathcal{E}_{p}$ the events selected
in the p-th time period ($|\mathcal{E}_{p}|=15$), with $0\leq p<500$, and
denote $e_{p,i}\in\mathcal{E}_{p}$ the i-th selected event in the p-th time
period. $F(v,p)$ is the concatenated feature vector for VP $v$ and the events
$\mathcal{E}_{p}$, which has $15*15=225$ dimensions and which MVP calculates
as:
$\displaystyle F(v,p)=T(v,e_{p,0})\oplus T(v,e_{p,1})\oplus\dots\oplus
T(v,e_{p,14})$
_Step 2: Normalize concatenated feature vectors._ MVP normalizes the data for
each time period using the feature matrix $\mathcal{M}(p)$ that includes the
concatenated feature vectors for all VPs (rows) and events (columns) in period
$p$.
$\mathcal{M}(p)=\begin{bmatrix}F(v_{0},p)\\ \vdots\\ F(v_{|V|-1},p)\end{bmatrix}$
MVP normalizes (operation $\bigtriangledown$) the matrix $\mathcal{M}(p)$
column-wise using a standard scaler that transforms every column such that its
average is zero and its standard deviation is one.
_Step 3: Compute Euclidean distance between VPs._ MVP uses the normalized
matrix $\bigtriangledown(\mathcal{M}(p))$ to compute the Euclidean distance
between every pair of VPs and for all events in the time period $p$ (operation
$\diamond$). We denote $\bigtriangledown(\mathcal{M}(p))_{x}$ the x-th row in
the matrix $\bigtriangledown(\mathcal{M}(p))$ and
$\bigtriangledown(\mathcal{M}(p))_{x,i}$ its value at index $i$ (i.e., the
i-th column). We define the Euclidean distance between the n-th VP $v_{n}$ and
the m-th VP $v_{m}$ over the selected events in the time period $p$ as
follows.
$\displaystyle\diamond(v_{n},v_{m},p)=\sum_{i=0}^{224}\big(\bigtriangledown(\mathcal{M}(p))_{n,i}-\bigtriangledown(\mathcal{M}(p))_{m,i}\big)^{2}$
_Step 4: Compute the average distance over all time periods._ The redundancy
score $\mathcal{R}(v_{n},v_{m})$ between two VPs $v_{n}$ and $v_{m}$ relates
to the normalized average Euclidean distance between them over the 500 time
periods, computed as:
$\displaystyle\mathcal{R}(v_{n},v_{m})=1-\coprod\Big(\frac{1}{500}\sum_{p=0}^{499}\diamond(v_{n},v_{m},p)\Big)$
The operator $\coprod$ applies a min-max scaler so that scores are between 0
and 1, with 1 meaning the most redundant pair of VPs and 0 the least redundant pair. MVP thus computes and returns a redundancy score for every pair
of VPs.
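The four steps can be condensed into a few lines of numpy; this is a sketch under the assumption that the per-period feature matrices are already available.

```python
import numpy as np

def redundancy_scores(period_matrices):
    """Sketch of §7.3. period_matrices: list of 500 arrays, each |V| x 225
    (VPs x concatenated per-period features). Returns the |V| x |V|
    redundancy matrix R, with 1 = most redundant pair."""
    dists = []
    for M in period_matrices:
        # Step 2: column-wise standard scaler (epsilon guards flat columns).
        Z = (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-12)
        # Step 3: squared Euclidean distances between all VP pairs.
        sq = (Z ** 2).sum(axis=1)
        D = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
        dists.append(np.maximum(D, 0.0))
    # Step 4: average over time periods, then min-max scale.
    avg = np.mean(dists, axis=0)
    scaled = (avg - avg.min()) / (avg.max() - avg.min())
    return 1.0 - scaled
```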
### 7.4 Generating a set of VPs
We now explain how MVP generates a set of VPs $\mathcal{O}$ that minimizes the
proportion of redundant information collected. MVP initializes the set
$\mathcal{O}$ with the most redundant VP, i.e., the one with the lowest sum of
Euclidean distances to all the other VPs. This design choice allows the redundant part of the BGP data (e.g., c2p links) to be visible to the first selected VP. At every following iteration, MVP builds a candidate set of VPs
$\mathcal{K}$ that contains the unselected VPs exhibiting the lowest maximum
redundancy score. The maximum redundancy score $P$ measures the maximum
redundancy between a VP $v$ and the set of VPs $\mathcal{O}$ and is defined as
follows.
$P(\mathcal{O},v)=\max(\mathcal{R}(v,v_{i}),\forall v_{i}\in\mathcal{O})$
MVP adds to $\mathcal{K}$ the $\alpha=$ 25% of the nonselected VPs that
exhibit the lowest maximum redundancy score.
MVP then adds to the set $\mathcal{O}$ the VP in the candidate set $\mathcal{K}$ that collects the lowest volume of data. MVP estimates the volume of data collected by the
VPs by counting the number of updates that they received over 365 one-hour
periods, one randomly selected in each day of the year to align with the
yearly update rate of MVP (§8). The $\alpha$ parameter allows tuning
redundancy and volume knobs: a low $\alpha$ prioritizes low redundancy while a
higher $\alpha$ prioritizes low resulting data volume. We found that $\alpha=$
25% performs well in practical scenarios (we tested a range from 10% to 50%).
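The greedy selection can be sketched as follows, with `R` the redundancy matrix from §7.3 and `volume` the per-VP update counts; tie-breaking details are assumptions.

```python
import numpy as np

def select_vps(R, volume, k, alpha=0.25):
    """Sketch of §7.4. R: pairwise redundancy matrix (|V| x |V|);
    volume[v]: estimated number of updates of VP v; k: VPs to select."""
    n = len(R)
    # Seed with the most redundant VP (highest summed redundancy score,
    # i.e., lowest summed distance to all other VPs).
    selected = [int(np.argmax(R.sum(axis=1)))]
    while len(selected) < k:
        rest = [v for v in range(n) if v not in selected]
        # Maximum redundancy P(O, v) of each candidate with the selected set.
        P = {v: max(R[v][s] for s in selected) for v in rest}
        cutoff = max(1, int(alpha * len(rest)))
        candidates = sorted(rest, key=P.get)[:cutoff]
        # Among the alpha least redundant candidates, take the lightest feed.
        selected.append(min(candidates, key=lambda v: volume[v]))
    return selected
```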
## 8 System functionalities
MVP runs on a commodity server. Upon launch, it collects BGP routes from RIS
and RV using BGPStream [36] and computes the redundancy between every pair of
VPs at a yearly granularity, which is enough given that redundancies between
VPs remain stable over time (see §9.3). MVP then takes as input a year and a
volume of data and returns a set of VPs that generates a volume of data lower
than the volume specified as input. MVP returns the redundancy scores
calculated for every pair of VPs. Thus, users have the option to compute their
own set of complementary VPs based on these redundancy scores and some
additional constraints that they might have. This is useful when users want to
include (or exclude) some VPs (regardless of how redundant they are), which
will result in another set of VPs rather than the default set provided by MVP.
For instance, when trying to detect new peering, a user may want to take some
VPs at an IXP in addition to some VPs selected by MVP.
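For example, a user could restrict a BGPStream session to an MVP-selected set of VPs by filtering on the peer address (a sketch using the pybgpstream bindings; the peer addresses, collectors, and time window are placeholders):

```python
import pybgpstream

# Hypothetical MVP-selected VPs, identified by their peer IP addresses.
vps = {"192.0.2.1", "2001:db8::1"}

stream = pybgpstream.BGPStream(
    from_time="2023-05-01 00:00:00",
    until_time="2023-05-01 01:00:00",
    collectors=["rrc00", "route-views2"],   # example RIS and RV collectors
    record_type="updates",
)
for elem in stream:
    # Keep only updates exported by the selected VPs.
    if elem.peer_address in vps:
        print(elem.time, elem.peer_asn, elem.fields.get("prefix"))
```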
MVP runs at https://bgproutes.io, allowing users to get a list of VPs or the
redundancy scores without any computational expense. We implemented three
versions of MVP: one for IPv4 routes (MVPv4), one for IPv6 routes (MVPv6), and
one that considers both IPv4 and IPv6 routes (MVP${}^{v4}_{v6}$). The three
versions use the same methodology (described in §7) to compute redundancy and
generate a set of VPs.
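For reference, here is a minimal sketch of how the updates of one selected VP
could be fetched with BGPStream’s Python bindings (pybgpstream, v2 API). The
collector, peer ASN, and time window are placeholders, not a VP actually
chosen by MVP; bgproutes.io is the authoritative source for MVP’s VP lists.

```python
import pybgpstream

# Hypothetical example: a VP is identified by its collector and peer ASN.
stream = pybgpstream.BGPStream(
    from_time="2023-05-01 00:00:00",
    until_time="2023-05-01 01:00:00 UTC",
    collectors=["rrc00"],          # placeholder collector
    record_type="updates",
    filter="peer 25152",           # placeholder peer ASN
)
for elem in stream:
    # Each element carries the BGP attributes MVP's features build on:
    # time, prefix, AS path, communities, ...
    print(elem.time, elem.peer_asn, elem.fields.get("prefix"))
```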
## 9 Evaluation
We show that MVP improves the trade-off between the volume of data collected
and the routing information inferred compared to current VP selection
strategies in five use cases for which we have ground truth (§9.1). We then
show that MVP would improve coverage and accuracy of previous studies for
which ground truth is unknown (§9.2). Finally, we show that the key design
choices of MVP are sound (§9.3).
Use case | Objective | Naive baselines | | | Greedy specifics: use cases (§9.1) | | | | | Greedy specifics: Def. (§4) | |
---|---|---|---|---|---|---|---|---|---|---|---|---
 | | Random | AS-distance | Unbiased [45] | _I_ | _II_ | _III_ | _IV_ | _V_ | Def. 1 | Def. 2 | Def. 3
Transient path detection (_I_) | 50 % | 1.55 | 1.76 | 1.82 | 0.70 | 2.99 | 3.29 | 3.82 | 2.89 | 1.96 | 2.12 | 1.69
 | 70 % | 1.38 | 1.62 | 1.53 | 0.76 | 3.24 | 3.51 | 3.42 | 3.09 | 1.56 | 1.56 | 1.78
 | 90 % | 1.13 | 1.17 | 1.21 | 0.75 | 1.66 | 1.67 | 1.66 | 1.66 | 1.33 | 1.15 | 1.59
MOAS detection (_II_) | 50 % | 2.35 | 3.38 | 3.41 | 2.31 | 0.98 | 1.80 | 2.83 | 1.53 | 3.39 | 2.85 | 3.98
 | 70 % | 2.18 | 3.44 | 3.38 | 2.56 | 0.85 | 1.79 | 2.30 | 1.83 | 3.02 | 2.66 | 3.67
 | 90 % | 1.98 | 2.69 | 3.06 | 2.37 | 1.04 | 2.31 | 2.82 | 2.56 | 2.46 | 2.19 | 3.31
AS topology mapping (_III_) | 50 % | 2.59 | 2.97 | 2.43 | 1.58 | 1.29 | 0.71 | 1.53 | 1.94 | 2.27 | 2.18 | 3.35
 | 70 % | 2.06 | 2.29 | 2.13 | 1.33 | 1.22 | 0.64 | 1.29 | 1.37 | 2.14 | 1.64 | 2.28
 | 90 % | 1.72 | 1.88 | 1.80 | 1.30 | 1.18 | 0.77 | 1.23 | 1.27 | 1.73 | 1.64 | 1.80
Traffic engineering detection (_IV_) | 50 % | 4.59 | 4.74 | 4.82 | 3.76 | 2.67 | 2.34 | 0.47 | 3.33 | 3.95 | 3.34 | 4.76
 | 70 % | 2.71 | 3.37 | 3.52 | 3.04 | 1.86 | 2.89 | 0.41 | 1.85 | 4.34 | 2.02 | 3.51
 | 90 % | 1.55 | 1.70 | 1.95 | 1.61 | 1.54 | 1.52 | 0.32 | 1.46 | 1.88 | 1.33 | 1.89
Unnecessary updates detection (_V_) | 50 % | 1.72 | 2.89 | 2.41 | 2.19 | 2.10 | 2.63 | 2.20 | 0.38 | 2.43 | 2.92 | 2.59
 | 70 % | 1.30 | 2.04 | 1.91 | 1.43 | 1.53 | 1.50 | 1.90 | 0.39 | 1.51 | 1.63 | 2.02
 | 90 % | 1.01 | 1.36 | 1.39 | 1.17 | 1.18 | 1.14 | 1.16 | 0.50 | 1.09 | 1.38 | 1.35
Table 3: Data reduction factors with MVPv4 compared to several baselines for
five use cases. MVP outperforms every baseline for all five use cases. Unlike
the greedy specifics, MVP largely avoids overfitting.
### 9.1 Benchmarking MVP
We benchmark MVP against three naive baselines and eight greedy specific
selection strategies.
_Use cases._ We evaluate MVP on five different use cases that we carefully
picked such that each BGP attribute is useful for at least one of them. For
instance, the time is useful to detect transient events (use case _I_); the
prefix is useful to detect Multiple Origin ASes (MOAS) prefixes (use case
_II_); the AS path is useful to map the Internet topology (use case _III_);
and the community values are useful to detect traffic engineering (use case
_IV_) and unnecessary updates (use case _V_). Our goal is to demonstrate that
MVP does not overfit on some particular use cases or BGP attributes. For each
use case, we process the updates collected during 100 one-hour periods
(randomly selected in May 2023) and benchmark MVP on the set of events found,
for which we thus have ground truth. We briefly describe below each use case
along with our
experimental settings.
_I_ Transient path detection. Transient paths are BGP routes visible for less
than five minutes, a typical BGP convergence delay [29], and which can be
attributed to e.g., path exploration [35]. We focus on 200 randomly selected
transient path events for every one-hour period, making a total of
$100*200=20000$ events used.
_II_ MOAS prefixes detection. MOAS prefixes are announced by multiple distinct
ASes [44], which can be caused by legitimate [53] or malicious [40, 49, 13]
actions. We focus on 200 randomly selected MOAS events for every one-hour
period, making a total of $100*200=20000$ MOAS events used.
_III_ AS topology mapping. This is useful for e.g., inferring BGP policies
[31] or AS paths [32]. For each VP, we process the first RIB dump of May 2023
as well as the updates collected during the 100 one-hour periods and focus on
all distinct AS links observed.
_IV_ Traffic engineering detection. We focus on action communities, i.e., those
associated with traffic engineering actions [50]. For every one-hour time
period, we focus on 80 updates for which a path change coincides with the
appearance of an action community, making a total of $100*80=8000$ path
changes used.
_V_ Unnecessary Updates detection. An unnecessary update is a BGP update that
only signals a change in the community values but not in the AS path [28]. We
consider 200 unnecessary updates randomly picked within each one-hour period,
making a total of $100*200=20000$ events used.
_Baselines._ We benchmarked MVP against three naive baselines commonly used in
practice (§3.2): _(i)_ random selection of VPs, which results in a skewed set
of VPs as they exhibit biases [45]; _(ii)_ AS-distance, i.e., select the first
VP randomly and the following ones to maximize the AS-level distance between
selected VPs; and _(iii)_ unbiased, i.e., start with all VPs and iteratively
remove the VP that contributes most to the bias of the remaining set. We
measure the bias using the definition in [45].
We compare MVP against the three greedy specific VP selection strategies
optimized for Def. 1, 2, and 3 (§4). Additionally, we compare MVP against five
other greedy specifics, one optimized for each of the five use cases described
above. Unlike the greedy specifics described in §5, these five greedy
specifics optimize the trade-off between the volume of the data and its
capacity to achieve a particular objective. For instance, when the objective
is to map the AS topology (use case _III_), greedy specific iteratively
selects the VP that best improves the trade-off between the number of
discovered AS links and the volume of processed data.
_Reduction factor definition._ We define the reduction factor to capture how
much MVP reduces the number of BGP updates required to fulfill a particular
objective. More precisely, assume an objective $O$ and a baseline $B$. We
iteratively build a set of VPs using baseline $B$. At every iteration, we
download all the updates that the newly selected VP observes during 100 one-
hour periods randomly selected in May 2023. We stop iterating when the updates
collected by the selected VPs suffice to meet $O$. Similarly, we build
another set of VPs using MVP and stop selecting new VPs (see §7.4) when the
selected ones meet $O$. The reduction factor is the ratio between the number
of updates processed with $B$ and with MVP. More formally, the reduction
factor is $\frac{|U_{B}^{O}|}{|U_{MVP}^{O}|}$ with $|U_{B}^{O}|$ and
$|U_{MVP}^{O}|$ the number of updates processed to fulfill objective $O$ with
baseline $B$ and MVP respectively. A reduction factor $=2$ means that we can
fulfill objective $O$ with half as many updates when using MVP compared to
when using baseline $B$. More generally, a reduction factor $>$ 1 means that
we can fulfill the same objective with less data when using MVP compared to
when using $B$.
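The computation just described can be sketched as follows; `updates_of` and
`meets_objective` are hypothetical callbacks standing in for the paper’s
measurement machinery (downloading a VP’s updates over the 100 sampled
periods, and testing objective $O$).

```python
def updates_to_meet(order, updates_of, meets_objective):
    """Count the updates processed when adding VPs in `order` until the
    objective is met; `order` comes from a baseline B or from MVP."""
    selected, total = [], 0
    for vp in order:
        selected.append(vp)
        total += updates_of(vp)
        if meets_objective(selected):
            return total
    raise ValueError("objective not met even with all VPs")

def reduction_factor(baseline_order, mvp_order, updates_of, meets_objective):
    # |U_B^O| / |U_MVP^O|: > 1 means MVP needs less data than baseline B.
    return (updates_to_meet(baseline_order, updates_of, meets_objective)
            / updates_to_meet(mvp_order, updates_of, meets_objective))
```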
_Benchmark results._ Table 3 summarizes our results. For each use case, we
focus on three objectives: mapping X% of the AS topology (use case _III_) or
detecting X% of the events (use cases _I_, _II_, _IV_, and _V_), with X
equal to 50, 70, or 90. Here, we focus on the performance of MVPv4. MVPv6 and
MVP${}^{v4}_{v6}$ yield comparable performance (see §B).
Takeaway #1: MVP outperforms every naive baseline for every use case, i.e.,
the reduction factor is always above one. For instance, we detect 90% of the
MOAS events with 3.06$\times$ less data (the reduction factor is 3.06) when
using MVP compared to selecting the VPs using the unbiased baseline. This
means that MVP only needs 32% of the updates required by the unbiased baseline
to fulfill the objective. Consistent with what we observe in our mini-Internet
simulations (§3), the random baseline performs better than AS-distance.
Takeaway #2: MVP generalizes whereas the greedy specifics overfit. As
expected, for a given use case, MVP is less performant than the greedy
specific strategy optimized for that use case. For every other use case,
however, MVP performs better than the greedy specifics that are not optimized
for it. These results demonstrate that the greedy specific strategies overfit.
They are also not practical, as they need ground truth.
### 9.2 Impact on previous works
We show that MVP would improve the outcome of three measurement studies and
tools that are fueled by the BGP data from RIS and RV (and for which there is
no ground truth).
#### 9.2.1 Inference of AS properties
We show that MVP improves AS relationship inferences (a popular research
problem [31, 27, 22, 18]) and AS ranking [9].
_MVP helps to infer +15% more AS relationships._ We replicate the methodology
proposed in [31] that relies on public BGP data from RIS and RV to infer AS
relationships and build the widely-used CAIDA AS-relationship dataset [16]. We
compute the number of inferred AS relationships for every month in 2023 when
using the 648 VPs that CAIDA uses to build its dataset (as of January 2023) and
when using VPs selected by MVP. We ensure that the VPs selected with MVP
generate the same volume of data as the 648 used by CAIDA so that any
performance gap can confidently be attributed to MVP. We find that the VPs
selected by MVP enable consistent (from Jan. 2023 to Aug. 2023) inference of
$\approx$90k additional AS relationships ($\approx$+17%) while missing only
$\approx$11k AS relationships ($\approx$2.2%) present in the original dataset.
Thus, the tradeoff is largely in favor of using MVP ($\approx$+15% overall).
We also replicated the AS relationship validation algorithm used in [31]
(which relies on the IRR and RIR data) and found that the true positive rate
(the metric used in [31]) remains identical (97%). Thus, MVP significantly
improves coverage without processing more data or losing accuracy.
_MVP prevents flawed inferences in the ASRank dataset._ We replicate the
methodology used by ASRank [9] to compute the AS Customer Cone Sizes (CCS). We
find that the CCS changes for 1067 ASes when using MVP and manually
investigated two cases of substantial changes:
Case I (https://asrank.caida.org/asns?asn=132337&type=search): AS132337 has a
CCS of 1 in the original dataset and a CCS of 18k when using MVP, making it
the 15th highest-ranked AS by CCS. We contacted AS132337, who confirmed that
it has 18k customers. MVP correctly ranks AS132337 because it selects the
unique VP that sees it as a transit AS.
Case II (https://asrank.caida.org/asns?asn=24745&type=search): AS24745 is the
route server of Balcan-IX and has a CCS of 16 in the original ASRank dataset.
However, we manually checked its participants and found that the 16 customers
are misclassified and actually peer through AS24745. With MVP, the CCS of
AS24745 is 1 and these errors are avoided.
In both cases, MVP enables more accurate inferences of CCSs because it
collects more diverse AS paths. Thus, we can confidently say that MVP would
prevent many flawed inferences likely present in the dataset provided by
ASRank.
#### 9.2.2 Detection of forged-origin hijacks
We show that MVP improves forged-origin hijack detection, which is the goal of
many systems that use BGP routes from RIS and RV [11, 44, 1, 15, 26]. Forged-
origin hijacks are a type of BGP hijack where the attacker prepends the valid
origin to the AS path to make the hijacked route appear legitimate.
_MVP improves the accuracy of forged-origin hijack inferences._ We replicate
the algorithm of DFOH [26] that uses routes collected by 287 RIS and RV VPs to
infer forged-origin hijacks. We implement two versions of DFOH, one called
DFOHMVP which uses a set of VPs selected with MVP, and another one called
DFOHR that uses a random set of VPs. In both versions, we ensure that the
volume of data collected is identical to the one used in [26]. As DFOH relies
on probabilistic inference, we measure the performance of DFOHMVP and DFOHR in
terms of True Positive Rate (TPR) and False Positive Rate (FPR). We obtain an
approximation of ground truth (needed to compute the TPR and FPR) by
implementing a third version of DFOH, called DFOHALL that uses all VPs from
RIS and RV. Observe that DFOHALL is an approximation of ground truth because
incorrect inferences are still possible even if all VPs are used. We restrict
our analysis to one month (Jan. 2022) because DFOHALL is resource-hungry as it
uses all VPs. We find that DFOHMVP uncovers 947 suspicious cases against only
700 for DFOHR. DFOHMVP outperforms DFOHR on both metrics: it has a TPR of
85.7% (against 61.1% for DFOHR) and an FPR of 14.4% (against 60.1% for
DFOHR), i.e., a roughly 4$\times$ lower false positive rate.
_DFOHR misses suspicious cases that DFOHMVP detects._ We manually
investigated, using public peering databases (e.g., PeeringDB), some of the
suspicious cases inferred by DFOHMVP and not by DFOHR.
appear particularly suspicious (thus useful for operators) and describe two of
them below (also found by the original DFOH).
Case I (http://dfoh.uclouvain.be/cases/2022-01-01_1239_267548): On Jan. 1,
2022, AS267548, a small Peruvian AS, appears between Sprint, a Tier-1 AS, and
AS199524, a large content provider. However, AS267548 is not supposed to
provide transit between these two ASes.
Case II (http://dfoh.uclouvain.be/cases/2022-01-06_9269_268568): On Jan. 6,
2022, AS9269, an ISP based in Hong Kong, appears directly connected with
AS268568, a Brazilian ISP. These two ASes do not share any IXP and are not
supposed to peer directly.
These two cases show that MVP enables the detection of additional potential
routing attacks that would otherwise be missed.
#### 9.2.3 Characterizing international routing detours
Experiment | Duration | # of VPs | # of processed updates | # of detours
---|---|---|---|---
Original paper | 1 month | All VPs | $\approx$61B | 174k
Random selection | 2 months | 624 (median) | $\approx$61B | 165k (median)
Random selection | 4 months | 313 (median) | $\approx$61B | 171k (median)
MVP selection | 2 months | 413 | $\approx$61B | 250k
MVP selection | 4 months | 220 | $\approx$61B | 263k
Table 4: Using fewer VPs selected by MVP enables a longer study that detects
more detours with the same volume of data.
We focus on a study that uses all VPs to characterize international routing
detours over one month [46]. International detours occur when two ASes in the
same country are reachable through an AS in another country, which can lead to
extra forwarding delays. We show that by using fewer VPs selected by MVP, we
can lengthen the duration of the study to find more detours without processing
more data.
_MVP helps to detect +44% more routing detours._ We replicate the methodology
used in [46] to detect routing detours except that _(i)_ we use a set of VPs
selected using MVP that generates $\alpha\times$ less data compared to using
them all, with $\alpha=2$ and $\alpha=4$, and _(ii)_ we run the analysis over
two months when $\alpha=2$ and four months when $\alpha=4$. Thus, the overall
volume of data collected remains similar ($\approx$61B updates), regardless of
$\alpha$. Table 4 shows the number of routing detours detected starting in May
2023 (and until June and August 2023 when $\alpha=2$ and 4, respectively). We
detect 250k detours over two months ($\alpha=2$) when using 413 VPs selected
by MVP, a +44% increase compared to using all VPs during one month as in [46].
When $\alpha=4$, we use 220 VPs selected by MVP over four months and find 263k
detours, again better than using all VPs over one month.
We explored the trade-off between the number of VPs and the duration of the
study using a random VPs selection strategy. We detected 165k detours when
using $\approx 624$ random VPs and running the analysis over two months (we
tested the random selection with 50 seeds and report the median in Table 4).
This is fewer than the 174k detours found when replicating the original
experiment, which demonstrates that an optimized VP selection enables
discovering more routing detours.
_MVP enables improved characterization of routing detours._ We replicate the
methodology used in [46] to rank countries based on their number of detours,
and ASes based on how often they originate a detoured path. We find
differences when using MVP, including two interesting cases:
Case I: Using MVP (with $\alpha=2$), we discover 33k (+68%) additional detours
traversing the US and 22k (+37%) traversing Russia compared to when using the
settings in [46]. These additional detours rank the US as the #1 country with
the highest number of routing detours and Russia as #2, whereas with the
settings in [46] Russia is ranked #1 and the US #2.
Case II: Using MVP (with $\alpha=2$) enables detecting 720 (+83%) additional
routing detours involving AS262503 compared to when using the settings of
[46]. This changes the ranking: AS262503 becomes #1, vs. #7 with the settings
in [46].
Since MVP uncovers more routing detours than the settings of [46], we can
confidently say that MVP improves the characterization of international
routing detours.
### 9.3 Soundness of design choices
We show that our three key design choices – yearly update frequency of
redundancy scores, balanced sampling, and topological feature selection – are
sound.
_MVP’s redundancy scores are stable enough over time that annual
recomputation suffices._ We ran MVP every six months, starting in January
2023 and then going backward until January 2018 (i.e., a total of ten
independent runs). We restrict the scope of this experiment to 100 randomly
selected VPs to limit the computational resources required. As expected, we
find that the redundancy score differences increase as the time interval
between two runs of MVP increases. However, these differences remain low. The
median difference between the scores of two runs of MVP separated by one year
is only 0.021 (which corresponds to a difference of 9%), and it increases to
0.171 (i.e., a difference of 23%) when the two runs are separated by four
years. We thus configure MVP to recompute redundancy scores and update its
set of selected VPs on a yearly basis (see §8), a good trade-off between
computational cost and performance.
Figure 6: MVP enables mapping more links than rMVP.
Use case | $\setminus\\{0,1\\}$ | $\setminus\\{2,3\\}$ | $\setminus\\{4,5\\}$ | $\setminus\\{6,7,8\\}$
---|---|---|---|---
_I_ | 1.04 | 1.05 | 1.07 | 1.06
_II_ | 1.17 | 1.02 | 1.07 | 1.34
_III_ | 1.09 | 1.11 | 1.09 | 1.12
_IV_ | 1.32 | 1.25 | 1.15 | 1.16
_V_ | 1.61 | 1.59 | 1.71 | 1.62
Table 5: Omitting one feature category reduces the performance of MVP for
every use case.
_The balanced sampling avoids biases in the collected data._ We implement
rMVP, a modified version of MVP where the new-AS-link events used to compute
redundancy scores across VPs are sampled randomly, i.e., using the
distribution depicted in Fig. 5(a) (instead of using the balanced sampling in
§7.1). We compare the performance of MVP and rMVP on AS topology mapping; we
observe similar results for the other use cases. We map the AS topology for
May 2023 (following the methodology in §9.2.1) using both MVP and rMVP and
with the same volume of data in either case. Fig. 6 depicts the proportion of
additional AS links that we can map when using MVP compared to when using
rMVP for every new-AS-link category. MVP always performs at least as well as
rMVP. The highest differences are when mapping stub-to-stub links (+3.9%) and
Transit1-to-Transit1 links (+2.6%). These two link categories are
underrepresented under random sampling (see Fig. 5(a)), demonstrating that
our balanced sampling scheme mitigates biases.
_Every feature category is useful._ We implement MVP
$\setminus\\{f_{i},..,f_{j}\\}$, a modified version of MVP where we omit
features $\\{f_{i},..,f_{j}\\}$ when computing redundancy scores, with $i...j$
the feature indexes in Table 2. We use four different versions of MVP
$\setminus\\{...\\}$, each omitting a different feature category. We show the
reduction factor of MVP over each MVP $\setminus\\{...\\}$ for use cases _I_ ,
_II_ , _III_ , _IV_ , and _V_ in §9.1, with the objective of detecting 70% of
the events or mapping 70% of the AS-level topology. Regardless of which
feature category is omitted, MVP performs better (i.e., the reduction factor
is above 1). We conclude that every feature category is valuable.
## 10 Related work
_Redundancy and bias between the VPs._ Chen et al. showed that VPs observe
identical (redundant) AS links and that it is possible to reduce the number of
VPs while providing similar measurement power [10]. However, they only focus
on one objective (observing AS links) whereas MVP works for any objective.
Previous works reported that the VPs are biased (in terms of location, network
size, etc.) [45, 14, 47]. MVP is data-driven and does not consider these
biases, as we show that an unbiased selection strategy performs poorly (§9.1).
_Strategies to select VPs._ Prior works demonstrated that carefully selecting
VPs increases the utility of the data [52], and proposed a greedy selection
strategy that performs better than other naive approaches [34, 52]. However,
their selection strategy optimizes one objective (discovering AS links) and
thus lacks generality (§9.1). Recent works also study the impact of the VP
selection on the discovered IP space and AS links [30].
_Placement of the VPs._ Gregori et al. proposed a methodology that finds a
relevant placement for a new VP [23]. Roughan et al. estimated that 700
strategically positioned VPs were enough to monitor the Internet topology
[43]. Finally, Cittadini et al. demonstrated the marginal utility of adding
new VPs at the core of the Internet [14].
_Strategies to select active measurement probes._ Active measurement platforms
(e.g., RIPE Atlas) also generate a large volume of data and several data-
driven approaches for probe selection exist [4, 25, 6]. Unlike MVP, these
approaches optimize the probe selection for specific use cases.
_Uses of topological features._ Previous works computed topological features
on the AS topology to detect routing anomalies [24, 26, 19].
## 11 Conclusion
We uncovered redundancy in the BGP routes exported by the RIS and RV VPs and
identified this redundancy as an opportunity to optimize the use of these data
collection systems. We presented MVP, a system that samples BGP data at the VP
granularity, enabling users to improve the coverage and accuracy of their
studies without processing more data.
The principles that MVP embodies can also lead to a better understanding of
the structure of the global Internet as well as how to optimize the
measurement and analysis of its routing system. For instance, our redundancy
scores could lead to more strategic approaches to gathering and retaining BGP
data, e.g., RIS and RV could deprioritize VPs which are overwhelmingly
redundant with many others, on a more scientific basis. Finally, our approach
can be adapted to active measurement platforms (e.g., Atlas [39]) to reach the
same objective of extensive coverage with reduced redundant data.
## 12 Acknowledgements
This work was supported by the ArtIC project (grant ANR-20-THIA-0006-01),
Région Grand Est, Inria Nancy-Grand Est, IHU of Strasbourg, University of
Strasbourg, University of Haute-Alsace, the RIPE NCC Community Projects Fund,
NSF CNS-2120399 and NSF OAC-2131987. Views are those of the authors and do not
represent the endorsements of the funding agencies.
## References
* [1] Global Routing Intelligence Platform. https://grip.inetintel.cc.gatech.edu/.
* [2] Emile Aben. Route Collection at the RIPE NCC - Where are we and where should we go? 2020. https://labs.ripe.net/author/emileaben/.
* [3] Rodrigo Aldecoa, Chiara Orsini, and Dmitri Krioukov. Hyperbolic graph generator. In Computer Physics Communications, 2015.
* [4] Malte Apple, Emile Aben, and Romain Fontugne. Metis: Better Atlas Vantage Point Selection for Everyone. In TMA ’22, 2022.
* [5] Lorenzo Ariemma, Mariano Scazzariello, and Tommaso Caiazzi. MRT#: a Fast Multi-Threaded MRT Parser. In IFIP/IEEE IM ’21, 2021.
* [6] Vaibhav Bajpai, Steffie Jacob Eravuchira, Jürgen Schönwälder, Robert Kisteleki, and Emile Aben. Vantage point selection for IPv6 measurements: Benefits and limitations of RIPE Atlas tags. In 2017 IFIP/IEEE IM, 2017.
* [7] BGPKIT. BGPKIT. 2022. https://blog.bgpkit.com/.
* [8] Timm Böttger, Félix Cuadrado, and Steve Uhlig. Looking for hypergiants in PeeringDB. In SIGCOMM CCR, 2018.
* [9] CAIDA. AS Rank. 2023. https://asrank.caida.org/.
* [10] Kai Chen, Chengchen Hu, Wenwen Zhang, Yan Chen, and Bin Liu. On the eyeshots of BGP Vantage Points. In GLOBECOM ’09, 2009.
* [11] Shinyoung Cho, Romain Fontugne, Kenjiro Cho, Alberto Dainotti, and Phillipa Gill. BGP hijacking classification. In TMA’19, 2019.
* [12] CIDR. CIDR Report. 2023. https://www.cidr-report.org/as2.0/.
* [13] CitizenLab. A case study of the China Telecom incident. 2012. https://citizenlab.ca/2012/12/.
* [14] Luca Cittadini, Stefano Vissicchio, and Benoit Donnet. On the quality of BGP route collectors for iBGP policy inference. In IFIP ’14, 2014.
* [15] Avichai Cohen, Yossi Gilad, Amir Herzberg, and Michael Schapira. Jumpstarting BGP security with Path-End Validation. In SIGCOMM’16, 2016.
* [16] CAIDA, UC San Diego. The CAIDA AS Relationships Dataset. 2022. https://www.caida.org/catalog/datasets/as-relationships/.
* [17] Benoit Donnet and Olivier Bonaventure. On BGP communities. In SIGCOMM CCR, 2008.
* [18] Guoyao Feng, Srinivasan Seshan, and Peter Steenkiste. UNARI: An Uncertainty-Aware Approach to AS Relationships Inference. In CoNEXT ’19, 2019.
* [19] Romain Fontugne, Anant Shah, and Emile Aben. AS Hegemony: A Robust Metric for AS Centrality. In SIGCOMM’17, 2017.
* [20] Linton C. Freeman. Centrality in social networks conceptual clarification. Social Networks, 1978.
* [21] Lixin Gao and Jennifer Rexford. Stable Internet Routing without Global Coordination. In SIGMETRICS ’00, 2000.
* [22] Vasileios Giotsas, Matthew Luckie, Bradley Huffaker, and kc claffy. Inferring Complex AS Relationships. In IMC ’14, 2014.
* [23] Enrico Gregori, Alessandro Improta, Luciano Lenzini, Lorenzo Rossi, and Luca Sani. On the Incompleteness of the AS-Level Graph: A Novel Methodology for BGP Route Collector Placement. In IMC ’12, 2012.
* [24] Kevin Hoarau, Pierre Ugo Tournoux, and Tahiry Razafindralambo. BGNN: Detection of BGP Anomalies Using Graph Neural Networks. In ISCC ’22, 2022.
* [25] Thomas Holterbach, Emile Aben, Cristel Pelsser, Randy Bush, and Laurent Vanbever. Measurement Vantage Point Selection Using A Similarity Metric. In ANRW ’17, 2017.
* [26] Thomas Holterbach, Thomas Alfroy, Amreesh Phokeer, Alberto Dainotti, and Cristel Pelsser. A System to Detect Forged-Origin BGP Hijacks. In NSDI’24, 2023.
* [27] Zitong Jin, Xingang Shi, Yan Yang, Xia Yin, Zhiliang Wang, and Jianping Wu. TopoScope: Recover AS Relationships From Fragmentary Observations. In IMC ’20, 2020.
* [28] Thomas Krenc, Robert Beverly, and Georgios Smaragdakis. Keep Your Communities Clean: Exploring the Routing Message Impact of BGP Communities. In CoNEXT ’20, 2020.
* [29] Craig Labovitz, Abha Ahuja, Abhijit Bose, and Farnam Jahanian. Delayed Internet Routing Convergence. In SIGCOMM CCR, 2000.
* [30] Franziska Lichtblau. From the Edge to the Core : towards informed Vantage Point selection for Internet measurement studies. In Ph.D. Thesis, 2021.
* [31] Matthew Luckie, Bradley Huffaker, Amogh Dhamdhere, Vasileios Giotsas, and kc claffy. AS Relationships, Customer Cones, and Validation. In IMC ’13, 2013.
* [32] Z. Morley Mao, Lili Qiu, Jia Wang, and Yin Zhang. On AS-Level Path Inference. In SIGMETRICS ’05, 2005.
* [33] University of Oregon. Route Views peers list. 2023. http://www.routeviews.org/peers/peering-status.html.
* [34] Ricardo Oliveira, Mohit Lad, Beichuan Zhang, Dan Pei, Daniel Massey, and Lixia Zhang. Placing BGP monitors in the Internet. Technical Report, UCLA, 2006.
* [35] Ricardo Oliveira, Beichuan Zhang, Dan Pei, Rafit Izhak-Ratzin, and Lixia Zhang. Quantifying path exploration in the internet. In ACM IMC’06, 2006.
* [36] Chiara Orsini, Alistair King, Danilo Giordano, Vasileios Giotsas, and Alberto Dainotti. BGPStream: A Software Framework for Live and Historical BGP Data Analysis. In IMC ’16, 2016.
* [37] Lars Prehn and Anja Feldmann. How Biased is Our Validation (Data) for AS Relationships? In IMC ’21, 2021.
* [38] RIPE. RIPE RIS raw data. https://www.ripe.net/data-tools/stats/ris/.
* [39] RIPE. The RIPE Atlas measurement platform. https://atlas.ripe.net/.
* [40] RIPE. YouTube Hijacking: A RIPE NCC RIS case study. 2018. http://www.ripe.net/internet-coordination/news/industry-developments/.
* [41] RIPE. Per-peer dump files. 2023. https://ris.ripe.net/docs/40_Prototypes/10_per_peer_dumps.html.
* [42] RIPE. RIPE RIS peers list. 2023. https://www.ris.ripe.net/peerlist/.
* [43] Matthew Roughan, Simon Jonathan Tuke, and Olaf Maennel. Bigfoot, Sasquatch, the Yeti and other missing links: what we don’t know about the AS graph. In IMC ’08, 2008.
* [44] Pavlos Sermpezis, Vasileios Kotronis, Petros Gigis, Xenofontas Dimitropoulos, Jae Hyun Park, Danilo Cicalese, Alistair King, and Alberto Dainotti. ARTEMIS: Neutralizing BGP hijacking within a minute. In ToN, 2018.
* [45] Pavlos Sermpezis, Lars Prehn, Sofia Kostoglou, Marcel Flores, Athena Vakali, and Emile Aben. Bias in Internet Measurement Platforms. In TMA’23, 2023.
* [46] Anant Shah, Romain Fontugne, and Christos Papadopoulos. Towards Characterizing International Routing Detours. In Asian Internet Engineering Conference ’16, 2016.
* [47] Y. Shavitt and U. Weinsberg. Quantifying the Importance of Vantage Points Distribution in Internet Topology Measurements. In INFOCOM, 2009.
* [48] Mattia Tantardini, Francesca Ieva, Lucia Tajoli, and Carlo Piccardi. Comparing methods for comparing networks. In Nature, 2019.
* [49] Ars Technica. Russian-controlled telecom hijacks financial services’ internet traffic. 2017. https://arstechnica.com/security/2017/04/.
* [50] Krenc Thomas, Luckie Matthew, Marder Alexander, and kc Claffy. Coarse-grained Inference of BGP Community Intent. In IMC’23, 2023.
* [51] University of Oregon. Route Views Project. 2021. http://www.routeviews.org/.
* [52] Ying Zhang, Zheng Zhang, Z. Morley Mao, Y. Charlie Hu, and Bruce M. Maggs. On the impact of route monitor selection. In IMC ’07, 2007.
* [53] Xiaoliang Zhao, Dan Pei, Lan Wang, Dan Massey, Allison Mankin, S. Felix Wu, and Lixia Zhang. An Analysis of BGP Multiple Origin AS (MOAS) Conflicts. In IMW ’01, 2001.
## Appendix A Survey
_Detailed methodology._ We selected eleven papers and classified them based on
how the authors collected the BGP data (categories $C_{1}$ and $C_{2}$). We
then emailed the authors and asked them about their experience with using BGP
routes from RIS and RV. We did not receive answers for three papers. We
promised to share the answers of the participants in an anonymized fashion.
Thus, we do not show parts of a few answers that would make de-anonymization
possible.
However, the missing parts never change the main message conveyed in the
answers.
_Detailed answers._ Table 6 lists the questions we asked the participants of
our survey along with their detailed answers. We color the answers based on
whether they are in favor (green) of using a tool such as MVP or not (red).
Neutral answers are colored in blue. The vast majority of the answers indicate
that MVP would be beneficial for users and improve the quality of their
measurement studies.
_$C_{1}$: All routes and subset of VPs (seven papers)._
* _Why did you use a subset of the VPs?_ To speed up data processing (x2); For disk space and time efficiency (x1); I thought the rest would be similar (x1); I did not manage to use them all (x2)
* _How did you select your VPs?_ I took them randomly (x2); I do not remember (x2); It was arbitrary: my script partially failed (x1); I took geographically distant BGP collectors (x1); I did not manage to use VPs from one data provider (x1)
* _Do you think more VPs would improve the quality of your results?_ Yes (x4); Results would be similar, but it can help to find corner cases (x1); Yes, but not significantly (x1); I am not sure (x1)
* _Would you have used more VPs if you could?_ Yes (x4); Yes, I’d love to (x1); Definitely (x1); I am not sure, but I don’t think so (x1)
_$C_{2}$: Limited duration of experiment (five papers)._
* _Was the processing time a factor that you considered when you decided on the duration of your measurement study?_ Yes (x3)
* _Do you think extending the duration of your measurement study would improve the quality of your results?_ Yes (x2); Yes, especially for rare events (x1); Potentially (x1); Yes, but not significantly (x1)
* _Would you have extended the duration of your measurement study if you had more resources?_ Yes (x2); Yes, but it depends on the time remaining before the deadline (x1); I think so, but also if I had more time before the deadline (x1)
_All eight papers._
* _Do you find the data from RIS and RouteViews expensive to process in terms of computational resources?_ Yes (x1); Yes, CPU and storage (x2); Yes, the storage cost and the download cost are very large (x1); CPU is the main issue (x1); RIS data takes a lot of time to download, especially when we need data for multiple days (x1); Not the worst, but we definitely need a resourceful server if we want to catch some deadline (x1); We did that in a server so that was not a huge issue (x1); No (x1)
* _Is there any additional challenge that you encountered when processing the BGP data from RIS and RouteViews?_ Our team used Spark clusters and Python but it was too slow (x1); We had to download the data from all VPs as there is no optimal solution for selecting them; the storage overhead and time overhead were extremely high (x1); It’ll be helpful to make processing faster and less resource-consuming (x1); Too many duplicate announcements make processing harder (x1); Variable sizes of update files exacerbate scheduling parallelization (x1); RIS took a lot longer than RouteViews (x1); We had issues when collecting updates in real-time (x1); We had to deal with bugs in BGPdump (x1); Broken data feeds and data cleanup is also an issue that we need to take care of (x1); Our study was done pre-BGPStream, which would have helped quite a bit already (x1)
Table 6: An exhaustive list of the questions asked to the participants of the
survey along with their detailed answers. We color an answer in (bold) green
if it (strongly) motivates the usage of a tool such as MVP. Blue answers are
neutral, i.e., they neither motivate nor discourage MVP. Finally, (bold) red
answers (strongly) discourage the usage of a tool such as MVP.
## Appendix B Extended evaluation
In this section, we evaluate the performance of MVP${}^{v4}_{v6}$ (Table 7)
and MVPv6 (Table 8) on the five use cases presented in §9.1, namely transient
path detection (_I_), MOAS detection (_II_), AS topology mapping (_III_),
traffic engineering detection (_IV_), and unnecessary updates detection (_V_).
As in §9.1, we compare MVPv6 and MVP${}^{v4}_{v6}$ against the three naive
baselines (random, AS-distance, and unbiased) as well as the eight greedy
specific VP selection strategies (three optimized for Def. 1, 2, and 3 and one
optimized for each of the five use cases). We present the results in terms of
the data reduction factor, as defined in §9.1.
_MVP ${}^{v4}_{v6}$ and MVPv6 outperform the three naive baselines for every
objective._ For MVP${}^{v4}_{v6}$, the reduction factor can be as high as 6.57
when trying to detect 50% of the traffic engineering events, while for MVPv6 it
can be as high as 5.05 when trying to map 50% of the AS topology. On average,
MVP${}^{v4}_{v6}$ only needs 41.6% of the data (reduction factor of 2.4)
required by a naive baseline to meet the same objective while MVPv6 needs
44.5% (reduction factor of 2.26).
_MVP ${}^{v4}_{v6}$ and MVPv6 prevent overfitting._ For the vast majority of
the objectives, a greedy specific performs better than MVP${}^{v4}_{v6}$ or
MVPv6 only for the use case for which it is optimized. There are a few cases
where a greedy specific performs better than MVP${}^{v4}_{v6}$ or MVPv6 for a
use case that it is not optimized for. For instance, MVPv6 needs to process
25% more data (reduction factor of 0.8) than the greedy specific optimized
for use case _I_ to detect 90% of the MOAS (use case _II_). However, in the
vast majority of the cases, both MVP${}^{v4}_{v6}$ and MVPv6 outperform the
greedy specifics. For instance, MVPv6 only needs 26% (reduction factor of
3.74) of the volume required by the greedy specific optimized for use case
_IV_ to detect 90% of the MOAS (use case _II_). These results show that MVP
does not overfit while the greedy specifics do.
Use case | Objective | Naive baselines | | | Greedy specifics: use cases (§9.1) | | | | | Greedy specifics: Def. (§4) | |
---|---|---|---|---|---|---|---|---|---|---|---|---
 | | Random | AS-distance | Unbiased | _I_ | _II_ | _III_ | _IV_ | _V_ | Def. 1 | Def. 2 | Def. 3
Transient path detection (_I_) | 50 % | 1.32 | 1.87 | 1.94 | 0.61 | 1.19 | 1.11 | 1.36 | 1.21 | 2.08 | 2.24 | 1.79
 | 70 % | 1.38 | 1.62 | 1.83 | 0.74 | 1.30 | 1.15 | 1.40 | 1.18 | 1.78 | 1.97 | 1.53
 | 90 % | 1.16 | 1.42 | 1.40 | 0.71 | 1.39 | 1.74 | 1.69 | 1.21 | 1.34 | 1.36 | 1.40
MOAS detection (_II_) | 50 % | 1.93 | 3.38 | 4.03 | 1.95 | 0.78 | 1.41 | 2.05 | 1.37 | 3.34 | 3.21 | 2.88
 | 70 % | 1.96 | 3.49 | 4.16 | 2.14 | 0.68 | 1.91 | 2.52 | 1.56 | 2.91 | 2.60 | 2.81
 | 90 % | 1.16 | 1.69 | 2.07 | 1.52 | 0.69 | 1.68 | 1.87 | 1.53 | 1.31 | 1.25 | 1.40
AS topology mapping (_III_) | 50 % | 2.47 | 2.90 | 2.72 | 1.18 | 1.02 | 0.58 | 1.45 | 1.41 | 2.38 | 2.30 | 2.16
 | 70 % | 2.27 | 2.52 | 2.29 | 1.26 | 1.14 | 0.68 | 1.25 | 1.19 | 2.03 | 1.71 | 2.03
 | 90 % | 1.71 | 1.85 | 1.78 | 1.14 | 1.13 | 0.82 | 1.17 | 1.15 | 1.62 | 1.61 | 1.56
Traffic engineering detection (_IV_) | 50 % | 3.77 | 6.57 | 4.43 | 3.21 | 1.89 | 1.47 | 0.47 | 2.57 | 3.43 | 2.89 | 2.57
 | 70 % | 2.34 | 3.17 | 2.56 | 2.20 | 1.60 | 1.93 | 0.35 | 2.06 | 1.97 | 2.01 | 2.05
 | 90 % | 1.90 | 2.02 | 1.76 | 2.02 | 1.78 | 1.94 | 0.31 | 1.93 | 2.13 | 1.67 | 1.93
Unnecessary updates detection (_V_) | 50 % | 2.13 | 4.10 | 3.15 | 2.41 | 2.54 | 2.72 | 3.12 | 0.41 | 2.94 | 2.78 | 2.83
 | 70 % | 1.27 | 1.95 | 1.80 | 1.47 | 1.12 | 1.28 | 1.59 | 0.35 | 1.71 | 1.81 | 1.57
 | 90 % | 1.01 | 1.29 | 1.35 | 1.04 | 0.85 | 0.96 | 1.09 | 0.46 | 1.00 | 1.11 | 1.11
Table 7: Data reduction factor for MVP${}^{v4}_{v6}$ compared to several
baselines for five use cases. MVP${}^{v4}_{v6}$ detects 70% of the MOAS events
using only 28.6% of the volume (reduction factor of 3.49) required by the
AS-distance baseline to meet the same objective. The average reduction factor
over all objectives and naive baselines is 2.25.
Use case | Objective | Naive baselines | | | Greedy specifics: use cases (§9.1) | | | | | Greedy specifics: Def. (§4) | |
---|---|---|---|---|---|---|---|---|---|---|---|---
 | | Random | AS-distance | Unbiased | _I_ | _II_ | _III_ | _IV_ | _V_ | Def. 1 | Def. 2 | Def. 3
Transient path detection (_I_) | 50 % | 1.43 | 1.65 | 2.29 | 0.44 | 1.11 | 1.14 | 1.67 | 1.20 | 1.56 | 1.36 | 1.90
 | 70 % | 1.71 | 1.84 | 2.00 | 0.64 | 1.59 | 1.96 | 2.88 | 2.25 | 1.75 | 1.86 | 1.67
 | 90 % | 1.52 | 1.43 | 1.42 | 0.62 | 1.49 | 1.49 | 1.79 | 1.48 | 1.51 | 1.72 | 2.11
MOAS detection (_II_) | 50 % | 1.94 | 1.65 | 2.37 | 1.10 | 0.21 | 0.36 | 1.56 | 2.33 | 1.04 | 0.73 | 1.30
 | 70 % | 4.24 | 1.70 | 3.25 | 1.26 | 0.51 | 1.05 | 4.98 | 3.38 | 4.20 | 3.71 | 4.13
 | 90 % | 3.03 | 1.75 | 2.19 | 0.80 | 0.53 | 2.67 | 3.74 | 2.56 | 3.54 | 3.69 | 3.84
AS topology mapping (_III_) | 50 % | 4.45 | 3.68 | 5.05 | 1.49 | 0.72 | 0.54 | 2.41 | 3.03 | 1.92 | 1.65 | 3.29
 | 70 % | 2.83 | 3.26 | 3.14 | 1.18 | 1.14 | 0.73 | 2.07 | 1.38 | 2.27 | 2.17 | 2.48
 | 90 % | 1.86 | 2.00 | 1.99 | 1.10 | 1.12 | 0.86 | 1.25 | 1.30 | 1.56 | 1.70 | 2.02
Traffic engineering detection (_IV_) | 50 % | 2.27 | 1.68 | 1.34 | 2.68 | 0.51 | 0.58 | 0.12 | 1.89 | 0.75 | 0.95 | 0.53
 | 70 % | 3.76 | 5.14 | 2.86 | 3.03 | 2.64 | 3.03 | 0.30 | 4.66 | 2.07 | 1.61 | 1.14
 | 90 % | 1.29 | 1.36 | 1.18 | 1.49 | 1.49 | 1.49 | 0.65 | 1.49 | 0.88 | 0.56 | 1.19
Unnecessary updates detection (_V_) | 50 % | 1.45 | 2.19 | 2.63 | 1.57 | 2.94 | 1.97 | 3.21 | 0.22 | 2.44 | 2.70 | 1.95
 | 70 % | 1.37 | 2.13 | 2.09 | 1.79 | 1.98 | 2.07 | 2.26 | 0.31 | 1.91 | 2.18 | 1.82
 | 90 % | 1.23 | 1.46 | 1.58 | 1.34 | 1.39 | 1.46 | 1.69 | 0.50 | 1.83 | 1.49 | 1.45
Table 8: Data reduction factor for MVPv6 compared to several baselines for
five use cases. MVPv6 detects 90% of the MOAS events using only 33% of the
volume (reduction factor of 3.03) required by the random selection to meet the
same objective. The average reduction factor over all objectives and naive
baselines is 2.25.
$\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};\overrightarrow{\rho})$
diffeomorphic to $(0,1]$; the count of such ends is equal to
$\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{1230})$ when
$\overrightarrow{\rho}=(-\rho_{0},-\rho_{123})$, and is equal to
$\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{3012})$ when
$\overrightarrow{\rho}=(-\rho_{012},-\rho_{3})$.
By Propositions 2.62 and 2.57, boundary degenerations are further divided into
boundary degenerations with corners and boundary degenerations without
corners; different types of degeneration do not appear simultaneously. When
boundary degenerations without corners appear, the situation is covered by
Proposition 2.62. When boundary degenerations with corners appear,
Propositions 2.58 and 2.60 show that the moduli spaces of degenerate disks are
smoothly cut out. In particular, the standard gluing results can be applied to
show that each boundary degeneration in
$\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};\overrightarrow{\rho})$ has a
neighborhood diffeomorphic to $(0,1]$. The number of such ends is equal to
$\sum_{\\{(q,B_{1})|\exists B_{2}\in
T(q),B_{1}+B_{2}=B\\}}\\#(\mathcal{M}^{B_{1}}(\bm{x},\bm{y};q)\times_{ev}\mathcal{N}^{B_{2}}(q;\overrightarrow{\rho}))$
(we have suppressed the almost complex structure $J_{s}$, which can be chosen
generically so that the evaluation maps are transverse to each other). This
quantity is even when $\overrightarrow{\rho}\neq(-\rho_{0},\ldots,-\rho_{3})$
in view of Proposition 2.60. Otherwise, it has the same parity as
$\sum_{\\{(q,B_{1})|\exists B_{2}\in
T(q),B_{1}+B_{2}=B\\}}\\#\mathcal{M}^{B_{1}}(\bm{x},\bm{y};q)$
in view of Proposition 2.58. ∎
### 2.8. Ends of moduli spaces of 1-P curves
This subsection characterizes the ends of one-dimensional moduli spaces of 1-P
holomorphic curves. Given a generator $\bm{x}$, we say
$\iota(\bm{x})=\iota_{1}$ if and only if $\bm{x}$ is in
$\mathbb{T}_{\alpha,1}$; otherwise $\iota(\bm{x})=\iota_{0}$. The main result
is the following.
###### Proposition 2.63.
Let $B\in\tilde{\pi}_{2}(\bm{x},\bm{y})$ such that $\iota(\bm{x})=\iota_{1}$
and $\text{ind}(B;U)=2$. Then fixing a generic almost complex structure, the
compactified moduli space $\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};U)$ is a
compact 1-manifold with boundary. The boundaries are of the following types:
* (1)
Two-story buildings
* (2)
simple holomorphic combs $(u,v)$ with $v$ being an orbit curve
* (3)
boundary degeneration with corners
* (4)
boundary degeneration without corners
Moreover,
* (a)
The number of type (2) ends is
$\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{1230})+\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{3012})$.
* (b)
The number of type (3) ends is congruent mod 2 to
$\sum_{\\{(q,B_{1})|\exists B_{2}\in T(q),\
B_{1}+B_{2}=B\\}}\\#\mathcal{M}^{B_{1}}(\bm{x},\bm{y};q)$
* (c)
The number of type (4) ends is even.
###### Remark 2.64.
A similar proposition holds in the case when $\iota(\bm{x})=\iota_{0}$. One
simply needs to change the Reeb chords in (a) by a cyclic permutation of the
digits in the subscript.
#### 2.8.1. Reformulation of the moduli spaces
We reformulate $\mathcal{M}^{B}(\bm{x},\bm{y};U)$ in terms of holomorphic
disks in $Sym^{g}(\Sigma)$. Assume $\iota(\bm{x})=\iota_{1}$ throughout the
rest of the section.
###### Definition 2.65.
${\mathcal{M}}^{B}_{Sym}(\bm{x},\bm{y};U)$ is defined to be the space of
holomorphic maps $u:[0,1]\times\mathbb{R}\backslash\\{(s_{0},0)\\}\rightarrow
Sym^{g}(\Sigma)$ such that:
* (1)
$(s_{0},0)$ is in the interior of $[0,1]\times\mathbb{R}$ and is allowed to
vary;
* (2)
$u(\\{0\\}\times\mathbb{R})\subset\mathbb{T}_{\beta}$;
* (3)
$u(\\{1\\}\times\mathbb{R})\subset\mathbb{T}_{\alpha,1}$. Moreover,
$u|_{\\{1\\}\times\mathbb{R}}$ lifts through
$f_{1}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\rightarrow
Sym^{g}(\Sigma)$;
* (4)
$\lim_{t\rightarrow\infty}u(s+it)=\bm{y}$, and
$\lim_{t\rightarrow-\infty}u(s+it)=\bm{x}$;
* (5)
$\lim_{(s,t)\rightarrow(s_{0},0)}u(s+it)$ is a closed
$\overrightarrow{R}$-orbit $\sigma\times\bm{w}$, where $\bm{w}\in
Sym^{g-1}(\Sigma)$ and $\sigma$ stands for a closed Reeb orbit that traverses
$\partial\overline{\Sigma}$ once;
* (6)
$\frac{du}{ds}+J_{s}\frac{du}{dt}=0$;
* (7)
$u$ is in the homology class specified by $B$.
Again, we have the tautological correspondence that identifies the moduli
spaces defined here and the ones in Section 2.2. Therefore, we shall no longer
keep the subscript “Sym” in the notation.
We also define the moduli spaces of one-punctured degenerate disks (with or
without corners).
###### Definition 2.66 (One-punctured degenerate disks without corners).
Let $J$ be a nearly-symmetric almost complex structure. Let
$\bm{x}\in\mathbb{T}_{\alpha}$. $\mathcal{N}_{J}(\bm{x};U)$ is the space of
maps $v:\mathbb{H}\backslash\\{i\\}\rightarrow Sym^{g}(\Sigma)$ such that
* (1)
$v(\mathbb{R})\subset\mathbb{T}_{\alpha,1}$. Moreover, the restriction of
$v|_{\mathbb{R}}$ lifts through
$f_{1}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\rightarrow
Sym^{g}(\Sigma)$;
* (2)
$\lim_{z\rightarrow\infty}v(z)=\bm{x}$, and the path obtained from
$v|_{\partial\mathbb{H}}$ by continuous extension at $\infty$ lifts through
$\iota_{1}$;
* (3)
$\lim_{z\rightarrow i}v(z)$ is some closed $\overrightarrow{R}$-orbit
$\sigma\times\bm{w}$, where $\bm{w}\in Sym^{g-1}(\Sigma)$ and $\sigma$ stands
for a closed Reeb orbit that traverses $\partial\overline{\Sigma}$ once;
* (4)
$\frac{dv}{ds}+J\frac{dv}{dt}=0$.
###### Definition 2.67 (One-cornered one-punctured degenerate disks).
Let $J$ be a nearly-symmetric almost complex structure. Let $q$ be a self-
intersection point of $\alpha_{im}$. $\mathcal{N}_{J}(q;U)$ is the space of
maps $v:\mathbb{H}\backslash\\{i\\}\rightarrow Sym^{g}(\Sigma)$ such that
* (1)
$v(\mathbb{R})\subset\mathbb{T}_{\alpha,1}$. Moreover, the restriction of
$v|_{\mathbb{R}}$ lifts through
$f_{1}:(0,1)\times\mathbb{T}^{g-1}\times(\amalg S^{1})\rightarrow
Sym^{g}(\Sigma)$;
* (2)
$\lim_{z\rightarrow\infty}v(z)=(q,\bm{p})$ for some
$\bm{p}\in\alpha^{a}_{1}\times\alpha^{c}_{1}\times\cdots\times\alpha^{c}_{g-2}$,
and the path obtained from $v|_{\partial\mathbb{H}}$ by continuous
extension at $\infty$ does not lift through $\iota_{1}$;
* (3)
$\lim_{z\rightarrow i}v(z)$ is some closed $\overrightarrow{R}$-orbit
$\sigma\times\bm{w}$, where $\bm{w}\in Sym^{g-1}(\Sigma)$ and $\sigma$ stands
for a closed Reeb orbit that traverses $\partial\overline{\Sigma}$ once;
* (4)
$\frac{dv}{ds}+J\frac{dv}{dt}=0$.
We call $q$ the corner of such a degenerate disk. We also have an evaluation
map
$ev_{J}:\mathcal{N}_{J}(q;U)\rightarrow\alpha^{a}_{1}\times\alpha^{c}_{1}\times\cdots\times\alpha_{g-2}^{c}$
defined by $v\mapsto\bm{p}$ if $\lim_{z\rightarrow\infty}v(z)=(q,\bm{p})$.
#### 2.8.2. One-punctured boundary degeneration with corners
###### Proposition 2.68.
If a boundary degeneration with corners appears in the compactification of a
one-dimensional moduli space $\mathcal{M}^{B}(\bm{x},\bm{y};U)$, then the
nodal comb is of simple form, and the domain for the degenerate disk is a
stabilized teardrop with an acute corner.
###### Proof.
The proof is similar to the proof of Proposition 2.57. There is only one
modification needed: We no longer have east boundary punctures when
considering the bubble tree $\mathbb{B}$ of the nodal curve; instead, there is
one and only one interior puncture. With this, the rest of the proof follows
exactly as in Proposition 2.57. ∎
###### Proposition 2.69.
Let $q$ be a self-intersection point of $\alpha_{im}$ and let $B\in T(q)$ be a
stabilized teardrop with acute corner. For a generic nearly symmetric almost
complex structure $J$, the moduli space of degenerate disks
$\mathcal{N}_{J}^{B}(q;U)$ is a $(g-1)$-manifold, and a generic fiber of the
evaluation map
$ev_{J}:\mathcal{N}_{J}^{B}(q;U)\rightarrow\alpha^{a}_{1}\times\alpha^{c}_{1}\times\cdots\times\alpha_{g-2}^{c}$
is a compact 0-dimensional manifold consisting of an odd number of points.
###### Proof.
The regularity of $\mathcal{N}_{J}^{B}(q;U)$ and compactness of a generic
fiber are proved in the same way as in Proposition 2.58.
The parity of the cardinality of a generic fiber follows from a neck-
stretching and cobordism argument similar to that of Proposition 2.58, using
Lemma 2.70 below instead of Lemma 2.59. ∎
###### Lemma 2.70.
Assume $g(\Sigma)=2$. Fix some point $p\in\alpha^{a}_{1}$. For a sufficiently
stretched almost complex structure $j$ on $\Sigma$, the fiber
$ev_{Sym^{2}(j)}^{-1}(p)$ is transversally cut out and consists of one point.
###### Proof.
View $\Sigma$ as the connected sum of $(E_{1},\alpha^{a}_{1},\alpha^{a}_{2})$
and $(E_{2},\alpha_{im})$, where $E_{1}$ is the punctured Riemann surface of
genus one and $E_{2}$ is a closed Riemann surface of genus one. Let
$z^{\prime}$ denote the points on $E_{1}$ and $E_{2}$ where the connected sum
is performed. The domain $B$ gives rise to a teardrop domain $B^{\prime}$ in
$E_{2}$ with $n_{z^{\prime}}(B^{\prime})=1$. The Riemann mapping theorem
implies that the moduli space $\mathcal{N}^{B^{\prime}}(q)$ of holomorphic
disks in $E_{2}$ with corner $q$ and domain $B^{\prime}$ is smoothly cut out
and has only one element. The gluing argument in Section 10 of [OS04b] shows
that, for a sufficiently stretched almost complex structure, maps in
$ev_{Sym^{2}(j)}^{-1}(p)$ are obtained by splicing the one-punctured
holomorphic sphere in $Sym^{2}(E_{1})$ passing through $(z^{\prime},p)$ and
the holomorphic disk in $\mathcal{N}^{B^{\prime}}(q)$. (Strictly speaking,
Ozsváth and Szabó’s argument concerns splicing closed holomorphic spheres
while in our case the sphere is punctured, but this does not affect the
applicability of the argument. Alternatively, we may treat one-punctured
holomorphic disks or spheres as the corresponding objects without interior
punctures, but intersecting $\\{e\\}\times\Sigma_{\bar{e}}$ once in
$Sym^{2}(\Sigma_{\bar{e}})$.) In particular, $ev_{Sym^{2}(j)}^{-1}(p)$ is
identified with $\mathcal{N}^{B^{\prime}}(q)$ and hence consists of only one
element. ∎
#### 2.8.3. One-punctured boundary degeneration without corners
###### Proposition 2.71.
Under the assumptions of Proposition 2.63, if a boundary degeneration without
corners occurs, then:
* (1)
There is only one degenerate disk, and its domain $[B]$ is $[\Sigma]$.
* (2)
$\bm{x}=\bm{y}$.
* (3)
Such degenerate disks do not occur simultaneously with other types of
degeneration.
* (4)
The number of ends corresponding to such boundary degeneration is even.
###### Proof.
The proofs of (1), (2), and (3) are straightforward modifications of those of
Proposition 2.62 and are omitted. (4) follows from the standard gluing result
and Proposition 2.72 below, which differs from its counterpart in the 0-P
case. ∎
###### Proposition 2.72.
For a generic almost complex structure $J$,
$\mathcal{N}^{[\Sigma]}_{J}(\bm{x};U)$ is a compact, 0-dimensional manifold
that consists of an even number of points.
###### Proof.
The argument for compactness and transversality is the same as in [OS04b,
Proposition 3.14], which is the counterpart of Proposition 2.72 when the
Heegaard surface is closed; we omit this part. By a cobordism argument similar
to the one used in Proposition 2.58, we can reduce the computation of the parity of
the moduli space to the base case $g(\Sigma)=2$, which is addressed in Lemma
2.73 below. ∎
###### Lemma 2.73.
Assume $g(\Sigma)=2$. View
$(\Sigma,\alpha_{1}^{a},\alpha^{a}_{2},\alpha_{im})=(E_{1},\alpha_{1}^{a},\alpha_{2}^{a})\\#(E_{2},\alpha_{im})$,
where $E_{1}$ is a punctured Riemann surface of genus one and $E_{2}$ is a
Riemann surface of genus one. If $j$ is a sufficiently stretched complex
structure on $\Sigma$, then $\mathcal{N}^{[\Sigma]}_{Sym^{2}(j)}(\bm{x};U)$ is
empty.
###### Proof.
Otherwise, the same neck-stretching procedure as in Lemma 2.61 produces a
limit nodal holomorphic curve $u_{\infty}:\mathbb{B}\rightarrow
Sym^{2}(E_{1}\vee E_{2})$. It consists of a (possibly punctured) holomorphic
disk $v$ that maps to $E_{1}\times E_{2}$ with boundary in
$\mathbb{T}_{\alpha,1}$ and possibly some (possibly punctured) sphere bubbles
in $Sym^{2}(E_{i})$, $i=1,2$. We claim $v$ must be a constant map. It is clear
that $Pr_{E_{1}}\circ v$ is constant, for
$\pi_{2}(E_{1,\bar{e}},\alpha^{a}_{1}\cup\\{e\\})=0$, where $E_{1,\bar{e}}$
denotes the Riemann surface obtained by filling in the east puncture. We next
show that $Pr_{E_{2}}\circ v$ is constant. Suppose $Pr_{E_{2}}\circ v$ is not
a constant map. Note that the domain of $Pr_{E_{2}}\circ v$ is a zero-cornered
$\alpha$-bounded domain $D$ in $E_{2}$. Stabilizing by $E_{1}$, this domain
induces a zero-cornered $\alpha$-bounded domain $D^{\prime}$ in $\Sigma$ with
$n_{z}(D^{\prime})\leq 1$. If $n_{z}(D^{\prime})=0$, then $D^{\prime}$ does
not exist as $\mathcal{H}$ is unobstructed, and hence $D$ does not exist. So
$n_{z}(D^{\prime})=1$, and hence $D^{\prime}=\Sigma$ since $\mathcal{H}$ is
unobstructed. This implies $D=E_{2}$. Therefore, $\partial(Pr_{E_{2}}\circ v)$
is null-homotopic in $\alpha_{im}$. So $Pr_{E_{2}}\circ v$ induces a
nontrivial element in $\pi_{2}(E_{2})$. This, however, contradicts
$\pi_{2}(E_{2})=0$. Therefore, $Pr_{E_{2}}\circ v$ is also constant, and hence
$v$ is the constant map with image $\bm{x}$. Now $\\{\bm{x}\\}$ intersects
neither of the $Sym^{2}(E_{i})$, $i=1,2$, and hence there are no sphere
bubbles in $u_{\infty}$. So the Gromov limit $u_{\infty}$ is a constant map.
In particular, $n_{z}(u_{\infty})=0$. However, $n_{z}(u_{\infty})=1$ as it is
the limit of a sequence of holomorphic maps whose multiplicity at $z$ is one.
This is a contradiction. Therefore,
$\mathcal{N}^{[\Sigma]}_{Sym^{2}(j)}(\bm{x};U)$ is empty provided $j$ is
sufficiently stretched. ∎
#### 2.8.4. Proof of Proposition 2.63
###### Proof of Proposition 2.63.
In view of Propositions 2.40, 2.68, and 2.71, we know that the degenerations
that can appear in the boundary of the compactified moduli spaces are
two-story curves, simple combs with orbit curve ends, or simple boundary
degenerations with or without corners. In all cases, gluing arguments can be
applied to see that the compactified moduli space
$\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};U)$ is a one-manifold with boundary.
For conclusion (a), note that ends of type (2) correspond to pairs of curves
$(u,v)$ where $u$ is in $\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{1230})$ or
$\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{3012})$ and $v$ is an orbit curve. Since
the moduli space of orbit curves consists of a single element by the Riemann
mapping theorem, the count of type (2) ends agrees with
$\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{1230})+\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{3012})$.
For conclusion (b), standard gluing results imply that the number of such ends
is equal to
$\sum_{\\{(q,B_{1})|\exists B_{2}\in
T(q),B_{1}+B_{2}=B\\}}\\#(\mathcal{M}^{B_{1}}(\bm{x},\bm{y};q)\times_{ev}\mathcal{N}^{B_{2}}(q;U)).$
This is congruent mod 2 to
$\sum_{\\{(q,B_{1})|\exists B_{2}\in
T(q),B_{1}+B_{2}=B\\}}\\#\mathcal{M}^{B_{1}}(\bm{x},\bm{y};q),$
as a generic fiber of $ev$ in $\mathcal{N}^{B_{2}}(q;U)$ consists of an odd
number of points by Proposition 2.69. For (c), gluing results show that the
number of such ends is equal to $\\#\mathcal{N}^{[\Sigma]}(\bm{x};U)$, which
is even by Proposition 2.72. ∎
### 2.9. Type D structures
We define type D structures from an immersed bordered Heegaard diagram
$\mathcal{H}=(\Sigma,\bm{\beta},\bm{\bar{\alpha}},z)$ in this subsection.
Figure 7. The quiver presentation of the torus algebra (left) and the pointed
matched circle of $\mathcal{H}$ with reversed boundary orientation (right).
Let $\mathcal{A}$ denote the torus algebra, which is isomorphic to the quiver
algebra of the quiver in Figure 7 (left). For $I\in\\{1,2,3,12,23,123\\}$,
$\rho_{I}\in\mathcal{A}$ is understood as the product of the $\rho_{i}$’s for
those $i$ appearing in $I$. This algebra arises naturally in the context of
bordered Heegaard diagrams, where $\mathcal{A}$ is associated to the pointed
matched circle determined by $\mathcal{H}$ with the reversed boundary
orientation (Figure 7 (right)); we refer the reader to [LOT18, Chapter 11.1]
for a detailed definition of the torus algebra in terms of pointed matched
circles, and we only point out that the element $\rho_{I}\in\mathcal{A}$ for
$I\in\\{1,2,3,12,23,123\\}$ corresponds to the Reeb chord with the same label
on the pointed matched circle. Let
$\mathcal{I}=\langle\iota_{0}\rangle\oplus\langle\iota_{1}\rangle$ denote the
ring of idempotents of $\mathcal{A}$. We recall the definition of a type D
structure.
###### Definition 2.74.
A type D structure over the torus algebra $\mathcal{A}$ is a left
$\mathcal{I}$-module $N$ together with a linear map
$\delta:N\rightarrow\mathcal{A}\otimes N$ such that the map
$\partial\coloneqq(\mu_{\mathcal{A}}\otimes\mathbb{I}_{N})\circ(\mathbb{I}_{\mathcal{A}}\otimes\delta):\mathcal{A}\otimes
N\rightarrow\mathcal{A}\otimes N$
is a differential, i.e., $\partial^{2}=0$. The left differential
$\mathcal{A}$-module $\mathcal{A}\otimes N$ is called the type D module of the
type D structure $(N,\delta)$.
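To fix ideas, here is a minimal example (standard, and not tied to any diagram in this paper): let $N=\mathbb{F}\langle x\rangle$ with $\iota_{0}\cdot x=x$, and set $\delta(x)=\rho_{12}\otimes x$. Then
$\partial^{2}(a\otimes x)=a\cdot\rho_{12}\cdot\rho_{12}\otimes x=0,$
since $\rho_{2}\rho_{1}=0$ in $\mathcal{A}$, so $(N,\delta)$ is a type D structure; up to conventions, it is the type D structure of a solid torus.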
Next, we spell out the construction of a type D structure from an immersed
bordered Heegaard diagram. Recall
$\mathbb{T}_{\beta}=\beta_{1}\times\cdots\times\beta_{g}$ and
$\mathbb{T}_{\alpha,i}=\alpha^{a}_{i}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}$,
$i=1,2$. Let
$\mathbb{T}_{\alpha}=\mathbb{T}_{\alpha,1}\cup\mathbb{T}_{\alpha,2}$. Let
$\mathcal{G}(\mathcal{H})=\\{\bm{x}|\bm{x}\in\mathbb{T}_{\alpha}\cap\mathbb{T}_{\beta}\\}$.
Denote the local system on $\alpha_{im}$ as a vector bundle
$\mathcal{E}\rightarrow\alpha_{im}$ together with a parallel transport $\Phi$.
Note that this induces a local system on $\mathbb{T}_{\alpha}$, the tensor
product of $\mathcal{E}$ and the trivial local system on the other alpha
curves (or arcs). Abusing notation, we still denote the local system on
$\mathbb{T}_{\alpha}$ by $(\mathcal{E},\Phi)$. Now define an
$\mathcal{I}$-module
$X^{\mathcal{E}}(\mathcal{H})=\oplus_{\bm{x}\in\mathcal{G}(\mathcal{H})}\mathcal{E}|_{\bm{x}}$,
where the $\mathcal{I}$-action on an element $\eta\in\mathcal{E}|_{\bm{x}}$ is
specified by
$\iota_{i}\cdot\eta=\begin{cases}\eta,\ \text{if}\ o(\bm{x})\equiv i\pmod{2},\\\ 0,\
\text{otherwise}.\end{cases}$
Here $o(\bm{x})=i$ if and only if $\bm{x}\in\mathbb{T}_{\alpha,i}$, $i=1,2$.
Given a sequence of Reeb chords
$\overrightarrow{\sigma}=(\sigma_{1},\ldots,\sigma_{k})$ of a pointed matched
circle $\mathcal{Z}$, $a(-\overrightarrow{\sigma})$ is defined to be
$(-\sigma_{1})\cdot(-\sigma_{2})\cdots(-\sigma_{k})\in\mathcal{A}(-\mathcal{Z})$.
Note given $B\in\pi_{2}(\bm{x},\bm{y})$, the parallel transport restricted to
the arc $\partial_{\alpha_{im}}B\subset\alpha_{im}$ induces an isomorphism
from $\mathcal{E}|_{\bm{x}}$ to $\mathcal{E}|_{\bm{y}}$, which we denote by
$\Phi^{B}_{\bm{x},\bm{y}}$.
###### Definition 2.75.
Let $\mathcal{H}$ be an unobstructed, provincially admissible, immersed
bordered Heegaard diagram. Fix a generic almost complex structure on
$\Sigma\times[0,1]\times\mathbb{R}$. The type D module
$\widehat{CFD}(\mathcal{H})$ is defined to be the $\mathcal{A}$-module
$\mathcal{A}\otimes_{\mathcal{I}}X^{\mathcal{E}}(\mathcal{H})$
together with a differential given by
$\partial(a\otimes\eta)=a\cdot(\sum_{\bm{y}}\sum_{\\{(B,\overrightarrow{\sigma})|\
n_{z}(B)=0,\
\text{ind}(B,\overrightarrow{\sigma})=1\\}}\\#\mathcal{M}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma})a(-\overrightarrow{\sigma})\otimes\Phi^{B}_{\bm{x},\bm{y}}\eta),$
where $a\in\mathcal{A}$, $\eta\in\mathcal{E}|_{\bm{x}}$, and the pairs
$(B,\overrightarrow{\sigma})$ are compatible. The underlying type D structure
is the pair $(X^{\mathcal{E}}(\mathcal{H}),\delta)$ where
$\delta(\eta)\coloneqq\partial(1\otimes\eta)$ for any $\eta\in
X^{\mathcal{E}}(\mathcal{H})$.
Abusing notation, we will also use $\widehat{CFD}(\mathcal{H})$ to denote its
underlying type D structure.
###### Remark 2.76.
Note when the local system is trivial, we can identify $\bm{x}$ with
$\mathcal{E}|_{\bm{x}}$, and the differential defined above can be more
conveniently written as
$\partial(a\otimes\bm{x})=a\cdot(\sum_{\bm{y}}\sum_{\\{(B,\overrightarrow{\sigma})|\
n_{z}(B)=0,\
\text{ind}(B,\overrightarrow{\sigma})=1\\}}\\#\mathcal{M}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma})a(-\overrightarrow{\sigma})\otimes\bm{y}).$
###### Proposition 2.77.
The operator $\partial$ in Definition 2.75 is well-defined and
$\partial^{2}=0$.
###### Proof.
We first point out $\partial$ is well-defined, i.e., the sum defining
$\partial$ is finite. This reduces to the provincial admissibility of
$\mathcal{H}$, which implies there are only finitely many positive domains
with prescribed Reeb chords connecting any given pair of generators. The proof
is standard, and we do not repeat it here.
We next show $\partial^{2}(\bm{x})=0$. For ease of explanation, we begin
with the case of trivial local systems. Let $a$ be a non-zero element of
$\mathcal{A}$, and let $\langle\partial^{2}\bm{x},a\bm{y}\rangle\in\mathbb{F}$
denote the coefficient of the term $a\bm{y}$ in $\partial^{2}\bm{x}$. Then
(2.7)
$\langle\partial^{2}\bm{x},a\bm{y}\rangle=\sum_{\bm{w}\in\mathcal{G}}\sum\\#\mathcal{M}^{B_{1}}(\bm{x},\bm{w};\overrightarrow{\sigma_{1}})\\#\mathcal{M}^{B_{2}}(\bm{w},\bm{y};\overrightarrow{\sigma_{2}}),$
where the second sum is over all the index-one compatible pairs
$(B_{i},\overrightarrow{\sigma_{i}})$ ($i=1,2$) with
$a(-\overrightarrow{\sigma_{1}})\cdot a(-\overrightarrow{\sigma_{2}})=a$. In
view of Proposition 2.47 and the gluing result, the right-hand side of
Equation (2.7) is
$\sum_{\\{(B,\overrightarrow{\sigma})|\text{ind}(B,\overrightarrow{\sigma})=2,\
\
a(-\overrightarrow{\sigma})=a\\}}\\#\partial\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma})\equiv
0\pmod{2}.$
This finishes the proof in the case of trivial local systems. For the case of
non-trivial local systems, the proof is a slight modification of the above
argument. One needs to note that given $B_{1}\in\pi_{2}(\bm{x},\bm{w})$ and
$B_{2}\in\pi_{2}(\bm{w},\bm{y})$, we have
$\Phi_{\bm{x},\bm{w}}^{B_{1}}\circ\Phi_{\bm{w},\bm{y}}^{B_{2}}=\Phi_{\bm{x},\bm{y}}^{B_{1}+B_{2}}$.
Therefore, given an $\eta\in\mathcal{E}|_{\bm{x}}$, the terms in
$\partial^{2}(\eta)$ corresponding to two-story ends of a one-dimensional
moduli space $\mathcal{M}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma})$ are
multiples of the same element in $\mathcal{E}|_{\bm{y}}$, namely
$\Phi_{\bm{x},\bm{y}}^{B}(\eta)$, and hence the coefficient is zero mod $2$. ∎
### 2.10. Weakly extended type D structures
We define the weakly extended type D structure $\widetilde{CFD}(\mathcal{H})$
in this subsection. The weakly extended torus algebra $\tilde{\mathcal{A}}$
can be represented by the quiver with relations shown in Figure 8.
Figure 8. The weakly extended torus algebra. The subscripts in the relation
are understood mod $4$.
Note that, as in the torus algebra, we have the idempotent ring
$\mathcal{I}=\langle\iota_{0}\rangle\oplus\langle\iota_{1}\rangle$. Let
$\bm{U}$ be $\rho_{0123}+\rho_{1230}+\rho_{2301}+\rho_{3012}$, which is a
central element of $\tilde{\mathcal{A}}$.
###### Definition 2.78.
A weakly extended type D structure over $\tilde{\mathcal{A}}$ is a left
$\mathcal{I}$-module $N$ together with a linear map
$\tilde{\delta}:N\rightarrow\tilde{\mathcal{A}}\otimes N$ such that the map
$\tilde{\partial}\coloneqq(\mu_{\tilde{\mathcal{A}}}\otimes\mathbb{I}_{N})\circ(\mathbb{I}_{\tilde{\mathcal{A}}}\otimes\tilde{\delta}):\tilde{\mathcal{A}}\otimes
N\rightarrow\tilde{\mathcal{A}}\otimes N$
squares to $\bm{U}$, i.e. $\tilde{\partial}^{2}=\bm{U}$. The curved left
$\tilde{\mathcal{A}}$-module $\tilde{\mathcal{A}}\otimes N$ is called the
weakly extended type D module of the weakly extended type D structure
$(N,\tilde{\delta})$.
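Continuing the minimal example given after Definition 2.74 (again a sketch, assuming the relations of Figure 8): take $N=\mathbb{F}\langle x\rangle$ with $\iota_{0}\cdot x=x$ and $\tilde{\delta}(x)=(\rho_{12}+\rho_{30})\otimes x$. Then
$\tilde{\partial}^{2}(a\otimes x)=a(\rho_{12}+\rho_{30})^{2}\otimes x=a(\rho_{1230}+\rho_{3012})\otimes x=a\bm{U}\otimes x,$
since $\rho_{12}\rho_{12}=\rho_{30}\rho_{30}=0$ and the terms $\rho_{0123}$ and $\rho_{2301}$ of $\bm{U}$ are annihilated by $\iota_{0}$. Thus $\tilde{\partial}^{2}=\bm{U}$, and deleting the terms involving $\rho_{0}$ recovers the earlier example.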
Let $X^{\mathcal{E}}(\mathcal{H})$ be the $\mathcal{I}$-module defined the
same way as in Section 2.9.
###### Definition 2.79.
Let $\mathcal{H}$ be an unobstructed, provincially admissible, immersed
bordered Heegaard diagram. Fix a generic admissible almost complex structure
on $\Sigma\times[0,1]\times\mathbb{R}$. The weakly extended type D module
$\widetilde{CFD}(\mathcal{H})$ is defined to be the
$\tilde{\mathcal{A}}$-module
$\tilde{\mathcal{A}}\otimes_{\mathcal{I}}X^{\mathcal{E}}(\mathcal{H})$
together with a differential given by
$\tilde{\partial}(a\otimes\eta)=a\cdot(\sum_{\bm{y}}\sum_{\\{(B,\overrightarrow{\sigma})|\
\text{ind}(B,\overrightarrow{\sigma})=1\\}}\\#\mathcal{M}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma})a(-\overrightarrow{\sigma})\otimes\Phi^{B}_{\bm{x},\bm{y}}\eta),$
where $a\in\tilde{\mathcal{A}}$, $\eta\in\mathcal{E}|_{\bm{x}}$,
$\overrightarrow{\sigma}$ ranges over sequences of Reeb chords, now also allowing
the case of a single closed Reeb orbit $\\{U\\}$ (in which case the corresponding
moduli space consists of 1-P holomorphic curves), and the pairs
$(B,\overrightarrow{\sigma})$ are compatible. When
$\overrightarrow{\sigma}=\\{U\\}$, we define $a(-U)=\bm{U}$. The underlying
weakly extended type D structure is
$(X^{\mathcal{E}}(\mathcal{H}),\tilde{\delta})$ where
$\tilde{\delta}(\eta)\coloneqq\tilde{\partial}(1\otimes\eta)$.
###### Remark 2.80.
Again, by abusing notation, we also use $\widetilde{CFD}(\mathcal{H})$ to
denote the underlying weakly extended type D structure. When the local system
is trivial, we have the following more familiar formula for the differential:
$\tilde{\partial}(a\otimes\bm{x})=a\cdot(\sum_{\bm{y}}\sum_{\\{(B,\overrightarrow{\sigma})|\text{ind}(B,\overrightarrow{\sigma})=1\\}}\\#\mathcal{M}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma})a(-\overrightarrow{\sigma})\otimes\bm{y}).$
###### Proposition 2.81.
The operator $\tilde{\partial}$ in Definition 2.79 is well-defined and
$\tilde{\partial}^{2}=\bm{U}$.
###### Proof.
A standard argument shows that the provincial admissibility of $\mathcal{H}$
implies the sum defining $\tilde{\partial}$ in Definition 2.79 is finite, and
hence $\tilde{\partial}$ is well-defined.
Next, we show $\tilde{\partial}^{2}=\bm{U}$. Once again, we first give the
proof when the local system is trivial for conciseness. Recall the length of
an element $a\in\tilde{\mathcal{A}}$ is the number of factors
$\rho_{i}\in\\{\rho_{0},\rho_{1},\rho_{2},\rho_{3}\\}$ when we write $a$ as a
product of the generators
$\\{\iota_{0},\iota_{1},\rho_{0},\rho_{1},\rho_{2},\rho_{3}\\}$. (For example,
$\rho_{123}$ has length $3$ and $\iota_{0}$ has length $0$.)
For an element $a\in\tilde{\mathcal{A}}$ whose length is less than or equal to
$3$, the proof of Proposition 2.77 carries over to show
$\langle\tilde{\partial}^{2}\bm{x},a\bm{y}\rangle=0$ for any $\bm{x}$ and
$\bm{y}$ (by permuting the region in which we put the base point $z$).
We are left to consider the case where the algebra element is of length $4$.
We claim that for a generator $\bm{x}$ such that
$\iota_{1}\cdot\bm{x}=\bm{x}$, we have
$\langle\tilde{\partial}^{2}\bm{x},\rho_{0123}\bm{y}\rangle=\begin{cases}0,\
\text{if}\ \bm{x}\neq\bm{y},\\\ 1,\ \text{if}\ \bm{x}=\bm{y}.\end{cases}$
Assuming this claim, by permuting the subscripts we also have that
$\langle\tilde{\partial}^{2}\bm{x},\rho_{2301}\bm{y}\rangle$ is $1$ if
$\bm{x}=\bm{y}$ and $0$ otherwise, and an idempotent consideration shows
$\langle\tilde{\partial}^{2}\bm{x},a\bm{y}\rangle=0$ when
$a\in\\{\rho_{1230},\rho_{3012}\\}$. These together imply
$\tilde{\partial}^{2}\bm{x}=\bm{U}\cdot\bm{x}$ when $\iota(\bm{x})=\iota_{1}$.
A similar consideration shows this is true for $\bm{x}$ with
$\iota(\bm{x})=\iota_{0}$ as well. This finishes the proof of the proposition
modulo the claim.
Next, we prove the claim. Note
(2.8)
$\langle\tilde{\partial}^{2}\bm{x},\rho_{0123}\bm{y}\rangle=\sum_{\bm{w}}\sum_{\begin{subarray}{c}\text{ind}(B_{i},\overrightarrow{\sigma_{i}})=1,\\\
i=1,2\end{subarray}}\\#\mathcal{M}^{B_{1}}(\bm{x},\bm{w};\overrightarrow{\sigma_{1}})\\#\mathcal{M}^{B_{2}}(\bm{w},\bm{y};\overrightarrow{\sigma_{2}}),$
where each $(B_{i},\overrightarrow{\sigma_{i}})$ ($i=1,2$) is compatible and
$a(-\overrightarrow{\sigma_{1}})a(-\overrightarrow{\sigma_{2}})=\rho_{0123}$
or $\bm{U}$; the possible pairs of
$(\overrightarrow{\sigma_{1}},\overrightarrow{\sigma_{2}})$ are listed below:
$\displaystyle\bigg{\\{}(\emptyset,\\{-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3}\\}),(\\{-\rho_{0}\\},\\{-\rho_{1},-\rho_{2},-\rho_{3}\\}),(\\{-\rho_{0},-\rho_{1},-\rho_{2}\\},\\{-\rho_{3}\\}),$
$\displaystyle(\\{-\rho_{0},-\rho_{1}\\},\\{-\rho_{2},-\rho_{3}\\}),(\\{-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3}\\},\emptyset),(\emptyset,\\{-\rho_{0},-\rho_{123}\\}),$
$\displaystyle(\\{-\rho_{0}\\},\\{-\rho_{123}\\}),(\\{-\rho_{0},-\rho_{123}\\},\emptyset),(\emptyset,\\{-\rho_{012},-\rho_{3}\\}),(\\{-\rho_{012}\\},\\{-\rho_{3}\\}),$
$\displaystyle(\\{-\rho_{012},-\rho_{3}\\},\emptyset),(\emptyset,\\{U\\}),(\\{U\\},\emptyset)\bigg{\\}}.$
Let
$\overline{\mathcal{M}}_{0}\coloneqq\cup_{\text{ind}(B,\overrightarrow{\sigma})=2}\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};\overrightarrow{\sigma}),$
where
$\overrightarrow{\sigma}\in\\{(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3}),(-\rho_{0},-\rho_{123}),(-\rho_{012},-\rho_{3})\\}$.
Let
$\overline{\mathcal{M}}_{1}:=\cup_{\text{ind}(B,U)=2}\overline{\mathcal{M}}^{B}(\bm{x},\bm{y};U).$
Equation (2.8) and the gluing result imply that
$\langle\tilde{\partial}^{2}\bm{x},\rho_{0123}\bm{y}\rangle$ is equal to the
number of two-story ends of the moduli space
$\overline{\mathcal{M}}_{0}\cup\overline{\mathcal{M}}_{1}$.
According to Proposition 2.49, the other elements in
$\partial\overline{\mathcal{M}}_{0}$ are:
* (A-1)
simple holomorphic combs with a single split component;
* (A-2)
simple boundary degenerations with one corner;
* (A-3)
simple boundary degenerations without corners.
Proposition 2.63 shows the other boundary points in
$\overline{\mathcal{M}}_{1}$ in addition to two-story ends are:
* (B-1)
simple holomorphic combs with an orbit curve;
* (B-2)
simple boundary degenerations with one corner;
* (B-3)
simple boundary degenerations without corners.
Note by Proposition 2.49 and Proposition 2.63, the number of boundary points
of type (A-1) is equal to that of type (B-1), both of which equal
$\sum_{\text{ind}(B,-\rho_{3012})=1}\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{3012})+\sum_{\text{ind}(B,-\rho_{1230})=1}\\#\mathcal{M}^{B}(\bm{x},\bm{y};-\rho_{1230}).$
The parity of the number of boundary points of type (A-2) is equal to that of
type (B-2); both are mod $2$ equal to
$\sum_{q}\sum_{\\{(B_{1},B_{2})|B_{2}\in T(q),\
\text{ind}(B_{1}+B_{2};U)=2\\}}\\#\mathcal{M}^{B_{1}}(\bm{x},\bm{y};q),$
where $q$ ranges over self-intersection points of $\alpha_{im}$, and $T(q)$
denotes the set of stabilized teardrops at $q$.
The number of boundary points of type (B-3) is even according to
Proposition 2.63.
In summary, the parity of the number of boundary points of
$\overline{\mathcal{M}}_{0}\cup\overline{\mathcal{M}}_{1}$ corresponding to
two-story ends is equal to that of type (A-3), which is odd if and only if
$\bm{x}=\bm{y}$ by Proposition 2.49. Therefore,
$\langle\tilde{\partial}^{2}\bm{x},\rho_{0123}\bm{y}\rangle$ is odd if and
only if $\bm{x}=\bm{y}$, finishing the proof of the claim.
In the presence of non-trivial local systems, we simply need to consider the
above argument for each domain. For a domain $B$, let
$\overline{\mathcal{M}}_{0}^{B}$ be the subset of $\overline{\mathcal{M}}_{0}$
consisting of holomorphic curves with domain $B$, and similarly define
$\overline{\mathcal{M}}_{1}^{B}$. The two-story ends in
$\overline{\mathcal{M}}_{0}^{B}\cup\overline{\mathcal{M}}_{1}^{B}$ all
correspond to the same parallel transport. When ends of type (A-3) do not
occur, the two-story ends cancel in pairs by the same argument as above. When
ends of type (A-3) appear, we have $B=[\Sigma]$ and
$\sigma=(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3})$. In particular,
$\partial_{\alpha_{im}}B=\emptyset$, which induces the identity endomorphism
of $\mathcal{E}|_{\bm{x}}$. Also, the number of two-story ends is odd as the
number of (A-3) ends is odd. The claim follows from these. ∎
There is a canonical quotient map
$\pi:\tilde{\mathcal{A}}\rightarrow\mathcal{A}$, which kills every element
involving $\rho_{0}$. We say a weakly extended type
D structure $(N,\tilde{\delta})$ extends a type D structure
$(N^{\prime},\delta)$ if $(N^{\prime},\delta)$ is isomorphic to
$(N,(\pi\otimes\mathbb{I}_{N})\circ\tilde{\delta})$. Clearly,
$\widetilde{CFD}(\mathcal{H})$ extends $\widehat{CFD}(\mathcal{H})$ when both
are defined.
### 2.11. Invariance
In this subsection, we address the invariance of the (weakly extended) type D
structures.
###### Proposition 2.82.
The homotopy type of the type D structure defined in Definition 2.75 is
independent of the choice of the almost complex structure and is invariant
under isotopy of the $\alpha$\- or $\beta$-curves.
###### Remark 2.83.
We do not need the invariance under handleslides and stabilizations for our
applications. We only need to prove invariance when perturbing diagrams to
obtain nice diagrams, and this only requires isotopies.
###### Proof of Proposition 2.82.
The standard proof in Section 6.3 of [LOT18] carries over. For instance, to
prove independence of almost complex structures, one first constructs a
continuation map by counting holomorphic curves in
$\Sigma\times[0,1]\times\mathbb{R}$ for a generic almost complex structure $J$
that interpolates between two admissible almost complex structures $J_{0}$ and
$J_{1}$. Then, one proves the continuation map is a chain map by analyzing the
ends of one-dimensional moduli spaces. The only possible complication comes
from boundary degenerations since $\alpha_{im}$ is immersed. However, this
does not happen as $\mathcal{H}$ is unobstructed and the holomorphic curves
have $n_{z}=0$. Therefore, no new phenomenon appears in the degeneration of
moduli spaces, and hence the proof stays the same. ∎
###### Proposition 2.84.
The homotopy type of the weakly extended type D structure defined in
Definition 2.79 is independent of the choice of the almost complex structure
and is invariant under isotopy of the $\alpha$\- and $\beta$-curves.
###### Proof.
One could prove this proposition similarly to the previous one. However, such
an approach would require generalizing the analysis of the ends of moduli
spaces in Proposition 6.20 of [LOT18] and hence is slightly tedious to write
down. Here we give a different approach. Let $\mathcal{H}$ denote the immersed
bordered Heegaard diagram. By Proposition 2.82, we know the homotopy type of
$\widehat{CFD}(\mathcal{H})$ is independent of the choice of almost complex
structures and isotopy of the $\alpha$\- or $\beta$-curves. Since
$\widetilde{CFD}(\mathcal{H})$ extends $\widehat{CFD}(\mathcal{H})$ and that
such extension is unique up to homotopy by Proposition 38 of [HRW23], we know
the homotopy type of $\widetilde{CFD}(\mathcal{H})$ is also independent of the
choice of almost complex structures and isotopy of the $\alpha$\- and
$\beta$-curves. ∎
## 3\. Knot Floer homology of immersed Heegaard diagrams
This section defines knot Floer chain complexes of immersed Heegaard diagrams
and proves the homotopy invariance under Heegaard moves.
### 3.1. Immersed doubly-pointed Heegaard diagram
###### Definition 3.1.
An _immersed doubly-pointed Heegaard diagram_ is a 5-tuple
$\mathcal{H}_{w,z}=(\Sigma,\bm{\alpha},\bm{\beta},w,z)$ where
* (1)
$\Sigma$ is a closed oriented surface of genus $g$.
* (2)
$\bm{\alpha}=\\{\alpha_{1},\ldots,\alpha_{g-1},\alpha_{g}\\}$, where
$\alpha_{1},\ldots,\alpha_{g-1}$ are embedded disjoint curves in $\Sigma$ and
$\alpha_{g}=\\{\alpha_{g}^{1},\ldots,\alpha_{g}^{n}\\}$ is a collection of
immersed curves decorated with local systems. Moreover, $\alpha_{i}$
($i=1,\ldots,g-1$) are disjoint from $\alpha_{g}$, $\alpha_{g}^{1}$ has the
trivial local system, and
$\\{\alpha_{1},\ldots,\alpha_{g-1},\alpha_{g}^{1}\\}$ induce linearly
independent elements in $H_{1}(\Sigma,\mathbb{Z})$. We also assume that
$\alpha_{g}^{i}$ is trivial in
$H_{1}(\Sigma,\mathbb{Z})/\langle\alpha_{1},\ldots,\alpha_{g-1}\rangle$ for
$i>1$. For convenience, we also denote $\alpha_{g}$ by $\alpha_{im}$.
* (3)
$\bm{\beta}=\\{\beta_{1},\ldots,\beta_{g}\\}$ are embedded disjoint curves in
$\Sigma$ which induce linearly independent elements in
$H_{1}(\Sigma,\mathbb{Z})$.
* (4)
$w$ and $z$ are base points such that they both lie in a single connected
region in the complement of $\alpha$-curves as well as a single region in the
complement of $\beta$-curves.
Domains, periodic domains, and $\alpha$-bounded domains are defined in this
setting similarly to the case of bordered Heegaard diagrams (by ignoring
$\alpha$-arcs and surface boundary). We make a similar but slightly different definition of
unobstructedness and admissibility below.
###### Definition 3.2.
Given an immersed doubly-pointed Heegaard diagram, $\bm{\alpha}$ is called
_unobstructed_ if there are no nontrivial zero- or one-cornered
$\alpha$-bounded domains $B$ with $n_{z}(B)=0$ (or equivalently $n_{w}(B)=0$).
An immersed doubly-pointed Heegaard diagram is called unobstructed if
$\bm{\alpha}$ is unobstructed.
###### Definition 3.3.
An immersed doubly-pointed Heegaard diagram is _bi-admissible_ if any
nontrivial periodic domain $B$ with $n_{z}(B)=0$ or $n_{w}(B)=0$ has both
positive and negative coefficients.
We remark that the restriction to having only one immersed multicurve in the
definition of immersed doubly-pointed Heegaard diagrams is not essential.
### 3.2. The knot Floer chain complex
We define the knot Floer chain complex of an immersed Heegaard diagram
similarly to that in the ordinary setup. The only modification is that we only count
stay-on-track holomorphic curves. The definition and analysis of moduli spaces
in this setup is a straightforward modification of that in the previous
section; it is even simpler as we do not need to care about east punctures. We
hence do not repeat the moduli space theory but only mention the key
properties when we need them. We will let $\mathcal{G}(\mathcal{H}_{w,z})$
denote the set of generators, which are $g$-tuples $(x_{1},\ldots,x_{g})$ such
that $x_{i}\in\alpha_{i}\cap\beta_{\sigma(i)}$ $(i=1,\ldots,g)$ where $\sigma$
is a permutation of $\\{1,\ldots,g\\}$. Let
$\mathcal{R}=\mathbb{F}[U,V]/(UV)$. Implicit in the definition below is that
we choose a generic admissible almost complex structure $J$ on
$\Sigma\times[0,1]\times\mathbb{R}$.
###### Definition 3.4.
Let $\mathcal{H}_{w,z}$ be an unobstructed and bi-admissible immersed doubly-
pointed Heegaard diagram. $CFK_{\mathcal{R}}(\mathcal{H}_{w,z})$ is the free
$\mathcal{R}$-module generated by $\mathcal{G}(\mathcal{H}_{w,z})$ with
differential $\partial$ defined as
$\partial\bm{x}=\sum_{\bm{y}}\sum_{B\in\pi_{2}(\bm{x},\bm{y}),\
\text{ind}(B)=1}\\#\mathcal{M}^{B}(\bm{x},\bm{y})U^{n_{w}(B)}V^{n_{z}(B)}\bm{y},$
where $\bm{x},\bm{y}\in\mathcal{G}$.
###### Remark 3.5.
Here we only give the definition assuming the local system on $\alpha_{im}$ is
trivial. The case in which the local system is non-trivial is only
notationally more complicated, and we leave it for the interested readers to
work out. See Definition 2.75 for an example.
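For concreteness, we record a schematic example (standard, and not tied to any particular diagram in this paper): for a genus-one doubly-pointed diagram of a trefoil one obtains three generators $a$, $b$, $c$ with, up to orientation conventions,
$\partial b=Ua+Vc,\quad\partial a=\partial c=0,$
and indeed $\partial^{2}b=U\partial a+V\partial c=0$. Note also that any domain $B$ with both $n_{w}(B)>0$ and $n_{z}(B)>0$ contributes trivially to the differential, as $UV=0$ in $\mathcal{R}$.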
###### Proposition 3.6.
$(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}),\partial)$ is a chain complex, i.e.,
$\partial^{2}=0$.
###### Proof.
The same proof as for Proposition 2.77 works here. Note we will only use moduli
spaces with domains $B$ such that $n_{w}(B)=0$ or $n_{z}(B)=0$, and the
unobstructedness of $\mathcal{H}_{w,z}$ excludes the possibility of boundary
degeneration in the compactified 1-dimensional moduli space supported in such
domains. Hence, an analogous version of Proposition 2.47 holds. With this
observation, the proof of Proposition 2.77 carries over. ∎
### 3.3. Bi-grading
We would like to consider gradings on knot Floer chain complexes.
###### Definition 3.7.
A (possibly immersed) doubly-pointed Heegaard diagram is gradable if every non-
trivial periodic domain $P$ satisfies $\text{ind}(P)-2n_{z}(P)=0$ and
$\text{ind}(P)-2n_{w}(P)=0$, where $\text{ind}(-)$ is defined in Definition
2.43.
If $\mathcal{H}_{w,z}$ is gradable then the knot Floer chain complex
$(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}),\partial)$ admits a relative
$\mathbb{Z}\oplus\mathbb{Z}$-grading, as described below. We will be
interested in diagrams $\mathcal{H}_{w,z}$ for which
$\widehat{HF}(\mathcal{H}_{w})\cong\widehat{HF}(\mathcal{H}_{z})\cong\mathbb{F}$,
where $\widehat{HF}(\mathcal{H}_{w})$ and $\widehat{HF}(\mathcal{H}_{z})$ are
homology groups of the chain complexes obtained from
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z})$ by setting $V=0$ and $U=1$ or $U=0$ and
$V=1$, respectively. In this case we say that the horizontal and vertical
homology has rank one. Gradable diagrams with this property can be given an
absolute grading, as follows.
###### Definition 3.8.
Let $\bm{x},\bm{y}\in\mathcal{G}(\mathcal{H}_{w,z})$ be two generators. Let
$B\in\tilde{\pi}_{2}(\bm{x},\bm{y})$ be a domain. Then the $w$-grading
difference between $\bm{x}$ and $\bm{y}$ is given by
$gr_{w}(\bm{x})-gr_{w}(\bm{y})=\text{ind}(B)-2n_{w}(B),$
and the $z$-grading difference between $\bm{x}$ and $\bm{y}$ is given by
$gr_{z}(\bm{x})-gr_{z}(\bm{y})=\text{ind}(B)-2n_{z}(B).$
If the horizontal and vertical homology of $\mathcal{H}_{w,z}$ is rank one,
then the absolute $w$-grading is normalized so that
$\widehat{HF}(\mathcal{H}_{w})$ is supported in $w$-grading $0$, and absolute
$z$-grading is normalized so that $\widehat{HF}(\mathcal{H}_{z})$ is supported
in $z$-grading $0$.
Equivalently, one can equip $CFK_{\mathcal{R}}(\mathcal{H}_{w,z})$
with the Maslov grading and the Alexander grading. These two gradings can be
expressed in terms of the $w$-grading and $z$-grading: The Maslov grading is
equal to the $z$-grading, and the Alexander grading is given by
$\frac{1}{2}(gr_{w}-gr_{z})$.
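For instance, suppose $\partial\bm{x}$ contains a term $U\bm{y}$, coming from a domain $B$ with $\text{ind}(B)=1$, $n_{w}(B)=1$, and $n_{z}(B)=0$. Then $gr_{w}(\bm{x})-gr_{w}(\bm{y})=-1$ and $gr_{z}(\bm{x})-gr_{z}(\bm{y})=1$. Extending the gradings to $\mathcal{R}$ by $gr_{w}(U)=-2$ and $gr_{z}(U)=0$ (so that $A(U)=-1$), both $gr_{w}$ and $gr_{z}$ drop by one along this term of the differential, and $A(\bm{x})=A(U\bm{y})$; that is, the differential preserves the Alexander grading, as expected.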
###### Remark 3.9.
The normalization conditions for the absolute gradings are chosen so that the
bi-graded chain complexes model those associated to knots in the 3-sphere.
### 3.4. Invariance
We will show knot Floer chain complexes defined over immersed Heegaard
diagrams satisfy similar invariance properties when varying the almost complex
structure or modifying the Heegaard diagram by isotopy, handleslides, and
stabilizations. While the meanings of isotopy and stabilization are obvious for
immersed Heegaard diagrams, we give a remark on handleslides.
###### Remark 3.10.
When speaking of handleslides of an immersed Heegaard diagram
$\mathcal{H}_{w,z}$, we only allow an $\alpha$-curve to slide over another
_embedded_ $\alpha$-curve, not over an immersed $\alpha$-curve. Furthermore,
we point out that handleslides do not change the unobstructedness, bi-admissibility, and gradability of the diagram. To see this, note that the periodic domains of the two Heegaard diagrams before and after a handleslide are related. A periodic domain in the old Heegaard diagram with boundary on the curve that moves in the handleslide gives rise to a periodic domain in the new Heegaard diagram by boundary summing a thin annulus (whose multiplicity can be one or negative one). In particular, if we started from a somewhere negative domain $B$, then the new domain $B^{\prime}$ after this procedure is still somewhere negative; it is also easy to see $\text{ind}(B)=\text{ind}(B^{\prime})$, $n_{z}(B)=n_{z}(B^{\prime})$, and $n_{w}(B)=n_{w}(B^{\prime})$, which implies the gradability of the two diagrams is the same as well.
###### Proposition 3.11.
Let $\mathcal{H}_{w,z}$ be an unobstructed, bi-admissible, and gradable
immersed doubly-pointed Heegaard diagram. The bigraded chain homotopy type of
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z})$ is invariant under varying the almost
complex structure, isotopy of the $\alpha$\- and $\beta$-curves, handleslides,
and stabilization/destabilization.
###### Proof.
The proof of the bigraded homotopy invariance under the variation of the
almost complex structure, isotopy, and stabilization is the same as the
corresponding results in the embedded-$\alpha$-curve set-up in [Lip06]. In
fact, changing the $\alpha$-curves from embedded to immersed can only
complicate the arguments in that boundary degeneration might appear as ends of
the moduli spaces involved, yet the unobstructedness dispels such worries.
Figure 9. The $\alpha$-curves in proving the handleslide invariance on a
genus-two surface, which is represented as a torus obtained by identifying the
edges of a square together with a handle attached to the two circles inside
the square. The labels $\theta_{im}^{\pm}$ are used interchangeably with
$\theta_{2}^{\pm}$. Similarly, $\theta_{im}^{H,\pm}$ and
$\theta_{im}^{{}^{\prime}\pm}$ are the same as $\theta_{2}^{H,\pm}$ and
$\theta_{2}^{{}^{\prime}\pm}$, respectively. (a) shows any self-intersection
point $q_{i}$ of $\alpha_{im}$ induces two intersection points between
$\alpha_{im}$ and its perturbation $\alpha^{\prime}_{im}$. (b) shows the small
triangles showing
$F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes\Theta_{\alpha^{H},\alpha})=\Theta_{\alpha^{\prime},\alpha}$.
The handleslide invariance can also be proved using the same strategy as in
the embedded-$\alpha$-curve case with slightly more caution. The main
difference is that in the embedded-$\alpha$-curve case, there is a unique
maximal graded generator in the Heegaard Floer homology of a Heegaard diagram
where the set of $\alpha$-curves is a small Hamiltonian perturbation of the
$\beta$-curves. In contrast, such a generator needs to be specified more
carefully in our case. We spell this out in more detail.
Denote $\mathcal{H}=(\Sigma,\bm{\alpha},\bm{\beta},w,z)$. For clarity of
exposition, assume $\alpha_{im}$ consists of a single component with a trivial
local system and $n$ self-intersection points. We also restrict to the
interesting case, in which the handleslide is sliding $\alpha_{im}$ over an
embedded $\alpha$-curve. Let $\bm{\alpha^{\prime}}$ denote a small Hamiltonian
perturbation of $\bm{\alpha}$ so that $\alpha_{i}\cap\alpha^{\prime}_{j}=\emptyset$ for
$i\neq j$; for $i=1,\ldots,g-1$, the embedded curves $\alpha_{i}$ and
$\alpha_{i}^{\prime}$ intersect in exactly two points
$\\{\theta_{i}^{+},\theta_{i}^{-}\\}$; $\alpha_{im}$ intersects
$\alpha_{im}^{\prime}$ at $2+2n$ points
$\\{\theta_{g}^{+},\theta_{g}^{-},\xi_{1}^{+},\xi_{1}^{-},\ldots,\xi_{n}^{+},\xi_{n}^{-}\\}$,
where $\xi_{i}^{\pm}$ are intersection points corresponding to the self-
intersection points of $\alpha_{im}$. We label the $\theta$-intersection
points using the convention that $(\theta_{i}^{+},*)$ is of higher grading
than $(\theta_{i}^{-},*)$ in
$CFK_{\mathcal{R}}(\Sigma,\bm{\alpha^{\prime}},\bm{\alpha},w,z)$,
$(i=1,\ldots,g)$ (see Figure 9 (a)). Let $\alpha_{im}^{H}$ denote the curve
obtained by sliding $\alpha_{im}$ over, say, $\alpha_{g-1}$, so that
$\alpha_{im}^{H}$ intersects each of $\alpha_{im}$ and $\alpha_{im}^{\prime}$
in $2+2n$ points; denote the $\theta$-intersection points by
$\\{\theta_{g}^{H,+},\theta_{g}^{H,-}\\}$ and
$\\{\theta_{g}^{{}^{\prime}+},\theta_{g}^{{}^{\prime}-}\\}$, respectively. Let
$\alpha_{i}^{H}$ ($i=1,\ldots,g-1$) be small Hamiltonian perturbations of
$\alpha_{i}^{\prime}$ so that $\alpha_{i}^{H}$ intersects each of $\alpha_{i}$
and $\alpha_{i}^{\prime}$ at exactly two points, denoted by
$\\{\theta_{i}^{H,+},\theta_{i}^{H,-}\\}$ and
$\\{\theta_{i}^{{}^{\prime}+},\theta_{i}^{{}^{\prime}-}\\}$, respectively. Let
$\Theta_{\alpha^{\prime},\alpha}=(\theta_{1}^{+},\ldots,\theta_{g}^{+})$,
$\Theta_{\alpha^{H},\alpha}=(\theta_{1}^{H,+},\ldots,\theta_{g}^{H,+})$, and
$\Theta_{\alpha^{\prime},\alpha^{H}}=(\theta_{1}^{{}^{\prime}+},\ldots,\theta_{g}^{{}^{\prime}+})$.
These correspond to the maximal graded intersection points used in the
embedded case. (A straightforward computation shows
$\Theta_{\alpha^{\prime},\alpha}$ is indeed a cycle in the Floer chain complex
associated to the immersed Heegaard diagram
$(\Sigma,\bm{\alpha^{\prime}},\bm{\alpha},w,z)$; similar statements hold for
$\Theta_{\alpha^{H},\alpha}$ and $\Theta_{\alpha^{\prime},\alpha^{H}}$.)
The rest of the proof is similar to the embedded case. We provide a sketch.
Let $\mathcal{H}^{H}=(\Sigma,\bm{\alpha}^{H},\bm{\beta},w,z)$ and
$\mathcal{H}^{\prime}=(\Sigma,\bm{\alpha^{\prime}},\bm{\beta},w,z)$. By
counting holomorphic triangles (with stay-on-track boundaries), one can define
chain maps
$F(\Theta_{\alpha^{H},\alpha}\otimes-):CFK_{\mathcal{R}}(\mathcal{H})\rightarrow
CFK_{\mathcal{R}}(\mathcal{H}^{H})$
and
$F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes-):CFK_{\mathcal{R}}(\mathcal{H}^{H})\rightarrow
CFK_{\mathcal{R}}(\mathcal{H}^{\prime}).$
Again, the usual proof which shows the above maps are chain maps carries
through, as the unobstructedness excludes boundary degeneration when analyzing
the ends of one-dimensional moduli spaces of holomorphic triangles. Similarly,
by analyzing ends of one-dimensional moduli spaces of holomorphic quadrilaterals,
one can show the composition of these two maps is chain homotopic
to
$F(F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes\Theta_{\alpha^{H},\alpha})\otimes-)$.
One can show this map is chain homotopic to
$F(\Theta_{\alpha^{\prime},\alpha}\otimes-):CFK_{\mathcal{R}}(\mathcal{H})\rightarrow
CFK_{\mathcal{R}}(\mathcal{H}^{\prime})$
by a standard computation which shows
$F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes\Theta_{\alpha^{H},\alpha})=\Theta_{\alpha^{\prime},\alpha}$
(see Figure 9 (b)). One can show that the map
$F(\Theta_{\alpha^{\prime},\alpha}\otimes-)$ is a chain isomorphism (using the
area-filtration technique in [OS04b], Proposition 9.8). ∎
## 4\. Pairing theorems
In Sections 4.1–4.2, we introduce a pairing construction which merges a (non-
immersed) bordered Heegaard diagram and an immersed multicurve to produce an
immersed Heegaard diagram. After that, we establish the unobstructedness and
admissibility of these pairing diagrams in Sections 4.3–4.5, and then we prove
the bordered invariant of such pairing diagrams admits a box-tensor product
interpretation in Section 4.6. Finally, in Section 4.7 we prove a pairing
theorem for gluing a particular type of doubly-pointed bordered Heegaard
diagram and an immersed bordered Heegaard diagram; this theorem will be useful
in Section 5.
### 4.1. Immersed curves in the marked torus
###### Definition 4.1.
The _marked torus_ $T^{2}$ is the oriented surface
$\mathbb{R}^{2}/\mathbb{Z}^{2}$ together with a base point $z$ located at
$(1-\epsilon,1-\epsilon)$ for some sufficiently small $\epsilon>0$. The images
of the positively oriented $x$-axis and $y$-axis are called the _preferred
longitude_ and _preferred meridian_ respectively.
We will consider immersed multicurves with local systems in the marked torus.
Two immersed multicurves are equivalent if they are regularly homotopic in
$T^{2}\backslash\\{z\\}$ and the local systems are isomorphic. Throughout this
paper, we restrict to immersed multicurves $\alpha_{im}$ satisfying the
following assumptions:
* (C-1)
No component of $\alpha_{im}$ is a circle enclosing the base point $z$ once.
* (C-2)
No component of the immersed multicurve is null-homotopic in
$T^{2}\backslash\\{z\\}$, and the immersed multicurve is _unobstructed_ in the
sense that it does not bound any teardrops in $T^{2}\backslash\\{z\\}$.
* (C-3)
The immersed multicurve is _reduced_, i.e., if we let $[0,1]\times[0,1]$ be
the square obtained by cutting the marked torus open along the preferred
meridian and longitude, then no sub-arcs of $\alpha_{im}$ contained in
$[0,1]\times[0,1]$ have both ends on the same edge of the square.
* (C-4)
Let $\pi$ denote the projection map from $\mathbb{R}^{2}$ to $T^{2}$. Using
regular homotopy, we assume all immersed curves in the marked torus are
contained in the complement of
$\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}])$ in $T^{2}$,
the strands contained in
$\pi([-\frac{1}{4},\frac{1}{4}]\times[\frac{1}{4},\frac{3}{4}])$ are
horizontal, and the strands contained in the image of
$\pi([\frac{1}{4},\frac{3}{4}]\times[-\frac{1}{4},\frac{1}{4}])$ are vertical.
An immersed multicurve in the marked torus determines a type D structure over
the torus algebra as follows. First, we introduce some terminology.
Figure 10. Six types of elementary arcs. The orientations are the so-called
correct orientations.
###### Definition 4.2.
An _elementary arc_ is an embedded arc in the marked torus ${T}^{2}$ such that
it only intersects the preferred meridian or longitude at the endpoints. There
are six types of elementary arc based on the position of the endpoints, each
of which is labeled by a Reeb chord in
$\\{\rho_{1},\rho_{2},\rho_{3},\rho_{12},\rho_{23},\rho_{123}\\}$ as shown in
Figure 10.
If we ignore the local systems, then any immersed multicurve is composed of a
collection of elementary arcs; one can see this by cutting $T^{2}$ open along
the preferred longitude and meridian. Sometimes we also need to consider
oriented elementary arcs.
###### Definition 4.3.
An orientation of an elementary arc is called the correct orientation if it is
the one shown in Figure 10.
Next, we describe how to obtain a type D structure from an immersed multicurve
in terms of elementary arcs. Denote the local system on $\alpha_{im}$ by
$(\mathcal{E},\Phi)$, where $\mathcal{E}$ is a vector bundle over
$\alpha_{im}$ and $\Phi$ is a parallel transport. Let
$\mathcal{G}(\alpha_{im})=\mathcal{G}_{m}\cup\mathcal{G}_{l}$, where
$\mathcal{G}_{m}$ (respectively, $\mathcal{G}_{l}$) is the set of intersection
points of $\alpha_{im}$ and the preferred meridian (respectively, longitude).
Let $\mathcal{X}$ be the vector space
$\oplus_{x\in\mathcal{G}(\alpha_{im})}\mathcal{E}|_{x}$. Next, we define an
$\mathcal{I}$-action on $\mathcal{X}$, where $\mathcal{I}$ is the ring of
idempotents of the torus algebra. If $x\in\mathcal{G}_{m}$, for any
$\tilde{x}\in\mathcal{E}|_{x}$, $\iota_{0}\cdot\tilde{x}=\tilde{x}$ and
$\iota_{1}\cdot\tilde{x}=0$; if $x\in\mathcal{G}_{l}$, for any
$\tilde{x}\in\mathcal{E}|_{x}$, $\iota_{0}\cdot\tilde{x}=0$ and
$\iota_{1}\cdot\tilde{x}=\tilde{x}$. The underlying $\mathcal{A}$-module for
$\widehat{CFD}(\alpha_{im})$ is $\mathcal{A}\otimes_{\mathcal{I}}\mathcal{X}$.
Finally, the differential on $\widehat{CFD}(\alpha_{im})$ decomposes linearly
as maps between $\mathcal{E}|_{x}$ for $x\in\mathcal{G}(\alpha_{im})$. Given
$x,y\in\mathcal{G}(\alpha_{im})$ and $\rho_{I}$ a Reeb element, there is a
differential map $\mathcal{E}|_{x}\rightarrow\rho_{I}\otimes\mathcal{E}|_{y}$
if and only if $x$ and $y$ are connected by a $\rho_{I}$-elementary arc whose
correct orientation goes from $x$ to $y$, in which case the differential is
given by $\partial(\tilde{x})=\rho_{I}\otimes\Phi(\tilde{x})$ for
$\tilde{x}\in\mathcal{E}|_{x}$. In particular, when the local system of
$\alpha_{im}$ is trivial, then the generators of $\widehat{CFD}(\alpha_{im})$
are in one-to-one correspondence with the intersection points of $\alpha_{im}$
with the preferred longitude/meridian, and the differentials are in one-to-one
correspondence with the elementary sub-arcs of $\alpha_{im}$.
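As a quick example (assuming the labeling conventions of Figure 10): suppose $\alpha_{im}$ is a single embedded curve with trivial local system isotopic to the preferred longitude, meeting the preferred meridian in one point $x\in\mathcal{G}_{m}$ and the preferred longitude not at all. Cutting the curve at $x$ exhibits it as a single elementary arc from the meridian back to the meridian; since $\iota_{0}\cdot\tilde{x}=\tilde{x}$ for $\tilde{x}\in\mathcal{E}|_{x}$, this arc must be of type $\rho_{12}$, the only label lying in $\iota_{0}\mathcal{A}\iota_{0}$. Hence $\widehat{CFD}(\alpha_{im})$ has the single generator $x$ with $\delta(x)=\rho_{12}\otimes x$, recovering the solid-torus example given after Definition 2.74.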
The immersed-curve presentation of type D structures is justified by the
following result.
###### Theorem 4.4 ([HRW23]).
Each type D structure of a bordered 3-manifold with torus boundary is
homotopic to a type D structure determined by some immersed multicurve (with
local systems) in the marked torus.
###### Remark 4.5.
All immersed multicurves arising from 3-manifolds with torus boundary satisfy
the assumptions (C-1)-(C-4): (C-1) is straightforward, (C-2) and (C-3) follow
from the algorithm of converting type D structures to immersed multicurves in
[HRW23], and for (C-4) see the discussion around Figures 31 and 32 in [HRW22].
We will mainly be interested in the immersed multicurves corresponding to type
D structures of knot complements for knots in the 3-sphere; these immersed
multicurves satisfy some further properties that we specify in Definition 4.6
below, and the proofs of these properties can be found in [HRW22, Section 4].
###### Definition 4.6.
An immersed multicurve
$\alpha_{im}=\\{\alpha_{im}^{0},\ldots,\alpha_{im}^{n-1}\\}$ of $n$ components
(for some $n\geq 1$) with a local system is called knot-like if the local
system restricted to ${\alpha_{im}^{0}}$ is trivial, $\alpha_{im}^{0}$ (with
some orientation) is homologous to the preferred longitude in $T^{2}$, and
$[\alpha_{im}^{i}]$ for $i\geq 1$ is trivial in $H_{1}(T^{2},\mathbb{Z})$.
From now on, we assume all immersed multicurves are knot-like.
### 4.2. Pairing diagrams
We introduce a class of immersed bordered Heegaard diagrams and doubly pointed
Heegaard diagrams. They are respectively obtained from two types of pairing
constructions that we will define:
* (1)
Pairing an immersed multicurve in the marked torus and an _arced bordered
Heegaard diagram with two boundary components_ to construct an immersed
bordered Heegaard diagram.
* (2)
Pairing an immersed multicurve in the marked torus with a doubly pointed
bordered Heegaard diagram to construct a closed immersed doubly pointed
Heegaard diagram.
We begin with the first type. For convenience, we first recall the definition
of arced bordered Heegaard diagrams below (in the special case where both
boundaries of the corresponding bordered manifold are tori).
###### Definition 4.7.
An arced bordered Heegaard diagram with two boundary components is a quadruple
$\mathcal{H}^{a}=(\bar{\Sigma},\bar{\bm{\alpha}},\bm{\beta},\bm{z})$ where
* (1)
$\bar{\Sigma}$ is a compact, oriented surface of genus $g$ with two boundary
components
$\partial\bar{\Sigma}=\partial_{L}\bar{\Sigma}\cup\partial_{R}\bar{\Sigma}$;
* (2)
$\bar{\bm{\alpha}}$ is a collection of pairwise disjoint properly embedded
arcs and curves
$\\{\alpha^{a,L}_{1},\alpha^{a,L}_{2},\alpha^{a,R}_{1},\alpha^{a,R}_{2},\alpha^{c}_{1},\ldots,\alpha^{c}_{g-2}\\}$.
Here, $\alpha^{a,L}_{1}$ and $\alpha^{a,L}_{2}$ are two arcs with endpoints on
$\partial_{L}\bar{\Sigma}$, $\alpha^{a,R}_{1}$ and $\alpha^{a,R}_{2}$ are two
arcs with endpoints on $\partial_{R}\bar{\Sigma}$, and the $\alpha^{c}_{i}$’s
($i=1,\ldots,g-2$) are embedded circles. Moreover, elements in
$\bar{\bm{\alpha}}$ induce linearly independent elements in
$H_{1}(\bar{\Sigma},\partial\bar{\Sigma};\mathbb{Z})$;
* (3)
$\bm{\beta}$ is a set of $g$ pairwise disjoint embedded circles
$\\{\beta_{1},\ldots,\beta_{g}\\}$ in the interior of $\bar{\Sigma}$ that are
linearly independent as elements in
$H_{1}(\bar{\Sigma},\partial\bar{\Sigma};\mathbb{Z})$;
* (4)
$\bm{z}$ is a properly embedded arc in
$\bar{\Sigma}\backslash(\bar{\bm{\alpha}}\cup\bm{\beta})$ with one endpoint
$z_{L}$ on $\partial_{L}\bar{\Sigma}$ and the other endpoint $z_{R}$ on
$\partial_{R}\bar{\Sigma}$.
Periodic and provincially periodic domains for arced bordered Heegaard diagrams
with two boundary components are defined similarly to the case of a single
boundary component. In the two-boundary case we will also consider periodic
domains that are adjacent to only one of the boundaries.
###### Definition 4.8.
A domain is _left provincial_ if the multiplicities in the regions adjacent to
$\partial_{L}\bar{\Sigma}$ are zero. We say an arced bordered Heegaard
diagram with two boundary components is _left provincially admissible_ if all
left provincial periodic domains have both positive and negative
multiplicities.
The pairing construction is illustrated in Figure 11, and is spelled out in
Definition 4.9.
Figure 11. Left: an arced bordered Heegaard diagram. Middle: an immersed
multicurve in the marked torus. The dashed lines are the boundary of
$\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}])$. Right: a
bordered Heegaard diagram obtained by the pairing construction.
###### Definition 4.9.
Let $\mathcal{H}^{a}=(\bar{\Sigma},\bar{\bm{\alpha}},\bm{\beta},\bm{z})$ be an
arced bordered Heegaard diagram with two boundary components and let
$\alpha_{im}$ be an immersed multicurve in the marked torus $T^{2}$. The
_pairing diagram of $\mathcal{H}^{a}$ and $\alpha_{im}$_, denoted by
$\mathcal{H}^{a}(\alpha_{im})$, is a bordered Heegaard diagram obtained
through the following steps.
* (1)
Form $\bar{\Sigma}^{\prime}$ from $\bar{\Sigma}$ by collapsing
$\partial_{R}\bar{\Sigma}$. Let $\alpha^{\prime a}_{i}$ be the image of
$\alpha^{a,L}_{i}$ ($i=1,2$), $\alpha^{\prime c}_{i}$ be the image of
$\alpha^{c}_{i}$ ($i=1,\ldots,g-2$), $\bm{\beta}^{\prime}$ be the image of
$\bm{\beta}$, and $z^{\prime}_{L}$ be the image of $z_{L}$. The images of
$\alpha^{a,R}_{i}$ ($i=1,2$), denoted by $\tilde{\alpha}_{i}$, are two circles
intersecting at a single point ${z}^{\prime}_{R}$, the image of $z_{R}$.
* (2)
Take a neighborhood $U$ of $\tilde{\alpha}_{1}\cup\tilde{\alpha}_{2}$ which
admits a homeomorphism $h:U\rightarrow
T^{2}\backslash\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}])$
such that $h(\tilde{\alpha}_{1})=\pi(\\{\frac{1}{2}\\}\times[0,1])$,
$h(\tilde{\alpha}_{2})=\pi([0,1]\times\\{\frac{1}{2}\\})$, and each connected
component of $h(\bm{\beta}^{\prime}\cap U)$ is an arc of the form
$\pi(\\{x\\}\times[\frac{1}{4},\frac{3}{4}])$ or
$\pi([\frac{1}{4},\frac{3}{4}]\times\\{y\\})$ for some $x$ or $y$ in
$(2\epsilon,\frac{1}{4})$.
* (3)
Let $\alpha_{im}^{\prime}=h^{-1}(\alpha_{im})$. Let
$\bar{\bm{\alpha}}^{\prime}=\\{\alpha^{\prime a}_{1},\alpha^{\prime
a}_{2},\alpha^{\prime c}_{1},\ldots,\alpha^{\prime
c}_{g-2},\alpha_{im}^{\prime}\\}$.
* (4)
Let
$\mathcal{H}^{a}(\alpha_{im})=(\bar{\Sigma}^{\prime},\bar{\bm{\alpha}}^{\prime},\bm{\beta}^{\prime},z^{\prime}_{L})$.
Recall a _doubly pointed bordered Heegaard diagram_ is a bordered Heegaard
diagram with an extra basepoint in the complement of the $\alpha$\- and
$\beta$-curves. It encodes a knot in a bordered 3-manifold. There is an
entirely similar pairing construction for a doubly-pointed bordered Heegaard
diagram and an immersed multicurve in the marked torus. We do not spell out
the wordy definition and simply refer the reader to Figure 12 for an example.
Figure 12. Pairing construction that gives rise to a doubly pointed Heegaard
diagram.
We want to establish the unobstructedness and admissibility of the immersed
Heegaard diagrams obtained from pairing constructions. For that we need two
tools, namely _z-adjacency_ and _the collapsing map_ introduced in the next
two subsections.
### 4.3. z-adjacency
We will consider a diagrammatic condition for immersed multicurves that
guarantees the unobstructedness of the pairing diagram; this condition can be
achieved easily by finger moves. We begin by introducing some terminology for
convenience.
In the definition below, we orient the curves in $\alpha_{im}$ arbitrarily and
orient the four edges of the cut-open torus using the boundary orientation.
For each edge of the cut-pen torus, let $k_{+}$ and $k_{-}$ denote the number
of elementary arcs intersecting a given edge positively and negatively,
respectively.
Figure 13. The disks $U^{R}_{-}$ and $U^{L}_{-}$. The superscript is chosen to
suggest whether $z$ is on the left or on the right of the strands when we
traverse an arc in the indicated direction.
###### Definition 4.10.
Let $\alpha_{im}$ be an immersed multicurve in the marked torus. Then
$\alpha_{im}$ is _$z$-adjacent_ if, for each of the four edges of the cut-
open torus, there exist four open disks $U_{\pm}^{R}$ and $U_{\pm}^{L}$ in
$T^{2}$ such that
* (1)
$(U_{-}^{L},U_{-}^{L}\cap(\alpha_{im}\cup\\{{z}\\}))$,
$(U_{-}^{R},U_{-}^{R}\cap(\alpha_{im}\cup\\{{z}\\}))$,
$(U_{+}^{L},U_{+}^{L}\cap(\alpha_{im}\cup\\{{z}\\}))$ and
$(U_{+}^{R},U_{+}^{R}\cap(\alpha_{im}\cup\\{{z}\\}))$ are homeomorphic to the
corresponding disks in Figure 13, where the arcs in the disks are sub-arcs of
the $k_{-}$ distinct elementary arcs intersecting the given edge negatively
for disks with subscript $-$, or sub-arcs of the $k_{+}$ distinct elementary
arcs intersecting the given edge positively for disks with subscript $+$;
* (2)
if the given edge is the top edge, then $U^{L}_{-}$ and $U^{R}_{+}$ are
contained in $[0,1]\times[0,1]$;
* (3)
if the given edge is the right edge, then $U^{R}_{-}$ and $U^{L}_{+}$ are
contained in $[0,1]\times[0,1]$.
###### Proposition 4.11.
Every immersed multicurve in the marked torus is regularly homotopic to a
$z$-adjacent multicurve.
###### Proof.
Orient $\alpha_{im}$ arbitrarily. We first define an operation on a collection
of oriented parallel arcs. Assume there are $k_{+}+k_{-}$ arcs, where
$k_{+}$-many of the arcs are oriented in one direction, and the rest are
oriented in the opposite direction.
Figure 14. Finger moves on parallel strands.
The operation is shown in Figure 14: First, by performing the finger moves in
Figure 14 (a) repeatedly, we can arrive at a collection of arcs as shown in
the left of Figure 14 (b): the $P$\- and $P^{-1}$-boxes indicate a pair of
mutually inverse permutations, and between the $P$\- and $P^{-1}$-boxes the
arcs are arranged so that all $k_{-}$ arcs with parallel orientations are
grouped on the left and all the other $k_{+}$ arcs with the opposite
orientations are grouped on the right. Next, do a sequence of finger moves to
the diagram on the left of Figure 14 (b) to arrive at the right-hand-side
diagram of Figure 14 (b). Now performing this operation on the arcs of
$\alpha_{im}$ near all four edges of the cut-open marked torus yields a
$z$-adjacent immersed multicurve; see Figure 15 for the desired open disks. Note
that conditions $(2)$ and $(3)$ are obviously satisfied because $z$ is in the
top right corner of the cut open torus. ∎
Figure 15. A $z$-adjacent immersed curve.
We shall need a technical lemma. Let $l$ be a one-cornered sub-loop of
$\alpha_{im}$ with a corner $q$. If we traverse $l$ in either direction, we
see it begins with an arc from $q$ to the meridian or longitude, then
a sequence of elementary arcs, and finally an arc from the meridian
or longitude back to $q$. We call the starting and ending arcs the _non-
elementary sub-arcs of $l$_, and the other sub-arcs _the elementary sub-arcs
of $l$_.
###### Lemma 4.12.
Let $\alpha_{im}$ be a $z$-adjacent immersed curve. Let $D$ be a positive
domain in ${T}^{2}$ bounded by a $k$-cornered (sub)loop of $\alpha_{im}$.
* (1)
If $n_{z}(D)=n$ for some $n\geq 0$ and $k=0$ or $1$, then for any side of the
cut-open marked torus $[0,1]\times[0,1]$ and any sign, the number of
elementary sub-arcs in $\partial D$ intersecting the given side with the given
sign is less than or equal to $n$.
* (2)
If $n_{z}(D)=0$, then for arbitrary $k\geq 0$, there are no elementary sub-arcs
contained in $\partial D$.
###### Proof.
We prove (1) first. We will only consider the case in which the elementary
sub-arcs intersect the given edge negatively and remark that the other case is
similar.
We argue by contradiction. Suppose there are $k_{-}>n$ elementary sub-arcs
contained in $\partial D$ intersecting the given edge negatively. Since
$\partial D$ is $0$\- or $1$-cornered, it has an orientation induced by the
orientation on $\alpha_{im}$. Examining the local diagram
$(U_{-}^{L},U_{-}^{L}\cap({\partial D}\cup\\{\bm{z}\\}))$ in Figure 13 one
sees $D$ has negative multiplicity $n-k_{-}$ in the left-most region, which
contradicts our assumption that $D$ is a positive domain. Therefore,
$k_{-}\leq n$.
Next, we prove (2). Assume there is an elementary sub-arc in $\partial D$.
Then no matter how this sub-arc is oriented, $z$ is on both the left and right
of it. As $n_{z}(D)=0$, there is a region with $-1$ multiplicity, which
contradicts that $D$ is positive. ∎
### 4.4. The collapsing operation
To relate the domains of the pairing diagram $\mathcal{H}^{a}(\alpha_{im})$
and the arced bordered diagram $\mathcal{H}^{a}$, we define the so-called
_collapsing operation_. This operation was previously defined in the case of
pairing genus-one bordered Heegaard diagrams with immersed curves [Che23], and
we give the general case here. The operation is pictorially shown in Figure
16, and the definition is given below.
###### Definition 4.13.
The collapsing operation on $\mathcal{H}^{a}(\alpha_{im})$ is defined to be
the composition of the following modifications of the diagram:
* (Step 1)
Extend the map $h$ in Definition 4.9 to identify
$T^{2}-\pi([-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]\times[-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon])$
with a slightly larger neighborhood of
$U=h^{-1}(T^{2}-\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}]))$.
Here $\epsilon$ is a sufficiently small positive number.
* (Step 2)
Puncture $h^{-1}((\frac{3}{4},\frac{3}{4}))$, and enlarge it to a hole so that
under the identification map $h$, the boundary of the hole is a square of side
length $\frac{1}{2}+2\epsilon$ and with rounded corner modeled on a quarter of
a circle of radius $\epsilon$. While enlarging the hole, we push the immersed
curves encountered along the way so that parts of the immersed curves are
squeezed onto the boundary of the hole.
* (Step 3)
Collapse
$h^{-1}(\pi([-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]\times[\frac{1}{4},\frac{3}{4}]))$
to the core
$h^{-1}(\pi([-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]\times\\{\frac{1}{2}\\}))$,
which is denoted $\alpha^{a,R}_{1}$. Collapse
$h^{-1}(\pi([\frac{1}{4},\frac{3}{4}]\times[-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]))$
to the core
$h^{-1}(\pi(\\{\frac{1}{2}\\}\times[-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]))$,
which is denoted $\alpha^{a,R}_{2}$.
###### Remark 4.14.
* (1)
Clearly, the outcome of the collapsing operation on
$\mathcal{H}^{a}(\alpha_{im})$ can be identified with $\mathcal{H}^{a}$.
* (2)
Each elementary arc in $\alpha_{im}$ standing for
$\rho_{I}\in\\{\rho_{1},\rho_{2},\rho_{3},\rho_{12},\rho_{23},\rho_{123}\\}$
is mapped under the collapsing map to an arc that passes through the Reeb chord
$\rho_{I}$ in $\mathcal{Z}^{R}$ of $\mathcal{H}^{a}$. Note that an oriented
elementary sub-arc is correctly oriented if it induces a Reeb chord in
$\mathcal{Z}^{R}$ under the collapsing map, i.e., the orientations coincide.
* (3)
The intersection points in $\mathcal{G}({\mathcal{H}^{a}(\alpha_{im})})$, written
in the form $\bm{x}\otimes a$, are in one-to-one correspondence with
$\mathcal{G}({\mathcal{H}^{a}})\otimes_{\mathcal{I}_{R}}\mathcal{G}({\alpha_{im}})$,
where the tensor product is taken over
$\mathcal{I}_{R}\subset\mathcal{A}(\mathcal{Z}_{R})$. Indeed, given an
intersection point $\xi\in\mathcal{G}({\mathcal{H}^{a}(\alpha_{im})})$, its
image under the collapsing map yields an intersection point $\bm{x}$ in
$\mathcal{H}^{a}$. Also, the component of $\xi$ on $\alpha_{im}$ uniquely
gives rise to an intersection point $a$ of $\alpha_{im}$ as follows. By the
definition of the pairing operation, when we pull back the intersection point
on $\alpha_{im}$ to the marked torus, it lies in a horizontal or vertical arc
as described in assumption (C-4) on immersed multicurves, which uniquely
corresponds to an intersection point of $\alpha_{im}$ with the longitude or
meridian. Therefore, every intersection point $\xi$ in
$\mathcal{H}^{a}(\alpha_{im})$ can be written as $\bm{x}\otimes y$. It is easy
to see this induces a one-to-one correspondence between
$\mathcal{G}({\mathcal{H}^{a}(\alpha_{im})})$ and
$\mathcal{G}({\mathcal{H}^{a}})\otimes_{\mathcal{I}_{R}}\mathcal{G}({\alpha_{im}})$.
Figure 16. The collapsing operation.
We will give a proposition relating the domains of
$\mathcal{H}^{a}(\alpha_{im})$ and $\mathcal{H}^{a}$. Let $l$ be an oriented
arc on $\alpha_{im}$ such that all the elementary sub-arcs are oriented
correctly. We use $\overrightarrow{\rho}(l)$ to denote the sequence of Reeb
chords determined by $l$.
###### Proposition 4.15.
Assume the immersed multicurve $\alpha_{im}$ is $z$-adjacent. Let $B$ be a
positive domain in $\mathcal{H}^{a}(\alpha_{im})$ corresponding to a homology
class in $\pi_{2}(\bm{x}\otimes a,\bm{y}\otimes b,\overrightarrow{\sigma})$
with $n_{z}(B)=0$. Then the image of $B$ under the collapsing map is a
positive domain $B^{\prime}$ in $\mathcal{H}^{a}$ corresponding to a homology
class
$\pi_{2}(\bm{x},\bm{y},\overrightarrow{\rho}(\partial_{\alpha_{im}}B),\overrightarrow{\sigma})$
with $n_{z}(B^{\prime})=0$. Here, $\partial_{\alpha_{im}}B$ refers to the arc
on $\alpha_{im}$ connecting the corresponding components of $\bm{x}\otimes a$
and $\bm{y}\otimes b$. Moreover,
$e(B^{\prime})=e(B)-\frac{|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|}{2}.$
###### Proof.
It is clear that $B^{\prime}$ is positive and $n_{z}(B^{\prime})=0$. It is
also clear that $B^{\prime}$ gives rise to a domain connecting $\bm{x}$ and
$\bm{y}$. We need to show that $B^{\prime}$ has the Reeb chords
$\overrightarrow{\rho}(\partial_{\alpha_{im}}B)$ at the east infinity. We
claim all the elementary arcs appearing in $\partial_{\alpha_{im}}B$ are
correctly oriented, and hence $\partial_{\alpha_{im}}B$ gives rise to a
monotonic arc (in the sense that all the Reeb chords appearing on the arc
respect the boundary orientation) connecting (the components on the $\alpha$
arc of) $\bm{x}$ to $\bm{y}$ under the collapsing map. The sequence of Reeb
chords appearing in this arc is exactly
$\overrightarrow{\rho}(\partial_{\alpha_{im}}B)$ in view of Remark 4.14 (2).
To see the claim, note $\alpha_{im}$ is $z$-adjacent and $B$ is positive with
$n_{z}(B)=0$. Therefore, if an elementary arc on $\partial_{\alpha_{im}}B$ intersects
the top edge or the right edge, then its orientation is forced by the
positivity of domains and conditions (2) and (3) in Definition 4.10, and the
orientation is the correct orientation. The only type of elementary arcs that
intersects neither the top edge nor the right edge corresponds to $\rho_{2}$.
If an elementary arc corresponding to $\rho_{2}$ on $\partial_{\alpha_{im}}B$
has a successor or predecessor, then the correct orientation on the successor or the
predecessor would induce the correct orientation on it. Otherwise,
$\partial_{\alpha_{im}}B$ has only one elementary arc corresponding to
$\rho_{2}$, in which case it is clear that the elementary arc is correctly
oriented.
Next, we compare the Euler measures. Divide the domain $B$ into two parts
$B_{1}$ and $B_{2}$, along the square with rounded corners, which is the
boundary of the hole in Step 2 of the collapsing operation. (See Figure 17.)
This time, we do not puncture the interior of the square. Let $B_{1}$ denote
the part of $B$ outside of the square, and let $B_{2}$ denote the part inside
the square (which is pushed onto the boundary circle under the collapsing
map). Then $e(B_{1})=e(B^{\prime})$ since these two domains differ by a bunch
of rectangles whose Euler measures are zero; these rectangles are collapsed in
Step 3 of the collapsing operation. As $\alpha_{im}$ is $z$-adjacent, $B_{2}$
is positive, and $n_{z}(B)=0$, we see $B_{2}$ can be further expressed as a
sum of simple domains determined by the elementary arcs appearing in
$\partial_{\alpha_{im}}B$ (counted with multiplicity). (See Figure 18.) Each
simple domain of multiplicity one has Euler measure $\frac{1}{2}$, and there
are $|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|$ many of them being
collapsed (in Step 2 of the collapsing operation) in order to obtain
$B^{\prime}$. Therefore,
$e(B^{\prime})=e(B)-\frac{|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|}{2}.$
∎
Figure 17. Left: $B=B_{1}+B_{2}$. Right: $B^{\prime}$. Figure 18. Simple
domains corresponding to Reeb elements in $\mathcal{A}$.
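For instance, if $B$ is a bigon in $\mathcal{H}^{a}(\alpha_{im})$ (so $e(B)=\frac{1}{2}$) whose $\alpha_{im}$-boundary traverses exactly one elementary arc, then the proposition gives $e(B^{\prime})=\frac{1}{2}-\frac{1}{2}=0$, matching the fact (used in the proof of Theorem 1.4 below) that the collapsed domain is a bigon carrying a single Reeb chord at east infinity.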
### 4.5. Unobstructedness and admissibility of pairing diagrams
###### Proposition 4.16.
Let $\alpha_{im}\subset T^{2}$ be a $z$-adjacent immersed multicurve. Then the
pairing diagram $\mathcal{H}^{a}(\alpha_{im})$ of an arced bordered Heegaard
diagram $\mathcal{H}^{a}$ and $\alpha_{im}$ is unobstructed. Furthermore,
$\mathcal{H}^{a}(\alpha_{im})$ is provincially admissible provided
$\mathcal{H}^{a}$ is left provincially admissible. (See Definition 2.8 and
Definition 2.9.)
###### Proof of Proposition 4.16.
Consider the bordered Heegaard diagram
$\mathcal{H}^{a}(\alpha_{im})=(\bar{\Sigma}^{\prime},\bar{\bm{\alpha}}^{\prime},\bm{\beta}^{\prime},z^{\prime})$
obtained from pairing an arced bordered Heegaard diagram
$\mathcal{H}^{a}=(\bar{\Sigma},\bar{\bm{\alpha}},\bm{\beta},\bm{z})$ and a
$z$-adjacent immersed multicurve $\alpha_{im}$.
We begin by showing $\bar{\bm{\alpha}}^{\prime}$ is unobstructed in the sense
of Definition 2.8. Let $B$ be a zero- or one-cornered $\alpha$-bounded domain.
Since the curves
$\\{\bar{\alpha_{1}^{a}},\bar{\alpha_{2}^{a}},\alpha_{1},\ldots,\alpha_{g-2},\alpha_{im}^{0}\\}$
are pairwise disjoint and homologically independent in
$H_{1}(\bar{\Sigma},\partial)$, $[\partial B]$ (as a homology class) is equal
to a linear combination of at most one copy of $\partial\bar{\Sigma}$ and some
homologically trivial zero- or one-cornered loop contained in a single
connected component of $\alpha_{im}$.
We first show there are no positive zero- or one-cornered $\alpha$-bounded
domains $B$ with $n_{z^{\prime}}(B)=0$. In this case, $\partial B$ is a
homologically trivial zero- or one-cornered loop contained in a single
connected component of $\alpha_{im}$, i.e., $\partial\bar{\Sigma}$ does not
appear in $\partial B$. As the $\bm{\beta}$-curves are irrelevant to our
consideration, we may assume the $\alpha$-curves of $\mathcal{H}^{a}$ are in
standard position. Therefore, there is an obvious circle
$C\subset\bar{\Sigma}$ that splits $\bar{\Sigma}$ into a genus-$(g-1)$ surface
$E_{1}$ containing
$\\{\alpha^{a,L}_{1},\alpha^{a,L}_{2},\alpha^{c}_{1},\ldots,\alpha^{c}_{g-2}\\}$
and a genus-one surface $E_{2}$ containing $\alpha^{a,R}_{1}$ and
$\alpha^{a,R}_{2}$. Let $C^{\prime}$ be the corresponding curve on
$\bar{\Sigma}^{\prime}$. Then after surgery along $C^{\prime}$, $B$ induces a
positive domain $D$ in the marked torus $T^{2}$ (obtained from $E_{2}$ in an
obvious way), and $D$ is bounded by a zero- or one-cornered (sub)loop of
$\alpha_{im}$. According to Lemma 4.12, $\partial D$ contains no elementary
sub-arcs, so $D$ cannot exist.
Next we show that if $n_{z^{\prime}}(B)=1$, $B$ is a stabilized teardrop or
$[\Sigma^{\prime}]$ depending on whether $\partial B$ is one-cornered or zero-
cornered. In this case, after performing surgery along the same $C^{\prime}$
as in the previous paragraph, $B$ gives rise to two domains: one is
$[E_{1}^{\prime}]$, where $E^{\prime}_{1}$ is the genus-$(g-1)$ surface, and
the other is a positive domain $D$ contained in the marked torus $T^{2}$ with
$n_{z}(D)=1$. We first consider the case in which $\partial D$ is zero-
cornered. If $\partial D=\emptyset$, then $D=E_{2}$ and hence
$B=[\Sigma^{\prime}]$. If $\partial D\neq\emptyset$, then according to Lemma
4.12, it consists of at most (and at least) $4$ elementary sub-arcs, and hence
is a circle enclosing the $z$-basepoint once. However, such circles are
assumed not to exist. When $\partial D$ is one-cornered, we claim $D$ is a
teardrop. To see this, note that Lemma 4.12 implies that $\partial D$ crosses
the meridian at most three times and the longitude at most three times since
each time the meridian or the longitude is crossed (except possibly the last
time) the intersection is the beginning of an elementary sub-arc and there are
at most two elementary sub-arcs starting on each. Because $\partial D$ is
homologically trivial in $H_{1}({T}^{2})$, it crosses each of the meridian and
the longitude an even number of times, so it crosses each at most twice. It
follows that $\partial D$ must circle once around $z$ and $D$ is a teardrop
with $n_{z}(D)=1$.
Now we show any two-cornered positive $\alpha$-bounded domain $B$ with
$n_{z}(B)=0$ is a bigon. To see this, we may split $\Sigma^{\prime}$ as
$E_{1}\\#E_{2}$ as before and regard $B$ as a domain in $E_{2}$ with
$n_{z^{\prime}}=0$. Note that by Lemma 4.12 (2), $\partial B$ contains no
elementary sub-arcs, and hence $B$ must be of the form shown in Figure 19
(up to rotation), which is a bigon. (Note we do not require the corners of the
bigon $B$ to be convex.)
Figure 19. Two-cornered positive $\alpha$-bounded domains.
So far, we have proved $\bar{\bm{\alpha}}^{\prime}$ is unobstructed. We now
show there are no non-trivial positive provincial periodic domains. If not,
let $B$ be a positive provincial periodic domain for
$\mathcal{H}^{a}(\alpha_{im})$. Then by Proposition 4.15, $\Psi(B)$ is a
positive periodic domain for $\mathcal{H}^{a}$, where $\Psi$ denotes the
collapsing map. Note $\Psi(B)$ is left provincial. As $\mathcal{H}^{a}$ is
left provincially admissible, we have $\Psi(B)=0$, and hence $\partial B$ has
no $\beta$-curves. So, $B$ is a positive zero-cornered $\alpha$-bounded domain
with $n_{z}(B)=0$, but such domains are already excluded by unobstructedness.
∎
### 4.6. The first pairing theorem
Recall a bordered Heegaard diagram is _nice_ if every connected region in the
complement of the $\alpha$\- and $\beta$-curves is a disk with at most four
corners except for the region containing $z$. Any bordered Heegaard diagram
can be turned into a nice diagram via isotopy and handleslides of the
$\beta$-curves (Proposition 8.2 of [LOT18]). The key property of nice Heegaard
diagrams is that the Euler measure of any region away from the base point is
non-negative. This property imposes great constraints on domains supporting
holomorphic representatives via the index formula, and hence opens a
combinatorial route to proving the pairing theorem.
See 1.4
###### Proof.
In view of the homotopy equivalence of the relevant invariants under isotopy
of $\beta$ curves (Proposition 2.82), we may assume $\mathcal{H}^{a}$ is a
nice arced bordered Heegaard diagram. Note nice arced bordered Heegaard
diagrams are automatically left provincially admissible. Therefore,
$\widehat{CFDA}(\mathcal{H}^{a})$ and
$\widehat{CFD}(\mathcal{H}^{a}({\alpha_{im}}))$ are defined. In fact, a
stronger admissibility condition holds for $\mathcal{H}^{a}$: any periodic
domain with $n_{z}=0$ has both positive and negative local multiplicities.
This implies $\widehat{CFDA}(\mathcal{H}^{a})$ is bounded, and hence the box-
tensor product is expressed as a finite sum.
Implicit in the proof is that we will be using split almost complex structures
for defining $\widehat{CFD}(\mathcal{H}^{a}({\alpha_{im}}))$ and
$\widehat{CFDA}(\mathcal{H}^{a})$. A split almost complex structure is
sufficient for defining $\widehat{CFD}(\mathcal{H}^{a}({\alpha_{im}}))$, since
all the domains involved will be bigons and rectangles. In this setting, up to
a generic perturbation of the $\alpha$ and $\beta$ curves, moduli spaces
defined using a split almost complex structure are transverse (c.f. [Lip06,
Proposition 3.9]).
We will call the two punctures in $\mathcal{H}^{a}$ the $\sigma$-puncture and
the $\rho$-puncture, where the $\rho$-puncture is the one that gets capped off
in the pairing diagram. For now, we assume the local systems on $\alpha_{im}$
are trivial, and we will indicate the modifications needed for dealing with
nontrivial local system later on. First, the generators
$\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))$ and
$\mathcal{G}(\mathcal{H}^{a})\otimes_{\mathcal{I}_{R}}\mathcal{G}(\alpha_{im})$
are identified as pointed out in Remark 4.14 (3). Next, we prove the
differentials are in one-to-one correspondence.
We first show any differential incurred by the box-tensor product has a
corresponding differential in $\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$. A
differential arising from the box tensor product comes in two types, depending
on whether it involves nontrivial differentials in
$\widehat{CFD}(\alpha_{im})$. If it does not involve non-trivial differential
in $\widehat{CFD}(\alpha_{im})$, then the input from
$\widehat{CFDA}(\mathcal{H}^{a})$ counts curves with the domain being a
provincial bigon, a provincial rectangle, or a bigon with a single Reeb chord
on the $\sigma$-puncture; see [LOT18, Proposition 8.4]. Such bigons or
rectangles clearly have their counterparts in $\mathcal{H}^{a}(\alpha_{im})$,
giving the corresponding differentials in
$\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$. If the box-tensor differential
involves differentials in $\widehat{CFD}(\alpha_{im})$, then the corresponding
input from $\widehat{CFDA}(\mathcal{H}^{a})$ counts curves with the domain
being a bigon with a single Reeb chord on the $\rho$-puncture [LOT18,
Proposition 8.4]. As it pairs with a differential in
$\widehat{CFD}(\alpha_{im})$, this bigon gives rise to a bigon in
$\mathcal{H}^{a}(\alpha_{im})$ (which is a pre-image of the collapsing map),
giving the corresponding differential in
$\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$.
Next, we show that every differential in
$\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$ corresponds to a differential
incurred by the box-tensor product. Suppose $u\in\pi_{2}(\bm{x}\otimes
a,\bm{y}\otimes b)$ admits a holomorphic representative contributing to a
differential for $\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$. Let $B$ be the
domain of $u$, and let $B^{\prime}$ denote the image of $B$ under the
collapsing operation. By Proposition 4.15,
$e(B)=e(B^{\prime})+\frac{|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|}{2}$.
As $B^{\prime}$ is a positive domain with $n_{z}(B^{\prime})=0$ and
$\mathcal{H}^{a}$ is a nice Heegaard diagram, we have $e(B)\geq
e(B^{\prime})\geq 0$. By the index formula, denoting the source surface of $u$
by $S$, we have
$\text{Ind}(u)=g-\chi(S)+2e(B)+|\overrightarrow{\sigma}|.$
As $\text{Ind}(u)=1$ and $2e(B)+|\overrightarrow{\sigma}|\geq 0$, we have
$\chi(S)=g$ or $g-1$.
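To spell out the count (a routine step, using the standard fact that each component of the source $S$ contains one of the $g$ $+$ punctures): rearranging the index formula with $\text{Ind}(u)=1$ gives
$\chi(S)=g-1+2e(B)+|\overrightarrow{\sigma}|\geq g-1,$
while $\chi(S)\leq g$ since $S$ has at most $g$ components, each a surface with non-empty boundary and hence of Euler characteristic at most $1$. So $\chi(S)$ is $g$ or $g-1$, and correspondingly $2e(B)+|\overrightarrow{\sigma}|$ is $1$ or $0$.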
When $\chi(S)=g$, $S$ consists of $g$ topological disks; each disk has a $+$
and a $-$ puncture, and there is at most one $\sigma$-puncture overall since
$2e(B)+|\overrightarrow{\sigma}|=1$. We separate the discussion according to
the number of $\sigma$-punctures. First, if there is a $\sigma$-puncture, then
the corresponding domain $B$ in $\mathcal{H}^{a}(\alpha_{im})$ is a bigon with
a single Reeb chord on the $\sigma$-puncture and does not involve
$\alpha_{im}$. This domain clearly has its counterpart in $\mathcal{H}^{a}$
under the collapsing map, giving rise to an operation in
$\widehat{CFDA}(\mathcal{H}^{a})$; the corresponding differential in the box-
tensor product is obtained by pairing this $DA$-operation with an element in
$\widehat{CFD}(\alpha_{im})$. Secondly, if there is no $\sigma$-puncture, then
the domain $B$ is a provincial bigon in $\mathcal{H}^{a}(\alpha_{im})$. There
are two sub-cases to consider depending on whether the $\alpha$-boundary of
$B$ overlaps with $\alpha_{im}$. If the $\alpha$ boundary of $B$ is not on
$\alpha_{im}$, then we argue as in the first case to see that $B$ gives a
corresponding differential in the box-tensor product. If, on the other hand,
the boundary of $B$ is on $\alpha_{im}$, since
$1/2=e(B)=e(B^{\prime})+|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|/{2}$
we have $|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|$ is either $0$ or
$1$. If $|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|=0$, then
$B^{\prime}$ is a provincial domain, giving the type-DA operation for the
corresponding differential obtained by the box-tensor product. If
$|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|=1$, then $B^{\prime}$ is
obtained from $B$ by subtracting a simple region (as in the proof of Proposition
4.15) and then applying the collapsing map. We can see $B^{\prime}$ is a bigon
with a single Reeb chord corresponding to the Reeb chord specified by
$\partial_{\alpha_{im}}B$ on the $\rho$-puncture. The DA-operation given by
$B^{\prime}$ and the type D operation given by $\partial_{\alpha_{im}}B$ pair
up to give the corresponding differential in the box-tensor product.
When $\chi(S)=g-1$, $S$ consists of $g-1$ topological disks; $g-2$ of the
disks are bigons, while the remaining one is a rectangle. In this case the
index formula forces $e(B)=0$ and $|\overrightarrow{\sigma}|=0$. As $e(B)=0$,
the bigons are mapped trivially to $\Sigma$. Therefore, the domain $B$ is a
rectangle. Again, since
$e(B)=e(B^{\prime})+|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|/{2}$ and
$e(B^{\prime})\geq 0$, we have
$|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|=0$. Then $B^{\prime}$ is a
provincial rectangular domain in $\mathcal{H}^{a}$, giving rise to a DA-
operation that pairs with a trivial type-D operation to give the corresponding
differential in the box-tensor product. We have finished the proof when the
local system is trivial.
Next, we consider the case where $\alpha_{im}$ admits a non-trivial local
system $(\mathcal{E},\Phi)$. The local system induces a local system on the
$\alpha$-curves in the pairing diagram $\mathcal{H}^{a}(\alpha_{im})$. First, the
discussion above identifies the generators at the vector space level: let
$\bm{x}\otimes y$ be an intersection point in
$\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))$, where
$\bm{x}\in\mathcal{G}(\mathcal{H}^{a})$ and $y\in\mathcal{G}(\alpha_{im})$; then
$\bm{x}\otimes y$ corresponds to a direct summand $\mathcal{E}|_{\bm{x}\otimes
y}$ of $\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$ as a vector space, and
$\mathcal{E}|_{\bm{x}\otimes y}$ can be naturally identified with
$\bm{x}\otimes\mathcal{E}|_{y}$, a summand of
$\widehat{CFDA}(\mathcal{H}^{a})\boxtimes\widehat{CFD}(\alpha_{im})$.
Secondly, the discussion in the trivial-local-system case shows that
$\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))$ has a differential map between
the summands $\mathcal{E}|_{\bm{x}\otimes
y}\rightarrow\sigma_{I}\otimes\mathcal{E}|_{\bm{x^{\prime}}\otimes
y^{\prime}}$ if and only if the box-tensor product has a differential map
between the corresponding summands
$\bm{x}\otimes\mathcal{E}|_{y}\rightarrow\sigma_{I}\otimes(\bm{x^{\prime}}\otimes\mathcal{E}|_{y^{\prime}})$
in the box-tensor product, and under the natural identification between these
summands both differential maps are induced by the same parallel transport
from $\mathcal{E}|_{y}$ to $\mathcal{E}|_{y^{\prime}}$.
∎
### 4.7. The second pairing theorem
We are interested in computing knot Floer chain complexes over
$\mathcal{R}=\mathbb{F}[U,V]/(UV)$ using bordered Floer homology. We have
already defined an extended type-D structure, and we want to pair it with an
extended type-A structure to get a bi-graded chain complex over
$\mathcal{R}=\mathbb{F}[U,V]/(UV)$. Here we restrict attention to a specific
extended type-A structure associated to the doubly-pointed bordered Heegaard
diagram $\mathcal{H}_{id}$ given in Figure 20. The diagram $\mathcal{H}_{id}$
corresponds to the pattern knot given by the core of a solid torus, which is
the identity pattern.
Figure 20. The bordered diagram $\mathcal{H}_{id}$.
Recall $\tilde{\mathcal{A}}$ denotes the weakly extended torus algebra, and
$\mathcal{I}\subset\tilde{\mathcal{A}}$ is the ring of idempotents (see Figure
8).
###### Definition 4.17.
The extended type-A structure $\widetilde{CFA}(\mathcal{H}_{id})$ is a free
$\mathcal{R}$-module generated by the single intersection point $x$ in
$\mathcal{H}_{id}$. It is equipped with an $\mathcal{I}$-action given by
$x\cdot\iota_{0}=x$ and $x\cdot\iota_{1}=0$, together with a family of
$\mathcal{R}$-linear maps
$m_{i+1}:\widetilde{CFA}(\mathcal{H}_{id})\otimes_{\mathcal{I}}\tilde{\mathcal{A}}^{\otimes
i}\rightarrow\widetilde{CFA}(\mathcal{H}_{id})$ ($i\in\mathbb{N}$), where up
to $\mathcal{R}$-linearity the only non-zero relations are:
$\displaystyle m_{2}(x,1)=x,$ $\displaystyle
m_{3+i}(x,\rho_{3},\overbrace{\rho_{23},\ldots,\rho_{23}}^{i},\rho_{2})=U^{i}x,\quad
i\in\mathbb{N},$ $\displaystyle
m_{3+i}(x,\rho_{1},\overbrace{\rho_{01},\ldots,\rho_{01}}^{i},\rho_{0})=V^{i}x,\quad
i\in\mathbb{N}.$
###### Remark 4.18.
This extends the hat-version type A-structure
$\widehat{CFA}(\mathcal{H}_{id})$ by allowing Reeb chords crossing the base
point $z$.
Straightforwardly, the hat-version box-tensor product can be generalized to be
an operation between the extended type A structure
$\widetilde{\mathcal{M}}:=\widetilde{CFA}(\mathcal{H}_{id})$ and a weakly
extended type D structure $(\widetilde{\mathcal{N}},\delta^{i})$: It is the
$\mathcal{R}$-module
$\widetilde{\mathcal{M}}\otimes_{\mathcal{I}}\widetilde{\mathcal{N}}$ together
with a differential $\partial_{\boxtimes}:=\sum_{i\geq
0}(m_{i+1}\otimes\mathbb{I}_{\widetilde{\mathcal{N}}})\circ(\mathbb{I}_{\widetilde{\mathcal{M}}}\otimes\delta^{i})$;
the finiteness of the sum can be guaranteed for type D structures defined
using bi-admissible diagrams (see the proof of Theorem 1.6 below). One may
verify $\partial_{\boxtimes}^{2}=0$ algebraically using the structure
equations defining the (weakly) extended type D and type A structures. We omit
such a computation and instead content ourselves with Theorem 1.6 below, which
implies that the $\partial_{\boxtimes}$ induced by gluing bordered Heegaard
diagrams is indeed a differential. We further remark that
$\widetilde{\mathcal{M}}\boxtimes_{\mathcal{I}}\widetilde{\mathcal{N}_{1}}$ is
chain homotopic to
$\widetilde{\mathcal{M}}\boxtimes_{\mathcal{I}}\widetilde{\mathcal{N}_{2}}$
provided $\widetilde{\mathcal{N}_{1}}$ is homotopic to
$\widetilde{\mathcal{N}_{2}}$. The proof of this is similar to that in the hat
version and is omitted.
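To make the shape of $\partial_{\boxtimes}$ concrete, consider a small worked example; the weakly extended type D structure below is hypothetical, chosen only to exercise the operations of Definition 4.17. Suppose $\widetilde{\mathcal{N}}$ has generators $n_{0},n_{3}$ in the idempotent $\iota_{0}$ and $n_{1},n_{2}$ in the idempotent $\iota_{1}$, with
$\delta(n_{0})=\rho_{3}\otimes n_{1},\quad\delta(n_{1})=\rho_{23}\otimes n_{2},\quad\delta(n_{2})=\rho_{2}\otimes n_{3},\quad\delta(n_{3})=0$
(note $\delta^{2}=0$ since $\rho_{3}\rho_{23}=\rho_{23}\rho_{2}=0$). As $x\cdot\iota_{1}=0$, the box-tensor product is spanned over $\mathcal{R}$ by $x\otimes n_{0}$ and $x\otimes n_{3}$, and the only non-zero contribution to $\partial_{\boxtimes}(x\otimes n_{0})$ comes from $i=3$, where $\delta^{3}(n_{0})=\rho_{3}\otimes\rho_{23}\otimes\rho_{2}\otimes n_{3}$ pairs with $m_{4}(x,\rho_{3},\rho_{23},\rho_{2})=Ux$; hence $\partial_{\boxtimes}(x\otimes n_{0})=U(x\otimes n_{3})$ and $\partial_{\boxtimes}(x\otimes n_{3})=0$.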
See 1.6 (See Definitions 2.8 and 2.10 for the unobstructedness and bi-
admissibility of $\mathcal{H}$.)
###### Proof of Theorem 1.6.
Note periodic domains of $\mathcal{H}_{id}\cup\mathcal{H}_{im}$ with $n_{z}=0$
(respectively $n_{w}=0$) correspond to periodic domains of $\mathcal{H}_{im}$
with $n_{\rho_{0}}=n_{\rho_{1}}=0$ (respectively
$n_{\rho_{2}}=n_{\rho_{3}}=0$). Therefore, since $\mathcal{H}_{im}$ is bi-
admissible, $\mathcal{H}_{id}\cup\mathcal{H}_{im}$ is bi-admissible in the
sense of Definition 3.3. Also, zero- or one-cornered $\alpha$-bounded domains
in $\mathcal{H}_{id}\cup\mathcal{H}_{im}$ with $n_{z}=n_{w}=0$ must lie in
$\mathcal{H}_{im}$. So, unobstructedness of $\mathcal{H}_{im}$ implies the
unobstructedness of $\mathcal{H}_{id}\cup\mathcal{H}_{im}$. In summary,
$\mathcal{H}_{id}\cup\mathcal{H}_{im}$ is bi-admissible and unobstructed, and
hence $CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{im})$ is defined.
The bi-admissibility of $\mathcal{H}_{im}$ also implies $\partial_{\boxtimes}$
is expressed as a finite sum and hence is well-defined. To see this, note for
any $\bm{x},\bm{y}\in\mathcal{G}(\mathcal{H}_{im})$, bi-admissibility implies
there are only finitely many positive domains connecting $\bm{x}$ and $\bm{y}$
with a prescribed Reeb-chord sequence of the form
$\rho_{1},\rho_{01},\ldots,\rho_{01},\rho_{0}$ or
$\rho_{3},\rho_{23},\ldots,\rho_{23},\rho_{2}$.
Recall that the differential in
$CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{im})$ counts holomorphic
curves that cross at most one of the $w$- and $z$-base points. Note
also that both of the base points in $\mathcal{H}_{id}$ are adjacent to the
east boundary. Therefore, by the symmetry of the base points $w$ and $z$, it
suffices to prove the theorem for the hat version knot Floer homology, i.e.,
$\widehat{CFK}(\mathcal{H}_{id}\cup\mathcal{H}_{im})\cong\widehat{CFA}(\mathcal{H}_{id})\boxtimes\widehat{CFD}(\mathcal{H}_{im}).$
Though our Heegaard diagrams have immersed $\alpha$-multicurves, given that no
boundary degeneration can occur, the proof for the embedded-$\alpha$-curves
case, which uses neck stretching and time dilation, carries over without
changes; see Chapter 9 of [LOT18] for details or Section 3 of [LOT14a] for an
exposition. ∎
## 5\. Knot Floer homology of satellite knots
We apply the machinery developed in the previous sections to study the knot
Floer homology of satellite knots. First, we introduce a weaker condition on
the immersed curves than the $z$-adjacency condition.
###### Definition 5.1.
An immersed multicurve $\alpha_{im}$ in the marked torus $(T^{2},z)$ is
admissible if there are no nontrivial zero- or one-cornered $\alpha$-bounded
positive domains $B$ with $n_{z}(B)=0$.
Note any $z$-adjacent immersed multicurve is admissible in view of Lemma 4.12.
Let $\mathcal{H}_{w,z}$ be a doubly pointed bordered Heegaard diagram for a
pattern knot $(S^{1}\times D^{2},P)$. Recall that we can construct a doubly
pointed immersed diagram $\mathcal{H}_{w,z}(\alpha_{im})$. The admissibility
condition guarantees that $CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))$
is defined in view of the following proposition.
###### Proposition 5.2.
If $\alpha_{im}$ is an admissible immersed multicurve, then
$\mathcal{H}_{w,z}(\alpha_{im})$ is bi-admissible and unobstructed.
###### Proof.
First, we show the diagram $\mathcal{H}_{w,z}(\alpha_{im})$ is unobstructed
(in the sense of Definition 3.2). Let $B$ be a zero- or one-cornered
$\alpha$-bounded domain for $\mathcal{H}_{w,z}(\alpha_{im})$. Note
$n_{w}(B)=n_{z}(B)$ since the base points $w$ and $z$ are in the same region
in the complement of the $\alpha$-curves. We restrict to the domains with
$n_{z}(B)=n_{w}(B)=0$. Recall we want to prove $B$ must be somewhere negative.
Splitting the Heegaard surface as in the proof of Proposition 4.16, we see $B$
corresponds to a zero- or one-cornered $\alpha$-bounded domain $B^{\prime}$ in
the marked torus with $n_{z}(B^{\prime})=0$. Since $\alpha_{im}$ is
admissible, $B^{\prime}$ is somewhere negative. Therefore, $B$ is somewhere
negative.
Next we show the pairing diagram $\mathcal{H}_{w,z}(\alpha_{im})$ is bi-
admissible. Recall that bi-admissibility means any nontrivial periodic domain
$B$ with $n_{w}(B)=0$ or $n_{z}(B)=0$ must have both positive and negative
coefficients. To see this, we first claim any given periodic domain $B$ is
bounded by some multiple of a homologically trivial component of
$\alpha_{im}$. (We warn the reader that the claim is no longer true if one
further performs a handleslide of such a component over an embedded alpha
curve.) To see the claim, note that as homology classes, the curves
$[\alpha_{i}]$ ($i=1,\ldots,g-1$), $[\alpha^{1}_{im}]$, and $[\beta_{i}]$
($i=1,\ldots,g$) are linearly independent, just as the attaching curves in a
Heegaard diagram for $S^{3}$. Now, the claim implies $B$ is a zero-cornered
$\alpha$-bounded domain. In view of the unobstructedness established above,
$B$ is somewhere negative. ∎
###### Definition 5.3.
Let $\alpha_{im}$ and $\alpha^{\prime}_{im}$ be two admissible immersed
multicurves. They are said to be admissibly equivalent if there exists a
finite sequence of admissible immersed curves $\alpha^{i}_{im}$,
$i=1,\ldots,n$ such that
* (1)
$\alpha^{1}_{im}=\alpha_{im}$ and $\alpha^{n}_{im}=\alpha^{\prime}_{im}$,
* (2)
For $i=1,\ldots,n-1$, $\alpha^{i}_{im}$ and $\alpha^{i+1}_{im}$ are related by
a finger move that creates/cancels a pair of self-intersection points of the
immersed curves.
###### Proposition 5.4.
Let $\alpha_{im}$ and $\alpha_{im}^{\prime}$ be two admissibly equivalent
immersed multicurves, and let $\mathcal{H}_{w,z}$ be a doubly pointed bordered
Heegaard diagram. Then
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\cong
CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{im})).$
###### Proof.
The proof follows the same strategy as the usual proof of isotopy invariance.
Let $\bm{\alpha_{0}}$ and $\bm{\alpha_{1}}$ be the two sets of
$\alpha$-curves. For simplicity, assume they are related by a single finger
move. We model the finger move using a locally supported exact Hamiltonian
isotopy on $\Sigma$. The isotopy induces a family of $\alpha$-curves,
$\bm{\alpha}_{t}$, on $\Sigma$ ($t\in\mathbb{R}$); for $t\ll 0$ (resp. $t\gg
0$), $\bm{\alpha}_{t}$ is constant with respect to $t$ and is identified with
$\bm{\alpha}_{0}$ (resp. $\bm{\alpha}_{1}$). $\bm{\alpha}_{t}$ induces an
immersed totally real submanifold
$C_{\alpha}=\bigcup_{t\in\mathbb{R}}\bm{\alpha}_{t}\times\\{1\\}\times\\{t\\}$ in
$\Sigma\times[0,1]\times\mathbb{R}$. $C_{\alpha}$ can be realized as an
immersion
$\Psi_{t}:(\amalg_{i=1}^{g}S^{1})\times\mathbb{R}\rightarrow\Sigma\times\\{1\\}\times\mathbb{R}.$
Let $C_{\beta}$ be the Lagrangian induced by the $\beta$ curves. For
$\bm{x}\in\mathbb{T}_{\bm{\alpha}_{0}}\cap\mathbb{T}_{\bm{\beta}}$ and
$\bm{y}\in\mathbb{T}_{\bm{\alpha}_{1}}\cap\mathbb{T}_{\bm{\beta}}$, one then
defines $\mathcal{M}_{\Psi_{t}}(\bm{x},\bm{y})$ to be the moduli space of
holomorphic curves in $\Sigma\times[0,1]\times\mathbb{R}$ with boundary on
$C_{\alpha}\cup C_{\beta}$ such that the $\alpha$-boundary can be lifted
through $\Psi_{t}$. With this, one can define a map
$\Phi_{0}:CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\rightarrow
CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{im}))$ by
$\bm{x}\mapsto\sum_{\bm{y}}\sum_{\phi\in\pi_{2}(\bm{x},\bm{y})}\\#\mathcal{M}_{\Psi_{t}}(\bm{x},\bm{y})U^{n_{w}(\phi)}V^{n_{z}(\phi)}\bm{y},$
where $\mathcal{M}_{\Psi_{t}}(\bm{x},\bm{y})$ has dimension zero. Define
$\Phi_{1}:CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{im}))\rightarrow
CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))$ similarly. We remark that
the compactness and gluing results still apply to this setup. The bi-
admissibility of the diagrams obstructs the appearance of boundary
degeneration in the compactification of one-dimensional moduli spaces, and
hence we can still apply the usual argument to show (1) $\Phi_{0}$ and
$\Phi_{1}$ are chain maps, and (2) $\Phi_{0}\circ\Phi_{1}$ and
$\Phi_{1}\circ\Phi_{0}$ are chain homotopic to the identity map.
Therefore, $\Phi_{0}$ and $\Phi_{1}$ are chain homotopy equivalences. ∎
###### Definition 5.5.
An immersed multicurve is called $z$-passable if it is admissibly equivalent
to a $z$-adjacent multicurve.
###### Remark 5.6.
We can easily arrange $\alpha_{K}$ to be a $z$-passable multicurve; see
Example 5.7 below. Moreover, when the pattern knot admits a genus-one doubly
pointed Heegaard diagram, we can even drop the admissibility condition; see
Section 6.2.
###### Example 5.7.
We give a simple way to arrange an immersed multicurve $\alpha_{K}$ to be
$z$-passable. Without loss of generality, we consider a single component
$\gamma$ of $\alpha_{K}$ each time, and we orient $\gamma$ arbitrarily. We
view the torus $T^{2}$ as a square as usual and position $\gamma$ such that
the elementary arcs hitting the top edge are separated into two groups of arcs
where the arcs in a single group intersect the top edge in the same direction;
see Figure 21 (1). Next, we perform a Reidemeister-II-like move to the two
groups as in Figure 21 (2). Perform the above modification for every component
of $\alpha_{K}$. We claim the resulting multicurve, which we denote
$\alpha_{K}^{\prime}$, is a $z$-passable multicurve.
Figure 21. (2) is a $z$-passable immersed curve obtained from (1).
We justify the claim when $\gamma$ is homologically trivial; the case where
$\gamma$ is homologically essential is similar. We first check that
$\alpha_{K}^{\prime}$ is admissible by checking that there are no positive zero- or
one-cornered $\alpha$-bounded domains $B$ with $n_{z}(B)=0$. First note that
for any zero- or one-cornered $\alpha$-bounded domain $B$, $\partial B$ must
include an elementary arc meeting the top edge of the square. To see this,
note that $\partial B$ is a nullhomologous curve in the torus and thus lifts
to a closed path in the universal cover. Cutting along (lifts of) the meridian
(i.e., $\mathbb{Z}\times\mathbb{R}$) breaks $\partial B$ into pieces, with at
least two of these (the leftmost and rightmost piece) forming bigons with the
meridian. At least one of those two pieces has no corners (since $B$ is zero-
or one-cornered). The cornerless piece must intersect the longitude because
$\alpha_{K}^{\prime}$ is reduced, and the subarc of $\partial B$ directly
below this intersection with the longitude gives an elementary arc meeting the
top edge of the square. Next we observe that the elementary arcs near the top
edge of the square are arranged such that each arc has the base point $z$ both
on its left and on its right, in each case without oppositely oriented arcs in
between the arc and $z$, and this implies that no domain whose boundary
includes one of these elementary arcs can have $n_{z}(B)=0$. Having shown the
immersed curve $\alpha_{K}^{\prime}$ is admissible, it remains to check that
it is $z$-passable. Recall from Proposition 4.11 that we can perform a
sequence of finger moves to achieve a $z$-adjacent position. Note that all the
intermediate diagrams are admissible by exactly the same argument as above.
### 5.1. Proof of the main theorem, ungraded version
This subsection is devoted to proving the ungraded version of Theorem 1.1.
A satellite knot is constructed via the so-called satellite operation that
requires a pattern knot and a companion knot as input. A pattern knot is an
oriented knot $P$ in an oriented solid torus $S^{1}\times D^{2}$, where an
oriented meridian $\mu$ and an oriented longitude $\lambda$ are chosen for
$\partial(S^{1}\times D^{2})$ so that the orientation determined by
$(\mu,\lambda)$ coincides with the induced boundary orientation. A companion
knot is an oriented knot $K$ in the 3-sphere. We orient any Seifert longitude
of $K$ using the parallel orientation, and orient any meridian $m$ of $K$ so
that $lk(m,K)=1$. The satellite knot $P(K)$ is obtained by gluing
$(S^{1}\times D^{2},P)$ to the companion knot complement
$S^{3}\backslash\nu(K)$ so that the chosen meridian $\mu$ is identified with a
meridian of $K$ and that the chosen longitude $\lambda$ is identified with the
Seifert longitude of $K$; $P(K)$ is given by viewing $P$ as a knot in the
glued-up 3-sphere $(S^{1}\times D^{2})\cup(S^{3}\backslash\nu(K))$.
We state the main theorem again below for the readers’ convenience. Recall
that any pattern knot can be represented by a doubly-pointed bordered Heegaard
diagram [LOT18, Section 11.4]. See 1.1
Given a doubly pointed bordered Heegaard diagram $\mathcal{H}_{w,z}$ for the
pattern knot, we will construct an arced bordered Heegaard diagram
$\mathcal{H}_{X(P)}$; the Heegaard diagram $\mathcal{H}_{X(P)}$ specifies a
bordered 3-manifold $X(P)$ with two boundary components888Strictly speaking,
an arced bordered Heegaard diagram specifies a strongly bordered 3-manifold in
the sense of Definition 5.1 in [LOT15], where there is also a framed arc in
addition to the underlying bordered 3-manifold. This extra structure will not
be relevant to us, so we will not specify it. , where (1) the underlying
3-manifold is $S^{1}\times D^{2}\backslash\nu(P)$, (2) the parametrization of
$\partial(S^{1}\times D^{2})$ is the standard meridian-longitude
parametrization, and (3) the parametrization of interior boundary
$\partial(\nu(P))$ is given by a meridian of $P$ and some longitude of $P$.
(The choice of the longitude of $P$ does not matter).
We describe how to obtain $\mathcal{H}_{X(P)}$ from the doubly pointed
bordered Heegaard diagram $\mathcal{H}_{w,z}$. This is a standard
construction, similar to the one appearing in [LOT18] Section 11.7; the reader
familiar with it may skip this paragraph and consult Figure 22 for an
overview.
Figure 22. An example of obtaining $\mathcal{H}_{X(P)}$ from
$\mathcal{H}_{w,z}$. Here, $\mathcal{H}_{w,z}$ is shown on the top row; it is
a genus-one Heegaard diagram for the $(3,1)$-cable pattern.
$\mathcal{H}_{X(P)}$ is the rightmost diagram on the second row.
Assume $\mathcal{H}_{w,z}$ is of genus $g$. First, we stabilize
$\mathcal{H}_{w,z}=(\bar{\Sigma},\bm{\bar{\alpha}},\bm{\beta},w,z)$ to get a
new doubly pointed bordered Heegaard diagram
$\mathcal{H}^{\prime}_{w,z}=(\bar{\Sigma}^{\prime},\bm{\bar{\alpha}}\cup\\{\alpha^{c}_{g}\\},\bm{\beta}\cup\\{\beta_{g+1}\\},w,z)$.
More concretely, $\bar{\Sigma}^{\prime}$ is obtained from $\bar{\Sigma}$ by
attaching a two-dimensional one-handle, with feet near the base points $w$ and
$z$. Parametrize the new one-handle by $S^{1}\times[0,1]$, where
$S^{1}\times\\{0\\}$ is the feet circle near $z$, and $S^{1}\times\\{1\\}$ is
the feet circle near $w$. We also parametrize $S^{1}$ by $[0,2\pi]/(0\sim
2\pi)$. The new $\alpha$-circle $\alpha^{c}_{g}$ is the belt circle
$S^{1}\times\\{1/2\\}$ of the new one-handle. Let $p_{1}=(0,0)$ and
$p_{2}=(0,1)$ be two points on the two feet circles of the one-handle. The new
$\beta$ circle $\beta_{g+1}$ is the union of two arcs $l_{1}$ and $l_{2}$
connecting $p_{1}$ and $p_{2}$, where $l_{1}$ is an arc in
$\bar{\Sigma}\backslash\bm{\beta}$ and $l_{2}$ is the arc
$\\{(0,t)|t\in[0,1]\\}$ in the new one-handle. Next, introduce a new curve
$\bar{\alpha}_{1}^{a,L}$ as follows. Let $l_{z}$ be an arc from $z$ to the
point $(-1,0)\in S^{1}\times\\{0\\}$ that does not intersect any of the $\alpha$-
and $\beta$-curves. Let $l_{2}^{\prime}$ be the arc $\\{(1,t)|t\in[0,1]\\}$ in
the one-handle; denote the endpoints of $l_{2}^{\prime}$ by $p_{1}^{\prime}$
and $p_{2}^{\prime}$. Let $l_{1}^{\prime}$ be an arc connecting
$p_{1}^{\prime}$ and $p_{2}^{\prime}$ in
$\bar{\Sigma}\backslash\\{\bm{\bar{\alpha}}\cup l_{z}\\}$. Let
$\bar{\alpha}_{1}^{a,L}=l_{1}^{\prime}\cup l_{2}^{\prime}$. Then
$\bar{\alpha}_{1}^{a,L}$ intersects $\alpha^{c}_{g}$ geometrically once at a
point $p$. Note $\alpha_{g}^{c}$ is the meridian of $P$, and
$\bar{\alpha}_{1}^{a,L}$ is a longitude of $P$. Let
$\bar{\Sigma}^{\prime\prime}$ be the circle compactification of
$\bar{\Sigma}^{\prime}\backslash\\{p\\}$. Denote the new boundary circle by
$\partial_{L}\bar{\Sigma}^{\prime\prime}$, and denote the boundary circle
inherited from $\partial\bar{\Sigma}$ by
$\partial_{R}\bar{\Sigma}^{\prime\prime}$. Let
${\alpha}_{1}^{a,L}=\bar{\alpha}_{1}^{a,L}\backslash\\{p\\}$, and let
${\alpha}_{2}^{a,L}=\alpha^{c}_{g}\backslash\\{p\\}$. Let
$\alpha_{1}^{a,R}=\alpha_{1}^{a}$, and let $\alpha_{2}^{a,R}=\alpha_{2}^{a}$.
Let
$\bm{\bar{\alpha}}^{\prime\prime}=\\{\alpha^{a,L}_{1},\alpha^{a,L}_{2},\alpha^{a,R}_{1},\alpha^{a,R}_{2},\alpha^{c}_{1},\ldots,\alpha^{c}_{g-1}\\}$.
Let $\bm{\beta}^{\prime\prime}=\bm{\beta}\cup\\{\beta_{g+1}\\}$. Label the
Reeb chords corresponding to the new boundary circle
$\partial_{L}\bar{\Sigma}^{\prime\prime}$ by $\sigma_{i}$ ($i=0,1,2,3$) so
that $\sigma_{2}$ and $\sigma_{3}$ lie on the side attached to the feet near
$w$, and $\sigma_{0}$ and $\sigma_{1}$ lie on the side attached to the feet
near $z$. Let $z_{R}=z$, and let $z_{L}$ be a point on $\sigma_{0}$. Let
$\bm{z}$ be an arc connecting $z_{R}$ and $z_{L}$ in the complement of
$\bm{\bar{\alpha}}^{\prime\prime}\cup\bm{\beta}^{\prime\prime}$; $\bm{z}$
exists since we can obtain such an arc by extending $l_{z}$. Finally, we let
$\mathcal{H}_{X(P)}=(\bar{\Sigma}^{\prime\prime},\bm{\bar{\alpha}}^{\prime\prime},\bm{\beta}^{\prime\prime},\bm{z})$.
See Figure 22.
###### Lemma 5.8.
Let $\mathcal{H}_{X(P)}$ be the arced bordered Heegaard diagram obtained from
$\mathcal{H}_{w,z}$ via the above procedure. Let $\alpha_{im}$ be a
$z$-adjacent multicurve. Then $\mathcal{H}_{X(P)}(\alpha_{im})$ is
unobstructed and bi-admissible.
###### Proof.
The unobstructedness follows from Proposition 4.16. We now verify bi-
admissibility in the sense of Definition 2.10. Note that periodic domains $B$
for $\mathcal{H}_{X(P)}(\alpha_{im})$ with
$n_{\sigma_{0}}(B)=n_{\sigma_{1}}(B)=0$ (respectively
$n_{\sigma_{2}}(B)=n_{\sigma_{3}}(B)=0$) correspond to periodic domains
$B^{\prime}$ for $\mathcal{H}_{w,z}(\alpha_{im})$ with $n_{z}(B^{\prime})=0$
(respectively $n_{w}(B^{\prime})=0$). Therefore, the bi-admissibility of
$\mathcal{H}_{w,z}(\alpha_{im})$, which was shown in Proposition 5.2, implies
the bi-admissibility of $\mathcal{H}_{X(P)}(\alpha_{im})$. ∎
Recall $\mathcal{H}_{id}$ is the standard doubly pointed bordered Heegaard
diagram for the identity pattern knot.
###### Lemma 5.9.
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))$ is chain homotopy
equivalent to
$CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im}))$.
###### Proof.
Note the doubly pointed Heegaard diagram
$\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})$ is obtained from
$\mathcal{H}_{w,z}(\alpha_{im})$ by two stabilizations; see Figure 23.
Figure 23. An example of $\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})$
(left) and $\mathcal{H}_{w,z}(\alpha_{im})$ (lower right). Here, $P$ is the
$(3,1)$-cable. These two diagrams are related via handleslides and
destabilizations, where the handleslides do not involve sliding over the
immersed $\alpha$-curve.
In particular, it is also bi-admissible, and hence one can define
$CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im}))$. We
claim there is a sequence of Heegaard moves relating
$\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})$ and
$\mathcal{H}_{w,z}(\alpha_{im})$ which do not involve sliding $\alpha$ curves
over $\alpha_{im}$. To see this, note that on
$\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})$ there is a
$\beta$-circle between the $w$ and $z$ base points that intersects an
$\alpha$-circle geometrically once; denote these curves by $\beta_{g+2}$ and
$\alpha_{g+2}$ respectively. After sliding other beta curves over
$\beta_{g+2}$ if necessary, we may assume $\alpha_{g+2}$ does not intersect
other beta curves, and hence we can destabilize
$\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})$ along $\alpha_{g+2}$ and
$\beta_{g+2}$. Now we arrive at an intermediate Heegaard diagram; see Figure
23 (upper right). It is a stabilization of $\mathcal{H}_{w,z}(\alpha_{im})$.
On this intermediate Heegaard diagram, there is an $\alpha$-circle
$\alpha_{g+1}$ that intersects only one $\beta$-circle $\beta_{g+1}$, and the
geometric intersection number is one. So, we may slide other $\alpha$-curves
over $\alpha_{g+1}$ if necessary so that $\beta_{g+1}$ does not intersect other
$\alpha$-curves. After this, we destabilize the Heegaard diagram along
$\alpha_{g+1}$ and $\beta_{g+1}$, and the resulting Heegaard diagram is
$\mathcal{H}_{w,z}(\alpha_{im})$. The homotopy equivalence between
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))$ and
$CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im}))$
follows from the homotopy invariance of knot Floer chain complexes established
in Proposition 3.11. ∎
With these lemmas at hand, we now prove the ungraded version of Theorem 1.1.
###### Proof of Theorem 1.1, ungraded version.
In view of Proposition 5.4, we may assume the immersed multicurve $\alpha_{K}$
for the knot complement of $K$ is $z$-adjacent. Let $\mathcal{H}_{X(P)}$ be the
arced bordered Heegaard diagram obtained from $\mathcal{H}_{w,z}$ via the
“punctured-stabilization procedure”. Throughout, when referring to the type D
structure of a knot complement, we use the meridian and Seifert longitude to
parametrize the boundary. By standard arguments, we can arrange that
$\mathcal{H}_{X(P)}$ is left provincially admissible at the cost of isotopy of
the $\beta$ curves. By Theorem 1.4, we have
$\displaystyle\widehat{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))$
$\displaystyle\cong\widehat{CFDA}(\mathcal{H}_{X(P)})\boxtimes\widehat{CFD}(\alpha_{K})$
$\displaystyle\cong\widehat{CFDA}(\mathcal{H}_{X(P)})\boxtimes\widehat{CFD}(S^{3}\backslash\nu(K))$
$\displaystyle\cong\widehat{CFD}(S^{3}\backslash\nu(P(K))).$
Therefore, up to homotopy equivalence, the extended type D structure
$\widetilde{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))$ extends
$\widehat{CFD}(S^{3}\backslash\nu(P(K)))$. Consequently, we have the
following:
$\displaystyle CFK_{\mathcal{R}}(P(K))$
$\displaystyle\cong\widetilde{CFA}(\mathcal{H}_{id}){\boxtimes}\widetilde{CFD}(S^{3}\backslash\nu(P(K)))$
$\displaystyle\cong\widetilde{CFA}(\mathcal{H}_{id}){\boxtimes}\widetilde{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))$
$\displaystyle\cong
CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{K})).$
Here, the last equivalence follows from applying Theorem 1.6. Note ${\boxtimes}$
in the above equation is well-defined since $\mathcal{H}_{X(P)}(\alpha_{K})$
is bi-admissible by Lemma 5.8. Now, by Lemma 5.9, $CFK_{\mathcal{R}}(P(K))$ is
chain homotopy equivalent to
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))$. ∎
### 5.2. $\mathcal{H}_{w,z}(\alpha_{K})$ is gradable
We want to show that the chain homotopy equivalence established in the
previous subsection preserves the $w$-grading and $z$-grading of knot Floer
chain complexes. As the first step, we need to show that
$\mathcal{H}_{w,z}(\alpha_{K})$ is gradable (in the sense of Definition 3.7).
###### Proposition 5.10.
The diagram $\mathcal{H}_{w,z}(\alpha_{K})$ is gradable.
In addition to being gradable, note that the results in the previous
subsection also imply that
$\widehat{HF}(\mathcal{H}_{w}(\alpha_{K}))\cong\widehat{HF}(\mathcal{H}_{z}(\alpha_{K}))\cong\mathbb{F}$.
Therefore we can define an absolute bigrading on
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))$.
We will reduce the proof of Proposition 5.10 to the case where
$\mathcal{H}_{w,z}$ is of genus one. If $\mathcal{H}_{w,z}$ is a genus-one
bordered Heegaard diagram, then one can define a Maslov grading $m(-)$ on
$CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))$ as follows. Given any two
generators $x$ and $y$, let $p_{0}$ and $p_{1}$ be two paths from $x$ to $y$
in $\alpha_{K}$ and $\beta$ respectively such that $p_{0}-p_{1}$ lifts to a
closed path $\gamma$ in the universal cover $\mathbb{R}^{2}$ of the genus-one
Heegaard surface. Up to perturbing the curves, we may assume that $p_{0}$ and
$p_{1}$ intersect in right angles at $x$ and $y$. Then $m(x)-m(y)$ is equal to
$\frac{1}{\pi}$ times the total counterclockwise rotation along the smooth
segments of $\gamma$ minus twice the number of (lifts of) the base point $z$
enclosed by $\gamma$; see [HRW22, Definition 35]. This Maslov grading is also
defined (by the same definition) when the $\beta$ curve is only immersed. In
[HRW22], it is shown that the Maslov grading thus defined on a pairing diagram
of two immersed curves agrees with the Maslov grading computed using the
grading package of bordered Heegaard Floer homology. Next, we show this Maslov
grading can be equivalently defined in terms of the index of domains.
###### Proposition 5.11.
Let $\mathcal{H}_{w,z}$ be a genus-one bordered Heegaard diagram and let
$m(-)$ be the Maslov grading on $\mathcal{G}(\mathcal{H}(\alpha_{K}))$
mentioned above. Let $B\in\pi_{2}(x,y)$ be a domain connecting $x$ and $y$
with $\partial B=p_{0}-p_{1}$. Then $m(x)-m(y)=\text{ind}(B)-2n_{z}(B)$.
Moreover, this result extends to the case where the $\beta$ curve is immersed,
in which case we define the index of $B$ by
$\text{ind}(B)=e(B)+n_{x}(B)+n_{y}(B)-s(\partial_{\alpha_{K}}B)-s(\partial_{\beta}B).$
(Here $s(-)$ denotes the self-intersection number of an oriented immersed arc
as defined in Section 2.6.)
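As a quick consistency check (a sketch, under standard conventions for the Euler measure and local multiplicities): for an embedded bigon $B$ from $x$ to $y$ with two convex corners, $e(B)=1-\frac{2}{4}=\frac{1}{2}$, $n_{x}(B)=n_{y}(B)=\frac{1}{4}$, and both self-intersection terms vanish, so $\text{ind}(B)=\frac{1}{2}+\frac{1}{4}+\frac{1}{4}=1$; when $n_{z}(B)=0$, the proposition then gives $m(x)-m(y)=1$, the expected grading shift for a differential.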
Before proving Proposition 5.11 we introduce some terminology. It will be
clear later that we can assume $p_{0}-p_{1}$ is immersed and only has discrete
double points.
###### Definition 5.12.
A cornered immersed loop in $T^{2}$ is the union of two oriented immersed arcs
$p_{0}$ and $p_{1}$ with at most discrete double points such that
* (1)
$p_{0}$ and $p_{1}$ share common endpoints,
* (2)
the interiors of $p_{0}$ and $p_{1}$ intersect transversally,
* (3)
$p_{0}-p_{1}$ is an oriented loop which is null-homologous,
* (4)
$p_{0}$ and $p_{1}$ intersect transversally at the endpoints if $p_{0}$ and
$p_{1}$ are non-degenerate (i.e., not a point), and
* (5)
if one of $p_{0}$ and $p_{1}$ is degenerate, the remaining arc forms a smooth
loop after identifying the endpoints.
The endpoints of $p_{0}$ (or equivalently, $p_{1}$) are called corners of the
cornered immersed loop.
###### Definition 5.13.
Two cornered immersed loops $p_{0}-p_{1}$ and $p_{0}^{\prime}-p_{1}^{\prime}$
in $T^{2}$ are called cornered identical if they share the same set of corners
$\\{x,y\\}$ (or $\\{x\\}$ if the loops have degenerate arcs) and there are
arbitrarily small neighborhoods $N_{x}$ and $N_{y}$ of $x$ and $y$
respectively such that
$(p_{0}-p_{1})|_{N_{x}}=(p^{\prime}_{0}-p^{\prime}_{1})|_{N_{x}}$ and
$(p_{0}-p_{1})|_{N_{y}}=(p^{\prime}_{0}-p^{\prime}_{1})|_{N_{y}}$.
Figure 24. Upper row from left to right: Reidemeister I, II, and III move.
Lower row from left to right: an isotopy that crosses a non-degenerate corner
and a degenerate corner.
###### Lemma 5.14.
If two cornered immersed loops $p_{0}-p_{1}$ and
$p_{0}^{\prime}-p_{1}^{\prime}$ are cornered identical, then they are related
by a finite sequence of moves of the following types:
* (1)
Reidemeister moves that do not involve the corners and
* (2)
isotopy that possibly cross the corners.
(See Figure 24.) Here, we require that $(p_{0}-p_{1})|_{N_{x}}$ and
$(p_{0}-p_{1})|_{N_{y}}$ be fixed throughout the modification for some
sufficiently small neighborhoods ${N_{x}}$ and ${N_{y}}$ of the corners.
###### Proof.
One can prove this by applying the usual Reidemeister-move equivalence of knot
diagrams (by treating both immersed loops as diagrams for the unknot via
imposing proper crossing information); note that any Reidemeister move
involving a corner can be traded for an isotopy crossing the corner and a
Reidemeister move that does not involve the corner. ∎
###### Definition 5.15.
Given a cornered immersed loop $p_{0}-p_{1}$ in $T^{2}$, let
$\tilde{p}_{0}-\tilde{p}_{1}$ be a lift of $p_{0}-p_{1}$ in $\mathbb{R}^{2}$
and let $\tilde{B}$ be the bounded domain in $\mathbb{R}^{2}$ such that
$\partial\tilde{B}=\tilde{p}_{0}-\tilde{p}_{1}$. Let $B$ be the domain in
$T^{2}$ obtained from $\tilde{B}$ by applying the covering projection. Define
the index of the cornered immersed loop as
$\text{ind}(p_{0}-p_{1})=e(B)+n_{x}(B)+n_{y}(B)-s(\partial_{p_{0}}B)-s(\partial_{p_{1}}B),$
where $x$ and $y$ are the corners.
Define the net rotation number $nr(p_{0}-p_{1})$ to be $\frac{1}{\pi}$ times
the counterclockwise net rotation along the smooth segments $p_{0}$ and
$p_{1}$.
###### Lemma 5.16.
Suppose $p_{0}-p_{1}$ and $p_{0}^{\prime}-p_{1}^{\prime}$ are cornered
immersed loops that differ by an isotopy or a Reidemeister move. Then
$\text{ind}(p_{0}-p_{1})-\text{ind}(p^{\prime}_{0}-p^{\prime}_{1})=nr(p_{0}-p_{1})-nr(p^{\prime}_{0}-p^{\prime}_{1}).$
Figure 25. Local diagrams for isotopies that cross a corner. The numbers $a$,
$a-1$, and $a+1$ indicate the multiplicities of the regions.
###### Proof.
First, we examine the effect of an isotopy on both quantities. Clearly, the
net rotation number is unchanged. We separate the discussion of the index into
two cases according to whether the isotopy crosses corners or not. If the
isotopy does not cross the corners, it clearly does not change the index
either, and we are done. If the isotopy crosses a corner, then we claim the
local multiplicity and the self-intersection numbers change in a way that
cancel each other, leaving the index unchanged. This claim can be seen by
examining local diagrams, which are further divided into two cases according
to whether the corner is degenerate or not. When the corner is non-degenerate,
the local diagram of one case is shown in Figure 25 (i); all the other cases
can be obtained from this case by swapping the labels and orientations of the
arcs, and the analysis of all cases is similar. In the case shown in Figure
25 (i), only $n_{x}(B)$ and $s(\partial_{p_{0}}B)$ change: the diagram on the
left has $n_{x}(B)=\frac{a+(a-1)+(a-1)+(a-1)}{4}$ and the local self-
intersection of $p_{0}$ contributes $s_{p_{0}}=-1$; the diagram on the right
has $n_{x}(B)=\frac{(a+1)+a+a+a}{4}$ and there are no self-intersections of
the arcs in the local diagram so the local contribution $s_{p_{0}}=0$. In both
diagrams we have $n_{x}(B)-s_{p_{0}}=\frac{4a+1}{4}$, and hence the index is
unchanged. When the corner is degenerate, one of the cases is shown in Figure
25 (ii). In this case, only $n_{x}$ and the self-intersection of $p_{0}$
change: the diagram on the left has $n_{x}=\frac{a+a+(a-1)+(a-1)}{4}$ and a
local contribution of the self-intersection of $p_{0}$ given by
$s_{p_{0}}=-1$; the diagram on the right has $n_{x}=\frac{(a+1)+(a+1)+a+a}{4}$
and a local contribution of the self-intersection of $p_{0}$ given by
$s_{p_{0}}=1$. In both local diagrams we have
$2n_{x}(B)-s_{p_{0}}=2a$ (note $n_{y}=n_{x}$ here since the corner is
degenerate), and hence the index is unchanged. All other cases can be obtained
from this case by swapping the labels and orientations of the arcs, and the
analysis of all cases is similar.
Figure 26. The local diagram for Reidemeister I move. The numbers $a$, $a-1$,
and $a-2$ indicate the multiplicities of the regions.
Next, we examine the effect of a Reidemeister I move. Up to swapping
orientations and the labels, we may assume the local diagram is as shown in
Figure 26. The net rotation number of the diagram on the right is 2 less than
that of the diagram on the left. For the index comparison, the Euler measure
of the local domain on the right is $1$ less than that of the left diagram and
the self-intersection number $s(\partial_{p_{0}}B)$ of the right diagram is
$1$ more than that of the left diagram; in total the index of the diagram on
the right is 2 less than that of the diagram on the left. Therefore, the
changes in the net rotation and in the index are the same after doing a
Reidemeister I move.
Next, we examine the effect of a Reidemeister II move. It does not change the
net rotation number. Also, it does not affect the Euler measure or the local
multiplicities at the corners. A Reidemeister II move creates/annihilates a
pair of self-intersection points whose signs cancel each other if both arcs
involved are on $p_{0}$ or $p_{1}$, and otherwise does not involve self-
intersections; in both cases the self-intersection numbers are unchanged. So,
the index does not change either.
Finally, it is easy to see that a Reidemeister III move does not change the
net rotation number. It is also easy to see a Reidemeister III move does not
change the Euler measure, local multiplicities at the corners, or self-
intersections, and hence it does not change the index either. ∎
###### Proposition 5.17.
Let $p_{0}-p_{1}$ be a cornered immersed loop. Then
$\text{ind}(p_{0}-p_{1})=nr(p_{0}-p_{1})$.
###### Proof.
By Lemma 5.16 and Lemma 5.14, it suffices to show that $p_{0}-p_{1}$ is
cornered identical with some cornered immersed loop whose index coincides with
the net rotation number.
If at least one of $p_{0}$ and $p_{1}$ is degenerate, $p_{0}-p_{1}$ is
cornered identical with an embedded circle that passes through the corner, and it is
easy to see the index and the net rotation number coincide on an embedded
circle with a degenerate corner.
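For instance (a sketch): for a counterclockwise embedded circle with a degenerate corner $x$, the lift $\tilde{B}$ is a disk with smooth boundary, so $e(B)=1$ and $n_{x}(B)=n_{y}(B)=\frac{1}{2}$ (the corner lies on the smooth boundary and $y=x$), while the self-intersection terms vanish; hence
$\text{ind}(p_{0}-p_{1})=1+\frac{1}{2}+\frac{1}{2}=2=nr(p_{0}-p_{1}),$
as a full counterclockwise traversal rotates by $2\pi$.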
Figure 27. Deforming $p_{0}-p_{1}$.
Next, we discuss the case where $p_{0}-p_{1}$ is non-degenerate. We first
construct a cornered immersed loop $p_{0}^{\prime}-p_{1}^{\prime}$ that is
cornered identical to $p_{0}-p_{1}$ as follows. Let $p_{0}^{\prime}=p_{0}$. We
shall construct $p_{1}^{\prime}$ to be a path which is almost a parallel push-
off of $p_{0}$. (See Figure 27 for examples.) To spell out the construction,
let $f_{0}:[0,1]\rightarrow T^{2}$ be an immersion such that
$f_{0}([0,1])=p_{0}$. Let $\hat{N}$ be a sufficiently small tubular
neighborhood of $p_{0}$ such that it can be realized as the image of an
extension of $f_{0}$, i.e., there exists an immersion
$\tilde{f_{0}}:[0,1]\times[-\epsilon,\epsilon]\rightarrow T^{2}$ such that
$\tilde{f_{0}}|_{[0,1]\times\\{0\\}}=f_{0}$ and
$\tilde{f_{0}}([0,1]\times\\{pt\\})$ is a parallel push-off of $p_{0}$ for any
$pt\in[-\epsilon,0)\cup(0,\epsilon]$. We can further assume that near the two
corners $x=f_{0}(0)$ and $y=f_{0}(1)$, the other arc $p_{1}$ is contained in
$\tilde{f_{0}}(\\{0,1\\}\times[-\epsilon,\epsilon])$; denote these two arcs on
$p_{1}$ near $x$ and $y$ by $p_{x}$ and $p_{y}$ respectively. We construct
|
# Full-frequency dynamic convolution: a physical frequency-dependent
convolution for sound event detection
###### Abstract
Recently, 2D convolution has been found ill-suited to sound event detection
(SED): it enforces translation equivariance on sound events along the frequency
axis, which is not a shift-invariant dimension. To address this issue, dynamic
convolution has been used to model the frequency dependency of sound events. In
this paper, we propose the first full-dynamic method, named _full-frequency
dynamic convolution_ (FFDConv). FFDConv generates frequency kernels for every
frequency band, a structure designed directly for frequency-dependent modeling,
and thus physically furnishes 2D convolution with the capability of
frequency-dependent modeling. FFDConv not only outperforms the baseline by 6.6%
on the DESED real validation dataset in terms of PSDS1, but also outperforms the
other full-dynamic methods. In addition, by visualizing features of sound
events, we observed that FFDConv effectively extracts coherent features in
specific frequency bands, consistent with the vocal continuity of sound events.
This demonstrates that FFDConv has a strong frequency-dependent perception
ability. Code is available at FFDConv.
Index Terms— sound event detection, full-frequency dynamic convolution,
frequency-dependent modeling, independent representation spaces, vocal
continuity
## 1 introduction
Sound event detection (SED) is one of the subtasks of computational auditory
scene analysis (CASA) [1], which helps machines understand the content of an
audio scene. Similar to visual object detection [2] and segmentation [3], SED
aims to detect sound events and their corresponding timestamps (onset and
offset), and is often considered a precursor task to automatic speech
recognition (ASR) and speaker verification. It has wide applications in
information retrieval [4], smart
homes [5], and smart cities [6].
Fig. 1: Illustration of frequency-dependent modeling. Top: time-frequency
patterns are modeled in a single space with a shared kernel. Bottom: they are
modeled in several spaces with frequency-adaptive kernels, in which
time-frequency patterns specific to sound events can be captured.
SED has achieved great success with the help of deep learning (DL). The
general paradigm is that acoustic spectral features are passed through a deep
neural network and then transformed into discriminative acoustic
representations to distinguish different sound events. Designing an effective
feature extractor has become a hot topic in SED, which in the past few years
has adopted methods proven in other domains. Convolutional neural
networks (CNNs) from the field of computer vision, such as SENet [7], SKNet
[8], and CBAM [9], have been migrated to SED in the spirit that an acoustic
spectrogram is similar to two-dimensional image data. With the intention
that speech and audio are both sound data, Conformer [10] from the field of
speech recognition has also been migrated to SED. However, they all failed to
show good performance in SED. Specifically, SENet, SKNet, and CBAM are designed
for image data with a clear 2D spatial structure, while audio data is a
sequence. Conformer is designed for speech data containing only the speech
sound event, meaning the time-frequency patterns of speech are distributed in
a fixed frequency band. Audio data, however, usually contains multiple sound
events and thus exhibits diverse time-frequency patterns. All of the above
emphasizes that DL methods qualified in other domains may not necessarily be
compatible with SED.
Dynamic convolution network [11] was initially proposed for video prediction.
It was designed to generate future frames based on the motion pattern within a
particular video. The parameters of the dynamic convolution kernel are always
adapted to the input. In SED, different sound events are distributed in
different frequency regions, and this frequency dependence is invariant over
time. This has motivated some researchers to investigate whether dynamic
convolution can improve the capability of 2D convolution in modeling the
frequency dependence of sound events. [12] proposed frequency dynamic
convolution (FDConv), observing that the time-frequency spectrogram is not
translation invariant along the frequency dimension, unlike image data. FDConv
extracts frequency-adaptive attention weights from the input for several
pre-initialized convolution kernels; these kernels are combined by a weighted
sum over the kernel-number dimension to obtain one convolution kernel, which is
then convoluted with the input in the standard manner. Building on [12] and
omni-dimensional dynamic convolution [13], [14] proposed multi-dimensional
frequency dynamic convolution (MFDConv), which extends the frequency-adaptive
dynamic properties of convolutional kernels to more dimensions of the kernel
space, i.e., in-channels, out-channels, and kernel numbers.
Although FDConv and MFDConv have achieved great performance, they are
essentially the same as basic convolution, which is spatially shared. They
belong to semi-dynamic convolution in the field of dynamic convolution. As
shown in the upper part of Fig. 1, their perception abilities of different
frequency bands are identical. They can only model time-frequency patterns in
one representation space, where sound events are not easily recognized from
each other. Compared with semi-dynamic convolution, full-dynamic convolution
[11, 15, 16, 17, 18], which uses a separate network branch to predict a
specific filter for each pixel, has attracted more attention recently. [18]
found that this type of dynamic convolution is equivalent to applying
attention to unfolded input features, which makes it more effective at
modeling complex patterns. Sound events’ time-frequency patterns are highly
frequency-dependent, and full-dynamic convolution can model the features of
different spatial pixels with different filters. Full-dynamic convolution may
therefore be well suited to recognizing sound events.
In this paper, we propose a novel method named _full-frequency dynamic
convolution_ (FFDConv), which is the first full-dynamic convolution method for
SED. As shown in the lower part of Fig. 1, FFDConv generates frequency-
specific kernels, resulting in distinct representation spaces. This design is
applied directly in the structure for frequency-dependent modeling. In this
way, the 2D convolution is physically furnished with the capability of
frequency-dependent modeling, so that the specific time-frequency patterns can
be acquired for different sound events. In the end, sound events can be easily
recognized from each other in subsequent classification.
The main contributions of this paper are summarized as follows:
* •
We propose full-frequency dynamic convolution, which models time-frequency
patterns in independent representation spaces. This method extracts more
discriminative features of sound events, enabling effective classification.
* •
The proposed method outperforms not only the baseline but also a pre-existing
full-dynamic filter method from another domain.
* •
By visualizing features of sound events, we found that the ability to model
temporally coherent features is essential for detecting sound events, and
that FFDConv has this ability.
## 2 methodology
### 2.1 Full-dynamic convolution
A basic 2D convolution can be denoted as $y=\boldsymbol{W}\ast
x+\boldsymbol{b}$, where $x\in{\mathbb{R}^{T\times F\times C_{in}}}$ and
$y\in{\mathbb{R}^{T\times F\times C_{out}}}$ denote the input feature and
output feature; $\boldsymbol{W}\in{\mathbb{R}^{k\times k\times C_{in}\times
C_{out}}}$ and $\boldsymbol{b}\in{\mathbb{R}^{C_{out}}}$ denote the weight and
bias of a basic convolution kernel. In contrast to basic convolution, full-
dynamic convolution [11] leverages separate network branches to generate the
filters for each pixel. The full-dynamic convolution operation can be written
as
$y=\boldsymbol{Concat}(\boldsymbol{W}_{t,f}\ast x(t,f)),\qquad\boldsymbol{W}_{t,f}=G(x,t,f),$ (1)
where $\boldsymbol{W}_{t,f}$ denotes the weight for the pixel at $(t,f)$, $G$
is the filter-generating function, and $Concat$ conveys that the convolution
of each pixel is performed independently. For simplicity, the bias term is
omitted.
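For concreteness, a minimal PyTorch sketch of Eq. (1) is given below, using the equivalence with attention over unfolded patches noted in [18]. The 1x1 generator `G` and the channel-shared per-pixel kernels are simplifying assumptions for exposition, not the exact design of [11] or [18].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullDynamicConv2d(nn.Module):
    """Per-pixel dynamic filtering, Eq. (1): W_{t,f} = G(x, t, f)."""
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        # Hypothetical generator G: one k*k spatial kernel per pixel,
        # shared across channels for simplicity.
        self.G = nn.Conv2d(channels, k * k, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, T, Fr = x.shape
        w = self.G(x)                                        # (B, k*k, T, Fr)
        patches = F.unfold(x, self.k, padding=self.k // 2)   # (B, C*k*k, T*Fr)
        patches = patches.view(B, C, self.k * self.k, T * Fr)
        w = w.view(B, 1, self.k * self.k, T * Fr)
        # Filtering each pixel with its own kernel == attention on the
        # unfolded input features (cf. [18]).
        return (patches * w).sum(dim=2).view(B, C, T, Fr)
```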
Fig. 2: Illustration of full-frequency dynamic convolution. In general, the
factory produces frequency-dependent kernels from the acoustic feature, and
the kernels are then convoluted with the input along the time axis. In the
factory, two workshops produce the spatial filters and the channel filters,
respectively, which are integrated in the assembly workshop.
### 2.2 Overview of the proposed method
As is commonly understood, different sound events have different frequency
band distributions. For instance, catcall, which is sharp, shrill, and high-
pitched, is often heard in the high-frequency range; running water, which is
low, soft, and soothing, is often heard in the low-frequency range. Based on
this, we explore designing a new convolution for SED, which can capture the
distribution of frequency bands and model time-frequency patterns of sound
events in different frequency representation spaces.
Inspired by full-dynamic convolution [18], we designed full-frequency
dynamic convolution (FFDConv) for SED. Overall, as shown in Fig. 2, FFDConv
employs a separate branch to predict kernels for each frequency band, where
the content of the kernels is based on the input feature. The kernel-generating
branch has two sub-branches: the spatial filter-generating branch for
the spatial space of the kernels and the channel filter-generating branch for
their channel space. After the spatial and channel filters are obtained, they
are combined and then convoluted with the input feature. Analogously,
full-temporal dynamic convolution (FTDConv) predicts kernels for each temporal
frame, and its kernels are convoluted with the input along the frequency axis.
### 2.3 Full-frequency dynamic convolution
Unlike the previous semi-dynamic convolution, FFDConv is designed directly in
the structure for frequency-dependent modeling. It models the feature along
the frequency axis in different representation spaces. Mathematically, FFDConv
can be written as
$y=\boldsymbol{Concat}(\boldsymbol{W}_{f}\ast x(f),\,dim=f),\qquad\boldsymbol{W}_{f}=G_{s}(x,f)\odot G_{c}(x,f),$ (2)
where $\boldsymbol{W}_{f}$ is the content-adaptive kernel for the $f^{th}$
frequency band, $x(f)\in\mathbb{R}^{T}$ is the $f^{th}$ frequency band of the
input feature, $G_{s}$ and $G_{c}$ are the spatial and channel filter-
generating functions, and $\odot$ denotes the element-wise product. For
clarity, $Concat$ here conveys that each $\boldsymbol{W}_{f}$ is convoluted
with the input along the time axis.
FFDConv employs a separate branch to generate convolution kernels for each
frequency band, with two sub-branches: the spatial filter-generating
branch and the channel filter-generating branch. The spatial filter-generating
module predicts the spatial content of the dynamic kernels, and the
channel filter-generating module predicts their channel content. For
efficiency, the dynamic filters are decoupled into spatial and channel ones,
following [18].
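As an illustration of Eq. (2), the following sketch applies already-generated filters to the input; the tensor shapes are assumptions for exposition, and the actual implementation relies on the DDF operator described in Sec. 2.4.

```python
import torch
import torch.nn.functional as F

def ffdconv(x, s, c, K=3):
    """Eq. (2) with precomputed filters: spatial s of shape (B, K*K, F) and
    channel c of shape (B, C, K*K); one kernel per frequency band, shared
    over all frames of that band."""
    B, C, T, Fr = x.shape
    patches = F.unfold(x, K, padding=K // 2)     # (B, C*K*K, T*Fr)
    patches = patches.view(B, C, K * K, T, Fr)
    # W_f = G_s(x, f) (.) G_c(x, f): combined by element-wise product.
    w = s.view(B, 1, K * K, 1, Fr) * c.view(B, C, K * K, 1, 1)
    return (patches * w).sum(dim=2)              # (B, C, T, Fr)
```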
Spatial filter generating. As illustrated in Fig. 3, we use a standard Conv2D
to compress the time dimension of the input and map the channel dimension from
$C$ to $K^{2}$; its kernel weight is in $\mathbb{R}^{C\times K^{2}\times
T\times W}$, where $W$ is the window size of the kernel in the frequency
dimension. The kernel moves along the frequency axis when convoluted with the
input. In this way, not only are adjacent frequency components considered, but
information along the time axis is also aggregated. This yields the spatial
filter of FFDConv, which assigns a $K\times K$ spatial weight to every
frequency kernel and is highly dependent on the input. Note that full-dynamic
convolution [18] instead assigns a $K\times K$ spatial weight to every pixel.
Consequently, FFDConv can model features from different frequency bands of the
input in independent representation spaces.
Considering that these representation spaces may lie far apart from each other,
we employ an attention module following [12] to keep the individual differences
between them from becoming too large. Finally, the spatial filter is passed
through a Filter-Norm module following [18], avoiding gradient
vanishing/explosion during training.
Channel filter generating. As illustrated in Fig. 3, the channel filter-
generating module is similar to the SE block [7]. It compresses the time and
frequency dimensions of the input by average pooling and maps the channel
dimension from $C$ to $CK^{2}$ with two fully connected (FC) layers. Between
the two FC layers, a ReLU activation introduces non-linearity. Passing the
input through this module yields the channel filter of FFDConv, which assigns
a $C$-dimensional channel weight to each spatial location of the frequency
kernel. Note that the channel filter is shared across the $F$ frequency
kernels. Finally, the channel filter is also passed through the Filter-Norm
[18]. The spatial and channel filters are mixed by element-wise product to
obtain the full-frequency kernels, which we then use to model the
time-frequency patterns of the input features.
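A sketch of the two filter-generating branches is given below, under stated assumptions: the reduction ratio `r`, the kernel spanning the full time axis, and the omission of the attention module [12] and Filter-Norm [18] are simplifications, not confirmed design details.

```python
import torch
import torch.nn as nn

class SpatialFilterGen(nn.Module):
    """Compress time and map channels C -> K^2: one K x K filter per band."""
    def __init__(self, C: int, T: int, K: int = 3, W: int = 3):
        super().__init__()
        # Kernel covers the whole time axis and a window W along frequency.
        self.conv = nn.Conv2d(C, K * K, kernel_size=(T, W), padding=(0, W // 2))

    def forward(self, x):            # x: (B, C, T, F)
        return self.conv(x)          # (B, K*K, 1, F)

class ChannelFilterGen(nn.Module):
    """SE-style branch [7]: global pooling + two FC layers, C -> C*K^2."""
    def __init__(self, C: int, K: int = 3, r: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(C, C // r),
            nn.ReLU(inplace=True),
            nn.Linear(C // r, C * K * K),
        )

    def forward(self, x):            # x: (B, C, T, F)
        z = x.mean(dim=(2, 3))       # average pooling over time and frequency
        return self.fc(z)            # (B, C*K*K), shared by all F bands
```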
### 2.4 FFDConv block
Considering that the frequency kernels of FFDConv have no output-channel
dimension, we design an FFDConv block that contains the channel mapping. As
illustrated in Fig. 3, the channel dimension of the input is first mapped from
$C_{in}$ to $C_{out}$ by the channel transformation module. Then, based on the
input feature, the spatial and channel filters are obtained from the spatial
and channel filter-generating modules, and the full-frequency dynamic kernels
are obtained by mixing them. Finally, the kernels are convoluted with the
input along the time axis.
In the actual implementation, following [18], the spatial filters, channel
filters, and input are sent to the DDF operation, which is implemented in
CUDA, alleviating any need to store intermediate multiplied filters during
training and inference. Note that the DDF op expects $H\times W$ spatial
filters; we therefore repeat the $1\times F$ spatial filters to $T\times F$,
so that the kernel weights are identical along the time axis when convoluted
with the input in the $f^{th}$ frequency band (see the sketch below).
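The repeat step can be sketched as follows; the `(B, K*K, 1, F)` filter layout is an assumption carried over from the sketch above.

```python
import torch

def repeat_along_time(spatial_filters: torch.Tensor, T: int) -> torch.Tensor:
    # (B, K*K, 1, F) -> (B, K*K, T, F): the same kernel at every frame,
    # since the DDF op expects one filter per pixel; expand creates a view.
    return spatial_filters.expand(-1, -1, T, -1)
```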
Fig. 3: Details of the FFDConv
## 3 experiment
### 3.1 Dataset and experiment setup
All experiments are conducted on the DCASE 2022 Task 4 dataset. The
training set consists of three types of data: weakly labeled data (1578
clips), synthetic strongly labeled data (10000 clips), and unlabeled in-domain
data (14412 clips). The real validation set (1168 clips) is used for
evaluation. The input acoustic feature is the log Mel spectrogram extracted
from 10-second audio clips sampled at 16 kHz. The feature configuration is the
same as in [13], with 626 frames and 128 Mel frequency bands.
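A hedged sketch of this feature pipeline is shown below; `n_fft=2048` and `hop_length=256` are assumptions chosen only to reproduce the stated frame count ($10\cdot 16000/256+1=626$), not values confirmed by the text.

```python
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=2048, hop_length=256, n_mels=128)
wav = torch.randn(1, 160000)          # placeholder for a 10 s, 16 kHz clip
feat = torch.log(mel(wav) + 1e-6)     # log Mel feature of shape (1, 128, 626)
```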
The baseline model is the CRNN architecture [19], which consists of 7
convolutional blocks and 2 Bi-GRU layers. An attention pooling module is added
after the last FC layer for joint training with weakly labeled data, and mean
teacher (MT) [20] is applied for consistency training with unlabeled data for
semi-supervised learning. Data augmentations such as MixUp [21], time masking
[22], frame shift, and FilterAugment [23] are used, with parameters identical
to [12].
Polyphonic sound detection scores (PSDS), the collar-based F1 score (CB-F1),
and the intersection-based F1 score (IB-F1) are used to evaluate model
performance. Median filters with fixed time length are used for post-
processing, and each sound event has its own threshold for obtaining the hard
predictions used to calculate CB-F1. The metric hyper-parameters are identical
to [12]. The model is trained with the Adam optimizer with a maximum learning
rate of 0.001, and ramp-up is used for the first 80 epochs.
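The exponential form of the ramp-up below follows the mean-teacher recipe [20] and is an assumption; which quantity is ramped (e.g. the consistency weight or the learning rate) is not specified here.

```python
import math

def rampup(epoch: int, rampup_epochs: int = 80) -> float:
    """Sigmoid-shaped ramp-up from 0 to 1 over the first 80 epochs."""
    if epoch >= rampup_epochs:
        return 1.0
    t = epoch / rampup_epochs
    return math.exp(-5.0 * (1.0 - t) ** 2)
```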
Table 1: _SED performance comparison between models using different dynamic convolutions on the validation set._
Model | PSDS1 $\uparrow$ | PSDS2 $\uparrow$ | CB-F1 $\uparrow$ | IB-F1 $\uparrow$
---|---|---|---|---
Baseline [19] | 0.370 | 0.579 | 0.469 | 0.714
DDFConv [18] | 0.387 | 0.624 | 0.467 | 0.720
FTDConv | 0.395 | 0.651 | 0.495 | 0.740
FFDConv | 0.436 | 0.685 | 0.526 | 0.751
### 3.2 Full-frequency dynamic convolution on SED
We compared the baseline with full-dynamic convolution methods, including
decoupled dynamic filter convolution (DDFConv) [18], full-temporal
dynamic convolution (FTDConv), and full-frequency dynamic convolution
(FFDConv). For the full-dynamic methods, dynamic convolution layers
replaced all convolution layers of the baseline model [19] except the first.
The results are shown in Table 1. All three types of full-dynamic convolution
outperform the baseline, which shows that full-dynamic convolution qualifies
for SED. Moreover, their effects increase in the order listed. First, FTDConv
and FFDConv employ content-adaptive temporal or frequency kernels, which can
be viewed as injecting prior knowledge into SED compared with DDFConv. Second,
FFDConv outperforms FTDConv, supporting the view that time-frequency patterns
of sound events are highly frequency-dependent and that this dependency is
time-invariant. Furthermore, FFDConv models acoustic features with different
kernels along the frequency axis, i.e., frequency components are modeled in
different representation spaces, as if the components of the feature were
split into different frequency spaces and then reassembled. This is consistent
with the characteristics of sound events.
Fig. 4: Feature comparison of FFDConv and CRNN. Feature activations of the 5th
conv block are shown in the 4th row. The trends of frequency-band features
over time are shown in the 5th row. Note that the y-axis labels of the strong
prediction are abbreviations of the sound event categories; for example, Abr
stands for Alarm bell ringing.
### 3.3 Fine-grained modeling study
To explore FFDConv’s ability to understand acoustic spectral information at a
fine-grained level, we visualized features of the middle layers. More
visualizations can be found in the supplementary material.
The visualization results are shown in Fig. 4. Comparing the features of
FFDConv and CRNN, we can see that most of the time-frequency patterns modeled
by CRNN are temporally isolated and disjoint. In contrast, FFDConv’s patterns
merge with their neighbors into a whole, forming a distinct time-frequency
representation. This phenomenon can also be seen in the trends of
frequency-band features over time: the waveforms of FFDConv are smoother than
those of CRNN. Specifically, the durations of peaks and troughs are longer in
FFDConv’s waveforms, which results from the features being mostly coherent
over time, whereas CRNN’s waveforms show more pulses in the resting state and
are disorganized. Moreover, in FFDConv’s waveforms the distributions of
frequency-band features are consistent with the alarm_bell_ring spectrogram:
the values of the low-frequency band features are smaller than those of the
middle and high-frequency bands when the alarm bell rings. In CRNN, the
differences between frequency bands are ambiguous. As for the model’s
predictions, CRNN’s isolated features directly lead to output that is
incoherent compared with the ground truth, which shows that the features’
coherence over time is essential. Interestingly, the low-frequency white
noise of the sound clip is filtered out by FFDConv, while CRNN tags it as
speech. This is likely because dynamic convolution concentrates more on
high-frequency texture information, and white noise in the spectrogram lacks
clear contour information.
In fact, most SED models are trained in a frame-based supervised way, which
often leads to features and outputs that are discrete over time. FFDConv
alleviates this through frequency-dependent modeling: it models different
patterns for different frequency bands, leading to a distinct representation
of a sound event. This modeling resembles an attention mechanism in which
the distribution of frequency-band information in the spectrogram is
maintained. Besides, the convolution kernel for a frequency band is shared
across all frames, which produces temporally coherent representations. This is
consistent with both the continuity of the sound waveform and the vocal
continuity of sound events.
### 3.4 Ablation study
We compared the performance for different window sizes $W$ of the kernel used
when generating the spatial filters. Note that the spatial filter size $K$ is
set to 3.
Table 2: _Comparison of different window sizes $W$._
Model | $Atten$ | $W$ | PSDS1 | PSDS2
---|---|---|---|---
FFDConv | ✗ | 3 | 0.421 | 0.650
FFDConv | ✓ | 1 | 0.421 | 0.659
FFDConv | ✓ | 3 | 0.436 | 0.685
FFDConv | ✓ | 5 | 0.423 | 0.656
FFDConv | ✓ | 7 | 0.432 | 0.666
The results are shown in Table 2. With the constraint of the attention module,
FFDConv achieves better performance, which suggests that, without attention,
the spatial filters of different frequency spaces may lie far apart from each
other. FFDConv performs best when the window size is set to 3: compared to
size 1, adjacent frequency components are considered when generating the
spatial filter, while size 5 may suffer from overfitting. In addition, it is
interesting that the performance recovers when the window size is set to 7,
which may be related to the fact that dynamic convolutions are relatively
unstable.
## 4 conclusions
In this paper, we proposed full-frequency dynamic convolution, the first full-
dynamic method for SED. Full-frequency dynamic convolution is designed to
model time-frequency patterns in different frequency spaces; this structural
design physically furnishes 2D convolution with the capability of frequency-
dependent modeling. Experiments on DESED show that full-frequency dynamic
convolution is superior not only to the baseline but also to other full-
dynamic convolutions, which proves that FFDConv qualifies for SED. In
addition, by visualizing features of sound events, we found that FFDConv can
extract temporally coherent features in specific frequency bands, consistent
with the vocal continuity of sound events. This demonstrates that FFDConv has
a strong frequency-dependent perception ability. In the future, we aim to
explore new methods to model the vocal continuity of sound events.
## References
* [1] J. Rouat, “Computational Auditory Scene Analysis: Principles, Algorithms, and Applications,” IEEE TNN, vol. 19, no. 1, pp. 199–199, Jan. 2008.
* [2] Zhengxia Zou, Keyan Chen, Zhenwei Shi, Yuhong Guo, and Jieping Ye, “Object Detection in 20 Years: A Survey,” Proc. IEEE, vol. 111, no. 3, pp. 257–276, March. 2023.
* [3] Jiaxing Sun, Yujie Li, Huimin Lu, Tohru Kamiya, and Seiichi Serikawa, “Deep Learning for Visual Segmentation: A Review,” in COMPSAC, July. 2020, pp. 1256–1260.
* [4] Qin Jin, Peter Schulam, Shourabh Rawat, Susanne Burger, Duo Ding, and Florian Metze, “Event-based video retrieval using audio,” in INTERSPEECH, Sept. 2012, pp. 2085–2088.
* [5] Christian Debes, Andreas Merentitis, Sergey Sukhanov, Maria Niessen, Nikolaos Frangiadakis, and Alexander Bauer, “Monitoring Activities of Daily Living in Smart Homes: Understanding human behavior,” IEEE SPM, vol. 33, no. 2, pp. 81–94, March. 2016.
* [6] Juan Pablo Bello, Charlie Mydlarz, and Justin Salamon, Sound analysis in smart cities, pp. 373–397, Springer International Publishing, Sept. 2017.
* [7] Jie Hu, Li Shen, and Gang Sun, “Squeeze-and-Excitation Networks,” in CVPR, Jun. 2018, pp. 7132–7141.
* [8] Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang, “Selective Kernel Networks,” in CVPR, Jun. 2019, pp. 510–519.
* [9] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon, “CBAM: Convolutional Block Attention Module,” in ECCV, Sept. 2018, pp. 3–19.
* [10] Tong Na and Qinyi Zhang, “Convolutional network with conformer for semi-supervised sound event detection,” Tech. Rep., DCASE2021 Challenge, 2021.
* [11] Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool, “Dynamic filter networks,” in NEURIPS, Dec. 2016.
* [12] Hyeonuk Nam, Seong-Hu Kim, Byeong-Yun Ko, and Yong-Hwa Park, “Frequency Dynamic Convolution: Frequency-Adaptive Pattern Recognition for Sound Event Detection,” in INTERSPEECH, Sept. 2022, pp. 2763–2767.
* [13] Chao Li, Aojun Zhou, and Anbang Yao, “Omni-Dimensional Dynamic Convolution,” in ICLR, Apr. 2022.
* [14] Shengchang Xiao, Xueshuai Zhang, and Pengyuan Zhang, “Multi-Dimensional Frequency Dynamic Convolution with Confident Mean Teacher for Sound Event Detection,” in ICASSP, Jun. 2023, pp. 1–5.
* [15] Julio Zamora Esquivel, Adan Cruz Vargas, Paulo Lopez Meyer, and Omesh Tickoo, “Adaptive Convolutional Kernels,” in ICCV Workshops, Oct. 2019, pp. 0–0.
* [16] Zhi Tian, Chunhua Shen, and Hao Chen, “Conditional Convolutions for Instance Segmentation,” in ECCV, Aug. 2020, pp. 282–298.
* [17] Xinlong Wang, Rufeng Zhang, Tao Kong, Lei Li, and Chunhua Shen, “SOLOv2: Dynamic and Fast Instance Segmentation,” in NEURIPS, Dec. 2020, pp. 17721–17732.
* [18] Jingkai Zhou, Varun Jampani, Zhixiong Pi, Qiong Liu, and Ming-Hsuan Yang, “Decoupled Dynamic Filter Networks,” in CVPR, Jun. 2021, pp. 6647–6656.
* [19] Emre Çakır, Giambattista Parascandolo, Toni Heittola, Heikki Huttunen, and Tuomas Virtanen, “Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection,” IEEE/ACM TASLP, vol. 25, no. 6, pp. 1291–1303, Jun. 2017.
* [20] Antti Tarvainen and Harri Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” in NEURIPS, Dec. 2017.
* [21] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz, “mixup: Beyond Empirical Risk Minimization,” ICLR, Oct. 2018.
* [22] Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” in INTERSPEECH, Sept. 2019, pp. 2613–2617.
* [23] Hyeonuk Nam, Seong-Hu Kim, and Yong-Hwa Park, “Filteraugment: An Acoustic Environmental Data Augmentation Method,” in ICASSP, May. 2022, pp. 4308–4312.
|
Tissue evolution: Mechanical interplay of adhesion, pressure, and
heterogeneity
Tobias Büscher1, Nirmalendu Ganai1,2, Gerhard Gompper1, Jens Elgeti1*,
1 Theoretical Soft Matter and Biophysics, Institute of Complex Systems and
Institute for Advanced Simulation, Forschungszentrum Jülich, 52425 Jülich,
Germany
2 Department of Physics, Nabadwip Vidyasagar College, Nabadwip, Nadia 741302,
India
*<EMAIL_ADDRESS>
## Abstract
The evolution of various competing cell types in tissues, and the resulting
persistent tissue population, is studied numerically and analytically in a
particle-based model of active tissues. Mutations change the properties of
cells in various ways, including their mechanical properties. Each mutation
results in an advantage or disadvantage to grow in the competition between
different cell types. While changes in signaling processes and biochemistry
play an important role, we focus on changes in the mechanical properties by
studying the result of variation of growth force and adhesive cross-
interactions between cell types. For independent mutations of growth force and
adhesion strength, the tissue evolves towards cell types with high growth
force and low internal adhesion strength, as both increase the homeostatic
pressure. Motivated by biological evidence, we postulate a coupling between
both parameters, such that an increased growth force comes at the cost of a
higher internal adhesion strength or vice versa. This tradeoff controls the
evolution of the tissue, ranging from unidirectional evolution to very
heterogeneous and dynamic populations. The special case of two competing cell
types reveals three distinct parameter regimes: Two in which one cell type
outcompetes the other, and one in which both cell types coexist in a highly
mixed state. Interestingly, a single mutated cell alone suffices to reach the
mixed state, while a finite mutation rate affects the results only weakly.
Finally, the coupling between changes in growth force and adhesion strength
reveals a mechanical explanation for the evolution towards intra-tumor
heterogeneity, in which multiple species coexist even under a constant
evolutionary pressure.
## Introduction
Mutations change the cell fitness and thus its chance to survive and
proliferate [1]. Advantageous mutations are more likely to persist due to
natural selection, which drives the evolution of a tissue towards fitter cells
[2]. Cancer represents an example of evolution on a short time scale [3].
Furthermore, cancer is a multistep process, i.e. several mutations are needed
for a tumor in order to develop and become malignant [4]. Hence, tumorigenesis
might be expected to happen in a serial manner, i.e. a cell acquiring a
”beneficial” mutation and taking over the whole tissue. After some time, a
daughter cell acquires another mutation and again takes over. Interestingly,
however, tumors do not consist of a single cell type, but instead several
subpopulations coexist within the same tumor. This is called intra-tumor
heterogeneity [5].
Each mutation changes certain biochemical properties of a cell. This ranges
from misfunction in the error correction machinery during DNA replication and
disruptions in signaling pathways to epigenetic changes in the expression
level of certain proteins [1, 6, 7]. All these changes can also affect the
mechanical properties of the mutated cell, e.g. mutated cells which express
less adhesion proteins might be able to detach from the primary tumor more
easily [8], necessary to form metastases. On the other hand, mechanics feeds
back onto growth in several ways, e.g. increased apoptosis rate due to
mechanical stresses [9, 10] or dependence of the growth of tissue spheroids on
the properties of the surrounding medium [11, 12, 13].
It is the mechanical contribution to tissue development that we want to focus
on in this work. For mechanically regulated growth, homeostatic pressure plays
an important role [14]. In the homeostatic state, when apoptosis and division
balance each other, a tissue exerts a certain pressure onto its surrounding,
the homeostatic pressure $P_{\text{H}}$. The tissue is able to grow as long as
the external pressure $P$ is smaller than $P_{\text{H}}$. For the competition
between different tissues for space, it has been suggested that the tissue
with the higher homeostatic pressure grows at the expense of the weaker
tissue. Several theoretical studies employ this concept in order to describe
interface propagation between two competing tissues [15, 16, 17]. A metastasis
would need to reach a critical size, below which the additional Laplace
pressure due to surface tension would cause the metastasis to shrink and
disappear [14]. However, reduced adhesion between tissues, which increases
surface tension, leads to an enhanced growth rate at the interface between
them, stabilizing coexistence even for differing homeostatic pressures [18].
In this work, we study the influence of mutations that change the mechanical
properties of cells on the competition dynamics, especially the interplay
between changes in the adhesive properties and the strength with which a cell
pushes onto its surrounding. Particularly interesting is the case where loss
of adhesion comes at the cost of lower growth strength. This is motivated by
the observed down-regulation of E-cadherin, an adhesion protein in epithelia,
in many types of cancer [19]. Interestingly, E-cadherin is also involved in
signaling processes connected to cell growth [20]. We find that in this case
several cell types with different mechanical properties can coexist and that
the cell type with the highest homeostatic pressure does not necessarily
dominate the competition.
## Results
Several models have been developed previously in order to study tissue growth
[21], in combination with different simulation techniques, including vertex
[22, 23] and particle-based [24, 25] models as well as Cellular Potts models
[26, 27]. We employ the two particle growth (2PG) model of Refs. [28, 29, 18].
A cell is described by two particles which repel each other via a growth force
$\textbf{{F}}_{ij}^{\text{G}}=\frac{G}{(r_{ij}+r_{0})^{2}}\hat{\textbf{{r}}}_{ij}\text{,}$
(1)
with strength $G$, unit vector $\hat{\textbf{{r}}}_{ij}$, distance $r_{ij}$
between the two particles and a constant $r_{0}$. Different cells interact via
a soft repulsive force $\textbf{{F}}_{ij}^{\text{V}}$ on short distances,
maintaining an excluded volume, and a constant attractive force
$\textbf{{F}}_{ij}^{\text{A}}$ on intermediate distances, modeling cell-cell
adhesion, with
$\left.\begin{array}{ll}\textbf{{F}}_{ij}^{\text{V}}&=f_{0}\left(\frac{R_{\text{PP}}^{5}}{r_{ij}^{5}}-1\right)\hat{\textbf{{r}}}_{ij}\\ \textbf{{F}}_{ij}^{\text{A}}&=-f_{1}\hat{\textbf{{r}}}_{ij}\end{array}\right\}\quad\text{for }r_{ij}<R_{\text{PP}}\text{,}$ (2)
with exclusion coefficient $f_{0}$, adhesion strength coefficient $f_{1}$, and
cut-off length $R_{\text{PP}}$. A cell divides when the distance between its
two particles reaches a size threshold $r_{\text{ct}}$. A new particle is then
placed close (randomly within a short distance $r_{\text{d}}$) to each of the
two particles of the divided cell. Each of these pairs then constitutes a new
cell. Apoptosis is modeled by removing cells randomly at a constant rate
$k_{\text{a}}$.
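For illustration, a minimal sketch of the pairwise forces of the 2PG model follows; parameter names match Eqs. (1)-(2), while neighbor search, the dissipative thermostat, and time integration are omitted.

```python
import numpy as np

def growth_force(ri, rj, G, r0):
    """Repulsive growth force between the two particles of one cell, Eq. (1)."""
    d = rj - ri
    r = np.linalg.norm(d)
    return G / (r + r0) ** 2 * (d / r)        # force acting on particle j

def cell_cell_force(ri, rj, f0, f1, R_PP):
    """Volume exclusion plus constant adhesion between cells, Eq. (2)."""
    d = rj - ri
    r = np.linalg.norm(d)
    if r >= R_PP:
        return np.zeros_like(d)
    repulsion = f0 * (R_PP**5 / r**5 - 1.0)   # soft repulsion at short range
    return (repulsion - f1) * (d / r)         # net radial force on particle j
```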
We employ a dissipative particle dynamics-type thermostat, with an effective
temperature $T$, to account for energy dissipation and random fluctuations. We
choose the value of $T$ such that cells can escape local minima, but other
thermal effects are negligible. Note that all parameters can be set
individually for each cell type as well as between different cell types for
inter-cell interactions. We only vary the growth-force strength $G^{\alpha}$
and adhesion strength $f_{1}^{\alpha\beta}$ between cells of the same
($\alpha=\beta$) and different ($\alpha\neq\beta$) cell types, respectively,
where $\alpha$ and $\beta$ are cell-type numbers. We report simulation
parameters relative to a standard host cell type (see Materials and methods
for numerical values), denoted with a dagger, e.g. $G^{\dagger}=G/G^{0}$. Time
is measured in terms of the inverse apoptosis rate $k_{\text{a}}$, distance in
units of the pair potential cut-off length $R_{\text{PP}}$ and stresses in
units of $G^{0}/R_{\text{PP}}^{4}$. Quantities reported in these units are
denoted by an asterisk ∗. All simulations are performed in a cubic box with
edge length $L=12\cdot R_{\text{PP}}$ and periodic boundary conditions in all
directions, unless stated otherwise.
Fig 1: Evolution of a tissue with mutations altering growth-force strength
$G^{\dagger}$ and adhesion strength $f_{1}^{\dagger}$ independently. Heatmaps
display cell-number fractions $\phi_{\alpha}$ after a) zero generations
(initial condition), b) 50 generations, c) 100 generations and d) 125
generations.
Tumor cells even within the same tumor are not all identical, but vary in
all kinds of attributes, e.g. expression levels of different proteins
[30] or their reaction to certain treatments [31]. Hence, there is not only a
competition between the tumor and the host, but also between cell
subpopulations of the tumor. Different models exist to describe tumor
heterogeneity, e.g. cancer stem cells [32] or clonal evolution [33]. In the
latter case, a tumor originates from a single mutated cell, which can acquire
additional mutations over time, yielding additional subpopulations. We model
this behaviour by defining a fixed number $n$ of different “genotypes”, each
having a different growth-force strength $G^{\alpha}$ and adhesion strength
$f_{1}^{\alpha\alpha}$. Mutations are implemented by offering each daughter
cell after a division event the chance to change its genotype with a certain
probability.
In tissues, several adhesion mechanisms exist, serving a variety of different
functions to maintain tissue integrity. Between epithelial cells, the strength
of cell-cell adhesion is to a large degree regulated by anchoring junctions,
e.g. adherens junctions, which connect the actin cytoskeletons of neighbouring
cells. Adherens junctions are mediated by cadherins, which form homophilic
bonds between cells. Thus, the strength of adhesion between cells is limited
by the cell expressing less cadherin, or, in terms of our simulation model
$f_{1}^{\alpha\beta}=\min(f_{1}^{\alpha\alpha},f_{1}^{\beta\beta})$. A reduced
adhesion strength yields a higher homeostatic pressure [29], which is
otherwise dominated by the growth-force strength $G$. For free parameter
evolution, the tissue thus evolves to a strong-growing and low-adhesive
genotype (see Fig 1), as predicted by the homeostatic pressure approach [14].
Fig 2: Time evolution of the cell-number fractions $\phi_{\alpha}$ of each
genotype for tradeoff parameter a) $\tau=0$, b) $\tau=1$, c) $\tau=2$ and d)
$\tau\rightarrow\infty$, $d\rightarrow 0$. Simulations start from a host
(standard) tissue at homeostasis, with $n=21$ genotypes, $p_{\text{m}}=0.01$
in all and $d=0.025$ in a)-c). White space corresponds to times where no cells
of the genotype exist. Color is coded on a logarithmic scale. Curves above
display the homeostatic pressure $P_{\text{H}}^{\alpha*}$ (black solid),
growth-force strength $G^{\alpha\dagger}$ (red dashed) and self-adhesion
strength $f_{1}^{\alpha\alpha\dagger}$ (green dotted) of the corresponding
genotype.
However, E-cadherin also plays a role in signaling processes connected to cell
growth, and thus a reduced expression might come at the cost of a lower
growth-force strength $G$, which in turn yields a lower homeostatic pressure.
We thus turn our attention to the case where an increase in growth-force
strength $G^{\alpha}$ comes at the cost of a higher self-adhesion strength
$f_{1}^{\alpha\alpha}$. We assume the relations as
$G^{\alpha}=(1+D^{\alpha})G^{0},$ (3)
$f_{1}^{\alpha\alpha}=(1+D^{\alpha}\cdot\tau)f_{1}^{0}\text{,}$ (4)
with genotype number $\alpha$ in the range $[-(n-1)/2,(n-1)/2]$, evolutionary
distance $D^{\alpha}=d\cdot\alpha$, distance $d$ between neighbouring
genotypes, and tradeoff parameter $\tau$ (with
$G^{\alpha},f_{1}^{\alpha\alpha}>0\ \forall\ \alpha$). After a division event,
each daughter cell may mutate into a new genotype with probability
$p_{\text{m}}$. If the cell mutates, its genotype number is changed to
$\alpha_{\text{mother}}\pm 1$ randomly. This yields a mutation rate
$k_{\text{m}}=2p_{\text{m}}k_{\text{a}}$.
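The genotype parameterization and mutation rule can be sketched as follows; clipping at the boundary genotypes is an assumption about how the finite range of $\alpha$ is enforced.

```python
import numpy as np

def genotype_params(alpha, G0, f1_0, d, tau):
    """Eqs. (3)-(4): growth force and self-adhesion of genotype alpha."""
    D = d * alpha                              # evolutionary distance
    return (1 + D) * G0, (1 + D * tau) * f1_0

def maybe_mutate(alpha_mother, n, p_m, rng):
    """Each daughter cell shifts its genotype by +/-1 with probability p_m."""
    if rng.random() < p_m:
        lo, hi = -(n - 1) // 2, (n - 1) // 2
        return int(np.clip(alpha_mother + rng.choice([-1, 1]), lo, hi))
    return alpha_mother

# Cross-adhesion follows the cadherin argument:
# f1[a, b] = min(f1[a, a], f1[b, b]).
```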
Figure 2 displays results of such simulations for four different cases: only
variation of growth-force strength ($\tau=0$), balanced tradeoff ($\tau=1$),
adhesion strength varied twice as much as growth-force strength ($\tau=2$) and
only variation of adhesion strength ($\tau\rightarrow\infty$). Without
tradeoff (Fig 2a)), the tissue evolves towards the strongest growing genotype
or, equivalently, the one with the highest homeostatic pressure. Similarly,
for $\tau\rightarrow\infty$ (Fig 2d)), the system evolves towards the lowest
adhesive genotype (again, the one with the highest $P_{\text{H}}$). We find
the most dynamic evolution for a balanced tradeoff (Fig LABEL:snapshot and
2b)). At first, the system evolves to stronger growing and more adhesive
genotypes. Over time a noticeable fraction of cells also evolves towards weak-
growing, less adhesive genotypes. The cell-number fractions
$\phi_{\alpha}=N_{\alpha}/N$ (with individual and total number of cells,
$N_{\alpha}$ and $N$), show large fluctuations (see Fig LABEL:snapshotb) and
c)), with individual genotypes not being populated at all for certain time
periods. Besides this highly dynamic temporal evolution, after an initial time
period the system is dominated by genotypes with increased growth force and
adhesion strength at all times, with the one at the upper boundary having the
highest cell-number fraction for most of the time (see Fig LABEL:snapshota)).
This result comes as a surprise, as this is also the genotype with the lowest
homeostatic pressure, while the one at the lower boundary, which is basically
never populated, has the highest $P_{\text{H}}$. For a higher tradeoff (Fig
2c)), we still find a broad distribution of genotypes, with less adhesive
genotypes dominating over the stronger growing ones, i.e. the loss in growth-
force strength is overcompensated by a lower adhesion strength.
In order to gain insight into the underlying mechanism of this dynamic
evolution, we study the competition between two genotypes and no mutations
($p_{\text{m}}=0$). Simulations are started from a single mutated cell (with
increased/decreased growth force and adhesion strength) in a host tissue at
the homeostatic state (we label the mutant with M and the host (wild type)
with W). Even in this simplified case, we find one parameter regime in which
the mutant is not able to grow, one regime with stable coexistence in a highly
mixed state and another regime in which the mutant outcompetes the host.
Figure LABEL:number_fractions shows the averaged number fractions of the
mutant at the steady state. For reduced growth force and adhesion strength
(Fig LABEL:number_fractionsa)), the mutant can only grow against the host if
its adhesion strength is reduced below a critical $f_{1}^{\text{crit}}$. In
terms of Eq. (4), the value of $f_{1}^{\text{crit}}$ roughly corresponds to a
balanced tradeoff ($\tau\approx 1$). Already for
$f_{1}^{\text{MM}}>f_{1}^{\text{crit}}$, the homeostatic pressure of the
mutant exceeds the one of the host, i.e. a parameter regime exists in which
the mutant is not able to grow, despite the higher $P_{\text{H}}$. The
reverse happens when growth force and adhesion strength are increased. The
mutant completely takes over the compartment, although its homeostatic
pressure is smaller than that of the host. Again, coexistence is only found
when the adhesion strength is increased above $f_{1}^{\text{crit}}$. In the
coexistence regime, the mutant number fraction scales as
$\phi^{\text{M}}\propto 1/(f_{1}^{\text{MM}}-f_{1}^{\text{WW}})$.
Altogether, the competition between two genotypes alone yields the same
qualitative results as the more complex multi-genotype case discussed before.
Still, the question remains how a genotype with lower homeostatic pressure can
outcompete a stronger genotype. The answer can only lie in the adhesion
strength $f_{1}^{\text{MW}}=\min(f_{1}^{\text{MM}},f_{1}^{\text{WW}})$ between
mutant and host cells. This choice of cross-adhesion strength breaks symmetry,
as the stronger adhering genotype has more free space at the interface, which
favors divisions [18].
To address this question, we develop a phenomenological model which
incorporates pressure-dependent growth as well as interfacial effects, in
order to obtain a qualitative explanation of the simulation results.
We start with the expansion of the bulk growth rate $k_{\text{b}}$ around the
homeostatic pressure,
$k_{\text{b}}=\kappa(P-P_{\text{H}})\text{,}$ (5)
with the pressure response coefficient $\kappa$. Due to the high degree of
mixing, the number fractions $\phi^{\text{M/W}}$ and hence the strength of
interfacial effects vary locally. In a mean-field approximation, we take the
interfacial effects to be proportional to
$\phi^{\text{M}}(1-\phi^{\text{M}})$, with individual prefactors $\Delta
k_{\text{s}}^{\text{M/W}}$ for each genotype. The time evolution is then given
by
$\partial_{t}\phi^{\text{M}}=\kappa(P_{\text{H}}^{\text{M}}-P)\phi^{\text{M}}+\Delta k_{\text{s}}^{\text{M}}\phi^{\text{M}}(1-\phi^{\text{M}})$ (6)
$\partial_{t}(1-\phi^{\text{M}})=\kappa(P_{\text{H}}^{\text{M}}+\Delta P_{\text{H}}-P)(1-\phi^{\text{M}})+\Delta k_{\text{s}}^{\text{W}}\phi^{\text{M}}(1-\phi^{\text{M}})\text{,}$ (7)
with the difference in homeostatic pressure $\Delta
P_{\text{H}}=P_{\text{H}}^{\text{W}}-P_{\text{H}}^{\text{M}}$. Since
$\partial_{t}[\phi^{\text{M}}+(1-\phi^{\text{M}})]=0$, adding Eqs. (6) and (7)
yields the pressure
$P=P_{\text{H}}^{\text{W}}-\Delta P_{\text{H}}\phi^{\text{M}}+\frac{\Delta k_{\text{s}}^{\text{M}}+\Delta k_{\text{s}}^{\text{W}}}{\kappa}\phi^{\text{M}}(1-\phi^{\text{M}})\text{.}$ (8)
Thus, the pressure is given by the homeostatic pressures of the two genotypes
weighted by their number fractions plus an interfacial term. A figure
displaying the pressure measured during the simulations shown in Fig
LABEL:number_fractions can be found in the S1 Appendix. Insertion of Eq. (8)
into Eq. (6) yields a differential equation for the number fraction with three
fixed points ($\partial_{t}\phi^{\text{M}}=0$): $\phi_{1}^{\text{M}}=0$,
$\phi_{2}^{\text{M}}=1$, and
$\phi_{3}^{\text{M}}=\frac{-\kappa\Delta P_{\text{H}}+\Delta k_{\text{s}}^{\text{M}}}{\Delta k_{\text{s}}^{\text{M}}+\Delta k_{\text{s}}^{\text{W}}}\text{.}$ (9)
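This calculation can be verified symbolically; the short sympy check below inserts Eq. (8) into Eq. (6) and recovers the three fixed points.

```python
import sympy as sp

phi, kappa, dPH, PHM, dkM, dkW = sp.symbols(
    'phi kappa dPH PHM dkM dkW', real=True)

# Eq. (8), using P_H^W = P_H^M + Delta P_H
P = (PHM + dPH) - dPH * phi + (dkM + dkW) / kappa * phi * (1 - phi)
# Eq. (6)
dphi_dt = kappa * (PHM - P) * phi + dkM * phi * (1 - phi)

print(sp.solve(sp.Eq(dphi_dt, 0), phi))
# [0, 1, (dkM - dPH*kappa)/(dkM + dkW)], i.e. Eq. (9)
```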
We discuss this result for the case of reduced growth force and adhesion
strength of the mutant. $\Delta k_{\text{s}}^{\text{M}}$ might be expected to
vanish, as $f_{1}^{\text{MM}}=f_{1}^{\text{MW}}$ and mutant cells thus would
not feel whether neighbouring cells are mutant or host cells. However, in
order to grow, a cell needs to impose a strain on its surrounding. Host cells
adhere more strongly to each other, thus it is harder for a mutant cell to
impose a strain when surrounded by host cells. Hence, $\Delta
k_{\text{s}}^{\text{M}}$ is actually negative and the homeostatic pressure of
the mutant needs to exceed the host pressure by $-\Delta
k_{\text{s}}^{\text{M}}/\kappa$ in order to be able to grow against the host.
At this point, $\phi_{3}^{\text{M}}$ becomes positive, as long as $\Delta
k_{\text{s}}^{\text{M}}+\Delta k_{\text{s}}^{\text{W}}>0$. Host cells can
impose a strain more easily when surrounded by mutant cells and, additionally,
have more free space than when surrounded by other host cells. Hence, $|\Delta
k_{\text{s}}^{\text{M}}|<\Delta k_{\text{s}}^{\text{W}}$ and the above
mentioned condition is fulfilled. Similarly, coexistence can be found for
increased growth force and adhesion strength when $\Delta P_{\text{H}}>-\Delta
k_{\text{s}}^{\text{W}}/\kappa$. The above mentioned scaling of the mutant
number fraction can be obtained by an expansion of $\Delta P_{\text{H}}$ and
$\Delta k_{\text{s}}^{\text{M/W}}$ to linear order in terms of
$\epsilon:=(f_{1}^{\text{MM}}-f_{1}^{\text{WW}})/f_{1}^{\text{WW}}$ in Eq.
(9),
$\phi_{3}^{\text{M}}=\frac{-\kappa\Delta P_{\text{H}}^{0}}{(\Delta k_{\text{s}}^{\text{M1}}+\Delta k_{\text{s}}^{\text{W1}})\epsilon}+\frac{-\kappa\Delta P_{\text{H}}^{1}+\Delta k_{\text{s}}^{\text{M1}}}{\Delta k_{\text{s}}^{\text{M1}}+\Delta k_{\text{s}}^{\text{W1}}}\text{.}$ (10)
The zeroth order terms of $\Delta k_{\text{s}}^{\text{M/W}}$ vanish as there
are no interfacial effects when the adhesion strength between host and mutant
cells is equal to their self-adhesion strength, while $\Delta
P_{\text{H}}^{0}$ can be non-zero due to a changed growth-force strength.
Indeed, Eq. (10) reproduces the simulation data reasonably well (see Fig
LABEL:number_fractions). A discussion of the numerical values of the fitted
parameters and additional results can be found in the S1 Appendix.
Figure LABEL:slope displays similar results as shown in Fig
LABEL:number_fractions, but now as a function of the tradeoff $\tau$ in Eq.
(4). For $\tau<1$ the genotype with higher growth-force strength outcompetes
the weaker genotype, for $1<\tau<2$ a transition towards the less adhesive
genotype occurs, while for even higher values of the tradeoff $\tau>2$ the
less adhesive genotype outcompetes the second genotype. This transition from
strongly growing, adhesive to weakly growing, less adhesive genotypes is found
in the same range of $\tau$ as in the competition between many genotypes.
Hence, the simplified case of two competing genotypes captures the essential
physics to explain the coexistence between many competing genotypes and,
additionally, provides a quantitative description.
Next, we turn our attention to the effect of a finite mutation rate on the
evolution of the system. Figure LABEL:mutationrate_clustera) shows the number
fraction of the mutant as a function of $k_{\text{m}}$ for different
combinations of evolutionary distance $D^{\alpha}$ and tradeoff $\tau$, in
comparision to the number fraction reached for a single mutation event. As
expexted, the number fraction converges towards $1/2$ with increasing
$k_{\text{m}}$ for all combinations. For moderate mutation rates, however, the
number fraction largely fluctuates around the same average as of a single
mutation event. The single mutation leads to a stable coexistence of the two
genotypes - additional mutations quickly relax back to this state.
Significant deviations occur only if, in the steady state of the single
mutation event the number fraction of the weaker genotype is close to zero. In
that case, the weaker genotype consists only of one or very few small cohesive
clusters of cells, because cells of the weaker genotype need to detach from
the primary cluster in order to form new clusters, but are likely to die when
they do so, as they are only surrounded by cells of the stronger genotype.
Hence, the distribution of cells is highly non-homogeneous. Compared to the
single mutation event, even a small mutation rate leads to the formation of
multiple small clusters all over the system, thus increasing the number
fraction of the weaker genotype (see Fig LABEL:mutationrate_clusterb) for a
comparison in terms of the number of clusters and Materials and methods for
further discussion). This result explains why at least two genotypes, in
addition to the dominating genotype, are populated as well in the cases shown
in Fig 2a) and d). When the number fractions of both genotypes are
sufficiently large (for $1\leq\tau\leq 2$), deviations from the average of a
single mutation are still small for the standard mutation probability.
Additionally, in the competitions between many genotypes, mutations change the
genotype to $\alpha\pm 1$ randomly and not in a preferred direction. Hence, we
conclude that the precise value of the mutation probability does not play an
important role in the regime where we find a heterogeneous distribution of
genotypes, as long as it is reasonably small ($k_{\text{m}}\ll k_{\text{a}}$).
Given that a single mutated cell can grow to tissue of macroscopic size in a
certain parameter regime for
$f_{1}^{\text{MW}}=\min(f_{1}^{\text{MM}},f_{1}^{\text{WW}})$, the question
arises how likely it is to actually reach this state. In order to study this
probability, we mutate again a single cell in a host tissue at its homeostatic
state. A mutation that reaches a certain threshold $N_{\text{t}}=20$ of cells
counts as a survival event (the chance to die after reaching this threshold
becomes extremely small), apoptosis of the last mutant cell as a death event.
Figure LABEL:survival shows the averages of many such simulations. For reduced
growth force and adhesion strength, the survival probability $p_{\text{s}}$ is
only non-zero below the critical adhesion strength $f_{1}^{\text{crit}}$. For
$f_{1}^{\text{MM}}<f_{1}^{\text{crit}}$, $p_{\text{s}}$ increases linearly
with further decreasing adhesion strength. On the other hand, when growth
force and adhesion strength are increased, the survival probability first
shows a plateau, whose value increases with increasing growth force strength,
from which it will probably drop to zero with further increase. Simulations in
this regime are difficult, because a mutated cell can easily grow to a few
cells, but will hardly reach the number threshold nor completely vanish again.
Due to the high self-adhesion strength on the one hand, it becomes hard to
detach from the other cells, but on the other hand easy to grow against the
host when only few or no other mutant cells are around. This explains the
larger error bars at the highest values of the adhesion strength, where the
sample size is small.
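Schematically, the survival probability is estimated as below; `run_competition` stands in for one stochastic tissue simulation and is a placeholder, not part of the model code.

```python
def survival_probability(run_competition, n_runs: int = 200, N_t: int = 20) -> float:
    """Fraction of runs in which the mutant reaches the threshold N_t before
    its last cell undergoes apoptosis."""
    survived = sum(1 for _ in range(n_runs) if run_competition(N_t) >= N_t)
    return survived / n_runs
```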
## Discussion
We have shown how intra-tumor heterogeneity, the existence of multiple
subpopulations within the same tumor, can arise due to mechanical interactions
alone. The simultaneous change of the adhesion and growth-force strength
stabilizes the coexistence of multiple subpopulations, in a highly dynamic
state. A higher growth-force strength alone, as well as a lower adhesion
strength, favor proliferation of a single subpopulation and the evolution of
the system to cell types with the highest growth-force strength, or lowest
adhesion strength, respectively. A tradeoff between the two, however, yields
coexistence between multiple subpopulations of different cell types.
Interestingly, the expression of the adhesion protein E-cadherin, which also
affects cell growth, has been found to be down-regulated in many real tumors
[19].
The simulations also reveal that the homeostatic pressure of a cell type is
not necessarily the only quantity that determines the result of a competition.
Interactions between different cell types, in our model determined by the
adhesion between them, can lead to a completely reverse outcome, i.e. a cell
type with lower homeostatic pressure can outcompete a stronger one completely.
A phenomenological model explains the results on a qualitative level. The
evolution of each cell type is governed by mechanically-regulated growth,
while mutation rates only play a minor role in the dynamics.
An interesting future aspect to be studied is the influence of open
boundaries. A tissue with a negative homeostatic pressure then naturally grows
to a spheroid of finite size, with an enhanced rate of division at the surface
[29]. For competing cell types, this would lead to an interplay between
surface and interfacial effects.
## Materials and methods
### Standard (host) tissue and simulation parameters
We define a set of reference simulation parameters, which we refer to as host
parameters. Table 1 shows the values in simulation units. In simulations we
keep the host W fixed and vary the parameters of the mutant M around the
values of the host.
Table 1: Simulation parameters and measured properties of the standard (host)
tissue.

Parameter | Symbol | Value
---|---|---
Time step | $\Delta t$ | $10^{-3}$
Pair potential interaction range | $R_{\text{PP}}$ | $1$
Cellular expansion pressure constant | $r_{0}$ | 1
Cell division distance threshold | $r_{\text{ct}}$ | $0.8$
New cell particle initial distance | $r_{\text{d}}$ | 0.00001
Growth-force strength | $G$ | $40$
Mass | $m$ | 1
Intracell dissipation coefficient | $\gamma_{\text{c}}$ | $100$
Intercell dissipation coefficient | $\gamma_{\text{t}}$ | $50$
Background dissipation coefficient | $\gamma_{\text{b}}$ | $0.1$
Apoptosis rate | $k_{\text{a}}$ | 0.01
Mutation probability | $p_{\text{m}}$ | 0.01
Noise intensity | $k_{\text{B}}T$ | 0.1
Repulsive cell-cell potential coefficient | $f_{0}$ | $2.39566$
Attractive cell-cell potential coefficient | $f_{1}$ | $6.0$
Isothermal compressibility | $\beta_{\text{T}}$ | 1
Relaxation time constant | $t_{\text{P}}$ | $1$
Homeostatic pressure | $P_{\text{H}}^{*}$ | $0.1321\pm 0.0005$
Pressure response coefficient | $\kappa^{*}$ | $2.676\pm 0.080$
### Cluster analysis
As explained in the results section, a constant rate of mutation leads to an
enhanced formation of clusters when the weaker genotype is barely able to grow
against the stronger genotype and consists of only one or few clusters for a
single mutation event. We define a cluster as all cells of the same genotype
that are in interaction range to at least one other member of the cluster
(DBSCAN clustering algorithm with number of minimal points equal to one).
Figure LABEL:mutationrate_clusterb) displays the number of clusters of the
weaker genotype in the competitions displayed in Fig
LABEL:mutationrate_clustera), in comparison to the result of a single mutation
event. Indeed, when the number fraction of the weaker genotype is small for
the single mutation event ($\tau=1$), we find significant deviations even for
small mutation rates. In this case, the number of clusters first strongly
increases with mutation rate, with roughly a tenfold increase at the peak. For
even higher mutation probability, the number of clusters decreases again, due
to merging of clusters, finally leading to percolation.
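A minimal sketch of this cluster count, using the DBSCAN implementation in scikit-learn with `eps` set to the interaction range $R_{\text{PP}}$ and `min_samples=1` (the cell positions and genotype labels below are placeholders):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_clusters(positions, genotypes, target, r_pp=1.0):
    """Count clusters of one genotype: with min_samples=1 every cell is
    a DBSCAN core point, so clusters are exactly the connected
    components of the graph linking cells closer than R_PP."""
    cells = positions[genotypes == target]
    if len(cells) == 0:
        return 0
    labels = DBSCAN(eps=r_pp, min_samples=1).fit_predict(cells)
    return int(labels.max()) + 1

# Placeholder configuration: 200 cells in 3D with genotypes 0 and 1.
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 20.0, size=(200, 3))
genotypes = rng.integers(0, 2, size=200)
print(count_clusters(positions, genotypes, target=1))
```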
## Supporting information
#### S1 Appendix.
Additional Results. Appendix containing additional results and the
corresponding figures.
## Acknowledgments
The authors gratefully acknowledge the computing time granted through JARA-HPC
on the supercomputer JURECA [35] at Forschungszentrum Jülich.
## References
* 1. Weinberg RA. The biology of cancer. Garland Publishing, Inc.; 2007.
* 2. Greaves M, Maley CC. Clonal evolution in cancer. Nature 2012;481(7381):306.
* 3. Bozic I, Reiter JG, Allen B, Antal T, Chatterjee K, Shah P, et al. Evolutionary dynamics of cancer in response to targeted combination therapy. eLife. 2013;2:e00747. doi:10.7554/eLife.00747.
* 4. Vogelstein B, Kinzler KW. The multistep nature of cancer. Trends Genet. 1993;9(4):138 – 141. doi:https://doi.org/10.1016/0168-9525(93)90209-Z.
* 5. Heppner GH. Tumor Heterogeneity. Cancer Res. 1984;44(6):2259–2265.
* 6. Preston BD, Albertson TM, Herr AJ. DNA replication fidelity and cancer. Semin Cancer Biol. 2010;20(5):281 – 293. doi:https://doi.org/10.1016/j.semcancer.2010.10.009.
* 7. Schnekenburger M, Diederich M. Epigenetics Offer New Horizons for Colorectal Cancer Prevention. Curr Colorectal Cancer Rep. 2012;8(1):66–81. doi:10.1007/s11888-011-0116-z.
* 8. Petrova YI, Schecterson L, Gumbiner BM. Roles for E-cadherin cell surface regulation in cancer. Mol Biol Cell. 2016;27(21):3233–3244. doi:10.1091/mbc.E16-01-0058.
* 9. Wernig F, Xu Q. Mechanical stress-induced apoptosis in the cardiovascular system. Prog Biophys Mol Bio. 2002;78(2):105 – 137. doi:https://doi.org/10.1016/S0079-6107(02)00008-1.
* 10. Cheng G, Tse J, Jain RK, Munn LL. Micro-environmental mechanical stress controls tumor spheroid size and morphology by suppressing proliferation and inducing apoptosis in cancer cells. PLoS One. 2009;4(2):e4632. doi:10.1371/journal.pone.0004632.
* 11. Montel F, Delarue M, Elgeti J, Malaquin L, Basan M, Risler T, et al. Stress Clamp Experiments on Multicellular Tumor Spheroids. Phys Rev Lett. 2011;107(18):188102. doi:10.1103/PhysRevLett.107.188102.
* 12. Alessandri K, Sarangi BR, Gurchenkov VV, Sinha B, Kiesling TR, Fetler L, et al. Cellular capsules as a tool for multicellular spheroid production and for investigating the mechanics of tumor progression in vitro. Proc Natl Acad Sci USA. 2013;110(37):14843–14848. doi:10.1073/pnas.1309482110.
* 13. Helmlinger G, Netti PA, Lichtenbeld HC, Melder RJ, Jain RK. Solid stress inhibits the growth of multicellular tumor spheroids. Nat Biotechnol. 1997;15(8):778–783. doi:10.1038/nbt0897-778.
* 14. Basan M, Risler T, Joanny JF, Sastre-Garau X, Prost J. Homeostatic competition drives tumor growth and metastasis nucleation. HFSP J. 2009;3(4):265–272. doi:10.2976/1.3086732.
* 15. Williamson JJ, Salbreux G. Stability and Roughness of Interfaces in Mechanically Regulated Tissues. Phys Rev Lett. 2018;121:238102. doi:10.1103/PhysRevLett.121.238102.
* 16. Ranft J, Aliee M, Prost J, Jülicher F, Joanny JF. Mechanically driven interface propagation in biological tissues. New J Phys. 2014;16(3):035002.
* 17. Podewitz N, Jülicher F, Gompper G, Elgeti J. Interface dynamics of competing tissues. New J Phys. 2016;18(8):083020.
* 18. Ganai N, Büscher T, Gompper G, Elgeti J. Mechanics of tissue competition: interfaces stabilize coexistence. New J Phys. 2019;21(6):063017. doi:10.1088/1367-2630/ab2475.
* 19. Beavon IRG. The E-cadherin–catenin complex in tumour metastasis: structure, function and regulation. Eur J Cancer. 2000;36(13):1607 – 1620. doi:https://doi.org/10.1016/S0959-8049(00)00158-1.
* 20. Pece S, Gutkind JS. Signaling from E-cadherins to the MAPK Pathway by the Recruitment and Activation of Epidermal Growth Factor Receptors upon Cell-Cell Contact Formation. J Biol Chem. 2000;275(52):41227–41233. doi:10.1074/jbc.M006578200.
* 21. Van Liedekerke P, Palm MM, Jagiella N, Drasdo D. Simulating tissue mechanics with agent-based models: concepts, perspectives and some novel results. Comp Part Mech. 2015;2(4):401–444. doi:10.1007/s40571-015-0082-3.
* 22. Farhadifar R, Röper JC, Aigouy B, Eaton S, Jülicher F. The Influence of Cell Mechanics, Cell-Cell Interactions, and Proliferation on Epithelial Packing. Curr Biol. 2007;17(24):2095 – 2104. doi:https://doi.org/10.1016/j.cub.2007.11.049.
* 23. Alt S, Ganguly P, Salbreux G. Vertex models: From cell mechanics to tissue morphogenesis. Phil Trans R Soc B. 2017;372:20150520. doi:10.1098/rstb.2015.0520.
* 24. Drasdo D, Höhme S. A single-cell-based model of tumor growth in vitro: monolayers and spheroids. Phys Biol. 2005;2(3):133–147. doi:10.1088/1478-3975/2/3/001.
* 25. Schaller G, Meyer-Hermann M. Multicellular tumor spheroid in an off-lattice Voronoi-Delaunay cell model. Phys Rev E. 2005;71(5 Pt 1):051910.
* 26. Graner F, Glazier JA. Simulation of biological cell sorting using a two-dimensional extended Potts model. Phys Rev Lett. 1992;69:2013–2016. doi:10.1103/PhysRevLett.69.2013.
* 27. Szabó A, Merks RM. Cellular Potts Modeling of Tumor Growth, Tumor Invasion, and Tumor Evolution. Front Oncol. 2013;3:87. doi:10.3389/fonc.2013.00087.
* 28. Basan M, Prost J, Joanny JF, Elgeti J. Dissipative particle dynamics simulations for biological tissues: rheology and competition. Phys Biol. 2011;8(2):026014. doi:10.1088/1478-3975/8/2/026014.
* 29. Podewitz N, Delarue M, Elgeti J. Tissue homeostasis: A tensile state. EPL. 2015;109(5):58005.
* 30. Marusyk A, Almendro V, Polyak K. Intra-tumour heterogeneity: a looking glass for cancer? Nat Rev Cancer. 2012;12:323–334.
* 31. Marusyk A, Polyak K. Tumor heterogeneity: Causes and consequences. Biochim Biophys Acta. 2010;1805(1):105 – 117. doi:https://doi.org/10.1016/j.bbcan.2009.11.002.
* 32. Shackleton M, Quintana E, Fearon ER, Morrison SJ. Heterogeneity in Cancer: Cancer Stem Cells versus Clonal Evolution. Cell. 2009;138(5):822 – 829. doi:https://doi.org/10.1016/j.cell.2009.08.017.
* 33. Nowell P. The clonal evolution of tumor cell populations. Science. 1976;194(4260):23–28. doi:10.1126/science.959840.
* 34. Allen MP, Tildesley DJ. Computer simulation of liquids. Oxford: Clarendon Press; 1989.
* 35. Jülich Supercomputing Centre. JURECA: Modular supercomputer at Jülich Supercomputing Centre. Journal of large-scale research facilities. 2018;4(A132). doi:10.17815/jlsrf-4-121-1.
# Discovery of a Rare Eclipsing Be/X-ray Binary System, Swift J010902.6-723710
= SXP 182
Thomas M. Gaudin Pennsylvania State University Jamie A. Kennea Pennsylvania
State University M.J. Coe Physics and Astronomy, The University of
Southampton, SO17 1BJ, UK I. M. Monageng South African Astronomical
Observatory, PO Box 9, Observatory, Cape Town 7935, South Africa Department
of Astronomy, University of Cape Town, Private Bag X3, Rondebosch 7701, South
Africa Andrzej Udalski Astronomical Observatory, University of Warsaw, Al.
Ujazdowskie 4, 00-478 Warszawa, Poland L. J. Townsend Southern African Large
Telescope, PO Box 9, Observatory, Cape Town 7935, South Africa South African
Astronomical Observatory, PO Box 9, Observatory, Cape Town 7935, South Africa
David A.H. Buckley South African Astronomical Observatory, PO Box 9,
Observatory, Cape Town 7935, South Africa Southern African Large Telescope,
PO Box 9, Observatory, Cape Town 7935, South Africa Department of Astronomy,
University of Cape Town, Private Bag X3, Rondebosch 7701, South Africa
Department of Astronomy, University of the Free State, PO Box 339,
Bloemfontein, Cape Town, 9300, South Africa Phil A. Evans Leicester
University, UK
###### Abstract
We report on the discovery of Swift J010902.6-723710, a rare eclipsing
Be/X-ray Binary system, by the Swift SMC Survey (S-CUBED). Swift
J010902.6-723710 was discovered via weekly S-CUBED monitoring observations
when it was observed to enter a state of X-ray outburst on 10 October 2023.
X-ray emission was found to be modulated by a 182s period. Optical
spectroscopy is used to confirm the presence of a highly-inclined
circumstellar disk surrounding a B0-0.5Ve optical companion. Historical UV and
IR photometry are then used to identify strong eclipse-like features re-
occurring in both light curves with a 60.623 day period, which is adopted as
the orbital period of the system. Eclipsing behavior is found to be the result
of a large accretion disk surrounding the neutron star. Eclipses are produced
when the disk passes in front of the OBe companion, blocking light from both
the stellar surface and circumstellar disk. This is only the third Be/X-ray
Binary to have confirmed eclipses. We note that this rare behavior provides an
important opportunity to constrain the physical parameters of a Be/X-ray
Binary with greater accuracy than is possible in non-eclipsing systems.
††facilities: Swift (XRT and UVOT), OGLE, SALT
## 1 Introduction
Be/X-ray Binaries (BeXRBs) are a type of interacting High Mass X-ray Binary
(HMXB) that contain a main sequence OBe star and a compact object, typically a
neutron star (NS). These binaries are characterized by moderately eccentric
elliptical orbits ($e\sim 0.3$), orbital periods on the order of $\sim$10-1000
days, and the presence of strong emission lines such as the Balmer series
H$\alpha$ line in the optical spectrum of the OBe star (for a comprehensive
review of BeXRBs, see Reig (2011)). The optical emission lines are interpreted
to be a strong indication of a geometrically thin circumstellar disk
(Rivinius, 2019) that surrounds the donor star and is variable in size.
Interactions between the NS and disk are capable of producing intermittent
X-ray outbursts, which are perhaps the most prominent feature of BeXRB
systems.
There are two types of outburst that can be produced by NS interactions with
the circumstellar disk (Stella et al., 1986; Okazaki & Negueruela, 2001; Reig,
2011). Type I outbursts are periodic in nature and associated with NS-disk
interactions that occur at the periastron passage in each orbit (Stella et al.,
1986; Okazaki & Negueruela, 2001). Type II outbursts are much more luminous
than Type I outbursts (Okazaki & Negueruela, 2001; Reig, 2011) and can even
reach super-Eddington luminosities, such as during the SMC X-3 outburst
described by Townsend et al. (2017) and the SMC X-2 outburst that occurred in
2015 (Jaisawal et al., 2023; Roy et al., 2022). These luminous outbursts are
independent of the orbital phase of the NS at the time of outburst and can
last for multiple orbits (Reig, 2011). Due to both the long duration and phase
independence of these events, Type II outbursts are thought to be related to
the growth and shape of the Be star’s disk (Martin et al., 2014; Monageng et
al., 2017) causing it to interact with the NS for a longer time.
Since X-ray outbursts in BeXRBs are dependent on the disk of their donor star,
these systems are prone to experiencing long quiescent states during which
they are hard to detect and identify (Coe et al., 2023b). Swift
J010902.6-723710 is an example of a system that escaped identification due to
a long period of quiescence. This newly-discovered system resides in the Small
Magellanic Cloud (SMC), a satellite galaxy of the Milky Way that is well-
documented to have a large population of HMXBs (Antoniou et al., 2010) due to
an unusually-active period of star formation that occurred in the SMC’s recent
past (Harris & Zaritsky, 2004; Rezaeikh et al., 2014). New SMC BeXRBs are
still occasionally discovered (Maitra et al., 2023), indicating that a hidden
population of previously-undetected systems still exists within the dwarf
galaxy.
In an effort to identify X-ray outbursts and detect new sources, the Swift SMC
Survey (S-CUBED) (Kennea et al., 2018) has operated since 2016 to provide
regular X-ray and UV monitoring of the SMC. This survey utilizes the Neil
Gehrels Swift Observatory (Gehrels et al., 2004) to tile the galaxy on a
weekly cadence with 142 overlapping tiles and $\sim$1 minute spent on each
tile. Data recorded by S-CUBED are automatically analyzed to flag new X-ray
outburst events that are detected by Swift’s X-ray Telescope (XRT; Burrows et
al. 2005). S-CUBED also observes all tiles using the Ultraviolet/Optical
Telescope (UVOT; Roming et al. 2005) in the uvw1 band centered at 2600Å. The
archive of X-ray and UV photometric data taken by S-CUBED has proved to be an
invaluable resource for discovering previously-undetected BeXRBs (Monageng et
al., 2020; Kennea et al., 2020; Coe et al., 2021; Kennea et al., 2021;
Monageng et al., 2022). It was through an X-ray selected search of the S-CUBED
archive that Swift J010902.6-723710 was first identified as a candidate source
in the summer of 2023, several months before it entered outburst during
October 2023.
In this paper, we report on the recent X-ray outburst that was experienced by
Swift J010902.6-723710 which confirms its status as a newly-identified BeXRB
system (Coe et al., 2023a). In the following sections, we present the results
of observations made both before and during outburst using Swift, the Optical
Gravitational Lensing Experiment (OGLE), and the Southern African Large
Telescope (SALT). Additionally, all results are combined and analyzed in an
effort to obtain information about the orbital and physical properties of both
the NS and its mass donor OBe companion.
## 2 Observations
### 2.1 Swift - XRT
Figure 1: XRT light curve for Swift J010902.6-723710 from August 2023 to
present, combining data from S-CUBED observations and follow-up TOOs. Note
that source visibility to Swift is indicated by the grey background; white
vertical strips indicate periods when the target could not be observed.
Vertical dotted
lines mark predicted times for the periastron passages.
Swift J010902.6-723710 has remained in a quiescent state for most of the
duration of S-CUBED monitoring. XRT first detected this source during weekly
S-CUBED monitoring in March of 2020. Two more detections occurred in July and
September of 2021. This X-ray mini-flare went unreported due to its sparse
detection frequency. The X-ray luminosity of the flare peaked at an
XRT count rate of $0.071_{-0.035}^{+0.051}$ counts s-1 before returning to a
quiescent state. The peak XRT count rate implies an approximate 0.3-10 keV
band luminosity of $L_{X}=4.76\times 10^{36}$ erg s-1 if the standard distance
of 62.44 kpc (Graczyk et al., 2020) to the SMC is assumed.
The source was identified as a possible candidate BeXRB by examining the
optical/IR SEDs of all unidentified X-ray sources in the S-CUBED survey
(Gaudin et al. in prep.). This identification motivated a deep 10 ks Swift
observation in August of 2023 to try to constrain the hardness ratio of the
source’s X-ray spectrum. However, the source was only weakly detected, with a
mean luminosity of $L_{X}=9.07\times 10^{35}$ erg s-1.
The results of all XRT observations during the recent outburst are shown in
Figure 1. Weekly S-CUBED monitoring detected X-ray emission from the source as
it entered outburst on October 10th and again at peak XRT count rate on
October 31st. The peak XRT count rate of Swift J010902.6-723710 was
$0.22_{-0.058}^{+0.071}$ counts s-1, implying a peak 0.3-10 keV band
luminosity of $L_{X}=1.35\times 10^{37}$ erg s-1 at a distance of 62.44 kpc.
S-CUBED detection of outbursting behavior triggered deeper follow-up Swift
observations of 5 ks per day on November 2nd and 4th in an effort to both
monitor the outburst and constrain the spin period of the NS. Deep
observations show the XRT count rate starting to decline exponentially, as
expected, reaching a mean XRT count rate of 0.11 counts s-1 during the
early November observations. Additional observations taken on November 22nd
and December 2nd confirm the continuing trend of exponential decline in X-ray
brightness as the outburst fades. However, weekly S-CUBED detections
persisted, with additional detections occurring on December 12th and December
26th. These two detections had 0.3-10 keV luminosities of $L_{X}=3.55\times
10^{36}$ erg s-1 and $L_{X}=3.98\times 10^{36}$ erg s-1, respectively, which
do not fit the trend of steadily declining flux values after the peak
luminosity was reached.
Figure 2: $Z_{2}^{2}$ Periodogram for four observations with exposures of
$\simeq$5 ks of Swift XRT data taken in PC mode. The periodogram shows a peak
at 182 seconds in all four observations made after outburst was detected by
S-CUBED weekly monitoring. There is evidence for pulsar spin-up over time;
however, the errors on the period measurements are large, meaning this result
is not statistically significant.
All 5 ks follow-up Swift observations were searched for the presence of
periodic pulsations that are indicative of the spin period of the X-ray
pulsar. The results of this search, carried out using a $Z_{2}^{2}$
Periodogram (Buccheri et al., 1983), are shown in Figure 2. In all 5 ks
observations, the periodogram shows peaks at around $182$s (see Figure 2 for
individual period measurements). The pulsar periods show weak evidence for
spin-up during the outburst, although given the large errors on the
periodicities, due to the low number of counts and the coarse (2.5 s) PC-mode
frame time, the spin-up is not strongly detected. Folding the light curve of
each individual XRT observation at 182 s reveals an asymmetric double-peaked
periodic signature, which is the expected shape for an X-ray pulsar light
curve. Based on the
phase-folded light curve and the strength of the periodogram peaks, we argue
that the true spin period of the pulsar is 182s, allowing Swift
J010902.6-723710 to be designated as SXP 182.
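For reference, the $Z_{n}^{2}$ statistic of Buccheri et al. (1983) is straightforward to evaluate on a list of photon arrival times; a minimal sketch with $n=2$ harmonics, run on placeholder event times, is:

```python
import numpy as np

def z2n(times, frequencies, n_harmonics=2):
    """Z^2_n statistic (Buccheri et al. 1983) for photon arrival times:
    at each trial frequency, sum the Rayleigh power of the first
    n_harmonics harmonics of the folded phases."""
    times = np.asarray(times)
    power = np.zeros(len(frequencies))
    for j, nu in enumerate(frequencies):
        phase = 2.0 * np.pi * nu * times
        for k in range(1, n_harmonics + 1):
            power[j] += (np.cos(k * phase).sum() ** 2
                         + np.sin(k * phase).sum() ** 2)
    return 2.0 * power / len(times)

# Placeholder event list; search around the 182 s candidate period.
times = np.sort(np.random.default_rng(2).uniform(0.0, 5000.0, 400))
freqs = 1.0 / np.linspace(170.0, 195.0, 2000)
pgram = z2n(times, freqs)
print(f"best-fit period: {1.0 / freqs[pgram.argmax()]:.1f} s")
```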
Spectral fitting is used to derive values for the spectral index of this
source and column density along the line of sight. This is done by fitting the
time-averaged 0.3-10 keV spectrum to an absorbed power-law model using the
methods presented in Evans et al. (2014). Using these methods, best-fit
parameters and a 68% confidence interval can be derived for both properties.
SXP 182 is shown to have a hard X-ray spectrum, with a derived photon index
for the source of $\Gamma=0.52_{-0.15}^{+0.16}$, which is consistent with the
photon index of $\Gamma\sim 0-1$ expected of a BeXRB system (Kennea et al.,
2018). Additionally, power law fitting indicates that the column density along
the line of sight towards this source is $N_{H}=5.1_{-1.9}^{+2.3}\times
10^{21}$ cm-2 which is higher than the average value of the column density of
$N_{H}=5.34\times 10^{20}\text{cm}^{-2}$ towards the SMC (Willingale et al.,
2013; Kennea et al., 2018).
Figure 3: OGLE IV, Swift UVOT, and Swift XRT light curves for Swift
J010902.6-723710.
### 2.2 Swift - UVOT
Swift J010902.6-723710 has been observed 221 times by UVOT in the uvw1-band
since 2016 as part of S-CUBED monitoring, which represents approximately
weekly coverage. UVOT light curves are generated for S-CUBED sources using the
FTOOLS software package (Blackburn et al., 1999). Photometric data are
extracted from a circle with 5 arcsecond radius around the XRT source position
for the object using the uvotsource method that is part of FTOOLS. Using this
method, the light curve presented in Figure 3 was generated for the entire
duration of S-CUBED.
UVOT data identifies Swift J010902.6-723710 as a persistent emitter in the
uvw1 band, as it is well-detected in every observation with a mean magnitude
of 14.4. The most prominent feature of this light curve is the presence of
strong UV variability on timescales of 500-1000 days. The source reached its
faintest average magnitude of 14.7 in October 2021, after years of gradual
dimming from its initial magnitude of 14.1 at the start of S-CUBED
monitoring. At this point, the source began a period of rapid brightening
lasting over 500 days, reaching a peak brightness of magnitude 13.7 on June
27, 2023. After this peak, the source entered a steep dimming phase that
corresponds with the start of the XRT outburst. This dimming phase concluded
on MJD 60283, when the source was observed at its pre-outburst uvw1 magnitude.
The long period of UV brightening that Swift J010902.6-723710 underwent was
crucial in identifying the system as a candidate BeXRB before the outburst
occurred. This type of UV increase leading to outburst has been observed in
other sources such as Swift J004516.6–734703 (Kennea et al., 2020) which
demonstrated very little UV variability before experiencing a similar
brightening in the lead up to a Type I outburst. In the outburst of Swift
J004516.6–734703, the UV brightening was interpreted to be the result of a
circumstellar disk forming around the companion Be star in the newly-
identified binary. For Swift J010902.6-723710, we can interpret the long-term
variability to be an indication of growth and decay in the size of the disk,
where the disk expanded over the last 500+ days to a large-enough radius for
NS-disk interactions to occur.
### 2.3 OGLE - the Optical Gravitational Lensing Experiment
The OGLE project (Udalski et al., 2015) provides long term I-band photometry
with a cadence of 1-3 days for sources in the Magellanic Clouds. From the
X-ray position the optical/IR counterpart to Swift J010902.6-723710 is
identified as 2MASS J01090226-7237101. It was observed continuously for nearly
3 decades by OGLE, though the observations were interrupted for approximately
3 years during COVID-19.
The counterpart to the X-ray source is identified in the OGLE catalogue as:
OGLE II (I-band): smc_sc11.107571
OGLE III (I-band): smc110.3.22311
OGLE IV (I band): smc726.26.15515
OGLE IV (V band): smc726.26.v.22358
The I band data from just the OGLE IV project are shown in the top panel of
Figure 3 for comparison with the contemporaneous Swift UVOT and XRT data.
Figure 4: Top panel - the whole OGLE data set covering 26 years. Lower panel -
the above data detrended with a simple polynomial in preparation for timing
analysis.
Figure 5: Generalised Lomb-Scargle analysis of the detrended OGLE data set.
The peak is at a period of $60.623\pm 0.001$ days. The horizontal dashed line
represents the False Alarm Probability of 1%.
Figure 6: Detrended OGLE data divided into 5 consecutive epochs (see Table 1)
and each then folded using the ephemeris given in Equation 1. For the purpose
of being able to see the separate profiles, those numbered 2-4 have been
arbitrarily shifted in the y-axis so as to avoid overlap.
Table 1: OGLE data date ranges used in Figure 6

Data block number | Dates (JD-2450000) | Average I-band magnitude
---|---|---
1 | 600-2000 | 16.26
2 | 2000-3100 | 16.22
3 | 3100-5000 | 16.35
4 | 5200-6400 | 16.37
5 | 6400-9000 | 16.34
For the purpose of period analysis the whole of the 26.4 years of OGLE data
were used and first detrended using a simple polynomial - see Figure 4. This
data set was then analysed with a generalised Lomb-Scargle routine and the
resulting power spectrum is shown in Figure 5. The peak in the power spectrum
is at a period of $60.623\pm 0.001$ days. This peak is driven by the sharp
eclipse-like features that can be clearly seen by eye in the top panels of
Figures 3 and 4. It is assumed that this represents the orbital period of the
system.
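A minimal sketch of this detrending and period search, using the generalised Lomb-Scargle implementation in astropy on placeholder photometry (the polynomial degree here is an assumption; the text only specifies a simple polynomial):

```python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder photometry standing in for the 26.4 yr OGLE I-band series.
rng = np.random.default_rng(3)
jd = np.sort(rng.uniform(2450600.0, 2460200.0, 3000))
mag = (16.3 + 0.05 * np.sin(2 * np.pi * jd / 60.623)
       + rng.normal(0.0, 0.01, jd.size))

# Detrend the slow long-term variability with a low-order polynomial
# (degree 5 is our assumption; fitting relative times aids conditioning).
t = jd - jd.min()
resid = mag - np.polyval(np.polyfit(t, mag, deg=5), t)

# Generalised Lomb-Scargle periodogram over plausible orbital periods.
ls = LombScargle(jd, resid)
frequency, power = ls.autopower(minimum_frequency=1 / 500.0,
                                maximum_frequency=1 / 2.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"peak period: {best_period:.3f} d, "
      f"FAP: {ls.false_alarm_probability(power.max()):.2e}")
```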
If all the detrended OGLE data are folded at the proposed binary period, with
a phase 0.0 set to the date of the first OGLE measurement (JD 2450627.9) then
the sharp eclipse-like dips are clearly seen in Figure 6. The average FWHM of
the dips is 0.1 in phase. It is also clear that the ingress into the eclipse
is much less sharp than the egress. To explore the changes in the shape of the
eclipse over time the OGLE data were divided up into 5 separate time segments
representing the times when visible changes were occurring in the profile. The
resulting separate profiles are shown in Figure 6. The 5 epochs chosen are
listed in Table 1. Using the phase of the dip in the I-band shown in Figure 6
as a reference point, the ephemeris for the time of the optical eclipses is
given by Equation 1:
$T_{ecl}=2450645.1+N(60.62\pm{0.01})~{}\textrm{~{}JD}$ (1)
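Orbital phases quoted in this paper follow from this ephemeris; a minimal folding sketch is:

```python
T0 = 2450645.1   # reference eclipse epoch from Equation 1 (JD)
P_ORB = 60.62    # orbital period (days)

def orbital_phase(jd, t0=T0, period=P_ORB):
    """Orbital phase in [0, 1) relative to the eclipse ephemeris."""
    return ((jd - t0) / period) % 1.0

# Example: the first S-CUBED outburst detection (MJD 60227, Table 3).
jd_outburst = 60227.0 + 2400000.5  # MJD -> JD
# Yields ~0.07, near the tabulated 0.06; the exact time of day of the
# detection shifts the phase slightly.
print(f"phase = {orbital_phase(jd_outburst):.2f}")
```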
In addition to the regular I-band measurements the OGLE IV project also
records V band magnitudes every few days. This provides the opportunity to
investigate the overall colour changes seen in the system as function of
brightness - a colour-magnitude diagram (CMD). Since the V band measurements
are less frequent than the I band, the determination of (V-I) can only occur
when the I and V measurements are close enough together in time. In this
instance, the maximum separation of the two measurements is set to 3 days.
This means that occasionally one V-band measurement may be partnered with more
than one I-band measurement. The result is shown in Figure 7 and discussed
below.
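A minimal sketch of this pairing rule (the function name and array layout are our own):

```python
import numpy as np

def pair_colours(jd_v, v_mag, jd_i, i_mag, max_sep=3.0):
    """Pair each I-band point with its closest-in-time V-band point and
    keep pairs separated by less than max_sep days; one V point may
    therefore serve several I points. Returns (V-I, I) for the CMD."""
    jd_v, v_mag = np.asarray(jd_v), np.asarray(v_mag)
    jd_i, i_mag = np.asarray(jd_i), np.asarray(i_mag)
    nearest = np.abs(jd_i[:, None] - jd_v[None, :]).argmin(axis=1)
    keep = np.abs(jd_i - jd_v[nearest]) < max_sep
    return v_mag[nearest[keep]] - i_mag[keep], i_mag[keep]

# colour, i = pair_colours(jd_v, v, jd_i, i)  # then plot colour vs. i
```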
### 2.4 SALT - the Southern African Large Telescope
Figure 7: $(V-I)-I$ (left) and $(U-I)-I$ (right) colour-magnitude diagrams of
Swift J010902.6-723710.
Table 2: The H$\alpha$ equivalent width (EW) and peak separation ($\Delta V$)
measured from SALT observations.

Date | MJD | EW (Å) | Grating | $\Delta V$ (km/s)
---|---|---|---|---
03-11-2023 | 60251.88 | -7.4$\pm$0.3 | PG0900 | $-$
09-11-2023 | 60257.87 | -7.3$\pm$0.5 | PG2300 | 373.6$\pm$0.4
02-12-2023 | 60280.85 | -7.4$\pm$0.4 | PG2300 | 383.4$\pm$0.9
Swift J010902.6-723710 was observed with the Southern African Large Telescope
(SALT; Buckley et al. 2006) with the Robert Stobie Spectrograph (Burgh et al.,
2003; Kobulnicky et al., 2003) using different settings. On 03-11-2023
(MJD60251.88) the PG0900 grating was used (grating angle of 15.125∘) with an
exposure time of 1200 sec covering a wavelength range of $4200-7250$ Å. The
PG2300 grating was used on 09-11-2023 (MJD60257.87) and 02-12-2023
(MJD60280.85) (grating angle of 48.875∘) with an exposure time of 1200 sec
covering a wavelength range $6100-6900$ Å. An additional observation was taken
on 09-11-2023 (MJD60257.87) with the PG2300 grating (grating angle of 30.5∘)
with an exposure time of 1500 sec covering a wavelength range $3840-4915$ Å.
The primary reductions, which include overscan corrections, bias subtraction,
gain and amplifier cross-talk corrections, were performed with the SALT
science pipeline (Crawford et al., 2012). The remainder of the data reduction
steps, which comprise wavelength calibration, background subtraction, and
extraction of the one-dimensional spectrum, were done in IRAF. All spectra
were corrected for the heliocenter and redshift of the SMC of 145.6 km/s
(McConnachie, 2012).
Figure 8 shows the H$\alpha$ emission line profiles. The observations obtained
on MJD60257.87 and MJD60280.85 with the PG2300 grating exhibit asymmetric
double-peak profiles as a result of the Keplerian distribution of matter in
the disc when the disc is viewed at non-zero inclination angles. The
observation performed on MJD60251.88 with the PG0900 grating shows an
asymmetric single-peak profile since the resolution of the grating is
insufficient to resolve the two peaks. The H$\alpha$ equivalent width
measurements are recorded in Table 2.
## 3 Discussion
### 3.1 Corbet Diagram
Combining the spin period with the orbital period allows this new system to be
placed on the Corbet diagram. The original Corbet diagram (Corbet, 1984)
showed a correlation between the spin and orbital periods of neutron stars in
BeXRBs. A modern version (shown in Figure 9), rather than a tight correlation,
shows that these properties are confined to a specific region of the Corbet
diagram. BeXRB systems are
expected to fall above the diagonal of the diagram, which represents a log-
linear relationship between the orbital period in days and the spin period in
seconds of the NS. Figure 9 shows the location of SXP 182 when placed on the
Corbet diagram with all BeXRBs in the SMC that have known spin and orbital
periods (Haberl & Pietsch, 2005; Coe & Kirk, 2015; Haberl & Sturm, 2016;
McBride et al., 2017; Carpano et al., 2017; Kennea et al., 2018; Lazzarini et
al., 2019; Kennea et al., 2020; Carpano et al., 2022; Maitra et al., 2023).
The location of SXP 182 is consistent with the trend shown by other SMC
BeXRBs. This serves as a check that the periods derived in Sections 2.1 and
2.3 are consistent with the observational properties of other BeXRBs. For
Swift J010902.6-723710, the derived orbital period of 60.623 days and pulsar
spin period of 182 seconds place the system near the center of the
distribution of orbit and spin periods for SMC BeXRBs, providing a strong
piece of evidence validating the source as a newly-discovered BeXRB.
### 3.2 Inclination angle of the Be disc
The peak separation of the double-peak H$\alpha$ emission lines (Figure 8) can
be used to estimate the size of the H$\alpha$ emitting region (Huang, 1972):
$R=\frac{GM_{\ast}\sin^{2}i}{(0.5\Delta V)^{2}}$ (2)
Using $M_{\ast}=17.8$ M⊙ (Cox, 2000) based on the spectral type of the massive
companion, this results in a radius of the H$\alpha$ emitting region of $92-97\sin^{2}i$ R⊙. We
can estimate the inclination angle of the Be disc, assuming that the disc and
orbital planes are aligned and that during the period of X-ray activity the NS
was accreting matter from the outermost parts of the Be disc at periastron
passage. The semi-major axis of the orbit can be estimated by assuming a
companion mass of $M_{\ast}=17.8$ M⊙ and a neutron star mass of $M_{NS}=1.4$
M⊙. Similarly, the periastron passage can be estimated using a conservative
value for the eccentricity of $e\sim 0.5$ based on the relationship between
eccentricity and orbital period presented in Townsend et al. (2011). This
results in a range of disc inclination angles of $72-90^{\circ}$. The
suggestion of a high disc inclination angle is corroborated by the H$\alpha$
emission line displaying a double peak morphology with a deep central
depression from the high-resolution observations.
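Our reading of this estimate can be reproduced numerically with astropy; the sketch below evaluates Equation 2 for the two measured peak separations, derives the semi-major axis and periastron distance from Kepler's third law with the masses and eccentricity quoted above, and inverts the disc-grazing condition for the inclination:

```python
import numpy as np
from astropy import units as u
from astropy.constants import G

M_star = 17.8 * u.M_sun        # B0-0.5V companion (Cox 2000)
M_ns = 1.4 * u.M_sun           # canonical neutron star mass
P_orb = (60.623 * u.day).to(u.s)
ecc = 0.5                      # conservative value (Townsend et al. 2011)

# Equation 2 (Huang 1972): radius of the H-alpha emitting region.
for dv in [373.6, 383.4] * u.km / u.s:
    R = (G * M_star / (0.5 * dv) ** 2).to(u.R_sun)
    print(f"dV = {dv:.1f}: R = {R:.1f} * sin^2(i)")  # ~97 and ~92 R_sun

# Kepler's third law gives the semi-major axis and periastron distance.
a = (G * (M_star + M_ns) * P_orb**2 / (4 * np.pi**2)) ** (1 / 3)
r_peri = (a * (1 - ecc)).to(u.R_sun)
print(f"a = {a.to(u.R_sun):.0f}, periastron = {r_peri:.0f}")  # ~174, ~87

# If the NS grazes the disc edge at periastron, sin^2(i) ~ r_peri / R,
# which for R ~ 92-97 R_sun gives the lower end of the quoted range.
sin2_i = (r_peri / (95 * u.R_sun)).decompose().value
print(f"i ~ {np.degrees(np.arcsin(np.sqrt(sin2_i))):.0f} deg")  # ~73 deg
```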
Figure 8: The evolution of the H$\alpha$ emission line from SALT observations.
The spectra are corrected for the heliocenter and redshift of the SMC.
Figure 7 shows the $(V-I)-I$ and $(U-I)-I$ colour-magnitude diagrams. The
general trend of the colour-magnitude plots is a redder-when-brighter pattern,
which is indicative of inclination angles below $90^{\circ}$ (Harmanec, 1983;
Rajoelimanana et al., 2011; Reig & Fabregat, 2015). This trend is more
noticeable in the $(V-I)-I$ plot where the range in colour is broader since
the simultaneous $V$ and $I$ band observations were taken during a period of
substantial optical variability.
Figure 9: Corbet diagram for all BeXRBs in the SMC with known orbital periods
and spin periods (Haberl & Pietsch, 2005; Coe & Kirk, 2015; McBride et al.,
2017; Carpano et al., 2017; Kennea et al., 2018; Lazzarini et al., 2019;
Kennea et al., 2020; Carpano et al., 2022; Maitra et al., 2023). SXP 182 is
plotted alongside these data and is shown to demonstrate the same correlation
between X-ray pulsar spin period and NS orbital period that is seen in the
rest of the SMC BeXRB population.
### 3.3 Spectral Classification of the massive counterpart
Figure 10 shows the SALT spectrum of Swift J010902.6-723710 covering the blue
wavelength range. The spectrum shows clear signatures of an early-type star
with several Balmer and helium lines present. The H$\beta$ line is in weak
emission with an asymmetric profile, along with the other absorption lines
exhibiting infilling due to the presence of the circumstellar disc. According
to the criteria presented in Evans et al. (2004), the strong presence of the
He I lines at 4026, 4143, 4388 and 4471 Å as well as the weak presence of the
He II lines at 4541 and 4686 Å constrains the spectral type to B0-0.5. The
faintest V-band observation from OGLE measurements during our period of
monitoring is $\sim$16.5 mag. Combined with the distance modulus of the SMC of
18.95 (Graczyk et al., 2013), this implies a luminosity class of V (Straizys &
Kuriliene, 1981; Pecaut & Mamajek, 2013). In summary, the
spectral class of the massive companion in Swift J010902.6-723710 is B0-0.5 V.
Figure 10: The SALT spectrum of Swift J010902.6-723710 covering the blue
region with different line species labeled at their expected rest wavelengths.
The spectrum is corrected for the heliocenter and redshift of the SMC.
### 3.4 Eclipsing Behavior
The orbital period of Swift J010902.6-723710 derived from long-term OGLE
monitoring can be used to re-examine the UVOT light curve in search of
periodic behavior. This was done by de-trending the UVOT data using a 5th-
order polynomial and folding the de-trended data at the 60.623 day orbital
period of the system. Figure 11 shows the comparison between the binned and
time-averaged OGLE and UVOT light curves starting at MJD 57563, which is the
date of the first S-CUBED observation. Despite S-CUBED’s weekly cadence
providing sparse sampling of any single orbit, the eclipsing behavior is
clearly visible in the de-trended and folded UVOT light curve. When compared
to the OGLE data, it becomes evident that the shape of the eclipse profile is
wavelength-dependent. The UVOT data shows increasing emission that peaks just
before each eclipse begins, which is similar to the behavior exhibited by the
system during the 4th OGLE data block shown in Figure 6. Over the lifetime of
S-CUBED, the UVOT eclipse profile is also shown to be broader than the OGLE
eclipse profile with similar depth at the time of maximum eclipse. At the
maximum depth of the eclipse, the uvw1-band magnitude drops by an average of
0.22 magnitudes and I-band magnitude decreases by an average of 0.25
magnitudes. Converting these magnitude decreases to flux measurements for both
telescopes, the implied relative decrease in flux at both wavelengths is
$\frac{\Delta F}{F}\sim 0.2$.
Figure 11: A comparison of the OGLE IV and UVOT light curves folded at the
proposed binary period of 60.623 days using Equation 1. The OGLE data are
offset vertically to avoid overlap.
Equation 2 from Maggi et al. (2013) can be used to estimate the size of the
accretion disk based on the relative decrease in flux that is observed during
eclipse:
$\frac{\Delta F}{F}=\left(\frac{R_{X}}{R_{C}}\right)^{2}$ (3)
where $R_{X}$ is the radius of the eclipsing object and $R_{C}$ is the radius
of the optical companion. The value for $R_{X}$ can be estimated due to the
constraints placed on $R_{C}$ via the results of Section 3.3. The typical
radius for a spectral class B0V star is 7.4 $R_{\odot}$ (Allen, 1976). This
value can be used to constrain the upper limit for the radius of the eclipsing
object at $R_{X}=R_{C}\sqrt{\frac{\Delta F}{F}}=3.3\,R_{\odot}$. A radius of
this size rules out the NS, a large planet, or a Sun-like tertiary star as the
cause of the eclipsing behavior. Therefore, it can be concluded that the
eclipsing object is likely an extended accretion disk that surrounds the NS.
Maggi et al. (2013) derives the theoretical upper limit for the size of an
accretion disk in a BeXRB system to be $r_{c}\sim 11.5\,R_{\odot}$, assuming
Bondi-Hoyle accretion and a 10 $M_{\odot}$ companion star. The 3.3 $R_{\odot}$
accretion disk size derived for Swift J010902.6-723710 is well within this
upper limit value, providing further evidence in favor of an eclipsing NS
accretion disk.
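The arithmetic of this estimate is compact enough to verify directly; a minimal sketch using the eclipse depths from Section 3.4:

```python
import numpy as np

def flux_drop(delta_mag):
    """Relative flux decrease for an eclipse depth in magnitudes."""
    return 1.0 - 10.0 ** (-0.4 * delta_mag)

print(f"uvw1: {flux_drop(0.22):.2f}, I: {flux_drop(0.25):.2f}")  # both ~0.2

# Equation 3 (Maggi et al. 2013), inverted for the eclipser's radius.
R_C = 7.4                      # B0V radius in R_sun (Allen 1976)
R_X = R_C * np.sqrt(0.2)
print(f"R_X = {R_X:.1f} R_sun (< 11.5 R_sun Bondi-Hoyle limit)")  # 3.3
```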
Very few BeXRBs have been observed to demonstrate eclipsing behavior. Swift
J010902.6-723710 is only the third known eclipsing BeXRB system and the second
to be found in the SMC. LXP 168.8 (Maggi et al., 2013) was found to have a
eclipsing accretion disk with a 24.3 day orbital period. SXP 5.05 (Coe et al.,
2015) was observed to contain a Be star that eclipses its NS companion every
17 days. This newly-discovered eclipsing system thus provides a unique
opportunity to further constrain the physical parameters of the system such as
the sizes of both disks and the masses of both stars. More observations are
needed to further characterize the BeXRB using subsequent eclipses.
### 3.5 Outburst Type
Date | MJD | Phase | $L_{X}$ (erg s-1)
---|---|---|---
10-10-2023 | 60227 | 0.06 | $4.14_{-2.61}^{+4.27}\times 10^{36}$
10-31-2023 | 60248 | 0.40 | $1.35_{-0.391}^{+0.475}\times 10^{37}$
11-02-2023 | 60250.0 | 0.47 | $5.66_{-0.275}^{+0.275}\times 10^{36}$
11-02-2023 | 60250.9 | 0.48 | $7.06_{-0.527}^{+0.527}\times 10^{36}$
11-04-2023 | 60252.1 | 0.50 | $5.30_{-0.508}^{+0.508}\times 10^{36}$
11-05-2023 | 60253.0 | 0.52 | $4.96_{-0.253}^{+0.253}\times 10^{36}$
11-05-2023 | 60253.6 | 0.53 | $4.03_{-0.385}^{+0.385}\times 10^{36}$
11-06-2023 | 60254 | 0.54 | $4.95_{-0.518}^{+0.518}\times 10^{36}$
11-21-2023 | 60269.9 | 0.80 | $3.20_{-2.04}^{+3.33}\times 10^{36}$
11-22-2023 | 60270.6 | 0.81 | $2.25_{-0.190}^{+0.190}\times 10^{36}$
12-02-2023 | 60280 | 0.96 | $1.83_{-0.177}^{+0.177}\times 10^{36}$
12-08-2023 | 60286 | 0.06 | $1.43_{-0.192}^{+0.192}\times 10^{36}$
12-12-2023 | 60290 | 0.13 | $3.55_{-2.28}^{+3.78}\times 10^{36}$
12-14-2023 | 60292 | 0.16 | $1.17_{-0.397}^{+0.397}\times 10^{36}$
12-26-2023 | 60304 | 0.36 | $3.98_{-2.26}^{+3.69}\times 10^{36}$
Table 3: S-CUBED and Swift TOO XRT detection dates, orbital phases from
Equation 1, and luminosities during outburst.
Figure 12: Truncated OGLE IV, Swift UVOT, and Swift XRT light curves for Swift
J010902.6-723710 showing emission from MJD 60100 to MJD 60300. The times of
optical eclipse, calculated using Equation 1, are shown as vertical grey
lines. Blue arrows show the calculated XRT flux upper limit during S-CUBED
observations where Swift J010902.6-723710 is not detected.
The type of outburst observed in Swift J010902.6-723710 has remained uncertain
as long-term monitoring of the source continues to produce XRT detections well
past the peak of the initial outburst event. In Type I outbursts, X-ray
emission is often limited both by the duration of an orbit and the current
orbital phase. Emission is expected to be detected over a narrow range of
orbital phases near the periastron passage and is expected to last for no longer
than a full orbital period (Stella et al., 1986). Additionally, Type I
outbursts typically demonstrate a moderate increase in luminosity, peaking at
$L_{X}\sim 10^{36}-10^{37}$ ergs s-1 (Okazaki & Negueruela, 2001; Reig, 2011).
Table 3 shows the date of XRT detections of Swift J010902.6-723710 and the
orbital phase at which they occurred with respect to Equation 1. The light
curve of XRT detections during outburst is plotted with respect to the time of
optical eclipses in Figure 12. Both the figure and table show the unusually
long duration of this outburst. The first S-CUBED detection of the outburst on
October 10th occurred at Phase 0.06, which is approximately the periastron
passage of the orbit. However, subsequent detections continued to occur well
past the periastron passage of the orbit, demonstrating phase independence of
the emission. The last two S-CUBED detections, occurring on December 12th and
December 26th, correspond to Phase 0.13 and 0.36 of a new orbit. These
subsequent detections are not consistent with the typical behavior
demonstrated by a Type I outburst and have more in common with the emission
signature of a Type II outburst (Okazaki & Negueruela, 2001; Townsend et al.,
2017; Tamang et al., 2022). If this is indeed a Type II outburst, then the
peak luminosity is much smaller than is typical for these outbursts. The peak
X-ray luminosity of $L_{X}=1.35\times 10^{37}$ erg s-1 is the only detection
to occur above $10^{37}$ erg s-1. Based on these features, one of two
situations has occurred. One possibility is that the system has produced an
abnormally long-duration Type I outburst with a typical peak luminosity
(Okazaki & Negueruela, 2001; Reig, 2011). Alternatively, the system has
produced a Type II outburst that fails to reach the characteristic large peak
luminosity that is expected during these events (Okazaki & Negueruela, 2001;
Reig, 2011).
However, it is important to remember that this traditional classification
scheme is strictly binary: an outburst is labelled either Type I or Type II.
Since the scheme was originally proposed (Stella et al., 1986), many more
diverse examples of X-ray outbursts from BeXRB systems have been observed. In
reality there must be scope for a whole spectrum of outburst types, since the
outburst duration and the X-ray luminosity seen depend almost entirely on the
interaction between the circumstellar disc and the neutron star. That in turn
depends upon the characteristics of the circumstellar disc (density, size,
inclination, mass outflow rate etc.) and the characteristics of the neutron
star orbit (eccentricity, inclination to the circumstellar disk plane, phase
of periastron etc.). So, given all those free parameters one should expect a
whole range of observational characteristics, and not be driven to simply call
them Type I or Type II. SXP 182 is an excellent example of how the original
classification scheme can be overly simplistic.
## 4 Conclusion
This paper reports the detection of a previously-unknown BeXRB via weekly
observations of the S-CUBED survey. This new system, Swift J010902.6-723710,
was identified via a transient X-ray outburst and followed up via multi-
wavelength observations. Deep follow-up X-ray observations identify a proposed
spin period of 182s for the NS in this binary system. Historical light curve
analysis of both UV and IR emission reveals the presence of strong eclipse-like
features that re-appear every 60.623 days in the light curve, which is adopted
as the proposed orbital period of the system. Optical spectroscopy reveals a
strongly double-peaked H$\alpha$ emission line, indicating a highly-inclined
system with an inclination of $i=72-90^{\circ}$. Spectroscopic observations
are also used to constrain the spectral classification of the optical
companion as B0-0.5Ve.
The proposed spin and orbital periods place Swift J010902.6-723710 in
the center of the expected distribution for similar BeXRBs on the Corbet
Diagram. Eclipsing behavior is found to be caused by a $3.3R_{\odot}$
accretion disk that surrounds the NS, making this the third eclipsing BeXRB to
be detected so far. Finally, the type of outburst observed is found to be
uncertain, with characteristics of both Type I and Type II outbursts being
found in the X-ray emission of the source. More observations of this system
are needed, particularly during subsequent eclipses, in order to further
constrain the physical parameters of this rare eclipsing system and better
understand this long-lasting, moderately luminous X-ray outburst.
## 5 Acknowledgments
This work made use of data supplied by the UK Swift Science Data Centre at the
University of Leicester. JAK and TMG acknowledge the support of NASA contract
NAS5-00136. We acknowledge the use of public data from the Swift data archive.
The SALT observations reported in this paper were obtained under the SALT
Large Science Programme on transients (2021-2-LSP-001; PI: DAHB), which is
also supported by Poland under grant no. MNiSW DIR/WK/2016/07.
PAE acknowledges UKSA support. DAHB acknowledges research support from the
National Research Foundation. LJT is supported by the SALT Foundation and the
NRF.
## References
* Allen (1976) Allen, C. W. 1976, Astrophysical Quantities
* Antoniou et al. (2010) Antoniou, V., Zezas, A., Hatzidimitriou, D., & Kalogera, V. 2010, ApJ, 716, L140, doi: 10.1088/2041-8205/716/2/L140
* Blackburn et al. (1999) Blackburn, J. K., Shaw, R. A., Payne, H. E., Hayes, J. J. E., & Heasarc. 1999, FTOOLS: A general package of software to manipulate FITS files, Astrophysics Source Code Library, record ascl:9912.002. http://ascl.net/9912.002
* Buccheri et al. (1983) Buccheri, R., Bennett, K., Bignami, G. F., et al. 1983, A&A, 128, 245
* Buckley et al. (2006) Buckley, D. A. H., Swart, G. P., & Meiring, J. G. 2006, in Proc. SPIE, Vol. 6267, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 62670Z, doi: 10.1117/12.673750
* Burgh et al. (2003) Burgh, E. B., Nordsieck, K. H., Kobulnicky, H. A., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Proc. SPIE, ed. M. Iye & A. F. M. Moorwood, 1463–1471, doi: 10.1117/12.460312
* Burrows et al. (2005) Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, Space Sci. Rev., 120, 165, doi: 10.1007/s11214-005-5097-2
* Carpano et al. (2022) Carpano, S., Haberl, F., Maitra, C., et al. 2022, A&A, 661, A20, doi: 10.1051/0004-6361/202141082
* Carpano et al. (2017) Carpano, S., Haberl, F., & Sturm, R. 2017, A&A, 602, A81, doi: 10.1051/0004-6361/201629299
* Coe et al. (2015) Coe, M. J., Bartlett, E. S., Bird, A. J., et al. 2015, MNRAS, 447, 2387, doi: 10.1093/mnras/stu2568
* Coe et al. (2021) Coe, M. J., Kennea, J. A., Evans, P. A., et al. 2021, MNRAS, 504, 1398, doi: 10.1093/mnras/stab972
* Coe et al. (2023a) Coe, M. J., Kennea, J. A., Gaudin, T. M., et al. 2023a, The Astronomer’s Telegram, 16321, 1
* Coe et al. (2023b) Coe, M. J., Kennea, J. A., Monageng, I. M., et al. 2023b, MNRAS, 524, 3263, doi: 10.1093/mnras/stad1987
* Coe & Kirk (2015) Coe, M. J., & Kirk, J. 2015, MNRAS, 452, 969, doi: 10.1093/mnras/stv1283
* Corbet (1984) Corbet, R. H. D. 1984, A&A, 141, 91
* Cox (2000) Cox, A. N. 2000, Allen’s astrophysical quantities
* Crawford et al. (2012) Crawford, S. M., Still, M., Schellart, P., et al. 2012, PySALT: SALT science pipeline, Astrophysics Source Code Library. http://ascl.net/1207.010
* Evans et al. (2004) Evans, C. J., Howarth, I. D., Irwin, M. J., Burnley, A. W., & Harries, T. J. 2004, MNRAS, 353, 601, doi: 10.1111/j.1365-2966.2004.08096.x
* Evans et al. (2014) Evans, P. A., Osborne, J. P., Beardmore, A. P., et al. 2014, ApJS, 210, 8, doi: 10.1088/0067-0049/210/1/8
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005, doi: 10.1086/422091
* Graczyk et al. (2013) Graczyk, D., Pietrzyński, G., Pilecki, B., et al. 2013, in IAU Symposium, Vol. 289, Advancing the Physics of Cosmic Distances, ed. R. de Grijs, 222–225, doi: 10.1017/S1743921312021436
* Graczyk et al. (2020) Graczyk, D., Pietrzyński, G., Thompson, I. B., et al. 2020, ApJ, 904, 13, doi: 10.3847/1538-4357/abbb2b
* Haberl & Pietsch (2005) Haberl, F., & Pietsch, W. 2005, A&A, 438, 211, doi: 10.1051/0004-6361:20042470
* Haberl & Sturm (2016) Haberl, F., & Sturm, R. 2016, A&A, 586, A81, doi: 10.1051/0004-6361/201527326
* Harmanec (1983) Harmanec, P. 1983, Hvar Observatory Bulletin, 7, 55
* Harris & Zaritsky (2004) Harris, J., & Zaritsky, D. 2004, AJ, 127, 1531, doi: 10.1086/381953
* Huang (1972) Huang, S.-S. 1972, ApJ, 171, 549, doi: 10.1086/151309
* Jaisawal et al. (2023) Jaisawal, G. K., Vasilopoulos, G., Naik, S., et al. 2023, MNRAS, 521, 3951, doi: 10.1093/mnras/stad781
* Kennea et al. (2020) Kennea, J. A., Coe, M. J., Evans, P. A., et al. 2020, MNRAS, 499, L41, doi: 10.1093/mnrasl/slaa154
* Kennea et al. (2021) —. 2021, MNRAS, 508, 781, doi: 10.1093/mnras/stab2632
* Kennea et al. (2018) Kennea, J. A., Coe, M. J., Evans, P. A., Waters, J., & Jasko, R. E. 2018, ApJ, 868, 47, doi: 10.3847/1538-4357/aae839
* Kobulnicky et al. (2003) Kobulnicky, H. A., Nordsieck, K. H., Burgh, E. B., et al. 2003, in Proc. SPIE, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 1634–1644, doi: 10.1117/12.460315
* Lazzarini et al. (2019) Lazzarini, M., Williams, B. F., Hornschemeier, A. E., et al. 2019, ApJ, 884, 2, doi: 10.3847/1538-4357/ab3f32
* Maggi et al. (2013) Maggi, P., Haberl, F., Sturm, R., et al. 2013, A&A, 554, A1, doi: 10.1051/0004-6361/201321238
* Maitra et al. (2023) Maitra, C., Haberl, F., Kaltenbrunner, D., et al. 2023, The Astronomer’s Telegram, 15886, 1
* Martin et al. (2014) Martin, R. G., Nixon, C., Armitage, P. J., Lubow, S. H., & Price, D. J. 2014, ApJ, 790, L34, doi: 10.1088/2041-8205/790/2/L34
* McBride et al. (2017) McBride, V. A., González-Galán, A., Bird, A. J., et al. 2017, MNRAS, 467, 1526, doi: 10.1093/mnras/stx181
* McConnachie (2012) McConnachie, A. W. 2012, AJ, 144, 4, doi: 10.1088/0004-6256/144/1/4
* Monageng et al. (2017) Monageng, I. M., McBride, V. A., Coe, M. J., Steele, I. A., & Reig, P. 2017, MNRAS, 464, 572, doi: 10.1093/mnras/stw2354
* Monageng et al. (2020) Monageng, I. M., Coe, M. J., Buckley, D. A. H., et al. 2020, MNRAS, 496, 3615, doi: 10.1093/mnras/staa1739
* Monageng et al. (2022) Monageng, I. M., Coe, M. J., Townsend, L. J., et al. 2022, MNRAS, 511, 6075, doi: 10.1093/mnras/stac106
* Okazaki & Negueruela (2001) Okazaki, A. T., & Negueruela, I. 2001, A&A, 377, 161, doi: 10.1051/0004-6361:20011083
* Pecaut & Mamajek (2013) Pecaut, M. J., & Mamajek, E. E. 2013, ApJS, 208, 9, doi: 10.1088/0067-0049/208/1/9
* Rajoelimanana et al. (2011) Rajoelimanana, A. F., Charles, P. A., & Udalski, A. 2011, MNRAS, 413, 1600, doi: 10.1111/j.1365-2966.2011.18243.x
* Reig (2011) Reig, P. 2011, Ap&SS, 332, 1, doi: 10.1007/s10509-010-0575-8
* Reig & Fabregat (2015) Reig, P., & Fabregat, J. 2015, A&A, 574, A33, doi: 10.1051/0004-6361/201425008
* Rezaeikh et al. (2014) Rezaeikh, S., Javadi, A., Khosroshahi, H., & van Loon, J. T. 2014, MNRAS, 445, 2214, doi: 10.1093/mnras/stu1807
* Rivinius (2019) Rivinius, T. 2019, IAU Symposium, 346, 105, doi: 10.1017/S1743921318008207
* Roming et al. (2005) Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95, doi: 10.1007/s11214-005-5095-4
* Roy et al. (2022) Roy, A., Cappallo, R., Laycock, S. G. T., et al. 2022, ApJ, 936, 90, doi: 10.3847/1538-4357/ac82b6
* Stella et al. (1986) Stella, L., White, N. E., & Rosner, R. 1986, ApJ, 308, 669, doi: 10.1086/164538
* Straizys & Kuriliene (1981) Straizys, V., & Kuriliene, G. 1981, Ap&SS, 80, 353, doi: 10.1007/BF00652936
* Tamang et al. (2022) Tamang, R., Ghising, M., Tobrej, M., Rai, B., & Paul, B. C. 2022, MNRAS, 515, 5407, doi: 10.1093/mnras/stac2135
* Townsend et al. (2011) Townsend, L. J., Coe, M. J., Corbet, R. H. D., & Hill, A. B. 2011, MNRAS, 416, 1556, doi: 10.1111/j.1365-2966.2011.19153.x
* Townsend et al. (2017) Townsend, L. J., Kennea, J. A., Coe, M. J., et al. 2017, MNRAS, 471, 3878, doi: 10.1093/mnras/stx1865
* Udalski et al. (2015) Udalski, A., Szymański, M. K., & Szymański, G. 2015, Acta Astron., 65, 1. https://arxiv.org/abs/1504.05966
* Willingale et al. (2013) Willingale, R., Starling, R. L. C., Beardmore, A. P., Tanvir, N. R., & O’Brien, P. T. 2013, MNRAS, 431, 394, doi: 10.1093/mnras/stt175
# EnDex: Evaluation of Dialogue Engagingness at Scale
Guangxuan Xu1 Ruibo Liu2
Fabrice Harel-Canada1 Nischal Reddy Chandra1 Nanyun Peng1
1University of California, Los Angeles 2Dartmouth College
{gxu21, violetpeng} @cs.ucla.edu
###### Abstract
We propose EnDex, the first human-reaction based model to evaluate dialogue
engagingness. EnDex is trained on 80k Reddit-based Engagement Dataset (RED)
curated using a novel distant-supervision framework. Engagingness is a key
measure that captures high-level quality of AI dialogue systems and closely
reflects actual user experience. However, data shortage, plus the abstract and
extensive definition of engagingness makes it challenging to develop an
automatic metric. Our work departs from mainstream approaches that use
synthetic negative examples to train binary classifiers, and instead, proposes
a solution using distant-supervision from human-reaction feedback. To support
the soundness of our EnDex metric, we offer a theoretical foundation for
engagement, an extensive ablation study, and empirical evidence of high
correlation on five engagingness-related datasets. The off-the-shelf EnDex
model and the RED dataset are available at https://github.com/gxxu-ml/EnDex.
## 1 Introduction
Many modern generative language models are trained to maximize a likelihood
objective, but this paradigm tends to assign high probability to generic
responses (Li et al., 2016), such as “I don’t know.”. Prior research has
established that people prefer to converse with interesting, creative, and
informative agents (See et al., 2019), all concepts broadly related to the
notion of _engagingness_. Furthermore, engagingness is recognized as a key
evaluation metric for the quality of dialogue systems (Zhang et al., 2018;
Ghazarian et al., 2020). For example, FAIR’s ParlAI (Miller et al., 2017)
incorporated Engagingness as the default testing metric in the Blenderbot
system (Roller et al., 2021); dialogue data challenges, like ConvAI2 (Dinan et
al., 2019), Amazon Alexa Prize (https://www.amazon.science/alexa-prize), and
ensemble metrics like FED (Mehri and Eskenazi, 2020), all measure engagingness
to benchmark dialogue quality.
Figure 1: Example of an online post with scores for emotional engagement (EE),
attentional engagement (AE), and behavioral engagement (BE) in blue to
represent the 3 dimensions of _human engagement_ ; reply engagement (RE) in
red; and the aggregated EnDex score in green. We apply z-score to EnDex Score
and pick a hyper-parameter threshold to cluster posts into positive and
negative samples.
However, the current evaluation of engagingness still primarily relies on
expensive human annotation rather than off-the-shelf automatic tools, due to
several theoretical and technical challenges: firstly, unlike more well-
characterized properties such as fluency, the definition of engagingness is
significantly more abstract and multi-dimensional (See et al., 2019),
requiring well-tuned quality metrics for each sub-dimension to aggregate a
final score. Secondly, what qualifies as engaging is open-ended and many
different answers may embody the concept (Ghazarian et al., 2020). Therefore,
reference-based metrics requiring unique ground truth, such as BLEURT (Sellam
et al., 2020) and BERTScore (Zhang et al., 2020), cannot apply. Thirdly,
there is an acute shortage of large-scale, high-quality data annotated for
engagingness.
Ghazarian et al. (2020) jump-started efforts to automatically measure dialogue
engagement: they fine-tuned a BERT-based model (Devlin et al., 2019) on the
ConvAI2 and DailyDialog datasets (Li et al., 2017) to predict an engagingness
score. However, fine-tuning on small supervised datasets can easily lead to
overfitting and generalization problems. Another high-performing engagingness
metric, USL-H (Phy et al., 2020), assumes a positive set and generates
synthetic negative samples to train the model. However, credible positive
samples are not always available, and synthetic negative samples may not be
challenging enough to further advance classifier performance.
In light of the above challenges, we propose EnDex, a novel metric trained
with distantly supervised data to predict turn-level dialogue engagingness
(Figure 1). EnDex requires neither human annotations nor direct
disentanglement of engagingness. Instead, we leverage observed _user
reactions_ to posts as distant signals to model engagingness, which marks a
departure from mainstream approach to train on synthetic negative samples (Lan
et al., 2020; Ghazarian et al., 2022; Tao et al., 2018; Sato et al., 2020).
EnDex trains on real conversations sourced from Reddit, that are automatically
annotated as positive and negative examples with our framework. The novoel
dataset is named RED (Reddit Engagement Dataset) with over 80k labelled
samples. EnDex framework derives its theoretical underpinning from relevant
HCI works, and has shown superior performance on five benchmark datasets.
## 2 EnDex Metric
Engagingness is not only a linguistic concept useful for dialogue systems, but
also manifests itself in multi-modalities and is extensively leveraged to
benchmark gaming and online learning experiences (Silpasuwanchai et al., 2016;
Chen et al., 2005; Mcmahan, 2003; Schoenau-Fog, 2011). Our work is inspired by the HCI study of Human Engagement (Ma, 2018), which decomposes engagingness into three major dimensions: attentional engagement (e.g., clicks and scrolls), behavioral engagement (e.g., facial expressions), and emotional engagement (e.g., heart rate).
The EnDex metric follows the same intuition: we can infer the engagingness of a text by analyzing human reactions to it, for which there is abundant data in social media. The EnDex metric learns from our distantly supervised RED dataset, which measures dialogue engagement along four dimensions, as shown in Figure 1; three dimensions correspond to the original Human Engagement definition, and one distinct Reply Engagement dimension is added for the dialogue-specific task.
### 2.1 Reddit Engagement Dataset (RED)
We curate the Reddit Engagement Dataset (RED), a distant-supervision set, with
80k single-turn conversations. We source RED from Reddit, sampling from 43 popular subreddits, and processed a total of 5 million posts, filtering out posts that were non-conversational, toxic, or whose popularity could not be ascertained; the resulting data distribution of RED is shown in Table 1. The following sections explain the procedure used to automatically annotate EnDex scores and cluster samples into positive and negative sets.
We also curated a RED test set with 150 human-annotated samples obtained from a separate split of RED. The inter-annotator agreement is 0.34 Fleiss-Kappa,
indicating fair agreement, which reflects the challenge of determining
engagingness.
| Engaging | Non-engaging
---|---|---
# of samples | 40,162 | 40,162
Emotional | .605 $\pm$ .273 | .152 $\pm$ .120
Attentional | .759 $\pm$ .127 | .203 $\pm$ .100
Behavioral | .659 $\pm$ .274 | .318 $\pm$ .285
Reply | .718 $\pm$ .154 | .354 $\pm$ .980
EnDex | .709 $\pm$ .048 | .259 $\pm$ .033
Table 1: RED dataset has two classes, engaging and non-engaging, clustered by
applying z-score on EnDex score. This table shows the mean and standard
deviation of sub-dimension scores for both classes; the last row displays the
distribution of the overall EnDex score.
### 2.2 Distantly-Supervised Engagingness Scores
We use distant supervision to assign each sample in RED an EnDex score, which is the aggregate of 4 engagement dimensions. Section 2.2 discusses the intuition for each engagingness dimension; Section 2.3 explains how to adjust raw scores by thread popularity; Section 2.4 lays out the formula to normalize and aggregate sub-dimensions into the overall engagingness score; Section 2.5 explains sampling with z-scores to convert the task into binary classification.
* •
Emotional Engagement (EE): Emotional connection is a key sign of human engagement (Savin-Baden et al., 2014); we model EE by running a multi-class emotion classifier (Demszky et al., 2020) on post replies. If a post receives positive, emotional replies, it is considered engaging; negative or neutral replies indicate non-engagingness.
* •
Attentional Engagement (AE): More user time spent indicates higher engagement (Attfield et al., 2011). We model the AE of a post by examining whether it has edited replies, and by the information specificity of the replies.
* •
Behavioral Engagement (BE): Human behavioral features closely correlate with
their engagement state (Attfield et al., 2011), and we model BE by examining
Reddit post scores, adjusted by popularity.
* •
Reply Engagement (RE): Following the definition of Ghazarian et al. (2020), if a certain post is very likely to be continued by following threads, it is considered engaging; reply counts are also popularity-adjusted.
### 2.3 Adjustment for Popularity
Raw scores for Behavioral Engagement (upvotes) and Reply Engagement (reply counts) are heavily influenced by the popularity of the thread in which the post appears. A non-engaging post may receive high user interaction simply because it receives a lot of exposure; on the flip side, a very engaging post may receive zero user interaction simply because it is rarely seen. To mitigate this imbalanced-exposure problem, we calculate a popularity value for each thread and adjust each post's score by the popularity value of the thread in which it resides.
Popularity Value (PV). The PV of a post is given by the amount of exposure its parent post attracts. Let the target post be $\theta$ and its parent $\sigma$; $R_{eply}$ obtains the reply count of a post, and $U_{pvote}$ obtains its upvotes. The PV is defined in equation (1), where the coefficient 2 is adopted to give equal weight to replies and upvotes; the popularity-adjusted RE score is given by PVRE in equation (2), where $M_{pv}$ and $M_{re}$ are the medians of the popularity value and the reply count over the entire dataset. Only popularity-adjusted scores are used for calculating the EnDex score.
$\textrm{PV}(\theta)=2*R_{eply}(\sigma)+U_{pvote}(\sigma)$ (1)
$\textrm{PVRE}(re)=re+\frac{M_{pv}}{M_{\textrm{re}}}*\frac{re}{\textrm{PV(re)}}*re$
(2)
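For concreteness, a minimal Python sketch of Eqs. (1)–(2) is given below; the function and variable names are ours, and the numbers in the usage example are illustrative placeholders, not values from RED.

```python
def popularity_value(parent_replies: int, parent_upvotes: int) -> float:
    """Eq. (1): exposure proxy of a post, computed from its parent post;
    the coefficient 2 gives replies and upvotes equal weight."""
    return 2 * parent_replies + parent_upvotes

def adjusted_reply_engagement(re: int, pv: float, M_pv: float, M_re: float) -> float:
    """Eq. (2): reply count adjusted by the thread's popularity value,
    with M_pv and M_re the dataset medians of PV and reply count."""
    return re + (M_pv / M_re) * (re / pv) * re

# Example: a post with 12 replies whose parent drew 50 replies and 400 upvotes.
pv = popularity_value(50, 400)                                   # = 500
pvre = adjusted_reply_engagement(12, pv, M_pv=300.0, M_re=4.0)   # = 33.6
```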
### 2.4 Monotone Submodular Normalization
The final EnDex score is essentially a weighted sum of the 4 respective sub-dimension scores; an important nuance is the use of submodular normalization (shown in Eq. (3)) for 3 of the dimensions, which brings raw scores to the scale of 0-1. We observe that a unit increase in a raw score leads to a diminishing positive effect on engagingness. For example, a sentence with 100 replies should be more engaging than one with a single reply, but not 100 times more; thus, we normalize each engagingness score with the monotone submodular function $f(x)=\frac{x}{x+\alpha}$.
$N(x)=\Bigg(\displaystyle\sum_{i=1}^{3}w_{i}*\frac{x_{i}}{x_{i}+\alpha_{i}}\Bigg)+w_{\textrm{EE}}*x_{\textrm{EE}},$ (3)
$N$ is the normalized score for sample $x$; $x_{i}$ is $x$’s raw score on dimension $i$, where $i\in\\{RE,BE,AE\\}$; $\alpha_{i}$ is the median of the $i$-th dimension; $w_{i}$ is the weight for dimension $i$; and $w_{\textrm{EE}}$ is the weight for the EE dimension. The weights can be tuned for one's own usage of RED. (For EnDex, the $\alpha$ values for the three dimensions RE, BE, AE are 1, 2, 18, respectively; we also applied weights of 3, 3, 2, 1 for RE, AE, EE, and BE.)
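A minimal sketch of this aggregation in Python, using the $\alpha$ and weight values just stated (function names are ours):

```python
def submodular(x: float, alpha: float) -> float:
    """Monotone submodular map f(x) = x / (x + alpha): unit increases in the
    raw score yield diminishing gains, bounded in [0, 1) for x >= 0."""
    return x / (x + alpha)

def endex_score(re: float, be: float, ae: float, ee: float) -> float:
    """Eq. (3): weighted sum of submodular-normalized RE, BE, AE plus raw EE
    (EE is already a probability in [0, 1] and is not renormalized)."""
    alphas  = {"RE": 1.0, "BE": 2.0, "AE": 18.0}
    weights = {"RE": 3.0, "AE": 3.0, "EE": 2.0, "BE": 1.0}
    return (weights["RE"] * submodular(re, alphas["RE"])
            + weights["BE"] * submodular(be, alphas["BE"])
            + weights["AE"] * submodular(ae, alphas["AE"])
            + weights["EE"] * ee)
```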
### 2.5 Clustering with z-score
Essentially, engagingness prediction is a classification task, and we want to prepare the dataset for binary classification. We apply a $z$-score to the EnDex score to easily sample and cluster the data according to the standard deviation from the mean. A confidence threshold $\kappa$ (ours is 1) needs to be picked: samples that fall within $\kappa$ standard deviations of the mean are regarded as uncertain and discarded. We then cluster positive and negative samples using equation (4).
$\mathrm{Polarity}(x)=\begin{cases}1&\text{if }z\text{-score}(x)>\kappa\\ 0&\text{if }z\text{-score}(x)<-\kappa\end{cases}$ (4)
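The clustering step can be sketched as follows (illustrative, not the released pipeline):

```python
import numpy as np

def cluster_by_zscore(scores: np.ndarray, kappa: float = 1.0):
    """Eq. (4): z-score the EnDex scores and keep only confident samples.
    Returns indices of positive (engaging) and negative (non-engaging) posts;
    samples within kappa standard deviations of the mean are discarded."""
    z = (scores - scores.mean()) / scores.std()
    positive = np.where(z > kappa)[0]
    negative = np.where(z < -kappa)[0]
    return positive, negative
```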
The EnDex metric is then trained as a binary classifier by fine-tuning a RoBERTa-large model (Liu et al., 2019) on turn-level RED data.
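Below is a minimal sketch of this fine-tuning objective using the Hugging Face transformers API; the example pair and label are invented, and the full 2-epoch training loop follows the details given in Appendix A.2.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large",
                                                           num_labels=2)

# A single (post, reply) pair encoded as a sequence pair; label 1 = engaging.
batch = tokenizer("How was the concert last night?",
                  "Amazing -- the encore alone was worth the ticket!",
                  return_tensors="pt")
out = model(**batch, labels=torch.tensor([1]))
out.loss.backward()  # one gradient step of the binary-classification objective
```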
## 3 Experiments
Method | Better | PredEng-600 | Fed-eng | Red-Test | Grade
---|---|---|---|---|---
P | S | P | S | P | S | P | S | P | S
Random (ref.) | 0.025 | 0.025 | -0.012 | -0.013 | 0.080 | 0.081 | -0.053 | -0.053 | 0.053 | 0.045
Question | 0.167 | 0.167 | 0.073 | 0.074 | 0.320 | 0.320 | 0.194 | 0.194 | 0.009 | 0.008
Specificity | 0.357 | 0.357 | 0.076 | 0.102 | 0.254 | 0.254 | 0.122 | 0.122 | -0.090 | -0.090
USL-H | 0.356 | 0.343 | 0.688 | 0.699 | 0.267 | 0.277 | 0.121 | 0.125 | 0.234 | 0.243
Pred_En | 0.234 | 0.310 | -0.137 | -0.134 | 0.250 | 0.340 | 0.044 | 0.178 | -0.090 | -0.060
Pred_En (FT+DD) | 0.338 | 0.368 | 0.390 | 0.450 | 0.253 | 0.195 | 0.237 | 0.258 | 0.194 | 0.173
Ours: EnDex | 0.414* | 0.397* | 0.397 | 0.348 | 0.235* | 0.225* | 0.381** | 0.375** | 0.266 | 0.248
Ours: EnDex+NS | 0.478* | 0.455* | 0.597** | 0.577** | 0.229* | 0.214* | 0.389** | 0.378** | 0.308 | 0.282*
Ours: EnDex-Best | 0.521 | 0.511 | 0.620 | 0.629 | 0.286 | 0.253 | 0.414 | 0.405 | 0.406 | 0.352
Table 2: The correlation (P: Pearson, S: Spearman) between engagement scores and ground-truth human judgment. Best scores are emboldened and second-best are underlined. We train
EnDex and EnDex+NS 10 times and report the mean with * and ** indicating a
stdev < 0.05 and < 0.03, respectively. EnDex-Best is the best score observed
over the 10 runs. Compared to existing metrics, the EnDex-framework achieves
SOTA correlation with human judgement on engagingness, leading by far on our
newly proposed Red-Test dataset with more complex and longer texts than
chitchats.
### 3.1 Experiment Set-up
We test the performance of the EnDex metric on 5 golden evaluation sets that have turn-level labels. Among them, Better (Ghazarian et al., 2019) and PredEng-600 (Ghazarian et al., 2020) are annotated specifically for engagingness with high annotator agreement. Better samples are taken from human conversation, while half of PredEng-600 are chatbot generations. FED (Mehri and Eskenazi, 2020) annotates dialogue for 9 different dimensions, and we use its engagingness scores as target labels. GRADE (Huang et al., 2020) contains quality annotations for dialogue coherence, and we include it to test whether our model also has good zero-shot performance on related tasks. Lastly, our own RED-Test is sourced from Reddit and contains discussions on various topics. Evaluation set statistics are provided in Table 3 in the Appendix.
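Assuming, as is conventional, that P and S in Table 2 denote Pearson and Spearman correlation, each cell can be reproduced from paired metric scores and human judgments with scipy:

```python
from scipy.stats import pearsonr, spearmanr

def correlate(metric_scores, human_labels):
    """Pearson (P) and Spearman (S) correlation coefficients, as in Table 2."""
    p, _ = pearsonr(metric_scores, human_labels)   # second value is the p-value
    s, _ = spearmanr(metric_scores, human_labels)
    return p, s
```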
### 3.2 Ablation Study on Engaging Dimensions
To test the robustness of the 4 engagingness dimensions of EnDex (Section 2.2), we conducted an ablation study, training models using only signals from each of the 4 dimensions. We hypothesize that a dimension with a high positive contribution towards the final results should, by itself, yield a successful clustering of engaging and non-engaging samples; so, if we train a model on data clustered by such a dimension alone, we should still obtain a well-performing model.
We train five different models on different subsets of RED. All datasets
included the same 40k negative (i.e. non-engaging) samples drawn according to
our overall engagement score. However, the other 40k positive (i.e. engaging)
samples were selected according to a particular dimension score (e.g. EE, AE,
BE, and RE), except for EnDex, which is our aggregate score model. Figure 2
shows that all four dimensions correlate with engagingness to some degree, but
RE, AE, and EE are especially effective. We also observe a synergistic effect
of training on a composite score rather than any one dimension individually.
The experiment highlights and corroborates the multi-dimensionality of
engagingness previously reported in the literature (See et al., 2019).
Overall, having an aggregate score is crucial for successful distant-
supervised annotation of negative and positive examples.
Figure 2: Ablation study of our four engagement dimensions. The EnDex model was trained on our aggregate engagingness score, while RE, EE, AE, and BE indicate models trained only on scores reflecting that particular dimension.
### 3.3 Comparison with Related Works
We compare our EnDex metric and the heuristics-augmented EnDex+NS metric with five baselines. Three baselines are rule-based: Random, information Specificity (See et al., 2019), which counts the number of non-stopword tokens, and Inquisitiveness (Ghandeharioun et al., 2019), which examines question-asking ability. We included them because on some datasets, rule-based systems can work surprisingly well (Yeh et al., 2021).
We selected USL-H (Phy et al., 2020) as a baseline because it is the top
performing metric on the PredEng-600 and Fed engagingness evaluation sets Yeh
et al. (2021). USL-H is designed to measure high-level dialog quality,
including understandability, sensibleness, and likability; it trains 3 BERT-
based (Devlin et al., 2019) classifiers for each component, and uses a
composite score named USL-H for overall assessment. Pred_En (Ghazarian et al., 2020) uses BERT embeddings plus an MLP layer, trained on the ConvAI dataset (Dinan et al., 2019), to predict engagement scores. Pred_En (FT+DD) further fine-tunes the original Pred_En metric on the DailyDialog dataset to obtain better results.
Our model has two versions: EnDex is trained solely on human-reaction-based data; +NS means the non-engaging sample set is mixed with some rule-based negative samples, created by random insertion, random deletion, copying, and generic replies.
The experiments in Table 2 demonstrate that our model achieves strong performance on 4 engagingness-related datasets, and good correlation with one coherence dataset (Grade). EnDex surpasses Pred_En and USL-H by a large margin on the two real human-conversation datasets, Better and Red-Test. USL-H still leads on PredEng-600, and EnDex+NS's best model is a close second. Yeh et al. (2021) show that achieving a high score on FED-eng is challenging, with none of the 12 tested metrics surpassing 0.3 Spearman. A strong rule-based question-detection algorithm surprisingly claims the highest result, with EnDex a close second.
We find that training solely with human-reaction distant-supervision signals suffices to build competitive models on par with or even surpassing mainstream metrics, with better generalization to new domains, which seems to echo recent success in modeling human preferences via Reddit upvotes (Gao et al., 2020).
## 4 Conclusion
This paper proposes the first human-reaction-based model, EnDex, to evaluate dialogue engagingness, and curates the 80k Reddit Engagement Dataset (RED) using a novel distant-supervision framework. The success of EnDex demonstrates
the validity of training automatic metrics with human reaction signals,
offering a strong complement to a synthetic negative sampling approach. We
also release an off-the-shelf EnDex model, and a large scale dataset to
facilitate future research.
## Limitation
One limitation is that we only curated data for turn-level dialogue. Multi-
turn dialogues could also be useful, but it was computationally infeasible to
interactively query Reddit for entire threads of conversation. Future work can
explore this direction to produce dialogue-level and system-level engagingness
metrics.
We also have not fully explored our model's performance on non-dialogue domains, such as story or creative generation. The training data distribution from
the Reddit corpus is diverse enough that it could potentially achieve good
performance in non-dialogue settings. A valuable direction of future work is
to adapt our method for more general engagingness, or another evaluation
metric for open-domain generation.
## Ethics
A caveat of framing our approach around human attention is that not all texts attracting high attention are good and ethical. Since being engaging
often carries a positive connotation, we made a deliberate design decision to
mitigate forms of _negative engagement_ in our metric. For example, we assign
lower scores to samples flagged by Reddit as controversial, and our behavioral
engagement dimension subtracts downvotes from upvotes to punish negative,
biased (Liu et al., 2021), and aggressive comments. Moreover, we implemented
our emotional engagement algorithm to reward posts with positive emotional
replies and punish posts that prompt negative emotions. Future work may account for the darker aspects of engagingness in our framework and improve the EnDex metric to differentiate between positive and negative engagement.
Human annotations for RED-Test were obtained via Amazon Mechanical Turk. We
filtered out toxic samples to reduce the likelihood of offensive content and
paid $0.30 USD per instance for an expected hourly wage of $20 USD.
## Acknowledgement
We want to give special thanks to Sarik Ghazarian for great advice and help
with the baseline model; we also appreciate the brainstorming session with
Hongyuan Yang and the suggestion to use z-score for sampling from Zeyi Wang.
## References
* Attfield et al. (2011) Simon Attfield, Gabriella Kazai, and Benjamin Piwowarski. 2011. Towards a science of user engagement (position paper).
* Chen et al. (2005) Mark Chen, Beth E. Kolko, Elisabeth Cuddihy, and Eliana Medina. 2005. Modeling and measuring engagement in computer games. In _DiGRA Conference_.
* Demszky et al. (2020) Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4040–4054, Online. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. _ArXiv_ , abs/1810.04805.
* Dinan et al. (2019) Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander H. Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur D. Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander I. Rudnicky, Jason Williams, Joelle Pineau, Mikhail S. Burtsev, and Jason Weston. 2019. The second conversational intelligence challenge (convai2). _ArXiv preprint_ , abs/1902.00098.
* Gao et al. (2020) Xiang Gao, Yizhe Zhang, Michel Galley, Chris Brockett, and Bill Dolan. 2020. Dialogue response ranking training with large-scale human feedback data. _ArXiv_ , abs/2009.06978.
* Ghandeharioun et al. (2019) Asma Ghandeharioun, Judy Hanwen Shen, Natasha Jaques, Craig Ferguson, Noah Jones, Àgata Lapedriza, and Rosalind W. Picard. 2019. Approximating interactive human evaluation with self-play for open-domain dialog systems. In _Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada_ , pages 13658–13669.
* Ghazarian et al. (2019) Sarik Ghazarian, Johnny Wei, Aram Galstyan, and Nanyun Peng. 2019. Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. In _Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation_ , pages 82–89, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ghazarian et al. (2020) Sarik Ghazarian, Ralph M. Weischedel, Aram Galstyan, and Nanyun Peng. 2020. Predictive engagement: An efficient metric for automatic evaluation of open-domain dialogue systems. In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 7789–7796. AAAI Press.
* Ghazarian et al. (2022) Sarik Ghazarian, Nuan Wen, A. G. Galstyan, and Nanyun Peng. 2022. Deam: Dialogue coherence evaluation using amr-based semantic manipulations. _ArXiv preprint_ , abs/2203.09711.
* Hanu and Unitary team (2020) Laura Hanu and Unitary team. 2020. Detoxify. Github. https://github.com/unitaryai/detoxify.
* Huang et al. (2020) Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. GRADE: Automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 9230–9240, Online. Association for Computational Linguistics.
* Lan et al. (2020) Tian Lan, Xian-Ling Mao, Wei Wei, Xiaoyan Gao, and Heyan Huang. 2020. Pone: A novel automatic evaluation metric for open-domain generative dialogue systems. _ArXiv preprint_ , abs/2004.02399.
* Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 110–119, San Diego, California. Association for Computational Linguistics.
* Li et al. (2017) Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In _Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
* Liu et al. (2021) Ruibo Liu, Chenyan Jia, Jason Wei, Guangxuan Xu, Lili Wang, and Soroush Vosoughi. 2021. Mitigating political bias in language models through reinforced calibration. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 35(17):14857–14866.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _ArXiv preprint_ , abs/1907.11692.
* Ma (2018) Xiaojuan Ma. 2018. Towards human-engaged AI. In _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden_ , pages 5682–5686. ijcai.org.
* Mcmahan (2003) Alison Mcmahan. 2003. Immersion, engagement, and presence: A method for analyzing 3-d video games. _The Video Game Theory Reader_ , pages 67–86.
* Mehri and Eskenazi (2020) Shikib Mehri and Maxine Eskenazi. 2020. Unsupervised evaluation of interactive dialog with DialoGPT. In _Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue_ , pages 225–235, 1st virtual meeting. Association for Computational Linguistics.
* Miller et al. (2017) Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics.
* Phy et al. (2020) Vitou Phy, Yang Zhao, and Akiko Aizawa. 2020. Deconstruct to reconstruct a configurable evaluation metric for open-domain dialogue systems. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 4164–4178, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Roller et al. (2021) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 300–325, Online. Association for Computational Linguistics.
* Sato et al. (2020) Shiki Sato, Reina Akama, Hiroki Ouchi, Jun Suzuki, and Kentaro Inui. 2020. Evaluating dialogue generation systems via response selection. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 593–599, Online. Association for Computational Linguistics.
* Savin-Baden et al. (2014) Maggi Savin-Baden, Gemma Tombs, Roy Bhakta, and David J. H. Burden. 2014. Students’ experiences of emotional connection with pedagogical agents.
* Schoenau-Fog (2011) Henrik Schoenau-Fog. 2011. The player engagement process - an exploration of continuation desire in digital games. In _DiGRA Conference_.
* See et al. (2019) Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1702–1723, Minneapolis, Minnesota. Association for Computational Linguistics.
* Sellam et al. (2020) Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7881–7892, Online. Association for Computational Linguistics.
* Silpasuwanchai et al. (2016) Chaklam Silpasuwanchai, Xiaojuan Ma, Hiroaki Shigemasu, and Xiangshi Ren. 2016. Developing a comprehensive engagement framework of gamification for reflective learning. pages 459–472.
* Tao et al. (2018) Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: an unsupervised method for automatic evaluation of open-domain dialog systems. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 722–729. AAAI Press.
* Yeh et al. (2021) Yi-Ting Yeh, Maxine Eskénazi, and Shikib Mehri. 2021. A comprehensive assessment of dialog evaluation metrics. _ArXiv preprint_ , abs/2106.03706.
* Zhang et al. (2018) Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics.
* Zhang et al. (2020) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net.
## Appendix A Appendix
### A.1 RED Processing Steps
Our Reddit data is downloaded from Pushshift.io (https://files.pushshift.io/reddit/comments/), and we processed approximately 5M posts to curate the 80k-sample RED dataset. We deleted posts that do not have an immediate parent thread, because we need paired turn-level data. Our preprocessing removes non-conversational data, such as posts containing "&gt;" (the reply-to symbol). We also removed explicitly toxic data flagged by Detoxify (Hanu and Unitary team, 2020).
We also applied a key data-processing trick to reduce noisy signals: the exposure variable. It measures the amount of exposure each post receives and is used to normalize the post's upvote/reply score. We reward posts in low-exposure, unpopular threads, while penalizing posts in high-exposure, popular threads, because high upvote and reply counts in popular threads may owe more to exposure than to true engagingness.
After computing the normalized score given in Equation (8), we apply a z-score to normalize the final EnDex score according to its standard deviation, so that we can easily sample our data from it. A score lying more standard deviations above the mean implies a higher probability that the sample is engaging according to our metric. We apply a cut-off to sample high-probability engaging and non-engaging examples, and thus arrive at the RED dataset.
### A.2 Model training details
Our RED-Test set contains 300 human-labeled samples. The train/validation split during training is 0.8/0.2.
We used 4 Nvidia A6000 GPUs for training and 1 Nvidia A6000 GPU for inference. The average runtime for training one model is 2 minutes per epoch, and inference time is in seconds, negligible for the test set. Assuming a per-GPU power draw of 245 W, the estimated energy cost is 245 W * 4 GPUs * 60 s = 58,800 J per minute of training.
We trained our model for 2 epochs, saving only the best checkpoint, with a learning rate of 5e-5 and no extensive hyperparameter search.
We used specificity and question examination inspired by See et al. (2019); USL-H (Phy et al., 2020) and Pred_En are taken from a GitHub repo (https://github.com/exe1023/DialEvalMetrics) and modified to use a local bert-base-uncased model, since the original `bert-as-service` code no longer functions.
The formulas for calculating each dimension are given in the following:
* •
Reply Engagement: The raw Reply Engagement score ($re$) is just the reply count of a post. The popularity-adjusted RE score is given by PVRE in equation (5), where $M_{pv}$ and $M_{re}$ are the medians of the popularity value and the reply count over the entire dataset. Please refer to equation (1) for the calculation of the popularity value.
$\textrm{PVRE}(re)=re+\frac{M_{pv}}{M_{\textrm{re}}}*\frac{re}{\textrm{PV(re)}}*re$
(5)
* •
Behavioral Engagement: The raw BE score of a post ($be$) is obtained by subtracting downvotes from upvotes, and is set to 0 if the post carries a controversy flag. The popularity-adjusted BE score is given by PVBE in equation (6), where $M_{pv}$ and $M_{be}$ are the medians of the popularity value and the raw BE score over the entire dataset. Please refer to equation (1) for the calculation of the popularity value.
$\textrm{PVBE}(be)=be+\frac{M_{pv}}{M_{\textrm{be}}}*\frac{be}{\textrm{PV(be)}}*be$
(6)
* •
Attentional Engagement: It is calculated using maximum information specificity, i.e., the maximum number of non-stopword tokens in a post's replies, and whether its child posts are edited; $t$ is the maximum reply specificity, and $e$ is the number of edited replies (see the sketch after this list).
$\textrm{AE}(x)=t+10*e$ (7)
* •
Emotional Engagement: The EE score is the aggregate probability of all positive emotion categories, as produced by the GoEmotions classifier (Demszky et al., 2020).
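A minimal sketch of the AE formula and the EE aggregation above; the stopword set is an illustrative subset, and the classifier output format is an assumed interface, not the released implementation.

```python
from typing import Dict, List, Set

STOPWORDS = {"the", "a", "an", "is", "it", "to", "and", "of"}  # illustrative subset

def attentional_engagement(replies: List[str], n_edited: int) -> float:
    """Eq. (7): AE = t + 10*e, with t the maximum count of non-stopword
    tokens over all replies and e the number of edited replies."""
    t = max((sum(tok.lower() not in STOPWORDS for tok in reply.split())
             for reply in replies), default=0)
    return t + 10 * n_edited

def emotional_engagement(emotion_probs: Dict[str, float],
                         positive_labels: Set[str]) -> float:
    """EE: aggregate probability mass on positive emotion categories,
    given per-label probabilities from a GoEmotions-style classifier."""
    return sum(p for label, p in emotion_probs.items() if label in positive_labels)
```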
### A.3 Submodular Normalization and z-score Clustering
After obtaining the sub-dimension scores, we aggregate them into a single normalized EnDex score and lastly cluster the samples into positive and negative sets to train a binary classifier. The formulas are listed in the following:
$\textsc{EnDex}(x)=\Bigg(\displaystyle\sum_{i=1}^{3}w_{i}*\frac{x_{i}}{x_{i}+\alpha_{i}}\Bigg)+w_{\textrm{EE}}*x_{\textrm{EE}},$ (8)
Here, $\textsc{EnDex}(x)$ is the normalized score for sample $x$; $x_{i}$ is $x$’s raw score on dimension $i$, where $i\in\\{RE,BE,AE\\}$; $\alpha_{i}$ is the median of the $i$-th dimension; $w_{i}$ is the weight for dimension $i$; and $w_{\textrm{EE}}$ is the weight for the EE dimension. The weights can be tuned for one's own usage of RED.
A confidence threshold $\kappa$ (ours is 1) needs to be picked: samples that fall within $\kappa$ standard deviations of the mean are regarded as uncertain and discarded. We then cluster positive and negative samples using equation (9).
$\mathrm{Polarity}(x)=\begin{cases}1&\text{if }z\text{-score}(x)>\kappa\\ 0&\text{if }z\text{-score}(x)<-\kappa\end{cases}$ (9)
Figure 3: The screenshot of the task description of our Amazon MTurk
questionnaire. We have prepared instructions, demonstrations, and proper
warning of offensive content. Figure 4: The screenshot of the labeling area of
our Amazon MTurk questionnaire. Each pair will be labelled by three
annotators.
### A.4 Annotation data and test data
We performed annotation on Amazon Mechanical Turk and selected annotators based in the United States; we restricted the task to annotators with a 98% approval rate. We gave four examples and clear instructions for the task. Screenshots of our annotation interface are provided in Figures 3 and 4.
Table 3 gives a summary of the evaluation datasets we used.
Dataset | # of Samples | Context Length | Response Length | Source | Agreement Rate
---|---|---|---|---|---
BETTER | 297 | 6 | 8 | Human | N/A
PREDENG-600 | 600 | 12 | 14 | Human+Bot | 0.51
FED-ENG | 261 | 26 | 12 | Human+Bot | N/A
RED-TEST | 150 | 16 | 17 | Human | 0.34
GRADE | 150 | 12 | 14 | Human | N/A
Table 3: Dataset Statistics for the 5 golden evaluation sets, with number of
samples, context-length, response length, and if applicable, inter-annotator
agreement rate.
Quantum many-body thermal machines enabled by atom-atom correlations
R. S. Watson1 $\dagger$ and K. V. Kheruntsyan1$\star$
1 School of Mathematics and Physics, University of Queensland, Brisbane,
Queensland 4072, Australia
<EMAIL_ADDRESS>
⋆<EMAIL_ADDRESS>
## Abstract
Particle-particle correlations, characterized by Glauber’s second-order
correlation function, play an important role in the understanding of various
phenomena in radio and optical astronomy, quantum and atom optics, particle
physics, condensed matter physics, and quantum many-body theory. However, the
relevance of such correlations to quantum thermodynamics has so far remained elusive. Here, we propose and investigate a class of quantum many-body
thermal machines whose operation is directly enabled by second-order atom-atom
correlations in an ultracold atomic gas. More specifically, we study quantum
thermal machines that operate in a sudden interaction-quench Otto cycle and
utilize a one-dimensional Lieb-Liniger gas of repulsively interacting bosons
as the working fluid. The atom-atom correlations in such a gas are different
to those of a classical ideal gas, and are a result of the interplay between
interparticle interactions, quantum statistics, and thermal fluctuations. We
show that operating these thermal machines in the intended regimes, such as a
heat engine, refrigerator, thermal accelerator, or heater, would be impossible
without such atom-atom correlations. Our results constitute a step forward in
the design of conceptually new quantum thermodynamic devices which take
advantage of uniquely quantum resources such as quantum coherence,
correlations, and entanglement.
###### Contents
1. 1 Introduction
2. 2 Interaction-driven Otto heat engine.
3. 3 Work from second-order Glauber correlations.
4. 4 Interaction-driven Otto accelerator, heater, and refrigerator.
1. 4.1 Accelerator [A]
2. 4.2 Heater [H]
3. 4.3 Refrigerator [R]
5. 5 Summary and outlook.
6. A The Lieb-Liniger model for the 1D Bose gas
7. B Transverse Otto cycle
8. C Instantaneity of the sudden quench
9. D Glauber’s second order correlation function
10. E Exact thermodynamic Bethe ansatz results
11. F Maximum efficiency and maximum work
12. G Thermal operation regimes of other QTM’s
## 1 Introduction
Quantum thermal machines (QTM), such as quantum heat engines (QHE),
refrigerators, and quantum batteries, are central in the theoretical and
experimental development of the emerging field of quantum thermodynamics [1,
2]. Their primary utility is to explore the fundamental laws of thermodynamics
in the quantum realm and to demonstrate possible advantages gained by
utilising quantum resources. Accordingly, QTM’s are expected to play a similar
role in the development of quantum technologies as classical heat engines
played in fostering scientific advances during the Industrial Revolution. In
the past decade, progress in the control over quantum platforms, such as
single ions [3, 4], nitrogen vacancy centers [5], and single-atom impurities
immersed in an ultra-cold atomic bath [6], have led to the realization of
single-particle QHE’s. Such single-particle QHE’s represent the ultimate limit
in the realization of an ‘infinitesimal machine’ [7].
However, to utilize the breadth of quantum resources available, one must move
beyond single-particle systems—to engines that utilize interacting many-
particle systems. Such QHE’s are uniquely positioned to take advantage of
quantum resources, such as entanglement [8, 9, 10, 11], correlations [12, 13,
14], or quantum coherence [15, 16, 17, 18, 19, 20], to enhance the performance
of classical heat engines [21] or perform entirely new tasks that would be
impossible classically [22]. In particular, control over inter-particle
interactions allows for the creation of strictly many-body QHE’s [21, 23, 24,
25, 26], which have recently been realized in the laboratory [27, 28]. These
very recent experimental developments underscore the need for further studies
of thermodynamic processes in the context of interacting quantum many-body
thermal machines.
Here, we propose a quantum many-body Otto heat engine—as well as related
thermal machines including the Otto refrigerator, thermal accelerator, and
heater—using a uniform one-dimensional (1D) Bose gas with repulsive contact
interactions as the working fluid. In the proposed Otto cycles, the unitary
work strokes are driven by a sudden quench of the interaction strength. We
demonstrate how the thermodynamic performance, in particular net work and
efficiency, of these many-body QTM’s can be calculated through the
experimentally measurable atom-atom local pair correlation [29, 30, 31]. The
atom-atom local correlation, $g^{(2)}(0)$, is described by the second-order
Glauber correlation function $g^{(2)}(r)$ at zero interparticle separation
(i.e., when $r=0$, where $r=|x-x^{\prime}|$ is the distance between the two
particles with positions $x$ and $x^{\prime}$) and characterizes the
probability of pairs of atoms to be found at the same point, relative to
uncorrelated atoms.
The benefit of using the 1D Bose gas as the working fluid in the proposed Otto cycles is that the underlying theoretical model—the Lieb-Liniger model—is
exactly solvable in the uniform limit via the Yang-Yang thermodynamic Bethe
ansatz (TBA) [32, 33, 34], in addition to being experimentally realizable
using ultracold atomic gases confined to highly anisotropic traps [35, 36,
37]. This offers unique opportunities for gaining physical insights into the
performance of such Otto QTM’s as a tractable and testable quantum many-body
problem. Additionally, the Lieb-Liniger gas has a rich phase diagram spanning
several nontrivial regimes [29, 30], from the weakly interacting
quasicondensate [38, 39] through to the strongly interacting Tonks-Girardeau
regime of fermionization [40, 41]. The atom-atom correlation within these
regimes takes on a wide range of values between $0<g^{(2)}(0)<2$ [29, 30, 42],
depending on the temperature and interaction strength, which aids the
operation of the proposed Otto cycles under a variety of conditions. We
evaluate the performance of the 1D Bose gas Otto QTM’s, but we emphasise that
the broad conclusions arrived at here are not limited to the Lieb-Liniger
model.
A related interaction-driven Otto engine cycle with a uniform 1D Bose gas as
the working fluid was previously studied by Chen et al. in Ref. [25] in the
opposite limit of a quasi-static, or isentropic (rather than sudden quench),
work strokes. In this case, the performance of the engine cannot be expressed
merely in terms of Glauber’s second-order correlation function, as both the
kinetic and interaction energies evolve and change during the interaction
quench. Nevertheless, the performance of such an isentropic Otto engine could
still be evaluated analytically in the low-temperature regime, owing to the
known thermodynamic properties of the system using the Tomonaga-Luttinger
liquid approach.
## 2 Interaction-driven Otto heat engine.
In a uniform 1D Bose gas, described by the integrable Lieb-Liniger model [32]
(see Appendix A), the interatomic interaction strength $\chi$ can be expressed
in terms of the harmonic trap frequency $\omega_{\perp}$ in the tightly
confined (transverse) direction and the 3D $s$-wave scattering length $a_{s}$
as $\chi\simeq 2\hbar\omega_{\perp}a_{s}$ [43]. Accordingly, changing the
interaction strength $\chi$ may be achieved by either tuning the external
trapping potential that controls the transverse confinement $\omega_{\perp}$
or by changing the scattering length $a_{s}$ by means of a magnetic Feshbach
resonance [44]. The former option leads to a volume change of the gas (i.e.
transverse expansion or compression), and hence can be thought of as analogous
to mechanical work in the conventional Otto cycle. However, changing $\chi$
via a change of the scattering length leads to identical results, which then
justifies our referral to the engine cycle as the Otto cycle [45, 46, 24, 25,
27, 6, 47, 48, 49, 50] regardless of the means of tuning the interaction
strength (for further discussion, see Appendix B). We emphasize,
however, that the dynamics of the quantum Otto cycle explored here are
strictly longitudinal, with the gas always remaining in its transverse ground
state.
Figure 1: An interaction-driven quantum many-body Otto engine cycle, operating
between two interaction strengths, $\chi_{c}$ and $\chi_{h}$. Unitary work
strokes (BC and DA) are shown in black, while non-unitary thermalization
strokes (AB and CD) are color-coded to the cold (blue) and hot (red)
reservoirs at temperatures $T_{c}$ and $T_{h}$, respectively.
The interaction-driven Otto engine cycle, which we thus consider, consists of
four strokes (see Fig. 1):
* (1)
Thermalization with hot reservoir, A$\to$B: the working fluid, consisting of
$N$ total atoms at interaction strength $\chi_{h}$, is connected to a hot
($h$) reservoir at temperature $T_{h}$, where it is left to equilibrate,
taking in heat
$Q_{1}\\!=\\!\langle\hat{H}\rangle_{\textbf{B}}\\!-\\!\langle\hat{H}\rangle_{\textbf{A}}\\!>\\!0$,
which is to be partially converted into beneficial work in the subsequent
stroke. Here $\hat{H}$ is the system Hamiltonian (see Appendix A), and
$\langle\hat{H}\rangle_{\textbf{j}}$ is its expectation value, i.e., the total
energy of the system, in state $\textbf{j}=\\{\textbf{A,B,C,D}\\}$.
* (2)
Unitary expansion, B$\to$C: the working fluid, now in a thermal equilibrium
state at temperature $T_{h}$, is decoupled from the hot reservoir and has its
interaction strength quenched from $\chi_{h}$ to $\chi_{c}\\!<\\!\chi_{h}$,
generating beneficial work
$W_{1}\\!=\\!\langle\hat{H}\rangle_{\textbf{C}}\\!-\\!\langle\hat{H}\rangle_{\textbf{B}}\\!<\\!0$
done by the fluid.
* (3)
Thermalization with cold reservoir, C$\to$D: the working fluid is connected to
a cold ($c$) reservoir at temperature $T_{c}<T_{h}$ and allowed to equilibrate
at constant interaction strength $\chi_{c}$ while dumping energy in the form
of heat
$Q_{2}\\!=\\!\langle\hat{H}\rangle_{\textbf{D}}\\!-\\!\langle\hat{H}\rangle_{\textbf{C}}\\!<\\!0$
into the cold reservoir.
* (4)
Unitary compression, D$\to$A: disconnected from the cold reservoir, the
working fluid has its interaction strength quenched from
$\chi_{c}\\!\to\\!\chi_{h}$, with work
$W_{2}\\!=\\!\langle\hat{H}\rangle_{\textbf{A}}\\!-\\!\langle\hat{H}\rangle_{\textbf{D}}\\!>\\!0$
done on the fluid, and the system returning to the initial state of the
overall cycle.
Such an engine cycle generates net beneficial work (done by the fluid) if the
total work $W\\!=\\!W_{1}\\!+\\!W_{2}\\!<\\!0$, i.e., if $|W_{1}|>W_{2}$ (or
$Q_{1}>|Q_{2}|$), with efficiency
$\eta=-\frac{W}{Q_{1}}=1-\frac{|Q_{2}|}{Q_{1}},$ (1)
where we used the conservation of energy $W+Q=0$, with $Q=Q_{1}+Q_{2}$ being
the total heat [51].
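For bookkeeping, the four stroke energies determine every cycle quantity; the sketch below (our notation, with E_j standing for $\langle\hat{H}\rangle_{\textbf{j}}$) encodes strokes (1)–(4) and Eq. (1).

```python
def otto_cycle(E_A, E_B, E_C, E_D):
    """Energy bookkeeping for the four strokes of Fig. 1 (energy into the
    working fluid counted as positive)."""
    Q1 = E_B - E_A    # stroke (1): heat intake from the hot reservoir
    W1 = E_C - E_B    # stroke (2): unitary expansion, W1 < 0 for an engine
    Q2 = E_D - E_C    # stroke (3): heat dumped into the cold reservoir
    W2 = E_A - E_D    # stroke (4): unitary compression, work done on the fluid
    W = W1 + W2       # net work; W < 0 means beneficial work is extracted
    eta = -W / Q1 if (W < 0 and Q1 > 0) else float("nan")  # efficiency, Eq. (1)
    return W, Q1, Q2, eta
```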
## 3 Work from second-order Glauber correlations.
In this work, we specifically consider a sudden or instantaneous quench of the
interaction strength $\chi$ in the unitary strokes (2) and (4). (For a
discussion of “instantaneity” of the sudden quench, see Appendix C). Under a
sudden interaction quench, the initial ($i$) and final ($f$) expectation
values over field operators in the system Hamiltonian, i.e., the expectation
values before and after the quench, remain unchanged as the system did not
have sufficient time to evolve into a new state. Hence, the only contribution
to the difference in total energy before and after the quench,
$\langle\hat{H}\rangle_{f}-\langle\hat{H}\rangle_{i}$, comes from the
difference between the interaction terms, $\frac{1}{2}\chi_{f}\int
dz\langle\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi}\rangle_{f}-\frac{1}{2}\chi_{i}\int
dz\langle\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi}\rangle_{i}$,
where
$\langle\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi}\rangle_{f}=\langle\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi}\rangle_{i}$
in a sudden quench, and $\hat{\Psi}^{\dagger}(z)$ and $\hat{\Psi}(z)$
represent the field creation and annihilation operators. Accordingly, the
energy difference can be expressed as
$\langle\hat{H}\rangle_{f}-\langle\hat{H}\rangle_{i}\\!=\\!\frac{1}{2}(\chi_{f}\\!-\\!\chi_{i})\overline{G^{(2)}_{i}}$,
where we have defined the total (integrated) second-order correlation of the
thermal equilibrium state $\overline{G^{(2)}_{i}}\\!=\\!\int
dz\langle\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi}\rangle_{i}$
[30].
Identifying the $i$ and $f$ states as points B (hot, $h$) and C, or as D
(cold, $c$) and A in the diagram of Fig. 1, the net work of the Otto heat
engine can be expressed as
$W=-\frac{1}{2}(\chi_{h}-\chi_{c})\left(\overline{G^{(2)}_{h}}-\overline{G^{(2)}_{c}}\right),$
(2)
whereas the efficiency, Eq. (1), of the engine as
$\eta=1-\frac{\langle\hat{H}\rangle_{h}-\langle\hat{H}\rangle_{c}-\frac{1}{2}\left(\chi_{h}-\chi_{c}\right)\overline{G^{(2)}_{h}}}{\langle\hat{H}\rangle_{h}-\langle\hat{H}\rangle_{c}-\frac{1}{2}\left(\chi_{h}-\chi_{c}\right)\overline{G^{(2)}_{c}}}.$
(3)
These equations allow for evaluation of the performance of the interaction-
driven Otto engine under a sudden quench through solely the _equilibrium_
properties of the gas as all relevant expectation values here are with respect
to $h$ (B) or $c$ (D) states.
The equilibrium phase diagram of the uniform 1D Bose gas consists of several
distinct regimes (connected by smooth crossovers) that can be characterized by
the normalized same-point atom-atom correlation function $g^{(2)}(0)$ [29].
This local pair correlation is a thermodynamic quantity that can be calculated
from the exact TBA, as well as via approximate analytic methods in six
asymptotic regimes; it is shown in Fig. 5 in Appendix D. The $g^{(2)}(0)$
function and the different regimes of the 1D Bose gas can be parameterized by
dimensionless interaction strength, $\gamma\\!=\\!m\chi/\hbar^{2}\rho$, and
dimensionless temperature, $\tau\\!=\\!2mk_{B}T/\hbar^{2}\rho^{2}$, where $m$
is the boson mass and $\rho\\!=\\!N/L$ is the 1D density for $N$ atoms in a
system of length $L$.
For a uniform 1D Bose gas, the total correlation in the hot or cold thermal
equilibrium state may be expressed as $\overline{G^{(2)}_{h(c)}}\\!=\\!N\rho
g^{(2)}_{h(c)}(0)$ (see Appendix D). Combining this with Eq. (2), the net work
per particle can be expressed as
$\frac{W}{N}=-\frac{\hbar^{2}\rho^{2}}{2m}(\gamma_{h}-\gamma_{c})\left(g^{(2)}_{h}(0)-g^{(2)}_{c}(0)\right),$
(4)
meaning the net work is directly proportional to the difference between atom-
atom correlations of the 1D Bose gas in the hot and cold thermal equilibrium
states. This simple relationship between thermodynamic work and Glauber
second-order correlation function represents the key result of this work.
From Eq. (4), and given that $\gamma_{h}$ is always larger than $\gamma_{c}$
(as $\chi_{h}>\chi_{c}$; see Fig. 1), we see that if the local pair
correlations did not depend on the respective interaction strengths and
temperatures, i.e. if they were the same,
$g^{(2)}_{h}(0)\\!=\\!g^{(2)}_{c}(0)$, then the net work per particle would
vanish. We therefore conclude that extracting net work ($W\\!<\\!0$) from this
Otto cycle, and hence operating it as a heat engine, can only be enabled by
atom-atom correlations; more specifically, the only way to extract net work is
to have $g^{(2)}_{h}(0)\\!>\\!g^{(2)}_{c}(0)$ (see Fig. 5 (c) in Appendix D
for an illustration).
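For illustration, Eq. (4) can be evaluated directly once the local correlations are known; the sketch below uses an illustrative atom mass (Rb-87) and placeholder parameter values, not results from this paper.

```python
HBAR = 1.054571817e-34   # reduced Planck constant [J s]
M_RB87 = 1.44316e-25     # mass of a Rb-87 atom [kg], illustrative choice

def net_work_per_particle(rho, gamma_h, gamma_c, g2_h, g2_c, m=M_RB87):
    """Eq. (4): net work per particle of the sudden-quench Otto cycle.
    W/N < 0 (beneficial work) requires g2_h > g2_c, since gamma_h > gamma_c."""
    return -(HBAR**2 * rho**2) / (2 * m) * (gamma_h - gamma_c) * (g2_h - g2_c)

# Placeholder values: rho is the 1D density in atoms per metre;
# here W/N comes out negative, i.e., the engine regime.
w_per_n = net_work_per_particle(rho=5e6, gamma_h=1.5, gamma_c=1.0,
                                g2_h=0.8, g2_c=0.5)
```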
Figure 2: Performance of the interaction driven quantum Otto heat engine.
Columns (a)–(c) demonstrate net work, $|W|$, and efficiency, $\eta$, as a
function of the ratio of interaction strength, $\gamma_{h}/\gamma_{c}$, and
temperature, $\tau_{h}/\tau_{c}$, between the hot ($h$) and cold ($c$) thermal
equilibrium states. Regions corresponding to $W\\!>\\!0$, here colored grey,
are outside the engine operation regime and are considered further in Figs. 3
and 4 below. The example of panel (a) is for $\gamma_{c}\\!=\\!10^{-3}$ and
$\tau_{c}\\!=\\!10^{-2}$, where $\gamma_{h}/\gamma_{c}$ and
$\tau_{h}/\tau_{c}$ explores the parameter range within the region II of the
equilibrium phase diagram of Fig. 5 (b) of Appendix D. Similarly, panel (b) explores region IV, with $\gamma_{c}\\!=\\!1$ and $\tau_{c}\\!=\\!10$, whereas (c) explores region VI, with $\gamma_{c}\\!=\\!10$ and $\tau_{c}\\!=\\!1$.
Net work, Eq. (4), and efficiency, Eq. (3), of this quantum Otto engine,
calculated for simplicity using analytic approximations to the atom-atom
correlation function and total energy [29, 52], are shown in Fig. 2 as a
function of the ratio of interaction strengths, $\gamma_{h}/\gamma_{c}$, and
temperatures, $\tau_{h}/\tau_{c}$, for three of the six asymptotic regimes
(for results outside of the regimes of analytic approximation, see Appendix
E). Notably, in this Otto engine cycle, for any fixed value of the temperature
ratio, the interaction strength quench corresponding to maximum net work is
approximately the same as that providing maximum efficiency; this occurs as,
to first order, the heat intake $Q_{1}$ varies slowly with
$\gamma_{h}/\gamma_{c}$, meaning $\eta\\!\propto\\!W$ (see Appendix F). The
observed increase of net work and efficiency in all regimes under large
temperature ratio may be attributed to the fact that the local correlation of
the hot thermal state in Eq. (4) is always a monotonically increasing function
of $\tau$. However, as the correlation function is also monotonically
decreasing under $\gamma_{h}$, this results in no extractable net work under
sufficiently large interaction strength ratios for any given temperature
ratio.
At a glance, one may conclude that the net work, which is enabled through the
$g^{(2)}(0)$ correlation function, is maximized under the largest possible
difference in correlation function, i.e.
$g^{(2)}_{h}(0)\\!-\\!g^{(2)}_{c}(0)\\!\simeq\\!2$. However, to achieve this,
while also guaranteeing that $\gamma_{h}\\!>\\!\gamma_{c}$, would require an
unrealistically high (from a practical point of view) temperature ratio to
operate between regimes VI and IV, shown in Fig. 5 (b) of Appendix D. Rather,
we observe that, while the $g^{(2)}(0)$ correlation function is responsible
for enabling operation as a heat engine, the _magnitude_ of net work is
governed more strongly by the difference in the interaction strengths,
$\gamma_{h}-\gamma_{c}$, which is more susceptible to large variations in
practice using, e.g., magnetic Feshbach tuning of the $s$-wave scattering
length.
Consequently, it is in the weakly interacting ($\gamma\ll 1$) region II, shown
in Fig. 2 (a), where $\gamma_{h}-\gamma_{c}$ is small, that we observe the
lowest magnitude of net work. In comparison, the magnitude of net work is
largest in regime IV, shown in Fig. 2 (b), where $\gamma\\!\sim\\!1$ and hence
the difference $\gamma_{h}\\!-\\!\gamma_{c}$ can also be on the order of one.
The same considerations apply to the strongly interacting ($\gamma\\!\gg\\!1$)
regime VI, where one can operate under the largest magnitudes of interaction
strengths; however, in this regime the net work is diminished due to the
vanishing of correlation itself ($g^{(2)}\\!\ll\\!1$) arising from the effect
of fermionization [40]. In contrast to these observations, the efficiency of
the engine, Eq. (3), is inversely dependent on the total energy of the thermal
states, which is minimal in the weakly interacting low temperature regime II,
which thus has the largest efficiency.
## 4 Interaction-driven Otto accelerator, heater, and refrigerator.
Glauber’s $g^{(2)}(0)$ correlation function is inherently dependent on the
interaction strength and temperature [29]. This implies that the condition for
the Otto cycle to operate as a heat engine,
$g^{(2)}_{c}(0)\\!<\\!g^{(2)}_{h}(0)$, where $\gamma_{c}\\!<\\!\gamma_{h}$,
cannot hold under large quenches of interaction strength as the gas becomes
increasingly fermionized (i.e., $g_{h}^{(2)}(0)\\!\to\\!0$) in the limit
$\gamma\\!\to\\!\infty$ [40].
However, beyond the heat engine [E] operation regime, a further three QTM’s
are thermodynamically allowed [53], namely, the accelerator [A], heater [H],
and refrigerator [R]. For these QTM’s, one may define a coefficient of
performance (CoP) according to [54]:
$\mathrm{CoP[QTM]}=\frac{\mathrm{benefit\,of\,operation}}{\mathrm{cost\,of\,operation}}.$
(5)
We now discuss the operating conditions and the CoP’s of these three
additional QTM’s in detail.
### 4.1 Accelerator [A]
Figure 3: Energy flow diagrams and schematics of the interaction driven Otto
cycles in different regimes: (a) heat engine [E], (b) accelerator [A], (c)
heater [H], and (d) refrigerator [R]. In all four regimes, the difference
between the energies of the hot and cold thermal state (B and D) are the same,
however the energies of the resulting state after the interaction driven work
strokes can be different, leading to these four different physically valid
outcomes. Panel (e) shows the layout of how the operating regimes of these different QTM’s depend on the ratio of interaction strengths, $\gamma_{h}/\gamma_{c}$, and temperatures, $\tau_{h}/\tau_{c}$, of the hot and cold thermal states in the same three asymptotic regimes (II, IV, and VI) as in Fig. 2. The respective coefficients of performance of these QTM’s within
these operating boundaries are shown in Fig. 4. Figure 4: Coefficients of
performance (CoP) of QTMs in the accelerator [A], heater [H], and refrigerator
[R] regimes within the respective parameter ranges II, IV, and VI shown in
Fig. 3 (e).
The conditions for operating the Otto cycle as a thermal accelerator are given by: $W\\!>\\!0$, $Q_{1}\\!>\\!0$, $Q_{2}\\!<\\!0$, with $|Q_{2}|>|Q_{1}|$,
where $W\\!=\\!0$ defines the border between the heat engine and the
accelerator, whereas $Q_{1}\\!=\\!0$ defines the border between the
accelerator and the next QTM, the heater; see Fig. 3, where we show the
simplified schematics of all additional QTM’s compared to the heat engine from
Fig. 1, which we repeat here in panel (a) for comparison. The thermal
accelerator [53], shown in panel (b) of Fig. 3, enhances the natural flow of
heat, taken into the working fluid from the hot reservoir, $Q_{1}$, and
transferred to the cold reservoir, $Q_{2}$, which is aided by the working
fluid through the investment of net mechanical work, $W>0$, in the process.
According to Eq. (5), its CoP is given by:
$\mathrm{CoP[A]}=-\frac{Q_{2}}{W}=1+\frac{Q_{1}}{W}>1,$ (6)
where,
$\displaystyle Q_{2}$
$\displaystyle=-\langle\hat{H}\rangle_{h}+\langle\hat{H}\rangle_{c}+\frac{N\hbar^{2}\rho^{2}}{2m}(\gamma_{h}-\gamma_{c})g^{(2)}_{h}(0),$
(7) $\displaystyle Q_{1}$
$\displaystyle=\langle\hat{H}\rangle_{h}-\langle\hat{H}\rangle_{c}-\frac{N\hbar^{2}\rho^{2}}{2m}(\gamma_{h}-\gamma_{c})g^{(2)}_{c}(0),$
(8)
whereas work $W$ is given by Eq. (4) as before. The magnitude of CoP[A] is
shown in Fig. 4 (a), where we note that, at the border between [E] and [A],
the coefficient of performance diverges due to its inverse dependence on $W$.
Operation of this QTM, enabled through $g^{(2)}_{h}(0)\\!<\\!g^{(2)}_{c}(0)$,
additionally requires that
$|W_{1}|=\langle\hat{H}\rangle_{\mathbf{B}}\\!-\\!\langle\hat{H}\rangle_{\mathbf{C}}>0$
(where $W_{1}<0$) and
$W_{2}=\langle\hat{H}\rangle_{\mathbf{A}}\\!-\\!\langle\hat{H}\rangle_{\mathbf{D}}>0$,
which are proportional to $g^{(2)}_{c}(0)$ and $g^{(2)}_{h}(0)$, respectively,
must individually remain smaller than the energy gap, $\Delta
E\\!=\\!\langle\hat{H}\rangle_{\mathbf{B}}\\!-\\!\langle\hat{H}\rangle_{\mathbf{D}}$,
between the hot and cold thermal states, as shown in the cycle diagram in Fig.
3 (b). The conditions $|W_{1}|<\Delta E$ and $W_{2}<\Delta E$ may be expressed
as
$\displaystyle
N\frac{\hbar^{2}\rho^{2}}{2m}(\gamma_{h}-\gamma_{c})g^{(2)}_{h}(0)<\Delta E,$
(9) $\displaystyle
N\frac{\hbar^{2}\rho^{2}}{2m}(\gamma_{h}-\gamma_{c})g^{(2)}_{c}(0)<\Delta E,$
(10)
and are equivalent to $Q_{1}\\!>\\!0$ and $Q_{2}\\!<\\!0$, respectively.
### 4.2 Heater [H]
Operating the Otto cycle in the heater regime requires:
$W>0,Q_{1}<0,\,Q_{2}<0$, and is shown schematically in Fig. 3 (c). This QTM
utilizes mechanical work to heat up both hot and cold thermal reservoirs. The
border with the accelerator regime is defined by $Q_{1}\\!=\\!0$, and the
lower border with the refrigerator QTM is defined by $Q_{2}\\!=\\!0$; see Fig.
3 (e). We note that the condition $Q_{1}\\!<\\!0$ corresponds to
$W_{2}\\!=\\!\langle\hat{H}\rangle_{\mathbf{A}}\\!-\\!\langle\hat{H}\rangle_{\mathbf{D}}\\!>\\!\Delta
E$, opposite to the condition for the accelerator regime [A]. Additionally,
since an arbitrarily large interaction strength quench to $\gamma_{h}\gg 1$
incurs fermionization (i.e. $g^{(2)}_{h}(0)\\!\to\\!0$), operation as a heater
is inevitable as $\gamma_{h}/\gamma_{c}\to\infty$ at any fixed temperature
ratio $\tau_{h}/\tau_{c}$ (see Appendix G).
If one considers the benefit of operation of the heater to be heating of both
reservoirs, then its CoP is trivially $\mathrm{CoP[H]}\\!=\\!-Q/W\\!=\\!1$.
Instead, we define the benefit of operation of this QTM to be the heating of
the hot reservoir; thus
$\mathrm{CoP[H]}=-\frac{Q_{1}}{W}=1-\frac{|Q_{2}|}{W}<1,$ (11)
which is shown in Fig. 4 (b).
One may alternatively define the benefit of the heater as heating the cold
reservoir, in which case the CoP would be given by:
$\mathrm{CoP[H]}^{\prime}=-\frac{Q_{2}}{W}=1-\mathrm{CoP[H]}<1.$ (12)
Both definitions of $\mathrm{CoP[H]}$, however, are limited to be less than or
equal to $1$ by energy conservation.
### 4.3 Refrigerator [R]
The conditions of operating the Otto cycle as a refrigerator are:
$W>0,\,Q_{1}<0,\,Q_{2}>0$, with $|Q_{2}|<|Q_{1}|$. The purpose of this thermal
machine is to cool down the cold reservoir by extracting heat and dumping it
into the hot reservoir, with the aid of mechanical work done by the working
fluid. The boundary between the refrigerator and the heater is defined by
$Q_{2}\\!=\\!0$, see Fig. 3 (d). The CoP for the refrigerator is given by [54]
$\mathrm{CoP[R]}=\frac{Q_{2}}{W}=\frac{|Q_{1}|}{W}-1,$ (13)
and is shown in Fig. 4 (c); it diverges in the limit of infinitesimal quenches
in both interaction strength and temperature, because the benefit of
refrigeration, $Q_{2}$, vanishes slower than the cost, $W$, in these limits
(see Appendix G). Further, as noted in the section addressing the heater QTM,
large interaction strength quenches inevitably incur operation as a heater for
any fixed temperature ratio. This implies that refrigeration occurs only over
a finite parameter region, most clearly visible in regimes IV and VI of Fig. 3
(e).
The role of the atom-atom local correlation function $g^{(2)}(0)$ in the
thermal operation regimes and the respective boundaries of these additional
QTM’s is discussed further in Appendix G.
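To make these operating boundaries concrete, the following minimal Python sketch classifies a cycle from its net work and heats, using the sign conventions of Secs. 4.1–4.3 and the CoP definitions of Eqs. (6), (11) and (13), which jointly imply the first-law convention $W+Q_{1}+Q_{2}=0$; the function name and the engine-efficiency branch are our own illustrative choices.

```python
def classify_qtm(W, Q1, Q2, tol=1e-12):
    """Classify an Otto cycle as engine [E], accelerator [A], heater [H]
    or refrigerator [R] from the net work W and the heats Q1 (hot) and
    Q2 (cold), and return the corresponding figure of merit."""
    assert abs(W + Q1 + Q2) < tol, "inputs must satisfy the first law"
    if W < 0 and Q1 > 0 and Q2 < 0:
        return "E", -W / Q1    # efficiency |W|/Q1 (illustrative definition)
    if W > 0 and Q1 > 0 and Q2 < 0:
        return "A", -Q2 / W    # Eq. (6); |Q2| > |Q1| follows from W > 0
    if W > 0 and Q1 < 0 and Q2 < 0:
        return "H", -Q1 / W    # Eq. (11): heating of the hot reservoir
    if W > 0 and Q1 < 0 and Q2 > 0:
        return "R", Q2 / W     # Eq. (13); |Q2| < |Q1| follows from W > 0
    return "?", float("nan")   # regime boundary or unphysical input

print(classify_qtm(W=1.0, Q1=0.5, Q2=-1.5))  # ('A', 1.5), i.e. CoP[A] > 1
```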
## 5 Summary and outlook
We have proposed a sudden interaction-quench Otto cycle operating in a quantum
many-body regime using a repulsive 1D Bose gas as a working fluid. Extracting
net work from such a cycle in the heat engine regime was shown to be enabled
by atom-atom correlations. Such correlations are characterized by Glauber’s
second-order correlation function, $g^{(2)}(0)$, which is a thermodynamic
quantity that can be calculated from the exact TBA solution through
application of the Hellmann-Feynman theorem to the Helmholtz free energy [29].
Further, we have investigated other operational regimes of this Otto cycle,
such as the thermal accelerator, heater, and refrigerator cycles, defining and
examining their coefficients of performance.
Though our specific results for the net work and the efficiency were
calculated for a uniform 1D Bose gas as an example, the broad conclusions arrived
at here on the basis of Eqs. (2) and (3) are applicable to any other many-body
system with short-range contact interactions and should aid the tests of
quantum thermodynamic concepts and realization of novel QTMs in laboratory
settings.
## Acknowledgements
This work was supported through Australian Research Council Discovery Project
Grant No. DP190101515.
Appendices
In these Appendices, we briefly review the Lieb-Liniger model of the one-
dimensional (1D) Bose gas with contact interactions, and discuss how the
interaction quench can be achieved via changes of the strength of the
transverse confinement of the 1D Bose gas. We also discuss physical
considerations for the sudden quench to be effectively instantaneous. We next
present the relevant analytic results for the Glauber’s second-order
correlation $g^{(2)}(0)$ and the Hamiltonian energy $\langle\hat{H}\rangle$ in
six different asymptotic regimes of the 1D Bose gas, used in the main text for
evaluating the net work and efficiency of the Otto quantum heat engine. We
also present equivalent numerical results obtained using the exact
thermodynamic Bethe ansatz (TBA), and use these to explore a sudden quench
Otto cycle operation outside of the analytically tractable asymptotic regimes
of the 1D Bose gas. This is followed by a discussion of the maximum efficiency
at maximum work. Finally, we discuss the role of the pair correlation
$g^{(2)}(0)$ in the operation of other quantum thermal machines (QTMs)
presented in the main text, such as accelerator, heater, and refrigerator.
## Appendix A The Lieb-Liniger model for the 1D Bose gas
The Lieb-Liniger model for the uniform 1D Bose gas with repulsive contact
interactions [32] is described by the following second-quantized Hamiltonian
$\hat{H}=\hat{H}^{kin}+\hat{H}^{int}=-\frac{\hbar^{2}}{2m}\int dz\,\hat{\Psi}^{\dagger}\frac{\partial^{2}}{\partial z^{2}}\hat{\Psi}+\frac{\chi}{2}\int dz\,\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi},$ (14)
where $m$ is the atomic mass, $\chi$ is the strength of the contact
interactions (see main text), and $\hat{\Psi}^{\dagger}(z)$ and
$\hat{\Psi}(z)$ are the bosonic field creation and annihilation operators,
respectively. We additionally highlight the separation of the Lieb-Liniger
Hamiltonian into its kinetic energy, $\hat{H}^{kin}$, and interaction energy,
$\hat{H}^{int}$, components, to be referred to later.
Ground state solutions to this integrable model are dependent only on a single
dimensionless interaction strength, $\gamma\\!=\\!m\chi/\hbar^{2}\rho$, where
$\rho\\!=\\!N/L$ is the linear density for $N$ particles in a system of size
$L$. Finite temperature solutions, on the other hand, can be obtained using
the Yang-Yang thermodynamic Bethe ansatz (TBA) [33], and can be parameterized
by an additional parameter, the dimensionless temperature
$\tau\\!=\\!2mk_{B}T/\hbar^{2}\rho^{2}$ [29].
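For concreteness, both dimensionless parameters can be evaluated directly from physical quantities; a minimal sketch follows, in which the numerical values (a Rb-87 gas with an assumed density, interaction strength, and temperature) are our own illustrative choices rather than parameters used in this work.

```python
import math

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J / K
m = 1.4431609e-25       # kg, Rb-87 atomic mass (assumed working fluid)
rho = 5.0e6             # atoms per metre, assumed linear density N/L
chi = 1.5e-37           # J m, assumed contact-interaction strength
T = 50e-9               # K, assumed temperature

gamma = m * chi / (hbar**2 * rho)       # gamma = m*chi/(hbar^2*rho)
tau = 2 * m * kB * T / (hbar * rho)**2  # tau = 2*m*kB*T/(hbar^2*rho^2)
print(f"gamma = {gamma:.2f}, tau = {tau:.2f}")  # ~0.39 and ~0.72 here
```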
## Appendix B Transverse Otto cycle
For a magnetically trapped ultracold 1D Bose gas, the work done via transverse
compression and expansion is ultimately magnetic: it is done by the magnetic
field on the atomic dipole moments when $\omega_{\perp}$ is increased, or vice
versa – by the atomic dipole moments on the magnetic field when
$\omega_{\perp}$ is decreased. Alternatively, the change in the interaction
strength $\chi$ is implemented through control over the $s$-wave scattering
length $a_{s}$ via a magnetic Feshbach resonance [44], in which case the
nature of the work is still magnetic.
We use the term Otto cycle in the same sense as used to describe, e.g., a
harmonic oscillator Otto engine [45, 46, 24, 25, 27, 6, 47, 48, 49, 50],
wherein the harmonic oscillator frequency (rather than the volume of the
system) is fixed as an external parameter during the thermalization strokes.
In our case, it is the interaction strength that is fixed, which itself is
proportional to the transverse harmonic confinement frequency of the 1D Bose
gas.
## Appendix C Instantaneity of the sudden quench
Realistically, a sudden quench of interaction strength from $\chi_{h(c)}$ to
$\chi_{c(h)}$ would still occur over a finite duration $\Delta t$. The
“instantaneity” of the quench utilized in the main text refers to the
assumption that $\Delta t$ is much shorter than the characteristic time scale
for longitudinal dynamics, i.e. that $\Delta
t\\!\ll\\!ml_{\text{cor}}^{2}/\hbar$, where $l_{\text{cor}}$ is the
characteristic short-range correlation length in the system, given,
respectively, by: the healing length $l_{h}\\!=\\!\hbar/\sqrt{m\chi\rho}$ in
regimes I and II; thermal phase coherence length
$l_{\phi}\\!=\\!\hbar^{2}\rho/mk_{B}T$ in regime III; thermal de Broglie
wavelength $\lambda_{T}\\!=\\!\sqrt{2\pi\hbar^{2}/mk_{B}T}$ in regime IV;
absolute value of the 1D scattering length $|a_{1D}|=2\hbar^{2}/m\chi$ in the
regime of high-temperature fermionization V; and the Fermi wavelength
$\lambda_{F}\\!=\\!2/\rho$ in the Tonks-Girardeau regime of low-temperature
fermionization VI. Thus, it is with respect to the _longitudinal_ dynamics
that we refer to our quench as sudden. With respect to the _transverse_
dynamics, on the other hand, we are assuming that $\Delta t$ is sufficiently
long ($\Delta t\gg 2\pi/\omega_{\perp}$) compared to the characteristic
transverse timescale, $2\pi/\omega_{\perp}$, governed by the transverse
harmonic trap frequency $\omega_{\perp}$ [43]. As a result, the quench would
retain the system in the transverse ground state, and hence would not
compromise the 1D character of the system. As such, the work done on (or by)
the system during the unitary strokes can be regarded as transversely
quasistatic.
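As a rough numerical illustration of this separation of timescales, the sketch below evaluates the window $2\pi/\omega_{\perp}\ll\Delta t\ll ml_{\text{cor}}^{2}/\hbar$ for regimes I and II, where $l_{\text{cor}}$ is the healing length, which can be rewritten as $l_{h}=1/(\rho\sqrt{\gamma})$ using $\gamma=m\chi/\hbar^{2}\rho$; all numerical values are our own illustrative assumptions.

```python
import math

hbar = 1.054571817e-34          # J s
m = 1.4431609e-25               # kg, Rb-87 atomic mass (assumed)
rho = 5.0e6                     # atoms per metre, assumed 1D density
gamma = 0.01                    # assumed dimensionless interaction strength
omega_perp = 2 * math.pi * 1e4  # rad/s, assumed transverse trap frequency

# Healing length (regimes I and II): l_h = hbar/sqrt(m*chi*rho) = 1/(rho*sqrt(gamma))
l_h = 1.0 / (rho * math.sqrt(gamma))

t_long = m * l_h**2 / hbar         # longitudinal timescale (upper bound on dt)
t_perp = 2 * math.pi / omega_perp  # transverse timescale (lower bound on dt)

print(f"l_h = {l_h:.1e} m")
print(f"window: {t_perp:.1e} s << dt << {t_long:.1e} s")  # ~1e-4 s << dt << ~5e-3 s
```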
Figure 5: Atom-atom correlations, described by Glauber’s $g^{(2)}(0)$
correlation function, for the uniform 1D Bose gas evaluated using the exact
Yang-Yang TBA [29, 33]. Panel (a) shows $g^{(2)}(0)$ as a function of the
dimensionless interaction strength, $\gamma$, and temperature, $\tau$. In
panel (b), this is translated into a contour diagram, in which we also show the
crossover boundaries (white solid and dashed lines) between the different
asymptotic analytic regimes [29]. Panel (c) shows line plots of $g^{(2)}(0)$
vs $\gamma$, at different fixed values of $\tau$, together with two possible
choices, D-B or $\textbf{D-B}^{\prime}$, of the thermal equilibrium operating
points of the Otto cycle from Fig. 1; as we see, according to Eq. (4),
operating the Otto cycle as an engine (with $W<0$) can be achieved between the
points D-B ($\gamma_{c}\longleftrightarrow\gamma_{h}$), where
$g^{(2)}_{c}(0)\\!<\\!g^{(2)}_{h}(0)$, but not between $\textbf{D-B}^{\prime}$
($\gamma_{c}\longleftrightarrow\gamma^{\prime}_{h}$), where
$g^{(2)}_{c}(0)\\!>\\!g^{(2)}_{h}(0)$ due to the stronger interaction quench,
even though the temperature at D is still lower than at $\textbf{B}^{\prime}$.
## Appendix D Glauber’s second order correlation function
The two-point correlation function may be generally defined through
$g^{(2)}(z,z^{\prime})=\frac{\langle\hat{\Psi}^{\dagger}(z)\hat{\Psi}^{\dagger}(z^{\prime})\hat{\Psi}(z^{\prime})\hat{\Psi}(z)\rangle}{\rho(z)\rho(z^{\prime})}.$
(15)
For a uniform (translationally invariant) system with
$\rho(z^{\prime})\\!=\\!\rho(z)\\!=\\!\rho$, this $g^{(2)}(z,z^{\prime})$
depends only on the relative distance $|z-z^{\prime}|$, i.e.
$g^{(2)}(z,z^{\prime})\\!=\\!g^{(2)}(|z-z^{\prime}|)$. If one is interested in
the same point ($z\\!=\\!z^{\prime}$) correlation function, as utilized in the
main text for calculation of the net work and efficiency of the quantum Otto
cycle, this in turn becomes $g^{(2)}(0)$.
The 1D Bose gas can be characterized by six distinct asymptotic regimes
defined through the same-point correlation function [29], as shown in Fig. 5.
The weakly interacting ($\gamma\ll 1$), low temperature ($\tau\ll
2\sqrt{\gamma}$) quasicondensate regime can be treated using the Bogoliubov
theory for quasicondensates [39], and is characterised by suppressed density
fluctuations, but fluctuating phase. This may be subdivided into regions
dominated by quantum (I) and thermal (II) fluctuations [29]. At higher
temperatures, the gas becomes nearly ideal, and can be treated using
perturbation theory with respect to $\gamma$ [29, 42]. This asymptotic region
may in turn be subdivided into quantum degenerate (III) and non-degenerate
(IV) regimes. Finally, in the strongly interacting regime ($\gamma\gg 1$),
where the Fermi-Bose gas mapping applies, the 1D Bose gas can be well
approximated by a nearly ideal Fermi gas, and can be treated using
perturbation theory with respect to $1/\gamma$ [29, 42]. This regime can be
further subdivided into two regions corresponding to high-temperature (V) and
low-temperature (VI) fermionization.
In each of these asymptotic regimes, the pair correlation function
$g^{(2)}(0)$ can be derived in closed approximate analytic form, where we
additionally define the boundary of these regimes in terms of $\gamma$ and
$\tau$:
I: $g^{(2)}(0)\\!\simeq\\!1-\frac{2}{\pi}\gamma^{1/2}+\frac{\pi\,\tau^{2}}{24\,\gamma^{3/2}},\,\left[\frac{\tau}{2}\\!\ll\\!\gamma\ll 1\right],$ (16)
II: $g^{(2)}(0)\simeq 1+\frac{\tau}{2\sqrt{\gamma}}\,,\quad\left[2\gamma\ll\tau\ll 2\sqrt{\gamma}\right],$ (17)
III: $g^{(2)}(0)\simeq 2-\frac{4\gamma}{\tau^{2}},\quad\left[2\sqrt{\gamma}\ll\tau\ll 1\right],$ (18)
IV: $g^{(2)}(0)\simeq 2-\gamma\sqrt{\frac{2\pi}{\tau}},\,\,\,\left[\tau\gg\text{max}\\{1,\gamma^{2}\\}\right],$ (19)
V: $g^{(2)}(0)\simeq\frac{2\tau}{\gamma^{2}},\,\,\left[\frac{\pi^{2}}{(1+2/\gamma)^{2}}\ll\tau\ll\gamma^{2}\right],$ (20)
VI: $g^{(2)}(0)\simeq\frac{4\pi^{2}}{3\gamma^{2}}\left(1\\!+\\!\frac{\tau^{2}}{4\pi^{2}}\right)\\!,\left[\tau\\!\ll\\!\frac{\pi^{2}}{(1+2/\gamma)^{2}},\,\gamma\\!\gg\\!1\right]\\!.$ (21)
Further, we may express the total energy of the system in each asymptotic
regime as [52],
I: $\langle\hat{H}\rangle\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\gamma-\frac{4}{3\pi}\gamma^{3/2}+\frac{\pi}{12}\frac{\tau^{2}}{\gamma^{1/2}}\right),$ (22)
II: $\langle\hat{H}\rangle\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\gamma+\frac{\zeta(3/2)}{4\sqrt{\pi}}\tau^{3/2}+\frac{\zeta(1/2)}{2\sqrt{\pi}}\tau^{1/2}\gamma\right),$ (23)
III: $\langle\hat{H}\rangle\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\frac{1}{2}\zeta(3/2)+2\gamma-\frac{6\gamma^{2}}{\tau^{2}}\right),$ (24)
IV: $\langle\hat{H}\rangle\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\frac{\tau}{2}+2\gamma-\frac{3}{2}\sqrt{\frac{\pi}{2}}\frac{\gamma^{2}}{\tau^{1/2}}\right),$ (25)
V: $\langle\hat{H}\rangle\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\frac{\tau}{2}+\frac{1}{2}\sqrt{\frac{\pi}{2}}\tau^{1/2}-\sqrt{\frac{\pi}{2}}\frac{\tau^{1/2}}{\gamma}\right),$ (26)
VI: $\langle\hat{H}\rangle\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\frac{\pi^{2}}{3}-\frac{4\pi^{2}}{3\gamma}+\frac{\tau^{2}}{3\gamma}\right),$ (27)
where $\zeta(s)$ is the Riemann zeta function of $s\\!\in\\!\mathbb{R}$.
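As an illustration of how these closed-form expressions combine, the sketch below evaluates a cycle operated entirely within regime II using Eqs. (7), (8), (17) and (23), with energies in units of $N\hbar^{2}\rho^{2}/2m$; the net work follows from the first-law convention $W=-(Q_{1}+Q_{2})$ implied by Eqs. (6), (11) and (13), which here reduces to $W=(\gamma_{h}-\gamma_{c})\,(g^{(2)}_{c}(0)-g^{(2)}_{h}(0))$ in the same units. The operating point is our own illustrative choice.

```python
import math

ZETA_3_2 = 2.6123753486854883   # zeta(3/2)
ZETA_1_2 = -1.4603545088095868  # zeta(1/2)

def g2_II(gamma, tau):
    """Eq. (17): pair correlation g2(0) in regime II."""
    return 1.0 + tau / (2.0 * math.sqrt(gamma))

def energy_II(gamma, tau):
    """Eq. (23): total energy in units of N*hbar^2*rho^2/(2m)."""
    return (gamma + ZETA_3_2 / (4 * math.sqrt(math.pi)) * tau**1.5
            + ZETA_1_2 / (2 * math.sqrt(math.pi)) * math.sqrt(tau) * gamma)

def cycle_II(gc, tc, gh, th):
    """Heats via Eqs. (7)-(8) and net work via the first law."""
    dE = energy_II(gh, th) - energy_II(gc, tc)
    dg = gh - gc
    Q1 = dE - dg * g2_II(gc, tc)
    Q2 = -dE + dg * g2_II(gh, th)
    return -(Q1 + Q2), Q1, Q2

# Illustrative regime-II point (2*gamma << tau << 2*sqrt(gamma) at both ends):
W, Q1, Q2 = cycle_II(gc=0.01, tc=0.08, gh=0.02, th=0.2)
print(f"W = {W:.4f}, Q1 = {Q1:.4f}")  # W < 0 here: heat engine, since g2_h > g2_c
```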
As described in the main text, the normalized local second-order correlation
function may be rearranged and integrated for the total (integrated)
correlation function,
$\overline{G^{(2)}}\equiv\int dz\,\langle\hat{\Psi}^{\dagger}(z)\hat{\Psi}^{\dagger}(z)\hat{\Psi}(z)\hat{\Psi}(z)\rangle=\int_{0}^{L}dz\,g^{(2)}(0)\rho^{2}.$ (28)
Utilizing the linear density, $\rho\\!=\\!N/L$, this may be expressed as
$\overline{G^{(2)}}\\!=\\!N\rho g^{(2)}(0)$, which is used in Eq. (4) of the
main text for expressing the exact net work of the uniform 1D Bose gas in
terms of $g^{(2)}(0)$.
## Appendix E Exact thermodynamic Bethe ansatz results
Experimental realization of a 1D Bose gas often falls outside the asymptotic
regimes where analytic approximations are applicable. In such situations, we
may utilize the exact Yang-Yang thermodynamic Bethe ansatz [33, 34] to
evaluate the equilibrium properties of the gas required for calculating net
work and efficiency via Eqs. (2) and (3) of the main text, respectively. This
is presented in Fig. 6 (a) for an experimentally realistic set of system
parameters that inhabit the boundary between asymptotic parameter regimes II
and III (see Fig. 5). Further, one may utilize the exact TBA to confirm the
results derived via approximate analytics in the main text. This is
illustrated in Figs. 6 (b) and (c), where we see excellent agreement between
these results when the parameters $\gamma$ and $\tau$ are sufficiently deep
into the analytic asymptotic regimes.
Figure 6: Performance of the sudden interaction quench quantum Otto cycle,
numerically evaluated via the thermodynamic Bethe ansatz. Panel (a) shows the
numerically evaluated net work and efficiency for a system with a cold thermal
state defined by $\gamma_{c}\\!=\\!0.1$, $\tau_{c}\\!=\\!0.5$, lying on the
border of regimes II and III (see Fig. 5), and thus lying outside the range of
the analytic approximations utilized in the main text. Panel (b) is a copy of
Fig. 2 (a) of the main text shown here for comparison with the results of
numerical TBA evaluation of the same cycle shown in panel (c). Here, there is
excellent agreement in the net work between panels (b) and (c), with small
disagreement under large interaction strength and temperature ratios, as the
hot thermal state is approaching the edge of the asymptotic regime where the
analytic approximations become less applicable.
## Appendix F Maximum efficiency and maximum work
For a fixed ratio of temperatures, it was noted in the main text that the
interaction strength ratio corresponding to maximum work approximately
coincides with that for maximum efficiency, which is uncommon for highly
nonequilibrium engine cycles [21, 46]. In the sudden interaction-quench Otto
cycle, such coincidence occurs due to the dependence of the total energy of
the hot and cold thermal equilibrium states on the interaction strength.
As shown in Eq. (14), the total energy may be separated into its kinetic
energy, which scales predominantly with temperature, and interaction energy,
which scales predominantly with interaction strength. Thus, for a fixed ratio
of temperatures, $\tau_{h}/\tau_{c}$, the difference between the total
energies of the hot and cold thermal states may be given as a sum of two
terms: the first is the kinetic energy difference, determined by the
temperature ratio and therefore approximately constant; the second is the
interaction energy difference, which scales with the interaction strengths,
$\gamma_{h}$ and $\gamma_{c}$, of the hot and cold thermal states as
$\langle\hat{H}^{int}\rangle_{h}\\!-\\!\langle\hat{H}^{int}\rangle_{c}\\!=\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\gamma_{h}g^{(2)}_{h}(0)\\!-\\!\gamma_{c}g^{(2)}_{c}(0)\right).$
(29)
However, when operating within a single asymptotic regime under a moderate
quench of interaction strength, the $g^{(2)}(0)$ correlation function is only
slowly varying with $\gamma$. This means, to first approximation,
$g^{(2)}_{h}(0)\\!\simeq\\!g^{(2)}_{c}(0)$, which in turn transforms the
interaction energy difference to
$\langle\hat{H}^{int}\rangle_{h}\\!-\\!\langle\hat{H}^{int}\rangle_{c}\\!\simeq\\!N\frac{\hbar^{2}\rho^{2}}{2m}\left(\gamma_{h}\\!-\\!\gamma_{c}\right)g^{(2)}_{c}(0).$
(30)
The heat intake, which is given by Eq. (8) of the main text, is therefore well
approximated by
$Q_{1}=\langle\hat{H}\rangle_{h}-\langle\hat{H}\rangle_{c}-\frac{N\hbar^{2}\rho^{2}}{2m}(\gamma_{h}-\gamma_{c})g^{(2)}_{c}(0)\simeq\langle\hat{H}^{kin}\rangle_{h}-\langle\hat{H}^{kin}\rangle_{c},$ (31)
which is approximately constant for a fixed temperature ratio, as detailed
above. Therefore, the efficiency, which is given by $\eta\\!=\\!W/Q_{1}$,
scales predominantly with $W$, hence $\eta\\!\propto\\!W$.
## Appendix G Thermal operation regimes of other QTM’s
Under large interaction strength quenches, for fixed temperatures $\tau_{c}$
and $\tau_{h}$, it was noted in the main text that the heater is the
inevitable mode of operation. This scenario requires the fulfilment of two
conditions: first, we require $Q_{1}\\!<\\!0$, meaning the work in,
$W_{2}\\!=\\!N\frac{\hbar^{2}\rho^{2}}{2m}(\gamma_{h}\\!-\\!\gamma_{c})g^{(2)}_{c}(0)$
(where $W_{2}>0$), exceeds the energy gap between the hot and cold thermal
states, given by $\langle\hat{H}\rangle_{h}\\!-\\!\langle\hat{H}\rangle_{c}$.
We further require $Q_{2}\\!<\\!0$, meaning that the magnitude of the work out,
$|W_{1}|\\!=\\!N\frac{\hbar^{2}\rho^{2}}{2m}(\gamma_{h}\\!-\\!\gamma_{c})g^{(2)}_{h}(0)$
(where $W_{1}<0$), must remain less than this same gap (see Fig. 3 (c) of the
main text).
As detailed above, in Appendix F on maximum efficiency at maximum work, the
total energy difference between the hot and cold thermal states for fixed
temperature ratios, $\tau_{h}/\tau_{c}$, is given by a sum of the kinetic
energy difference, which is approximately constant, and the interaction energy
difference, where the correlation function is approximately constant in a
single asymptotic regime, and hence given by Eq. (29). In contrast, for a
large quench in interaction strength, the correlation function is no longer
approximately constant, and $g^{(2)}_{h}(0)$ is strongly monotonically
decreasing as a function of $\gamma_{h}$, i.e.
$g^{(2)}_{h}(0)\\!<\\!g^{(2)}_{c}(0)$. We therefore find that the work input,
$W_{2}$, exceeds the interaction energy difference given in Eq. (29),
$W_{2}\propto(\gamma_{h}-\gamma_{c})g^{(2)}_{c}(0)>\gamma_{h}g^{(2)}_{h}(0)\\!-\\!\gamma_{c}g^{(2)}_{c}(0).$
(32)
Further, as the kinetic energy term is approximately constant, $W_{2}$
inevitably exceeds the difference in total energy between the hot and cold
thermal states due to its linear dependence on $\gamma_{h}$.
Similarly, since $g^{(2)}_{h}(0)$ monotonically decreases with $\gamma_{h}$
for large quenches of interaction strength, the magnitude of the work output,
$|W_{1}|\propto(\gamma_{h}-\gamma_{c})g^{(2)}_{h}(0)$, remains less than the
energy gap between the hot and cold thermal states:
$|W_{1}|\propto(\gamma_{h}-\gamma_{c})g^{(2)}_{h}(0)<\gamma_{h}g^{(2)}_{h}(0)\\!-\\!\gamma_{c}g^{(2)}_{c}(0).$
(33)
These two conditions, taken together, imply operation as a heater under large
interaction quenches.
In contrast, for any fixed value of $\gamma_{c}$ and $\gamma_{h}$, an
increasingly higher temperature of the hot thermal state, $\tau_{h}$, means
that the corresponding correlation function, $g^{(2)}_{h}(0)$, monotonically
increases towards its maximum value of $g^{(2)}_{h}(0)\\!\simeq\\!2$, which is
achieved in regime IV, defined in Eq. (19). Thus, there is always a value of
$\tau_{h}$ such that $g^{(2)}_{h}(0)\\!>\\!g^{(2)}_{c}(0)$, turning the Otto
cycle into the engine operation regime.
Finally, in the refrigerator thermal operation regime, for $\tau_{h}=\tau_{c}$
and an infinitesimal quench of interaction strength,
$\gamma_{h}-\gamma_{c}=\delta\gamma$, the net work vanishes as
$W\\!\propto\\!(\gamma_{h}-\gamma_{c})\times(g^{(2)}_{h}(0)-g^{(2)}_{c}(0))\propto\delta\gamma^{2}$. This occurs as
the zeroth order terms in the correlation function cancel when taking their
difference in a single asymptotic regime. In contrast, the heat intake,
$Q_{1}$, depends on the difference in total energy, which to first order
vanishes as $Q_{1}\\!\propto\\!\delta\gamma$. This results in
$\mathrm{CoP[R]}\\!=\\!|Q_{1}|/W\\!-\\!1\\!\propto\delta\gamma^{-1}$, which
diverges as $\delta\gamma\\!\to\\!0$, as noted in the main text.
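The divergence of $\mathrm{CoP[R]}$ for infinitesimal quenches can be checked numerically within a single asymptotic regime; a minimal sketch follows, reusing the regime-II expressions, Eqs. (17) and (23), at equal hot and cold temperatures (the chosen point, which lies only marginally inside regime II, is our own illustrative assumption). Shrinking $\delta\gamma$ by a factor of ten should roughly increase the CoP tenfold.

```python
import math

ZETA_3_2, ZETA_1_2 = 2.6123753486854883, -1.4603545088095868

def g2_II(g, t):      # Eq. (17)
    return 1.0 + t / (2.0 * math.sqrt(g))

def energy_II(g, t):  # Eq. (23), units of N*hbar^2*rho^2/(2m)
    return (g + ZETA_3_2 / (4 * math.sqrt(math.pi)) * t**1.5
            + ZETA_1_2 / (2 * math.sqrt(math.pi)) * math.sqrt(t) * g)

gc, tau = 0.01, 0.1   # equal temperatures, tau_h = tau_c
for dg in (1e-3, 1e-4, 1e-5):
    gh = gc + dg
    dE = energy_II(gh, tau) - energy_II(gc, tau)
    Q1 = dE - dg * g2_II(gc, tau)   # Eq. (8): O(dg) and negative here
    Q2 = -dE + dg * g2_II(gh, tau)  # Eq. (7): O(dg) and positive here
    W = -(Q1 + Q2)                  # O(dg^2): zeroth-order terms cancel
    print(f"dg = {dg:.0e}: W = {W:.2e}, CoP[R] = {Q2 / W:.0f}")  # CoP ~ 1/dg
```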
## References
* [1] S. Vinjanampathy and J. Anders, _Quantum thermodynamics_ , Contemporary Physics 57(4), 545 (2016), 10.1080/00107514.2016.1201896.
* [2] R. Kosloff and A. Levy, _Quantum Heat Engines and Refrigerators: Continuous Devices_ , Annual Review of Physical Chemistry 65(1), 365 (2014), 10.1146/annurev-physchem-040513-103724.
* [3] J. Roßnagel, S. T. Dawkins, K. N. Tolazzi, O. Abah, E. Lutz, F. Schmidt-Kaler and K. Singer, _A single-atom heat engine_ , Science 352(6283), 325 (2016), 10.1126/science.aad6320.
* [4] D. von Lindenfels, O. Gräb, C. T. Schmiegelow, V. Kaushal, J. Schulz, M. T. Mitchison, J. Goold, F. Schmidt-Kaler and U. G. Poschinger, _Spin Heat Engine Coupled to a Harmonic-Oscillator Flywheel_ , Phys. Rev. Lett. 123, 080602 (2019), 10.1103/PhysRevLett.123.080602.
* [5] J. Klatzow, J. N. Becker, P. M. Ledingham, C. Weinzetl, K. T. Kaczmarek, D. J. Saunders, J. Nunn, I. A. Walmsley, R. Uzdin and E. Poem, _Experimental Demonstration of Quantum Effects in the Operation of Microscopic Heat Engines_ , Phys. Rev. Lett. 122, 110601 (2019), 10.1103/PhysRevLett.122.110601.
* [6] Q. Bouton, J. Nettersheim, S. Burgardt, D. Adam, E. Lutz and A. Widera, _A quantum heat engine driven by atomic collisions_ , Nature Communications 12(1), 2063 (2021), 10.1038/s41467-021-22222-z.
* [7] R. Feynman, _There’s plenty of room at the bottom_ , In _Feynman and computation_ , pp. 63–76. CRC Press, Boca Raton, Florida, United States (2018).
* [8] F. G. Brandao and M. B. Plenio, _Entanglement theory and the second law of thermodynamics_ , Nature Physics 4(11), 873 (2008), 10.1038/nphys1100.
* [9] K. Funo, Y. Watanabe and M. Ueda, _Thermodynamic work gain from entanglement_ , Phys. Rev. A 88, 052319 (2013), 10.1103/PhysRevA.88.052319.
* [10] R. Alicki and M. Fannes, _Entanglement boost for extractable work from ensembles of quantum batteries_ , Phys. Rev. E 87, 042123 (2013), 10.1103/PhysRevE.87.042123.
* [11] S. Hilt and E. Lutz, _System-bath entanglement in quantum thermodynamics_ , Phys. Rev. A 79, 010101 (2009), 10.1103/PhysRevA.79.010101.
* [12] J. Oppenheim, M. Horodecki, P. Horodecki and R. Horodecki, _Thermodynamical Approach to Quantifying Quantum Correlations_ , Phys. Rev. Lett. 89, 180402 (2002), 10.1103/PhysRevLett.89.180402.
* [13] M. Perarnau-Llobet, K. V. Hovhannisyan, M. Huber, P. Skrzypczyk, N. Brunner and A. Acín, _Extractable Work from Correlations_ , Phys. Rev. X 5, 041011 (2015), 10.1103/PhysRevX.5.041011.
* [14] M. Huber, M. Perarnau-Llobet, K. V. Hovhannisyan, P. Skrzypczyk, C. Klöckl, N. Brunner and A. Acín, _Thermodynamic cost of creating correlations_ , New Journal of Physics 17(6), 065008 (2015), 10.1088/1367-2630/17/6/065008.
* [15] V. Narasimhachar and G. Gour, _Low-temperature thermodynamics with quantum coherence_ , Nature communications 6(1), 7689 (2015), 10.1038/ncomms8689.
* [16] K. Korzekwa, M. Lostaglio, J. Oppenheim and D. Jennings, _The extraction of work from quantum coherence_ , New Journal of Physics 18(2), 023045 (2016), 10.1088/1367-2630/18/2/023045.
* [17] M. Lostaglio, D. Jennings and T. Rudolph, _Description of quantum coherence in thermodynamic processes requires constraints beyond free energy_ , Nature communications 6(1), 6383 (2015), 10.1038/ncomms7383.
* [18] B. d. L. Bernardo, _Unraveling the role of coherence in the first law of quantum thermodynamics_ , Phys. Rev. E 102, 062152 (2020), 10.1103/PhysRevE.102.062152.
* [19] P. Kammerlander and J. Anders, _Coherence and measurement in quantum thermodynamics_ , Scientific reports 6(1), 22174 (2016), 10.1038/srep22174.
* [20] P. Ćwikliński, M. Studziński, M. Horodecki and J. Oppenheim, _Limitations on the Evolution of Quantum Coherences: Towards Fully Quantum Second Laws of Thermodynamics_ , Phys. Rev. Lett. 115, 210403 (2015), 10.1103/PhysRevLett.115.210403.
* [21] J. Jaramillo, M. Beau and A. del Campo, _Quantum supremacy of many-particle thermal machines_ , New Journal of Physics 18(7), 075019 (2016), 10.1088/1367-2630/18/7/075019.
* [22] N. Yunger Halpern, C. D. White, S. Gopalakrishnan and G. Refael, _Quantum engine based on many-body localization_ , Phys. Rev. B 99, 024203 (2019), 10.1103/PhysRevB.99.024203.
* [23] M. Beau, J. Jaramillo and A. Del Campo, _Scaling-Up Quantum Heat Engines Efficiently via Shortcuts to Adiabaticity_ , Entropy 18(5) (2016), 10.3390/e18050168.
* [24] J. Li, T. Fogarty, S. Campbell, X. Chen and T. Busch, _An efficient nonlinear Feshbach engine_ , New Journal of Physics 20(1), 015005 (2018), 10.1088/1367-2630/aa9cd8.
* [25] Y.-Y. Chen, G. Watanabe, Y.-C. Yu, X.-W. Guan and A. del Campo, _An interaction-driven many-particle quantum heat engine and its universal behavior_ , npj Quantum Information 5(1), 88 (2019), 10.1038/s41534-019-0204-5.
* [26] T. Keller, T. Fogarty, J. Li and T. Busch, _Feshbach engine in the Thomas-Fermi regime_ , Physical Review Research 2(3), 033335 (2020), 10.1103/PhysRevResearch.2.033335.
* [27] J. Koch, K. Menon, E. Cuestas, S. Barbosa, E. Lutz, T. Fogarty, T. Busch and A. Widera, _A quantum engine in the BEC–BCS crossover_ , Nature 621(7980), 723 (2023), 10.1038/s41586-023-06469-8.
* [28] E. Q. Simmons, R. Sajjad, K. Keithley, H. Mas, J. L. Tanlimco, E. Nolasco-Martinez, Y. Bai, G. H. Fredrickson and D. M. Weld, _Thermodynamic engine with a quantum degenerate working fluid_ , Phys. Rev. Res. 5, L042009 (2023), 10.1103/PhysRevResearch.5.L042009.
* [29] K. V. Kheruntsyan, D. M. Gangardt, P. D. Drummond and G. V. Shlyapnikov, _Pair Correlations in a Finite-Temperature 1D Bose Gas_ , Phys. Rev. Lett. 91, 040403 (2003), 10.1103/PhysRevLett.91.040403.
* [30] K. V. Kheruntsyan, D. M. Gangardt, P. D. Drummond and G. V. Shlyapnikov, _Finite-temperature correlations and density profiles of an inhomogeneous interacting one-dimensional Bose gas_ , Phys. Rev. A 71, 053615 (2005), 10.1103/PhysRevA.71.053615.
* [31] T. Kinoshita, T. Wenger and D. S. Weiss, _Local Pair Correlations in One-Dimensional Bose Gases_ , Phys. Rev. Lett. 95, 190406 (2005), 10.1103/PhysRevLett.95.190406.
* [32] E. H. Lieb and W. Liniger, _Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State_ , Phys. Rev. 130, 1605 (1963), 10.1103/PhysRev.130.1605.
* [33] C. N. Yang and C. P. Yang, _Thermodynamics of a One-Dimensional System of Bosons with Repulsive Delta-Function Interaction_ , Journal of Mathematical Physics 10(7), 1115 (1969), 10.1063/1.1664947.
* [34] V. E. Korepin, N. M. Bogoliubov and A. G. Izergin, _Quantum Inverse Scattering Method and Correlation Functions_ , Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge, United Kingdom, 10.1017/CBO9780511628832 (1993).
* [35] A. Görlitz, J. M. Vogels, A. E. Leanhardt, C. Raman, T. L. Gustavson, J. R. Abo-Shaeer, A. P. Chikkatur, S. Gupta, S. Inouye, T. Rosenband and W. Ketterle, _Realization of Bose-Einstein Condensates in Lower Dimensions_ , Phys. Rev. Lett. 87, 130402 (2001), 10.1103/PhysRevLett.87.130402.
* [36] M. Greiner, I. Bloch, O. Mandel, T. W. Hänsch and T. Esslinger, _Exploring Phase Coherence in a 2D Lattice of Bose-Einstein Condensates_ , Phys. Rev. Lett. 87, 160405 (2001), 10.1103/PhysRevLett.87.160405.
* [37] F. Schreck, L. Khaykovich, K. L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles and C. Salomon, _Quasipure Bose-Einstein Condensate Immersed in a Fermi Sea_ , Phys. Rev. Lett. 87, 080403 (2001), 10.1103/PhysRevLett.87.080403.
* [38] D. S. Petrov, G. V. Shlyapnikov and J. T. M. Walraven, _Regimes of Quantum Degeneracy in Trapped 1D Gases_ , Phys. Rev. Lett. 85, 3745 (2000), 10.1103/PhysRevLett.85.3745.
* [39] C. Mora and Y. Castin, _Extension of Bogoliubov theory to quasicondensates_ , Phys. Rev. A 67, 053615 (2003), 10.1103/PhysRevA.67.053615.
* [40] M. Girardeau, _Relationship between Systems of Impenetrable Bosons and Fermions in One Dimension_ , Journal of Mathematical Physics 1(6), 516 (1960), 10.1063/1.1703687.
* [41] T. Kinoshita, T. Wenger and D. S. Weiss, _Observation of a One-Dimensional Tonks-Girardeau Gas_ , Science 305(5687), 1125 (2004), 10.1126/science.1100700.
* [42] A. G. Sykes, D. M. Gangardt, M. J. Davis, K. Viering, M. G. Raizen and K. V. Kheruntsyan, _Spatial nonlocal pair correlations in a repulsive 1d bose gas_ , Phys. Rev. Lett. 100, 160406 (2008), 10.1103/PhysRevLett.100.160406.
* [43] M. Olshanii, _Atomic Scattering in the Presence of an External Confinement and a Gas of Impenetrable Bosons_ , Phys. Rev. Lett. 81, 938 (1998), 10.1103/PhysRevLett.81.938.
* [44] C. Chin, R. Grimm, P. Julienne and E. Tiesinga, _Feshbach resonances in ultracold gases_ , Rev. Mod. Phys. 82, 1225 (2010), 10.1103/RevModPhys.82.1225.
* [45] R. Kosloff, _A quantum mechanical open system as a model of a heat engine_ , The Journal of Chemical Physics 80(4), 1625 (1984), 10.1063/1.446862.
* [46] R. Kosloff and Y. Rezek, _The Quantum Harmonic Otto Cycle_ , Entropy 19(4) (2017), 10.3390/e19040136.
* [47] Y. Zheng and D. Poletti, _Work and efficiency of quantum Otto cycles in power-law trapping potentials_ , Phys. Rev. E 90, 012145 (2014), 10.1103/PhysRevE.90.012145.
* [48] G. Watanabe, B. P. Venkatesh, P. Talkner and A. del Campo, _Quantum Performance of Thermal Machines over Many Cycles_ , Phys. Rev. Lett. 118, 050601 (2017), 10.1103/PhysRevLett.118.050601.
* [49] O. Abah and E. Lutz, _Energy efficient quantum machines_ , Europhysics Letters 118(4), 40005 (2017), 10.1209/0295-5075/118/40005.
* [50] N. M. Myers, F. J. Peña, O. Negrete, P. Vargas, G. D. Chiara and S. Deffner, _Boosting engine performance with Bose–Einstein condensation_ , New Journal of Physics 24(2), 025001 (2022), 10.1088/1367-2630/ac47cc.
* [51] H. B. Callen, _Thermodynamics and an introduction to thermostatistics_ , John Wiley & Sons, Hoboken, New Jersey, 2nd ed edn., ISBN 0471862568 (1985).
* [52] G. D. Rosi, R. Rota, G. E. Astrakharchik and J. Boronat, _Hole-induced anomaly in the thermodynamic behavior of a one-dimensional Bose gas_ , SciPost Phys. 13, 035 (2022), 10.21468/SciPostPhys.13.2.035.
* [53] L. Buffoni, A. Solfanelli, P. Verrucchi, A. Cuccoli and M. Campisi, _Quantum measurement cooling_ , Phys. Rev. Lett. 122, 070603 (2019), 10.1103/PhysRevLett.122.070603.
* [54] D. V. Schroeder, _An introduction to thermal physics_ , Oxford University Press, Oxford, ISBN 0-19-289554-0 (2020).
# Towards Immersive Generosity: The Need for a Novel Framework to Explore
Large Audiovisual Archives through Embodied Experiences in Immersive
Environments
Giacomo Alliata, Laboratory of Experimental Museology, EPFL (corresponding
author:<EMAIL_ADDRESS>– Rue des Jordils 41, St-Sulpice, Switzerland)
Sarah Kenderdine, Laboratory of Experimental Museology, EPFL
Lily Hibberd, Laboratory of Experimental Museology, EPFL; UNSW Sydney
Ingrid Mason, Australian National University
###### Abstract
This article proposes an innovative framework to explore large audiovisual
archives using Immersive Environments to place users inside a dataset and
create an embodied experience. It starts by outlining the need for such a
novel interface to meet the needs of archival scholars and the GLAM sector,
and discusses issues in the current modes of access, mostly restrained to
traditional information retrieval systems based on metadata. The paper
presents the concept of “generous interfaces” as a preliminary approach to
address these issues, and argues some of the key reasons why employing
Immersive Visual Storytelling might benefit such frameworks. The theory of
embodiment is leveraged to justify this claim, showing how a more embodied
understanding of a collection can result in a stronger engagement for the
public. By placing users as actors in the experience rather than mere
spectators, the emergence of narrative is driven by their interactions, with
benefits in terms of engagement with the public and understanding of the
cultural component. The framework we propose is applied to two existing
installations to analyze them in-depth and critique them, highlighting the key
directions to pursue for further development.
## 1 Introduction
At the beginning of his book _Zen and the Art of Motorcycle Maintenance_,
Robert Pirsig [49] presents the difference felt while travelling by car compared to
on a motorbike. In the first case, travelers are watching the environment
through their car windows, while in the second, they are “feeling”, “living”
that same environment. The humidity in the air, the wind on their faces, the
vivid sound of the motor, the unconstrained visual field, all contribute to
create a more embodied and immersive experience for the motorcyclists. A
comparable experience is described by [59], in which one of the authors,
after several years of looking through the windows of a computer room in
passing, is one day obliged to enter and is confronted with the shock of
suddenly going “inside” what had only ever been seen from the “outside”. This
short story illustrates what Immersive Environments (IEs) offer to their
users: the possibility to be “inside” and therefore to feel as if you were
“there”, a more fully embodied experience, (more or less) completely
surrounded by a virtual world. Often described as the sense of “being there”,
this psychological state is sometimes also referred to as “presence” and is a
concept that is one of the central features of IEs [62]. “Immersion” is
arguably the second central feature of IEs: the extent to which the system can
deliver “an inclusive, extensive, surrounding and vivid illusion of reality to
the senses of a human participant” [slater_framework_1997, p. 1]. Since the
display size remains a critical limitation in the presentation and
understanding of data visualizations [28], many large display systems (more or
less immersive) have been developed in the last decades, such as wide
gigapixel screens, dome-like structures, CAVEs and half-CAVEs or panoramic
screens (see [24] for an historical perspective on IEs).
Despite this progress, the exploration of cultural collections in IEs faces a
range of limitations and dilemmas, for which a new framework is required. This
article proposes that such a new paradigm is possible and that, within the
project _Narratives from the Long Tail: Transforming Access to Audiovisual
Archives_ , the foundations for this innovative framework are being laid and
will be applied to important audiovisual collections to create new ways to
explore them111Details on the archives explored can be found at
https://www.futurecinema.live/archives-and-collections/.. _Narratives_ ’s goal
is to produce a “groundbreaking visualization framework for interactively
(re)discovering hundreds of thousands of hours of audiovisual materials” [32].
The remainder of this introduction offers a summary of the key concepts and
technologies to be discussed.
As the foremost mnemonic records of the 21st century, audiovisual recordings
are omnipresent in our daily lives. From the second half of the 20th century,
starting with the introduction of television, to today’s online sharing
platforms, such as YouTube or TikTok, audiovisual recordings play an important
role in the way we document, disseminate and preserve knowledge and culture.
Broadcasting institutions are digitizing their collections, with examples such
as the Radio Télévision Suisse (RTS) with 200,000 hours of footage [54], or
the British Broadcasting Corporation (BBC) with more than a million recorded
hours [66]. Furthermore, cultural video collections are useful to preserve
Intangible Cultural Heritage (ICH), which has been defined as “the culture
that people practise as part of their daily lives” [35] and “all [the]
immaterial manifestations of culture [that] represent the variety of living
heritage of humanity as well as the most important vehicle of cultural
diversity” [37]. One can think of events like rituals, dance performances or
festivals that audiovisual collections, due to their temporal value as well as
the combination of video and audio channels, are more apt to represent than
images or textual elements ever could. Audiovisual archives by themselves,
however, can only offer a mostly linear, single-channel, non-interactive
narrative. They need to be augmented by some sort of interface to be explored
in a meaningful way.
In the field of cultural heritage, the need for innovative ways to present
such collections to the public is evident. A 2014 survey conducted on 1200
cultural institutions in Europe established that around 80% have digitized
their collections [20], and this number has certainly increased since then.
The Europeana platform (https://www.europeana.eu/en) is, for instance, a key
example, with more than 30 million items accessible online through a category-
based interface. The majority of these institutions provide online access to
(at least part of) their collections, and even though innovative forms of
access can be noted [65], one can argue that these largely remain constrained
to the web. Web-based systems, such as Europeana’s, are obviously powerful
tools to enhance access to these collections and the democratisation of
culture; however, they lack the immersion that IE installations can offer. These are
especially relevant when showcasing collections in public spaces such as
museums or cultural institutions venues. An innovative framework to explore
large digital collections is therefore necessary, in particular, to present
audiovisual archives to a larger public.
Within IE theory, one key aspect to consider is the way narrative can emerge
in an embodied experience, what can be referred to as Immersive Visual
Storytelling (IVS). This term comprises the ensemble of processes and
approaches that are employed to generate narratives in immersive experiences
(see [38, 36], for some examples). It is on this basis that we claim that,
through this emergence of narrative, users can better understand and explore
large cultural collections, by being placed “inside” the dataset [57] and
being offered levels of interactivity to freely browse the collection, as well
as by becoming actors in the experience rather than mere spectators, resulting
in a more “embodied understanding” of the content displayed [27].
The next part of this paper justifies the need for a new framework to explore
large audiovisual collections (and cultural collections, in general), drawing
on Whitelaw’s “generous interface” theory [64] and outlining the benefits
immersion can bring to the exploration of such archives. The relevant theories
of embodiment will then be presented and discussed in relation to IEs and IVS
with specific examples, in order to understand how “embodied understanding”
works and therefore how it can be leveraged to enhance the experience.
Finally, two previous works will be analyzed in-depth and critiqued in light
of the proposed framework: _T_Visionarium II_ , developed at UNSW’s iCinema
Research Centre in 2006, and _Jazz Luminaries_ , created in 2019 at EPFL’s
Laboratory of Experimental Museology. In each case, users are immersed within
large audiovisual datasets, and this results in an embodied understanding of
the archives, from which narrative can emerge.
## 2 The Need for a New Framework: Towards Immersive Generosity
From the perspective of the scholarly archival community, it is clear that large
audiovisual archives are currently lacking proper frameworks to explore them.
These collections remain mostly inaccessible because cultural institutions are
constrained to screening sessions where only a handful of videos can be shown
at one time, without revealing the dataset in its entirety. The Digital
Humanities and GLAM sector (galleries, libraries, archives, and museums) have
also called for innovative forms of engagement through compelling frameworks
to explore this kind of collection [21].
For one, the classical information retrieval system restricts users to
classifications and tags, prior interpretations provided by curators and data
managers. Furthermore, the metadata compiled prevent users from accessing
visual features, because images are by nature harder to capture
verbally [40]. A “simple search” interface is thus insufficient to provide
compelling public engagement with the diverse array of media formats held in
cultural institutions [53]. The framework proposed in this article intends to
solve this problem, as it provides users new ways to access large collections,
thus dispensing with the need to rely on metadata and traditional cataloguing
forms of description.
A catalogue-like database description of an audiovisual collection cannot
fully describe its content because discrete categories do not encompass the
visual and temporal complexity of videos. More continuous analytics can in
theory characterize visual features of a video but in practice these seem to
be harder to capture as they require some sort of interpretation. One could
imagine a scale from 0 to 1 measuring the visual complexity of a shot, for
instance. However, a catalogue of entries between 0 and 1 would be much harder
to interpret than actually watching the related videos and visually comparing
them, as in the SEMIA (https://bertspaan.nl/semia/#/) project. This recent
initiative undertaken at the University of Amsterdam is a key example of
modern web-based approaches to the exploration of audiovisual archives [40].
Figure 1 presents the main view of the application, in which 103,273 shots
from 6,969 videos are spatially distributed in 2-D based on similarities on
visual features: color, shape, movement or visual complexity (as in the
example above). With this interface, one can easily appreciate comparable
shots and grasp the full collection at a glance, an impossible task with just
a list of catalogue entries.
Figure 1: Main view of the SEMIA application. Shots are spatially distributed
by color using a t-SNE algorithm of the similarity between all shots (credits:
Bert Spaan).
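A minimal sketch of the layout step behind such an interface is shown below, assuming one pre-computed feature vector per shot; SEMIA’s actual feature-extraction pipeline is not reproduced here, and the random vectors and parameter values are placeholders for illustration only.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder features: one descriptor per shot (in a SEMIA-like pipeline
# these would encode colour, shape, movement or visual complexity).
rng = np.random.default_rng(seed=42)
shot_features = rng.random((1000, 64))  # 1000 shots, 64-dim descriptors

# t-SNE projects the shots to 2-D so that visually similar shots cluster,
# yielding the kind of browsable scatter layout shown in Figure 1.
layout = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(shot_features)

print(layout.shape)  # (1000, 2): one (x, y) position per shot
```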
Furthermore, traditional access to cultural collections shapes the memories we
create of the past. These memories “are not inherent in the archival stock,
but are created in the context of reception, through processes of remediation
and recontextualisation” [9]. This relates to the idea that traditional
metadata rely on prior assumptions of their authors, as well as on the initial
goal of these descriptions. For an archival entity, a catalogue is first and
foremost a way to document their collections, usually connected to a database
system with an information retrieval tool to (at least internally) find the
relevant items. This purpose is quite different than browsing through a
collection without any prior knowledge (a goal that seems to be relevant for
casual users [39]), and therefore solely relying on these metadata might not
be enough. Browsing is indeed “a rich and fundamental human information
behaviour” [10], an iterative process based on scanning [51] or glimpsing [3]
a collection of items. These processes also depend on the questions one might
ask, and it has been shown that revealing a collection in its entirety through
the use of spatial distributions for instance can prompt innovative questions
[47]. People “browse with or without a goal in mind, and goals may change as
the process unfolds” [64].
Visualization is therefore at the center of an innovative framework to explore
large audiovisual collections. Visualization, as “a medium for communication
(or persuasion, or engagement)” and a tool for “understanding (or problem
solving, planning, orienting)” [scagnetti2011visual, pp. 1-2], can reveal
patterns, structures, relationships in the data and prompt new enquiries. Much
like Moretti’s “distant reading” approach to literature, which can disclose
hidden meanings in the text [45], visualization can expose new knowledge. For
design and humanities scholar Johanna Drucker, visualization “produces”
knowledge through “graphical forms expressing interpretation”, and that
because of the “fundamentally interpreted condition on which data is
constructed” visualizations are a feature of both “knowledge production and
[its] presentation” [19].
Shneiderman’s “visual information seeking” approach entails a taxonomy of
tasks users might want to perform while exploring a collection: overview,
zoom, filter, details-on-demand, relate, history, and extract [58]. This
interaction paradigm requires surrogates in the form of previews for single
items and overviews for groups of items [23], to represent the collection
objects while exploring it along its latent dimensions. According to [18],
however, Shneiderman’s information visualization “mantra” is “pragmatic, but
highly mechanistic” and supposes users with clear goals in mind, something
that might not actually hold true for casual visitors in a museum space. This
ubiquitous task-based approach, widespread in the fields of human-computer
interaction and information retrieval, does not meet the criteria of a more
humanistic approach. It is furthermore essential to recognize the inherent
reward component and creativity aspect in the action of browsing a collection
[51]. While browsing, a user is the sole director of the experience (although
they might be more or less consciously guided towards a certain path) and is
therefore creating their own narrative. It is a very different thing to go
through a curated list of items and discover the same items while autonomously
exploring the collection. In the latter case, serendipity, “the fact of
finding interesting or valuable things by chance” [50], plays a major role.
The feeling of finding new items by chance, of “serendipitous” discoveries
[63], entails a procedural emergence of narrative, driven by the user-agent of
the installation.
All these concepts are summarized by Whitelaw’s innovative notion of “generous
interfaces” [64]. As he argues, searching requires “rich, browsable interfaces
that reveal the scale and complexity of digital heritage collections”. It
should be a “humanistic model of interface and interaction that emphasises
exploration and interpretation over task and information retrieval”. The modes
of visual storytelling offered by these generous interfaces are simultaneously
“horizontal” (through the browsing features) and “vertical” (through the
details-on-demand functionalities), more or less completely driven by the
user. Whitelaw’s interfaces empower users to generate, to craft new knowledge,
through the narrative they are creating. Psychologist Jerome Bruner discusses
the importance of narrative for its fundamental role in creating and
interpreting human culture [8]. He states that human beings are natural
storytellers: they make sense of the world and themselves through narrative.
From the time they are very young, children learn that the way to integrate
their own desires with their family’s norms and rules is to construct a story
about their actions. This push to construct narrative, Bruner maintains,
shapes how children acquire language, and the habit persists into adulthood as
a primary instrument for making meaning. These storytelling skills insure our
place within human society. This point is sustained by constructionist
theories [14, 48], according to which individuals do not learn by passively
perceiving content but rather by actively crafting, manipulating and therefore
creating new knowledge.
Through their rich browsing features and interactivity, generous interfaces
evidently offer ideal modes of access to large digital collections, which are
far more interesting than traditional information retrieval systems. These
interfaces are however mainly web-based and therefore restrained to single
users in front of small and flat screens. One could argue the immersive
component of IVS is completely lacking here, as well as the multi-user aspect.
The framework we suggest would solve these issues through the use of Immersive
Environments, relying on the concept of immersive generosity.
By transposing the generous interface concept to IEs, one must however
consider how it will affect the narrative. Indeed, a story is closely
correlated with the medium used to convey it. This is perfectly illustrated
with cinematic adaptations of books: the overall story being told is perhaps
the same, but the way it is told, its content and its intensity can greatly
differ. Applying this idea to IVS, it is clear that the active role a user has
in an immersive and embodied application vastly influences the way narrative
can emerge, requiring a distinction between purely authorial storytelling and
interactive approaches. [2] propose a useful model for this with four
dimensions to characterize the narrative component of different mediums:
“Contingency” (the contingency of time and space of the story being told with
respect to the real time and space of the user); “Presence” (how much the user
feels present in the story); “Interactivity” (the degree of controls they have
on the narrative) and “Narrative Representation” (the form narrative adopts,
be it through mental models for literature for instance or purely visual and
aural for cinema). They argue that, when compared to the most common cases of
narrative mediums, namely literature, cinema and theatre, virtual reality
offers the greater contingency in time and space, the strongest feeling of
“being there” and the highest degree of interactivity.
Combined, these considerations imply that IVS is a form of narrative that
moves beyond the Platonic concepts of “diegesis” and “mimesis” (based on an
authorial view of storytelling) to the idea of “experiencing” and “creating” a
story. When focusing on multi-user experiences, typically offered by large
interactive systems, the concept of a user-led narrative has even greater
implications for the way the other users in the interactive space experience
the story being created. In Geert Mul’s work, users can simultaneously be seen
as “highly productive, in that the appearance of the works changes based on
their input” and “merely one in a much larger series of variables that
determine the outcome of the calculation” [46]. Although these views might
seem at odds, they both imply that, when IVS is user-led, it requires an
external public (other users) to interpret it and appreciate it fully, through
a “third-person’s perspective”. One must also remember that museums are
historically social venues, and the relationships between visitors are
intrinsic to the experience of exploring their collections, meaning that these
principles must be applied to the exploration of large datasets in multi-
user immersive installations.
In conclusion, Whitelaw’s “generous interfaces” have been highly influential
as a first attempt to solve the issues outlined by archival and digital
humanities scholars in the access to large audiovisual collections.
Nonetheless, we argue that these interfaces would benefit from a further
component of immersion, through the use of IEs and thus IVS, to create a truly
embodied exploration of a cultural archive. To conceptually frame this idea,
philosopher Mark Johnson’s theory of the body is leveraged in the next section
to better frame how narrative can emerge from such installations through
“embodied understanding” [27].
## 3 Embodied Understanding in Immersive Environments
For [27], the importance of “embodied understanding”, based on the 20th
century findings of cognitive science, challenges centuries during which the
body was considered less important than the mind. He provides the counter-
argument that “understanding is profoundly embodied, insofar as our
conceptualization and reasoning recruit sensory, motor, and affective patterns
and processes to structure our understanding of, and engagement with, our
world”. It is therefore clear that IVS, through its embodied approach, can
indeed be beneficial to cultural institutions aiming to give the wider public
meaningful access to their collections.
To put these ideas in practice in IEs-based installations, one must however
first appreciate how our understanding of the world is embodied. According to
the field of embodied cognition, organism-environment interactions are the
sources of all our human perception and understanding of the world [15]. To
really understand something, we must first experience it, a complex process
based on Damasio’s “homeostasis” balanced state between organism and
environment that comprises both how the organism is feeling and acting as well
as how the environment is structured [13]. This equilibrium is dynamic,
because it evolves as the organism and the environment evolve, and is also
related to the quality, the value of the experience and overall our well-
being. This is why emotions are such an important element of our “embodied
understanding” of the world. Neuroscientists have further shown how organisms,
through the detection of “emotionally competent stimulus”, move their body-
states to favorable positions for their survival and well-being [12]. Emotions
are therefore at the core of understanding, something that seems quite obvious
in storytelling, as any story plays with our emotions to convey its narrative.
Going one step further, [26] talks of the “bodily sources of meaning”. Humans
have indeed always used their bodies to express themselves, from spontaneous
gestures while talking to more elaborate performances such as dance or ritual
practices [25, 67]. It is thus necessary to define this “body” of ours, and
relate its dimensions to IVS in the frame of the exploration of large cultural
collections. Drawing on Merleau-Ponty’s _Phenomenology of Perception_ and John
Dewey’s somatic naturalism, Johnson explains how our bodies are not just
“objects interacting with other objects” but are “lived”, “phenomenal
bod[ies]” [43], and require at least five intertwined dimensions to be fully
comprehended: the biological, ecological, phenomenological, social and
cultural body.
Going back to Damasio’s homeostasis state, our bodies are first and foremost
“flesh-and-blood”, “functioning biological organism[s] that can perceive, move
within, respond to, and transform [their] environments” [15]. Our “biological
bodies” are in a continuous exchange with their environments, continuously
evolving the aforementioned equilibrium, and are the locus of feelings and
emotions that push us towards our physical and social well-being [12]. When
being confronted with an immersive narrative experience, users are therefore
first and foremost a biological body, with all their individualities and body
specificities. In her interactive and immersive experience _Osmose_ (1995, see
http://www.immersence.com/osmose/), media artist Char Davies empowers
visitors to automatically drive the narrative through their breathing and
balance, two fundamental and unconscious human activities. These are part of
the preconscious activities [22] identifies as the “body schema” that govern
our interactions with the environment.
Once users start to actively engage and interact with an installation, the
second dimension of the body emerges: the “phenomenological body”. Our
“tactile-kinaesthetic body” [56] depends on proprioception (our feeling of our
bodily posture and orientation), our kinaesthetic sensations of bodily
movement and our awareness of our internal body states through our emotions
and feelings [12]. In contrast to the body schema, [22] associates our more
conscious activities with the “body image”: activities that comprehend the
affordances of the system used to explore the collection, what Drucker calls
the “conventions of the diagrammatic knowledge form” [19]. Furthermore, this phenomenological
paradigm can result in the enhancement of a cognitive operation through a
shift in the nature of the task itself, where abstract operations (such as
finding all the items that correspond to a certain query) can be mapped to
more natural actions, the so-called “tangialities” [44].
Dario Rodighiero and his colleagues at metaLab at Harvard have pushed this
dimension to the extreme of posing the entire body as the “interaction
device”, a sort of “choreographic interface”. In _Surprise Machines_ (see
https://dariorodighiero.com/Surprise-Machines-for-Harvard-Art-Museums),
visitors explore Harvard Art Museum collections through the use of precise and
choreographed gestures [52], each one mapped to a specific task, reminding us
of Shneiderman’s visualization mantra: “overview first, zoom and filter, then
details-on-demand” [58]. Figure 2 shows the digital collection spatially
distributed according to visual similarity of the different items and the
gesture vocabulary defined to explore this latent space.
Figure 2: _Surprise Machines_ media installation with the museum’s collection
spatially distributed on the left and the gesture vocabulary on the right
(credits: metaLAB (at) Harvard).
In this installation, the digital collection is shaped to create a virtual
environment that users can freely explore through their bodies. This intrinsic
relationship to the environment is elucidated by a third dimension: the
“ecological body”, which can be defined as the continuous process between our
bodies and the environment we evolve within [15, 43]. The twofold phenomena of
“embodied understanding” and the “ecological body” are pushed even further in
IEs such as the EPFL Laboratory of Experimental Museology’s Panorama+ or its
predecessor, the UNSW’s iCinema Research Centre’s landmark system Advanced
Interaction and Visualization Environment (AVIE). The Panorama+ is a
360-degree stereoscopic, interactive environment, ten meters in diameter and
four meters high, with five projectors and a surround-sound audio system,
controlled by a cluster of six computers (see [42] for a more in-depth
technical description of the AVIE). Its omnidirectional nature recreates a
fully immersive data space that allows for both allocentric (relationships
between objects) and egocentric (personal relationships to objects) cognition
and spatial perspectives simultaneously [5], cited in [30]. Visitors can
indeed physically represent themselves with respect to digital objects in the
virtual landscape and appreciate relationships between these digital items,
blurring the line between what is real and what is virtual. This phenomenon
increases their sense of presence and thus enhances the narrative component
of the exploratory experience. The digital collection is no longer an ascetic
database or list of catalogue entries but a fully-fledged environment that
users can explore on their own terms and to which they can individually or
collectively relate with other visitors, in the case of multi-user systems.
It is in these multi-user experiences that the fourth dimension of the body
Johnson outlines, the “social body”, is revealed. Indeed, our environment is
not just physical or biological but also composed of human relationships and
interactions with our peers. In the field of developmental psychology, the
effect (and importance) of other people during our childhood is well-known
[60], and this continues as adults, through interactions with our colleagues
and friends. This is especially true in social spaces such as museums, where
users are generally not alone but rather continuously confronted with the
presence of others, usually strangers, who become the “spectators” in the
trichotomy “system-user-spectators” [17]. Embodiment, on this basis, can thus
be argued to have a “participatory” status where the user driving the
experience becomes the author of narrative for a larger public. Here again,
the “third-person’s perspective” Geert Mul mentioned in his work, as well as
the dual view of the role of users in generative interactive pieces are
crucial [46]. From the perspective of performance studies theory, “it is the
ways in which the user perceives and experiences the act of interacting with
the system under the potential scrutiny of spectators that greatly influences
the interaction as a whole … it is precisely this awareness of the
(potentiality of a) spectator that transforms the user into a performer” [11].
Similarly, the latest theoretical frameworks on creativity highlight the
importance of the “public” in the creative act [61]: the interaction with a
given system in this case. This creative act of interacting with an
installation also has an important effect on the “embodied understanding” of
the collection, as put forward by constructionist theories on learning [14,
48]. Individuals actively engaged with the knowledge they are being presented
will learn more than those passively witnessing it. Bloom’s famous taxonomy of
educational objectives supports this claim, since its main categories include
Application, Analysis, Synthesis and Evaluation (activities that require the
manipulation and creation of knowledge), in addition to Knowledge and
Comprehension [6]. One of the more modern revisions of this taxonomy puts
emphasis on the notion of creating new knowledge by splitting the original
classification into two dimensions: the first on the actual knowledge being
addressed, and the second on the cognitive processes applied to this knowledge
[34]. Furthermore, empirical evidence suggests the hierarchical characteristic
of these taxonomies, placing the category of Create at the top [1].
The fifth and last dimension of the body Johnson describes is the cultural
one. He argues that various cultural aspects contribute to the shaping of our
bodies and the way we see and relate to them. This explains why gestures and
postures vary across the world, as well as our attitudes towards our
environment. Cultures are enacted through rituals, practices, customs
performed by humans as inherently embodied beings [26]. Therefore, in IVS, the
emergent narrative will be enacted by the embodied users, attended by their
prior knowledge and specific cultures. The interpretative action of the
visitors mentioned before depends on the individual preconceptions people
bring to the experience, and consequently the emergent narrative that results
from their interactions can greatly differ. The dialogue between system and
users is driven by these interpretations; users with specific backgrounds will
draw connections that might appear rather strange to other visitors but
nonetheless “make sense” in their specific dialogue, in their specific
narrative. This fundamentally individual interpretative action reverts back to
the social body and Mul’s “third-person’s perspective”, where the individual
differences between users spark seemingly infinite combinations and unfolding
of different narratives. Viewers’ own cultures and previous knowledge become
new variables in the process of generating these narratives, both for the user
actually driving the experience and for the public interpreting it.
Furthermore, the cultural aspect is particularly relevant when exploring
audiovisual archives, because the “immaterial manifestations of culture” (what
we refer to as Intangible Cultural Heritage) can be captured and documented
through videos, so that exploring such a collection amounts to exploring
culture itself (or at least an aspect or portion of it). The power of IEs to
“plac[e] users inside the dataset” [57] can thus immerse them in a cultural
setting, with the various benefits for their experience, both for their
engagement and for their learning. This full immersion in a collection creates
an “embodied theatre of participation” that “permits an unprecedented level of
viewer co-presence in a narrative-discovery of a cultural landscape”,
facilitating “dynamic inter-actor participation and cultural learning” [29].
Having outlined the need for an innovative framework and situated it within
embodiment theory, we now apply this thinking to analyze and critique two
previously built installations, highlighting the particularities and
advantages of our proposition.
## 4 Critique of Interactive Installations in Light of the Proposed
Framework: _T_Visionarium II_ and _Jazz Luminaries_
The need for an immersive generosity having been justified and the concept of
embodied understanding explained through Johnson’s theory of the body, we now
illustrate how this innovative framework we propose can be applied to
previously built installations created to explore large video collections.
These use cases will first be presented and then critiqued in light of the
proposed framework, in an attempt to formalize it and draw conclusions on what
the next stage of immersive interfaces for exploring large video collections
should aim for. The two installations discussed here are _T_Visionarium II_
and _Jazz Luminaries_.
_T_Visionarium II_ is part of the _T_Visionarium_ project, developed at the
UNSW’s iCinema Research Centre between 2004 and 2017, resulting in three
iterations of the work (see
http://www.icinema.unsw.edu.au/projects/t_visionarium/project-overview/ for a
project overview). The first version was developed for an inflatable dome
structure [4], while the more advanced second and third versions use the AVIE
system, as can be seen in Figure 3. The interactive and immersive
installation explores 24 hours of television footage, segmented, manually
annotated (based on a thesaurus) and transformed into a database of more than
20,000 clips of a few seconds each. The system is meant for a single user who
navigates it with a touch tablet while the larger audience witnesses the
emergent narrative. Hundreds of clips are simultaneously playing on the
360-degree screen, and when the user-agent selects one, the digital landscape
rearranges itself, mapping semantically similar clips closer to the selected
item (based on the annotated metadata). The selected clips can be recombined
together, rewriting the linear narrative of the original footage (already
broken down by the initial segmentation) and resulting in a “recombinatory” or
“transcriptive” narrative [7].
Figure 3: View of the _T_Visionarium II_ installation (credits: Sarah
Kenderdine)
_Jazz Luminaries_ , on the other hand, is a much more recent project. Part of
the _Infinity Room II_ exhibition, the application was developed by EPFL’s
Laboratory of Experimental Museology in 2019, in a full dome structure of six
meters diameter. 13,000 videos of the Montreux Jazz Archive are arranged in a
network where nodes represent artists and links their collaborations during
the festival. The network can be navigated with a spherical controller,
mimicking the structure of the dome; when passing over a certain node, its
corresponding sound excerpt rapidly plays, resulting in an acoustic search
akin to radio channel surfing. Lying down under the dome, users thus explore
the archive and when an artist is selected, they can choose a specific
performance to finally reach a fractal view of the corresponding video
(allowing users to appreciate it in spite of their relative position under the
dome). As in _T_Visionarium II_ , one spectator drives the experience with the
spherical controller, as shown in Figure 4, while the others appreciate the
unfolding of narrative, reclined under the dome [31].
Figure 4: View of the _Jazz Luminaries_ installation (credits: Sarah
Kenderdine)
Both installations evidently pivot on a browsing experience, where the
user-agent explores hundreds of videos simultaneously,
gradually revealing details of the collection. The omnidirectional immersion
offered by the AVIE system relates to Johnson’s idea of the ecological body,
as users are “inside” the dataset, affected and affecting it through their
interactions. Similarly, the dome structure recalls the centuries-long history
of planetarium structures, with the obvious metaphor of the sky vault, whose
goal has always been to cover the human field of view.
By completely covering the surface of the screen with videos or images, users
are naturally inclined to turn their head and look around, as if they were
physically exploring a real room filled with archival content, echoing the
ecological and phenomenological body through this kinaesthetic paradigm. The
first iteration of the _T_Visionarium_ project put an even stronger emphasis on
this phenomenological aspect, since the visitor wore a head-tracking system
that calibrated the projected portion of the dome surface to the orientation
of their head. While turning around, they would therefore visually unveil the
extent of the database [4]. The phenomenon of wandering around in
the archive, without a clear path to follow, calls up the model of the
“information flaneur” [16]. Curious, creative and critical, this
information-seeker, modeled on the 1840s Paris urban flaneur, aims not to find
something in particular but rather to appreciate the collection as a whole,
to be surprised by what they stumble upon, and ultimately simply to be
immersed in the archive. Furthermore, this serendipitous search paradigm
relates to the idea that “information is organic” [41], and hence should be
explored in an organic way.
In _Jazz Luminaries_ , such a natural aspect of the data is also suggested
through the acoustic search, which draws on the biological body and references
the concept of “tangialities”, which are not just related to touch but
include all five senses [44]. The social interactions that might arise
from a single user driving the experience for a larger public recall the
social body as well as Mul’s “third-person’s perspective”, since one could
argue it is the public that is experiencing the full performance, defined by
the emergent narrative that the user-agent is creating. There is, as such, no
predefined narrative but an infinite combination of item sequences only
bounded by viewers’ interpretative power. While browsing the archive, it is as
if users were “sculpting” its contents [33], shaping the experience the way
they wish, while being guided by the relationships between elements based on
the metadata without being constrained by them. It is clear that such a
paradigm could be augmented with modern computational approaches, allowing
much more intricacy, and thus more possibilities, to be generated in the
database, particularly if it also relies on visual features, as in the SEMIA
project illustrated previously [40]. Finally, the cultural body is inherent in
the exploration of a cultural collection, and entails that prior individual
knowledge as well as different cultures are yet another variable in the
process of generating narrative. Suffice it to say that a visitor who actually
attended one of the Montreux Jazz Festival concerts and then re-experiences it
reclined under the dome will have a much different experience from those
discovering the Festival for the first time.
Recalling Aylett and Louchart’s narrative theory, _T_Visionarium II_ and _Jazz
Luminaries_ have a strong contingency in time and space: first, because the
narrative is completely generative and determined by users’ interactions;
second, because of the strong feeling of “being there” produced by the full
immersion in the archive and the idea of “sculpting” it; third, thanks to the
high level of interactivity (at least for the user-agent driving the
experience). In this way, IVS adopts a form of narrative based on the concept
of creating a story through the millions of paths embedded in the latent
structures of the collections. Through interactivity and the freedom to
explore the horizontal axes of the archive, visitors obtain a strong authorial
power on the narrative. In Geert Mul’s words, visitors are “highly productive,
in that the appearance of the works changes based on their input” [46], and
this entails the clear benefits in the learning experience maintained by
constructionist theories. At the same time, it is important to provide
intuitive interaction frameworks that minimize the learning curve of the
system and do not rely on users having clear goals in mind,
something that cannot reliably be expected from casual visitors in a museum
setting. In addition, as previously stated, museums are social venues, and
interactions between visitors are welcomed and encouraged, such as viewers
passing around the spherical controller in _Jazz Luminaries_. Johnson’s concept
of the social body is one of the reasons why multi-user shared spaces in IEs
are arguably more interesting than traditional Head-Mounted Displays (HMDs)
for IVS in museum spaces. Indeed, even though, from a technical point of view,
immersion might be higher while using HMDs, the stronger grounding in the
physical reality offered by large display screen systems such as the Panorama+
/ AVIE allows for a more humanistic approach to the collection, based on
social proximity with other visitors.
The two installations discussed, _T_Visionarium II_ (as well as its
predecessor) and _Jazz Luminaries_ , have provided clear use cases to
illustrate the innovative framework proposed. These projects have their
limitations, however, often because they are based on traditional metadata
rather than more intrinsic relationships related to strictly visual features
(such as colors, visual complexity, movement…) that modern computer vision
approaches can unveil. These restrictions are amongst the issues that the next
generation of audiovisual archives browsers and projects like _Narratives from
the Long Tail: Transforming Access to Audiovisual Archives_ are endeavouring
to overcome.
## 5 Conclusion
In this article, we present an innovative framework to explore large
audiovisual collections through embodied experiences in immersive
environments, drawing on Immersive Visual Storytelling theory, Whitelaw’s
concept of generous interfaces, and Johnson’s theory of the body. The need for
such a framework is highlighted by archival and humanities scholars as well as
the GLAM sector. Two use cases have been analyzed in-depth and critiqued in
light of our proposition, supporting our claim that a move towards a more
immersive generosity will enhance the experience of visitors engaging with
large cultural collections in museum settings through embodied understanding.
The physicality of moving through the collection within which users are
immersed, and modifying the way the system presents itself after each
interaction, entails the metaphor of sculpting the data, as if it were a raw
block of material offered to the viewer to create their own narrative.
Furthermore, the social relationships that arise in such shared immersive
spaces are key to obtaining a full picture of the embodied understanding of
the kinds of cultural collections that people can experience in a museum.
Nonetheless, work remains to be done in the field to improve access to further
democratize these collections, defining the research that the _Narratives from
the Long Tail: Transforming Access to Audiovisual Archives_ project intends to
undertake as it maps such possibilities.
## 6 Acknowledgements
This research is supported by the Swiss National Science Foundation through a
Sinergia grant for the interdisciplinary project _Narratives from the Long
Tail: Transforming Access to Audiovisual Archives_ , led by co-author Prof.
Sarah Kenderdine (grant number CRSII5_198632, see
https://www.futurecinema.live/project/ for a project description).
## References
* [1] Lorin W Anderson and David R Krathwohl “A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives” Longman, 2001
* [2] R. Aylett and S. Louchart “Towards a Narrative Theory of Virtual Reality” In _Virtual Reality_ 7.1, 2003
* [3] Marcia J Bates “What is Browsing Really? A Model Drawing from Behavioural Science Research” In _Information Research - an International Electronic Journal_ 12.4 University of Sheffield, 2007
* [4] Jill Bennett “T_Visionarium : A user’s guide” University of New South Wales Press Ltd, 2008
  * [5] Barry Blesser and Linda-Ruth Salter “Spaces Speak, Are You Listening? Experiencing Aural Architecture” MIT Press, 2009
* [6] Benjamin Samuel Bloom “Taxonomy of Educational Objectives: The Classification of Educational Goals” In _Cognitive domain_ Longman, 1956
* [7] Neil C.M. Brown, Timothy S. Barker and Dennis Del Favero “Performing Digital Aesthetics: The Framework for a Theory of the Formation of Interactive Narratives” In _Leonardo_ 44.3, 2011, pp. 212–219
* [8] Jerome Bruner “Acts of Meaning” Harvard University Press, 1990
  * [9] Dagmar Brunow “Curating Access to Audiovisual Heritage: Cultural Memory and Diversity in European Film Archives” In _Image & Narrative_ 18.1 Open Humanities Press, 2017, pp. 97–110
* [10] Shan-Ju Chang and Ronald E Rice “Browsing: A Multidimensional Framework” In _Annual Review of Information Science and Technology (ARIST)_ 28 ERIC, 1993, pp. 231–76
* [11] Peter Dalsgaard and Lone Koefoed Hansen “Performing Perception—Staging Aesthetics of Interaction” In _ACM Transactions on Computer-Human Interaction (TOCHI)_ 15.3 ACM New York, NY, USA, 2008, pp. 1–33
* [12] Antonio R. Damasio “Looking for Spinoza: Joy, Sorrow, and the Feeling Brain” Houghton Mifflin Harcourt, 2003
  * [13] Antonio R. Damasio “Self Comes to Mind: Constructing the Conscious Brain” Pantheon Books, 2010
  * [14] John Dewey “Democracy and Education: An Introduction to the Philosophy of Education” New York: Free Press; London: Macmillan, 1966
* [15] John Dewey “The Later Works of John Dewey, Volume 1, 1925-1953: 1925, Experience and Nature” Carbondale: Southern Illinois University Press, 1981
* [16] Marian Dörk, Sheelagh Carpendale and Carey Williamson “The Information Flaneur: A Fresh Look at Information Seeking” In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ , 2011, pp. 1215–1224
* [17] Paul Dourish “Seeking a Foundation for Context-Aware Computing” In _Human–Computer Interaction_ 16.2-4 Taylor & Francis, 2001, pp. 229–241
* [18] Johanna Drucker “Performative Materiality and Theoretical Approaches to Interface” In _DHQ: Digital Humanities Quarterly_ 007.1, 2013
* [19] Johanna Drucker “Graphesis: Visual Forms of Knowledge Production” Harvard University Press Cambridge, MA, 2014
* [20] ENUMERATE “Survey Report on Digitisation in European Cultural Heritage Institutions 2014”, 2014 URL: http://www.enumerate.eu/fileadmin/ENUMERATE/documents/ENUMERATE-Digitisation-Survey-2014.pdf
* [21] Giovanna Fossati “Found Footage Filmmaking, Film Archiving and New Participatory Platforms” In _Found Footage. Cinema Exposed. Amsterdam: Amsterdam University Press/EYE Film Institute Netherlands_ , 2012, pp. 177–184
* [22] Shaun Gallagher “How the Body Shapes the Mind” Clarendon Press, 2006
* [23] Stephan Greene, Gary Marchionini, Catherine Plaisant and Ben Shneiderman “Previews and Overviews in Digital Libraries: Designing Surrogates to Support Visual Information Seeking” In _Journal of the American Society for Information Science_ 51.4, 2000, pp. 380–393
* [24] Erkki Huhtamo “Illusions in Motion: Media Archaeology of the Moving Panorama and Related Spectacles” MIT Press, 2013
* [25] Mark Johnson “The Meaning of the Body: Aesthetics of Human Understanding” University of Chicago Press, 2007
* [26] Mark Johnson “What Makes a Body?” In _The Journal of Speculative Philosophy_ 22.3, 2008, pp. 159–169
* [27] Mark Johnson “Embodied Understanding” In _Frontiers in Psychology_ 6 Frontiers, 2015, pp. 875
* [28] David J Kasik et al. “Data Transformations and Representations for Computation and Visualization” In _Information Visualization_ 8.4 SAGE Publications Sage UK: London, England, 2009, pp. 275–285
* [29] Sarah Kenderdine “Somatic Solidarity, Magical Realism and Animating Popular Gods: Place-Hampi ”where intensities are felt”” In _2007 11th International Conference Information Visualization (IV ’07)_ , 2007, pp. 402–408 DOI: 10.1109/IV.2007.103
* [30] Sarah Kenderdine “Embodiment, Entanglement, and Immersion in Digital Cultural Heritage” In _A New Companion to Digital Humanities_ John Wiley & Sons, Ltd, 2015, pp. 22–41 DOI: 10.1002/9781118680605.ch2
* [31] Sarah Kenderdine “Experimental Museology: Immersive Visualisation and Cultural (Big) Data” In _Experimental Museology_ , 2021, pp. 15
* [32] Sarah Kenderdine, Ingrid Mason and Lily Hibberd “Computational Archives for Experimental Museology” In _International Conference on Emerging Technologies and the Digital Transformation of Museums and Heritage Sites_ , 2021, pp. 3–18 Springer
* [33] Sarah Kenderdine, Jeffrey Shaw and Tobias Gremmler “Cultural Data Sculpting: Omnidirectional Visualization for Cultural Datasets” In _Knowledge Visualization Currents: From Text to Art to Culture_ London: Springer, 2013, pp. 199–220
* [34] David R Krathwohl “A Revision of Bloom’s Taxonomy: An Overview” In _Theory into practice_ 41.4 Taylor & Francis, 2002, pp. 212–218
* [35] Richard Kurin “Safeguarding Intangible Cultural Heritage in the 2003 UNESCO Convention: a critical appraisal” In _Museum International_ 56.1-2, 2004, pp. 66–77
  * [36] Kris Layng et al. “CAVE: Making Collective Virtual Narrative: Best Paper Award” In _Leonardo_ 52.4 MIT Press, 2019, pp. 349–356
* [37] Federico Lenzerini “Intangible Cultural Heritage: The Living Culture of Peoples” In _European Journal of International Law_ 22.1, 2011, pp. 101–120 DOI: 10.1093/ejil/chr006
* [38] Joan Llobera, Kristopher J Blom and Mel Slater “Telling Stories within Immersive Virtual Environments” In _Leonardo_ 46.5 MIT Press, 2013, pp. 471–476
* [39] Irene Lopatovska, Iris Bierlein, Heather Lember and Eleanor Meyer “Exploring Requirements for Online Art Collections” In _Proceedings of the American Society for Information Science and Technology_ 50.1 Wiley Online Library, 2013, pp. 1–4
* [40] Eef Masson, Christian Gosvig Olesen, Nanne Noord and Giovanna Fossati “Exploring Digitised Moving Image Collections: The SEMIA Project, Visual Analysis and the Turn to Abstraction” In _DHQ: Digital Humanities Quarterly_ , 2020
* [41] Helen McCorry “Museums, the Web and the Serendipity Facilitator” In _MDA Information_ , 2001, pp. 133–135
* [42] Matthew McGinity et al. “AVIE: a Versatile Multi-User Stereo 360° Interactive VR Theatre” Association for Computing Machinery, 2007
* [43] Maurice Merleau-Ponty “Phenomenology of Perception” Routledge, 2013
* [44] Slavko Milekic “Towards Tangible Virtualities: Tangialities” ERIC, 2002
* [45] Franco Moretti “Distant Reading” Verso Books, 2013
  * [46] Geert Mul and Eef Masson “Data-Based Art, Algorithmic Poetry: Geert Mul in Conversation with Eef Masson” In _TMG Journal for Media History_ 21.2 Netherlands Institute for Sound and Vision, 2018
* [47] Christian Gosvig Olesen et al. “Data-driven Research for Film History: Exploring the Jean Desmet Collection” In _Moving Image: The Journal of the Association of Moving Image Archivists_ 16.1 JSTOR, 2016, pp. 82–105
* [48] Jean Piaget “To Understand is to Invent: The Future of Education”, 1973
  * [49] Robert M. Pirsig “Zen and the Art of Motorcycle Maintenance” William Morrow & Company, 1974
* [50] Cambridge University Press “Definition of serendipity from the Cambridge Advanced Learner’s Dictionary and Thesaurus”, 2021 URL: https://dictionary.cambridge.org/dictionary/english/serendipity
* [51] Ronald E Rice, Maureen McCreadie and Shan-Ju L Chang “Accessing and Browsing Information and Communication” MIT Press, 2001
* [52] Dario Rodighiero et al. “Surprise Machines: Revealing Harvard Art Museums’ Image Collection” In _Information Design Journal_ , forthcoming
* [53] Katja Rogers, Uta Hinrichs and Aaron Quigley “It doesn’t Compare to Being There: In-situ vs. Remote Exploration of Museum Collections”, 2014
* [54] RTSArchives “Le nouveau site RTSarchives”, 2018 URL: https://www.rts.ch/archives/5919889-le-nouveau-site-rtsarchives.html
* [55] G Scagnetti “Visual Epistemology for Design in Complex Systems, lecture at Tecnológica de Monterrey: Academic leaders’ program”, 2011
* [56] Maxine Sheets-Johnstone “The primacy of Movement” John Benjamins Publishing, 2011
* [57] Haifeng Shen et al. “Information Visualisation Methods and Techniques: State-of-the-art and Future Directions” In _Journal of Industrial Information Integration_ 16 Elsevier, 2019, pp. 100–102
* [58] Ben Shneiderman “The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations” In _The Craft of Information Visualization_ Morgan Kaufmann, 2003, pp. 364–371
* [59] Mel Slater and Sylvia Wilbur “A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments” In _Presence: Teleoperators and Virtual Environments_ 6.6, 1997, pp. 603–616 DOI: 10.1162/pres.1997.6.6.603
* [60] Daniel N Stern “The Interpersonal World of the Infant: A View from Psychoanalysis and Developmental Psychology” Routledge, 2018
* [61] Robert J Sternberg and Sareh Karami “An 8P Theoretical Framework for Understanding Creativity and Theories of Creativity” In _The Journal of Creative Behavior_ 56.1 Wiley Online Library, 2022, pp. 55–78
* [62] Jonathan Steuer “Defining Virtual Reality: Dimensions Determining Telepresence” In _Journal of communication_ 42.4 Wiley Online Library, 1992, pp. 73–93
* [63] Elaine G Toms “Serendipitous Information Retrieval” In _DELOS_ , 2000
* [64] Mitchell Whitelaw “Generous Interfaces for Digital Cultural Collections” In _DHQ: Digital Humanities Quarterly_ , 2015
* [65] Florian Windhager et al. “Visualization of Cultural Heritage Collection Data: State of the Art and Future Challenges” In _IEEE Transactions on Visualization and Computer Graphics_ 25.6, 2019, pp. 2311–2330 DOI: 10.1109/TVCG.2018.2830759
* [66] R. Wright “The Future of Television Archives - Digital Preservation Coalition”, 2017 URL: https://www.dpconline.org/blog/wdpd/the-future-of-television-archives
* [67] Tom Ziemke, Jordan Zlatev and Roslyn M Frank “Body, Language, and Mind” Walter de Gruyter, 2007
Remotely Operating a Single-Point LDV System to Acquire 2D Measurements
Alec K. Ikei and Dr. Kaushik Sampath
Acoustic Signal Processing and Systems Branch, Acoustics Division
U.S. Naval Research Laboratory, 4555 Overlook Ave SW, Washington, DC 20375
NRL Memorandum Report, January 27, 2021 (reporting period 01/01/2019 - 01/07/2021)
Distribution A: Approved for public release; distribution unlimited
# Remote Operation of a Single-Point LDV System to Acquire 2D Measurements
Alec K. Ikei and Dr. Kaushik Sampath
###### Contents
1. 1 Introduction
1. 1.1 Motivation
2. 1.2 Background
3. 1.3 A Brief Overview of This Work
1. 1.3.1 Input Excitation
2. 1.3.2 Output Measurement
3. 1.3.3 Data Analysis
2. 2 Equipment and Connections
3. 3 Data Acquisition Using LabVIEW
1. 3.1 FPGA
2. 3.2 RT Controller
3. 3.3 Desktop PC
4. 4 Data Analysis in MATLAB
1. 4.1 Loading and Processing Data
2. 4.2 Using FFT Data to Perform Modal Analysis
5. 5 Discussion
6. 6 Conclusion
7. A LabVIEW Data Acquisition Code
8. B MATLAB Parallel Processing 2D LDV Code
9. C MATLAB Custom Function: func_SSA
10. D MATLAB Custom Function: importVelocity
Due to the unprecedented increase in telework requirements, the motivation to
further automate and remotely control experiments has become apparent. This
work documents the technical development of creating a two-dimensional (2D)
Laser Doppler Vibrometry (LDV) measurement using a single-point LDV system
through an automated and remotely controllable process. This report aims to
assist in rapid development of setups for similar use cases. A key achievement
of this work is the complete control integration of various types of hardware
and communication protocols, i.e. Field-Programmable Gate Array (FPGA),
function generator, translation stage, LDV and accelerometer. While several
experiments may be “executed” remotely once the equipment parameters have been
optimized in person, in this work each tunable parameter of all the hardware
can be adjusted remotely as well, thereby eliminating that extra
requirement. The setup described is also modular, and has been used to analyze
the modal response of samples actuated through air-based acoustic signals as
well as those induced mechanically.
In the setup described in this work, an arbitrary waveform is set in a
function generator, which is then amplified and played through a speaker in an
opened transmission loss tube. The acoustic signal from the speaker travels
through the air in the tube to vibrate the sample surface. The LDV is mounted
on a 2D motorized platform, which is controlled through LabVIEW on a desktop
computer. The LDV continuously sends velocity and signal strength data to a
Compact Rio (cRio) data acquisition device, which records data upon trigger
from the function generator. The recorded data is then streamed and saved
on the desktop computer. The Fourier transform of each measurement location is
calculated in MATLAB, generating a 3D data array, consisting of two spatial
dimensions and one dimension in frequency. A single frequency slice of data is
taken from this array, which gives the amplitude of a 2D surface at a single
frequency, also called a mode shape. These mode shapes can then be compared to
expected results from COMSOL simulations.
## Chapter 1 Introduction
### 1.1 Motivation
The current COVID-19 pandemic has required adaptation to a rapidly changing
situation and long-term telework. To continue to do bench work, it makes sense
to automate processes that can be automated, and to allow for remote
access and control of the testing environment. For example, one of our
experiments required finely spaced vibration measurements across the surface
of a sample, with various input excitations, measurement sampling rates and
spacing. With only a single-point LDV system and a data acquisition system
(DAQ), it would take many hours to complete the measurements while also increasing
the chance of human error. Instead of the experiment operator manually
changing the excitation and measurement parameters, it is more efficient and
convenient to automate, remotely control and monitor the experiment.
Therefore, the function generator parameter setup was automated and integrated
with the pre-pandemic point-scanning setup.
An LDV system is widely used to measure the response of a system/sample to an
excitation. Such experiments involve three broad stages: (1) sample
fabrication and mounting, (2) excitation and (3) measurement scans. The sample
preparation is entirely problem-specific and is usually not integrable with
the latter two. A variety of different excitation mechanisms/inputs may be
employed for experiments. In our lab, we perform vibrational or acoustic
excitations. Typically, once the sample is fabricated and mounted to the
excitation mechanism in a location accessible to the LDV for scanning, it
still requires a significant (at-work) time commitment to synchronize and
optimize the excitation and LDV settings before the experiment can begin.
Here, our remote operation of the LDV system integrates all the components of
the excitation mechanism as well, allowing the user to remotely control
excitation and LDV settings to optimize them and run the measurement scan.
This includes control of the function generator, thereby adjusting the output
of a vibrational exciter or sound tube.
Commercial solutions to just the scanning stage of the problem exist for some
specific applications, such as the 2D and 3D LDV systems sold by various
companies. However, higher dimensional systems can cost on the order of 10
times more than a single-point system for each added dimension of measurement
capability. For many lab setups, this may not be feasible, and in many
situations such systems do not allow for enough adaptability and control of
the scan parameters. The methods described here can be reproduced at a lower
cost, and with less administrative time, than purchasing a 2D LDV, while still
producing high-quality results and allowing customization and integration with
the testing environment. Another crucial advantage of the current approach is that a
vendor-built custom 2D scanning system is wedded to just the LDV measurement,
and has no use during the machine’s down-time. By contrast, as a consequence of
the present work, the 2D motorized stage can be used modularly for a wide
range of other experiments, and the automation/remote-control components carry
forward to several other case scenarios. For example, the motorized stage and
function generator can be placed in an acoustic (hydrophone) scan experiment
and moved back to the LDV setup as needed.
### 1.2 Background
Historically, in the Acoustic Signal Processing and Systems Branch (Code
7160), we performed experiments using scanning-type measurements for
underwater acoustic tests, using an automated system to move and acquire data
from a hydrophone in 2D and 3D scans. The code used to run the underwater
tests was modified to run on different hardware and connect to different
systems, but the idea behind the measurement remains the same. By moving the
data acquisition location around, the data collected represents the same data
that would have been collected if using a large array of measurement devices.
Conversely, the excitation source can be moved around with a constant data
acquisition location, which would emulate having an array of sources.
The data collected at each measurement point is usually then processed in
MATLAB to extract binned time series data or frequency domain data. These
analyses can then be represented as a propagating wave or modal shape,
respectively. As additive manufacturing and featuring capabilities achieve
finer resolutions, the demand for higher frequency scans has only been
increasing. Moreover, an interest in studying non-linear acoustic or transient
phenomena also requires large, dense collections of data points and high-
frequency scans. These result in a substantial increase in the amount of data
acquired per scan, necessitating parallelization of the data processing step.
### 1.3 A Brief Overview of This Work
#### 1.3.1 Input Excitation
The testing setup described in this work sets an arbitrary waveform in a
function generator, which is then amplified and played through a speaker in an
opened transmission loss tube. The acoustic signal from the speaker travels
through the air in the tube to vibrate the sample surface. The transmission
tube can be replaced by a vibration exciter, if the desired actuation is
mechanical rather than pressure based.
#### 1.3.2 Output Measurement
The LDV is mounted on a 2D motorized platform, which is controlled through
LabVIEW on a desktop computer. The LDV continuously sends velocity and signal
strength data to a cRio data acquisition device, which records data upon
trigger from the function generator. At the same time, the data from the
accelerometer is also collected, so that the output can be normalized for each
measurement location. The recorded data is then streamed and saved on the
desktop computer. Since the controls for this setup are all on the desktop
personal computer (PC), this allows remote desktop users to modify the input
excitation, linear stages and DAQ parameters while teleworking.
#### 1.3.3 Data Analysis
The Fourier transform of each measurement location is calculated in MATLAB,
generating a 3D data array. A single frequency slice of data is taken from
this array, which gives the amplitude of a 2D surface at a single frequency,
also called a mode shape. These mode shapes can then be further analyzed and
compared to expected results in COMSOL simulations, as seen in Fig. 1.1.
Figure 1.1: Vibrational modes of a circular elastomeric plate, produced using
the setup described in this work and compared with a COMSOL model. Figure
reused with permission (Wissman et al., 2019a, 2019b).
## Chapter 2 Equipment and Connections
The desktop PC was connected through a Universal Serial Bus (USB) cable to the
function generator. The signal output of the function generator was connected
to the input of the amplifier through a Bayonet Neill-Concelman (BNC) cable.
The output of the amplifier was connected to the input of the tube through
banana cables. The impedance/sound tube kit can be substituted with a
vibrational exciter when mechanical actuation is required instead.
The PC was connected through RS-232 to the 2D stage controller. The LDV was
mounted on a custom mounting bracket to the stage. The PC was connected to the
DAQ through an Ethernet cable, and the DAQ modules were inserted into the DAQ
chassis. The signal strength and velocity outputs of the LDV were connected to
the analog input of the DAQ.
The trigger signal from the function generator was connected through BNC to
the digital module on the DAQ as well as the anti-drift input on the LDV
controller. This causes the voltage output of the LDV to be set to 0V at each
trigger, which helps to avoid the signal exceeding the maximum allowed by the
LDV controller (10 V). The trigger is also used by the DAQ to determine when to
start recording the signal for each measurement location. The equipment and
connection schematics are illustrated in Fig. 2.1, and their make and model
are listed in Table 2.1.
Equipment | Model | Manufacturer
---|---|---
Function Generator | 33500B | Agilent Technologies
Power Amplifier | Type 2718 | Brüel & Kjær (B&K)
Transmission Loss Tube Kit | Type 4206 | Brüel & Kjær (B&K)
2D Linear Stage | Bi-Slide | Velmex Inc.
Single-Point LDV | CLV-2534 | Polytec GmbH
DAQ Chassis | cRio-9035 | National Instruments
Analog Input Module | NI-9223 | National Instruments
Digital I/O Module | NI-9402 | National Instruments
Table 2.1: List of equipment and their respective makes and models, used to
take a 2D LDV scan of the elastomeric plate seen in Fig. 1.1. Figure 2.1:
Equipment connections used to perform the 2D LDV scan of the elastomeric plate
seen in Fig. 1.1.
## Chapter 3 Data Acquisition Using LabVIEW
The description here assumes basic familiarity with LabVIEW. The code is run
from several systems: the desktop PC, the Real-Time (RT) controller, and the
FPGA inside of the controller chassis. The LabVIEW code uses the FPGA and the
C-series DAQ modules on the cRio to take data, passes it along through the RT
controller and then the PC. In addition to the base LabVIEW program, the
Embedded Control Suite is required to operate the FPGA. A flow chart showing
the communication logic is displayed in Fig. A.1. LabVIEW block diagrams are
shown in Appendix A.
### 3.1 FPGA
In the FPGA code, the digital input is constantly read until it detects a
rising edge from the function generator trigger. It then takes the analog
voltage time series through a for-loop, and passes the values into a Direct
Memory Access (DMA) first-in first-out (FIFO) channel. DMA FIFO channels are a
way to communicate data at high sampling rates between the FPGA and the RT
controller. The number of iterations of the for-loop and the timing between
iterations is set by a control. Control values are modifiable from the RT
code. The LDV signal strength is passed into an indicator. The DMA FIFO is
also checked if it overflowed, and this value is passed onto an indicator. The
overflow should be incorporated with a feedback node, so that successive reads
of the overflow indicator do not reset the value. The FPGA code can be seen in
Figure A.3.
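Since the FPGA logic is implemented graphically in LabVIEW (Fig. A.3), the following MATLAB fragment is only a conceptual, software-level sketch of the same trigger-then-acquire behavior; the trigger trace, sample count and analog data below are made-up placeholders rather than values from the actual system.
%Conceptual software analogue of the FPGA trigger logic (illustrative only;
%the authoritative implementation is the LabVIEW block diagram in Fig. A.3).
trigger = [0 0 0 1 1 1 0 0 1 1]; %placeholder digital trigger trace
analogIn = randn(1, numel(trigger)); %placeholder analog LDV samples
nSamples = 3; %samples to capture per trigger event (arbitrary)
risingEdges = find(diff(trigger) == 1) + 1; %indices where the trace goes 0 -> 1
for k = 1:numel(risingEdges)
    idx = risingEdges(k);
    if idx + nSamples - 1 <= numel(analogIn)
        record = analogIn(idx:idx + nSamples - 1); %windowed capture
        %in the real system, each sample is pushed into the DMA FIFO here
    end
end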
### 3.2 RT Controller
The RT controller opens the FPGA program, and sets the FPGA control values
using the “Read/Write Control” function. The RT controller waits until the 2D
stage has stopped, which is determined by a Boolean shared variable. Before
and after data acquisition, the RT controller code checks if the 2D stage has
stopped moving, and tells the PC that it is done acquiring data for that
sampling location through Boolean shared variables. The FPGA code is then
started, and the data is read from the DMA FIFO through a while loop. The
while loop queries the DMA FIFO to check if there are elements stored on it.
If there are, it reads the elements and places them in an array. The array is
appended to previous iterations of the while loop. When the while loop is
done, the array contains data from a single average. The for loop that
surrounds the while loop repeats the single average process until it contains
the data from the number of averages requested, as seen in Fig. A.4.
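As a rough software analogue of this read-and-average loop (the authoritative version is the LabVIEW code in Fig. A.4), the sketch below accumulates fixed-length records chunk by chunk and then averages them; readFifoChunk is a hypothetical stand-in, replaced by random data so that the fragment runs, and the record length and average count are arbitrary.
%Conceptual MATLAB analogue of the RT controller read-and-average loop.
nAverages = 8; %number of triggered records to average (arbitrary)
recordLen = 1000; %samples per record (arbitrary)
accum = zeros(recordLen, 1);
for avg = 1:nAverages
    record = [];
    while numel(record) < recordLen
        %chunk = readFifoChunk(); %hypothetical DMA FIFO read
        chunk = randn(min(100, recordLen - numel(record)), 1); %placeholder
        record = [record; chunk]; %append, as in the LabVIEW while loop
    end
    accum = accum + record;
end
avgRecord = accum / nAverages; %averaged time series for this location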
### 3.3 Desktop PC
The desktop PC controls the 2D stage and the function generator, and
coordinates with the RT controller to acquire data between movements of the
LDV. In the first section of the PC code, the 2D linear stage is initialized,
which gives the stage controller an origin to reference. The initialization
sub-function of the LabVIEW Virtual Instrument (sub VI) is custom made, based
on communication protocols supplied by the manufacturer. The function
generator parameters and the arbitrary waveform file are read into a custom
made sub VI, which utilizes drivers from the manufacturer. This is shown in
Fig. A.5.
In the next selected snippet (Fig. A.6), the PC code creates a raster grid
based on user input. The raster grid is iterated through nested for loops,
reading and saving data each time to the hard drive, as seen in Fig. A.7. The
PC tells the RT controller when the movement is done through a Boolean shared
variable, and waits for the data acquisition to finish before reading the
data. Once it finishes saving, it moves on to the next iteration of the nested
for loop. When the nested for loop is done, it tells the cRio that the scan is
done, and closes the communication port for the function generator. Failure to
close ports properly, as mentioned here, will lead to communication issues in
subsequent operation, which can usually be remedied by resetting the device.
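For readers unfamiliar with LabVIEW, the MATLAB sketch below mirrors the raster-scan coordination logic of Figs. A.5-A.7; moveStageTo and acquirePoint are hypothetical placeholders for the stage-motion and DAQ calls (hence commented out), and the grid spacing is an arbitrary example.
%Minimal sketch of the raster-scan loop implemented in LabVIEW (Figs. A.5-A.7).
xPos = 0:0.5:10; %user-defined grid positions (assumed stage units)
yPos = 0:0.5:10;
for ix = 1:numel(xPos)
    for iy = 1:numel(yPos)
        %moveStageTo(xPos(ix), yPos(iy)); %hypothetical: wait until motion stops
        %data = acquirePoint(); %hypothetical: triggered, averaged record
        fname = sprintf('X%.2f_Y%.2f.tsv', xPos(ix), yPos(iy));
        fprintf('would save data to %s\n', fname); %position encoded in file name
    end
end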
Some of the default settings on the function generator are not suited for our
application. For example, the function generator applies a low-pass filter at
17 kHz, which limits the types of response that can be induced in the sample.
By including code that can modify parameters on the function generator, these
settings can be turned off, modified and automated, allowing for different
parameter sets to be used when acquiring multiple data sets.
## Chapter 4 Data Analysis in MATLAB
### 4.1 Loading and Processing Data
The data taken during the experiment was saved in a folder as separate
tab-delimited text (.tsv) files. In MATLAB, the working directory was changed to the
folder containing the data files. Since the script described here used
parallel processing to increase the processing speed, the MATLAB Parallel
Computing Toolbox is required to run it. In the script, the file names in the
directory are read into an array, and the X and Y values are extracted from
their names into two 1D arrays. The frequency array is calculated based on the
sampling rate and the amount of zero padding that is desired. Zero padding
refers to a signal processing technique in which zeros are added to the
original time series data, which decreases the discrete frequency step size in
the corresponding fast Fourier Transform (FFT). While increasing the zero
padding of a data set provides an increasingly small frequency step size, the
resulting FFT calculations take increasingly more time, and
therefore there is an inherent trade-off between the acceptable processing
time and the minimum frequency step size achievable. A parallelized for
(parfor) loop is then used to read the data and perform an FFT on all the
files in the directory, resulting in a 2D array (each column is the FFT of a
different position, and each row represents a different frequency).
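A condensed sketch of this step is given below, consistent with the Appendix B listing; the record length N is an assumed value, and importVelocity is the custom loader documented in Appendix D.
%Sketch of the zero-padded parallel FFT step (cf. Appendix B). With
%padMultiplier record lengths of zeros appended, the frequency step shrinks
%from samplingRate/N to samplingRate/(N*(1+padMultiplier)).
samplingRate = 100000; %Hz, as in Appendix B
padMultiplier = 1; %number of record lengths of zeros to pad
fileList = dir('*.tsv'); %data files in the working directory
fileNames = fullfile({fileList.folder}, {fileList.name});
N = 4096; %samples per record (assumed)
nFFT = N*(1 + padMultiplier); %padded FFT length
freq = (0:nFFT-1)*samplingRate/nFFT; %frequency vector in Hz
fftData = zeros(nFFT, numel(fileNames)); %one column per scan position
parfor k = 1:numel(fileNames)
    v = importVelocity(fileNames{k}); %custom loader (Appendix D)
    fftData(:, k) = fft(v(:), nFFT); %fft() zero-pads v to nFFT points
end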
### 4.2 Using FFT Data to Perform Modal Analysis
In the next section of the script, a new directory is created, which is used
later to save the figure files in. The frequency limits of the mode shapes to
be displayed are set here, because the frequency range of interest is usually
less than what the entire FFT contains. Using a parfor loop again, the images
are generated from the FFT data. The X and Y arrays are used to index the FFT
array, and the iteration counter of the parfor loop is used to select the row,
which generates a 2D array of amplitudes that represent a single frequency.
The 2D array is then plotted as a color map or a 3D mesh figure. Using this
code, 500 images containing 143 by 136 spatial points (representing 19,448
sampling locations) were created in about 2 minutes on a computer with a
16-core processor. Further optimization can be carried out by doing parallel
analysis on a Graphics Processing Unit (GPU) rather than on a Central
Processing Unit (CPU) as was done here, which would allow many additional
parallel processes, resulting in lower computation time.
In comparison, without a parallel processing approach the code would have to
run on a single processor, which would likely take approximately 16 times as
long; parallelization thus saved about half an hour even for this relatively
small subset of the data. With
larger datasets or more detailed binning of the Fourier transform, the time
saved would likely increase linearly, with a similar factor of performance
improvement. The MATLAB code is included in Appendix B, and the custom
functions used are included in Appendix C and D.
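To make the slicing step concrete, a minimal sketch of extracting and plotting one mode shape from the FFT array follows; in the actual script the x and y coordinates are parsed from the file names, so synthetic placeholder data are substituted here to keep the fragment stand-alone, and the 5 kHz slice frequency is an arbitrary example.
%Sketch of extracting and plotting one mode shape from the FFT array.
[xg, yg] = meshgrid(0:0.5:10, 0:0.5:10); %placeholder scan grid
x = xg(:); y = yg(:); %per-file stage coordinates (placeholders)
freq = (0:8191)*100000/8192; %Hz, as in the Section 4.1 sketch
fftData = rand(8192, numel(x)); %placeholder spectra, one column per position
[xu, ~, ixu] = unique(x); %map coordinates to grid indices
[yu, ~, iyu] = unique(y);
[~, iFreq] = min(abs(freq - 5000)); %nearest bin to 5 kHz (arbitrary example)
modeShape = nan(numel(yu), numel(xu));
for k = 1:numel(x)
    modeShape(iyu(k), ixu(k)) = abs(fftData(iFreq, k));
end
imagesc(xu, yu, modeShape); axis image; colorbar; %color-map view
%surf(xu, yu, modeShape) would give the 3D mesh view instead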
## Chapter 5 Discussion
The method described here shows that a 2D LDV measurement can be done well
with a single-point LDV and easily obtained components. The modular nature of
the setup allows it to be used for different applications; to actuate a thin
elastomeric plate as seen here through pressure waves in air, or through
mechanical means to vibrate more rigid samples, using a vibration exciter and
a stinger. In addition, some components can be swapped out for cheaper parts,
especially the DAQ. A simple FPGA-based DAQ card can be obtained on the order
of $100, which is two orders of magnitude cheaper than a cRio. At NRL, and
perhaps at many other research laboratories, FPGA DAQs are usually already
present and often sit idle. The modularity of this solution can be
extended to various existing hardware.
Conversely, to acquire data at faster speeds, better damping and increased
rigidity of the linear stages can decrease the time needed at each sampling
location. The number of averages and the number of time samples
can also be modified to increase acquisition speed. However, this can affect
the quality of the data produced, since noise generally decreases with the
square root of the number of averages taken.
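This square-root rule is easy to check numerically; the snippet below is purely illustrative, using synthetic white noise rather than LDV data.
%Numerical check of the 1/sqrt(N) averaging rule (synthetic noise only).
rng(0);
nAvg = [1 4 16 64 256]; %numbers of averages to test
sigma = zeros(size(nAvg));
for k = 1:numel(nAvg)
    noisy = randn(nAvg(k), 10000); %nAvg(k) repeats of a noise-only record
    sigma(k) = std(mean(noisy, 1)); %residual noise after averaging
end
disp([nAvg(:), sigma(:), 1./sqrt(nAvg(:))]); %measured vs. predicted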
Another important factor to consider is the sample mounting. Poorly mounted
samples can lead to poor actuation of the sample. The sample shown in Fig. 1.1
has the elastomer bonded with an acrylic frame, which helps to ensure more
uniform actuation. Especially in mechanical actuation (e.g. actuation of more
solid objects, like plastic or metal plates), the contact between the
vibration exciter, the stinger, and the sample must be snug.
The signal strength value from the LDV system can also be used to filter out
low signal sampling locations. In many cases, signal strength varies
arbitrarily due to surface roughness effects, and often a region of good
signal strength may be found infinitesimally close to a spot with poor signal
strength. Therefore, for these scenarios, the LabVIEW code could also be
modified to step a small amount in the X or Y direction upon measurement of a
poor signal location. If these approaches are not feasible for a particular
use case, then the data can also be spatially filtered to smooth out the
figure surface.
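One possible post-processing sketch of both ideas, masking low-signal pixels and then spatially smoothing the result, is shown below; the signal-strength and mode-shape maps are synthetic placeholders, and the 0.05 cutoff mirrors the signalCutoff variable in Appendix B.
%Mask low-signal pixels, then smooth the mode shape with a moving mean.
sigStrength = rand(50, 50); %placeholder signal-strength map
modeShape = peaks(50); %placeholder mode-shape data
mask = sigStrength < 0.05; %low-signal locations (cf. signalCutoff)
modeShape(mask) = NaN; %drop unreliable pixels
smoothed = movmean(movmean(modeShape, 3, 1, 'omitnan'), 3, 2, 'omitnan');
imagesc(smoothed); axis image; colorbar;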
## Chapter 6 Conclusion
To summarize the process described in this work, a sample is vibrated using an
arbitrary waveform played through a speaker. A 2D linear stage is used to move
a single-point LDV, which takes the velocity time series of the sample’s
surface. The time series is saved to individual text files, and the FFT is
taken of each file. For each frequency bin in the FFT, the sampling position
is used to assign its position in a 2D mesh. The mesh is then plotted either
as a surface plot or as a color plot, resulting in a mode shape plot. By using
parallel processing, this process is sped up significantly, resulting in large
time savings which increase with the complexity and size of the analysis. The
modular nature of this setup also allows for modification and therefore
versatility for different use cases and budgets.
In addition to the time savings and low cost, the automation and remote
operation of its sub components makes this setup even more appealing in times
where lab access is restricted. As such, this report serves to document the
technical process used to develop the measurement system to accelerate the
development of similar setups and preserve institutional knowledge.
The extent of remote operation can be further improved from this setup by
employing a third-axis translation stage (in the direction of the LDV scanning
beam). For optimal signal strength, the LDV requires the sample to be located
at discrete stand-off distances from the beam output location. The third stage can
help achieve this remotely, and eliminate the need for in-person adjustment of
the sample and/or LDV. Furthermore, when samples have a curvature, thereby
varying that distance during a scan, the third axis translation stage can be
used to dynamically adjust the LDV separation based on each sampling location
to provide the best signal strength throughout the sample.
The high level of detail and discussion of the code used are included to help
unfamiliar engineers get up to speed quickly. The LabVIEW and MATLAB
codes used are shown in the appendices below, to give further guidance.
###### Acknowledgements.
The LabVIEW code displayed here was built upon work performed by Dr. Michael
Nicholas (NRL, Code 7165). We thank Wissman et al. for permission to reuse
their figure. We also thank Dr. Matthew Guild (NRL, Code 7165) and Dr. David
Calvo (NRL, Code 7165) for their time spent reviewing this manuscript.
## References
* Wissman et al. (2019a) J. Wissman, K. Sampath, A. Ikei, K. B. Özütemiz, C. Majidi, and C. A. Rohde, “Soft-matter pressure sensors for turbulence detection,” Proceedings of the Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2019, volume 10970 (International Society for Optics and Photonics), 2019a, p. 109702D.
* Wissman et al. (2019b) J. P. Wissman, K. Sampath, A. Ikei, K. B. Özütemiz, M. Carmel, and C. Rohde, “Liquid metal-based resistive membranes for flow acoustics detection,” _The Journal of the Acoustical Society of America_ 146(4), 2997–2997 (2019b).
## Appendix A LabVIEW Data Acquisition Code
Figure A.1: Communication flow chart between systems running LabVIEW code.
Figure A.2: Organization of LabVIEW files. Opening the ".lvproj" file opens
this LabVIEW project window. Figure A.3: FPGA code, located under cRIO
$\rightarrow$ Chassis $\rightarrow$ FPGA Target. This code reads the digital
input and moves to the second panel after receiving a rising edge. The second
panel contains a for loop that collects the analog voltage signal and passes
the value into the DMA FIFO at specifically timed intervals. If the FIFO
overflows, it sets a boolean indicator to true. The signal strength of the LDV
is also recorded as a single value. The controls can be adjusted from the RT
controller code. Figure A.4: Selected portion of the Real-Time Controller
Code, located under cRIO. This code runs the FPGA code, then reads the LDV
strength into a shared variable. The DMA FIFO for each channel is queried. If
there are elements remaining, they are read and passed out of the while loop
into the for loop. The for loop continues until the number of averages taken
is complete. The data is then averaged and passed into shared variables and
the front panel display. Figure A.5: First part of the PC code, located under
My Computer. This section initializes the 2D stage, so that it has an origin
to reference. The waveform generator is also loaded with the arbitrary
waveform. Figure A.6: Second part of the PC code, located under My Computer.
This section generates the raster grid based on user input, and then moves the
stage, iterating through nested for loops. Figure A.7: Next important part of
the PC code, located under My Computer. This section reads the data stored in
the shared variables and saves them to text files. The precision is set by the
format string "%.6f".
## Appendix B MATLAB Parallel Processing 2D LDV Code
%% Attribution
%Code written by Alec Ikei and Kaushik Sampath, November 2020
%Included in 2D LDV US Naval Research Laboratory Memorandum Report
%% Read Data and Parallel Calculate FFT
% get directory list
clc; clear; close all;
folder = uigetdir(); %prompts user to select folder containing data
fileList = dir(fullfile(folder, '*.tsv')); %only lists .tsv files
fileNames = strcat(folder,'\',{fileList.name}); %extract the names of the files and
%put them into a n x 1 cell array
%User inputs
samplingRate = 100000; %sampling rate used in Hz
padMultiplier = 1; %number of TS lengths of 0s to pad
minFreq = 100; maxFreq = 20000; %set frequency limits for slicing
signalCutoff = 0.05;
%end of user inputs
x = zeros(length(fileNames),1); y = x; z = x; badpxList = x; %length(a)=number of files
badpx = 0; %# of low signal str pixels
%calculate the frequency vector for just one timeseries
velocityArray = importVelocity(fileNames{1}); %get LDV data for particular
%spatial location
localfft = func_SSA(velocityArray,padMultiplier); %calculate FFT vector
numofBins = length(velocityArray)/2*padMultiplier; %calculate # frequency bins
localfreqBinSize = samplingRate*0.5/numofBins; %calculate bin width
freqArray = 0:localfreqBinSize:samplingRate/2; %generate frequency array
%locate index of max frequency
dist = abs(freqArray - maxFreq);
minDist = min(dist);
maxFreqidx = find(dist == minDist);
%locate index of min frequency
dist = abs(freqArray - minFreq);
minDist = min(dist);
minFreqidx = find(dist == minDist);
%Pre-allocate variable for FFT array
fftArray = zeros(length(localfft(minFreqidx:maxFreqidx)),length(fileNames));
    %for FFT in columns
parfor m = 1:length(fileNames) %parallel computation of FFTs
    m %shows current iteration in command window
    indexX = regexp(fileNames{m},'X_');
    indexY = regexp(fileNames{m},'Y2_');
    indexS = regexp(fileNames{m},'Strength');
    indexP = regexp(fileNames{m},'.tsv'); %location from filename
    if str2double(fileNames{m}(indexS+8:indexP-1)) > signalCutoff
        % only use data if signal strength is good
        x(m) = str2double(fileNames{m}(indexX+2:indexY-1));
        % X coordinate from filename
        y(m) = str2double(fileNames{m}(indexY+3:indexS-1));
        % Y2 coordinate
        velocityArray = importVelocity(fileNames{m});
        %LDV data for particular spatial location
        localfft = func_SSA(velocityArray,padMultiplier);
        %calculate FFT
        fftArray(:,m) = localfft(minFreqidx:maxFreqidx);
        %place relevant FFT data in columns
    else
        badpx = badpx+1; %count number of excluded pixels
        badpxList(m) = 1; %list bad pixels
    end
end
x(badpxList>0) = []; y(badpxList>0) = []; fftArray(:,badpxList>0) = []; %remove bad pixels
fftArrayMax = max(max(abs(fftArray))); %global maximum amplitude
disp('done reading data');
%% Create color plot of each slice and save as .png
pngfolder = strcat(folder,'\pngs'); figfolder = strcat(folder,'\figs');
%folders for images
mkdir(pngfolder); mkdir(figfolder);
xx = unique(x,'rows'); yy = unique(y,'rows'); %remove duplicate coordinates
%creates a 2d set of points to assign the sampNormPlt values to
[X,Y] = meshgrid(min(xx):xx(2)-xx(1):max(xx),min(yy):yy(2)-yy(1):max(yy));
%pre-allocate space to save all surface data
surfaceData = zeros(length(minFreqidx:maxFreqidx),length(yy),length(xx));
currentFreq = minFreqidx:maxFreqidx;
parfor k = 1:length(currentFreq) %must start at 1, since parfor
    f = scatteredInterpolant(x,y,fftArray(currentFreq(k),:)');
    %create function from data
    Z = f(X,Y) - mean(f(X,Y)); %create surface values from function, minus DC offset
    surf(X,Y,real(Z),'LineStyle','none'); %create surface plot
    view(2); %flat view
    imageName = strcat(num2str(freqArray(currentFreq(k))), ' Hz');
    print('-dpng','-r75',strcat(pngfolder,'\',imageName));
    %convert current fig to png
    saveas(gcf, strcat(figfolder,'\',imageName));
    %saves the current figure as Matlab file
end
## Appendix C MATLAB Custom Function: func_SSA
This function takes the single sided amplitude complex Fourier transform. It
also can zero pad the timeseries input, to change the frequency binning. Used
in the parallel processing 2D LDV code.
function [SSA,Phase] = func_SSA(X,k)
%k=1; changed from kaushiks code
% Sampling Length
L = length(X);
% Pad with a length of k times L, centering the timeseries in the padded vector
Lpad = k*L; X1 = zeros(Lpad,1);
X1(Lpad/2-L/2+1:Lpad/2+L/2) = X;
X = X1;
% Compute FFT
FFT1 = fft(X);
Phase = imag(FFT1); % imaginary part of the FFT (note: not the unwrapped phase angle)
% Single Sided Amplitude Spectrum
FFT2 = FFT1/Lpad; % normalize by the padded length
SSA = FFT2(1:Lpad/2+1,:); % keep the non-negative frequency bins
SSA(2:end-1,:) = 2*SSA(2:end-1,:); % double interior bins to conserve energy
SSA = SSA/(L/Lpad); % rescale to compensate for the zero padding
## Appendix D MATLAB Custom Function: importVelocity
This function reads the text data file and converts it into a vector. Used in
the parallel processing 2D LDV code.
function velocityArray = importVelocity(filename, dataLines)
if nargin < 2
dataLines = [1, Inf];
end
%% Setup the Import Options
opts = delimitedTextImportOptions("NumVariables", 2);
% Specify range and delimiter
opts.DataLines = dataLines;
opts.Delimiter = "\t";
% Specify column names and types
opts.VariableNames = ["VarName1", "Var2"];
opts.SelectedVariableNames = "VarName1";
opts.VariableTypes = ["double", "string"];
opts = setvaropts(opts, 2, "WhitespaceRule", "preserve");
opts = setvaropts(opts, 2, "EmptyFieldRule", "auto");
opts.ExtraColumnsRule = "ignore";
opts.EmptyLineRule = "read";
% Import the data
tbl = readtable(filename, opts);
%% Convert to output type
velocityArray = tbl.VarName1;
end
|
# ORIENT: A Priority-Aware Energy-Efficient Approach for Latency-Sensitive
Applications in 6G
Masoud Shokrnezhad1, and Tarik Taleb1, 2
1Oulu University, Oulu, Finland; {masoud.shokrnezhad<EMAIL_ADDRESS>
2Ruhr University Bochum, Bochum, Germany<EMAIL_ADDRESS>
###### Abstract
Anticipation for 6G’s arrival comes with growing concerns about increased
energy consumption in computing and networking. The expected surge in
connected devices and resource-demanding applications presents unprecedented
challenges for energy resources. While sustainable resource allocation
strategies have been discussed in the past, these efforts have primarily
focused on single-domain orchestration or ignored the unique requirements
posed by 6G. To address this gap, we investigate the joint problem of service
instance placement and assignment, path selection, and request prioritization,
dubbed PIRA. The objective function is to maximize the system’s overall profit
as a function of the number of concurrently supported requests while
simultaneously minimizing energy consumption over an extended period of time.
In addition, end-to-end latency requirements and resource capacity constraints
are considered for computing and networking resources, where queuing theory is
utilized to estimate the Age of Information (AoI) for requests. After
formulating the problem in a non-linear fashion, we prove its NP-hardness and
propose a method, denoted ORIENT. This method is based on the Double Dueling
Deep Q-Learning (D3QL) mechanism and leverages Graph Neural Networks (GNNs)
for state encoding. Extensive numerical simulations demonstrate that ORIENT
yields near-optimal solutions for varying system sizes and request counts.
###### Index Terms:
6G, Resource Allocation, Energy Consumption, Service Placement and Assignment,
Path Selection, Prioritization, E2E Latency, Age of Information (AoI),
Reinforcement Learning, Q-Learning, and Graph Neural Networks (GNNs).
## I Introduction
The advent of the 6th generation of telecommunication systems (6G) signifies a
pivotal era marked by unparalleled connectivity and technological
advancements. With ultra-low End-to-End (E2E) latency (less than $1$
millisecond), exceeding $1$ terabit per second peak data rates, and ultra-high
reliability surpassing $99.99999\%$ [1], 6G promises to revolutionize
industries such as holographic telepresence utilizing extended reality [2],
dynamic metaverse empowered by semantic communications [3], and quantum
networking [4]. However, achieving these capabilities raises substantial
energy consumption concerns for both computing and networking resources.
Presently, these resources consume around $200$ terawatt-hours of electricity
annually, approximately $1\%$ of the global total [5]. Many quality-sensitive
applications may require uploading up to $50\%$ of data to computing
facilities for processing [6], adding even more strain to computing and
networking resources. Moreover, the projected surge in 6G-connected devices
and global data exacerbates the energy consumption challenge, underscoring the
need for sustainable solutions.
In order to realize a 6G-enabled future, it may be necessary to create novel
resource orchestration mechanisms to address impending energy challenges. The
subject has been extensively studied in the literature. Xuan et al. [7]
addressed the Service Function Chaining (SFC) problem with the objective of
minimizing energy consumption by proposing an algorithm based on multi-agent
Reinforcement Learning (RL) and a self-adaptive division strategy. Solozabal
et al. [8] investigated the same problem and proposed a single-agent solution.
Other authors have also examined the SFC problem. By proposing a sampling-
based Markov approximation method, Pham et al. [9] solved the problem in an
effort to minimize operational and traffic energy consumption. Santos et al.
[10] developed two policy-aware RL algorithms based on actor-critic and
proximal policy optimization to maximize availability while minimizing energy
consumption. Reducing energy consumption was considered in the Service
Function (SF) placement problem as well. Sasan et al. [11] presented a
heuristic algorithm to tackle the joint problem of network slicing, path
selection, and SF placement, with the objective of maximizing user acceptance
while minimizing energy consumption. Farhoudi [12] and He et al. [13]
investigated a comparable problem and proposed RL-based solutions, taking into
account the dynamic nature of service requests and overall cost considerations
(including operation, deployment, and transmission), respectively.
While effective in specific contexts, the mentioned methods may not be
suitable for 6G systems. These approaches prioritize energy efficiency over
maximizing device support, whereas achieving an E2E efficient solution
requires holistic management of computing and networking resources,
considering the stringent Quality of Service (QoS) demands of 6G. Furthermore,
certain studies overlook or oversimplify critical network parameters like
latency, which contradicts the intricate and dynamic requirements of 6G
systems. This paper addresses this gap by investigating the joint problem of
allocating computing and networking resources (service instance placement and
assignment, path selection, and request prioritization), termed PIRA. The
objective is to optimize the system’s overall profit (as a function of
supported concurrent requests) while minimizing energy consumption over time,
accounting for E2E latency and resource capacity constraints. The $M/M/1$
queuing model is employed to accurately evaluate request latency on compute
nodes and network devices. To solve this problem, we propose ORIENT, an
approach leveraging Double Dueling Deep Q-Learning (D3QL) reinforced by Graph
Neural Networks (GNNs). This hybrid method effectively encodes the system
state and facilitates the identification of near-optimal solutions.
The remainder of this paper is organized as follows. Section II introduces the
system model. PIRA is defined and formulated in Section III, and ORIENT is
presented in Section IV. Finally, numerical results are illustrated in Section
V, followed by concluding remarks in Section VI.
## II System Model
As shown in Fig. 1, the following is an explanation of the two main components
of the system: resources and requests.
Figure 1: The system model, including network devices and distributed compute
nodes facilitating holographic telepresence services for end users.
### II-A Resources
The 6G system examined in this paper is an integrated infrastructure of
computing and networking resources comprised of $\mathcal{N}$ network devices
and $\mathcal{V}$ compute nodes (radio resources are excluded) [14].
$\mathbb{N}=\\{n|0\leq n\leq\mathcal{N}\\}$ is the set of network devices, and
$\mathbb{V}=\\{v|0\leq v\leq\mathcal{V}\\}$ denotes the set of compute nodes.
Compute nodes are connected through network devices via $\mathcal{P}$ paths,
the set of which is denoted by $\mathbb{P}=\\{p|0\leq p\leq\mathcal{P}\\}$,
and the immediate network device of compute node $v$ is indicated by $n_{v}$.
Each path $p$ contains a set of links $\mathbb{L}_{p}\subset\mathbb{L}$, where
$\mathbb{L}=\\{l:(n,n^{\prime})|n,n^{\prime}\in\mathbb{N}\\}$ is the set of
all network links, and $\mathcal{L}$ is its size. Network devices and compute
nodes are priority-aware, i.e., $\mathbb{K}=\\{k|0\leq k\leq\mathcal{K}\\}$ is
regarded as the set of permissible priority levels (where lower levels
indicate higher priorities), and the resources in both domains are virtually
partitioned, isolated, and guaranteed for each priority level $k$. Note that
higher priorities receive a larger share of available resources than lower
priorities.
To evaluate the performance of allocated resources, we will employ the $M/M/1$
queuing model for each priority level on network devices and compute nodes
assuming that this theory’s stability requirements are met and that all queues
are independent. The service rate allocated to priority level $k$ on network
device $n$ is $\widehat{\mathcal{B}_{n,k}}$, and those packets leaving this
queue will be forwarded through their corresponding link, let’s call it $l$,
allocating $\widehat{\mathcal{B}_{l,k}}$ bandwidth. Note that the overall
capacity of this network device and link is constrained by
$\widehat{\mathcal{B}_{n}}$ and
$\widehat{\mathcal{B}_{l}}$, respectively. Similarly, the requests of priority level $k$ will be served on
compute node $v$ leveraging a queue with dedicated service rate
$\widehat{\mathcal{C}_{v,k}}$, and this node is equipped with computing
resources limited to a predefined capacity threshold, dubbed
$\widehat{\mathcal{C}_{v}}$. In addition, the energy consumption for
transmitting bandwidth units over network device $n$ is
$\widetilde{\mathcal{E}_{n}}$, and compute node $v$ consumes
$\widetilde{\mathcal{E}_{v}}$ energy per capacity unit and
$\overline{\mathcal{E}_{v}}$ energy when its state changes (from the idle mode
to the operation mode or vice versa).
### II-B Requests
This paper investigates the system for $\mathcal{T}$ time slots while a set of
$\mathcal{R}_{t}$ requests, denoted by $\mathbb{R}_{t}=\\{r|0\leq
r\leq\mathcal{R}_{t}\\}$, arrives at time slot $t\in\mathbb{T}=\\{t|1\leq
t\leq\mathcal{T}\\}$. The set of all requests is
$\mathbb{R}=\\{\mathbb{R}_{t}|1\leq t\leq\mathcal{T}\\}$, and $\mathcal{R}$
represents the number of all requests. Each request $r$ enters the system
through an edge network device, denoted by $n_{r}$ and referred to as its
Point of Arrival (PoA), and orders service $s_{r}$ from the set of obtainable
service instances, that is $\mathbb{I}=\\{i|0\leq i\leq\mathcal{I}\\}$, where
instance $i$ provides service $s_{i}$. In order to successfully fulfill each
request, one instance of its target service must be replicated on one of the
compute nodes in order to receive the request, process it, and return it to
its entry point so that it can be delivered to the end user.
$\widehat{\mathcal{C}_{i}}$ represents the maximum capacity of instance $i$. To fulfill each user's
request, its QoS requirements must be met, including the minimum service
capacity and network bandwidth, as well as the maximum tolerable E2E latency,
denoted by
$\widecheck{\mathcal{C}_{r}}$, $\widecheck{\mathcal{B}_{r}}$, and $\widecheck{\mathcal{D}_{r}}$,
respectively. Besides, the maximum permissible packet size for request $r$ is
$\widehat{\mathcal{H}_{r}}$. If request $r$ is successfully completed, the
system will achieve a profit, that is $\gamma_{r}$. Note that the arrival rate
for each queue is determined by a Poisson process and is assumed to be the sum
of
$\widecheck{\mathcal{C}_{r}}$ (for compute queues) and $\widecheck{\mathcal{B}_{r}}$ (for network queues)
for all requests assigned to that queue, respectively.
## III Problem Definition
This section discusses the joint problem of instance placement and assignment,
request prioritization, and path selection for integrated compute-network
infrastructures to maximize the overall profit of the system while minimizing
its energy consumption. In this section, the constraints and objective
function are formulated, followed by the problem statement as a Mixed-Integer
Non-Linear Programming (MINLP) formulation and its complexity analysis.
### III-A Instance Orchestration Constraints
Constraints C1-C6 assign requests to instances and place them on compute nodes
while maintaining the capacity constraints of instances and compute nodes.
Considering that $\ddot{\mathcal{I}}^{t}_{r,i}$ is a binary variable whose
value is $1$ if request $r$ is assigned to instance $i$ at time slot $t$, C1
ensures that each request $r$ is assigned to no more than one instance of its
service for each time slot $t$. C2 defines a new binary variable,
$\dot{\mathcal{I}}^{t}_{i}$, which indicates that whether instance $i$ is
activated at time slot $t$. If
$\sum_{\mathbb{R}_{t}}\ddot{\mathcal{I}}^{t}_{r,i}$ is equal to or greater
than $1$ (i.e., at least one request is assigned to instance $i$),
$(\sum_{\mathbb{R}_{t}}\ddot{\mathcal{I}}^{t}_{r,i})/\mathcal{R}_{t}$ will be
a small number (between $0$ and $1$) and
$\sum_{\mathbb{R}_{t}}\ddot{\mathcal{I}}^{t}_{r,i}$ will be a large number, so
$\dot{\mathcal{I}}^{t}_{i}$ will be set to $1$. Otherwise, both sides of the
equation will equal $0$, causing $\dot{\mathcal{I}}^{t}_{i}$ to also equal
$0$. C3 ensures that each activated instance is assigned to exactly one
compute node, where $\ddot{\mathcal{G}}^{t}_{i,v}$ is a binary variable
indicating the compute node of instance $i$ at time slot $t$. Similar to C2,
C4 defines $\dot{\mathcal{G}}^{t}_{v}$ as a binary variable to determine
whether compute node $v$ should be activated at time slot $t$. Then, it must
be assured that assigned requests do not exceed the capacity limitations of
instances and compute nodes (C5 and C6).
$\displaystyle\sum\nolimits_{\mathbb{I}|s_{i}=s_{r}}\ddot{\mathcal{I}}^{t}_{r,i}\leq 1\quad\forall t,r\in\mathbb{T},\mathbb{R}_{t}$ (C1)
$\displaystyle\frac{1}{\mathcal{R}_{t}}\cdot\sum\nolimits_{\mathbb{R}_{t}}\ddot{\mathcal{I}}^{t}_{r,i}\leq\dot{\mathcal{I}}^{t}_{i}\leq\sum\nolimits_{\mathbb{R}_{t}}\ddot{\mathcal{I}}^{t}_{r,i}\quad\forall t,i\in\mathbb{T},\mathbb{I}$ (C2)
$\displaystyle\sum\nolimits_{\mathbb{V}}\ddot{\mathcal{G}}^{t}_{i,v}=\dot{\mathcal{I}}^{t}_{i}\quad\forall t,i\in\mathbb{T},\mathbb{I}$ (C3)
$\displaystyle\frac{1}{\mathcal{I}}\cdot\sum\nolimits_{\mathbb{I}}\ddot{\mathcal{G}}^{t}_{i,v}\leq\dot{\mathcal{G}}^{t}_{v}\leq\sum\nolimits_{\mathbb{I}}\ddot{\mathcal{G}}^{t}_{i,v}\quad\forall t,v\in\mathbb{T},\mathbb{V}$ (C4)
$\displaystyle\sum\nolimits_{\mathbb{R}_{t}}\widecheck{\mathcal{C}_{r}}\cdot\ddot{\mathcal{I}}^{t}_{r,i}\leq\widehat{\mathcal{C}_{i}}\quad\forall t,i\in\mathbb{T},\mathbb{I}$ (C5)
$\displaystyle\sum\nolimits_{\mathbb{I}}\widehat{\mathcal{C}_{i}}\cdot\ddot{\mathcal{G}}^{t}_{i,v}\leq\widehat{\mathcal{C}_{v}}\quad\forall v,t\in\mathbb{V},\mathbb{T}$ (C6)
### III-B Path Selection Constraints
Constraints C7-C9 ensure that an E2E path is selected for each request
considering the capacity constraints of network links and the traffic pattern,
where packets of each request enter the network through its PoA and, after
visiting its assigned instance, are returned to the same PoA to be handed off
to the corresponding end user. C7 determines the allocated path for each
request $r$, ensuring that it originates and terminates at $n_{r}$ and
traverses the network device directly connected to the compute node hosting
the instance assigned to the request. In this constraint, $\ddot{f}^{t}_{r,p}$
is a binary variable that represents the assigned path of request $r$ at time
slot $t$. Finally, C8 and C9 maintain the maximum capacity of network links
and devices.
$\displaystyle\sum\nolimits_{\mathbb{P}|n_{r}\&n_{v}\in p}\ddot{f}^{t}_{r,p}=\ddot{\mathcal{I}}^{t}_{r,i}\cdot\ddot{\mathcal{G}}^{t}_{i,v}\quad\forall t,r,i,v\in\mathbb{T},\mathbb{R},\mathbb{I},\mathbb{V}$ (C7)
$\displaystyle\sum\nolimits_{\mathbb{R}_{t},\mathbb{P}|l\in\mathbb{L}_{p}}\widecheck{\mathcal{B}_{r}}\cdot\ddot{f}^{t}_{r,p}\leq\widehat{\mathcal{B}_{l}}\quad\forall t,l\in\mathbb{T},\mathbb{L}$ (C8)
$\displaystyle\sum\nolimits_{\mathbb{R}_{t},\mathbb{P}|n\in\mathbb{L}_{p}}\widecheck{\mathcal{B}_{r}}\cdot\ddot{f}^{t}_{r,p}\leq\widehat{\mathcal{B}_{n}}\quad\forall t,n\in\mathbb{T},\mathbb{N}$ (C9)
### III-C Request Prioritization Constraints
$\displaystyle\sum\nolimits_{\mathbb{K}}\ddot{\varrho}^{t}_{r,k}=\sum\nolimits_{\mathbb{I}}\ddot{\mathcal{I}}^{t}_{r,i}\quad\forall t,r\in\mathbb{T},\mathbb{R}_{t}$ (C10)
$\displaystyle\sum\nolimits_{\mathbb{R}_{t},\mathbb{I}}\widecheck{\mathcal{C}_{r}}\cdot\ddot{\varrho}^{t}_{r,k}\cdot\ddot{\mathcal{I}}^{t}_{r,i}\cdot\ddot{\mathcal{G}}^{t}_{i,v}<\widehat{\mathcal{C}_{v,k}}\quad\forall t,k,v\in\mathbb{T},\mathbb{K},\mathbb{V}$ (C11)
$\displaystyle\sum\nolimits_{\mathbb{R}_{t},\mathbb{P}|l\in\mathbb{L}_{p}}\widecheck{\mathcal{B}_{r}}\cdot\ddot{\varrho}^{t}_{r,k}\cdot\ddot{f}^{t}_{r,p}<\widehat{\mathcal{B}_{l,k}}\quad\forall t,k,l\in\mathbb{T},\mathbb{K},\mathbb{L}$ (C12)
$\displaystyle\sum\nolimits_{\mathbb{R}_{t},\mathbb{P}|n\in\mathbb{L}_{p}}\widecheck{\mathcal{B}_{r}}\cdot\ddot{\varrho}^{t}_{r,k}\cdot\ddot{f}^{t}_{r,p}<\widehat{\mathcal{B}_{n,k}}\quad\forall t,k,n\in\mathbb{T},\mathbb{K},\mathbb{N}$ (C13)
To maintain integrity, it’s crucial to prevent any overuse of resources
allocated to each priority level. Given that $\ddot{\varrho}^{t}_{r,k}$ is the
priority of request $r$ at time slot $t$, C10 promises that the request’s
priority is determined if an instance is assigned to serve it. Then, C11 to
C13 satisfy the capacity constraints of priority queues on compute nodes and
network resources.
### III-D Latency Constraints
Each packet has to wait for three sources of latency through its request’s
assigned E2E path in the system: 1) the service latency experienced at the
network devices included in the path, 2) the transmission latency over the
network links of the path, and 3) the service latency at the assigned compute
node. Since the average latency of a packet in a $M/M/1$ queue is equal to
$1/(\mu-\lambda)$ when the arrival rate is $\lambda$ and the service rate is
$\mu$, the average latency experienced by the packets of request $r$ at
network device $n$ allocated to priority level $k$ during time slot $t$ can be
calculated as C14. In this constraint, the numerator will be $0$ for network
devices and priority levels that have not been allocated to request $r$,
causing $\ddot{\mathcal{D}}^{t}_{r,n,k}$ to equal $0$. Otherwise, the
numerator will be $1$, and the latency will be determined following the
adopted queuing theorem with the arrival rate of the queue set to the overall
bandwidth of requests assigned to priority level $k$ and traversing network
device $n$. C15 follows the same logic to calculate the average latency of
request $r$ allocated to priority level $k$ at time slot $t$ on compute node
$v$. C16 calculates the transmission latency of request $r$ over link $l$ at
time slot $t$, considering its priority level and maximum packet size, if the
link is part of the path assigned to the request. Otherwise, the latency will
be $0$. Finally, C17 determines the Age of Information (AoI), followed by C18,
which ensures the maximum acceptable latency requirement of requests.
$\displaystyle\ddot{\mathcal{D}}^{t}_{r,n,k}=\frac{\sum\nolimits_{\mathbb{P}|n\in\mathbb{L}_{p}}\ddot{\varrho}^{t}_{r,k}\cdot\ddot{f}^{t}_{r,p}}{\widehat{\mathcal{B}_{n,k}}-\sum\nolimits_{\mathbb{R}_{t},\mathbb{P}|n\in\mathbb{L}_{p}}\widecheck{\mathcal{B}_{r^{\prime}}}\cdot\ddot{\varrho}^{t}_{r^{\prime},k}\cdot\ddot{f}^{t}_{r^{\prime},p}}\quad\forall t,r,k,n\in\mathbb{T},\mathbb{R}_{t},\mathbb{K},\mathbb{N}$ (C14)
$\displaystyle\ddot{\mathcal{D}}^{t}_{r,v,k}=\frac{\sum\nolimits_{\mathbb{I}}\ddot{\varrho}^{t}_{r,k}\cdot\ddot{\mathcal{I}}^{t}_{r,i}\cdot\ddot{\mathcal{G}}^{t}_{i,v}}{\widehat{\mathcal{C}_{v,k}}-\sum\nolimits_{\mathbb{R}_{t},\mathbb{I}}\widecheck{\mathcal{C}_{r^{\prime}}}\cdot\ddot{\mathcal{I}}^{t}_{r^{\prime},i}\cdot\ddot{\mathcal{G}}^{t}_{i,v}}\quad\forall t,r,k,v\in\mathbb{T},\mathbb{R}_{t},\mathbb{K},\mathbb{V}$ (C15)
$\displaystyle\ddot{\mathcal{D}}^{t}_{r,l,k}=\frac{\widehat{\mathcal{H}_{r}}}{\widehat{\mathcal{B}_{l,k}}}\cdot\sum\nolimits_{\mathbb{P}|l\in\mathbb{L}_{p}}\ddot{\varrho}^{t}_{r,k}\cdot\ddot{f}^{t}_{r,p}\quad\forall t,r,l,k\in\mathbb{T},\mathbb{R}_{t},\mathbb{L},\mathbb{K}$ (C16)
$\displaystyle\ddot{\mathcal{D}}^{t}_{r}=\sum\nolimits_{\mathbb{N},\mathbb{K},\mathbb{V},\mathbb{L}}(\ddot{\mathcal{D}}^{t}_{r,n,k}+\ddot{\mathcal{D}}^{t}_{r,v,k}+\ddot{\mathcal{D}}^{t}_{r,l,k})\quad\forall t,r\in\mathbb{T},\mathbb{R}_{t}$ (C17)
$\displaystyle\ddot{\mathcal{D}}^{t}_{r}\leq\widecheck{\mathcal{D}_{r}}\quad\forall t,r\in\mathbb{T},\mathbb{R}_{t}$ (C18)
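As a quick, hedged numerical illustration of C14-C17 (all values below are illustrative and not taken from the paper), the per-queue $M/M/1$ terms can be evaluated directly:
% Illustrative M/M/1 latency terms for one request on a one-hop path
mu_n = 250; lambda_n = 120; %device-n service/arrival rates at level k (C14)
mu_v = 280; lambda_v = 100; %compute-node-v rates at level k (C15)
H_r = 1; B_lk = 250; %packet size and link-l bandwidth at level k (C16)
D_n = 1/(mu_n - lambda_n); %device latency; queue assumed stable
D_v = 1/(mu_v - lambda_v); %compute latency
D_l = H_r/B_lk; %transmission latency
AoI = D_n + D_v + D_l; %C17: end-to-end Age of Information
fprintf('AoI = %.4f time units\n', AoI);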
### III-E Objective Function
The objective is to maximize the overall profit while minimizing the energy
consumption of resources, that is:
$\displaystyle\sum\nolimits_{\mathbb{T},\mathbb{R}_{t}}(\gamma_{r}\cdot\sum\nolimits_{\mathbb{I}}\ddot{\mathcal{I}}^{t}_{r,i})-\alpha\cdot(\sum\nolimits_{\mathbb{N}}\ddot{\mathcal{E}}_{n}+\sum\nolimits_{\mathbb{V}}\ddot{\mathcal{E}}_{v}),$ (OF)
where $\sum_{\mathbb{I}}\ddot{\mathcal{I}}^{t}_{r,i}$ is $1$ if request $r$ is
supported at time slot $t$, $\alpha$ is a small positive number, and
$\ddot{\mathcal{E}}_{v}$ and $\ddot{\mathcal{E}}_{n}$ represent, respectively,
the total energy consumption of compute node $v$ and network device $n$. Note
that $\alpha$ must be set such that the total profit exceeds the total amount
of energy consumed. Otherwise, supporting requests would result in a negative
objective function value, and the only optimal solution would be to support no
requests, making OF equal to $0$. To determine $\ddot{\mathcal{E}}_{n}$, where
the only source of energy consumption is transmitting requests' data, the
following equation is employed:
$\displaystyle\ddot{\mathcal{E}}_{n}=\widetilde{\mathcal{E}_{n}}\cdot\sum\nolimits_{\mathbb{T},\mathbb{R}_{t},\mathbb{P}|n\in\mathbb{L}_{p}}\widecheck{\mathcal{B}_{r}}\cdot\ddot{f}^{t}_{r,p}$
(1)
To calculate $\ddot{\mathcal{E}}_{v}$, it should be noted that the energy
consumed on each compute node has two primary sources: 1) the energy consumed
to service each unit of requests’ data, and 2) the energy consumed during
booting up or shutting down the compute node. Consequently,
$\ddot{\mathcal{E}}_{v}$ for each $v\in\mathbb{V}$ is:
$\displaystyle\widetilde{\mathcal{E}_{v}}\cdot\sum\nolimits_{\mathbb{T},\mathbb{R}_{t},\mathbb{I}}\widecheck{\mathcal{C}_{r}}\cdot\ddot{\mathcal{I}}^{t}_{r,i}\cdot\ddot{\mathcal{G}}^{t}_{i,v}+\overline{\mathcal{E}_{v}}\cdot\sum\nolimits_{\mathbb{T}|0\leq t<\mathcal{T}}\dot{\mathcal{G}}^{t}_{v}\oplus\dot{\mathcal{G}}^{t+1}_{v}$
(2)
In this equation,
$\ddot{\mathcal{I}}^{t}_{r,i}\cdot\ddot{\mathcal{G}}^{t}_{i,v}$ equals $1$ if
compute node $v$ is selected as the host for request $r$; therefore, the first
part of the equation calculates the total energy consumption of compute node
$v$ to service its assigned requests. Assuming $\dot{\mathcal{G}}^{0}_{v}$
equals $0$, $\dot{\mathcal{G}}^{t}_{v}\oplus\dot{\mathcal{G}}^{t+1}_{v}$ is
equal to $1$ in the second part if and only if $\dot{\mathcal{G}}^{t}_{v}$ and
$\dot{\mathcal{G}}^{t+1}_{v}$ differ, representing a boot-up or shutdown for
compute node $v$. Then, the second part demonstrates the total energy
consumption of state transitions in compute node $v$ within $\mathbb{T}$.
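As a small, hedged sketch of this state-transition term (illustrative values only), the XOR sum in (2) simply counts boot-ups and shutdowns:
% Illustrative count of state transitions for compute node v
G = [0 1 1 0 0 1]; %on/off indicators \dot{G}^t_v over T=6 slots
Gfull = [0 G]; %prepend \dot{G}^0_v = 0
transitions = sum(xor(Gfull(1:end-1), Gfull(2:end))); %boot-ups + shutdowns
E_switch = 150 * transitions; %with a hypothetical per-transition cost of 150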
### III-F Problem
Considering the constraints and the objective function, the Problem of
Integrated Resource Allocation (PIRA) is:
$\footnotesize\text{PIRA: }\max\text{ OF}\quad\text{s.t.}\quad\text{C1--C18.}$
(3)
The optimal solution involves assigning requests with stringent latency
requirements to high-priority queues and keeping those queues as empty as
possible to minimize latency. Resource selection should aim to minimize
activated resources and consider future requests to reduce energy consumption
through fewer start-ups and shutdowns, improving energy efficiency.
### III-G Complexity Analysis
The problem defined in (3) is an extended version of the Multi-Dimensional
Knapsack (MDK) problem. Assume the problem is relaxed and reformulated
specifically for time slot $t$ as the problem of maximizing profit and
minimizing energy consumption while the only decision is to assign requests to
instances concerning only their capacity constraints. Since the MDK problem is
NP-hard and this relaxed version is an MDK problem with at least
$\mathcal{R}_{t}$ items and $\mathcal{I}$ knapsacks, PIRA is at least as
difficult as the MDK problem and is also NP-hard.
## IV ORIENT
This section proposes an RL-based priORIty-aware Energy-efficieNt laTency-
sensitive resource allocation approach (ORIENT) to find near-optimal solutions
for PIRA. Subsequently, the learning mechanism is elaborated upon, followed by
an explanation of the agent’s design, and concluding with a description of the
algorithm.
### IV-A Learning Mechanism
Given the continuous operation of the system defined in this paper and the
recurring necessity for consistent resource allocation decisions at each time
slot of PIRA, the adoption of RL presents itself as a viable means to enhance
decision-making proficiency to solve it. Within the framework of RL, an agent
undergoes a process of learning by means of trial and error at each step
(here, time slot), with the primary aim of optimizing a specific decision-
making problem. The system’s designer defines a reward function in alignment
with the objectives of the problem. By learning and following the optimal
strategy derived from this reward function, the agent aims to maximize
cumulative discounted rewards, regardless of the initial state. Among various
RL-based algorithms, Q-Learning stands out as widely acknowledged.
In Q-Learning, every state-action pair is associated with a numeric value
referred to as the Q-value, where the agent selects the action with the
maximum Q-value at each step. In Deep Q-Learning (DQL), a Deep Neural Network
(DNN) serves as the approximator for these Q-values. In this arrangement, the
state and action are presented as inputs, and the DNN-based $Q$-function
encompassing all feasible actions, denoted by
$Q(s,.;\boldsymbol{\mathcal{W}})$, is generated as the output and
systematically updated over time according to the following equation:
$\footnotesize\boldsymbol{\mathcal{W}}^{t+1}=\boldsymbol{\mathcal{W}}^{t}+\sigma[Y^{t}-Q(\boldsymbol{S}^{t},a^{t};\boldsymbol{\mathcal{W}}^{t})]\nabla_{\boldsymbol{\mathcal{W}}^{t}}Q(\boldsymbol{S}^{t},a^{t};\boldsymbol{\mathcal{W}}^{t})$
(4)
In this equation, $\boldsymbol{\mathcal{W}}$ is the set of DNN weights,
$\sigma$ is a scalar step size, $\boldsymbol{S}^{t}$ and $a^{t}$ are the
agent’s state and action at time slot $t$, and $Y^{t}$ (also known as the
target) shows the maximum value expected to be achieved by following $a^{t}$
at $\boldsymbol{S}^{t}$. $Y^{t}$ is the only variable that must be estimated
in this equation, and in Double DQL (DDQL), where the selection and evaluation
processes are decoupled, it can be expressed as follows:
$\footnotesize Y^{t}=r^{t+1}+\gamma\;\widehat{Q}(\boldsymbol{S}^{t+1},a^{\prime};\boldsymbol{\mathcal{W}}^{t-}),$
(5)
where $r^{t+1}$ is the earned reward at time slot $t+1$, $\gamma\in[0,1]$ is a
discount factor that balances the importance of immediate and future rewards,
$a^{\prime}=\text{argmax}_{a\in\boldsymbol{\mathcal{A}}}Q(\boldsymbol{S}^{t+1},a,\boldsymbol{\mathcal{W}}^{t})$,
and $\boldsymbol{\mathcal{A}}$ is the set of actions. In this equation,
$\boldsymbol{\mathcal{W}}$ represents the set of weights for the main $Q$ and
is updated in each step, whereas $\boldsymbol{\mathcal{W}}^{-}$ is for the
target $\widehat{Q}$ and is replaced with the weights of the main network
every fixed number of steps. In other words, $\widehat{Q}$ remains a periodic copy of $Q$.
Furthermore, we augment DDQL by incorporating the dueling concept introduced
by Wang et al. [15]. Unlike conventional DDQL, which directly approximates
Q-values using DNNs, this method initially computes separate estimators for
state values ($\psi$) and action advantages ($\varphi$). Q-values are then
derived from these estimators, as illustrated below:
$\footnotesize Q(\boldsymbol{S}^{t},a^{t},\boldsymbol{\mathcal{W}}^{t})=\psi(\boldsymbol{S}^{t},\boldsymbol{\mathcal{W}}^{t})+\Bigg(\varphi(\boldsymbol{S}^{t},a^{t},\boldsymbol{\mathcal{W}}^{t})-\frac{\Phi}{\left|\boldsymbol{\mathcal{A}}\right|}\Bigg)$
(6)
where
$\Phi=\sum_{\boldsymbol{\mathcal{A}}}\varphi(\boldsymbol{S}^{t},a^{\prime},\boldsymbol{\mathcal{W}}^{t})$.
The primary benefit is the ability to generalize learning across actions
without modifying the learning algorithm, which improves policy evaluation in
the presence of numerous actions with similar state values. As a result of
combining the Dueling technique and DDQL, we can expect that the resultant
D3QL agent will outperform its predecessors. To bolster the effectiveness and
resilience of D3QL, observed transitions are archived in a memory bank known
as the experience memory. The learning process entails randomly selecting
transitions from this repository [16].
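The following hedged sketch (toy numbers; the psi and varphi arguments stand for the value and advantage heads of the main and target networks, and duelQ is a hypothetical helper) traces how (5) and (6) combine in one D3QL update:
% Toy D3QL target computation; all values are illustrative
duelQ = @(psi,varphi) psi + (varphi - mean(varphi)); %Eq. (6), dueling aggregation
Qmain = duelQ(1.0, [0.2 0.5 0.1 0.3]); %main-network Q-values over A
Qtarget = duelQ(0.9, [0.3 0.4 0.2 0.2]); %target-network Q-values over A
r = 0.8; gamma = 0.95;
[~, a_star] = max(Qmain); %decoupled action selection by the main network
Y = r + gamma*Qtarget(a_star); %Eq. (5): evaluation by the target network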
### IV-B Agent Customization
The first step toward exploiting D3QL to solve PIRA is to define the agent’s
action space, state space, and reward.
#### Action Space
We define the action space as set
$\boldsymbol{\mathcal{A}}=\\{a:(i,v,p,k)|i,v,p,k\in\mathbb{I},\mathbb{V},\mathbb{P},\mathbb{K}\\}$.
During each time slot and for every request, a specific action must be
executed to finalize the resource allocation pertaining to that request.
#### State Space
For encoding the system’s state, an architecture involving aggregation GNN
layers is employed, constructing an aggregation sequence across all compute
nodes, iteratively facilitating information exchange with neighboring nodes.
Therefore, at time slot $t$ when request $r$ is on the verge of receiving
service, the system state is denoted as
$\boldsymbol{S}^{t}(r)=\\{\boldsymbol{S}^{t}_{\mathbb{V}}(r),\boldsymbol{S}^{t}_{\mathbb{P}}(r)\\}$
and can be formally defined as:
$\displaystyle\boldsymbol{S}^{t}_{\mathbb{V}}(r)=\Bigg{\\{}\Big{[}[\widehat{\mathcal{C}^{t}_{v,k}}-\widecheck{\mathcal{C}_{r{\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}.}}}]_{\mathbb{K}},[\ddot{\mathcal{D}}^{t}_{r,v,k}-\widecheck{\mathcal{D}_{r{\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}.}}}]_{\mathbb{K}},\widetilde{\mathcal{E}_{v}},\overline{\mathcal{E}_{v}}\cdot(1-\dot{\mathcal{G}}^{t}_{v})\Big{]}_{\mathbb{V}}\Bigg{\\}},$
(7)
$\displaystyle\boldsymbol{S}^{t}_{\mathbb{P}}(r)=\Bigg{\\{}\Big{[}\big{[}\wedge_{\mathbb{L}_{p}}-\widecheck{\mathcal{B}_{r{\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}.}}}\big{]}_{\mathbb{K}},[\ddot{\mathcal{D}}^{t}_{r}-\ddot{\mathcal{D}}^{t}_{r,v,k}-\widecheck{\mathcal{D}_{r{\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}.}}}]_{\mathbb{K}},\widetilde{\mathcal{E}_{n}}\Big{]}_{\mathbb{P}}\Bigg{\\}},{\color[rgb]{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@color@gray@fill{1}ttttttttt}$
(8)
where
$\wedge_{\mathbb{L}_{p}}=\min\nolimits_{\mathbb{L}|n,l\in\mathbb{L}_{p}}\\{\widehat{\mathcal{B}^{t}_{n,k}},\widehat{\mathcal{B}^{t}_{l,k}}\\}$.
$\boldsymbol{S}^{t}_{\mathbb{V}}(r)$ and $\boldsymbol{S}^{t}_{\mathbb{P}}(r)$
represent the embeddings of compute nodes and network paths, respectively,
which function as inputs for the GNN layers. These embeddings encompass the
remaining resource capacity and the anticipated latency when request $r$ is
allocated to them, as well as their associated energy consumption.
#### Reward
Since the agent is designated to maximize OF, the reward should be engineered
to reinforce the support of high-profit requests while selecting resources
with low energy consumption. This goal is satisfied in (9), that is:
$\footnotesize r^{t+1}=\left\{\begin{array}[]{ll}\dfrac{\text{OF}\big(\boldsymbol{S}^{t}(r),a^{t}\big)-\min_{\boldsymbol{\mathcal{A}}}\text{OF}\big(\boldsymbol{S}^{t}(r),a^{\prime}\big)}{\mathcal{M}}&r\text{ is met}\\ 0&\text{otherwise}\end{array}\right.$
(9)
where
$\mathcal{M}=\max_{\boldsymbol{\mathcal{A}}}\text{OF}\big{(}\boldsymbol{S}^{t}(r),a^{\prime}\big{)}-\min_{\boldsymbol{\mathcal{A}}}\text{OF}\big{(}\boldsymbol{S}^{t}(r),a^{\prime}\big{)}$,
$\max\text{/}\min_{\boldsymbol{\mathcal{A}}}\text{OF}\big{(}\boldsymbol{S}^{t}(r),a^{\prime}\big{)}$
is the maximum/minimum profit that can be achieved by allocating the available
resources at time slot $t$ to request $r$ without considering any constraints
or requirements, and $\text{OF}\big{(}\boldsymbol{S}^{t}(r),a^{t}\big{)}$ is
the profit of the allocation provided by the agent. If the action fails to
meet the requirements of the request, it results in a reward of $0$.
Conversely, actions that yield greater profits correspond to higher rewards.
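As a minimal sketch of (9) with hypothetical profit values, the reward is the min-max-normalized profit of the chosen action, or zero if the allocation is infeasible:
% Illustrative reward computation for one request r
OFvals = [12.0 7.5 9.3 10.1]; %unconstrained profits over all actions in A
OF_a = 9.3; %profit of the agent's action a^t
feasible = true; %outcome of the C1-C18 QoS check for a^t
M = max(OFvals) - min(OFvals);
if feasible
    r_next = (OF_a - min(OFvals))/M; %in [0,1]; higher profit, higher reward
else
    r_next = 0;
end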
TABLE I: Simulation Parameters.
Parameter | Value
---|---
number of priority levels | $4$
resource capacity bounds | $\sim\mathcal{U}\\{250,300\\}$ mbps
instance capacity bound | $20$ mbps
energy consumption per capacity unit | $\sim\mathcal{U}\\{10,20\\}$
energy consumption per state transition | $\sim\mathcal{U}\\{100,200\\}$
capacity requirement per request | $\sim\mathcal{U}\\{4,8\\}$ mbps
bandwidth requirement per request | $\sim\mathcal{U}\\{2,10\\}$ mbps
latency requirement per request | $\sim\mathcal{U}\\{1,3\\}$ ms
packet size per request | $1$
profit per request ($\gamma_{r}$) | $\sim\mathcal{U}\\{5,15\\}$
Input: $\mathcal{T}$, $\epsilon^{\prime}$, and $\widetilde{\epsilon}$
1 $\boldsymbol{\Omega}\leftarrow\emptyset$, $\boldsymbol{\mathcal{W}}\leftarrow\mathbf{0}$, $\boldsymbol{\mathcal{W}^{-}}\leftarrow\mathbf{0}$, $\epsilon\leftarrow 1$, $memory\leftarrow\\{\\}$
2 for each $t$ in $[0:\mathcal{T}]$ do
3   if new request $r$ has arrived then
4     calculate $\boldsymbol{S}^{t}(r)=\\{\boldsymbol{S}^{t}_{\mathbb{V}}(r),\boldsymbol{S}^{t}_{\mathbb{P}}(r)\\}$
5     $\zeta\leftarrow$ generate a random number from $[0:1]$
6     if $\zeta>\epsilon$ then
7       $a^{t}=(i,v,p,k)\leftarrow$ argmax${}_{\boldsymbol{\mathcal{A}}}Q(\boldsymbol{S}^{t},a^{\prime},\boldsymbol{\mathcal{W}}^{t})$
8     else
9       select a random $a^{t}=(i,v,p,k)$ from $\boldsymbol{\mathcal{A}}$
10     calculate $r^{t+1}$
11     if $r^{t+1}>0$ then
12       establish request $r$ connection based on $a^{t}$
13     $memory\leftarrow memory\cup\\{(\boldsymbol{S}^{t},a^{t},r^{t+1})\\}$
14     choose a batch of samples from $memory$
15     train the agent
16     if $\epsilon>\widetilde{\epsilon}$ then
17       $\epsilon\leftarrow\epsilon-\epsilon^{\prime}$
18     $\boldsymbol{\Omega}\leftarrow\boldsymbol{\Omega}\cup\\{(t,r,a^{t})\\}$
19 return $\boldsymbol{\Omega}$
Algorithm 1: ORIENT
### IV-C ORIENT’s Algorithm
ORIENT is detailed in Algorithm 1. In this algorithm, $\epsilon^{\prime}$ and
$\widetilde{\epsilon}$ are small positive numbers that control the
$\epsilon$-greedy mechanism. During each time slot $t$, the agent receives
notifications of new request arrivals ($r$), and it computes the state based
on request $r$ requirements and the current system state. The action is then
chosen using an $\epsilon$-greedy policy, which follows the evaluation
function of the corresponding agent with probability $(1-\epsilon)$ and
selects a random action with probability $\epsilon$. Subsequently, the reward
is calculated, and if it exceeds $0$, it indicates that $a^{t}$ is feasible
and meets request $r$’s QoS requirements, enabling its connection based on
$a^{t}$ allocations. Finally, the experience memory is updated, samples are
drawn from the memory bank, and the agent undergoes training. During the
training process, $\epsilon$ decreases from $1$ to $\widetilde{\epsilon}$. The
algorithm yields $\boldsymbol{\Omega}$ as the history of allocations.
## V Performance Evaluation
Figure 2: The mean energy consumption of supported requests and the total
profit vs. the system size (A & B) and the number of all requests (C & D).
In this section, we present numerical results based on the system model
parameters listed in Table I. Other parameters can be chosen arbitrarily so
long as the logic outlined throughout the paper remains valid. To evaluate the
efficiency of ORIENT, we conduct a comparative analysis with OPT, D3QL, and
RND. OPT represents the optimal solution for PIRA, obtained through the use of
CPLEX 12.10. D3QL, on the other hand, bears similarities to the method
outlined in Algorithm 1, but employs exclusively simple linear layers in its
DNN, without utilizing any GNNs. This approach forms the foundation for
several related studies, including A-DDPG [13] and MDRL-SaDS [7], both of
which are RL methods designed to enhance the utility of individual requests by
considering factors such as resource cost, required bandwidth, and E2E path
latency. Lastly, RND represents a random allocation strategy, where resources
are allocated to active requests without considering any constraints.
The results are depicted in Fig. 2, where subfigures A and B represent the
average energy consumption per request and total profit across various system
sizes, with a constant of $300$ active requests. Here, incrementing the system
size entails the creation of a new system graph, incorporating $\mathcal{N}+1$
network devices and $\mathcal{V}+1$ compute nodes. Particularly, from $10$ to
$13$, resources with a significant energy consumption are introduced into the
graph. Between $14$ and $17$, resources with a moderate energy consumption are
added to the graph, and the remaining resources included from $18$ to $21$ are
characterized by low energy consumption. Subfigures C and D display similar
quality metrics, but for different numbers of requests, while keeping the
system size fixed at $12$ with equal numbers of resources ($4$) from each
level of energy consumption.
Within the subfigures, it is evident that OPT serves as an upper performance
bound, while RND serves as the lower bound. Furthermore, when all resources
exhibit high energy consumption rates or are fully occupied (with
$\mathcal{N}\leq 13$ in A and $\mathcal{R}\geq 200$ in C), RND shows a similar
energy consumption pattern to D3QL-based techniques, but its support rate is
limited due to the absence of intelligence and feasibility checks. In
contrast, ORIENT excels in both scenarios. As demonstrated in A and B, it
achieves near-optimal results by prioritizing high-capacity resources with
minimal energy consumption, especially when multiple choices are available for
each request ($\mathcal{N}\geq 13$). Similarly, regardless of whether all
requests can be supported (C and D, with $\mathcal{R}\leq 200$), near-optimal
solutions are consistently attained. However, D3QL exhibits less efficiency
and stability compared to ORIENT, mainly due to its inferior state decoding
capability.
## VI Conclusion
In this paper, we examined the joint problem of service instance placement and
assignment, path selection, and request prioritization, dubbed PIRA, with the
objective of maximizing the overall profit of the system (as a function of the
number of supported concurrent requests) while minimizing the overall energy
consumption over a continuous period of time, taking into account E2E latency
and resource capacity constraints. This problem was formulated as a MINLP
problem, its complexity was analyzed, and it was demonstrated that it is an
NP-hard problem. Subsequently, a technique named ORIENT was introduced to
address the problem in a near-optimal manner, utilizing a GNN-empowered D3QL
strategy. The effectiveness of the suggested technique was validated through
numerical results. As potential future work, our intention is to tackle the
problem in the context of dynamic environments characterized by
temporal/spatial fluctuations in requests and resources.
## Acknowledgment
This work is partially supported by the European Union's Horizon 2020 Research and
Innovation Program through the aerOS project under Grant No. 101069732; the
Business Finland 6Bridge 6Core project under Grant No. 8410/31/2022; the
European Union’s HE research and innovation program HORIZON-JUSNS-2022 under
the 6GSandbox project (Grant No. 101096328); and the Research Council of
Finland 6G Flagship Programme under Grant No. 346208. This research was also
conducted at ICTFICIAL Oy. The paper reflects only the authors’ views, and the
commission bears no responsibility for any utilization of the information
contained herein.
## References
* [1] M. Latva-aho, “Key Drivers and Research Challenges for 6G Ubiquitous Wireless Intelligence,” 2019, publisher: University of Oulu.
* [2] H. Yu _et al._ , “Toward 6G-Based Metaverse: Supporting Highly-Dynamic Deterministic Multi-User Extended Reality Services,” _IEEE Network_ , vol. 37, no. 4, pp. 30–38, 2023.
* [3] H. Mazandarani _et al._ , “A Semantic-Aware Multiple Access Scheme for Distributed, Dynamic 6G-Based Applications,” _arXiv preprint arXiv:2401.06308_ , 2024.
* [4] J. Prados-Garzon _et al._ , “Deterministic 6GB-Assisted Quantum Networks with Slicing Support: A New 6GB Use Case,” _IEEE Network_ , 2023.
* [5] Z. Yang _et al._ , “Increasing the Energy Efficiency of a Data Center Based on Machine Learning,” _Journal of Industrial Ecology_ , vol. 26, no. 1, pp. 323–335, 2022.
* [6] “Cisco Annual Internet Report (2018–2023) White Paper.”
* [7] H. Xuan _et al._ , “Multi-Agent Deep Reinforcement Learning Algorithm with Self-adaption Division Strategy for VNF-SC Deployment in SDN/NFV-Enabled Networks,” _Applied Soft Computing_ , vol. 138, p. 110189, May 2023.
* [8] R. Solozabal _et al._ , "Virtual Network Function Placement Optimization With Deep Reinforcement Learning," _IEEE Journal on Selected Areas in Communications_ , vol. 38, no. 2, pp. 292–303, Feb. 2020.
* [9] C. Pham _et al._ , “Traffic-Aware and Energy-Efficient VNF Placement for Service Chaining: Joint Sampling and Matching Approach,” _IEEE Trans. on Services Computing_ , vol. 13, no. 1, pp. 172–185, Jan. 2020.
* [10] G. L. Santos _et al._ , “Availability-Aware and Energy-Aware Dynamic SFC Placement using Reinforcement Learning,” _The Journal of Supercomputing_ , vol. 77, no. 11, Nov. 2021.
* [11] Z. Sasan _et al._ , "Joint Network Slicing, Routing, and In-Network Computing for Energy-Efficient 6G," _arXiv preprint arXiv:2401.06306_ , 2024.
* [12] M. Farhoudi _et al._ , “Qos-Aware Service Prediction and Orchestration in Cloud-Network Integrated Beyond 5G,” _arXiv preprint arXiv:2309.10185_ , 2023.
* [13] N. He _et al._ , “Leveraging Deep Reinforcement Learning With Attention Mechanism for Virtual Network Function Placement and Routing,” _IEEE Trans. on Parallel and Distributed Systems_ , vol. 34, no. 4, Apr. 2023.
* [14] M. Shokrnezhad _et al._ , “Double Deep Q-Learning-based Path Selection and Service Placement for Latency-Sensitive Beyond 5G Applications,” _IEEE Transactions on Mobile Computing_ , pp. 1–14, 2023.
* [15] Z. Wang _et al._ , “Dueling Network Architectures for Deep Reinforcement Learning,” in _Proceedings of The 33rd International Conference on Machine Learning_ , vol. 48, Jun. 2016, pp. 1995–2003.
* [16] V. Mnih _et al._ , “Human-level Control through Deep Reinforcement Learning,” _Nature_ , vol. 518, no. 7540, pp. 529–533, Feb. 2015.
|
# Means refinements via convexity
M. Sababheh Department of Basic Sciences, Princess Sumaya University For
Technology, Al Jubaiha, Amman 11941, Jordan<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
The main goal of this article is to find the exact difference between a convex
function and its secant, as a limit of positive quantities. This idea will be
expressed as a convex inequality that leads to refinements and reversals of
well established inequalities treating different means. The significance of
these inequalities is to write one inequality that brings together and refine
almost all known inequalities treating the arithmetic, geometric, harmonic and
Heinz means, for numbers and operators.
###### Key words and phrases:
convex functions, means inequalities, norm inequalities.
###### 2010 Mathematics Subject Classification:
15A39, 15B48, 26D15, 26B25, 47A30, 47A63.
## 1\. introduction
Convex functions and their inequalities have played a major role in the study
of various topics in Mathematics; including applied Mathematics, Mathematical
Analysis and Mathematical Physics. Means and their comparison is indeed an
important application of convexity.
Recall that a function $f:\mathbb{I}\to\mathbb{R}$, defined on a real interval
$\mathbb{I}$, is said to be convex if $f(\alpha x_{1}+\beta x_{2})\leq\alpha
f(x_{1})+\beta f(x_{2})$, when $x_{1},x_{2}\in\mathbb{I}$ and
$\alpha,\beta\geq 0$ satisfying $\alpha+\beta=1.$ On the other hand,
$f:\mathbb{I}\to\mathbb{R}^{+}$ is said to be log-convex if $g(x)=\log f(x)$
is convex, or equivalently if $f(\alpha x_{1}+\beta x_{2})\leq
f^{\alpha}(x_{1})f^{\beta}(x_{2})$ for the above parameters.
Speaking of means, the comparison between the weighted arithmetic, geometric
and harmonic means is an immediate consequence of convexity or log-convexity
of the functions $x\nabla_{t}y=(1-t)x+ty,x\\#_{t}y=x^{1-t}y^{t}$ and
$x!_{t}y=((1-t)x^{-1}+ty^{-1})^{-1},x,y>0,$ defined for $0\leq t\leq 1$.
Adopting these notations, we drop $t$ when $t=\frac{1}{2}.$
Convexity of the function $f(t)=x\\#_{t}y$ implies the well-known Young's
inequality $x\\#_{t}y\leq x\nabla_{t}y.$ On the other hand, convexity of the
function $g(t)=x!_{t}y$ implies the arithmetic-harmonic mean inequality
$x!_{t}y\leq x\nabla_{t}y$, while log-convexity of $g$ implies the geometric-
harmonic mean inequality $x!_{t}y\leq x\\#_{t}y.$
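To make this mechanism explicit, Young's inequality is exactly the secant bound for the convex function $f(t)=x^{1-t}y^{t}$ between $t=0$ and $t=1$:
$x\\#_{t}y=f\left((1-t)\cdot 0+t\cdot 1\right)\leq(1-t)f(0)+tf(1)=(1-t)x+ty=x\nabla_{t}y.$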
These inequalities, though very simple, have some significant applications.
For example, the above Young's inequality implies the celebrated Hölder's
inequality $\|fg\|_{1}\leq\|f\|_{p}\|g\|_{q}$ for $f\in L^{p}(X)$ and $g\in
L^{q}(X)$, for the conjugate exponents $p,q$, where $X$ is some measure space.
Among the most interesting applications of the above mean inequalities is the
possible comparison between operators acting on a finite dimensional Hilbert
space $H$. In the sequel, $\mathbb{M}_{n}$ will denote the space of operators
acting on an $n$-dimensional Hilbert space $H$, $\mathbb{M}_{n}^{+}$ will
denote the cone of positive semi-definite operators in $\mathbb{M}_{n}$, while
$\mathbb{M}_{n}^{++}$ will denote the cone of strictly positive operators in
$\mathbb{M}_{n}.$ Then the above numerical inequalities have their operator
versions, such as $A\\#_{t}B\leq A\nabla_{t}B$, where
$A,B\in\mathbb{M}_{n}^{++},A\nabla_{t}B=(1-t)A+tB$ and
$A\\#_{t}B=A^{\frac{1}{2}}\left(A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\right)^{t}A^{\frac{1}{2}}.$
In this context, we say that $A\leq B$ for two self-adjoint operators $A$ and
$B$ if $B-A\in\mathbb{M}_{n}^{+}.$
Obtaining the operator versions from the corresponding numerical versions can
be done in different approaches, among which is the application of the
following lemma [2].
###### Lemma 1.1.
Let $X\in\mathcal{M}_{n}$ be self-adjoint and let $f$ and $g$ be continuous
real valued functions such that $f(t)\geq g(t)$ for all $t\in{\text{Sp}}(X),$
the spectrum of $X$. Then $f(X)\geq g(X).$
Recent studies of the topic have investigated possible refinements of the
above inequalities, where adding a positive term to the left side becomes
possible. This idea has been treated in [3, 5, 6, 7, 9, sabjmaa, 10, 11, 12],
where not only refinements have been investigated, but reversed versions and
much more have been discussed.
To keep our paper concise, we will not go through the exact results of the
above references now; however, we will comment later on how the results in
this paper generalize almost all results in these references, regarding the
refinements and the reverses of the above mean inequalities.
The main goal of this article is to avoid dealing with the specific means, and
to treat a general convexity argument that leads to these refinements. In
particular, we prove that for certain positive quantities
$A_{j}(\nu)\Delta_{j}f(\nu;a,b),$ we have
$\displaystyle f\left((1-\nu)a+\nu
b\right)+\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;a,b)\leq(1-\nu)f(a)+\nu
f(b),N\in\mathbb{N},$
for the convex function $f:[a,b]\to\mathbb{R}$. This provides $N$ refining
terms of the inequality $f\left((1-\nu)a+\nu b\right)\leq(1-\nu)f(a)+\nu
f(b)$, which follows from convexity of $f$. Furthermore, we prove a reversed
version and we prove that as $N\to\infty$ the above inequality becomes an
equality. As a natural consequence, we obtain some refinements and reverses
for log-convex functions.
As we will see, the above inequality and its consequences happen to be
generalizations that imply almost all inequalities in the references [3, 5, 6,
9, sabjmaa, 10, 11, 12]. This is our main motivation behind this work; to find
a formula that implies and generalizes all other formulae and hence, to
enhance our understanding of these inequalities.
We remark that the proof of the first main result in this work is inspired by
our recent work in [sabjmaa].
## 2\. main results
For the rest of the paper, the following notations will be adopted. For
$0\leq\nu\leq 1$ and $j\in\mathbb{N}$, let
$\left\\{\begin{array}[]{cc}k_{j}(\nu)=[2^{j-1}\nu],r_{j}(\nu)=[2^{j}\nu]\;{\text{and}}\\\
A_{j}(\nu)=(-1)^{r_{j}(\nu)}2^{j-1}\nu+(-1)^{r_{j}(\nu)+1}\left[\frac{r_{j}(\nu)+1}{2}\right]\end{array}\right..$
(2.1)
Moreover, if $f:[a,b]\to\mathbb{R}$ is any function, define
$\displaystyle\Delta_{j}f(\nu;a,b)$ $\displaystyle=$ $\displaystyle
f\left(\left(1-\frac{k_{j}(\nu)}{2^{j-1}}\right)a+\frac{k_{j}(\nu)}{2^{j-1}}b\right)+f\left(\left(1-\frac{k_{j}(\nu)+1}{2^{j-1}}\right)a+\frac{k_{j}(\nu)+1}{2^{j-1}}b\right)$
(2.2) $\displaystyle-$ $\displaystyle
2f\left(\left(1-\frac{2k_{j}(\nu)+1}{2^{j}}\right)a+\frac{2k_{j}(\nu)+1}{2^{j}}b\right),0\leq\nu\leq
1.$
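To make these definitions concrete, the following is a minimal Python sketch of $k_{j},r_{j},A_{j}$ from (2.1) and of $\Delta_{j}f$ from (2.2); the function names are ours and chosen purely for illustration.

```python
import math

def k_j(nu, j):
    # k_j(nu) = [2^{j-1} nu], with [.] the integer (floor) part
    return math.floor(2 ** (j - 1) * nu)

def r_j(nu, j):
    # r_j(nu) = [2^j nu]
    return math.floor(2 ** j * nu)

def A_j(nu, j):
    # A_j(nu) = (-1)^{r_j} 2^{j-1} nu + (-1)^{r_j+1} [(r_j + 1)/2], per (2.1)
    r = r_j(nu, j)
    return (-1) ** r * 2 ** (j - 1) * nu + (-1) ** (r + 1) * math.floor((r + 1) / 2)

def Delta_j(f, nu, j, a, b):
    # Delta_j f(nu; a, b), per (2.2); nonnegative whenever f is convex (Lemma 2.1)
    k = k_j(nu, j)
    x = (1 - k / 2 ** (j - 1)) * a + (k / 2 ** (j - 1)) * b
    y = (1 - (k + 1) / 2 ** (j - 1)) * a + ((k + 1) / 2 ** (j - 1)) * b
    z = (1 - (2 * k + 1) / 2 ** j) * a + ((2 * k + 1) / 2 ** j) * b
    return f(x) + f(y) - 2 * f(z)
```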
### 2.1. Convex functions
We discuss first the inequalities that govern convex functions, then we apply
these inequalities to log-convex functions.
###### Lemma 2.1.
If $f:[a,b]\to\mathbb{R}$ is convex, then $\Delta_{j}f(\nu;a,b)\geq 0$ for
$j\in\mathbb{N}$ and $0\leq\nu\leq 1.$
###### Proof.
Letting
$x_{j}(\nu)=\left(1-\frac{k_{j}(\nu)}{2^{j-1}}\right)a+\frac{k_{j}(\nu)}{2^{j-1}}b,y_{j}(\nu)=\left(1-\frac{k_{j}(\nu)+1}{2^{j-1}}\right)a+\frac{k_{j}(\nu)+1}{2^{j-1}}b$
and
$z_{j}(\nu)=\left(1-\frac{2k_{j}(\nu)+1}{2^{j}}\right)a+\frac{2k_{j}(\nu)+1}{2^{j}}b,$
it is easy to see that $z_{j}(\nu)=\frac{x_{j}(\nu)+y_{j}(\nu)}{2}.$ Then
$\Delta_{j}f(\nu;a,b)=f(x_{j}(\nu))+f(y_{j}(\nu))-2f(z_{j}(\nu))\geq 0,$ by
convexity of $f$. ∎
###### Remark 2.2.
When $f:[a,b]\to\mathbb{R}$, we adopt the convention that $f(x)=0$ for
$x\not\in[a,b].$ This convention will be needed, for example, in the next
lemma, when $N=1$ and $\nu=1.$
###### Lemma 2.3.
Let $f:[0,1]\to\mathbb{R}$ be a function and let $N\in\mathbb{N}$. Then
$\displaystyle(1-\nu)f(0)$ $\displaystyle+$ $\displaystyle\nu
f(1)-\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1)$ $\displaystyle=$
$\displaystyle\left([2^{N}\nu]+1-2^{N}\nu\right)f\left(\frac{[2^{N}\nu]}{2^{N}}\right)+\left(2^{N}\nu-[2^{N}\nu]\right)f\left(\frac{[2^{N}\nu]+1}{2^{N}}\right).$
###### Proof.
We proceed by induction on $N$.
When $N=1$ and $0\leq\nu<\frac{1}{2},$ $r_{1}(\nu)=0$ and $k_{1}(\nu)=0$.
Hence $A_{1}(\nu)=\nu$ and
$\Delta_{1}f(\nu;a,b)=f(a)+f(b)-2f\left(\frac{a+b}{2}\right).$ Then direct
computations show the result.
Now if $\frac{1}{2}\leq\nu<1$, then $r_{1}(\nu)=1$ and $k_{1}(\nu)=0$, hence
$A_{1}(\nu)=1-\nu$ and
$\Delta_{1}f(\nu;a,b)=f(a)+f(b)-2f\left(\frac{a+b}{2}\right).$ Again, direct
computations show the result.
When $\nu=1,$ the result follows immediately.
Now assume that the identity in Lemma 2.3 is true for some $N\in\mathbb{N}$. We
assert its truth for $N+1.$ Notice that, using the inductive step,
$\displaystyle(1-\nu)f(0)$ $\displaystyle+$ $\displaystyle\nu
f(1)-\sum_{j=1}^{N+1}A_{j}(\nu)\Delta_{j}f(\nu;0,1)$ (2.4) $\displaystyle=$
$\displaystyle(1-\nu)f(0)+\nu
f(1)-\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1)-A_{N+1}(\nu)\Delta_{N+1}f(\nu;0,1)$
$\displaystyle=$
$\displaystyle\left([2^{N}\nu]+1-2^{N}\nu\right)f\left(\frac{[2^{N}\nu]}{2^{N}}\right)+\left(2^{N}\nu-[2^{N}\nu]\right)f\left(\frac{[2^{N}\nu]+1}{2^{N}}\right)$
$\displaystyle-$
$\displaystyle\left((-1)^{[2^{N+1}\nu]}2^{N}\nu+(-1)^{[2^{N+1}\nu]+1}\left[\frac{[2^{N+1}\nu]+1}{2}\right]\right)\times$
$\displaystyle\times$
$\displaystyle\left(f\left(\frac{[2^{N}\nu]}{2^{N}}\right)+f\left(\frac{[2^{N}\nu]+1}{2^{N}}\right)-2f\left(\frac{2[2^{N}\nu]+1}{2^{N+1}}\right)\right).$
Now we treat two cases.
Case I: If $[2^{N+1}\nu]$ is odd, then we easily see that
$[2^{N}\nu]=\frac{[2^{N+1}\nu]-1}{2}.$ Therefore,
$f\left(\frac{[2^{N}\nu]+1}{2^{N}}\right)=f\left(\frac{[2^{N+1}\nu]+1}{2^{N+1}}\right)\;{\text{and}}\;f\left(\frac{2[2^{N}\nu]+1}{2^{N+1}}\right)=f\left(\frac{[2^{N+1}\nu]}{2^{N+1}}\right).$
Substituting these values in (2.4) and simplifying imply
$\displaystyle(1-\nu)f(0)+\nu
f(1)-\sum_{j=1}^{N+1}A_{j}(\nu)\Delta_{j}f(\nu;0,1)$ $\displaystyle=$
$\displaystyle\left(2^{N+1}\nu-[2^{N+1}\nu]\right)f\left(\frac{[2^{N+1}\nu]+1}{2^{N+1}}\right)+\left([2^{N+1}\nu]+1-2^{N+1}\nu\right)f\left(\frac{[2^{N+1}\nu]}{2^{N+1}}\right),$
which completes the proof, when $[2^{N+1}\nu]$ is odd.
Case II: If $[2^{N+1}\nu]$ is even, then $2[2^{N}\nu]=[2^{N+1}\nu]$ and
$f\left(\frac{[2^{N}\nu]}{2^{N}}\right)=f\left(\frac{[2^{N+1}\nu]}{2^{N+1}}\right)\;{\text{and}}\;f\left(\frac{2[2^{N}\nu]+1}{2^{N+1}}\right)=f\left(\frac{[2^{N+1}\nu]+1}{2^{N+1}}\right).$
Substituting these values in (2.4) and simplifying imply
$\displaystyle(1-\nu)f(0)+\nu
f(1)-\sum_{j=1}^{N+1}A_{j}(\nu)\Delta_{j}f(\nu;0,1)$ $\displaystyle=$
$\displaystyle\left(2^{N+1}\nu-[2^{N+1}\nu]\right)f\left(\frac{[2^{N+1}\nu]+1}{2^{N+1}}\right)+\left([2^{N+1}\nu]+1-2^{N+1}\nu\right)f\left(\frac{[2^{N+1}\nu]}{2^{N+1}}\right).$
This completes the proof. ∎
###### Corollary 2.4.
Let $f:[0,1]\to\mathbb{R}$ be convex and let $N\in\mathbb{N}$. Then
$f(\nu)+\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1)\leq(1-\nu)f(0)+\nu f(1).$
(2.5)
###### Proof.
From Lemma 2.3 and convexity of $f$, we have
$\displaystyle(1-\nu)f(0)$ $\displaystyle+$ $\displaystyle\nu
f(1)-\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1)$ $\displaystyle=$
$\displaystyle\left([2^{N}\nu]+1-2^{N}\nu\right)f\left(\frac{[2^{N}\nu]}{2^{N}}\right)+\left(2^{N}\nu-[2^{N}\nu]\right)f\left(\frac{[2^{N}\nu]+1}{2^{N}}\right)$
$\displaystyle\geq$ $\displaystyle
f\left(\left([2^{N}\nu]+1-2^{N}\nu\right)\frac{[2^{N}\nu]}{2^{N}}+\left(2^{N}\nu-[2^{N}\nu]\right)\frac{[2^{N}\nu]+1}{2^{N}}\right)$
$\displaystyle=$ $\displaystyle f(\nu).$
This completes the proof. ∎
Now our first main result in its general form can be stated as follows.
###### Theorem 2.5.
Let $f:[a,b]\to\mathbb{R}$ be convex. Then for each $N\in\mathbb{N}$ and
$0\leq\nu\leq 1,$ we have
$\displaystyle f\left((1-\nu)a+\nu
b\right)+\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;a,b)\leq(1-\nu)f(a)+\nu
f(b).$ (2.6)
###### Proof.
For the given $f$, define $g:[0,1]\to\mathbb{R}$ by $g(x)=f((1-x)a+xb).$ Then
$g$ is convex on $[0,1].$ Applying Corollary 2.4 on the function $g$ implies
the result. ∎
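As a numerical sanity check of (2.6), the following hedged sketch (reusing the helper functions defined after (2.2)) tests the refinement for the convex function $f=\exp$ on $[a,b]=[-1,2]$ at randomly drawn $\nu$ and $N$.

```python
import math
import random

# Assumes A_j and Delta_j from the sketch after (2.2) are in scope.
f, a, b = math.exp, -1.0, 2.0
for _ in range(1000):
    nu, N = random.random(), random.randint(1, 8)
    lhs = f((1 - nu) * a + nu * b) + sum(
        A_j(nu, j) * Delta_j(f, nu, j, a, b) for j in range(1, N + 1))
    assert lhs <= (1 - nu) * f(a) + nu * f(b) + 1e-9   # inequality (2.6)
```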
###### Remark 2.6.
We remark that a negative version of the above theorem has been recently
shown in [8]. Namely, it was proved that
$\displaystyle(1+\nu)f(a)-\nu f(b)$
$\displaystyle+\sum_{j=1}^{N}2^{j}\nu\left[\frac{f(a)+f\left(\frac{(2^{j-1}-1)a+b}{2^{j-1}}\right)}{2}-f\left(\frac{(2^{j}-1)a+b}{2^{j}}\right)\right]$
$\displaystyle\leq f\left((1+\nu)a-\nu b\right),\nu\geq 0,a<b,$ (2.7)
for the convex function $f:\mathbb{R}\to\mathbb{R}.$ However, the method of
proof is considerably easier than the above proofs and the applications are
different.
Our next step is to prove a reversed version of (2.6).
###### Theorem 2.7.
Let $f:[a,b]\to\mathbb{R}$ be convex and let $N\in\mathbb{N}$. Then for
$0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle f\left((1-\nu)a+\nu b\right)$ $\displaystyle+$
$\displaystyle(1-A_{1}(\nu))\Delta_{1}f(\nu;a,b)$ $\displaystyle\geq$
$\displaystyle(1-\nu)f(a)+\nu
f(b)+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{a+b}{2},b\right).$
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle f\left((1-\nu)a+\nu b\right)$ $\displaystyle+$
$\displaystyle(1-A_{1}(\nu))\Delta_{1}f(\nu;a,b)$ $\displaystyle\geq$
$\displaystyle(1-\nu)f(a)+\nu
f(b)+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}f\left(2-2\nu;a,\frac{a+b}{2}\right).$
###### Proof.
For $0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle f\left((1-\nu)a+\nu
b\right)+(1-A_{1}(\nu))\Delta_{1}f(\nu;a,b)-\left((1-\nu)f(a)+\nu f(b)\right)$
$\displaystyle=$ $\displaystyle 2\nu
f\left(\frac{a+b}{2}\right)+(1-2\nu)f(b)+f\left((1-\nu)a+\nu
b\right)-2f\left(\frac{a+b}{2}\right)$ $\displaystyle\geq$
$\displaystyle\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{a+b}{2},b\right)+f\left(2\nu\frac{a+b}{2}+(1-2\nu)b\right)$
$\displaystyle+f\left((1-\nu)a+\nu b\right)-2f\left(\frac{a+b}{2}\right)$
$\displaystyle=$
$\displaystyle\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{a+b}{2},b\right)+f\left(\nu
a+(1-\nu)b\right)$ $\displaystyle+f\left((1-\nu)a+\nu
b\right)-2f\left(\frac{a+b}{2}\right)$ $\displaystyle\geq$
$\displaystyle\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{a+b}{2},b\right),$
where the last line follows from convexity of $f$, where one has
$\displaystyle f\left(\nu a+(1-\nu)b\right)+f\left((1-\nu)a+\nu b\right)$
$\displaystyle\geq 2f\left(\frac{\nu a+(1-\nu)b+(1-\nu)a+\nu
b}{2}\right)=2f\left(\frac{a+b}{2}\right).$
This completes the proof for $0\leq\nu\leq\frac{1}{2}.$ Similar computations
imply the desired inequality for $\frac{1}{2}\leq\nu\leq 1.$ ∎
In fact, the above reversed version turns out to be equivalent to convexity.
###### Proposition 2.8.
Let $f:\mathbb{I}\to\mathbb{R}$ be a function defined on the interval
$\mathbb{I}$. Assume that for all $a<b$ in $\mathbb{I}$ and all $0\leq\nu\leq
1$, we have
$f\left((1-\nu)a+\nu
b\right)+(1-A_{1}(\nu))\Delta_{1}f(\nu;a,b)\geq(1-\nu)f(a)+\nu f(b),$ (2.8)
then $f$ is convex on $\mathbb{I}.$
###### Proof.
Observe that when $0\leq\nu\leq\frac{1}{2},$ (2.8) is equivalent to
$f\left((1-\nu)a+\nu
b\right)+(1-\nu)\left(f(a)+f(b)-2f\left(\frac{a+b}{2}\right)\right)\geq(1-\nu)f(a)+\nu
f(b),$
or
$f\left(\frac{a+b}{2}\right)\leq\frac{1}{2-2\nu}f\left((1-\nu)a+\nu
b\right)+\frac{1-2\nu}{2-2\nu}f(b).$ (2.9)
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ (2.8) is equivalent to
$f\left(\frac{a+b}{2}\right)\leq\frac{2\nu-1}{2\nu}f(a)+\frac{1}{2\nu}f\left((1-\nu)a+\nu
b\right).$ (2.10)
Let $x_{1}<x_{2}\in\mathbb{I}$ and let $0<\lambda<1.$ We assert that
$f((1-\lambda)x_{1}+\lambda x_{2})\leq(1-\lambda)f(x_{1})+\lambda f(x_{2}).$
If $0<\lambda\leq\frac{1}{2},$ let
$\nu=\frac{1-2\lambda}{2(1-\lambda)},a=(2-2\lambda)x_{1}+(2\lambda-1)x_{2}\;{\text{and}}\;b=x_{2}.$
Then one can easily check that when $0<\lambda\leq\frac{1}{2},$ we have
$0<\nu\leq\frac{1}{2}$ and $a<b$. With these choices, we have
$\frac{a+b}{2}=(1-\lambda)x_{1}+\lambda x_{2}\;{\text{and}}\;(1-\nu)a+\nu
b=x_{1}.$
Substituting these quantities in (2.9) implies $f((1-\lambda)x_{1}+\lambda
x_{2})\leq(1-\lambda)f(x_{1})+\lambda f(x_{2}).$ This proves the desired
inequality for $0<\lambda\leq\frac{1}{2}.$
Now if $\frac{1}{2}\leq\lambda<1,$ let
$\nu=\frac{1}{2\lambda},a=x_{1}\;{\text{and}}\;b=(1-2\lambda)x_{1}+2\lambda
x_{2}.$
With these choices, we have $\frac{1}{2}<\nu\leq 1$ and $a<b$. Now
substituting these quantities in (2.10) implies the desired inequality for
$\frac{1}{2}\leq\lambda<1.$ This completes the proof. ∎
As for the geometric meaning of these refinements, it turns out we are dealing
with the interpolation of the function $f$ over the dyadic partition.
###### Proposition 2.9.
Let $f:[0,1]\to\mathbb{R}$ be any function, and let $N\in\mathbb{N}$. Then, if
$\nu_{i}=\frac{i}{2^{N}}$ for some $i=0,1,\cdots,2^{N},$ we have
$f(\nu_{i})+\sum_{j=1}^{N}A_{j}(\nu_{i})\Delta_{j}f(\nu_{i};0,1)=(1-\nu_{i})f(0)+\nu_{i}f(1).$
(2.11)
###### Proof.
Observe that when $\nu_{i}=\frac{i}{2^{N}},$ we have
$[2^{N}\nu_{i}]=2^{N}\nu_{i}=i.$ From Lemma 2.3, we have
$\displaystyle(1-\nu_{i})f(0)$ $\displaystyle+$
$\displaystyle\nu_{i}f(1)-\sum_{j=1}^{N}A_{j}(\nu_{i})\Delta_{j}f(\nu_{i};0,1)$
$\displaystyle=$
$\displaystyle\left([2^{N}\nu_{i}]+1-2^{N}\nu_{i}\right)f\left(\frac{[2^{N}\nu_{i}]}{2^{N}}\right)+\left(2^{N}\nu_{i}-[2^{N}\nu_{i}]\right)f\left(\frac{[2^{N}\nu_{i}]+1}{2^{N}}\right)$
$\displaystyle=$ $\displaystyle f\left(\frac{i}{2^{N}}\right)=f(\nu_{i}).$
This completes the proof. ∎
###### Proposition 2.10.
Let $f:[0,1]\to\mathbb{R}$ be a given function. If $f$ is continuous, then
$f(\nu)+\lim_{N\to\infty}\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1)=(1-\nu)f(0)+\nu
f(1),$ (2.12)
uniformly in $\nu\in[0,1].$
###### Proof.
Let $N\in\mathbb{N}$ and define the function
$g_{N}(\nu)=\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1).$
From Proposition 2.9, we have
$g_{N}(\nu_{i})=(1-\nu_{i})f(0)+\nu_{i}f(1)-f(\nu_{i}),$ when
$\nu_{i}=\frac{i}{2^{N}}$ for some $i=0,1,\cdots,2^{N}.$ Noting the definitions
of $A_{j}$ and $\Delta_{j}f$, one can easily see that $g_{N}$ is linear on
each dyadic interval
$I_{i}:=\left[\frac{i}{2^{N}},\frac{i+1}{2^{N}}\right],i=0,\cdots,2^{N}-1.$
Now since $g_{N}$ is linear on each $I_{i}$ and coincides with the continuous
function $h(\nu):=(1-\nu)f(0)+\nu f(1)-f(\nu)$ at the dyadic points
$\frac{i}{2^{N}}$, it follows that
$g_{N}$ is the linear interpolation of $h$ at the dyadic partition of $[0,1].$
Since $f$ is continuous, it follows that $g_{N}\to h$ uniformly, completing
the proof. ∎
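This uniform convergence is easy to observe numerically; a hedged sketch with $f=\exp$ on $[0,1]$, again reusing the helpers defined after (2.2):

```python
import math

# Assumes A_j and Delta_j from the sketch after (2.2) are in scope.
f = math.exp
h = lambda nu: (1 - nu) * f(0) + nu * f(1) - f(nu)
for N in (1, 4, 8, 12):
    err = max(abs(h(nu) - sum(A_j(nu, j) * Delta_j(f, nu, j, 0, 1)
                              for j in range(1, N + 1)))
              for nu in (i / 1000 for i in range(1001)))
    print(N, err)   # the maximal gap shrinks as N grows
```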
### 2.2. Log-convex functions
The proof of the following result follows from Theorem 2.5 on replacing $f$ by
$\log f.$
###### Corollary 2.11.
Let $f:[a,b]\to(0,\infty)$ be log-convex. Then for $0\leq\nu\leq 1$ and
$N\in\mathbb{N}$, we have
$f\left((1-\nu)a+\nu
b\right)\prod_{j=1}^{N}\left(\frac{f(x_{j}(\nu))f(y_{j}(\nu))}{f^{2}(z_{j}(\nu))}\right)^{A_{j}(\nu)}\leq
f^{1-\nu}(a)f^{\nu}(b),$ (2.13)
where $x_{j}(\nu),y_{j}(\nu)$ and $z_{j}(\nu)$ are as in the proof of Lemma
2.1.
On the other hand, applying Theorem 2.7 implies the following.
###### Corollary 2.12.
Let $f:[a,b]\to(0,\infty)$ be log-convex. Then for $0\leq\nu\leq\frac{1}{2}$
and $N\in\mathbb{N}$, we have
$f\left((1-\nu)a+\nu
b\right)\left(\frac{f(a)f(b)}{f^{2}(\frac{a+b}{2})}\right)^{1-A_{1}(\nu)}\geq
f^{1-\nu}(a)f^{\nu}(b)\prod_{j=1}^{N}\left(\frac{f(t_{j}(\nu))f(u_{j}(\nu))}{f^{2}(w_{j}(\nu))}\right)^{A_{j}(1-2\nu)},$
(2.14)
where $t_{j}(\nu),u_{j}(\nu)$ and $w_{j}(\nu)$ are obtained from the above
$x_{j}(\nu),y_{j}(\nu)$ and $z_{j}(\nu)$ on replacing $(\nu,a,b)$ by
$\left(1-2\nu,\frac{a+b}{2},b\right).$
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ we have
$f\left((1-\nu)a+\nu
b\right)\left(\frac{f(a)f(b)}{f^{2}(\frac{a+b}{2})}\right)^{1-A_{1}(\nu)}\geq
f^{1-\nu}(a)f^{\nu}(b)\prod_{j=1}^{N}\left(\frac{f(t_{j}(\nu))f(u_{j}(\nu))}{f^{2}(w_{j}(\nu))}\right)^{A_{j}(2-2\nu)},$
(2.15)
where $t_{j}(\nu),u_{j}(\nu)$ and $w_{j}(\nu)$ are obtained from the above
$x_{j}(\nu),y_{j}(\nu)$ and $z_{j}(\nu)$ on replacing $(\nu,a,b)$ by
$\left(2-2\nu,a,\frac{a+b}{2}\right).$
The following is a squared additive version for log-convex functions. This
inequality will help prove some squared versions of certain means.
###### Theorem 2.13.
Let $f:[a,b]\to[0,\infty)$ be log-convex. Then for $0\leq\nu\leq 1$ and $N\geq
2$, we have
$\displaystyle f^{2}((1-\nu)a+\nu b)$ $\displaystyle+$ $\displaystyle
A_{1}^{2}(\nu)\Delta_{1}f^{2}(\nu;a,b)+\sum_{j=2}^{N}A_{j}(\nu)\Delta_{j}f^{2}(\nu;a,b)$
$\displaystyle\leq$ $\displaystyle\left((1-\nu)f(a)+\nu f(b)\right)^{2}.$
###### Proof.
We prove the result for $0\leq\nu\leq\frac{1}{2}.$ Since $f$ is log-convex, it
follows that $g=f^{2}$ is log-convex too, and hence is convex. Therefore,
Theorem 2.5 implies
$g((1-\nu)a+\nu
b)+\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}g(\nu;a,b)\leq(1-\nu)g(a)+\nu g(b),$
which implies, for $0\leq\nu\leq\frac{1}{2},$
$\displaystyle f^{2}((1-\nu)a+\nu b)$ $\displaystyle+$
$\displaystyle\nu^{2}\Delta_{1}f^{2}(\nu;a,b)+\sum_{j=2}^{N}A_{j}(\nu)\Delta_{j}f^{2}(\nu;a,b)$
(2.16) $\displaystyle\leq$ $\displaystyle\left((1-\nu)f(a)+\nu
f(b)\right)^{2}+H(\nu;a,b),$
where
$\displaystyle H(\nu;a,b)$ $\displaystyle=$ $\displaystyle(1-\nu)f^{2}(a)+\nu
f^{2}(b)+\nu^{2}\Delta_{1}f^{2}(\nu;a,b)-\nu\Delta_{1}f^{2}(\nu;a,b)$
$\displaystyle-\left((1-\nu)f(a)+\nu f(b)\right)^{2}$ $\displaystyle=$
$\displaystyle
2\nu(1-\nu)\left(f^{2}\left(\frac{a+b}{2}\right)-f(a)f(b)\right)$
$\displaystyle\leq$ $\displaystyle 0,$
where the last inequality follows from log-convexity of $f$. Since
$H(\nu;a,b)\leq 0$, it follows from (2.16) that
$\displaystyle f^{2}((1-\nu)a+\nu b)$ $\displaystyle+$
$\displaystyle\nu^{2}\Delta_{1}f^{2}(\nu;a,b)+\sum_{j=2}^{N}A_{j}(\nu)\Delta_{j}f^{2}(\nu;a,b)$
$\displaystyle\leq$ $\displaystyle\left((1-\nu)f(a)+\nu f(b)\right)^{2}.$
Similar computations imply the result for $\frac{1}{2}\leq\nu\leq 1.$ ∎
Reversed squared versions may be obtained in a similar way from Theorem
2.7 as follows.
###### Theorem 2.14.
Let $f:[a,b]\to[0,\infty)$ be log-convex and $N\in\mathbb{N}$. If
$0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle f^{2}((1-\nu)a+\nu b)$ $\displaystyle+$
$\displaystyle(1-\nu)^{2}\Delta_{1}f^{2}(\nu;a,b)+2\nu(1-\nu)\left(f(a)f(b)-f^{2}\left(\frac{a+b}{2}\right)\right)$
$\displaystyle\geq$ $\displaystyle\left((1-\nu)f(a)+\nu f(b)\right)^{2}$
$\displaystyle+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f^{2}\left(1-2\nu;\frac{a+b}{2},b\right).$
If $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle f^{2}((1-\nu)a+\nu b)$ $\displaystyle+$
$\displaystyle\nu^{2}\Delta_{1}f^{2}(\nu;a,b)+2\nu(1-\nu)\left(f(a)f(b)-f^{2}\left(\frac{a+b}{2}\right)\right)$
$\displaystyle\geq$ $\displaystyle\left((1-\nu)f(a)+\nu f(b)\right)^{2}$
$\displaystyle+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}f^{2}\left(2-2\nu;a,\frac{a+b}{2}\right).$
## 3\. Application
### 3.1. Refinements of means inequalities
In this section we present some interesting applications of the above
inequalities. The first result is the following refinement of Young’s
inequality.
###### Corollary 3.1.
Let $x,y>0,N\in\mathbb{N}$ and $0\leq\nu\leq 1.$ Then
$x\\#_{\nu}y+\sum_{j=1}^{N}A_{j}(\nu)\left(\sqrt[2^{j}]{y^{k_{j}(\nu)}x^{2^{j-1}-k_{j}(\nu)}}-\sqrt[2^{j}]{y^{k_{j}(\nu)+1}x^{2^{j-1}-k_{j}(\nu)-1}}\right)^{2}\leq
x\nabla_{\nu}y.$ (3.1)
###### Proof.
This follows from Theorem 2.5, on letting $f(t)=x^{1-t}y^{t},a=0,b=1.$ Then
$f$ is convex. Moreover, direct computations show that
$\Delta_{j}f(\nu;0,1)=\left(\sqrt[2^{j}]{y^{k_{j}(\nu)}x^{2^{j-1}-k_{j}(\nu)}}-\sqrt[2^{j}]{y^{k_{j}(\nu)+1}x^{2^{j-1}-k_{j}(\nu)-1}}\right)^{2}.$
∎
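A quick numerical test of (3.1) (equivalently, of Theorem 2.5 applied to $f(t)=x^{1-t}y^{t}$), reusing the helpers defined after (2.2):

```python
import random

# Assumes A_j and Delta_j from the sketch after (2.2) are in scope.
for _ in range(1000):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    nu, N = random.random(), random.randint(1, 8)
    f = lambda t: x ** (1 - t) * y ** t      # convex in t for x, y > 0
    lhs = f(nu) + sum(A_j(nu, j) * Delta_j(f, nu, j, 0, 1)
                      for j in range(1, N + 1))
    assert lhs <= (1 - nu) * x + nu * y + 1e-9   # x nabla_nu y
```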
The above corollary has been recently proved in [sabjmaa] as a refinement of
Young’s inequality. This inequality refines the corresponding refinements
appearing in [5] and [11], where the inequality was proved only for $N=1,2.$
On the other hand, letting $f(t)=x!_{t}y$, the weighted harmonic mean, we
obtain the following refinement of the arithmetic-harmonic mean inequality.
###### Corollary 3.2.
Let $x,y>0,N\in\mathbb{N}$ and $0\leq\nu\leq 1.$ Then
$x!_{\nu}y+\sum_{j=1}^{N}A_{j}(\nu)\left(x!_{\alpha_{j}(\nu)}y+x!_{\beta_{j}(\nu)}y-2x!_{\gamma_{j}(\nu)}y\right)\leq
x\nabla_{\nu}y,$ (3.2)
where
$\alpha_{j}(\nu)=\frac{[2^{j-1}\nu]}{2^{j-1}},\beta_{j}(\nu)=\frac{[2^{j-1}\nu]+1}{2^{j-1}}$
and $\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}.$
This inequality is a significant refinement of the corresponding inequality in
[12], where the inequality was proved only for $N=1.$
Now noting log-convexity of the function $t\mapsto x!_{t}y$ on $[0,1],$ and
applying Corollary 2.11, we get the following multiplicative refinement of the
geometric-harmonic mean inequality.
###### Corollary 3.3.
Let $x,y>0,N\in\mathbb{N}$ and $0\leq\nu\leq 1$, we have
$x!_{\nu}y\prod_{j=1}^{N}\left(\frac{(x!_{\alpha_{j}(\nu)}y)(x!_{\beta_{j}(\nu)}y)}{(x!_{\gamma_{j}(\nu)}y)^{2}}\right)^{A_{j}(\nu)}\leq
x\\#_{\nu}y,$
where
$\alpha_{j}(\nu)=\frac{[2^{j-1}\nu]}{2^{j-1}},\beta_{j}(\nu)=\frac{[2^{j-1}\nu]+1}{2^{j-1}}$
and $\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}.$
When $N=1$, Corollary 3.3 reduces to
$(x!_{\nu}y)\left(\frac{x\nabla y}{x\\#y}\right)^{2\nu}\leq
x\\#_{\nu}y,0\leq\nu\leq\frac{1}{2}$
and
$(x!_{\nu}y)\left(\frac{x\nabla y}{x\\#y}\right)^{2(1-\nu)}\leq
x\\#_{\nu}y,\frac{1}{2}\leq\nu\leq 1.$
The constant $\left(\frac{x\nabla y}{x\\#y}\right)^{2}$ appearing in these
inequalities is called the Kantorovich constant, and has appeared in recent
refinements of these mean inequalities. One can see [6] as a recent reference
treating some inequalities using this constant.
As for the squared version, applying Theorem 2.13 to the log-convex functions
$t\mapsto x\\#_{t}y$ and $t\mapsto x!_{t}y$ implies the following. The first
inequality refines the corresponding results in [3] and [11], while the other
inequality is new.
###### Corollary 3.4.
Let $x,y>0,0\leq\nu\leq 1,N\geq 2$ and
$\alpha_{j}(\nu)=\frac{k_{j}(\nu)}{2^{j-1}},\beta_{j}(\nu)=\frac{k_{j}(\nu)+1}{2^{j-1}}\;{\text{and}}\;\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}.$
Then
$\displaystyle\left(x\\#_{\nu}y\right)^{2}$ $\displaystyle+$ $\displaystyle
A_{1}^{2}(\nu)(x-y)^{2}+\sum_{j=2}^{N}A_{j}(\nu)\left(x^{1-\alpha_{j}(\nu)}y^{\alpha_{j}(\nu)}-x^{1-\beta_{j}(\nu)}y^{\beta_{j}(\nu)}\right)^{2}$
$\displaystyle\leq$ $\displaystyle(x\nabla_{\nu}y)^{2},$
and
$\displaystyle\left(x!_{\nu}y\right)^{2}$ $\displaystyle+$ $\displaystyle
2A_{1}^{2}(\nu)(x^{2}\nabla
y^{2}-(x!y)^{2})+\sum_{j=2}^{N}A_{j}(\nu)\left((x!_{\alpha_{j}(\nu)}y)^{2}+(x!_{\beta_{j}(\nu)}y)^{2}-2(x!_{\gamma_{j}(\nu)}y)^{2}\right)$
$\displaystyle\leq$ $\displaystyle(x\nabla_{\nu}y)^{2}.$
### 3.2. Reversed Version
Applying Theorem 2.7 to the function $f(t)=x\\#_{t}y$ implies the following
reversed version of Young’s inequality.
###### Corollary 3.5.
For $x,y>0$, let $f(t)=x\\#_{t}y$ and let $N\in\mathbb{N}$. If
$0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle x\\#_{\nu}y+(1-\nu)(\sqrt{x}-\sqrt{y})^{2}\geq
x\nabla_{\nu}y+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{1}{2},1\right).$
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle x\\#_{\nu}y+\nu(\sqrt{x}-\sqrt{y})^{2}\geq
x\nabla_{\nu}y+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}f\left(2-2\nu;0,\frac{1}{2}\right).$
These inequalities refine those in [5] and [11]. An arithmetic-harmonic
reversed version may be obtained by applying Theorem 2.7 to the function
$f(t)=x!_{t}y$ as follows.
###### Corollary 3.6.
For $x,y>0$, let $f(t)=x!_{t}y$ and let $N\in\mathbb{N}$. If
$0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle x!_{\nu}y+(1-\nu)\left(x+y-2x!y\right)\geq
x\nabla_{\nu}y+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{1}{2},1\right).$
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle x!_{\nu}y+\nu(x+y-2x!y)\geq
x\nabla_{\nu}y+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}f\left(2-2\nu;0,\frac{1}{2}\right).$
These inequalities refine those in [6].
Similarly, noting log-convexity of the function $f(t)=x!_{t}y$, we may apply
Corollary 2.12 to obtain reversed multiplicative version of the harmonic-
geometric mean inequality. We leave the application to the reader.
Following the same guideline, we may obtain reversed squared versions by
applying Theorem 2.14 to the functions $t\mapsto x\\#_{t}y$ and $t\mapsto
x!_{t}y.$ Observe that when $f(t)=x\\#_{t}y$ we have
$f(a)f(b)-f^{2}\left(\frac{a+b}{2}\right)=0.$ Therefore, applying Theorem 2.14
implies the following inequalities, which refine the corresponding
inequalities in [3] and [11].
###### Corollary 3.7.
Let $x,y>0$ and $N\geq 1.$ If $0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle\left(x\\#_{\nu}y\right)^{2}+(1-\nu)^{2}(x-y)^{2}\geq(x\nabla_{\nu}y)^{2}+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f^{2}\left(1-2\nu;\frac{1}{2},1\right).$
If $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle\left(x\\#_{\nu}y\right)^{2}+\nu^{2}(x-y)^{2}\geq(x\nabla_{\nu}y)^{2}+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}f^{2}\left(2-2\nu;0,\frac{1}{2}\right).$
Now letting $g(t)=x!_{t}y$ we obtain the following new inequalities for the
arithmetic-harmonic means.
###### Corollary 3.8.
Let $x,y>0$ and $N\geq 1.$ If $0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle\left(x!_{\nu}y\right)^{2}$ $\displaystyle+$ $\displaystyle
2(1-\nu)^{2}(x^{2}\nabla
y^{2}-(x!y)^{2})+2\nu(1-\nu)\left(xy-\left(\frac{2xy}{x+y}\right)^{2}\right)$
$\displaystyle\geq$
$\displaystyle(x\nabla_{\nu}y)^{2}+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}g^{2}\left(1-2\nu;\frac{1}{2},1\right).$
If $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle\left(x!_{\nu}y\right)^{2}$ $\displaystyle+$ $\displaystyle
2\nu^{2}(x^{2}\nabla
y^{2}-(x!y)^{2})+2\nu(1-\nu)\left(xy-\left(\frac{2xy}{x+y}\right)^{2}\right)$
$\displaystyle\geq$
$\displaystyle(x\nabla_{\nu}y)^{2}+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}g^{2}\left(2-2\nu;0,\frac{1}{2}\right).$
### 3.3. Some $L^{p}$ inequalities
Let $(X,\mathcal{M},\mu)$ be a measure space, and let $0<p<q<r.$ Then
$L^{p}\cap L^{r}\subset L^{q}$ and
$\|f\|_{q}\leq\|f\|_{p}^{\nu}\|f\|_{r}^{1-\nu},\;{\text{where}}\;f\in
L^{p}\cap L^{r}\;{\text{and}}\;\nu=\frac{q^{-1}-r^{-1}}{p^{-1}-r^{-1}}.$
This inequality can be modified using Corollary 2.11 and a reversed version
can be obtained using Corollary 2.12.
###### Proposition 3.9.
Let $(X,\mathcal{M},\mu)$ be a measure space, $0<p<q<r$ and $\nu$ be as above.
If $f\in L^{p}\cap L^{r}$ and $N\in\mathbb{N}$, then we have
$\|f\|_{q}\prod_{j=1}^{N}\left(\frac{\|f\|_{x_{j}^{-1}(\nu)}\|f\|_{y_{j}^{-1}(\nu)}}{\|f\|^{2}_{z_{j}^{-1}(\nu)}}\right)^{A_{j}(\nu)}\leq\|f\|_{p}^{\nu}\|f\|_{r}^{1-\nu}.$
###### Proof.
It is easy to check that the function $h(t)=\|f\|_{1/t}$ is log-convex on
$[r^{-1},p^{-1}]$. Then direct application of Corollary 2.11 implies the
result. ∎
In particular, when $N=1$, the above proposition implies
$\|f\|_{q}\leq\left\\{\begin{array}[]{cc}\|f\|_{r}^{1-2\nu}\|f\|^{2\nu}_{\frac{2pr}{p+r}},&0\leq\nu\leq\frac{1}{2}\\\
\|f\|_{p}^{2\nu-1}\|f\|^{2-2\nu}_{\frac{2pr}{p+r}},&\frac{1}{2}\leq\nu\leq
1\end{array}\right\\}\leq\|f\|_{p}^{\nu}\|f\|_{r}^{1-\nu}.$
The condition $0\leq\nu\leq\frac{1}{2}$ can be interpreted as
$\frac{2pr}{p+r}\leq q\leq r$, while $\frac{1}{2}\leq\nu\leq 1$ means $p\leq
q\leq\frac{2pr}{p+r}.$
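These $N=1$ bounds are easy to probe numerically. A hedged sketch using the counting measure on finitely many atoms (so that $\|f\|_{p}$ is a finite sum) checks the chain for $\frac{2pr}{p+r}\leq q\leq r$:

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_norm(v, p):
    # ||f||_p for the counting measure on the atoms carrying the values v
    return float((np.abs(v) ** p).sum() ** (1 / p))

p, r = 1.5, 6.0
m = 2 * p * r / (p + r)              # intermediate exponent 2pr/(p+r)
for _ in range(500):
    v = rng.uniform(0.1, 5.0, size=8)
    q = rng.uniform(m, r)            # 2pr/(p+r) <= q <= r, i.e. 0 <= nu <= 1/2
    nu = (1 / q - 1 / r) / (1 / p - 1 / r)
    mid = lp_norm(v, r) ** (1 - 2 * nu) * lp_norm(v, m) ** (2 * nu)
    assert lp_norm(v, q) <= mid + 1e-9                                  # refinement
    assert mid <= lp_norm(v, p) ** nu * lp_norm(v, r) ** (1 - nu) + 1e-9  # classical bound
```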
Moreover, a reversed version may be obtained using Corollary 2.12.
###### Proposition 3.10.
Let $(X,\mathcal{M},\mu)$ be a measure space, $0<p<q<r$ and
$\nu=\frac{q^{-1}-r^{-1}}{p^{-1}-r^{-1}}$. If $\frac{2pr}{p+r}\leq q\leq r$,
$f\in L^{p}\cap L^{r}$ and $N\in\mathbb{N}$, then
$\|f\|_{q}\geq\|f\|^{2-2\nu}_{\frac{2pr}{p+r}}\|f\|_{p}^{2\nu-1}\prod_{j=1}^{N}\left(\frac{\|f\|_{t_{j}^{-1}(\nu)}\|f\|_{u_{j}^{-1}(\nu)}}{\|f\|^{2}_{w_{j}^{-1}(\nu)}}\right)^{A_{j}(1-2\nu)}\geq\|f\|^{2-2\nu}_{\frac{2pr}{p+r}}\|f\|_{p}^{2\nu-1},$
where $t_{j},u_{j}$ and $w_{j}$ are obtained from $x_{j},y_{j}$ and $z_{j}$ by
replacing $(\nu,a,b)$ with $\left(1-2\nu,\frac{p+r}{2pr},p^{-1}\right).$ On
the other hand, if $p\leq q\leq\frac{2pr}{p+r},$ then
$\|f\|_{q}\geq\|f\|^{2\nu}_{\frac{2pr}{p+r}}\|f\|_{r}^{1-2\nu}\prod_{j=1}^{N}\left(\frac{\|f\|_{t_{j}^{-1}(\nu)}\|f\|_{u_{j}^{-1}(\nu)}}{\|f\|^{2}_{w_{j}^{-1}(\nu)}}\right)^{A_{j}(1-2\nu)}\geq\|f\|^{2\nu}_{\frac{2pr}{p+r}}\|f\|_{r}^{1-2\nu},$
where $t_{j},u_{j}$ and $w_{j}$ are obtained from $x_{j},y_{j}$ and $z_{j}$ by
replacing $(\nu,a,b)$ with $\left(2-2\nu,r^{-1},\frac{p+r}{2pr}\right).$
Propositions 3.9 and 3.10 have been obtained using log-convexity of the
function $h(t)=\|f\|_{t^{-1}}.$ In fact, noting log-convexity of the function
$h(t)=\|f\|_{t}^{t}$, we obtain the same results! This is due to the
equivalence of log-convexity of the functions $t\mapsto\|f\|_{t^{-1}}$ and
that of $\|f\|_{t}^{t}.$ We refer the reader to [9] where these relations
between the different log-convex function criteria have been discussed.
The celebrated three lines lemma of Hadamard states the following.
###### Lemma 3.11.
Let $\mathbb{D}=\\{z\in\mathbb{C}:0\leq\Re z\leq 1\\}$ and let
$\varphi:\mathbb{D}\to\mathbb{C}$ be continuous on $\mathbb{D}$ and analytic
in the interior of $\mathbb{D}$. Then the function $f:[0,1]\to\mathbb{R}$
defined by $f(x)=\sup_{y}|\varphi(x+iy)|$ is log-convex.
This lemma is an extremely useful tool in the theory of complex functions. In
particular, this lemma becomes handy in proving different interpolation
versions of bounded linear operators between $L^{p}$ spaces.
Log-convexity implied by Lemma 3.11 allows us to apply our refined and
reversed versions for log-convex functions. In the following proposition, we
present one term refinement and reverse.
###### Proposition 3.12.
Let $\mathbb{D}=\\{z\in\mathbb{C}:0\leq\Re z\leq 1\\}$ and let
$\varphi:\mathbb{D}\to\mathbb{C}$ be continuous on $\mathbb{D}$ and analytic
in the interior of $\mathbb{D}$. Then the function $f:[0,1]\to\mathbb{R}$
defined by $f(x)=\sup_{y}|\varphi(x+iy)|$ satisfies the following
$f(x)\leq\left\\{\begin{array}[]{cc}f^{1-2x}(0)f^{2x}\left(\frac{1}{2}\right),&0\leq
x\leq\frac{1}{2}\\\
f^{2x-1}(1)f^{2-2x}\left(\frac{1}{2}\right),&\frac{1}{2}\leq x\leq
1\end{array}\right.$
and
$f(x)\geq\left\\{\begin{array}[]{cc}f^{2x-1}(1)f^{2-2x}\left(\frac{1}{2}\right),&0\leq
x\leq\frac{1}{2}\\\ f^{1-2x}(0)f^{2x}\left(\frac{1}{2}\right),&\frac{1}{2}\leq
x\leq 1\end{array}\right..$
### 3.4. Operator versions
The following theorem provides a refinement of the well known Heinz inequality
and its reverse. The proof follows immediately noting convexity of the Heinz
means, see [1].
###### Theorem 3.13.
For $A,B\in\mathbb{M}_{n}^{+},X\in\mathbb{M}_{n},0\leq\nu\leq 1$ and any
unitarily invariant norm $\||\;\;\||$, let
$f(\nu)=\||A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\||.$
Then we have the following refinement of Heinz inequality
$\displaystyle\||A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\||+\sum_{j=1}^{N}A_{j}(\nu)\Delta_{j}f(\nu;0,1)\leq\||AX+XB\||.$
Moreover, if $0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle\||A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\||$ $\displaystyle+$
$\displaystyle 2(1-\nu)\left(\||AX+XB\||-\||\sqrt{A}X\sqrt{B}\||\right)$
$\displaystyle\geq$
$\displaystyle\||AX+XB\||+\sum_{j=1}^{N}A_{j}(1-2\nu)\Delta_{j}f\left(1-2\nu;\frac{1}{2},1\right).$
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle\||A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\||$ $\displaystyle+$
$\displaystyle 2\nu\left(\||AX+XB\||-\||\sqrt{A}X\sqrt{B}\||\right)$
$\displaystyle\geq$
$\displaystyle\||AX+XB\||+\sum_{j=1}^{N}A_{j}(2-2\nu)\Delta_{j}f\left(2-2\nu;0,\frac{1}{2}\right).$
In [9], it is shown that for $A,B\in\mathbb{M}_{n}^{+}$ and
$X\in\mathbb{M}_{n},$ the functions
$t\mapsto\||A^{t}XB^{1-t}\||,\quad t\mapsto\||A^{t}XB^{1-t}\||\;\||A^{1-t}XB^{t}\||,\quad t\mapsto{\rm{tr}}(A^{t}XB^{1-t}X^{*})$
are log-convex on $[0,1].$ Therefore, we may apply Corollaries 2.11 and 2.12
to obtain refinements and reversed versions for such functions.
For the $\|\;\;\|_{2}$ norm, we can prove log-convexity of the Heinz means,
which allows us to obtain further refinements of the Heinz inequality by
applying Corollaries 2.11 and 2.12.
###### Proposition 3.14.
Let $A,B\in\mathbb{M}_{n}^{+}$ and $X\in\mathbb{M}_{n}$, and define
$f(\nu)=\|A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\|_{2}.$ Then $f$ is log-convex
on $[0,1]$.
###### Proof.
Since $A,B\in\mathbb{M}_{n}^{+}$, there are diagonal matrices
$D_{1}:={\text{diag}}(\lambda_{i}),D_{2}:={\text{diag}}(\mu_{i})$ and
unitary matrices $U,V$ such that $\lambda_{i},\mu_{i}\geq 0,$
$A=UD_{1}U^{*}$ and $B=VD_{2}V^{*}.$ Letting $Y=U^{*}XV,$ we have
$A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}=U(\lambda_{i}^{\nu}y_{ij}\mu_{j}^{1-\nu}+\lambda_{i}^{1-\nu}y_{ij}\mu_{j}^{\nu})V^{*}.$
Since $\|\;\;\|_{2}$ is a unitarily invariant norm, we have
$\displaystyle f^{2}(\nu)$ $\displaystyle=$
$\displaystyle\|U(\lambda_{i}^{\nu}y_{ij}\mu_{j}^{1-\nu}+\lambda_{i}^{1-\nu}y_{ij}\mu_{j}^{\nu})V^{*}\|_{2}^{2}$
$\displaystyle=$
$\displaystyle\sum_{i,j}\left(\lambda_{i}^{\nu}\mu_{j}^{1-\nu}+\lambda_{i}^{1-\nu}\mu_{j}^{\nu}\right)^{2}|y_{ij}|^{2}.$
Notice that each summand is log-convex, being the square of a log-convex
function. This implies that $f^{2}$ is log-convex. Consequently, $f$ is log-
convex. ∎
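A hedged numerical illustration of Proposition 3.14, checking midpoint log-convexity (which, together with continuity, is equivalent to log-convexity) on random positive semidefinite matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def psd(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M @ M.conj().T                         # positive semidefinite

def frac_pow(A, t):
    # A^t for positive semidefinite A, via the spectral decomposition
    w, U = np.linalg.eigh(A)
    return (U * np.clip(w, 0.0, None) ** t) @ U.conj().T

def heinz_norm(A, B, X, t):
    # ||A^t X B^{1-t} + A^{1-t} X B^t||_2 (Frobenius / Hilbert-Schmidt norm)
    return np.linalg.norm(frac_pow(A, t) @ X @ frac_pow(B, 1 - t)
                          + frac_pow(A, 1 - t) @ X @ frac_pow(B, t))

A, B = psd(4), psd(4)
X = rng.standard_normal((4, 4))
for _ in range(200):
    s, t = rng.uniform(0.0, 1.0, size=2)
    mid = heinz_norm(A, B, X, (s + t) / 2)
    assert mid ** 2 <= heinz_norm(A, B, X, s) * heinz_norm(A, B, X, t) + 1e-8
```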
Letting $f(\nu)=\|A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\|_{2}$ and applying
Theorem 2.13 imply the following squared version of Heinz inequality.
###### Corollary 3.15.
Let $A,B\in\mathbb{M}_{n}^{+},X\in\mathbb{M}_{n},0\leq\nu\leq 1$ and $N\geq
2.$ Then
$\displaystyle\|A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\|_{2}^{2}+2A_{1}^{2}(\nu)\left(\|AX+XB\|_{2}^{2}-2\|A^{\frac{1}{2}}XB^{\frac{1}{2}}\|_{2}^{2}\right)$
$\displaystyle\hskip
8.5359pt+\sum_{j=2}^{N}A_{j}(\nu)\Delta_{j}f^{2}(\nu;0,1)\leq\|AX+XB\|_{2}^{2}.$
We leave the application of Corollaries 2.11 and 2.12 to the reader.
Further operator versions may be obtained using Lemma 1.1. The following
operator versions refine the corresponding results in [5] and [11].
###### Proposition 3.16.
Let $A,B\in\mathbb{M}_{n}^{++}$ and $0\leq\nu\leq 1.$ Then for
$\alpha_{j}(\nu)=\frac{k_{j}(\nu)}{2^{j-1}},\beta_{j}(\nu)=\frac{k_{j}(\nu)+1}{2^{j-1}},\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}$
and $N\in\mathbb{N}$, we have
$\displaystyle
A\\#_{\nu}B+\sum_{j=1}^{N}A_{j}(\nu)\left(A\\#_{\alpha_{j}(\nu)}B+A\\#_{\beta_{j}(\nu)}B-2A\\#_{\gamma_{j}(\nu)}B\right)\leq
A\nabla_{\nu}B.$
###### Proof.
In Corollary 3.1, let $x=1$, expand the summand and apply Lemma 1.1 with $y$
replaced by $X=A^{-\frac{1}{2}}BA^{-\frac{1}{2}}.$ Then the result follows
upon conjugating both sides with $A^{\frac{1}{2}}.$ ∎
In a similar way one may obtain reversed versions by applying Corollary 3.5.
This provides refinements of the reversed versions of [11]. The following is
an operator arithmetic-harmonic version, refining the corresponding results in
[12].
###### Proposition 3.17.
Let $A,B\in\mathbb{M}_{n}^{++}$ and $0\leq\nu\leq 1.$ Then for
$\alpha_{j}(\nu)=\frac{k_{j}(\nu)}{2^{j-1}},\beta_{j}(\nu)=\frac{k_{j}(\nu)+1}{2^{j-1}},\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}$
and $N\in\mathbb{N}$, we have
$\displaystyle
A!_{\nu}B+\sum_{j=1}^{N}A_{j}(\nu)\left(A!_{\alpha_{j}(\nu)}B+A!_{\beta_{j}(\nu)}B-2A!_{\gamma_{j}(\nu)}B\right)\leq
A\nabla_{\nu}B.$
The proof follows immediately on applying Lemma 1.1 together with Corollary
3.2. On the other hand, applying Corollary 3.6 implies the following
refinement of the corresponding inequalities in [6].
###### Proposition 3.18.
Let $A,B\in\mathbb{M}_{n}^{++}$ and $N\in\mathbb{N}$. If
$0\leq\nu\leq\frac{1}{2},$ we have
$\displaystyle A!_{\nu}B$ $\displaystyle+$ $\displaystyle(1-\nu)(A+B-2A!B)$
$\displaystyle\geq$ $\displaystyle
A\nabla_{\nu}B+\sum_{j=1}^{N}A_{j}(1-2\nu)\left(A!_{\alpha_{j}(\nu)}B+A!_{\beta_{j}(\nu)}B-2A!_{\gamma_{j}(\nu)}B\right),$
where
$\alpha_{j}(\nu)=\frac{1}{2}\left(1-\frac{k_{j}(1-2\nu)}{2^{j-1}}\right)+\frac{k_{j}(1-2\nu)}{2^{j-1}},$
$\beta_{j}(\nu)=\frac{1}{2}\left(1-\frac{k_{j}(1-2\nu)+1}{2^{j-1}}\right)+\frac{k_{j}(1-2\nu)+1}{2^{j-1}}$
and $\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}.$
On the other hand, if $\frac{1}{2}\leq\nu\leq 1,$ we have
$\displaystyle A!_{\nu}B$ $\displaystyle+$ $\displaystyle\nu(A+B-2A!B)$
$\displaystyle\geq$ $\displaystyle
A\nabla_{\nu}B+\sum_{j=1}^{N}A_{j}(2-2\nu)\left(A!_{\alpha_{j}(\nu)}B+A!_{\beta_{j}(\nu)}B-2A!_{\gamma_{j}(\nu)}B\right),$
where $\alpha_{j}(\nu)=\frac{k_{j}(2-2\nu)}{2^{j-1}},$
$\beta_{j}(\nu)=\frac{k_{j}(2-2\nu)+1}{2^{j-1}}$ and
$\gamma_{j}(\nu)=\frac{\alpha_{j}(\nu)+\beta_{j}(\nu)}{2}.$
The following is an interesting one-term multiplicative refinement of the
operator geometric-harmonic mean inequality.
###### Theorem 3.19.
Let $A,B\in\mathbb{M}_{n}^{++}$ and $0\leq\nu\leq 1.$ Then
$\displaystyle(A!_{\nu}B)\left(\frac{A^{-1}B+2I+B^{-1}A}{4}\right)^{r}\leq
A\\#_{\nu}B,$
where $r=\min\\{\nu,1-\nu\\}.$
###### Proof.
We prove the desired inequality for $0\leq\nu\leq\frac{1}{2}.$ In Corollary
3.3, let $N=1$ and $x=1$, to get
$(1!_{\nu}y)\left(\frac{1+y}{2\sqrt{y}}\right)^{2\nu}\leq 1\\#_{\nu}y,$ or
$\displaystyle\frac{1}{4^{\nu}}\left((1-\nu)+\nu
y^{-1}\right)^{-1}\left(y+2+y^{-1}\right)^{\nu}\leq y^{\nu}.$ (3.3)
Let $X=A^{-\frac{1}{2}}BA^{-\frac{1}{2}}$ and apply Lemma 1.1. The left hand
side of (3.3) becomes
$\displaystyle\frac{1}{4^{\nu}}\left((1-\nu)I+\nu
A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\right)^{-1}\left(A^{-\frac{1}{2}}BA^{-\frac{1}{2}}+2I+A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\right)^{\nu}$
(3.4) $\displaystyle=$
$\displaystyle\frac{1}{4^{\nu}}\left[A^{-\frac{1}{2}}(A!_{\nu}B)A^{-\frac{1}{2}}\right]\left[A^{\frac{1}{2}}\left(A^{-1}B+2I+B^{-1}A\right)^{\nu}A^{-\frac{1}{2}}\right]$
$\displaystyle=$ $\displaystyle
A^{-\frac{1}{2}}(A!_{\nu}B)\left(\frac{A^{-1}B+2I+B^{-1}A}{4}\right)^{\nu}A^{-\frac{1}{2}}.$
On the other hand, the right hand side of (3.3) is simply
$\left(A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\right)^{\nu}.$ This together with
(3.4) imply the desired inequality, upon conjugating both sides with
$A^{\frac{1}{2}}.$ This completes the proof. ∎
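Since Lemma 1.1 reduces Theorem 3.19 to the scalar inequality (3.3), the latter can be probed directly; a minimal hedged sketch for $0\leq\nu\leq\frac{1}{2}$:

```python
import random

# Numeric check of the scalar inequality (3.3); equality holds at nu = 1/2.
for _ in range(10000):
    y = random.uniform(0.01, 100.0)
    nu = random.uniform(0.0, 0.5)
    lhs = ((1 - nu) + nu / y) ** (-1) * ((y + 2 + 1 / y) / 4) ** nu
    assert lhs <= y ** nu * (1 + 1e-12)
```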
## References
* [1] R. Bhatia, Matrix analysis, Springer-Verlag, New York, 1997.
* [2] T. Furuta, J. Micic Hot and J. Pecaric, Mond-Pecaric Method in Operator Inequalities, Element, Zagreb, 2005.
* [3] O. Hirzallah and F. Kittaneh, Matrix Young inequalities for the Hilbert-Schmidt norm, Linear Algebra Appl. 308 (2000), 77–84.
* [4] F. Kittaneh, On the convexity of the Heinz mean, Integr. Equ. Oper. Theory 68 (2010), 519–527.
* [5] F. Kittaneh and Y. Manasrah, Improved Young and Heinz inequalities for matrices, J. Math. Anal. Appl. 36 (2010), 262–269.
* [6] W. Liao and J. Wu, Reverse arithmetic-harmonic mean and mixed mean operator inequalities, J. Inequal. Appl., 2015:215.
* [7] Y. Manasrah and F. Kittaneh, A generalization of two refined Young inequalities, Positivity 19 (2015), 757–768.
* [8] M. Sababheh, Convex functions and means of matrices, Math. Ineq. Appl., accepted.
* [9] M. Sababheh, Log and harmonically log-convex functions related to matrix norms, Operators and Matrices, in press.
* [10] M. Sababheh, Integral inequalities of the Heinz means as convex functions, J. Math. Ineq., 10 (2) (2016), 313–325.
* [11] J. Zhao and J. Wu, Operator inequalities involving improved Young and its reverse inequalities, J. Math. Anal. Appl. 421 (2015) 1779–1789.
* [12] H. Zuo, G. Shi and M. Fujii, Refined Young inequality with Kantorovich constant, J. Math. Inequal. 5(4) (2011), 551–556.
# Spin Hall Nano-Oscillator Empirical Electrical Model for Optimal On-chip
Detector Design
Rafaella Fiorelli Instituto de Microelectrónica de
Sevilla, CSIC and Universidad de Sevilla, Sevilla, 41092. Mona Rajabali
NanOsc AB, Kista, 16440 Sweden. Roberto Méndez-Romero Instituto de
Microelectrónica de Sevilla, CSIC and Universidad de Sevilla, Sevilla, 41092.
Akash Kumar Applied Spintronics Group, Department of Physics, University of
Gothenburg, 41296 Gothenburg, Sweden. Research Institute of Electrical
Communication and Center for Science and Innovation in Spintronics, Tohoku
University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 Japan. Artem Litvinenko
Applied Spintronics Group, Department of Physics, University of Gothenburg,
41296 Gothenburg, Sweden. Teresa Serrano-Gotarredona Instituto de
Microelectrónica de Sevilla, CSIC and Universidad de Sevilla, Sevilla, 41092.
Farshad Moradi Integrated Circuits and Electronics Lab (ICELab), Electrical
and Computer Engineering Department, Aarhus University, 8200 Aarhus N,
Denmark. Johan Åkerman Applied Spintronics Group, Department of Physics,
University of Gothenburg, 41296 Gothenburg, Sweden. Research Institute of
Electrical Communication and Center for Science and Innovation in Spintronics,
Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 Japan. Bernabé
Linares-Barranco Instituto de Microelectrónica de Sevilla, CSIC and
Universidad de Sevilla, Sevilla, 41092. Eduardo Peralías Instituto de
Microelectrónica de Sevilla, CSIC and Universidad de Sevilla, Sevilla, 41092.
###### Abstract
As nascent nonlinear oscillators, nano-constriction spin Hall nano-oscillators
(SHNOs) hold considerable promise for integration into more complicated
systems such as neural networks, magnetic field sensors, and radio frequency
(RF) signal classifiers; their tunable high-frequency operating regime, easy
synchronization, and CMOS compatibility can streamline this integration. To
implement SHNOs in any of these networks, the electrical features of a single
device must be known before the signal-detection CMOS circuitry is designed.
This study presents an empirical electrical model of the SHNO based on a
comprehensive characterization of the output impedance of a single SHNO and of
its available output power in the range of 2-10 GHz at various bias currents.
## I Introduction
As complementary metal-oxide-semiconductor (CMOS) technology is reaching its
physical limit, other technologies are about to thrive. In particular,
nonlinear spiking and oscillatory spintronic devices exhibit tremendous
potential in leading-edge areas such as emulating the spiking behaviors of
neurons Torrejon _et al._ (2017); Romera _et al._ (2018); Zahedinejad _et
al._ (2020); Houshang _et al._ (2022); Kumar _et al._ (2023a), RF signal
classification Ross _et al._ (2023), ultra-fast microwave spectral analysis
Litvinenko _et al._ (2022), and the possibility of implementing highly
accelerated neuromorphic computing systems Müller _et al._ (2022); González
_et al._ (2024). Among the family of spintronic microwave oscillators, nano-
constriction spin Hall nano-oscillators (SHNOs) are simple heavy
metal/ferromagnet bilayers (HM/FM) through which the pure spin current is
produced by passing DC bias current ($I_{B}$) and leads to a steady-state
precession, known as auto-oscillation Demidov _et al._ (2014); Chen _et al._
(2016); Behera _et al._ (2024). Considering their facile fabrication, broad
frequency tunability Zahedinejad _et al._ (2018), easy injection
lockingRajabali _et al._ (2023), robust mutual synchronization Kumar _et
al._ (2023b), and individual tunability Muralidhar _et al._ (2022); Khademi
_et al._ (2024), the SHNOs are particularly promising for various applications
from magnetic field sensors Xie, Cheung, and Fuchs (2023) to neural network
integration Sethi _et al._ (2023); Kumar _et al._ (2024); González _et al._
(2024).
When implementing an SHNO-based network, it is essential to ascertain the
oscillation status of the SHNO (e.g., firing/non-firing in case of a neural
network). However, due to the relatively low power and high noise of the
output oscillating signal Litvinenko _et al._ (2023), the CMOS detector and
the SHNO require a good impedance match Bendjeddou _et al._
(2023). Otherwise, a significant share of the available SHNO output power is
reflected, making it more challenging to detect the SHNO signal, as confirmed
in Fiorelli _et al._ (2023).
In the realm of SHNO output signal detection, fully integrated approaches not
only reduce detector size and power consumption but also are more adaptable
for the joint integration of SHNO and detectors; whether homogeneous or
heterogeneous. The selection of the optimal technology and detector’s
architecture is dependent on the SHNO’s electrical characteristics, which
include its working frequency, output power, noise, and output impedance
Fiorelli _et al._ (2023). Hence, an accurate electrical model of the SHNO is
mandatory. This model must include information about the SHNO: (i) output
power, (ii) noise levels, and (iii) output impedance. This information allows
one to know the SHNO signal-to-noise ratio (SNR) and therefore to deduce the
maximum acceptable noise level of the receiver so that the SHNO signal is not
masked by its noise. Moreover, an optimized design of the electrical circuitry
that senses the SHNO output signal requires information on the SHNO output
impedance to address an excellent impedance coupling between this detector and
the SHNO, minimizing the power losses.
It is known that the SHNO output power increases as its oscillation frequency
goes up, which in turn allows a quicker classification of its state Fulara
_et al._ (2019). But this increase in frequency must be accompanied by a
guarantee of observation of the oscillation by the integrated detector, whose
implementation is limited to its (i) maximum detection frequency and, (ii)
minimum practicable bandwidth (BW) associated with the SNR. In this study, we
demonstrate that excellent trade-offs in terms of SHNO output power,
detector feasibility, and performance can be achieved by selecting the
optimal oscillating frequency around 6 GHz. Furthermore, the empirical
electrical model of the SHNO is extracted and presented for the frequency
range of 2-10 GHz.
The SNR will not be notably favorable when the detector possesses a broad BW
Pozar (2011) (i.e., exceeding 100 MHz as noted later in Section III.2), which
is likely to be the case in fully integrated detectors. Hence, achieving the
desired signal matching and developing an accurate electrical AC SHNO model is
critical if we aim to leverage the full potential of the available SNR.
Although other types of nano-oscillators were modeled electrically Bendjeddou
_et al._ (2023), there is a notable absence of an AC SHNO model that
comprehensively describes its output impedance, power characteristics, and
noise equivalent output. For the first time, this paper introduces an AC
empirical electrical model for an SHNO derived from experimental measurements.
This AC evaluation holds immense significance since only considering the DC
range characterization fails to provide sufficient insights for an optimal
signal detector design, as in Fiorelli _et al._ (2023). This AC empirical
model is developed by the data captured from two sets of experiments that
measure the SHNO load output power, $P_{L}$, and the output impedance,
$Z_{SHNO}$.
Figure 1: (a) Fabricated SHNO with GSG CPW. (b) Magnified scheme of SHNO
geometry and the contact pads. The inset displays the detail of the
heterostructure on which SHNO is fabricated. (c) The SEM top-view image of a
single 180 nm wide SHNO.
This paper is outlined as follows. Section II presents fabrication details and
the experimental setup to measure the SHNO impedance and power. The
measurement results are displayed in Section III, whereas the empirical
electrical model is deployed in Section IV. Finally, Section V concludes our
approach to develop this empirical electrical model for SHNOs for an optimal
on-chip detector design.
## II Experimental part
The 180-nm wide SHNOs, used in the measurements to extract the empirical
electrical model, were fabricated on heterostructures consisting of
W(5nm)/Py(5nm)(Ni$_{80}$Fe$_{20}$)/Al$_{2}$O$_{3}$(4nm) Kumar _et al._ (2023b), Litvinenko _et
al._ (2023). The fabrication process involves DC/RF magnetron sputtering of
the stack at room temperature onto a high-resistance silicon substrate
(20$\times$20 mm$^{2}$) with a base pressure of less than 3$\times$10$^{-8}$ Torr. The
heterostructure is then patterned into 8$\times$12 $\mu$m$^{2}$ rectangles with
bow-tie shaped NCs using a Raith EBPG 5200 electron beam lithography (EBL)
system followed by Ar-ion etching. Optical lithography is used to define the
ground-signal-ground (GSG) coplanar waveguide (CPW), which is followed by a
deposition and liftoff process using a bi-layer of Cu(500nm)/Pt(20nm) as top
contacts (Fig. 1(a)). A magnified scheme of the SHNO is presented in Fig.
1(b), along with the SHNO layer arrangement details in the cross-sectional
view (inset). Furthermore, a top-view scanning electron microscopy (SEM) image
of the fabricated SHNO is seen in Fig. 1(c) [more details of fabrication can
be found in Kumar _et al._ (2022)].
Figure 2: Schematic and setup photo for Experiment 1 (a-b) and Experiment 2
(c-d).
### II.1 Impedance setup
Experiment 1 focuses on the impedance measurement, with the corresponding
setup scheme and photo presented in Fig. 2(a) and (b), respectively. This
measurement is conducted using an Agilent N5230A Vector Network Analyzer
(VNA). The SHNO is biased via the Mini-circuits ZX85-12G+ bias-tee. Its output
is connected to the VNA and evaluated between 2-10 GHz, under the values of
$I_{B}=\\{2.0,2.25,2.35,2.45\\}~{}mA$. To measure the impedance of a biased
device-under-test (DUT), all external electromagnetic excitations of the DUT
must be suppressed except the one injected into the DUT node whose impedance
is to be measured. Hence, this test is performed in the absence of an
external magnetic field. Furthermore, although the frequency of interest for
the model is around 6 GHz, the wide frequency range of [2,10] GHz was
evaluated to assess the device behavior, ensuring both a smooth response at
high frequencies and a tendency toward the expected values at the DC level.
The accuracy of the impedance measurement relies on the careful calibration of
the entire setup to remove the effect of cables C1, C2, bias-tee, and
picoprobe (Fig. 2(a)). We made use of the Cascade Microtech GSG 150
$\mu$m-pitch P/N 101-190 impedance standard substrate to carry out this
calibration, obtaining excellent results with a measured voltage standing wave
ratio lower than 1.1 in the [2,10] GHz range.
### II.2 Power and noise setup
Experiment 2 aims to find the maximum SHNO output power ($P_{meas\\_peak}$)
around 6 GHz, using a spectrum analyzer. Its setup and photo are given in Fig.
2(c) and (d), respectively. The GSG picoprobe (GGB Industries) conveys the
input $I_{B}$ and the output generated signal between the SHNO and a bias-tee
(MITEQ BT4000). The auto-oscillation is driven by $I_{B}$ running to the
sample through the bias-tee DC port. Then, the generated SHNO RF signal goes
to a low noise amplifier (LNA, B&Z BZ0218A) with a transducer power gain
$G_{T}=23~{}dB$ and eventually reaches the Spectrum Analyzer (SA, R&S FSV 40
GHz), where the spectra data is captured.
The optimum magnetic field $\overrightarrow{B}$ is obtained by preliminary
field scans at several values of $I_{B}$. At the chosen magnetic field, the
signal will then be recorded while increasing $I_{B}$ to reach a maximum power
value such that a further increase in $I_{B}$ does not substantially increase
the measured power, $P_{meas}$. Additionally, the second outcome of the
experiment pertains to the noise floor level, which will be useful in further
calculations. As a result, the sample is subjected to an external magnetic
field ($|B|=0.68~{}T$) at out-of-plane ($\theta=84^{\circ}$) and in-plane
($\varphi=22^{\circ}$) angles. The value $I_{B}=2.45~{}mA$ above which
$P_{meas}$ does not vary substantially is considered the one that generates
$P_{meas\\_peak}$.
## III Results and discussion
### III.1 Impedance results
Impedance measurement results are displayed in Fig. 3. In the RF/MW
frequencies, the real part of $Z_{SHNO}$, $Re(Z_{SHNO})$, decreases almost
linearly with frequency, reaching approximately 350 $\Omega$ at 6 GHz, and 250
$\Omega$ at 10 GHz (Fig. 3(a)). This result is promising since making the SHNO
work at higher frequencies implies a lower $Re(Z_{SHNO})$, simplifying the
detector design and allowing it to reach optimal matching with the detector.
In addition, Fig. 3(a) verifies that $Re(Z_{{SHNO}})$ increases as $I_{B}$
increases.
Figure 3: Real and imaginary part of $Z_{SHNO}$ for four $I_{B}$ (a,b), and
four SHNO samples at $I_{B}=2.45~{}mA$ (c,d).
Fig. 3(b) depicts the imaginary part of $Z_{SHNO}$, $Im(Z_{SHNO})$, which
diminishes as $I_{B}$ decreases for the whole frequency range. In particular,
we observe a capacitive behavior over the entire frequency span under study.
Finally, we measure the values of $Re(Z_{SHNO})$ and $Im(Z_{SHNO})$ for four
identical SHNO samples to evaluate their dispersion. The findings, shown in Fig.
3(c) and (d), yield a variation of 5% and 2.5% around 6 GHz, respectively.
### III.2 Power and noise results
The captured spectrum corresponding to $I_{B}=2.45~{}mA$ is illustrated in
Fig. 4, in the range of [6.0,6.5] GHz. Using a BW of 1 MHz, the maximum
measured power is $P_{meas\\_peak}=-55~{}dBm$ at 6.25 GHz. This plot also
yields noise floor power in the order of
$P_{meas\\_noise\\_floor}\approx-80~{}dBm$. Then, an estimation of SHNO output
signal power will be about $P_{L}\approx P_{meas\\_peak}-G_{T}=-79~{}dBm$, and
an estimation of the power spectral density (PSD) of the signal at the SHNO
output is shown in the inset of Fig. 4 with a noise floor power density of
$P_{noise\\_floor}\approx
P_{meas\\_noise\\_floor}-10~{}log_{10}(BW)-G_{T}=-163~{}dBm/Hz$.
Figure 4: SHNO output spectrum for $I_{B}=2.45~{}mA$ with BW=1 MHz,
considering noise levels associated with a detector with BW = [1, 100, 500]
MHz. Inset: PSD for $I_{B}$ = [$2.0~{}mA$ (blue), $2.25~{}mA$ (orange),
$2.35~{}mA$ (yellow), $2.45~{}mA$ (purple)].
The feasibility of the detection strongly depends on the detector BW, which is
reflected in the SNR; for instance,
SNR=$\\{25dB_{@1MHz},5dB_{@100MHz},-2dB_{@500MHz}\\}$. Given that the BW is
anticipated to be no less than 100 MHz in a monolithically integrated CMOS
detector, we expect a low peak PSD level. This low SNR necessitates maximizing
the available power at the detector input, highlighting the critical role of
impedance matching.
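For reference, these SNR figures follow directly from the estimated output power and noise floor; a short hedged sketch using the estimates $P_{L}\approx-79~dBm$ and $P_{noise\\_floor}\approx-163~dBm/Hz$ from above, whose results agree with the quoted $\{25, 5, -2\}$ dB to within about 1 dB of rounding:

```python
import math

P_L = -79.0           # estimated SHNO output power at the load, dBm
noise_psd = -163.0    # estimated output noise floor density, dBm/Hz
for bw in (1e6, 100e6, 500e6):                 # candidate detector bandwidths
    snr = P_L - (noise_psd + 10 * math.log10(bw))
    print(f"BW = {bw/1e6:5.0f} MHz -> SNR ~ {snr:4.0f} dB")
```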
## IV Electrical model
The information collected with Experiment 1 and Experiment 2 allows us to
develop the SHNO Thévenin model, as seen in Fig. 5(a). It comprises the
Thévenin voltage, $V_{SHNO}$, the equivalent output noise voltage $V_{n}$, and
the equivalent output impedance $Z_{SHNO}$. The equations that describe this
model. presented next, are derived from classical circuits’ theory Alexander
and Sadiku (2016). The amplitude of the Thévenin’s voltage, $|V_{SHNO}|$ and
$Z_{SHNO}$, are related to the estimated output peak power,
$P_{L}(\approx-79dBm)$, over the load impedance $Z_{L}$, by the following
expression:
$\left|V_{SHNO}\right|=\left|Z_{SHNO}+Z_{L}\right|\sqrt{\frac{2P_{L}}{Re(Z_{L})}}$
(1)
where $Z_{L}$ is the impedance seen at the output of the SHNO of the
experimental setup shown in Fig. 2(c), ideally, $Z_{L}=50\Omega$.
Figure 5: (a) SHNO empirical electrical model. (b) Real and (c) imaginary part
of $Z_{SHNO}$ measured at $I_{B}=2.45mA$ (red line) and
$Z_{SHNO}(\omega)^{model}$ (black).
The proposed impedance model is presented in the inset of Fig. 5(a), being
consistent with the device impedance measurements in DC. It comprises a
resistor $R_{SHNO}$ in parallel with a capacitor $C_{SHNO}$, where
$Z_{SHNO}(\omega)^{model}=\frac{1}{1/R_{SHNO}+j\omega C_{SHNO}}$ (2)
The values of $R_{SHNO}$ and $C_{SHNO}$ are listed in the table embedded in
Fig. 5(a), which are valid over the whole frequency range of [2,10] GHz (for
$I_{B}=2.45~{}mA$). This is reflected in Fig. 5(b-c), where the real and
imaginary parts of $Z_{SHNO}$ are correctly described over the whole frequency
span with the black curves of $Z_{SHNO}(\omega)^{model}$.
The root-mean-square noise voltage source, $v_{n}$, is obtained from (1) by
replacing $2P_{L}$ with the estimated noise floor power density,
$P_{{noise\\_floor}}(\approx-163dBm/Hz)$.
Finally, to complete the empirical model, we introduce the available power of
the SHNO, $P_{av,SHNO}$. This is the maximum power the SHNO can deliver to the
load, attained when the load is conjugately matched to the SHNO output
impedance. It does not depend on the load and gives the designer an upper
bound on the power that could be delivered to the load, although this bound is
never reached in practice. This expression is,
$P_{av,SHNO}=\frac{|V_{SHNO}|^{2}}{8Re(Z_{SHNO})}$ (3)
The numerical values of the empirical electrical model are listed in the table
of Fig. 5(a). They have been obtained from the experimental quantities
presented in subsections IIIA and IIIB and the expressions (1)-(3).
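The model equations are straightforward to exercise numerically. In the hedged sketch below, $R_{SHNO}$ and $C_{SHNO}$ are illustrative placeholders chosen so that $Re(Z_{SHNO})\approx 350~\Omega$ at 6 GHz (they are not the table values of Fig. 5(a)), while $P_{L}$ and $P_{noise\\_floor}$ are the Section III estimates.

```python
import math

# Illustrative placeholders, NOT the paper's table values.
R_SHNO, C_SHNO = 400.0, 25e-15        # ohm, farad -> Re(Z) ~ 350 ohm at 6 GHz
Z_L = 50.0                             # load impedance of the setup
P_L = 1e-3 * 10 ** (-79.0 / 10)        # estimated output power, W
P_n = 1e-3 * 10 ** (-163.0 / 10)       # estimated noise floor PSD, W/Hz

w = 2 * math.pi * 6e9
Z_SHNO = 1 / (1 / R_SHNO + 1j * w * C_SHNO)           # eq. (2)
V = abs(Z_SHNO + Z_L) * math.sqrt(2 * P_L / Z_L)      # eq. (1): |V_SHNO|
v_n = abs(Z_SHNO + Z_L) * math.sqrt(P_n / Z_L)        # eq. (1), 2*P_L -> noise PSD
P_av = V ** 2 / (8 * Z_SHNO.real)                     # eq. (3)
print(f"|V_SHNO| ~ {V*1e3:.2f} mV, v_n ~ {v_n*1e9:.1f} nV/sqrt(Hz), "
      f"P_av ~ {10*math.log10(P_av/1e-3):.1f} dBm")
```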
To conclude this section, we provide a practical example of how critical it is to have a good SHNO electrical model when designing an associated signal detector with input impedance $Z_{det}$. Assuming the SHNO device interfaces with a noiseless detector featuring a practical BW of several hundred MHz, the detector output is anticipated to exhibit a maximum SNR of around 3 dB (refer to Fig. 4).
Figure 6: SNR at the input of the detector: (a) versus the real parts of
$Z_{det}$ and $Z_{SHNO}$ impedances with opposite imaginary parts, and (b) the
imaginary part of $Z_{SHNO}+Z_{det}$ for different $Re(Z_{det})$ at
$R_{SHNO}=350\Omega$.
Owing to the high value of $R_{SHNO}$ ($\approx 350~\Omega$ at 6 GHz), the main challenge in achieving the matching condition lies in equalizing $Re(Z_{det})$ to $R_{SHNO}$. This is particularly troublesome for on-chip detectors intended for the joint integration of multiple units. To illustrate this challenge, Fiorelli _et al._ (2023) reported the difficulty of designing on-chip detectors in CMOS technologies at 4.7 GHz with $Re(Z_{det})$ above 100 $\Omega$.
Presuming the input impedance of the detector, $Z_{det}$, can be complex, and considering the maximum power-transfer theorem, achieving maximum power transfer from the SHNO to the detector requires $Z_{det}=Z_{SHNO}^{*}$. This is especially desirable given the very low value of $P_{av,SHNO}$.
To elaborate on the effect of the mismatch between the SHNO and the detector on the SNR of the system, with a maximum value of SNR = +3 dB, one can initially consider a partial conjugation in which only the imaginary components of the impedances are matched, i.e., $Im(Z_{det})=-Im(Z_{SHNO})$. When $Re(Z_{det})$ and $Re(Z_{SHNO})$ depart from equality, the SNR falls below 0 dB (see Fig. 6(a)). For example, at $Re(Z_{SHNO})=350~\Omega$ (red dashed line), for $Re(Z_{det})<100~\Omega$ the SNR is below 1.5 dB, eliminating most of the margin that guarantees the detection of the oscillation signal.
Let us now consider the case where the net imaginary component, $\rho=|Im(Z_{SHNO}+Z_{det})|$, departs from the ideal condition ($\rho=0$). For $\rho=[0,300]~\Omega$ and $Re(Z_{det})=[25,350]~\Omega$ we obtain the plot shown in Fig. 6(b). For instance, when $\rho>100~\Omega$ and $Re(Z_{det})<200~\Omega$, the deviation from matching conditions causes the SNR to drop below 0 dB, making it impossible to detect the oscillation signal. In other words, to ensure signal detection, it is crucial to design the detector so as to minimize the net imaginary component, ideally approaching $\rho=0$, while simultaneously optimizing $Re(Z_{det})$ to closely match the oscillator resistance. By establishing these key criteria, we propose a practical solution for on-chip detection of the SHNO signal, offering insights for future implementations of this nonlinear oscillator in complex networks.
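The sensitivity to mismatch can be sketched with the maximum power-transfer theorem alone: for a fixed detector-referred noise floor, the SNR penalty equals the mismatch loss of the power delivered to $Z_{det}$ relative to $P_{av,SHNO}$. This is a crude sketch (the full SNR surfaces are those of Fig. 6); the impedance values below reuse the parallel-RC model of the previous section, including its hypothetical $C_{SHNO}$.

```python
import math

def mismatch_loss_dB(Z_s, Z_d):
    # Power delivered to Z_d relative to the available power of a source
    # with output impedance Z_s; 0 dB at the conjugate match Z_d = Z_s*
    return 10 * math.log10(4 * Z_s.real * Z_d.real / abs(Z_s + Z_d) ** 2)

Z_SHNO = 344 - 45j  # ohm, from the parallel-RC model above (C hypothetical)

# Imaginary parts conjugately matched, real parts mismatched, cf. Fig. 6(a)
for R_det in (50, 100, 200, 344):
    print(R_det, round(mismatch_loss_dB(Z_SHNO, complex(R_det, 45)), 2))

# Residual net reactance rho = |Im(Z_SHNO + Z_det)| at Re(Z_det) = 100 ohm
for rho in (0, 100, 300):
    print(rho, round(mismatch_loss_dB(Z_SHNO, complex(100, rho - Z_SHNO.imag)), 2))
```

With a 3 dB budget, a real-part mismatch down to $Re(Z_{det})=100~\Omega$ alone already costs about 1.6 dB in this sketch, consistent with the margin erosion discussed above.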
## V Conclusions
This paper presents an empirical electrical model of the SHNO operating in the 6-GHz range, drawn from a comprehensive study of the SHNO output impedance and of the output power and noise levels observed at the SHNO signal detector. From the results of this study, especially the high $Re(Z_{SHNO})$ values and the non-negligible capacitive effect, it is clear that an empirical electrical model must be provided to the designer of the fully integrated detector to make the discrimination of the SHNO operating state feasible.
## Acknowledgement
This work was supported in part by the Horizon 2020 Research and Innovation
Program No. 899559 “SpinAge”, DOI 10.3030/899559.
## References
* Torrejon _et al._ (2017) J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, and J. Grollier, Nature 547, 428 (2017).
* Romera _et al._ (2018) M. Romera, P. Talatchian, S. Tsunegi, F. A. Araujo, V. Cros, P. Bortolotti, J. Trastoy, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. Ernoult, D. Vodenicarevic, T. Hirtzlin, N. Locatelli, D. Querlioz, and J. Grollier, Nature 563, 230 (2018).
* Zahedinejad _et al._ (2020) M. Zahedinejad, A. A. Awad, S. Muralidhar, R. Khymyn, H. Fulara, H. Mazraati, M. Dvornik, and J. Åkerman, Nat. Nano. 15, 47 (2020).
* Houshang _et al._ (2022) A. Houshang, M. Zahedinejad, S. Muralidhar, J. Chȩciński, R. Khymyn, M. Rajabali, H. Fulara, A. A. Awad, M. Dvornik, and J. Åkerman, Phys. Rev. Appl. 17, 014003 (2022).
* Kumar _et al._ (2023a) A. Kumar, A. Litvinenko, N. Behera, A. A. Awad, R. Khymyn, and J. Åkerman, arXiv preprint arXiv:2312.09656 (2023a).
* Ross _et al._ (2023) A. Ross, N. Leroux, A. De Riz, D. Marković, D. Sanz-Hernández, J. Trastoy, P. Bortolotti, D. Querlioz, L. Martins, L. Benetti, _et al._ , Nature Nanotechnology 18, 1273 (2023).
* Litvinenko _et al._ (2022) A. Litvinenko, A. Sidi El Valli, V. Iurchuk, S. Louis, V. Tyberkevych, B. Dieny, A. N. Slavin, and U. Ebels, Nano Lett. 22, 1874 (2022).
* Müller _et al._ (2022) E. Müller, E. Arnold, O. Breitwieser, M. Czierlinski, A. Emmel, J. Kaiser, C. Mauch, S. Schmitt, P. Spilger, R. Stock, Y. Stradmann, J. Weis, A. Baumbach, S. Billaudelle, B. Cramer, F. Ebert, J. Göltz, J. Ilmberger, V. Karasenko, M. Kleider, A. Leibfried, C. Pehle, and J. Schemmel, Frontiers in Neuroscience 16 (2022), 10.3389/fnins.2022.884128.
* González _et al._ (2024) V. H. González, A. Litvinenko, A. Kumar, R. Khymyn, and J. Åkerman, arXiv preprint arXiv:2403.13564 (2024).
* Demidov _et al._ (2014) V. Demidov, S. Urazhdin, A. Zholud, A. Sadovnikov, and S. Demokritov, Appl. Phys. Lett. 105, 172410 (2014).
* Chen _et al._ (2016) T. Chen, R. K. Dumas, A. Eklund, P. K. Muduli, A. Houshang, A. A. Awad, P. Dürrenfeld, B. G. Malm, A. Rusu, and J. Åkerman, Proceedings of the IEEE 104, 1919 (2016).
* Behera _et al._ (2024) N. Behera, A. K. Chaurasiya, V. H. González, A. Litvinenko, L. Bainsla, A. Kumar, R. Khymyn, A. A. Awad, H. Fulara, and J. Åkerman, Advanced Materials 36, 2305002 (2024).
* Zahedinejad _et al._ (2018) M. Zahedinejad, H. Mazraati, H. Fulara, J. Yue, S. Jiang, A. A. Awad, and J. Åkerman, Appl. Phys. Lett. 112, 132404 (2018).
* Rajabali _et al._ (2023) M. Rajabali, R. Ovcharov, R. Khymyn, H. Fulara, A. Kumar, A. Litvinenko, M. Zahedinejad, A. Houshang, A. A. Awad, and J. Åkerman, Phys. Rev. Appl. 19, 034070 (2023).
* Kumar _et al._ (2023b) A. Kumar, H. Fulara, R. Khymyn, A. Litvinenko, M. Zahedinejad, M. Rajabali, X. Zhao, N. Behera, A. Houshang, A. A. Awad, and J. Åkerman, Nano Lett. 23, 6720 (2023b).
* Muralidhar _et al._ (2022) S. Muralidhar, A. Houshang, A. Alemán, R. Khymyn, A. A. Awad, and J. Åkerman, Appl. Phys. Lett. 120, 262401 (2022).
* Khademi _et al._ (2024) M. Khademi, A. Kumar, M. Rajabali, S. P. Dash, and J. Åkerman, IEEE Electron Device Lett. 45, 268 (2024).
* Xie, Cheung, and Fuchs (2023) Y. Xie, H. F. H. Cheung, and G. D. Fuchs, arXiv preprint arXiv:2303.02478 (2023), https://doi.org/10.48550/arXiv.2303.02478.
* Sethi _et al._ (2023) P. Sethi, D. Sanz-Hernández, F. Godel, S. Krishnia, F. Ajejas, A. Mizrahi, V. Cros, D. Marković, and J. Grollier, Physical Review Applied 19, 064018 (2023).
* Kumar _et al._ (2024) A. Kumar, A. K. Chaurasiya, V. H. González, N. Behera, R. Khymyn, A. A. Awad, and J. Åkerman, arXiv:2402.00586 (2024).
* Litvinenko _et al._ (2023) A. Litvinenko, A. Kumar, M. Rajabali, A. A. Awad, R. Khymyn, and J. Åkerman, Appl. Phys. Lett. 122 (2023).
* Bendjeddou _et al._ (2023) I. Bendjeddou, M. J. Garcia, A. S. el Valli, A. Litvinenko, V. Cros, U. Ebels, A. Jenkins, R. Ferreira, R. Dutra, D. Morche, E. Pistono, S. Bourdel, Y. Le Guennec, and F. Podevin, IEEE Transactions on Microwave Theory and Techniques 71, 2771 (2023).
* Fiorelli _et al._ (2023) R. Fiorelli, E. Peralías, R. Méndez-Romero, M. Rajabali, A. Kumar, M. Zahedinejad, J. Åkerman, F. Moradi, T. Serrano-Gotarredona, and B. Linares-Barranco, Electronics 12, 230 (2023).
* Fulara _et al._ (2019) H. Fulara, M. Zahedinejad, R. Khymyn, A. Awad, S. Muralidhar, M. Dvornik, and J. Åkerman, Science advances 5, eaax8467 (2019).
* Pozar (2011) D. M. Pozar, _Microwave Engineering_ (Wiley, USA, 2011).
* Kumar _et al._ (2022) A. Kumar, M. Rajabali, V. H. González, M. Zahedinejad, A. Houshang, and J. Åkerman, Nanoscale 14, 1432 (2022).
* Alexander and Sadiku (2016) C. Alexander and M. Sadiku, _Fundamentals of electric circuits, 6th Edition_ (McGraw Hill, 2016).
†These authors contributed equally to this work.
# Contactless Interfacial Rheology: Probing Shear at Liquid-Liquid Interfaces
without an Interfacial Geometry via Fluorescence Microscopy
Iain Muntz SUPA School of Physics and Astronomy,
The University of Edinburgh, Edinburgh,
EH9 3FD, Scotland, United Kingdom Department of Bionanoscience, Kavli
Institute of Nanoscience Delft,
Delft University of Technology, Van der Maasweg 9,
2629 HZ Delft, The Netherlands James A. Richards SUPA School of Physics and
Astronomy,
The University of Edinburgh, Edinburgh,
EH9 3FD, Scotland, United Kingdom Edinburgh Complex Fluids Partnership, The
University of Edinburgh,
Edinburgh EH9 3FD, United Kingdom Sam Brown SUPA School of Physics and
Astronomy,
The University of Edinburgh, Edinburgh,
EH9 3FD, Scotland, United Kingdom Andrew B. Schofield SUPA School of Physics
and Astronomy,
The University of Edinburgh, Edinburgh,
EH9 3FD, Scotland, United Kingdom Marcel Rey SUPA School of Physics and
Astronomy,
The University of Edinburgh, Edinburgh,
EH9 3FD, Scotland, United Kingdom Department of Physics,
University of Gothenburg, Gothenburg, Sweden
Job H. J. Thijssen<EMAIL_ADDRESS>SUPA School of Physics and
Astronomy,
The University of Edinburgh, Edinburgh,
EH9 3FD, Scotland, United Kingdom
###### Abstract
Interfacial rheology is important for understanding properties such as
Pickering emulsion or foam stability. Currently, the response is measured using a probe directly attached to the interface. This can disturb the interface and is coupled to flow in the bulk phase, limiting its sensitivity.
We have developed a contactless interfacial method to perform interfacial
shear rheology on liquid/liquid interfaces with no tool attached directly to
the interface. This is achieved by shearing one of the liquid phases and
measuring the interfacial response via confocal microscopy. Using this method
we have measured steady shear material parameters such as interfacial elastic
moduli for interfaces with solid-like behaviour and interfacial viscosities
for fluid-like interfaces. The accuracy of this method has been verified
relative to a double-wall ring geometry. Moreover, using our contactless
method we are able to measure lower interfacial viscosities than those that
have previously been reported using a double-wall ring geometry. A further
advantage is the simultaneous combination of macroscopic rheological analysis
with microscopic structural analysis. Our analysis directly visualizes how the
interfacial response is strongly correlated to the particle surface coverage
and their interfacial assembly. Furthermore, we capture the evolution and
irreversible changes in the particle assembly that correspond with the
rheological response to steady shear.
## I Introduction
Interfacial rheometry is essential when characterising systems with large
interfacial area, such as emulsions or foams [1, 2, 3]. These systems are
ubiquitous in industries such as pharmaceuticals, cosmetics and foodstuffs [4,
5, 6, 7, 8, 9, 10]. In order to probe the rheological properties, one can use
shear rheology [11, 12, 13, 14, 15], dilational rheology [16, 17, 18, 19], or
simultaneously image the interface as shear is applied to connect the
rheological properties to the interfacial microstructure [20, 21].
Previous work on interfacial shear rheology has used probes which directly
attach to an oil-water or air-water interface, such as the magnetic rod
interfacial stress rheometer [22, 23, 24], or the double-wall ring (DWR)
geometry attached to a rotational rheometer [11, 20]. These experimental
setups are both based on the maximisation of the ratio of the surface force to
the sub-phase drag, the Boussinesq number [1, 25]:
$\mathrm{Bo}=\frac{\eta^{s}}{\eta l},$ (1)
where $\eta^{s}$ is the surface viscosity, $\eta$ is the sub-phase viscosity
and $l$ is a characteristic length scale roughly equal to the ratio of contact
area to contact perimeter. In order to accurately measure the surface
properties without unintentionally probing the sub-phase, this ratio must be
maximised for the surface to contribute at least an order of magnitude more
than the bulk. Of the two setups mentioned, the magnetic rod has the larger
Bo, while both have a Bo an order of magnitude larger than that of a rotating
disk, due to a much smaller $l$ [26]. This maximisation of Bo can be
considered as optimising the interface-to-bulk signal-to-noise ratio. Even though the magnetic rod setup has higher sensitivity, the DWR has the advantages of using a conventional rotational rheometer combined with a larger dynamic range [11].
In our work, we take a different approach which makes consideration of Bo less
tangible as we have no contact area or contact perimeter. Rather than affixing
a probe directly to the interface, we shear the upper phase, indirectly
deforming the interface, and measure the response using confocal microscopy: a
fundamentally different approach.
We investigate the efficacy of this contactless technique by studying a jammed core-shell PNIPAM-SiO2–laden interface labelled with tracer particles, which we compare directly to DWR measurements. We then
demonstrate the advantages of this technique by looking at a weakly
interacting system of interfacially adsorbed colloidal particles. This system
has been studied previously using direct probe techniques [14, 27], and
considering the interparticle interactions [28].
Our technique has two main advantages: (i) the liquid-liquid interface we
probe is not disturbed by a large probe immersed therein, and (ii) this setup
models general applications of these large interfacial area systems, where
interfacial shear is applied indirectly via the continuous phase. A clear
example of this second point is in the application of skin creams, where the
continuous phase is sheared, which indirectly deforms the large area of
interface of the dispersed phase. Notably, the equipment required to perform
these measurements is relatively common. While we use confocal microscopy
coupled to a stress-controlled rheometer, reflection or fluorescence
microscopy and a fixed-rate motor should suffice. We show that our technique
can measure surfaces with lower viscosities than have been measured before
using a DWR geometry, due to the inherent sensitivity of the technique arising
from the absence of direct sub-phase drag. Finally, our setup lends itself to
simultaneous structural analysis, which we show is key to understanding the
rheological properties of a particle-laden interface.
## II Materials and Methods
### II.1 Materials
All chemicals were obtained from commercial sources and used as received if
not stated otherwise. N,N’-Methylenebis(acrylamide) (BIS; 99 $\%$, Sigma
Aldrich), ethanol (EtOH, 99.9 $\%$, Sigma Aldrich), ammonium persulfate (APS;
98 $\%$ Sigma Aldrich), tetraethyl orthosilicate (TEOS; 98 $\%$, Sigma
Aldrich), ammonium hydroxide solution (28-30 $\%$ NH3 basis, Sigma Aldrich),
(3-(trimethoxysilyl)propyl methacrylate (MPS; 98 $\%$, Sigma Aldrich) and
isopropyl alcohol (IPA, $>99.8$ $\%$, Sigma Aldrich), were used as received.
N-Isopropylacrylamide (NIPAM; 97 $\%$, Sigma Aldrich) was purified by
recrystallization from hexane (95 $\%$, Sigma Aldrich). Water was distilled
and deionized ($18\text{\,}\mathrm{M\SIUnitSymbolOhm}\text{\,}\mathrm{cm}$)
and n-dodecane (Acros organics, 99% pure) was filtered three times through a
column of alumina (Sigma-Aldrich, activated) to remove polar impurities
following a standard procedure [29]. Red fluorescent carboxyl-functionalized
polystyrene (PS) particles ($2\text{\,}\mathrm{\SIUnitSymbolMicro m}$
diameter, Thermo Fisher) were cleaned twice via centrifugation and
redispersion in water/ethanol (1:1).
### II.2 Synthesis and Characterisation
#### II.2.1 PNIPAM-SiO2 core-shell particles
Poly(N-isopropylacrylamide)(PNIPAM)-SiO2 core-shell particles were obtained by
growing a PNIPAM shell onto the silica cores via a batch surfactant-free
precipitation polymerization as described in previous work [30]. First,
colloidal silica particles used as cores with a diameter of
$160(10)\text{\,}\mathrm{nm}$ were prepared according to a modified Stöber
process [31]. In a round-bottom flask, 250 mL EtOH, 12.5 mL deionised water
and 25 mL NH3 (aq) were stirred together. 18.75 mL of TEOS was stirred in 75
mL EtOH and both solutions were heated to
$50\text{\,}\mathrm{\SIUnitSymbolCelsius}$ and equilibrated for
$30\text{\,}\mathrm{min}$. Next, the TEOS solution was quickly added to the
first mixture under heavy stirring. We let the reaction proceed for
$2\text{\,}\mathrm{d}$ at $50\text{\,}\mathrm{\SIUnitSymbolCelsius}$. The
suspension was functionalised without any further purification by adding
$102.7\text{\,}\mathrm{\SIUnitSymbolMicro l}$ MPS. We allowed the reaction
mixture to stir at room temperature for at least $1\text{\,}\mathrm{d}$ and
then boiled it for $1\text{\,}\mathrm{h}$ to ensure successful
functionalisation. Afterwards, we purified the particles by centrifugation and
redispersed them three times in ethanol and three times in Milli-Q water.
In a 500 mL three-neck round bottom flask, 282.9 mg NIPAM and 19.3 mg BIS
($5\,{\rm mol.\,\%}$) were dissolved in 47 mL Milli-Q water. We added the
2.591 g aqueous SiO2 core dispersion (6.6 ${\rm wt}\,\%$). The solution was
heated to $80\text{\,}\mathrm{\SIUnitSymbolCelsius}$ and purged with nitrogen.
After equilibration for $30\text{\,}\mathrm{min}$, a balloon filled with
nitrogen was used to keep the nitrogen atmosphere. Subsequently, 11 mg APS was
rapidly added to initiate the reaction. We let the reaction proceed for 4 h,
and after it cooled down, we purified the suspension 6$\times$ by
centrifugation and redispersion in deionised water. The hydrodynamic diameter
at $20\text{\,}\mathrm{\SIUnitSymbolCelsius}$ was determined by dynamic light
scattering (Malvern Zetasizer Nano-ZS) to $525(53)\text{\,}\mathrm{nm}$.
#### II.2.2 PMMA particles
Poly(methyl methacrylate) (PMMA) particles, stabilized by poly(lauryl
methacrylate), were used as the hydrophobic system. To synthesize these, the
poly(lauryl methacrylate) stabilizer was fabricated first following the recipe
in Ref. 32, Sec. 3.9.1, and it was kept as a 40% solution in dodecane. To make
the particles, a mixture was created that contained 2.1% w/w poly(lauryl
methacrylate) stabilizer, 41.2% w/w methyl methacrylate, 0.84% methacrylic
acid, 11% butyl acetate, 29.6% hexane, 14.2% dodecane, 0.21% octyl mercaptan
and 0.47% of the dye NBD-MAA (7-nitrobenzo-2-oxa-1,3-diazole-methyl
methacrylate), whose preparation can be found in Ref. 33. This mixture was
placed in a 3-necked round-bottomed flask with a condenser attached, brought
under a nitrogen atmosphere, stirred at 350 rpm and heated to 80°C before 0.4%
w/w of the initiator azo-bis-isobutyronitrile was added to start the
polymerization reaction which was left to proceed for 6 hours. The resultant
particles were filtered through glass wool to remove any coagulum present. The
particles were qualitatively inspected using scanning electron microscopy, and
sized by static light scattering to find a diameter of
$3.0\text{\,}\mathrm{\SIUnitSymbolMicro m}$ with a dispersity of 5%. The
particles were cleaned by repeated centrifugation (5$\times$) in $n$-hexane
followed by repeated centrifugation (5$\times$) in $n$-dodecane. The particles
were kept as a dispersion in $n$-dodecane and sonicated for 30 minutes before
dilution, followed by a further 30 minutes of sonication before use to
minimise the number of aggregates in bulk.
### II.3 Contactless methods
Figure 1: Interfacial shear geometries. (a) Schematic setup for contactless interfacial rheology using a parallel-plate geometry rotated at fixed angular velocity, $\omega$. Interface imaging via confocal microscope. Dimensions: pinned interface ring radius, $r_{r}=10$ mm; water sub-phase height, $h_{w}=3$ mm; and oil depth, $h_{o}$. $\omega$ and $h_{o}$ vary. The distance from the imaging region to the outer edge, $r_{\rm out}$, varies with experiment from 4 mm to 6 mm. (b) Creep-recovery protocol for the contactless method. Recording (dashed lines) begins before $\omega$ is applied (solid line) for $t_{\omega}=30$ s to 60 s, followed by recovery until the recording ends. Multiple steps with increasing $\omega$. (c) Schematic of the double-wall ring geometry. Radii: ring, inner and outer, $R_{r,i}=34.5$ mm and $R_{r,o}=35.5$ mm; trough, $R_{i}=31$ mm and $R_{o}=39.5$ mm. Ring width, $l=R_{r,o}-R_{r,i}=1$ mm for Bo, Eq. (1). Torque, $T(t)$, and angle, $\theta(t)$, are used to calculate interfacial stress, $\sigma^{s}$, and strain, $\gamma^{s}_{0}$. (d) Increasing logarithmic oscillatory strain amplitude sweep.
Interfaces were prepared in a custom made polytetrafluoroethylene (PTFE) cup
with an aluminium ring insert to pin a flat interface. Fig. 1(a) shows a
schematic representation of the cup. The PTFE cup’s inner radius ($r_{c}$) is
$21\text{\,}\mathrm{m}\mathrm{m}$, the aluminium ring’s inner radius ($r_{r}$)
is $10\text{\,}\mathrm{m}\mathrm{m}$. The aluminium ring has a height of
$3\text{\,}\mathrm{m}\mathrm{m}$ and its edge was roughened using silicon
carbide sandpaper to allow pinning of the interface.
The PTFE cup was filled with water, pinned at the edge of the aluminium ring.
As the first interface, we choose a monolayer of PNIPAM-SiO2 core-shell
particles mixed with fluorescent tracer particles at a fixed surface pressure
of $24\text{\,}\mathrm{mN}\text{\,}{\mathrm{m}}^{-1}$. We first created a
suspension by mixing $800\text{\,}\mathrm{\SIUnitSymbolMicro l}$ core-shell
particles (0.1 ${\rm wt}\,\%$) and $100\text{\,}\mathrm{\SIUnitSymbolMicro l}$
fluorescent PS microspheres (1 ${\rm wt}\,\%$) with
$100\text{\,}\mathrm{\SIUnitSymbolMicro l}$ IPA as a spreading agent. The
PNIPAM-SiO2 core-shell particles are smaller and able to spread and extend
once adsorbed to liquid interfaces [34, 35]. Therefore, they occupy more
interfacial area compared to the PS particles. Further, we should point out
that PNIPAM-based microgels are known to adsorb onto PS particles [36] and
accumulate around them when confined at liquid interfaces [37]. Thus, we
expect the PS particles to be integrated within the PNIPAM-SiO2 interface and
only minimally influence the rheological response. We then spread the mixed
suspension on a Langmuir trough and measure the surface pressure using the
Wilhelmy method. We determined that $3.72~\mathrm{\mu l/cm^{2}}$ of prepared suspension is required to obtain a surface pressure of $24\text{\,}\mathrm{mN}\text{\,}{\mathrm{m}}^{-1}$. Thus, $11.7\text{\,}\mathrm{\SIUnitSymbolMicro l}$ was pipetted onto the air-water interface pinned by the aluminium ring. Notably, in a control experiment, we
could directly verify the surface pressure of the interface within the cell
using the Wilhelmy method to be
$24\text{\,}\mathrm{mN}\text{\,}{\mathrm{m}}^{-1}$. Lastly,
$1.5\text{\,}\mathrm{ml}$ of dodecane was carefully pipetted on top of the
interface to give a depth of the oil phase ($h_{o}$) of
$1.6\text{\,}\mathrm{mm}$.
As a second interface, we choose hydrophobic PMMA colloidal particles. The
aluminium ring was again filled with water and $3\text{\,}\mathrm{ml}$ of a
dilute 0.005 ${\rm vol.}\,\%$ PMMA-in-oil dispersion was pipetted onto the
water sub-phase ($h_{o}=$2.2\text{\,}\mathrm{mm}$$). A 0.005 ${\rm vol.}\,\%$
dispersion generally leads to a low volume fraction interface; however, there is large variability in the final surface fraction for the same initial volume fraction. To achieve higher surface fractions, a higher initial volume fraction was used.
the oil-water interface and formed a monolayer. The surface fraction, $\phi$,
was adjusted by either adjusting the amount of deposited oil dispersion or by
adjusting the particle concentration.
Surface fractions were measured from microscopy images using a pixel-counting method, which determines the fraction of foreground (particle) pixels to total pixels after a thresholding procedure. This measurement was made
over multiple frames and the final value for surface fraction was determined
as the mean through one rotation of the interface. We note that this approach
likely overestimates the actual $\phi$, see Appendix A. However, this simple
approach allows systematic comparison between different surface coverages.
Connected pixel clusters can then be identified to assess interface
homogeneity, or aggregation state, via the dispersity [14],
$\mathcal{D}=\frac{s}{\langle A\rangle}\,,$ (2)
with $\langle A\rangle$ the average cluster size and $s$ the standard
deviation. These dispersities were measured from relatively zoomed out images.
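The image analysis just described reduces, in essence, to thresholding, pixel counting and connected-component statistics. A minimal sketch is given below, assuming a 2D grayscale array per frame; the function and variable names are our own.

```python
import numpy as np
from scipy import ndimage

def coverage_and_dispersity(img, threshold):
    """Surface fraction and cluster-size dispersity, Eq. (2), for one frame."""
    fg = img > threshold                       # foreground (particle) pixels
    phi = fg.mean()                            # fraction of particle pixels
    labels, n = ndimage.label(fg)              # connected pixel clusters
    if n == 0:
        return phi, np.nan
    sizes = ndimage.sum(fg, labels, index=np.arange(1, n + 1))  # cluster areas
    return phi, sizes.std() / sizes.mean()     # D = s / <A>

# phi is then averaged over frames spanning one rotation of the interface
```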
A 25 mm diameter parallel-plate geometry was attached to the oil-air interface
in the centre of the PTFE cup. Fixed rotation speeds, $\omega$, were then
applied (MCR 301, Anton Paar), shearing the upper oil phase.
Using a rheoimaging setup, as described by Besseling _et al._ [38] (although
our rheometer setup lies directly on top of the confocal, as described by
Dutta _et al._ [39], providing greater stability), rheometry was conducted
while the interface was simultaneously imaged using a Leica SP8 confocal
microscope with a 10$\times$ / 0.3 NA air-immersion objective, at
$1024\!\times\!1024$ px$^{2}$ (PNIPAM-SiO2) or $512\!\times\!512$ px$^{2}$ (PMMA) ($932\text{\,}\mathrm{\SIUnitSymbolMicro m}\!\times\!932\text{\,}\mathrm{\SIUnitSymbolMicro m}$). The imaging
setup was such that the motion of the interface under shear was horizontally
oriented. Velocimetry of the confocal images was performed using C code
written in house. This splits the images into 10 equally spaced horizontal
bands, i.e. the top 10% of the image, the second 10% of the image, etc. Each band is offset horizontally by a well-defined distance. The Pearson correlation coefficient of this new offset band with the same band in the previous frame is calculated [40]. The distance moved between that frame and the previous one is then the horizontal offset which maximises this correlation; over time, this gives the net displacement. Note that, as we use fluorescence
microscopy, and only the particles are fluorescently labelled, the strain we
measure is the strain of the (interfacial) colloidal particles in the field of
view of the microscope.
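A minimal sketch of this band-correlation velocimetry is given below (the in-house code is in C; here we use Python, with periodic wrapping at the image edges as a simplification, and all names are our own).

```python
import numpy as np

def band_displacements(prev, curr, n_bands=10, max_shift=50):
    """Horizontal displacement (px) of each band between consecutive frames,
    taken as the offset maximising the Pearson correlation with the previous
    frame."""
    rows = np.array_split(np.arange(prev.shape[0]), n_bands)
    shifts = []
    for r in rows:
        ref = prev[r].ravel()
        best_s, best_corr = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            trial = np.roll(curr[r], -s, axis=1).ravel()  # undo a shift of s
            corr = np.corrcoef(ref, trial)[0, 1]
            if corr > best_corr:
                best_corr, best_s = corr, s
        shifts.append(best_s)
    return np.array(shifts)  # cumulative sums over frames give x(t) per band
```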
The interfacial strain is calculated as $x/r_{\rm out}$ for each band and
averaged, where $x$ is the measured displacement of the interface and $r_{\rm
out}$ is the distance from the measurement to the outer, pinned wall; $r_{\rm
out}$ varies for each experiment and is measured in situ; it is always approximately $6\text{\,}\mathrm{mm}$. To probe the yielding and flow of the
interfaces, a creep-recovery protocol is used [41]. Fixed rotation rates were
set for $t_{\omega}=$60\text{\,}\mathrm{s}$$ (PNIPAM-SiO2) or
$120\text{\,}\mathrm{s}$ (PMMA), applying stress to the interface, before a
further period of fixing the rotation of the rheometer to 0,
$60\text{\,}\mathrm{s}$ (PNIPAM-SiO2) or $30\text{\,}\mathrm{s}$ (PMMA),
allowing the interface to relax, Fig. 1(b). The two steps allow separate
measurement of the elastic response and the plastic, or irrecoverable,
response. The (elastic) recoverable strain ($\gamma^{s}_{\rm rec}$), is given
by the recoil from the peak strain at the end of the applied rotation to the
end of the recovery step. The (plastic) irrecoverable shear rate,
$\dot{\gamma}^{s}$, can be calculated from the total change in strain from the
start of applied rotation to after recovery over the time of the applied
shear. Alternatively, for faster relaxing interfaces (PMMA) $\dot{\gamma}^{s}$
can be calculated from the average shear rate over a $45\text{\,}\mathrm{s}$
window towards the end of the applied rotation. This sequence is repeated at
multiple increasing rotation rates to determine the stress-dependent response
of the interfaces.
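Given the strain trace of one creep-recovery step, the two quantities reduce to differences of $\gamma^{s}(t)$ at three instants; a minimal sketch, with our own naming, is:

```python
import numpy as np

def creep_recovery_metrics(t, gamma, t0, t_w, t_end):
    """Recoverable strain and irrecoverable shear rate from one creep step.
    t0: start of rotation, t_w: end of rotation, t_end: end of recovery."""
    g0, gw, ge = np.interp([t0, t_w, t_end], t, gamma)
    gamma_rec = gw - ge                        # elastic recoil after cessation
    gamma_irr = ge - g0                        # permanent (plastic) strain
    return gamma_rec, gamma_irr / (t_w - t0)   # rate over the creep step
```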
### II.4 Double-wall ring geometry
Conventional interfacial shear rheometry was performed using a double-wall
ring geometry, Fig. 1(c), connected to a stress-controlled rotational
rheometer (TA Instruments, DHR-2). This consists of a Platinum/Iridium ring
(diamond cross-section with inner/outer radius $R_{r,i/o}=34.5/35.5$ mm and hence width $l=1$ mm) inside a ring-shaped polyoxymethylene trough (inner/outer radius $R_{i/o}=31/39.5$ mm). All surfaces
were cleaned multiple times with ethanol and deionised water. To form an
interface the trough is first filled with the water sub-phase until level and
pinned at the edges of the trough. $70\text{\,}\mathrm{\SIUnitSymbolMicro l}$
of the mixed PNIPAM-SiO2 and PS tracer particle dispersion in a spreading
solvent were then carefully pipetted onto the air-water interface. The ring is
then lowered until pinned and level at the interface, before dodecane is
pipetted on top to cover the ring. The ring is therefore in direct contact
with both the interface and the sub-phase.
Figure 2: Thin ring element of oil-water interface, radius $r$ and width d$r$.
The interfacial stress, $\sigma^{s}$, can be found by considering the torque
balance of bulk oil flow drag and interfacial stress gradient, d$\sigma^{s}$,
with rotation rates (interface, $\omega_{i}$, and top of oil phase, $\omega$),
Sec. II.5.
Oscillatory strain amplitude sweeps were performed using controlled strain at
$f=$0.2\text{\,}\mathrm{Hz}$$ in the low-frequency response region, following
previous protocols for microgel-laden interfaces [16] with one equilibration
cycle and six measurement cycles per point, Fig. 1(d). Strain was increased
logarithmically at 20 points/decade from 0.001 to 1.0 strain amplitude and we
report the strain-dependent elastic ($G^{s\prime}$) and loss
($G^{s\prime\prime}$) moduli from the primary Fourier components. The
sinusoidal oscillation of the ring, $\theta(t)=\theta_{0}\sin(2\pi ft)$, is
converted into an interfacial strain,
$\gamma^{s}_{0}=\theta_{0}[(1-(R_{r,o}/R_{o})^{2})^{-1}+((R_{r,i}/R_{i})^{2}-1)^{-1}]$
using the 2D-equivalent expressions for a Couette cylinder at the position of
the ring [42]. The strain can be approximated as $\gamma^{s}_{0}\approx\theta_{0}R_{r,o}/(R_{o}-R_{r,o})$ (this can be recovered by factorising the denominators, $\gamma^{s}_{0}=\theta_{0}\left[R_{o}^{2}/[(R_{o}-R_{r,o})(R_{o}+R_{r,o})]+R_{i}^{2}/[(R_{r,i}-R_{i})(R_{r,i}+R_{i})]\right]$, and approximating to first order by taking $R_{i/o}+R_{r,i/o}\approx 2R_{r,i/o}$, $R_{r,i}\approx R_{r,o}$ and $R_{o}-R_{r,o}\approx R_{r,i}-R_{i}$), i.e. the displacement of the ring divided by the distance
from the ring to the outer pinned interface, analogous with the expression for
the contactless geometry. As the ring is in direct contact with the interface,
the measured torque can be converted straight to an interfacial stress,
$\sigma^{s}=T/2\pi(R_{r,i}^{2}+R_{r,o}^{2})$. All reported data is at high Bo,
such that sub-phase drag correction is not applied, and low raw-phase angle,
where geometry inertia does not dominate. Due to noise from the strain-control
feedback loop, torque resolution is limited to $0.07\text{\,}\mathrm{\SIUnitSymbolMicro N}\text{\,}\mathrm{m}$ [44], and we highlight or truncate data below this threshold.
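For reference, the two DWR conversions above amount to the following sketch (dimensions from the text; the names are our own):

```python
import math

R_ri, R_ro, R_i, R_o = 34.5e-3, 35.5e-3, 31e-3, 39.5e-3  # m

def dwr_strain(theta0):
    # Interfacial strain amplitude from ring rotation amplitude (rad)
    return theta0 * (1 / (1 - (R_ro / R_o) ** 2) + 1 / ((R_ri / R_i) ** 2 - 1))

def dwr_stress(torque):
    # Interfacial stress (Pa m) from measured torque (N m)
    return torque / (2 * math.pi * (R_ri ** 2 + R_ro ** 2))

# cf. the first-order approximation theta0 * R_ro / (R_o - R_ro)
print(dwr_strain(1e-3), 1e-3 * R_ro / (R_o - R_ro))
```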
### II.5 Measuring stress in a contactless geometry
Figure 3: Contactless interfacial rheometry of PNIPAM-SiO2–laden interfaces.
(a) Creep-recovery protocol, applied rotation rate, $\omega=6.31$ rpm, with
time, $t$. Stress,
$\sigma^{s}=$3.2\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$,
for $\omega_{i}\approx 0$, Eq. (9). Highlighted $t$: (i) dashed (blue), start
of applied $\omega$ at $t=$0\text{\,}\mathrm{s}$$; (ii) dot-dashed (grey), end
of applied $\omega$ at $t_{\omega}=$60\text{\,}\mathrm{s}$$; and (iii) dotted
(orange), end of recorded recovery at $t=$120\text{\,}\mathrm{s}$$. (b)
Measured interface response at highlighted $t$. Zoomed-in confocal microscopy
at $t$ in (a). Outline, distinctive particle cluster showing interface motion
along $x$, axis rotated to match (c). (c) Extracted time-dependent strain,
$\gamma^{s}(t)$. Horizontal lines trace strain from: (i) dashed, (ii) dot-
dashed and (iii) dotted. Arrows: recoverable (elastic) strain,
$\gamma^{s}_{\rm
rec}=\gamma^{s}($60\text{\,}\mathrm{s}$)-\gamma^{s}($120\text{\,}\mathrm{s}$)$;
and irrecoverable (plastic) strain, $\gamma^{s}_{\rm
irr}=\gamma^{s}($120\text{\,}\mathrm{s}$)-\gamma^{s}($0\text{\,}\mathrm{s}$)$.
In contrast to the DWR geometry, the interface is not in direct contact with
the geometry in the contactless method. Therefore, the stress cannot be
directly measured by the rheometer. To make meaningful statements on the
rheological properties of the interface, we must first find the interfacial
stress applied at the interface. It is well known that for a parallel-plate
setup, the stress is independent of the height through the sample upon
reaching a steady state [42]. The timescale for momentum diffusion and to
reach steady state is up to $\approx
0.3h_{o}^{2}\rho_{o}/\eta_{o}=\mathcal{O}($1\text{\,}\mathrm{s}$)$ for our
setup [45], where $\rho_{o}$ is the oil density; this is far below the creep
step duration. Considering the upper phase as a Newtonian fluid allows us to,
therefore, find the stress on the interface from the upper fluid using the
applied rotational speed of the geometry.
If the rheometer is rotated at a fixed angular velocity of $\omega$ then the
shear rate of the upper fluid at a radius $r$ (Fig. 2) of the parallel-plate
geometry is given by
$\dot{\gamma}=\frac{\omega r}{h_{o}},$ (3)
where $h_{o}$ is the oil phase depth. This definition assumes the interface has zero speed and is therefore only a first approximation, valid when the speed of the geometry is much larger than that of the interface. If this is not the case, we can reduce $\omega$ by the angular speed of the interface at $r$, $\omega_{i}$, giving,
$\dot{\gamma}=\frac{(\omega-\omega_{i})r}{h_{o}}.$ (4)
The stress induced is given by the product of $\dot{\gamma}$ with the upper
phase bulk viscosity, $\eta_{o}$, i.e.,
$\sigma=\frac{\eta_{o}(\omega-\omega_{i})r}{h_{o}}.$ (5)
To convert this bulk stress into an interfacial stress we consider the torque
applied from the bulk on an area element of a ring in the interface at $r$ of
width ${\rm d}r$, Fig. 2.
We can write the torque element, d$T$, as a product of the force element,
$\sigma{\rm d}A$, and the radius, where d$A=2\pi r{\rm d}r$ is the area of the
infinitesimal ring,
${\rm d}T=\sigma 2\pi r^{2}{\rm
d}r=2\pi\frac{\eta_{o}(\omega-\omega_{i})}{h_{o}}r^{3}{\rm d}r.$ (6)
This torque then gives rise to the interfacial stress, $\sigma^{s}$. With
$T(r)=2\pi r^{2}\sigma^{s}(r)$ the interfacial torque as the product of
$\sigma^{s}$, radius and perimeter, we can write the torque balance
$\begin{split}{\rm d}T=&T(r+{\rm d}r)-T(r)\\\ \approx&2\pi\left[(r+{\rm
d}r)^{2}\sigma^{s}(r+{\rm d}r)-r^{2}\sigma^{s}(r)\right]\\\ \approx&2\pi
r\left[r{\rm d}\sigma^{s}+2\sigma^{s}{\rm d}r\right]\end{split}$ (7)
where second order differential terms, e.g., $({\rm d}r)^{2}$, have been
dropped. Equating Eqs (6) and (7) and rearranging,
$\frac{{\rm d}\sigma^{s}}{{\rm
d}r}+\frac{2\sigma^{s}}{r}=\frac{\eta_{o}(\omega-\omega_{i})}{h_{o}}r.$ (8)
Figure 4: Strain response, $\gamma^{s}(t)$, of PNIPAM-SiO2–laden interface for
contactless rheology protocol, Fig. 3(a), showing transition from elastic
dominated response to plastic flow with increasing rotation rate, $\omega$.
(a) Low $\omega=1$ rpm to $8$ rpm
(dark to light), see legend, with dominant elastic response, $\gamma^{s}_{\rm
rec}$, after $t_{\omega}=$60\text{\,}\mathrm{s}$$. Shading, error from
standard deviation in image correlation analysis bands. (b) Moderate $\omega$
around yielding with rising plastic response. (c) Strain response at high
$\omega$ dominated by irreversible plastic flow, $\dot{\gamma}^{s}$ (see text
for details).
Equation (8) can be readily solved to give
$\sigma^{s}=\frac{\eta_{o}(\omega-\omega_{i})}{4h_{o}}r^{2}$ (9)
if $\frac{{\rm d}\omega_{i}}{{\rm d}r}\simeq 0$, so there is no interfacial
shear banding, for example; this aligns with our confocal-microscopy
observations, e.g., $\omega_{i}$ is constant in time and across the field of
view (within error). It is evident this expression yields the correct
dimensions for interfacial stress as Pa m, as well as physically reasonable
dependencies on viscosity, applied rotation, oil phase height, and radius. In
order to vary the interfacial stress, we vary $\omega$ while observing at a
fixed $r$, a practically simpler approach.
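As a worked sketch of Eq. (9): taking a literature viscosity for dodecane of roughly $1.3\times10^{-3}$ Pa s (an assumed value) and representative dimensions from the text,

```python
import math

def interfacial_stress(omega_rpm, omega_i_rpm, r, h_o, eta_o=1.3e-3):
    # Eq. (9); omega in rpm, lengths in m, eta_o in Pa s (assumed value)
    d_omega = (omega_rpm - omega_i_rpm) * 2 * math.pi / 60  # rad/s
    return eta_o * d_omega * r ** 2 / (4 * h_o)

# omega = 6.31 rpm, omega_i ~ 0, h_o = 1.6 mm, r ~ 4-5 mm gives a stress of
# order 1e-6 Pa m, the magnitude quoted for the creep steps in Fig. 3
print(interfacial_stress(6.31, 0.0, 4e-3, 1.6e-3))
```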
Finally, we need to ensure that we are measuring surface rather than bulk sub-
phase properties. Typically, this can be checked via Bo [Eq. (1)], but this
assumes a probe in contact with the interface. For our contactless setup, as
we measure interfacial strain via the motion of the interface itself, we focus
instead on the question: is the force on the _interface_ dominated by surface
or bulk sub-phase viscosity? This effect is included in sub-phase drag
corrections [46], but is typically neglected in Bo as drag on the probe
dominates. For our contactless geometry, the stress from the sub-phase can be
crudely approximated as parallel-plate flow with $\omega_{i}$, giving an
equivalent to Bo,
$\mathrm{Bo}^{*}=\frac{\sigma^{s}}{\sigma^{s}_{\rm
drag}}=\frac{\eta_{o}h_{w}}{\eta_{w}h_{o}}\frac{\omega-\omega_{i}}{\omega_{i}}\,\mathrm{for}~{}\sigma^{s}_{\rm
drag}\approx\frac{\eta_{w}\omega_{i}}{4h_{w}}r^{2},$ (10)
where $\eta_{w}$ and $h_{w}$ are the water sub-phase viscosity and depth, Fig.
1(a). The first term in Bo∗ is $\mathcal{O}(1)$ for our bulk phases and
dimensions, and likely most common uses, but the second term can be
arbitrarily large as $\omega_{i}\rightarrow 0$. This makes the technique
suitable for measuring weak interface yielding, a fact that arises from
decoupling stress application and interface motion by not having a direct
probe. For interfaces with significant $\omega_{i}$, Bo∗ can drop and this is
discussed where relevant.
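A sketch of Eq. (10), with assumed literature viscosities for dodecane and water, makes the divergence at small $\omega_{i}$ explicit:

```python
def bo_star(omega, omega_i, eta_o=1.3e-3, eta_w=0.9e-3, h_o=1.6e-3, h_w=3e-3):
    # Eq. (10); viscosities in Pa s are assumed literature values
    return (eta_o * h_w) / (eta_w * h_o) * (omega - omega_i) / omega_i

# Prefactor ~2.7 is O(1); Bo* grows without bound as omega_i -> 0
for frac in (0.5, 0.1, 0.01):  # interface speed as a fraction of omega
    print(frac, bo_star(1.0, frac * 1.0))
```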
## III Results and Discussion
### III.1 Elastic PNIPAM-SiO2–laden interface
To establish the validity of our novel contactless interfacial rheometric
technique we begin by measuring a highly elastic PNIPAM-SiO2–laden interface,
which can also be studied by conventional DWR interfacial rheometry.
#### III.1.1 Contactless rheometry
Using the contactless setup, we perform a creep-recovery test with increasing
rotation rates, $\omega$, from 0.1 to 400 revolutions per minute (rpm) with
logarithmic spacing at 5 pts/decade, and at higher resolution (20 pts/decade) where the confocal microscopy recordings show the behaviour to be changing. At each step, while recording, $\omega$ is applied for
$60\text{\,}\mathrm{s}$ before zero rotation rate is set for a further
$60\text{\,}\mathrm{s}$, Fig. 3(a). Imaging tracer particles embedded in the
interface throughout these steps gives the resulting deformation of the
interface. This is illustrated by following a distinctive cluster of particles
in magnified images, Fig. 3(b). From the start of applied rotation (i) to when
rotation is stopped (ii), the interface first moves along the flow direction,
$x$. After cessation and relaxation the interface recoils backwards, along the
previous flow direction, until the recovery step ends, Fig. 3(b)(ii) to (iii).
Over time, this particle motion can be seen as tracing out the strain response
of the interface, $\gamma^{s}(t)=x(t)/r_{\rm out}$, lines from (b)(i)–(iii) to
(c).
At low applied $\omega$, e.g., $6.31$ rpm (Fig. 3), there is an initial jump
in $\gamma^{s}$ as $\omega$ is applied before a further slow increase over
$t_{\omega}$ [the creep step length, Fig. 1(b)], see Fig. 3(c). At the
cessation of applied flow there is a rapid recoil, followed by a slower
further relaxation, approaching a constant value. From the strain profile,
$\gamma^{s}(t)$, we extract two quantities: the recoverable strain as the
decrease from the peak to the final strain [dot-dashed to dotted lines, (ii)
to (iii)], and the irrecoverable strain, $\gamma^{s}_{\rm irr}$, as the
increase from the initial strain, at the start of shear, to the final strain
[dashed to dotted lines, (i) to (iii)]. A strong initial deformation and near
complete elastic recovery can be seen over a range of $\omega\lesssim 8$ rpm,
Fig. 4(a), with both steps growing with $\omega$ (dark to light).
As $\omega$ increases further, up to $20$ rpm [Fig. 4(b) (dark to light)],
$\gamma^{s}_{\rm rec}$ increases proportionally; $\gamma^{s}_{\rm irr}$ also
begins to slowly increase as the interface does not fully recover. At higher
$\omega$ still, $\gtrsim 25$ rpm [Fig. 4(c)] there is a clear change in
behaviour, as $\gamma^{s}_{\rm irr}$ rapidly increases while the elastic
recovery remains unchanged. During applied rotation, the strain is linear in
time, giving a well-defined interfacial shear rate,
$\dot{\gamma}^{s}=\gamma^{s}_{\rm irr}/t_{\omega}$. This behaviour is
indicative of yielding in a creep-recovery test [41].
Using $\gamma^{s}_{\rm rec}$ and $\dot{\gamma}^{s}$ alongside the calculated
interfacial stress ($\sigma^{s}$), Eq. (9), the rheological response can be
quantified. The elastic response, $\sigma^{s}(\gamma^{s}_{\rm rec})$, shows
three regimes, Fig. 5(a) [solid circles]. At the lowest stresses,
$\sigma^{s}<{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$, no clear
trend is observed. This (shaded) region lies below a minimum strain,
$\gamma_{\rm rec}^{s,\min}\approx 2\times 10^{-3}$, set by noise, e.g.,
vibration or drift. With increasing stress, $\gamma^{s}_{\rm rec}$ increases
linearly until
$\sigma^{s}=$5\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$.
Above this the elastic recovery appears constant, but noisy (i.e. spatially
heterogeneous across the field of view). Within the linear region an elastic
constant, $G^{s\prime}=\sigma^{s}/\gamma^{s}_{\rm
rec}=$4.2(1)\text{\times}{10}^{-4}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$
can be fitted (dashed line). The interface is well described as a linear
elastic solid below $\sigma^{s}=5\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$; this can be emphasised by plotting on linear axes, Fig. 5(b).
However, this is only one side of the measured response. The stress-dependent
plastic flow, $\sigma^{s}(\dot{\gamma}^{s})$, further illuminates the change
around
$\sigma^{s}\approx$5\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$,
Fig. 5(c) (squares). Below this threshold $\dot{\gamma}^{s}$ is limited, but
on further increase the interface begins to flow, $\dot{\gamma}^{s}\sim\mathcal{O}($0.1\text{\,}{\mathrm{s}}^{-1}$)$. The transition point, or yield stress ($\sigma^{s}_{y}$), can be quantified by
fitting a simple piecewise function to these data. We model the interface as a
Bingham fluid, one of the simplest models capturing suitable non-Newtonian
behaviour, described by:
$\begin{array}[]{lr}\dot{\gamma}^{s}=0&:\sigma^{s}<\sigma^{s}_{y}\\\
\sigma^{s}=\sigma^{s}_{y}+\eta^{s}\dot{\gamma}^{s}&:\sigma^{s}\geq\sigma^{s}_{y}.\end{array}$
(11)
Below $\sigma^{s}_{y}$ there is no flow, above this the excess stress leads to
a shear rate set by the interfacial viscosity, $\eta^{s}$. From this we can
obtain both a yield stress and an interfacial viscosity from the contactless
technique, $\sigma^{s,\rm
Contactless}_{y}=$3.5(1)\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$,
and
$\eta^{s}=$9.4(1)\text{\times}{10}^{-4}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$$.
While the Bingham model is appropriate to capture a clear yield stress
transition, in general more complex interfacial yielding behaviour is often
observed [47, 48].
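Fitting Eq. (11) to the flowing branch is a simple least-squares problem; the sketch below uses synthetic data generated from the fitted values quoted above, purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def bingham(rate, sigma_y, eta_s):
    # Flowing branch of Eq. (11): sigma = sigma_y + eta_s * rate
    return sigma_y + eta_s * rate

# Synthetic flowing-branch data (rate in 1/s, stress in Pa m) standing in for
# the measured points above yield in Fig. 5(c)
rate = np.array([0.05, 0.1, 0.2, 0.4])
sigma = 3.5e-5 + 9.4e-4 * rate + np.random.default_rng(0).normal(0, 1e-6, 4)

(sigma_y, eta_s), _ = curve_fit(bingham, rate, sigma, p0=(1e-5, 1e-3))
print(sigma_y, eta_s)  # recovers ~3.5e-5 Pa m and ~9.4e-4 Pa m s
```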
Figure 5: PNIPAM-SiO2–laden interface rheology. (a) Elastic response, stress
($\sigma^{s}$) vs strain. Points: solid, contactless [$\sigma^{s}$ from Eq.
(9) and strain, $\gamma^{s}_{\rm rec}$, from relaxation, Fig. 4]; open, DWR
elastic stress, [$\sigma^{s}=\gamma^{s}_{0}G^{s\prime}$ for strain amplitude,
$\gamma^{s}_{0}$]. Minimum limits: shading, contactless strain, $\gamma_{\rm
rec}^{s,\min}\\!\approx\\!0.002$; vertical dotted line, DWR torque at
$\gamma_{0}^{s,\min}$. Fit lines: dashed, contactless elastic response,
$G^{s\prime}\\!=\\!\sigma^{s}/\gamma^{s}_{\rm
rec}\\!=\\!$4.2(1)\text{\times}{10}^{-4}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$;
horizontal dotted, yield stress from (c). Fit of linear elastic response for
DWR in Fig. 6. (b) Linear plot of elastic response before yielding, symbols as
in (a). (c) Contactless viscous response. Points, stress vs shear rate,
$\dot{\gamma}^{s}=\gamma^{s}_{\rm irr}/t_{\omega}$ from irrecoverable strain,
Fig. 4(c). Lines: bold dashed, Bingham plastic fit, Eq. 11; fine dashed, yield
stress, $\sigma_{y}^{s,\rm
Contactless}\\!=\\!$3.5\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$;
and dotted, DWR yield stress, $\sigma_{y}^{s,\rm
DWR}\\!=\\!$4.8\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$.
#### III.1.2 Comparison to DWR rheometry
Figure 6: DWR rheology of a PNIPAM-SiO2–laden interface. Elastic
($G^{s\prime}$, dark circles) and loss ($G^{s\prime\prime}$, light squares)
moduli vs strain amplitude, $\gamma^{s}_{0}$. Shading, below instrument
resolution. Lines: dotted,
$\bar{G^{s\prime}}=$3.7(4)\text{\times}{10}^{-4}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$,
mean of $G^{s\prime}(\gamma^{s}_{0}\leq 0.05)$; dashed, yielding at
$G^{s\prime}=G^{s\prime\prime}$, $\sigma^{s,\rm
DWR}_{y}=\gamma^{s}_{0}\sqrt{(G^{s\prime})^{2}+(G^{s\prime\prime})^{2}}=$4.8\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$
As this is a strong and highly elastic interface, we can directly compare the results of the contactless geometry to DWR interfacial rheology for the same PNIPAM-SiO2 at equal surface pressures. As the torque resolution for oscillatory tests is finer than for steady shear on rotational rheometers, an oscillatory strain amplitude sweep is performed. This measures the strain-dependent elastic and loss moduli, Fig. 6 (symbols). At low $\gamma^{s}_{0}<0.1$, the elastic modulus is higher than the loss modulus, with $G^{s\prime}$ only weakly decreasing once above the torque resolution (shaded region), indicative of a solid elastic material as the stress is in phase with the strain. With
increasing $\gamma^{s}_{0}$, $G^{s\prime}$ begins to drop while
$G^{s\prime\prime}$ remains near constant. At $\gamma^{s}_{0}=0.23$ the moduli
become equal, $G^{s\prime}=G^{s\prime\prime}$; above this point $G^{s\prime}$
continues to sharply drop, while $G^{s\prime\prime}$ weakly decreases. In this
region the stress is in phase with the shear rate, i.e. liquid-like.
Therefore, with increasing strain amplitude the interface yields from a solid-
like state to a liquid-like state where $G^{s\prime}=G^{s\prime\prime}$. This
stress, $\sigma^{s,\rm
DWR}_{y}=$4.8\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$$,
is an operative yield stress at a finite frequency and shear rate, in contrast
to the ‘static’ measurement of creep recovery [49].
The comparison of $\sigma_{y}^{s,\rm DWR}$ [Fig. 5(c) (dotted line)] with the
contactless rheology value, $\sigma_{y}^{s,\rm Contactless}$ (fine dashed
line), finds similar values, with only a 30% difference, 4.8 vs
$3.5\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$. This is
within the expected variation for different measurement protocols of a non-linear property (oscillatory yielding is ambiguous, as it can be gradual, with multiple definitions; $G^{s\prime}=G^{s\prime\prime}$ is an upper (over)estimate [49]; a tangent analysis appears to give better agreement with the contactless method, but requires extrapolation from data below instrument resolution), e.g., in colloidal gels or glasses [51]. The yield
strains, $\gamma^{s}_{0}=0.23$ and $\gamma^{s}_{\rm rec}=0.1$, are also
comparable, Fig. 5(a), but greater than previously observed for microgel-laden
[16] or amorphous jammed interfaces [52]. Above yielding we are not able to
compare DWR and contactless measurements, as the applied shear rates for DWR
($\dot{\gamma}^{s}=2\pi
f\gamma^{s}_{0}\gtrsim$0.25\text{\,}{\mathrm{s}}^{-1}$$) are larger than those
in the contactless geometry. To yield at comparable
$\dot{\gamma}^{s}\approx$0.01\text{\,}{\mathrm{s}}^{-1}$$ would require
$f\approx$0.01\text{\,}\mathrm{Hz}$$, resulting in infeasibly long
experiments.
Figure 7: PMMA-laden interface at low surface coverage. (a) Fluorescent
confocal micrographs of PMMA particles (white) at an oil-water interface with
$\phi=31.0\%$. (b) Corresponding strain vs time, $\gamma^{s}(t)$, at low
imposed rheometer rotation rates, $\omega$, see inset legend. $r_{\rm
out}=$6.6\text{\,}\mathrm{mm}$$. $\omega$ applied from
$t\gtrsim$5\text{\,}\mathrm{s}$$ for $120\text{\,}\mathrm{s}$, followed by
$30\text{\,}\mathrm{s}$ further recording. (c) Corresponding high imposed
$\omega$ response.
In contrast to the yield stress, the linear elastic behaviour is well-defined.
Comparing the DWR elastic modulus below yielding, Fig. 5(a) (open circles),
and the contactless elastic modulus (filled circles) we see excellent
agreement. Fitting over the linear response region ($\gamma^{s}_{0}\leq 0.05$)
the average elastic modulus from DWR is only 12% lower than the contactless
measurement, $\bar{G^{s\prime}}=3.7(4)$ vs
$4.2(1)\text{\times}{10}^{-5}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}$,
demonstrating that they measure equal quantities within error. As yielding is
approached, the DWR response appears to soften while the contactless
measurement remains linear, Fig. 5(b). However, the oscillatory strain
amplitude will contain both the recoverable elastic strain and plastic flow
[53], in contrast to a creep-recovery measurement that separates the terms.
So, approaching yielding, where plastic flow gradually begins [Fig. 5(c)] they
are no longer directly comparable. The comparable $\sigma^{s}_{y}$ values and
equal elastic moduli demonstrate that the contactless geometry accurately
measures well-defined interfacial rheological properties.
### III.2 PMMA particle laden interface
#### III.2.1 Low surface coverage
We now turn to an interface laden with solid colloidal hydrophobic PMMA
particles, which are typically challenging to measure using conventional
interfacial rheometric techniques [14]. First, we investigate interfaces with
low surface fraction, e.g. $\phi=31$% [Fig. 7(a)]. Interestingly, even though
the particles are hydrophobic sterically stabilised nearly hard-sphere
particles, they exhibit long-range repulsion when confined at an oil-water
interface and assemble predominantly in a non-close packed arrangement [28].
Some aggregation is present due to attractive capillary and Van der Waals
forces, Fig. 7(a). While capillary forces should be negligible, due to a
vanishingly small Bond number and the use of spherical particles [4], there
will be a certain particle roughness from the variable length of the steric
stabiliser “hairs”. This may cause contact line undulations that lead to
short-range capillary attraction [54, 55].
For this particle system, strain vs time plots show smooth flow with a
constant shear rate over the duration of the creep step, Fig. 7(b). As
expected, at larger imposed rotation rates from the rheometer a larger
interfacial shear rate is measured in response, Fig. 7(c). Note that, when
shear starts, the interface appears to begin flowing immediately (within the temporal resolution of the analysis method) at a constant shear rate. Similarly, when the shear ends, the interface immediately stops flowing. This implies that
the interface response in this regime is purely viscous, with no measurable
elasticity.
To quantify the rheology of these particle-laden interfaces, we plot stress vs
shear rate, $\sigma^{s}(\dot{\gamma}^{s})$, with $\dot{\gamma}^{s}$ defined
from the slope of $\gamma^{s}(t)$, Sec. II, due to the immediate response. As
expected, for relatively low $\phi$ we observe a Newtonian response, Fig. 8(a)
[dot-dashed (orange) line]. We can therefore assign a constant
$\eta^{s}=$4.43(9)\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$$
for the interfacial viscosity of this interface. This $\eta^{s}$ is comparable
to that measured using a magnetic rod interfacial rheometer on a similar
system [56].
Figure 8: Newtonian rheological behaviour of low surface coverage interfaces
with varying parameters. (a) Stress–shear-rate behaviour of three interfaces
with varying oil heights ($h_{o}$), surface coverages ($\phi$) and aggregation
states ($\mathcal{D}$). Points, measured data. Lines, fits to constant
viscosity: (i)
$4.1(5)\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$,
solid (green); (ii)
$2.10(10)\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$,
dashed (blue); (iii)
$4.43(9)\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$,
dot-dashed (orange), also in Fig. 7. (b) Interface images: (i) $\phi=38\%$,
$h_{o}=$2.9\text{\,}\mathrm{mm}$$ and $\mathcal{D}=24.7$; (ii) $\phi=29\%$,
$h_{o}=$1.4\text{\,}\mathrm{mm}$$ and $\mathcal{D}=23.4$; and, (iii)
$\phi=31\%$, $h_{o}=$2.2\text{\,}\mathrm{mm}$$ and $\mathcal{D}=2.49$. NB:
surface fractions are determined from a rotational average, so single images
are not wholly representative.
##### Effect of particle assembly.
Next, we performed repeats of these experiments while varying the oil
phase thickness to test the robustness of our technique, Fig. 8. Confocal
images prior to shearing reveal that the partial presence of aggregates, with
particles in direct contact, varies between samples, Fig. 8(b). The
aggregation state is characterized by the dispersity $\mathcal{D}$, Eq. (2),
where a low $\mathcal{D}$ corresponds to a homogeneous particle distribution
and a high $\mathcal{D}$ to an aggregated assembly. We will discuss the
particle assembly’s influence on the rheology below.
First, we observe a Newtonian response for all three oil thicknesses, Fig.
8(a). This suggests that, as expected, the height of the oil phase has no substantial effect, i.e. the rheology of the interface does not depend on the depth of the bulk phases. When comparing samples
with different $\phi$ and aggregation states, we observe the following trends.
First, an increase in $\phi$ leads to an increase in interfacial viscosity.
For example, when comparing interfaces with similar aggregation states
($\mathcal{D}=24.7$ and $\mathcal{D}=23.4$), but different $\phi$, the
interfacial viscosity at 38%, (green) solid line, is higher than at 29%,
(blue) dashed, 4.1 vs
$2.1\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$.
This trend is to be expected, as we later demonstrate elastic responses for
high surface fractions of 56.7%, Sec. III.2.2. Second, the aggregation state
seemingly affects the interfacial viscosity. When comparing samples with
similar $\phi$ (29% and 31%), but different aggregation states
($\mathcal{D}=23.4$ and $\mathcal{D}=2.49$ respectively), we measure a
significant difference in $\eta^{s}$ (2.1 vs
$4.4\text{\times}{10}^{-6}\text{\,}\mathrm{Pa}\text{\,}\mathrm{m}\text{\,}\mathrm{s}$),
cf. (blue) dashed and (orange) dot-dashed lines. This suggests that the
aggregation of PMMA particles at liquid interfaces leads to lower interfacial
viscosities compared to more homogeneously distributed particles.
Our observation that surface coverage and aggregation state make a significant
difference to measured interfacial viscosity demonstrates the utility of the
inherent combination of rheometric measurement with simultaneous imaging for
the contactless geometry. It also aligns with previous reports; for example,
Reynaert, Moldenaers, and Vermant [56] have shown that the complex surface
viscosity magnitude increases with the surface fraction of weakly aggregated
polystyrene particles at a water-oil interface. They also show that, for
surface coverages below 80%, as studied here, the interfacial viscosity of
aggregated particles at a water-oil interface is lower than that for stable
particles. Note that aggregation _can_ lead to higher interfacial viscosities
at high(er) surface coverage, see also, e.g., Ref. 21.
Notably, the results of varying the thickness of the oil phase also imply that
edge effects, i.e. deviations from our assumed parallel-plate flow field, Sec.
II.5, do not play a substantial role in our setup. This is consistent with the
agreement in the measured linear elastic modulus for PNIPAM-SiO2–laden
interfaces between DWR and contactless methods, Sec. III.1.2. A substantial
part of the top surface in our set-up is an oil-plate interface, but its outer
edge is an oil-air interface, i.e. the sample cup has a radius of
$21\,\mathrm{mm}$ at the top, whereas the parallel-plate geometry attached to
the oil-air interface has a radius of $12.5\,\mathrm{mm}$, leaving a radial gap
of $9\,\mathrm{mm}$ between the parallel-plate geometry and the inner wall of
the PTFE cup. To mitigate any edge effects, we measure interfacial strains at a
distance of about $4\,\mathrm{mm}\ll 12.5\,\mathrm{mm}$ from the rotational
axis. The results in Fig. 8, i.e. that the interfacial
viscosities do not differ strongly with oil-phase thickness, imply that edge
effects do not substantially affect our results. However, we should note that
even in the contactless method, at low $\phi$, Fig. 7, such low interfacial
viscosities mean that measurements are at a moderate Bo, around
$\mathcal{O}(10)$. This suggests that for precise and absolute determination
of $\eta^{s}$ in this regime a sub-phase drag correction may still be
necessary.
##### Irreversible effect of shear.
Next, we take advantage of the simultaneous confocal imaging to observe the
evolution in particle assembly upon shearing. Before shearing, the particles
are mostly in a non-close packed arrangement with partial aggregation
($\mathcal{D}=2.49$), which we attribute to attractive capillary and van der
Waals forces, Fig. 9(a). Upon mild shearing ($\omega\leq 6.3$ rpm), the
arrangement is preserved and only minor changes in aggregation state are
observed, Fig. 9(b) and (c). We see from images taken after high shear is
applied, $\omega\gtrsim 10$ rpm corresponding to
$\sigma^{s}=6.6\times 10^{-7}\,\mathrm{Pa\,m}$
[Fig. 9(d) and (e)], that the interface changes to an inhomogeneous structure
with most particles forming one large aggregate percolating across the region
imaged combined with an increase in dispersity to $\mathcal{D}=13.5$ in the
final state. Importantly, the aggregation appears irreversible and persists
even when higher shear rates are applied, Fig. 9(f).
Figure 9: Structural evolution of PMMA particles at an oil-water interface
with 31% surface coverage via fluorescent confocal micrographs with increasing
applied rotation rate, $\omega$, and, hence, interfacial stress, $\sigma^{s}$.
Interface corresponding to (orange) dot-dashed line in Fig. 8(a). (a) After
low stress, $\omega=0.05$ rpm. Scale bar $200\,\mu\mathrm{m}$. (b) $\omega=4.0$
rpm. (c) $\omega=6.3$ rpm. (d) Aggregation threshold, $\omega=10$ rpm or
$\sigma^{s}=6.6\times 10^{-7}\,\mathrm{Pa\,m}$. (e) Continued aggregation,
$\omega=16$ rpm. (f) Highest $\omega=25$ rpm.
Figure 10: PMMA-laden interface at high surface coverage. (a) Fluorescent
confocal micrographs of PMMA particles (white) at an oil-water interface,
$\phi=56.7\%$. (b) Corresponding strain vs time at low imposed rheometer
rotation rates, $\omega$, see inset legend. $r_{\rm out}=6.6\,\mathrm{mm}$.
$\omega$ applied from $t\gtrsim 5\,\mathrm{s}$ for $120\,\mathrm{s}$, followed
by $30\,\mathrm{s}$ further recording. (c) Corresponding high imposed $\omega$
response.
We have seen that applying shear leads to considerable aggregation in this
system. For aggregation to occur, the shear force must exceed the maximum
repulsive force between these particles, whose interaction can be described by
a repulsive screened Coulomb potential [28]. This maximum force is found to be
$9.88\times 10^{-13}\,\mathrm{N}$, where we have rescaled the parameters from
Ref. 28, as in this work we use larger particles. Dividing this force by the
particle diameter converts it to an interfacial stress, giving a critical
aggregation stress of $3.3\times 10^{-7}\,\mathrm{Pa\,m}$. Experimentally, we
found that aggregation occurs starting at a stress of
$6.6\times 10^{-7}\,\mathrm{Pa\,m}$, which is in reasonable agreement. This
agreement between experiment and prediction lends further confidence to the use
of Eq. (9) when calculating interfacial stress in our unique geometry. To
aggregate, the applied stress must also overcome the steric repulsion; however,
the steric barrier is considerably smaller (for a similar particle) than the
electrostatic barrier [28, 57]. Once the particles come into close contact,
attractive capillary and/or van der Waals forces are large enough that the
aggregation is irreversible.
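As a quick plausibility check, the minimal Python sketch below reproduces this force-to-stress conversion; the particle diameter of roughly $3\,\mu\mathrm{m}$ is our assumption, back-calculated from the quoted force and stress rather than stated in this section.

```python
# Back-of-the-envelope check of the critical aggregation stress quoted above.
F_max = 9.88e-13          # maximum screened-Coulomb repulsive force [N]
d_particle = 3.0e-6       # assumed particle diameter [m] (our inference)

sigma_crit = F_max / d_particle          # interfacial stress [Pa m]
sigma_exp = 6.6e-7                       # measured aggregation onset [Pa m]

print(f"predicted critical stress: {sigma_crit:.2e} Pa m")       # ~3.3e-7
print(f"measured/predicted ratio : {sigma_exp / sigma_crit:.1f}")  # ~2
```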
#### III.2.2 High surface coverage
Now, we investigate the same PMMA particles at higher surface fractions of
56.7%, Fig. 10(a), where the particles assemble into a percolated aggregated
structure. We observe markedly different behaviour, with elastic behaviour
being evident from the strain vs time plots, Fig. 10(b)–(c). Focussing on the
low stress behaviour, Fig. 10(b), an initial jump to a higher strain is
observed, indicative of an elastic material. There is then some erratic motion
in the direction of shear (i.e. the strain is always positive), indicating
that there is some frustrated motion and rearrangements of the interfacial
structure [58]. While the initial elastic response is difficult to measure
precisely due to background noise in the flow, upon cessation of shear the
interface clearly recoils, allowing the elastic strain to be readily measured
[59]. This statement becomes even more apparent when looking at higher applied
stresses, Fig. 10(c), as at these large stresses the flowing behaviour
completely dominates the strain response and the initial elastic jump is
barely visible in the data. However, once the shear has been stopped, the
elastic recoil is clear. At the end of shear at low rotation rate we sometimes
observe motion, e.g., Fig. 10(b) (solid line), perhaps due to thermal
gradients or air flow — note, however, that the shear rates are small.
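To make the recoil read-out concrete, here is a minimal sketch of how a recoverable strain could be extracted from a strain-time trace like those in Fig. 10(b)-(c); the trace is synthetic, and all magnitudes are placeholders rather than measured data.

```python
import numpy as np

# Synthetic strain-time trace: elastic jump plus creep while shear is on,
# then recoil by the elastic part once the shear stops.
t = np.linspace(0, 155, 1551)                   # time [s]
shear_on = (t > 5) & (t < 125)                  # rheometer on for 120 s
gamma = np.where(shear_on, 0.02 + 1e-4 * (t - 5), 0.0)
gamma[t >= 125] = gamma[shear_on][-1] - 0.02    # recoil by the elastic strain

gamma_end = gamma[t < 125][-1]                  # strain at cessation of shear
gamma_final = gamma[-1]                         # strain after recoil
gamma_rec = gamma_end - gamma_final             # recoverable (elastic) strain
print(f"recoverable strain gamma_rec = {gamma_rec:.3f}")   # 0.020
```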
At higher $\phi$ we observe a more complex rheological response. By plotting
the plastic response, stress vs shear rate, we can infer that the particle-
laden interface behaves as a yield stress fluid, Fig. 11(a), as has been
observed previously using the DWR geometry [14]. By fitting a Bingham plastic
model, Eq. (11), we measure a yield stress of
$1.05(15)\times 10^{-7}\,\mathrm{Pa\,m}$. The effective interfacial viscosity
($2.16(14)\times 10^{-5}\,\mathrm{Pa\,m\,s}$) is, as expected, larger than
that measured at the lower $\phi$, Fig. 8. We
feel that this is an appropriate model, as the parameter which we are most
interested in comparing to data in the published literature is the yield
stress. The measured $\sigma^{s}_{y}$ is, however, an order of magnitude lower
than the yield stress quoted in Ref. 14. As close agreement between the
techniques is found for a PNIPAM-SiO2–laden interface, Fig. 5(c), this
suggests that the different surface coverages may not be comparable, 56.7%
here vs 74% in Ref. 14. Moreover, there is a difference in PMMA stabilizer,
poly(12-hydroxystearic acid) in Ref. 14 and poly(lauryl methacrylate) here,
though that only makes a small difference in contact angle and a relatively
small difference in interaction potential [28].
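For concreteness, a minimal sketch of such a Bingham fit follows, using scipy's curve_fit on synthetic stand-in data generated with the quoted parameter magnitudes; it is illustrative only, not the analysis pipeline used for Fig. 11(a).

```python
import numpy as np
from scipy.optimize import curve_fit

# Bingham plastic model, Eq. (11): sigma^s = sigma_y + eta^s * gammadot^s.
def bingham(gammadot, sigma_y, eta_s):
    return sigma_y + eta_s * gammadot

gammadot = np.logspace(-3, 0, 12)           # interfacial shear rate [1/s]
rng = np.random.default_rng(0)
sigma = bingham(gammadot, 1.05e-7, 2.16e-5) * (1 + 0.05 * rng.standard_normal(12))

popt, pcov = curve_fit(bingham, gammadot, sigma, p0=[1e-7, 1e-5])
perr = np.sqrt(np.diag(pcov))
print(f"yield stress   = {popt[0]:.2e} +/- {perr[0]:.1e} Pa m")
print(f"eff. viscosity = {popt[1]:.2e} +/- {perr[1]:.1e} Pa m s")
```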
When plotting the elastic response at higher $\phi$, stress vs recoverable
strain [Fig. 11(b)], the response is initially linear, with $\sigma^{s}$ and
$\gamma^{s}_{\rm rec}$ proportional up to a strain of 0.03. We fit a linear
dependence of the elastic strain response to the imposed shear stress in the
low-strain regime [inset (orange) points]. This modelled Hookean behaviour
gives us a shear modulus of $3.16(16)\times 10^{-6}\,\mathrm{Pa\,m}$; this is
a factor of $10\times$ lower in stiffness compared to the PNIPAM-SiO2
interface, Fig. 5(a), and over a smaller linear region, leading to a far
weaker interface ($\sim 30\times$). Previously reported measurements on a
similar particle-laden interface [14] found interfacial moduli of
$\sim 2\times 10^{-6}\,\mathrm{Pa\,m}$, which is consistent with our
measurements here being at a slightly lower surface coverage, similar to the
difference in $\sigma^{s}_{y}$.
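A corresponding sketch of the Hookean (linear) fit in the low-strain regime follows; the data are again synthetic stand-ins, and the free intercept of polyfit is a simplification (a fit through the origin would also be reasonable).

```python
import numpy as np

# Hookean fit: sigma^s = G^s' * gamma_rec for gamma_rec <= 0.03.
rng = np.random.default_rng(2)
gamma_rec = np.linspace(0.005, 0.03, 8)
sigma = 3.16e-6 * gamma_rec * (1 + 0.05 * rng.standard_normal(8))

coeffs, cov = np.polyfit(gamma_rec, sigma, 1, cov=True)
G_s, G_s_err = coeffs[0], np.sqrt(cov[0, 0])
print(f"G^s' = {G_s:.2e} +/- {G_s_err:.1e} Pa m")
```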
Figure 11: Rheological behaviour of an oil-water interface laden with PMMA
particles at $\phi=56.7\%$. (a) Viscous response to applied stress, i.e. after
initial elastic response, but before recoil. Points, data; line, fit to
Bingham fluid model, Eq. (11). (b) Elastic response measured from recoil.
Points, data; line, linear elastic fit to low strain, $\leq 0.03$ (orange and
inset), $G^{s\prime}=3.16(16)\times 10^{-6}\,\mathrm{Pa\,m}$.
Finally, we look at the high-stress elastic response, which suggests a complex
structural evolution. At $\gamma^{s}_{\rm rec}\gtrsim 0.03$, Fig. 11(b)
(blue), the recoverable strain initially remains unchanged with $\sigma^{s}$,
no longer increasing linearly. The shift in response at
$\sigma^{s}\approx 1\times 10^{-7}\,\mathrm{Pa\,m}$ correlates well with the
yield stress, Fig. 11(a). So, just above
$\sigma^{s}_{y}$ a fixed maximum $\gamma^{s}_{\rm rec}$ can be stored, similar
to the PNIPAM-SiO2 interface, Fig. 5(a). However, as $\sigma^{s}$ increases
further, $\gamma^{s}_{\rm rec}$ begins to increase more rapidly, i.e. strain
softening. Qualitatively, this aligns with a variety of literature results.
For example, Reynaert, Moldenaers, and Vermant [56] measured the surface
elastic modulus for polystyrene particle aggregates at a water-oil interface,
which decreased with strain amplitude. Zhang _et al._ [60] used large
amplitude oscillatory strain rheology and observed strain softening for weakly
attractive silica nanoparticles at an air-aqueous interface. Finally, Orsi
_et al._ [61] used an interfacial shear rheometer on gold nanoparticles at an
air-water interface and also observed strain softening, which they attributed
to breaking of weak bonds in a 2D gel. This suggests that the interface
evolves above yielding, notably in the stress range for aggregation at low
$\phi$, $3.3\times 10^{-7}\,\mathrm{Pa\,m}$, which should be $\phi$
independent. It is then possible that the strain softening is driven by
aggregation, consistent with aggregation weakening interfaces at all but the
highest $\phi$ (and lowering $\eta^{s}$, Sec. III.2.1).
### III.3 Limits and comparison of techniques
Our results, and the comparison to 1) DWR results using the same PNIPAM-SiO2
system, Fig. 5, and 2) similar colloidal particle laden interfaces using a DWR
[14] or magnetic rod [56], suggest that our setup represents a useful addition
to the field of interfacial rheology. In comparable situations, our results
are highly consistent with results using conventional probes directly attached
to the interface, e.g., the elastic modulus of the PNIPAM-SiO2 interface, or
within expected variation due to differing methods or interface preparation,
i.e. the yield stress for both a PNIPAM-SiO2 or PMMA-particle laden system.
Crucially, our setup allows us to both measure viscosities at lower $\phi$
than have been previously observed using the DWR, and to probe static yielding
at lower stresses. The lower stress limit can be estimated using
Bo${}^{*}\approx 10$, Eq. (10), alongside an estimate of the minimum
interfacial shear rate set by the imaging setup resolution. With a minimum
resolvable strain of $\gamma_{\rm rec}^{s,\min}=0.002$, Fig. 5(a), over a
$120\,\mathrm{s}$ experiment the minimum shear rate is
$\dot{\gamma}^{s,\min}\sim 2\times 10^{-5}\,\mathrm{s}^{-1}$, or a rotation
rate $\omega_{i}^{\min}=\dot{\gamma}^{s}r_{\rm out}/(r_{r}-r_{\rm out})\approx
2.5\times 10^{-5}\,\mathrm{rad\,s}^{-1}$. For pre-factors in Eq. (10)
$\approx 1$, this sets $\omega^{\min}\approx 0.002$ rpm. The minimum rotation
rate then sets a lower stress limit, Eq. (9),
$\sigma^{s,\min}=\mathcal{O}(10^{-8}\,\mathrm{Pa\,m})$,
comparable to more sensitive interfacial techniques, e.g., a micro-needle [24,
44]. To specifically probe low stresses, $\dot{\gamma}^{s,\min}$, and so
$\sigma^{s,\min}$, could be further optimised by using a high magnification or
longer imaging period, together with minimisation of noise (e.g., thermal
gradients and vibrations). Most remarkably, our contactless technique retains
the maximum stresses, Fig. 5(c), attainable for a DWR using a closed feedback
loop [44], and hence has a dynamic range that spans the majority of
interfacial shear rheometry methods. This wide range of measurable stress
combined with in situ determination of interfacial characteristics, either
surface pressure via a Wilhelmy plate or surface fraction via imaging, opens
up the contactless technique to multiple future applications.
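The sketch below reproduces the lower-limit estimate numerically; the value of $r_{r}$ is an assumption chosen to match the quoted $\omega_{i}^{\min}$, since that radius is defined elsewhere in the paper.

```python
import numpy as np

gamma_min = 0.002            # minimum resolvable recoverable strain
t_exp = 120.0                # experiment duration [s]
r_out = 6.6e-3               # tracking radius [m]
r_r = 11.9e-3                # assumed geometry radius [m] (our choice)

gammadot_min = gamma_min / t_exp                      # ~1.7e-5 1/s
omega_i_min = gammadot_min * r_out / (r_r - r_out)    # ~2.1e-5 rad/s
omega_i_rpm = omega_i_min * 60 / (2 * np.pi)          # ~2e-4 rpm
print(f"gammadot_min = {gammadot_min:.1e} 1/s")
print(f"omega_i_min  = {omega_i_min:.1e} rad/s = {omega_i_rpm:.1e} rpm")
# With the Bo* ~ 10 pre-factor of Eq. (10), omega^min is ~10x larger,
# i.e. ~0.002 rpm, which sets sigma^s_min = O(1e-8 Pa m) via Eq. (9).
```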
## IV Conclusion
In this work, we have developed a contactless method to perform interfacial
shear rheology on liquid-liquid interfaces without an interfacial geometry.
The shear is applied to the continuous phase using a rotational rheometer,
indirectly deforming the interface, and the surface response is measured via
confocal microscopy, either of a fluorescent particle-laden interface or via
tracer particles embedded in the interface, enabling the measurement of a
broad range of interfaces formed from, e.g., proteins, polymers or molecular
surfactants [8]. While we use a confocal microscope and stress-controlled
rheometer, the same results should be achievable using any fixed-rate motor
and reflection or fluorescence microscopy with sufficient resolution and frame
rate. This enhances the applicability of this method as only relatively common
equipment is required.
The method has been verified using a PNIPAM-SiO2–laden interface measured
using both our novel contactless geometry and a conventional DWR method, with
equal elastic moduli found and comparable yield stress values. Our contactless
setup allows us to both measure interfacial viscosities at lower surface
fractions than have been previously observed using the DWR, owing to the high
sensitivity achieved by having no probe attached directly to the interface,
while maintaining the ability to apply large interfacial stresses.
Additionally, we have linked the rheological behaviour to the structural
behaviour of PMMA particle interfaces with different initial conditions. At
low surface coverage, the interface behaves as a two-dimensional Newtonian
fluid and is subject to aggregation above a certain shear threshold. At higher
surface coverage the interface begins to behave as an elastic sheet with a
measurable shear modulus, up to a yield stress where the interface begins to
flow. In addition, our results suggest that both surface coverage and
interfacial particle aggregation state affect the rheology of the interface,
in line with results in the literature.
This work has focussed on the motion of the particles in the plane of the
interface under steady shear. As the setup presented here does not have a
probe attached to the liquid interface, the effect of the interface on how
shear is propagated from the oil to the water phase can now be studied. This
would facilitate observation of how the inside of an emulsion droplet is
influenced by shear of the continuous phase, thereby greatly increasing the
understanding and predictability of the flow behaviour of these systems, which
are encountered ubiquitously in many formulation applications.
## Acknowledgments
IM acknowledges studentship funding from the EPSRC Centre for Doctoral
Training in Condensed Matter Physics (CM-DTC, EP/L015110/1). SB acknowledges
studentship funding from the EPSRC Centre for Doctoral Training in Soft Matter
and Functional Interfaces (SOFI-CDT, EP/L015536/1). MR acknowledges funding
from the Marie Sklodowska-Curie Individual Fellowship (Grant No. 101064381).
The authors acknowledge R. O’Neill, R. Van Hooghten, Damian Renggli and Jan
Vermant for useful discussions, A. Garrie for the PTFE cup and aluminium ring,
J. Royer for assistance with the rheoimaging setup, and M. Hermes for the
velocimetry C code. For the purpose of open access, the authors have applied a
Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript
version arising from this submission.
## Author declarations
### Conflicts of Interest
The authors declare that they have no conflicts of interest.
### Data Availability
The data that support the findings of this study are openly available in
Edinburgh DataShare at https://doi.org/10.7488/ds/3759.
## Appendix A Surface Coverage Measurements
As small differences in surface fraction can cause significant differences in
rheological response, and surface fractions for non-close packed PMMA
monolayers formed by sedimentation are challenging to reproduce, the surface
coverage $\phi$ is determined for each interface prepared. First, microscopy
images are analysed, the magnification level being a balance between
individual particle resolution, which improves upon magnification, and
statistics of particle counting, which decreases upon magnification. Examples
are shown in Fig. 8(b). Surface fractions are quoted in terms of a pixel
counting method, as described in the main text. To determine how accurately
surface coverage can be defined, we compare the pixel fraction method with a
particle counting method. Here, we can count the number of particles and use
knowledge of particle size and image size to calculate surface fraction. For
the sample in Fig. 7(a), measurements of surface fraction via the pixel
fraction and the particle counting methods give respectively 31.0% and 23.0%
while for the sample in Fig. 10(a) these give respectively 56.7% and 46.1%.
Note the significant difference in these measurements, highlighting the
challenge in defining the surface coverage. With perfect particle resolution,
the particle counting method should yield the correct answer for that
particular region of the interface; however, perfect particle resolution is
rarely achieved (especially at high $\phi$), for instance because an aggregate
could be mistaken for one particle. The pixel fraction method is also flawed
in that it assumes a direct match between the area of emitted light and the
area of the particle. This, however, is not true due to the point spread
function of the imaging setup, a question over whether the particles are
exactly in the focal plane, and the brightness of the fluorophore itself,
among other considerations. Comparing these images with other, similar images
taken using the same imaging setup allows us to estimate the surface fraction
and certainly allows us to observe trends in flow behaviour as the surface
fraction changes.
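As an illustration of the two estimates, a minimal sketch follows; the image, calibration, particle radius and particle count are all synthetic placeholders, since the actual segmentation pipeline is not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((512, 512)) > 0.7        # stand-in thresholded micrograph
px_per_um = 2.0                           # assumed calibration [pixels/um]
r_um = 1.5                                # assumed particle radius [um]

# Pixel fraction method: fraction of bright pixels.
phi_pixel = img.mean()

# Particle counting method: N * (particle area) / (image area).
n_particles = 5000                        # placeholder segmentation count
area_img_um2 = (img.shape[0] / px_per_um) * (img.shape[1] / px_per_um)
phi_count = n_particles * np.pi * r_um**2 / area_img_um2

print(f"pixel-fraction  phi = {phi_pixel:.1%}")
print(f"particle-count  phi = {phi_count:.1%}")
```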
It is also worth noting that in Fig. 10(a) there appear to be two types of
particle with different intensity levels. There are a few possible reasons:
firstly, dispersity in particle size, fluorophore intensity, or contact angle
[62] may lead to this effect. This aligns with some particles in Fig. 7(a)
appearing smaller but with less appreciable change in intensity, presumably as
the excitation signal is at saturation in these imaging conditions. Secondly,
the particles, while in close contact, may be pushed out of the surface,
leading to variations in their vertical $z$-position. This, however, is
unlikely as the $z$ resolution is $\gg$ the particle diameter [63],
$\delta z=\frac{0.88\lambda_{\rm exc}}{n-\sqrt{n^{2}-{\rm NA}^{2}}}=9.3\,\mu\mathrm{m}\,,$ (12)
for excitation wavelength $\lambda_{\rm exc}=488\,\mathrm{nm}$, dry objective
refractive index $n=1$, and numerical aperture ${\rm NA}=0.3$.
Finally, aggregates might cause a (local) curvature in the liquid-liquid
interface, which could lead to some particles remaining interfacial but moving
relative to the imaging plane, which would result in particles appearing
smaller and/or less bright.
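A one-line numerical check of Eq. (12) with the stated parameters:

```python
import numpy as np

lam_exc = 488e-9   # excitation wavelength [m]
n = 1.0            # dry objective refractive index
NA = 0.3           # numerical aperture

dz = 0.88 * lam_exc / (n - np.sqrt(n**2 - NA**2))
print(f"axial resolution dz = {dz * 1e6:.1f} um")   # ~9.3 um
```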
These possible artefacts highlight the challenge in precisely determining the
surface fraction from images. However, the pixel counting method gives a
consistent result that allows a systematic comparison with increasing surface
coverage of particles, Sec. III.2, and a reasonable comparison with the
literature results of e.g., Ref. 14.
## References
* Fuller and Vermant [2012] G. G. Fuller and J. Vermant, “Complex fluid-fluid interfaces: Rheology and structure,” Annu. Rev. Chem. Biomol. Eng. 3, 519–543 (2012).
* Thijssen and Vermant [2018] J. H. J. Thijssen and J. Vermant, “Interfacial rheology of model particles at liquid interfaces and its relation to (bicontinuous) Pickering emulsions,” J. Phys. Condens. Matter 30, 023002 (2018).
* Derkach, Krägel, and Miller [2009] S. R. Derkach, J. Krägel, and R. Miller, “Methods of measuring rheological properties of interfacial layers (experimental methods of 2D rheology),” Colloid Journal 71, 1–17 (2009).
* Binks and Horozov [2006] B. P. Binks and T. S. Horozov, eds., _Colloidal Particles at Liquid Interfaces_ (Cambridge University Press, Cambridge, 2006).
* Tavacoli, Thijssen, and Clegg [2015] J. W. Tavacoli, J. H. J. Thijssen, and P. S. Clegg, “Chapter 6. bicontinuous emulsions stabilized by colloidal particles,” in _Particle-Stabilized Emulsions and Colloids: Formation and Applications_ (Royal Society of Chemistry, 2015) pp. 129–168.
* Leal-Calderon and Schmitt [2008] F. Leal-Calderon and V. Schmitt, “Solid-stabilized emulsions,” Curr. Opin. Colloid Interface Sci. 13, 217–227 (2008).
* Hunter _et al._ [2008] T. N. Hunter, R. J. Pugh, G. V. Franks, and G. J. Jameson, “The role of particles in stabilising foams and emulsions,” Adv. Colloid Interface Sci. 137, 57–81 (2008).
* Tcholakova, Denkov, and Lips [2008] S. Tcholakova, N. D. Denkov, and A. Lips, “Comparison of solid particles, globular proteins and surfactants as emulsifiers,” Phys. Chem. Chem. Phys. 10, 1608–1627 (2008).
* Fernandez-Rodriguez, Martín-Molina, and Maldonado-Valderrama [2021] M. A. Fernandez-Rodriguez, A. Martín-Molina, and J. Maldonado-Valderrama, “Microgels at interfaces, from mickering emulsions to flat interfaces and back,” Adv. Colloid Interface Sci. 288, 102350 (2021).
* Perrin _et al._ [2020] L. Perrin, G. Gillet, L. Gressin, and S. Desobry, “Interest of Pickering emulsions for sustainable micro/nanocellulose in food and cosmetic applications,” Polymers 12, 1–14 (2020).
* Vandebril _et al._ [2010] S. Vandebril, A. Franck, G. G. Fuller, P. Moldenaers, and J. Vermant, “A double wall-ring geometry for interfacial shear rheometry,” Rheol. Acta 49, 131–144 (2010).
* Masschaele, Fransaer, and Vermant [2011] K. Masschaele, J. Fransaer, and J. Vermant, “Flow-induced structure in colloidal gels: direct visualization of model 2D suspensions,” Soft Matter 7, 7717 (2011).
* Keim and Arratia [2013] N. C. Keim and P. E. Arratia, “Yielding and microstructure in a 2D jammed material under shear deformation,” Soft Matter 9, 6222 (2013).
* Van Hooghten _et al._ [2017] R. Van Hooghten, V. E. Blair, A. Vananroye, A. B. Schofield, J. Vermant, and J. H. J. Thijssen, “Interfacial rheology of sterically stabilized colloids at liquid interfaces and its effect on the stability of Pickering emulsions,” Langmuir 33, 4107–4118 (2017).
* Krägel and Derkach [2010] J. Krägel and S. R. Derkach, “Interfacial shear rheology,” Curr. Opin. Colloid Interface Sci. 15, 246–255 (2010).
* Brugger, Vermant, and Richtering [2010] B. Brugger, J. Vermant, and W. Richtering, “Interfacial layers of stimuli-responsive poly-(N-isopropylacrylamide-co- methacrylicacid) (PNIPAM-co-MAA) microgels characterized by interfacial rheology and compression isotherms,” Phys. Chem. Chem. Phys. 12, 14573–14578 (2010).
* Razavi _et al._ [2015] S. Razavi, K. D. Cao, B. Lin, K. Y. C. Lee, R. S. Tu, and I. Kretzschmar, “Collapse of particle-laden interfaces under compression: Buckling vs particle expulsion,” Langmuir 31, 7764–7775 (2015).
* Garbin _et al._ [2015] V. Garbin, I. Jenkins, T. Sinno, J. C. Crocker, and K. J. Stebe, “Interactions and stress relaxation in monolayers of soft nanoparticles at fluid-fluid interfaces,” Phys. Rev. Lett. 114, 108301 (2015).
* Ravera, Loglio, and Kovalchuk [2010] F. Ravera, G. Loglio, and V. I. Kovalchuk, “Interfacial dilational rheology by oscillating bubble/drop methods,” Curr. Opin. Colloid Interface Sci. 15, 217–228 (2010).
* Barman and Christopher [2014] S. Barman and G. F. Christopher, “Simultaneous interfacial rheology and microstructure measurement of densely aggregated particle laden interfaces using a modified double wall ring interfacial rheometer,” Langmuir 30, 9752–9760 (2014).
* Barman and Christopher [2016] S. Barman and G. F. Christopher, “Role of capillarity and microstructure on interfacial viscoelasticity of particle laden interfaces,” J. Rheol. 60, 35–45 (2016).
* Brooks _et al._ [1999] C. F. Brooks, G. G. Fuller, C. W. Frank, and C. R. Robertson, “Interfacial stress rheometer to study rheological transitions in monolayers at the air-water interface,” Langmuir 15, 2450–2459 (1999).
* Reynaert _et al._ [2008] S. Reynaert, C. F. Brooks, P. Moldenaers, J. Vermant, and G. G. Fuller, “Analysis of the magnetic rod interfacial stress rheometer,” J. Rheol. 52, 261–285 (2008).
* Tajuelo _et al._ [2015] J. Tajuelo, J. M. Pastor, F. Martínez-Pedrero, M. Vázquez, F. Ortega, R. G. Rubio, and M. A. Rubio, “Magnetic microwire probes for the magnetic rod interfacial stress rheometer,” Langmuir 31, 1410–1420 (2015).
* Jaensson, Anderson, and Vermant [2021] N. O. Jaensson, P. D. Anderson, and J. Vermant, “Computational interfacial rheology,” J. Non-Newt. Fluid Mech. 290, 104507 (2021).
* Kim _et al._ [2011] K. Kim, S. Q. Choi, J. A. Zasadzinski, and T. M. Squires, “Interfacial microrheology of DPPC monolayers at the air-water interface,” Soft Matter 7, 7782–7789 (2011).
* Mears, Muntz, and Thijssen [2020] R. Mears, I. Muntz, and J. H. J. Thijssen, “Surface pressure of liquid interfaces laden with micron-sized particles,” Soft Matter 16, 9347–9356 (2020).
* Muntz _et al._ [2020] I. Muntz, F. Waggett, M. Hunter, A. B. Schofield, P. Bartlett, D. Marenduzzo, and J. H. J. Thijssen, “Interaction between nearly hard colloidal spheres at an oil-water interface,” Phys. Rev. Research 2, 023388 (2020).
* Goebel and Lunkenheimer [1997] A. Goebel and K. Lunkenheimer, “Interfacial tension of the water/n-alkane interface,” Langmuir 13, 369–372 (1997).
* Ciarella _et al._ [2021] S. Ciarella, M. Rey, J. Harrer, N. Holstein, M. Ickler, H. Löwen, N. Vogel, and L. M. C. Janssen, “Soft particles at liquid interfaces: From molecular particle architecture to collective phase behavior,” Langmuir 37, 5364–5375 (2021).
* Sing _et al._ [2018] J. Sing, J. Tang, R. S. Bader, E. S. A. Goerlitzer, J. F. Wendisch, G. R. Bourret, M. Rey, and N. Vogel, “Surface patterning with SiO2@PNiPAm core – shell particles,” ACS Omega 3, 12089–12098 (2018).
* Barrett [1974] K. E. J. Barrett, ed., _Dispersion Polymerization in Organic Media_ (Wiley, New York, 1974).
* Jardine and Bartlett [2002] R. S. Jardine and P. Bartlett, “Synthesis of non-aqueous fluorescent hard-sphere polymer colloids,” Colloids Surf. A 211, 127–132 (2002).
* Rey _et al._ [2020a] M. Rey, M. A. Fernandez-Rodriguez, M. Karg, L. Isa, and N. Vogel, “Poly- N-isopropylacrylamide nanogels and microgels at fluid interfaces,” Acc. Chem. Res. 53, 414–424 (2020a).
* Rauh _et al._ [2017] A. Rauh, M. Rey, L. Barbera, M. Zanini, M. Karg, and L. Isa, “Compression of hard core–soft shell nanoparticles at liquid–liquid interfaces: influence of the shell thickness,” Soft Matter 13, 158–169 (2017).
* Rey _et al._ [2020b] M. Rey, M. J. Uttinger, W. Peukert, J. Walter, and N. Vogel, “Probing particle heteroaggregation using analytical centrifugation,” Soft Matter 16, 3407–3415 (2020b).
* Rey _et al._ [2017] M. Rey, A. D. Law, D. M. A. Buzza, and N. Vogel, “Anisotropic self-assembly from isotropic colloidal building blocks,” J. Am. Chem. Soc. 139, 17464–17473 (2017).
* Besseling _et al._ [2009] R. Besseling, L. Isa, E. R. Weeks, and W. C. K. Poon, “Quantitative imaging of colloidal flows,” Adv. Colloid Interface Sci. 146, 1–17 (2009).
* Dutta _et al._ [2013] S. K. Dutta, A. Mbi, R. C. Arevalo, and D. L. Blair, “Development of a confocal rheometer for soft and biological materials,” Rev. Sci. Instrum. 84, 063702 (2013).
* Hermes and Clegg [2013] M. Hermes and P. S. Clegg, “Yielding and flow of concentrated Pickering emulsions,” Soft Matter 9, 7568 (2013).
* Nguyen and Boger [1992] Q. D. Nguyen and D. V. Boger, “Measuring the flow properties of yield stress fluids,” Annu. Rev. Fluid Mech. 24, 47–88 (1992).
* Macosko [1994] C. W. Macosko, _Rheology: Principles, Measurements, and Applications_ (VCH, New York, 1994) p. 217.
* Note [1] This can be recovered by factorising the denominator, $\gamma^{s}_{0}=\theta_{0}[R_{o}^{2}/[(R_{o}-R_{r,o})(R_{o}+R_{r,o})]+R_{i}^{2}/[(R_{r,i}-R_{i})(R_{r,i}+R_{i})]]$, and approximating to first order by taking $R_{i/o}+R_{r,i/o}\approx 2R_{r,i/o}$, $R_{r,i}\approx R_{r,o}$ and $R_{o}-R_{r,o}\approx R_{r,i}-R_{i}$.
* Renggli _et al._ [2020] D. Renggli, A. Alicke, R. H. Ewoldt, and J. Vermant, “Operating windows for oscillatory interfacial shear rheology,” J. Rheol. 64, 141–160 (2020).
* Oza and Venerus [2021] A. U. Oza and D. C. Venerus, “The dynamics of parallel-plate and cone–plate flows,” Phys. Fluids 33, 023102 (2021).
* Verwijlen _et al._ [2011] T. Verwijlen, P. Moldenaers, H. A. Stone, and J. Vermant, “Study of the flow field in the magnetic rod interfacial stress rheometer,” Langmuir 27, 9345–9358 (2011).
* Erni and Parker [2012] P. Erni and A. Parker, “Nonlinear viscoelasticity and shear localization at complex fluid interfaces,” Langmuir 28, 7757–7767 (2012).
* Truzzolillo _et al._ [2016] D. Truzzolillo, H. Sharaf, U. Jonas, B. Loppinet, and D. Vlassopoulos, “Tuning the structure and rheology of polystyrene particles at the air–water interface by varying the ph,” Langmuir 32, 6956–6966 (2016).
* Dinkgreve _et al._ [2016] M. Dinkgreve, J. Paredes, M. M. Denn, and D. Bonn, “On different ways of measuring “the” yield stress,” J. Non-Newt. Fluid Mech. 238, 233–241 (2016).
* Note [2] Oscillatory yielding is ambiguous, as it can be gradual, with multiple definitions; $G^{s\prime}=G^{s\prime\prime}$ is an upper (over)estimate [49]. E.g., a tangent analysis appears to give better agreement with the contactless method, but requires extrapolation from data below instrument resolution.
* Pham _et al._ [2008] K. N. Pham, G. Petekidis, D. Vlassopoulos, S. U. Egelhaaf, W. C. K. Poon, and P. N. Pusey, “Yielding behavior of repulsion-and attraction-dominated colloidal glasses,” J. Rheol. 52, 649–676 (2008).
* Galloway, Jerolmack, and Arratia [2020] K. L. Galloway, D. J. Jerolmack, and P. E. Arratia, “Quantification of plasticity via particle dynamics above and below yield in a 2D jammed suspension,” Soft Matter 16, 4373–4382 (2020).
* Kamani, Donley, and Rogers [2021] K. Kamani, G. J. Donley, and S. A. Rogers, “Unification of the rheological physics of yield stress fluids,” Phys. Rev. Lett. 126, 218002 (2021).
* Stamou, Duschl, and Johannsmann [2000] D. Stamou, C. Duschl, and D. Johannsmann, “Long-range attraction between colloidal spheres at the air-water interface: The consequence of an irregular meniscus,” Phys. Rev. E 62, 5263–5272 (2000).
* Van Hooghten _et al._ [2013] R. Van Hooghten, L. Imperiali, V. Boeckx, R. Sharma, and J. Vermant, “Rough nanoparticles at the oil-water interfaces: Their structure, rheology and applications,” Soft Matter 9, 10791–10798 (2013).
* Reynaert, Moldenaers, and Vermant [2007] S. Reynaert, P. Moldenaers, and J. Vermant, “Interfacial rheology of stable and weakly aggregated two-dimensional suspensions,” Phys. Chem. Chem. Phys. 9, 6463–6475 (2007).
* Mewis and Vermant [2000] J. Mewis and J. Vermant, “Rheology of sterically stabilized dispersions and lattices,” Prog. Org. Coat. 40, 111–117 (2000).
* Weeks and Weitz [2002] E. R. Weeks and D. A. Weitz, “Properties of cage rearrangements observed near the colloidal glass transition,” Phys. Rev. Lett. 89, 095704 (2002).
* Imperiali _et al._ [2012] L. Imperiali, K. H. Liao, C. Clasen, J. Fransaer, C. W. Macosko, and J. Vermant, “Interfacial rheology and structure of tiled graphene oxide sheets,” Langmuir 28, 7990–8000 (2012).
* Zhang _et al._ [2016] H. Zhang, K. Yu, O. J. Cayre, and D. Harbottle, “Interfacial particle dynamics: One and two step yielding in colloidal glass,” Langmuir 32, 13472–13481 (2016).
* Orsi _et al._ [2012] D. Orsi, G. Baldi, P. Cicuta, and L. Cristofolini, “On the relation between hierarchical morphology and mechanical properties of a colloidal 2D gel system,” Colloids Surf. A 413, 71–77 (2012).
* Isa _et al._ [2011] L. Isa, F. Lucas, R. Wepf, and E. Reimhult, “Measuring single-nanoparticle wetting properties by freeze-fracture shadow-casting cryo-scanning electron microscopy,” Nature Commun. 2, 438 (2011).
* Cole, Jinadasa, and Brown [2011] R. W. Cole, T. Jinadasa, and C. M. Brown, “Measuring and interpreting point spread functions to determine confocal microscope resolution and ensure quality control,” Nat. Protoc. 6, 1929–1941 (2011).
# Quantum Configuration and Phase Spaces: Finsler and Hamilton Geometries
Saulo Albuquerque <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Valdir B. Bezerra <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Iarley P. Lobo <EMAIL_ADDRESS> Department of Chemistry and Physics, Federal University of Paraíba, Rodovia BR 079 - Km 12, 58397-000 Areia-PB, Brazil; Physics Department, Federal University of Lavras, Caixa Postal 3037, 37200-000 Lavras-MG, Brazil.
Gabriel Macedo <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Pedro H. Morais <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Ernesto Rodrigues <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Luis C. N. Santos <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
Gislaine Varão <EMAIL_ADDRESS> Physics Department, Federal University of Paraíba, Caixa Postal 5008, 58059-900, João Pessoa, PB, Brazil.
###### Abstract
In this paper, we review two approaches that can describe, in a geometrical
way, the kinematics of particles that are affected by Planck-scale departures,
named Finsler and Hamilton geometries. By relying on maps that connect the
spaces of velocities and momenta, we discuss the properties of configuration
and phase spaces induced by these two distinct geometries. In particular, we
exemplify this approach by considering the so-called $q$-de Sitter-inspired
modified dispersion relation as a laboratory for this study. We conclude with
some points that we consider positive and negative in each approach for the
description of quantum configuration and phase spaces.
## I Introduction
Since the original works by Bronstein Bronstein:2012zz that demonstrated
uncertainty in the localization of events when geometrical degrees of freedom
are quantized, it has been argued that attempts to formulate quantum gravity
in a differentiable manifold endowed with smooth geometric quantities would
not be an interesting path to follow if one aims to pursue a fundamental
approach to this problem. Attempts in this direction have accumulated over the
years, having prominent representatives such as loop quantum gravity (LQG)
Rovelli:2008zza and causal dynamical triangulation Ambjorn:2010rx . These
approaches to quantum gravity predict or describe several effects that should
be manifest at the Planckian regime of length and energy, such as the
discretization of geometry, which requires a language that obviously departs
from the usual Riemannian construction of general relativity. Despite the
elegance of such approaches, with current technology we are far from being
able to concretely address the regime in which such discretization would
become evident. Nevertheless, the notion that spacetime could effectively
behave like a medium formed by "atoms of space" has led to a rich
phenomenological approach to quantum gravity, which, by encoding generic
departures from relativistic equations, can describe common predictions
expected to be present at an intermediate stage between classical and quantum
gravity. Such an approach is encompassed in the area of quantum gravity
phenomenology, which addresses a myriad of effects beyond the one described in
this paragraph, as can be seen in Ref. Amelino-Camelia:2008aez , and in
particular, has found in multimessenger astronomy a fruitful environment to be
explored Addazi:2021xuf .
Usually, this idea is considered in the regime in which the test particle
approximation is valid, i.e., the approximation in which gravitational and
quantum effects are simultaneously faint, described by the limits of the
gravitational constant, $G\rightarrow 0$, and the reduced Planck constant,
$\hbar\rightarrow 0$, while the Planck energy, $E_{P}=\sqrt{c^{5}\hbar/G}$,
remains finite, with $c$ the speed of light. This deformed "Minkowski limit,"
which presents departures from Minkowski spacetime's structure, has been
suggested by various quantum gravity proposals, such as the linearization of
the hypersurface deformation algebra inspired by LQG Amelino-Camelia:2016gfx ;
Brahma:2016tsq ; Brahma:2018rrg and non-commutative geometry Majid:1994cy ;
Lukierski:1991pn ; Lukierski:1992dt ; Majid:1996kd (for more details on this
Minkowski limit, see Section 3.1.1 of Ref. Amelino-Camelia:2008aez , and for
more references on other theoretical approaches in which such a limit emerges,
please refer to Section 2.2 of Ref. Addazi:2021xuf ). It is expected that the
path between the differentiable
Riemannian description of special (and general) relativity and the complete
quantum gravity theory should pass through an intermediate regime, in which
one has departures from the Riemannian character of spacetime but still has
geometric features that could describe a bottom-up phenomenology.
Furthermore, geometry plays an important role in the description of principles
that have guided the developments of relativistic theories; for example, the
principle of covariance is manifest through the use of tensorial equations of
motion, the local relativity principle is a physical manifestation of having
local equations of motion invariant under the Poincaré group (which is the
group of isometries of Minkowski space), the equivalence principle of general
relativity is manifest in the fact that the motion of free particles is
realized through geodesics, and the clock postulate can be expressed by
stating that an observer measures its proper time by the arc-length of its own
trajectory.
An important part of quantum gravity phenomenology is devoted to the question
of whether, in the aforementioned Minkowski limit, the Lorentz invariance, and
consequently, the local relativity principle, is preserved or broken due to
Planck-scale effects Amelino-Camelia:2010lsq . As is known, a length/energy
scale is not invariant under Lorentz transformations, which implies that
either a quantum gravity scale breaks the equivalence of inertial frames in
the aforementioned Minkowski limit, or the Lorentz or Poincaré group only
describes a low energy/large distance approximation of a deeper transformation
between inertial frames. The former possibility is known as a Lorentz
invariance violation (LIV) scenario Mattingly:2005re ; Liberati:2013xla , and
the latter is known as doubly (or deformed) special relativity (DSR) Amelino-
Camelia:2000stu ; Magueijo:2001cr . As the geometrization of special
relativity, due to Minkowski, paved the way to more fundamental descriptions
of nature, we shall follow a similar path, but of geometrizing DSR.
Geometric descriptions of DSR through non-commutative geometry are known
Majid:1994cy ; Lukierski:1991pn ; Lukierski:1992dt ; Majid:1996kd , but here
we review some continuous, differentiable ways of exploring non-Riemannian
degrees of freedom and the possibilities for preserving the aforementioned
principles. In this spirit, we critically analyze two extensions of Riemannian
geometry that are capable of describing aspects of an emergent "quantum
configuration and phase space" that preserve the intuition of those
principles: Finsler and Hamilton geometries. Finsler geometry is originally
related to the space of events and velocities (for this reason, we refer to a
quantum configuration space), and Hamilton geometry originally describes the
space of events and momenta (for this reason, we call it a quantum phase
space). In this paper, we review the phenomenological opportunities that
emerge from these approaches and the interplay between them. We also summarize
the utility of each of these geometries and their limitations in the current
scenario.
We should also stress that the approaches described in this review refer to
configuration and phase spaces probed by a single particle. The geometry
probed by a multi-particle system, and its interplay with the Finsler and
Hamilton languages (or even geometries that go beyond them), remains to be
further explored; there, the intuition gained from the relative locality
framework Amelino-Camelia:2011lvm would likely play a prominent role.
The paper is organized as follows. Section II revisits the origin of the idea
of describing the effective spacetime probed by a particle that propagates
through a modified dispersion relation (MDR) by the proposal of rainbow
metrics.
Section III revisits how this general idea is realized by the use of Finsler
geometry in the tangent bundle, whose dual version in the cotangent bundle is
discussed in Section IV, illustrated by considering the curved non-trivial
case of $q$-de Sitter-inspired Finsler geometry. Section V considers the
situation of deriving the geometry of the cotangent bundle, and, in Section
VI, its dual tangent bundle formalization defined by Hamilton geometry is
considered, which is illustrated by the $q$-de Sitter case. In Section VII,
we comparatively discuss these two approaches and highlight points that we
consider as useful as well as their limitations. Finally, some important
remarks are drawn in Section VIII. Throughout the paper, a system of units
with $c=\hbar=1$ is used, so that the Planck length is the inverse of the
Planck energy: $\sqrt{G}=\ell=E_{\text{P}}^{-1}$.
## II Preliminaries on Rainbow Geometries
As described above, over the years, the intuition that spacetime would behave
like material media, where instead of atoms of matter, one would have atoms of
spacetime, has been solidified through some approaches of quantum gravity.
Just as in matter, where one does not need to know the specific details of the
granular structure of a given medium to study the propagation of particles
through it, in spacetime one can build phenomenology-inspired ways of modeling
how elementary particles interact with discrete gravitational degrees of
freedom while traveling through space, the so-called "in-vacuum dispersion."
One could say that the most popular way of doing this is through
the assumption that particles would obey a modified dispersion relation, whose
corrections are given perturbatively by powers of the quantum gravity scale,
which we could assume as being in the order of Planck units. The dispersion
relation furnishes the group velocity of waves and defines the trajectory that
on-shell particles follow from the Hamilton equations. Actually, when the
interplay between the presence of amplifiers of observables and the
uncertainties of observations allows us to constrain this parameter at a level
close to its Planckian version, we say that we are at Planck-scale sensitivity
Amelino-Camelia:2008aez .
Such behavior also happens in meta-materials proutorov2018finsler , in which
it is possible to describe the motion of particles by geodesics in a given
geometry; it also appears in the motion of a charged particle in a
pre-metric formulation of electromagnetism hehl2003foundations , in the
description of seismic waves vcerveny2002fermat , etc.; for a review, see Ref.
Pfeifer:2019wus . Additionally, one could wonder if the motion of particles,
determined by Planck-scale modified dispersion relations, could also be
described by geodesics of a non-Riemannian geometry. Besides, the dispersion
relation itself is usually determined by the norm of the $4$-momentum measured
by a Riemannian metric, which also determines the symmetries observed by
measurements in that spacetime.
This intuition was early realized by the so-called "rainbow geometries"
Magueijo:2002xx , idealized by João Magueijo and Lee Smolin, who aimed to
extend the DSR formulation proposed by them in Ref. Magueijo:2001cr to curved
spacetimes. In that case, the way found to express local modified dispersion
relations through a norm consisted in absorbing functions of the particle's
energy divided by the Planck energy, $\epsilon=E/E_{\text{P}}$, such as
$f(\epsilon)$ and $g(\epsilon)$, which would appear in the MDR that follows:
$f(\epsilon)$ and $g(\epsilon)$, which would appear in the MDR that follows:
$m^{2}=f^{2}(\epsilon)E^{2}-g^{2}(\epsilon)|\vec{p}|^{2}\,,$ (1)
(with the three-momentum $\vec{p}$) into the definition of new spacetime
tetrads, $\tilde{e}_{(0)}^{\quad\mu}(\epsilon)=f(\epsilon)e_{(0)}^{\quad\mu}$
and $\tilde{e}_{(I)}^{\quad\mu}(\epsilon)=g(\epsilon)e_{(I)}^{\quad\mu}$, such
that the MDR reads
$m^{2}=\eta^{AB}\tilde{e}_{(A)}^{\quad\mu}\tilde{e}_{(B)}^{\quad\nu}p_{\mu}p_{\nu}=\tilde{g}^{\mu\nu}(\epsilon)p_{\mu}p_{\nu}\,,$
(2)
where
$\tilde{g}^{\mu\nu}(\epsilon)=\eta^{AB}\tilde{e}_{(A)}^{\quad\mu}\tilde{e}_{(B)}^{\quad\nu}$
is the rainbow metric, $\eta^{AB}$ is the Minkowski metric
$\text{diag}(+---)$, Greek letters denote four-dimensional indices taking the
values 0 (time) and 1, 2, 3 (space), lowercase Latin letters denote the space
indices, and $p_{\mu}$ is the 4-momentum. This description would imply that,
when an observer uses the motion of a particle with energy $E$ to probe
spacetime, the line element assigned to that spacetime is the following:
$ds^{2}=\tilde{g}_{\mu\nu}dx^{\mu}dx^{\nu}=\frac{g_{00}}{f^{2}(\epsilon)}(dx^{0})^{2}+\frac{g_{ij}}{g^{2}(\epsilon)}dx^{i}dx^{j}+2\frac{g_{0i}}{f(\epsilon)g(\epsilon)}dx^{0}dx^{i}\,,$
(3)
where $g_{\mu\nu}$ is the metric found from undeformed tetrads. Thus, in a
nutshell, one identifies the rainbow functions, $f$ and $g$, from a MDR that
is usually inspired by fundamental theories of quantum gravity or by
phenomenological intuition; then, one uses $\tilde{g}_{\mu\nu}$ as an input
into the classical gravitational field equations. Considering modifications of
the stress-energy tensor due to the rainbow functions, one derives what should
be $g_{\mu\nu}$ (since $f$ and $g$ are determined a priori). Usually, this
procedure gives that $g_{\mu\nu}$ is the Riemannian metric found from the
usual gravitational field equations. Therefore, this approach gives basically
the usual metric components of a given theory, just modified by factors of the
rainbow functions as in Equation (3).
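To make the construction concrete, the following sympy sketch carries out Eqs. (1)-(3) for a generic diagonal $1+1$D seed metric; the functions $A(r)$ and $B(r)$ and the reduction to $1+1$ dimensions are our illustrative choices, not part of the original proposal.

```python
import sympy as sp

# Rainbow-metric construction, Eqs. (1)-(3), for a diagonal 1+1D seed metric
# g_{mu nu} = diag(A(r), -B(r)); A, B are illustrative assumptions.
eps, r = sp.symbols('epsilon r', positive=True)
f = sp.Function('f')(eps)          # rainbow function on the time leg
g = sp.Function('g')(eps)          # rainbow function on the space leg
A = sp.Function('A')(r)
B = sp.Function('B')(r)

# Deformed inverse tetrads: e~_(0) = f*e_(0), e~_(1) = g*e_(1).
E = sp.diag(f / sp.sqrt(A), g / sp.sqrt(B))
eta = sp.diag(1, -1)

g_inv = sp.simplify(E * eta * E.T)     # rainbow inverse metric, Eq. (2)
g_rainbow = sp.simplify(g_inv.inv())   # line-element coefficients, Eq. (3)
print(g_rainbow)                       # diag(A/f(eps)**2, -B/g(eps)**2)
```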
Effective energy-dependent spacetimes have emerged in different approaches to
the description of the quantization of gravitational/geometric degrees of
freedom Weinfurtner:2008if ; Assanioussi:2014xmz ; Olmo:2011sw . Along this
line of research, Magueijo-Smolin’s proposal has been applied in a myriad of
contexts, such as in black hole physics Ling:2005bp ; Lobo:2021bag , cosmology
Gorji:2016laj , wormholes Garattini:2015pmo ; Amirabi:2018ncf , cosmic strings
Bezerra:2019vrz , disformal geometries Carvalho:2015omv ; Lobo:2017bfh , and
electrostatic self-interaction of charged particles Santos:2019 . However,
despite its range of applicability and utility in furnishing intuition about
extreme scenarios, this approach presents some conceptual and technical
limitations that seem unavoidable, such as the lack of a rigorous mathematical
framework in which this idea is formulated or the imposition of a preferred
vielbein in which the particle’s energy is measured, which seems in
contradiction with the local DSR intention of this proposal. As shown below,
the solution to both problems is actually the same: the search for a rigorous
mathematical formulation of these ideas provides a framework in which proper
physical questions can be answered and novel phenomenological opportunities
can arise. The main issue here is the proper formulation of a geometry that
should not only depend on spacetime points, but should also carry the energy
dependence of the particle itself that probes this spacetime. This paper deals
with the two main proposals, Finsler and Hamilton geometries, which solve some
of the raised problems, and also discusses their own limitations.
## III Geometry of the Tangent Bundle: Finsler Geometry
The 1854 Habilitation Dissertation by Bernhard Riemann presents the germ of
the idea behind what would later be called Finsler geometry. In the second
part of the dissertation, it is said (see Ref. riemann , p. 35):
> "For Space, when the position of points is expressed by rectilinear co-
> ordinates, $ds=\sqrt{\sum(dx)^{2}}$; Space is therefore, included in this
> simplest case. The next case in simplicity includes those manifoldnesses in
> which the line-element may be expressed as the fourth root of a quartic
> differential expression. The investigation of this more general kind would
> require no really different principles, but would take considerable time and
> throw little new light on the theory of space, especially as the results
> cannot be geometrically expressed; I restrict myself, therefore, to those
> manifoldnesses in which the line-element is expressed as the square root of
> a quadric differential expression."
The exploration of such more general cases of line elements would be carried
out only 64 years later, in 1918, in the Ph.D. thesis of Paul Finsler
book:1130902 , where, at least from the metric point of view, the distance
between points is measured by a 1-homogeneous function (homogeneous of degree
1). Such a metric tensor would be defined in the tangent bundle of the base
manifold, since it would depend not only on the manifold points, but also on a
direction, which is a manifestation of the non-Pythagorean nature of this
space. Later on, the issue of non-linear connections was further developed and
incorporated as a fundamental structure
for the dynamical description of Finsler spaces (for a historical perspective
on Finsler geometry, we refer the reader to the Preface of Ref.
bao2000introduction and references therein). The case of pseudo-Finsler
geometries, as an arena for describing spacetime, has been recently discussed
Hohmann:2021zbt ; Bernal:2020bul , where, for instance, different definitions
are presented and important theorems regarding its causal structure among
other issues are being derived Minguzzi:2014aua .
In Section II, a glimpse of the non-Riemannian nature of spacetime was seen to
emerge as a manifestation of the quantization of gravitational degrees of
freedom. As one can anticipate, the non-quadratic, i.e., non-Pythagorean,
nature of a dispersion relation is connected to a possible Finslerian nature
of spacetime through an intermediate step that connects the kinematics of
particles in a Hamiltonian formulation to a Lagrangian one. The MDR
corresponds to a Hamiltonian constraint, which physical particles supposedly
obey, such that the trajectories of free particles, induced by such a deformed
Hamiltonian, capture the propagation of a particle through a quantized
spacetime. For this reason, the Helmholtz action associated with such a
particle is naturally given by the functional,
$S[x,p,\lambda]=\int d\mu(\dot{x}^{\alpha}p_{\alpha}-\lambda f(H(x,p),m))\,,$
(4)
where the dot denotes differentiation with respect to the parameter $\mu$,
$p_{\mu}$ is the particle’s momenta, $f$ is a function that is null if the
dispersion relation is satisfied, namely, $H(x,p)=m$, and $\lambda$ is a
Lagrange multiplier. This is a premetric formulation that is actually defined
in the space $T^{*}M\times{\mathbb{R}}$, where $T^{*}M$ is the phase space of
analytical mechanics or cotangent bundle. In order to find an arc-length, and
consequently, a geometric structure, one needs to calculate an equivalent
Lagrangian defined in the configuration space or the tangent bundle $TM$
described by points and velocities (such an observation was firstly presented
in Ref. Girelli:2006fw ). The algorithm for doing so is as follows
Lobo:2020qoa :
1. 1.
variation with respect to $\lambda$ enforces the dispersion relation;
2. 2.
variation with respect to $p_{\mu}$ yields an equation
$\dot{x}^{\mu}=\dot{x}^{\mu}(p,\lambda)$, which must be inverted to obtain
$p_{\mu}(x,\dot{x},\lambda)$ to eliminate the momenta $p_{\mu}$ from the
action;
3. 3.
using $p_{\mu}(x,\dot{x},\lambda)$ in the dispersion relation, one can solve
for $\lambda(x,\dot{x})$; and
4. 4.
finally, the desired length measure is obtained as
$S[x]=S[x,p(x,\dot{x},\lambda(x,\dot{x})),\lambda(x,\dot{x})]_{H}$.
This is a Legendre transformation, whose conditions of existence and
capability of providing a physical framework are discussed in Refs.
Raetzel:2010je ; Rodrigues:2022mfj . These formal conditions are always
guaranteed when one considers deformations at the perturbative level. This is
crucial because the above algorithm cannot be applied in practice if it is
not possible to invert the velocity function to find the momenta as a function
of the other variables. In general, this cannot be done, especially for
complicated dispersion relations, such as those that depend on sums of
hyperbolic functions Gubitosi:2011hgc . Anyway, since quantum gravity
phenomenology is usually concerned with first order effects, which are those
attainable by experiments nowadays, we shall concentrate on the perturbative
level in order to derive our conclusions.
For example, consider this algorithm applied to a Hamiltonian of the form,
$H(x,p)=g(p,p)+\varepsilon h(x,p)\,,$ (5)
where $g(p,p)=g^{ab}(x)p_{a}p_{b}$ is an undeformed dispersion relation,
$h(x,p)$ is a function of spacetime points and momenta that depends on the
model under consideration, and $\varepsilon$ is the perturbation parameter,
usually a function of the energy scale of the deformation (such as the Planck
or quantum gravity length scale). As shown in Ref. Lobo:2020qoa , after
the Legendre transformation, the equivalent action takes the form,
$S[x]=m\int
d\mu\sqrt{g(\dot{x},\dot{x})}\left(1-\varepsilon\frac{h(x,\bar{p}(x,\dot{x}))}{2m^{2}}\right)\,,$
(6)
where $\bar{p}_{a}(x,\dot{x})=m{\dot{x}_{a}}/{\sqrt{g(\dot{x},\dot{x})}}$. In
particular, when $h$ is a polynomial function of momenta as below (the index
is shifted, $n\rightarrow n+2$, in comparison with Ref. Lobo:2020qoa , such
that now $n$ corresponds to the power of the Planck length in the MDR),
$h(x,p)=h^{\mu_{1}\mu_{2}....\mu_{n+2}}(x)p_{\mu_{1}}p_{\mu_{2}}...p_{\mu_{n+2}}\,,$
(7)
and $\varepsilon=\ell^{n}$, one finds an action of the form,
$S[x]=m\int d\mu\sqrt{g(\dot{x},\dot{x})}\left(1-(\ell
m)^{n}\frac{h_{\mu_{1}\mu_{2}....\mu_{n+2}}(x)\dot{x}^{\mu_{1}}\dot{x}^{\mu_{2}}...\dot{x}^{\mu_{n+2}}}{2g(\dot{x},\dot{x})^{\frac{n+2}{2}}}\right)\,,$
(8)
where we lowered the indices of $h$ with the components of $g$. The connection between the mechanics of the free particle and geometry takes place when the above expression is identified with the arc-length functional, $s[x]$, of a given geometry, i.e., $s[x]=S[x]/m$. Such an identification makes sense if we want to state that the trajectories of free particles are extremizing curves or geodesics in a given geometry; this is related to the preservation of the equivalence principle even in this Planck-scale deformed scenario.
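To make the algorithm above concrete, the following sympy sketch (a minimal illustration with our own naming conventions, where u and v stand for $\dot{x}^{0}$ and $\dot{x}^{1}$, and the toy deformation $h=p_{0}^{3}$ is chosen only for simplicity) carries out steps 1-4 for a flat metric in $1+1$ dimensions and checks the resulting on-shell Lagrangian against Equation (6):

```python
import sympy as sp

# Four-step algorithm for the toy MDR H(p) = p0^2 - p1^2 + eps*p0^3,
# i.e., g = Minkowski and h = p0^3 (n = 1 in Eq. (7)); u, v denote
# xdot^0, xdot^1, and everything is kept to first order in eps.
eps, m = sp.symbols('epsilon m', positive=True)
u, v = sp.symbols('u v', positive=True)  # timelike curve: u^2 > v^2 assumed
Q = u**2 - v**2

# Steps 2-3: inverting xdot^mu = lam*dH/dp_mu and imposing H = m^2 give,
# to first order in eps, lam = lam0*(1 + eps*delta) with
lam0 = sp.sqrt(Q)/(2*m)
delta = -u**3/(8*m**2*lam0**3)
lam = lam0*(1 + eps*delta)
p0 = u/(2*lam) - sp.Rational(3, 8)*eps*u**2/lam**2
p1 = -v/(2*lam)

# Step 1 consistency: the dispersion relation holds up to O(eps^2)
H = p0**2 - p1**2 + eps*p0**3
assert sp.simplify(sp.series(H - m**2, eps, 0, 2).removeO()) == 0

# Step 4: the on-shell Lagrangian xdot.p reproduces Eq. (6)
L = sp.series(u*p0 + v*p1, eps, 0, 2).removeO()
pbar0 = m*u/sp.sqrt(Q)  # pbar_0 of Eq. (6) for this metric
L_eq6 = sp.series(m*sp.sqrt(Q)*(1 - eps*pbar0**3/(2*m**2)), eps, 0, 2).removeO()
print(sp.simplify(L - L_eq6))  # -> 0
```

The same manipulations applied to the general polynomial deformation (7) lead to Equation (8).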
In this case, the spacetime in which a particle subject to a MDR propagates is described by an arc-length functional that generalizes the one of Riemannian geometry and is given in terms of a function $F(x,\dot{x})$ that is 1-homogeneous in the velocity $\dot{x}$, such that the arc-length is indeed parametrization-invariant, as it must be:
$s[x]=\int F(x,\dot{x})d\mu\,.$ (9)
This is precisely the kind of scenario envisaged by Riemann in his dissertation, later explored by Finsler, and it emerges here quite naturally. There are some definitions of a pseudo-Finsler spacetime in the literature, but we rely on the one given in Ref. Hohmann:2021zbt (the differences in comparison to other definitions are discussed in Ref. Hohmann:2021zbt ). First of all, we are going to work with a smooth manifold, $M$, endowed with a positive real-valued function $L$ defined on the tangent bundle $TM$, which is described by coordinates $(x,y)$, where $\\{x^{\mu}\\}$ are spacetime coordinates and $\\{y^{\mu}\\}$ refer to vector or velocity coordinates.
Actually, we shall need the slit tangent bundle $\widetilde{TM}=TM/\\{0\\}$,
in which we remove the zero section, and we also need the projection
$\pi:TM\rightarrow M$. A conic subbundle is a submanifold ${\cal
D}\subset\widetilde{TM}$ such that $\pi({\cal D})=M$ and with the conic
property that states that if $(x,y)\in{\cal D}\Rightarrow(x,\lambda y)\in{\cal
D},\,\forall\lambda>0$.
In a nutshell, a Finsler spacetime is a triple $(M,{\cal D},L)$, where
$L:{\cal D}\rightarrow{\mathbb{R}}$ is a smooth function satisfying the
conditions:
1. 1.
positive 2-homogeneity: $L(x,\alpha y)=\alpha^{2}L(x,y),\,\forall\alpha>0$;
2. 2.
at any $(x,y)\in{\cal D}$ and in any chart of $\widetilde{TM}$, the following
Hessian (metric) is non-degenerate:
$g_{\mu\nu}(x,y)=\frac{1}{2}\frac{\partial^{2}}{\partial y^{\mu}\partial
y^{\nu}}L(x,y)\,;$ (10)
3. 3.
the metric $g_{\mu\nu}$ has a Lorentzian signature.
The function $L$ is actually the square of the Finsler function,
$L(x,y)=F^{2}(x,y)$, and from it the Finsler arc-length is defined as given in
Equation (9). Condition $1$ above guarantees that Equation (9) does not depend on the parametrization used to describe the curve and that, using Euler’s theorem for homogeneous functions, this expression can be cast as
$s[x]=\int\sqrt{g_{\mu\nu}(x,\dot{x})\dot{x}^{\mu}\dot{x}^{\nu}}d\mu\,.$ (11)
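This identity can be checked explicitly. The following short sympy sketch (with a toy 1-homogeneous Finsler function of our own choosing, not taken from the literature) verifies that the Hessian metric (10) satisfies $g_{\mu\nu}(x,\dot{x})\dot{x}^{\mu}\dot{x}^{\nu}=F^{2}$, which is the content of Equation (11); the check is exact, since it rests only on the 2-homogeneity of $L$:

```python
import sympy as sp

y0, y1, eps = sp.symbols('y0 y1 epsilon', positive=True)
ys = (y0, y1)

# A toy 1-homogeneous Finsler function (illustrative only)
F = sp.sqrt(y0**2 - y1**2) + eps*y1**3/(y0**2 - y1**2)
L = F**2  # 2-homogeneous, as required by Condition 1

g = sp.Matrix(2, 2, lambda i, j: sp.diff(L, ys[i], ys[j])/2)  # Eq. (10)

# Euler's theorem for the 2-homogeneous L: g_{mu nu} y^mu y^nu = F^2
quad = sum(g[i, j]*ys[i]*ys[j] for i in range(2) for j in range(2))
print(sp.simplify(quad - F**2))  # -> 0
```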
From a coordinate transformation,
$\displaystyle\tilde{x}^{\mu}=\tilde{x}^{\mu}(x)\,,$ (12)
$\displaystyle\tilde{y}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\nu}}y^{\nu}\,,$ (13)
the functions $g_{\mu\nu}$ transform according to
$\tilde{g}_{\mu\nu}(\tilde{x},\tilde{y})=\frac{\partial
x^{\alpha}}{\partial\tilde{x}^{\mu}}\frac{\partial
x^{\beta}}{\partial\tilde{x}^{\nu}}g_{\alpha\beta}(x,y)\,.$ (14)
Due to the transformation property (14), $g_{\mu\nu}$ is referred to here as the components of a distinguished tensor field (or $d$-tensor field) on the manifold $\widetilde{TM}$, following the notation adopted in Ref. Miron:1994nvt .
The extremization of the arc-length functional (9) gives the following
geodesic equation,
$\frac{d^{2}x^{\mu}}{d\mu^{2}}+2G^{\mu}(x,\dot{x})=2\frac{dF}{d\mu}\frac{\partial
F}{\partial\dot{x}^{\mu}}\,,$ (15)
where $G^{\mu}=G^{\mu}(x,\dot{x})$ are the spray coefficients spray-finsler
and are given in terms of the Christoffel symbols, $\gamma^{\alpha}_{\mu\nu}$,
of the metric $g_{\mu\nu}$:
$\displaystyle
G^{\alpha}(x,\dot{x})=\frac{1}{2}\gamma^{\alpha}_{\mu\nu}(x,\dot{x})\dot{x}^{\mu}\dot{x}^{\nu}\,,$
(16)
$\displaystyle\gamma^{\alpha}_{\mu\nu}(x,\dot{x})=\frac{1}{2}g^{\alpha\beta}\left(\frac{\partial
g_{\mu\beta}}{\partial x^{\nu}}+\frac{\partial g_{\nu\beta}}{\partial
x^{\mu}}-\frac{\partial g_{\mu\nu}}{\partial x^{\beta}}\right)\,.$ (17)
If we choose the arc-length parametrization, i.e., the one in which $F=1$, we have a sourceless geodesic equation. This means that the trajectories generated by a MDR of the form $H(x,p)=m^{2}$ are, actually, geodesics of a Finsler metric. The presence of spray coefficients allows us to construct another quite useful quantity, the so-called Cartan non-linear connection, given by (in this paper, we interchange the notation $\dot{x}\leftrightarrow y$ freely)
$N^{\mu}{}_{\nu}(x,y)=\frac{\partial}{\partial y^{\nu}}G^{\mu}(x,y)\,,$ (18)
that transforms according to
$\tilde{N}^{\mu}{}_{\nu}=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\alpha}}\frac{\partial
x^{\beta}}{\partial\tilde{x}^{\nu}}N^{\alpha}{}_{\beta}-\frac{\partial^{2}\tilde{x}^{\mu}}{\partial
x^{\alpha}\partial x^{\beta}}\frac{\partial
x^{\beta}}{\partial\tilde{x}^{\nu}}y^{\alpha}\,.$ (19)
The introduction of this quantity allows us to define a useful basis of the tangent space of the tangent bundle at each point. In fact, according to the coordinate transformations (12) and (13), the usual coordinate basis transforms as
$\displaystyle\frac{\partial}{\partial\tilde{x}^{\mu}}=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\frac{\partial}{\partial
x^{\nu}}+\frac{\partial^{2}x^{\nu}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\alpha}}\frac{\partial\tilde{x}^{\alpha}}{\partial
x^{\beta}}y^{\beta}\frac{\partial}{\partial y^{\nu}}\,,$ (20)
$\displaystyle\frac{\partial}{\partial\tilde{y}^{\mu}}=\frac{\partial
y^{\nu}}{\partial\tilde{y}^{\mu}}\frac{\partial}{\partial y^{\nu}}\,.$ (21)
Notice that the spacetime derivative above mixes in velocity derivatives; a non-linear connection then allows us to define the following adapted frame:
$\displaystyle\frac{\delta}{\delta
x^{\mu}}=\delta_{\mu}=\frac{\partial}{\partial
x^{\mu}}-N^{\nu}{}_{\mu}\frac{\partial}{\partial y^{\nu}}\,,$ (22)
$\displaystyle\dot{\partial}_{\mu}=\frac{\partial}{\partial y^{\mu}}\,.$ (23)
Due to the transformation properties of the non-linear connection, this basis
transforms as
$\displaystyle\tilde{\delta}_{\mu}=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\delta_{\nu}\,,$ (24)
$\displaystyle\tilde{\dot{\partial}}_{\mu}=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\dot{\partial}_{\nu}\,.$ (25)
This means that one is able to split the tangent space of the tangent bundle
into horizontal, $HTM=\text{span}\\{\delta_{\mu}\\}$, and vertical,
$VTM=\text{span}\\{\dot{\partial}_{\mu}\\}$, spaces, such that
$T\widetilde{TM}=HTM\oplus VTM$ at each point $(x,y)$. The same reasoning applies to the cotangent space; i.e., we split
$T^{*}\widetilde{TM}=H^{*}TM\oplus V^{*}TM$ spanned as
$H^{*}TM=\text{span}\\{dx^{\mu}\\}$ and $V^{*}TM=\text{span}\\{\delta
y^{\mu}\\}$, where
$\displaystyle\delta y^{\mu}=dy^{\mu}+N^{\mu}{}_{\nu}dx^{\nu}\,,$ (26)
which transforms as
$\displaystyle d\tilde{x}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\nu}}dx^{\nu}\,,$ (27)
$\displaystyle\delta\tilde{y}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\nu}}\delta y^{\nu}\,.$ (28)
Such a decomposition of the tangent and cotangent vector spaces implies that a vector $X$ and a $1$-form $\omega$ can be decomposed into horizontal and vertical parts as
$\displaystyle
X=X^{\mu}\delta_{\mu}+\dot{X}^{\mu}\dot{\partial}_{\mu}=X^{H}+X^{V}\,,$ (29)
$\displaystyle\omega=\omega_{\mu}dx^{\mu}+\dot{\omega}_{\mu}\delta
y^{\mu}=\omega^{H}+\omega^{V}\,.$ (30)
Endowed with this basis, the metric $\mathbb{G}(x,y)$ of the configuration
space is described by the so-called Sasaki-Matsumoto lift of the metric
$g_{\mu\nu}$:
$\mathbb{G}(x,y)=g_{\mu\nu}(x,y)dx^{\mu}\otimes dx^{\nu}+g_{\mu\nu}(x,y)\delta
y^{\mu}\otimes\delta y^{\nu}\,.$ (31)
###### Definition III.1
A tensor field $T$ of type $(m+n,p+q)$ on the manifold $\widetilde{TM}$ is
called a distinguished tensor field (or $d$-tensor field) if it has the
property
$\displaystyle
T\left(\overset{1}{\omega},...,\overset{m}{\omega},\overset{1}{\tau},...,\overset{n}{\tau},\underset{1}{X},...,\underset{p}{X},\underset{1}{Y},...,\underset{q}{Y}\right)=T\left(\overset{1}{\omega}{}^{H},...,\overset{m}{\omega}{}^{H},\overset{1}{\tau}{}^{V},...,\overset{n}{\tau}{}^{V},\underset{1}{X}{}^{H},...,\underset{p}{X}{}^{H},\underset{1}{Y}{}^{V},...,\underset{q}{Y}{}^{V}\right).$
(32)
This definition implies that one can write a $d$-tensor $T$ in the preferred
frame as
$\displaystyle
T=T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}\frac{\delta}{\delta
x^{\mu_{1}}}$ $\displaystyle\otimes...\otimes\frac{\delta}{\delta
x^{\mu_{m}}}\otimes\frac{\partial}{\partial
y^{\nu_{1}}}\otimes...\otimes\frac{\partial}{\partial y^{\nu_{n}}}$
$\displaystyle\otimes dx^{\alpha_{1}}\otimes...\otimes
dx^{\alpha_{p}}\otimes\delta y^{\beta_{1}}\otimes...\otimes\delta
y^{\beta_{q}}\,,$ (33)
and that it transforms according to the rule,
$\displaystyle\widetilde{T}^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}$
(34) $\displaystyle=\frac{\partial\tilde{x}^{\mu_{1}}}{\partial
x^{\epsilon_{1}}}...\frac{\partial\tilde{x}^{\mu_{m}}}{\partial
x^{\epsilon_{m}}}\frac{\partial\tilde{x}^{\nu_{1}}}{\partial
x^{\lambda_{1}}}...\frac{\partial\tilde{x}^{\nu_{n}}}{\partial
x^{\lambda_{n}}}\frac{\partial
x^{\gamma_{1}}}{\partial\tilde{x}^{\alpha_{1}}}...\frac{\partial
x^{\gamma_{p}}}{\partial\tilde{x}^{\alpha_{p}}}\frac{\partial
x^{\rho_{1}}}{\partial\tilde{x}^{\beta_{1}}}...\frac{\partial
x^{\rho_{q}}}{\partial\tilde{x}^{\beta_{q}}}T^{\epsilon_{1}...\epsilon_{m}\lambda_{1}...\lambda_{n}}{}_{\gamma_{1}...\gamma_{p}\rho_{1}...\rho_{q}}\,.$
An example of $d$-tensor field is the metric whose components are given by
Equation (14).
### III.1 $N$-Linear Connection
Given a linear connection, $D$, on the manifold $\widetilde{TM}$, if it
preserves the parallelism of the horizontal and vertical spaces, i.e., if it
can be written as
$D_{\delta_{\nu}}\delta_{\mu}=L^{\alpha}_{\mu\nu}\delta_{\alpha}\,,\qquad
D_{\delta_{\nu}}\dot{\partial}_{\alpha}=L^{\mu}_{\alpha\nu}\dot{\partial}_{\mu}\,,$
(35)
$D_{\dot{\partial}_{\nu}}\delta_{\mu}=C^{\alpha}_{\mu\nu}\delta_{\alpha}\,,\qquad
D_{\dot{\partial}_{\nu}}\dot{\partial}_{\mu}=C^{\alpha}_{\mu\nu}\dot{\partial}_{\alpha}\,,$
(36)
then it is called an $N$-linear connection. Under a coordinate change, the coefficients in (35) and (36) transform as
$\displaystyle\tilde{L}^{\alpha}_{\mu\nu}=\frac{\partial\tilde{x}^{\alpha}}{\partial
x^{\beta}}\frac{\partial x^{\lambda}}{\partial\tilde{x}^{\mu}}\frac{\partial
x^{\epsilon}}{\partial\tilde{x}^{\nu}}L^{\beta}_{\lambda\epsilon}+\frac{\partial^{2}x^{\beta}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\nu}}\frac{\partial\tilde{x}^{\alpha}}{\partial
x^{\beta}}\,,$ (37)
$\displaystyle\tilde{C}^{\alpha}_{\mu\nu}=\frac{\partial\tilde{x}^{\alpha}}{\partial
x^{\beta}}\frac{\partial x^{\lambda}}{\partial\tilde{x}^{\mu}}\frac{\partial
x^{\epsilon}}{\partial\tilde{x}^{\nu}}C^{\beta}_{\lambda\epsilon}\,.$ (38)
Endowed with these coefficients, the derivative of a $d$-tensor can be decomposed into horizontal and vertical parts, such that the covariant derivative of a tensor $T$ of type $(m+n,p+q)$ in the direction of a vector $X$ reads
$\displaystyle D_{X}T=D_{X^{H}}T+D_{X^{V}}T$
$\displaystyle=\left(T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}|\epsilon}X^{\epsilon}+T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}||\epsilon}\dot{X}^{\epsilon}\right)\frac{\delta}{\delta
x^{\mu_{1}}}\otimes...\otimes\frac{\delta}{\delta x^{\mu_{m}}}$
$\displaystyle\otimes\frac{\partial}{\partial
y^{\nu_{1}}}\otimes...\otimes\frac{\partial}{\partial y^{\nu_{n}}}\otimes
dx^{\alpha_{1}}\otimes...\otimes dx^{\alpha_{p}}\otimes\delta
y^{\beta_{1}}\otimes...\otimes\delta y^{\beta_{q}}\,,$ (39)
where
$\displaystyle
T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}|\epsilon}$
(40) $\displaystyle=\frac{\delta}{\delta
x^{\epsilon}}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}+L^{\mu_{1}}_{\gamma\epsilon}T^{\gamma...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}+...-L^{\gamma}_{\alpha_{1}\epsilon}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\gamma...\alpha_{p}\beta_{1}...\beta_{q}}\,,$
$\displaystyle
T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}||\epsilon}$
(41) $\displaystyle=\frac{\partial}{\partial
y^{\epsilon}}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}+C^{\mu_{1}}_{\gamma\epsilon}T^{\gamma...\mu_{m}\nu_{1}...\nu_{n}}{}_{\alpha_{1}...\alpha_{p}\beta_{1}...\beta_{q}}+...-C^{\gamma}_{\alpha_{1}\epsilon}T^{\mu_{1}...\mu_{m}\nu_{1}...\nu_{n}}{}_{\gamma...\alpha_{p}\beta_{1}...\beta_{q}}\,,$
and the property that the covariant derivative is linear in the direction $X$
is used. The triple $D\Gamma(N,L,C)$ describes the parallel transport and
decomposition of the tangent and cotangent spaces of the tangent bundle into
horizontal and vertical spaces. At this point, we need to comment on some
remarkable $N$-linear connections that are considered in the literature.
The first connection is the metrical Cartan connection,
$C\Gamma(N^{\mu}{}_{\nu},L^{\alpha}_{\mu\nu},C^{\alpha}_{\mu\nu})$. In this
case, $N^{\mu}{}_{\nu}$ is given by the canonical Cartan non-linear
connection, defined by the spray coefficients (18). The coefficients
$L^{\alpha}_{\mu\nu}$ and $C^{\alpha}_{\mu\nu}$ are given, respectively, by
$\displaystyle
L^{\alpha}_{\mu\nu}=\frac{1}{2}g^{\alpha\beta}\left(\frac{\delta
g_{\mu\beta}}{\delta x^{\nu}}+\frac{\delta g_{\nu\beta}}{\delta
x^{\mu}}-\frac{\delta g_{\mu\nu}}{\delta x^{\beta}}\right)\,,$ (42)
$\displaystyle C^{\alpha}_{\mu\nu}=\frac{1}{2}g^{\alpha\beta}\left(\frac{\partial g_{\mu\beta}}{\partial y^{\nu}}+\frac{\partial g_{\nu\beta}}{\partial y^{\mu}}-\frac{\partial g_{\mu\nu}}{\partial y^{\beta}}\right)\,.$ (43)
This connection is metrical (i.e., without non-metricity tensors) with respect to both horizontal and vertical covariant derivatives of the Finsler metric. Besides, the Berwald connection is given by the triple $B\Gamma(N^{\mu}{}_{\nu},\partial N^{\alpha}{}_{\mu}/\partial y^{\nu},0)$ and presents horizontal and vertical non-metricities. The Chern–Rund connection, $R\Gamma(N^{\mu}{}_{\nu},L^{\alpha}_{\mu\nu},0)$, is horizontally metrical, but presents vertical non-metricity. Additionally, the Hashiguchi connection, $H\Gamma(N^{\mu}{}_{\nu},\partial N^{\alpha}{}_{\mu}/\partial y^{\nu},C^{\alpha}_{\mu\nu})$, presents horizontal non-metricity, but is vertically metrical. In these expressions, $N$ is the canonical Cartan non-linear connection (18), $L$ is given by Equation (42), and $C$ is given by Equation (43).
### III.2 Symmetries
Geometrical language naturally realizes the concept of symmetry of physical
equations. General relativity given in terms of Riemannian geometry
encompasses the invariance under general coordinate transformations, and the
isometries of the Minkowski space describe the Poincaré transformations
(actually, one can further apply this technique for maximally symmetric
spaces, including de Sitter and anti-de Sitter ones). Finsler geometry, as we have been using it, allows us to go beyond this scope and to define deformed
Lorentz/Poincaré transformations that present Planck scale corrections even in
the presence of a local modified dispersion relation. One can see how this
will naturally emerge, since the invariance of the arc-length (9) is
compatible with the invariance of the action in the Hamiltonian formulation
(4), from which such an arc-length was derived. This idea was first noticed in Ref. Girelli:2006fw and later explicitly explored in Refs. Amelino-Camelia:2014rga ; Lobo:2016xzq . The master equation for this purpose is the one that follows from the invariance of the Finslerian interval $ds^{2}$, as done in Appendix A of Ref. Amelino-Camelia:2014rga . From this invariance, the Finslerian killing equation for the killing vector, with components $\xi^{\alpha}$, was found, which should be solved in order to derive the deformed symmetries in the DSR context,
$\xi^{\alpha}\partial_{\alpha}g_{\mu\nu}+g_{\alpha\nu}\partial_{\mu}\xi^{\alpha}+g_{\mu\alpha}\partial_{\nu}\xi^{\alpha}+y^{\alpha}\partial_{\alpha}\xi^{\beta}\dot{\partial}_{\beta}g_{\mu\nu}=0\,.$
(44)
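In the undeformed limit, where the metric does not depend on the velocities, the last term of Equation (44) drops out and the standard killing equation is recovered. The following minimal sympy sketch (illustrative only) verifies Equation (44) for the usual boost vector $\xi=(x^{1},x^{0})$ in $1+1$ Minkowski space:

```python
import sympy as sp

x0, x1, y0, y1 = sp.symbols('x0 x1 y0 y1')
xs, ys = (x0, x1), (y0, y1)

g = sp.Matrix([[1, 0], [0, -1]])  # velocity-independent (undeformed) metric
xi = [x1, x0]                     # components of the usual boost vector

# Left-hand side of Eq. (44), component by component
K = sp.Matrix(2, 2, lambda mu, nu:
    sum(xi[al]*sp.diff(g[mu, nu], xs[al]) for al in range(2))
    + sum(g[al, nu]*sp.diff(xi[al], xs[mu]) for al in range(2))
    + sum(g[mu, al]*sp.diff(xi[al], xs[nu]) for al in range(2))
    + sum(ys[al]*sp.diff(xi[be], xs[al])*sp.diff(g[mu, nu], ys[be])
          for al in range(2) for be in range(2)))
print(K)  # -> zero matrix, so the boost solves the killing equation
```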
### III.3 Finsler–q-de Sitter (Tangent Bundle Case)
As an example that presents a non-trivial non-linear connection, we shall
consider the case of a Finsler geometry inspired by the so-called $q$-de
Sitter deformed relativity. This case has been previously studied in the
literature, e.g., in Refs. Barcaroli:2015eqe ; Letizia:2016lew ; Lobo:2016xzq
; Lobo:2016lxm , and can be described by an algebra that deforms the one of
Poincaré in a way that gives the de Sitter symmetry when a quantum gravity
parameter goes to zero, and on the other hand, gives the so-called
$\kappa$-Poincaré algebra (that deforms the Poincaré one by an energy scale
parameter, supposedly the Planck energy) when the de Sitter curvature
parameter goes to zero. Therefore, it corresponds to an authentic realization
of a deformed relativity scenario, even in the presence of what can be
interpreted as spacetime curvature. In this subsection, we initially consider
results that were originally presented in Ref. Letizia:2016lew in $1+1$
dimensions.
The MDR related to this algebra (in a given basis) can be expanded to first order in the Planck length and de Sitter curvature parameters, $\ell$ and $H$, respectively, as
${\cal H}(x,p)=p_{0}^{2}-p_{1}^{2}(1+\ell p_{0})(1-2Hx^{0})\,.$ (45)
By using the action given by Equation (4) and the algorithm that follows it,
the following Finsler function can be obtained:
$F(x,\dot{x})=\sqrt{(\dot{x}^{0})^{2}-(1-2Hx^{0})(\dot{x}^{1})^{2}}+\ell\frac{m}{2}\frac{(1-2Hx^{0})\dot{x}^{0}(\dot{x}^{1})^{2}}{(\dot{x}^{0})^{2}-(1-2Hx^{0})(\dot{x}^{1})^{2}}\,,$
(46)
from which the Finsler metric can be found from Equation (10):
$\displaystyle
g_{\mu\nu}^{F}(x,\dot{x})=\begin{pmatrix}1+\frac{3a^{4}m\ell\dot{x}^{0}(\dot{x}^{1})^{4}}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}&\frac{m\ell
a^{4}(\dot{x}^{1})^{3}[a^{2}(\dot{x}^{1})^{2}-4(\dot{x}^{0})^{2}]}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}\\\
\frac{m\ell
a^{4}(\dot{x}^{1})^{3}[a^{2}(\dot{x}^{1})^{2}-4(\dot{x}^{0})^{2}]}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}&-a^{2}+\frac{m\ell
a^{2}(\dot{x}^{0})^{3}[2(\dot{x}^{0})^{2}+a^{2}(\dot{x}^{1})^{2}]}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{5/2}}\end{pmatrix}\,,$
(47)
where $a=a(t)=e^{Ht}=1+Ht+{\cal O}(H^{2})$ (in this paper, the terms that grow
with higher orders of $H$ and $\ell$ are discarded). The geodesic equation is
found from the extremization of the Finsler arc-length defined by $F$, from
which Christoffel symbols and spray coefficients can be calculated. Actually,
the $\gamma^{\alpha}_{\mu\nu}(x,\dot{x})$ are given, for an arbitrary
parametrization, by the set of Equations (44) of Ref. Letizia:2016lew , from
which the spray coefficients are given by
$\displaystyle G^{0}(x,\dot{x})=\frac{1}{8}a^{2}H(\dot{x}^{1})^{2}$
$\displaystyle\left[4-\frac{\ell
m\dot{x}^{0}}{\left[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}\right]^{7/2}}\left(-28a^{6}(\dot{x}^{1})^{6}+12a^{2}(\dot{x}^{0})^{4}(\dot{x}^{1})^{2}\right.\right.$
$\displaystyle\left.\left.+a^{2}\left(17a^{2}+28\right)(\dot{x}^{0})^{2}(\dot{x}^{1})^{4}+16(\dot{x}^{0})^{6}\right)\right]\,,$
(48) $\displaystyle G^{1}(x,\dot{x})=H\dot{x}^{0}\dot{x}^{1}+\ell$
$\displaystyle\left[\frac{a^{2}Hm(\dot{x}^{1})^{3}\left(a^{6}(\dot{x}^{1})^{6}-6a^{4}(\dot{x}^{0})^{2}(\dot{x}^{1})^{4}+3a^{2}(\dot{x}^{0})^{4}(\dot{x}^{1})^{2}-28(\dot{x}^{0})^{6}\right)}{4\left((\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}\right)^{7/2}}\right]\,.$
(49)
As can be seen, these coefficients are $2$-homogeneous in the velocities, as
expected. The Cartan non-linear connection coefficients read:
$\displaystyle N^{0}{}_{0}(x,\dot{x})=$ $\displaystyle\frac{H\ell
m(\dot{x}^{1})^{4}\left(-28(\dot{x}^{1})^{6}-33(\dot{x}^{1})^{4}(\dot{x}^{0})^{2}+240(\dot{x}^{1})^{2}(\dot{x}^{0})^{4}+136(\dot{x}^{0})^{6}\right)}{8\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\,,$
(50) $\displaystyle N^{0}{}_{1}(x,\dot{x})=$ $\displaystyle
H\dot{x}^{1}-\frac{H\ell
m\dot{x}^{1}\dot{x}^{0}}{8\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\left(28(\dot{x}^{1})^{8}-179(\dot{x}^{1})^{6}(\dot{x}^{0})^{2}+306(\dot{x}^{1})^{4}(\dot{x}^{0})^{4}\right.$
$\displaystyle\left.+128(\dot{x}^{1})^{2}(\dot{x}^{0})^{6}+32(\dot{x}^{0})^{8}\right),$
$\displaystyle N^{1}{}_{0}(x,\dot{x})=$ $\displaystyle
H\dot{x}^{1}+\frac{H\ell
m(\dot{x}^{1})^{3}\dot{x}^{0}\left(5(\dot{x}^{1})^{6}+18(\dot{x}^{1})^{4}(\dot{x}^{0})^{2}+159(\dot{x}^{1})^{2}(\dot{x}^{0})^{4}+28(\dot{x}^{0})^{6}\right)}{4\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\,,$
(51) $\displaystyle N^{1}{}_{1}(x,\dot{x})=$ $\displaystyle
H\dot{x}^{0}-\frac{H\ell
m(\dot{x}^{1})^{2}\left(2(\dot{x}^{1})^{8}-9(\dot{x}^{1})^{6}(\dot{x}^{0})^{2}+36(\dot{x}^{1})^{4}(\dot{x}^{0})^{4}+97(\dot{x}^{1})^{2}(\dot{x}^{0})^{6}+84(\dot{x}^{0})^{8}\right)}{4\left((\dot{x}^{0})^{2}-(\dot{x}^{1})^{2}\right)^{9/2}}\,,$
(52)
where the worldlines are autoparallel curves of this non-linear connection.
Let us note that some terms of the connection are only present due to the
coupling between the spacetime curvature parameter, $H$, and the one that
gives a non-trivial velocity space, $\ell$. Some curvature-triggered effects
in quantum gravity have been recently analyzed Amelino-Camelia:2020bvx .
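As a cross-check of the $\ell\rightarrow 0$ limit of these expressions, the following sympy sketch (a minimal illustration with our own symbol names) computes the Christoffel symbols (17), the spray coefficients (16), and the Cartan non-linear connection (18) for the undeformed de Sitter metric $g=\mathrm{diag}(1,-a^{2})$, $a=e^{Ht}$. It reproduces the $\ell\rightarrow 0$ limit of Eqs. (48)–(52), up to the $O(H^{2})$ terms discarded in the text, and also shows that in this velocity-independent case the Berwald coefficients $\partial N^{\alpha}{}_{\mu}/\partial y^{\nu}$ of Section III.1 coincide with the Christoffel symbols:

```python
import sympy as sp

t, x1, Hc = sp.symbols('t x1 H', positive=True)
y0, y1 = sp.symbols('y0 y1')
xs, ys = (t, x1), (y0, y1)

a = sp.exp(Hc*t)
g = sp.Matrix([[1, 0], [0, -a**2]])  # undeformed (ell = 0) de Sitter metric
ginv = g.inv()

def gamma(al, mu, nu):  # Christoffel symbols, Eq. (17)
    return sum(ginv[al, be]*(sp.diff(g[mu, be], xs[nu])
                             + sp.diff(g[nu, be], xs[mu])
                             - sp.diff(g[mu, nu], xs[be]))/2
               for be in range(2))

G = [sp.simplify(sum(gamma(al, mu, nu)*ys[mu]*ys[nu]
                     for mu in range(2) for nu in range(2))/2)
     for al in range(2)]  # spray coefficients, Eq. (16)
N = sp.Matrix(2, 2, lambda mu, nu: sp.simplify(sp.diff(G[mu], ys[nu])))  # (18)

print(G)  # [a**2*H*y1**2/2, H*y0*y1]: ell -> 0 limit of Eqs. (48)-(49)
print(N)  # [[0, a**2*H*y1], [H*y1, H*y0]]: cf. Eqs. (50)-(52) at ell = 0
# Berwald coefficients equal the Christoffels when g is y-independent
assert all(sp.simplify(sp.diff(N[al, mu], ys[nu]) - gamma(al, mu, nu)) == 0
           for al in range(2) for mu in range(2) for nu in range(2))
```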
Endowed with these coefficients, the preferred frames that induce the horizontal and vertical decomposition can be immediately found, along with the $N$-linear connection coefficients $L^{\alpha}_{\mu\nu}$ and $C^{\alpha}_{\mu\nu}$, as discussed in Section III.1. So far, only kinematical properties have been discussed; the choice of a given connection should be dictated either by physical conditions imposed on the dynamics of the spacetime or by possible effective gravitational field equations for a quantum configuration space.
To finalize this Section, let us discuss the symmetries of the spacetime. A
deep analysis of the killing vectors of the $H\rightarrow 0$ limit of this
Finsler framework was carried out in Ref. Barcaroli:2015eqe . Even in that
simplified scenario, the equations are quite lengthy, and we omit them here. However, some properties should be mentioned. Firstly, the transformations generated by the killing vectors seem not to exactly preserve the line element, but contribute a term that is a total derivative in the action parameter; therefore, the kinematical results of these two line elements coincide. Secondly, the results found are compatible with the $\kappa$-Poincaré scenario that inspired this approach. From the Finsler perspective, it is possible to derive more general results, but they reduce to those of the bicrossproduct basis of $\kappa$-Poincaré by an appropriate choice of free functions and parameters. The third point is that a finite version of the transformations that preserve the $\kappa$-Poincaré dispersion relation was recently constructed in Ref. Lobo:2021yem through an alternative approach, which does not rely on the killing vectors but is determined by the Finsler function and the definition of momentum (explored in Section IV below); however, a complete integration of the finite isometry and a comparison between these approaches is still missing in the literature. Finally, the case of $H\neq 0$ was investigated in Ref. Lobo:2016xzq , but in conformal coordinates (which are not the ones considered in this application) and not in as much detail as the flat case; a generator of the corresponding curved boost transformation was made explicit in Equation (25) of Ref. Lobo:2016xzq .
## IV The Cotangent Bundle Version of Finsler Geometry
As was discussed in Ref. Girelli:2006fw , by mapping the velocity of the
particle to its momentum, it is possible to find the version of the Finsler
metric defined in the cotangent bundle or phase space. Starting from the definition of the 4-momentum,
$p_{\mu}=m\frac{\partial F}{\partial y^{\mu}}\,,$ (53)
when it is possible to invert this expression to find $y=y(p)$, one can substitute this result into the Finsler metric as
$h^{F}_{\mu\nu}(x,p)=g_{\mu\nu}^{F}(x,y(p))$. This metric is defined on the
slit cotangent bundle, $\widetilde{T^{*}M}=T^{*}M/\\{0\\}$, where we also
remove the zero section at each spacetime point for the same technical reasons as discussed in Section III above. Since the quantities are now defined on the cotangent bundle, we also need to adapt to this setting some of the constructions introduced in Section III for the tangent bundle. This Section follows the notation of Ref. Miron:1994nvt . For instance, under a change of coordinates, the spacetime and momentum variables transform according to
$\displaystyle\tilde{x}^{\mu}$ $\displaystyle=\tilde{x}^{\mu}(x)\,,$ (54)
$\displaystyle\tilde{p}_{\mu}$ $\displaystyle=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}p_{\nu}\,,$ (55)
which means that the frame $(\partial/\partial x^{\mu},\partial/\partial
p_{\nu})$ transforms as
$\displaystyle\frac{\partial}{\partial\tilde{x}^{\mu}}$
$\displaystyle=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\frac{\partial}{\partial
x_{\nu}}+\frac{\partial
p_{\nu}}{\partial\tilde{x}^{\mu}}\frac{\partial}{\partial p_{\nu}}\,,$ (56)
$\displaystyle\frac{\partial}{\partial\tilde{p}_{\mu}}$
$\displaystyle=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\nu}}\frac{\partial}{\partial p_{\nu}}\,.$ (57)
On the other hand, the natural coframe $(dx^{\mu},dp_{\nu})$ changes as
$\displaystyle d\tilde{x}^{\mu}$
$\displaystyle=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}}dx^{\nu}\,,$
(58) $\displaystyle d\tilde{p}_{\mu}$ $\displaystyle=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}dp_{\nu}+\frac{\partial^{2}x^{\nu}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\lambda}}p_{\nu}d\tilde{x}^{\lambda}\,.$
(59)
Similarly to Section III, the presence of a nonlinear connection, $O_{\mu\nu}$, allows one to split the cotangent bundle into horizontal and vertical subbundles. Inspired by the Hamilton case considered in Ref. Barcaroli:2015xda (discussed below), we propose the following dual non-linear connection (constructed in Appendix A):
$O_{\mu\nu}(x,p)=-m\left[N^{\alpha}{}_{\mu}\frac{(g_{\alpha\nu}-p_{\alpha}p_{\nu}/m^{2})}{F}-\partial_{\mu}\dot{\partial}_{\nu}F\right]\Bigg{|}_{(x,y(p))}\,,$
(60)
where $p=p(y)$ is the kinematical map defined by Equation (53). By
construction, these symbols have the transformation properties of a nonlinear
connection,
$\tilde{O}_{\mu\nu}=\frac{\partial
x^{\lambda}}{\partial\tilde{x}^{\mu}}\frac{\partial
x^{\epsilon}}{\partial\tilde{x}^{\nu}}O_{\lambda\epsilon}+\frac{\partial^{2}x^{\beta}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\nu}}p_{\beta}\,.$
(61)
Endowed with a nonlinear connection $O_{\mu\nu}$, one can decompose the
tangent bundle of the cotangent bundle by the Whitney sum at each point
$T_{u}\widetilde{T^{*}M}=O_{u}\oplus V_{u},\,\forall u\in\widetilde{T^{*}M}$.
The subbundle $O_{u}$ is called horizontal space and is spanned by the frame,
$\frac{\delta}{\delta x^{\mu}}=\delta_{\mu}=\frac{\partial}{\partial
x^{\mu}}+O_{\mu\nu}\frac{\partial}{\partial p_{\nu}}\,,$ (62)
and the subbundle $V_{u}$ is called vertical space and is spanned by the frame at each point of $\widetilde{T^{*}M}$:
$\bar{\partial}^{\mu}=\frac{\partial}{\partial p_{\mu}}\,,$ (63)
such that
$T_{u}\widetilde{T^{*}M}=\text{span}\\{\delta_{\mu},\bar{\partial}^{\nu}\\}$.
The transformation properties of the nonlinear connection imply the following rule for transforming this basis:
$\displaystyle\frac{\delta}{\delta\tilde{x}^{\mu}}=\tilde{\delta}_{\mu}=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\frac{\delta}{\delta x^{\nu}}=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\delta_{\nu}\,,$ (64)
$\displaystyle\frac{\partial}{\partial\tilde{p}_{\mu}}=\tilde{\bar{\partial}}^{\mu}=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\nu}}\frac{\partial}{\partial
p_{\nu}}=\frac{\partial\tilde{x}^{\mu}}{\partial
x^{\nu}}\bar{\partial}^{\nu}\,.$ (65)
Equivalently, with the nonlinear connection, we can decompose the cotangent
space $T^{*}_{u}\widetilde{T^{*}M}=\text{span}\\{dx^{\mu},\delta p_{\nu}\\}$,
where
$\delta p_{\mu}=dp_{\mu}-O_{\nu\mu}dx^{\nu}\,.$ (66)
Therefore, the dual basis transforms as
$\displaystyle d\tilde{x}^{\mu}$
$\displaystyle=\frac{\partial\tilde{x}^{\mu}}{\partial x^{\nu}}dx^{\nu}\,,$
(67) $\displaystyle\delta\tilde{p}_{\mu}$ $\displaystyle=\frac{\partial
x^{\nu}}{\partial\tilde{x}^{\mu}}\delta p_{\nu}\,.$ (68)
Similarly to what has been done for the tangent bundle case, such a
decomposition allows us to express a vector and a $1$-form via horizontal and
vertical components, where now, the vertical component is considered along
momenta instead of velocities,
$\displaystyle
X=X^{\mu}\delta_{\mu}+\bar{X}_{\mu}\bar{\partial}^{\mu}=X^{H}+X^{V}\,,$ (69)
$\displaystyle\omega=\omega_{\mu}dx^{\mu}+\bar{\omega}^{\mu}\delta
p_{\mu}=\omega^{H}+\omega^{V}\,.$ (70)
Besides, the metric $\mathbb{H}(x,p)$ of the phase space is defined as follows. Given a metric $h^{\mu\nu}(x,p)$ and the nonlinear connection $O_{\mu\nu}(x,p)$, the quantum phase space presents metrical properties given by the tensor,
$\mathbb{H}(x,p)=h_{\mu\nu}(x,p)dx^{\mu}\otimes dx^{\nu}+h^{\mu\nu}(x,p)\delta
p_{\mu}\otimes\delta p_{\nu}\,.$ (71)
We refer to the tensor $\mathbb{H}$ as the $N$-lift to $\widetilde{T^{*}M}$ of
the metric $h_{\mu\nu}$. The map between $y$ and $p$ cannot, in general, involve parametrization-dependent quantities, because $p$ itself is parametrization-invariant, whereas $y$ is not. That is why one can only assume $y(p)$ in the definition of the metric $h^{F}_{\mu\nu}$.
Endowed with these quantities, one can simply extend Definition III.1 of $d$-tensors to the cotangent case, in which one only needs to employ the nonlinear connection $O_{\mu\nu}$ and the adapted basis defined in this Section.
The above implies that a $d$-tensor $T$ of type $(m+q,n+p)$ can be rewritten
in the preferred basis as
$\displaystyle
T=T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}$
$\displaystyle\frac{\delta}{\delta
x^{\mu_{1}}}\otimes...\otimes\frac{\delta}{\delta
x^{\mu_{m}}}\otimes\frac{\partial}{\partial
p_{\nu_{1}}}\otimes...\otimes\frac{\partial}{\partial p_{\nu_{n}}}$
$\displaystyle\otimes dx^{\alpha_{1}}\otimes...\otimes
dx^{\alpha_{p}}\otimes\delta p_{\beta_{1}}\otimes...\otimes\delta
p_{\beta_{q}}\,,$ (72)
whose components transform according to the usual linear transformation rules, as in Equation (34).
### IV.1 N-Linear Connection
Equivalently, the notion of differentiation can be defined in the cotangent
bundle through the $N$-linear connection $D$, which has the following
coefficients in the frame $(\delta_{\mu},\bar{\partial}^{\nu})$ (see Theorem
4.9.1 in Ref. Miron:1994nvt ):
$\displaystyle
D_{\delta_{\nu}}\delta_{\mu}=H^{\alpha}_{\mu\nu}\delta_{\alpha}\,,\qquad
D_{\delta_{\nu}}\bar{\partial}^{\mu}=-H^{\mu}_{\alpha\nu}\bar{\partial}^{\alpha}\,,$
(73) $\displaystyle
D_{\bar{\partial}^{\nu}}\delta_{\mu}=C^{\alpha\nu}_{\mu}\delta_{\alpha}\,,\qquad
D_{\bar{\partial}^{\nu}}\bar{\partial}^{\mu}=-C_{\alpha}^{\mu\nu}\bar{\partial}^{\alpha}\,.$
(74)
Otherwise, in the frame $(dx^{\mu},\delta p_{\nu})$ one has (see Proposition
4.9.1 in Ref. Miron:1994nvt )
$\displaystyle
D_{\delta_{\nu}}dx^{\mu}=-H^{\mu}_{\alpha\nu}dx^{\alpha}\,,\qquad
D_{\delta_{\nu}}\delta p_{\mu}=H^{\alpha}_{\mu\nu}\delta p_{\alpha}\,,$ (75)
$\displaystyle
D_{\bar{\partial}^{\nu}}dx^{\mu}=-C^{\mu\nu}_{\alpha}dx^{\alpha}\,,\qquad
D_{\bar{\partial}^{\nu}}\delta p_{\mu}=C_{\mu}^{\alpha\nu}\delta
p_{\alpha}\,.$ (76)
Considering an $N$-linear connection $D$ with set of coefficients $D\Gamma(N)=(H^{\alpha}_{\mu\nu},C_{\alpha}^{\mu\nu})$, one can add to it a nonlinear connection, $N_{\mu\nu}$, which is in general independent of the coefficients of $D$, such that the new set is $D\Gamma=(N_{\mu\nu},H^{\alpha}_{\mu\nu},C_{\alpha}^{\mu\nu})$. The derivative of a $d$-tensor in the cotangent bundle then follows the usual rules for dealing with upper and lower indices:
$\displaystyle
T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}_{|\epsilon}$
(77) $\displaystyle=\frac{\delta}{\delta
x^{\epsilon}}T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}+H^{\mu_{1}}_{\gamma\epsilon}T^{\gamma...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}+...-H^{\gamma}_{\nu_{1}\epsilon}T^{\mu_{1}...\mu_{m}}{}_{\gamma...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}\,,$
$\displaystyle
T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}^{||\epsilon}$
(78) $\displaystyle=\frac{\partial}{\partial
p_{\epsilon}}T^{\mu_{1}...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}+C^{\mu_{1}\epsilon}_{\gamma}T^{\gamma...\mu_{m}}{}_{\nu_{1}...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}+...-C^{\gamma\epsilon}_{\nu_{1}}T^{\mu_{1}...\mu_{m}}{}_{\gamma...\nu_{n}\alpha_{1}...\alpha_{p}}{}^{\beta_{1}...\beta_{q}}{}\,.$
Let us note that, from the kinematical map relating velocities and momenta, the coefficients $H^{\alpha}_{\mu\nu}(x,y(p))$ and $C^{\alpha}_{\mu\nu}(x,y(p))$ turn out to be parametrization-invariant.
### IV.2 Finsler–q-de Sitter (Cotangent Bundle Case)
Here, we again consider the $q$-de Sitter-inspired case. Then, using the Finsler function (46), the momenta are given by Equation (53):
$\displaystyle p_{0}=\frac{m\dot{x}^{0}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}}}-\ell\frac{m^{2}a^{2}(\dot{x}^{1})^{2}(a^{2}(\dot{x}^{1})^{2}+(\dot{x}^{0})^{2})}{2[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{2}}\,,$ (79) $\displaystyle p_{1}=-\frac{ma^{2}\dot{x}^{1}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}}}+\ell\frac{m^{2}a^{2}(\dot{x}^{0})^{3}\dot{x}^{1}}{[(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}]^{2}}\,,$ (80)
which furnishes helpful expressions that are used throughout this Section and constitute a common trick when trying to find momentum-dependent quantities from the Finsler approach:
$\displaystyle\frac{m\dot{x}^{0}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}}}=p_{0}+\ell\frac{a^{-2}(p_{1})^{2}(a^{-2}(p_{1})^{2}+(p_{0})^{2})}{2m^{2}}\,,$
(81)
$\displaystyle\frac{ma\dot{x}^{1}}{\sqrt{(\dot{x}^{0})^{2}-a^{2}(\dot{x}^{1})^{2}}}=-a^{-1}p_{1}\left(1+\ell\frac{(p_{0})^{3}}{m^{2}}\right)\,.$
(82)
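These inversions can be verified directly. The following sympy sketch (with u and v denoting $\dot{x}^{0}$ and $\dot{x}^{1}$, our own shorthand) substitutes Eqs. (79) and (80) into the right-hand sides of Eqs. (81) and (82) and confirms both relations to first order in $\ell$:

```python
import sympy as sp

m, ell, a = sp.symbols('m ell a', positive=True)
u, v = sp.symbols('u v', positive=True)  # u = xdot^0, v = xdot^1
W = sp.sqrt(u**2 - a**2*v**2)

# Eqs. (79)-(80)
p0 = m*u/W - ell*m**2*a**2*v**2*(a**2*v**2 + u**2)/(2*W**4)
p1 = -m*a**2*v/W + ell*m**2*a**2*u**3*v/W**4

# Right-hand sides of Eqs. (81)-(82)
rhs81 = p0 + ell*(p1**2/a**2)*(p1**2/a**2 + p0**2)/(2*m**2)
rhs82 = -p1/a*(1 + ell*p0**3/m**2)

for d in (m*u/W - rhs81, m*a*v/W - rhs82):
    print(sp.simplify(sp.series(d, ell, 0, 2).removeO()))  # -> 0 and 0
```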
The above expressions allow us to express the Finsler metric through its
momentum dependence:
$\displaystyle
g_{\mu\nu}^{F}(x,\dot{x}(p))=h_{\mu\nu}^{F}(x,p)=\begin{pmatrix}1+\frac{3\ell
p_{0}(p_{1})^{4}}{m^{4}}&-\frac{\ell
a(p_{1})^{3}[(p_{1})^{2}-4(p_{0})^{2}]}{2m^{4}}\\\ -\frac{\ell
a(p_{1})^{3}[(p_{1})^{2}-4(p_{0})^{2}]}{2m^{4}}&-a^{2}+\frac{\ell
a^{2}(p_{0})^{3}[2(p_{0})^{2}+(p_{1})^{2}]}{m^{4}}\end{pmatrix}\,,$ (83)
which can be called a "Finsler-rainbow metric".
One can also find the induced non-linear connection in the cotangent bundle
through the definition (60) to read as
$\displaystyle O_{00}(x,p)=$
$\displaystyle-\frac{H\ell(p_{1})^{2}}{8m^{10}}\left[4(p_{0})^{10}+44(p_{0})^{8}(p_{1})^{2}+190(p_{0})^{6}(p_{1})^{4}-196(p_{0})^{4}(p_{1})^{6}\right.$
$\displaystyle\left.+31(p_{0})^{2}(p_{1})^{8}+32(p_{1})^{10}\right]\,,$ (84)
$\displaystyle O_{01}(x,p)=$ $\displaystyle Hp_{1}-\frac{\ell
Hp_{0}p_{1}}{8m^{10}}\left[-4m^{8}(p_{0})^{2}+8(p_{0})^{10}+32(p_{0})^{8}(p_{1})^{2}+206(p_{0})^{6}(p_{1})^{4}\right.$
$\displaystyle\left.-212(p_{0})^{4}(p_{1})^{6}+43(p_{0})^{2}(p_{1})^{8}+28(p_{1})^{10}\right]\,,$
(85) $\displaystyle O_{10}(x,p)=$ $\displaystyle Hp_{1}-\frac{H\ell
p_{0}p_{1}}{8m^{10}}\left(-4m^{8}(p_{0})^{2}+4(p_{0})^{10}+140(p_{0})^{8}(p_{1})^{2}+2(p_{0})^{6}(p_{1})^{4}\right.$
$\displaystyle\left.-106(p_{0})^{4}(p_{1})^{6}+61(p_{0})^{2}(p_{1})^{8}+4(p_{1})^{10}\right)\,,$
(86) $\displaystyle O_{11}(x,p)=$ $\displaystyle
Hp_{0}+\frac{H\ell}{8m^{10}}\left(4(p_{0})^{2}(p_{1})^{2}\left(m^{8}+3(p_{1})^{8}\right)+8m^{8}(p_{1})^{4}-8(p_{0})^{12}-124(p_{0})^{10}(p_{1})^{2}\right.$
$\displaystyle\left.-30(p_{0})^{8}(p_{1})^{4}+138(p_{0})^{6}(p_{1})^{6}-89(p_{0})^{4}(p_{1})^{8}-4(p_{1})^{12}\right)\,.$
(87)
From these expressions, one can construct the decomposition of the tangent and
cotangent spaces of the cotangent bundle into horizontal and vertical parts,
accordingly.
## V Geometry of the Cotangent Bundle: Hamilton Geometry
Besides the Finsler geometry, another interesting proposal for building a
natural geometry for propagation of particles that probe a modified dispersion
relation consists of the so-called Hamilton geometry. In this case, different
from the Finsler geometry, we start with a geometric structure defined in the
cotangent bundle (the definitions used in this metric follow that in the book
Miron:1994nvt and in papers Barcaroli:2015xda ; Barcaroli:2016yrl ;
Barcaroli:2017gvg ; Pfeifer:2018pty ).
A Hamilton space is a pair, $(M,H(x,p))$, where $M$ is a smooth manifold and
$H:T^{*}M\rightarrow{\mathbb{R}}$ is a continuous function on the cotangent
bundle that satisfies the following properties:
1. 1.
$H$ is smooth on the manifold $\widetilde{T^{*}M}$;
2. 2.
the Hamilton metric, $h_{H}$, with components,
$h_{H}^{\mu\nu}(x,p)=\frac{1}{2}\frac{\partial}{\partial
p_{\mu}}\frac{\partial}{\partial p_{\nu}}H(x,p)\,,$ (88)
is nondegenerate.
Since one does not have an arc-length functional, worldlines as extremizing
curves are an absent concept in this approach. Instead, the equations of
motion of a particle that obeys a given Hamiltonian are given by the Hamilton
equations of motion:
$\displaystyle\dot{x}^{\mu}$ $\displaystyle=\frac{\partial H}{\partial p_{\mu}}\,,$ (89) $\displaystyle\dot{p}_{\mu}$ $\displaystyle=-\frac{\partial H}{\partial x^{\mu}}\,.$ (90)
Since this is just another metric structure defined in the cotangent bundle, the same tools for coordinate transformations given by Equations (54) and (55) are applicable here. As in Hamiltonian mechanics, the definition of Poisson brackets is sufficient for our purposes. For two real-valued functions $F(x,p)$ and $G(x,p)$, the Poisson bracket is given as in Ref. Barcaroli:2015xda (the geometry of the cotangent bundle with a deformed Hamiltonian can also be described with the language of symplectic geometry, which is reviewed in Ref. Nozari:2014qja ):
$\\{F(x,p),G(x,p)\\}=\partial_{\mu}F\bar{\partial}^{\mu}G-\partial_{\mu}G\bar{\partial}^{\mu}F\,.$
(91)
As above, in order to divide the tangent and cotangent spaces of the cotangent
bundle into horizontal and vertical spaces, a non-linear connection is
necessary, and the canonical choice is given in Theorem 5.5.1 of Ref.
Miron:1994nvt and Definition 2 of Ref. Barcaroli:2015xda as
$O_{\mu\nu}(x,p)=\frac{1}{4}(\\{h_{\mu\nu}^{H},H\\}+h^{H}_{\mu\alpha}\partial_{\nu}\bar{\partial}^{\alpha}H+h^{H}_{\nu\alpha}\partial_{\mu}\bar{\partial}^{\alpha}H)\,,$
(92)
where $h_{\mu\nu}^{H}$ is the inverse of the metric $h^{\mu\nu}_{H}$. This
non-linear connection allows us to use the basis
$\delta_{\mu}=\partial_{\mu}+O_{\mu\nu}\bar{\partial}^{\nu}$ and
$\bar{\partial}^{\mu}$ as a special basis of $T_{(x,p)}\widetilde{T^{*}M}$,
and to use the basis $dx^{\mu}$ and $\delta
p_{\mu}=dp_{\mu}-O_{\nu\mu}dx^{\nu}$ as a special basis of
$T^{*}_{(x,p)}\widetilde{T^{*}M}$, which transforms according to Equations
(64), (65) and (67), (68).
Endowed with these coefficients, following Theorem 5.6.1 of Ref. Miron:1994nvt
, there exists a unique $N$-linear connection
$D\Gamma(O)=(H^{\alpha}_{\mu\nu},C_{\alpha}^{\mu\nu})$ such that:
1. 1.
$O_{\mu\nu}$ is the canonical non-linear connection;
2. 2.
the metric $h^{\mu\nu}_{H}$ is $h$-covariant constant (no horizontal non-
metricity):
$D_{\delta_{\alpha}}h_{H}^{\mu\nu}=0\,;$ (93)
3. 3.
the metric $h^{\mu\nu}_{H}$ is $v$-covariant constant (no vertical non-
metricity):
$D_{\bar{\partial}^{\alpha}}h_{H}^{\mu\nu}=0\,;$ (94)
4. 4.
$D\Gamma(O)$ is horizontally torsion free:
$T^{\alpha}_{\ \mu\nu}=H^{\alpha}_{\mu\nu}-H^{\alpha}_{\nu\mu}=0\,;$ (95)
5. 5.
$D\Gamma(O)$ is vertically torsion free:
$S_{\alpha}^{\ \mu\nu}=C_{\alpha}^{\mu\nu}-C_{\alpha}^{\nu\mu}=0\,;$ (96)
6. 6.
the triple $(O_{\mu\nu},H^{\alpha}_{\mu\nu},C_{\alpha}^{\mu\nu})$ has
coefficients given by
$\displaystyle
O_{\mu\nu}(x,p)=\frac{1}{4}(\\{h_{\mu\nu}^{H},H\\}+h^{H}_{\mu\alpha}\partial_{\nu}\bar{\partial}^{\alpha}H+h^{H}_{\nu\alpha}\partial_{\mu}\bar{\partial}^{\alpha}H)\,,$
(97) $\displaystyle
H^{\alpha}_{\mu\nu}=\frac{1}{2}h_{H}^{\alpha\beta}(\delta_{\mu}h^{H}_{\beta\nu}+\delta_{\nu}h^{H}_{\beta\mu}-\delta_{\beta}h^{H}_{\mu\nu})\,,$
(98) $\displaystyle
C_{\alpha}^{\mu\nu}=-\frac{1}{2}h_{\alpha\beta}^{H}\bar{\partial}^{\mu}h_{H}^{\beta\nu}\,.$
(99)
This is called the Cartan $N$-linear covariant derivative. Equivalently, the notions of $d$-tensors and their derivatives discussed in Section IV.1 apply here as well.
### V.1 Symmetries
Hamilton geometry also allows one to encompass a DSR language, as was the case for the Finsler geometry discussed in Section III.2. However, its realization does not come from the invariance of an interval $ds^{2}$, since one is not available, but from the invariance of the Hamiltonian function $H(x,p)$. The approach we highlight here starts from Definition 4 of Section II-D of Ref. Barcaroli:2015xda . In a Hamilton space $(M,H)$ with
manifold $M$ and Hamiltonian $H$, let $X=\xi^{\mu}\partial_{\mu}$ be a vector field on the base manifold $M$ and
$X^{C}=\xi^{\mu}\partial_{\mu}-p_{\nu}\partial_{\mu}\xi^{\nu}\bar{\partial}^{\mu}$
be the so-called complete lift of $X$ to $\widetilde{T^{*}M}$. A symmetry of
the Hamiltonian is a transformation generated by $X^{C}$, whose components
satisfy
$X^{C}(H)=\xi^{\mu}\partial_{\mu}H-p_{\nu}\partial_{\mu}\xi^{\nu}\bar{\partial}^{\mu}H=0\,.$
(100)
Differentiating this expression twice with respect to the momenta, one gets the following result:
$0=\frac{1}{2}\bar{\partial}^{\mu}\bar{\partial}^{\nu}X^{C}(H)=\xi^{\alpha}\partial_{\alpha}h_{H}^{\mu\nu}-h_{H}^{\mu\alpha}\partial_{\alpha}\xi^{\nu}-h_{H}^{\nu\alpha}\partial_{\alpha}\xi^{\mu}-p_{\beta}\partial_{\alpha}\xi^{\beta}\bar{\partial}^{\alpha}h_{H}^{\mu\nu}\,.$
(101)
This is just the generalization of the killing equation to a general Hamilton space; if $h_{H}$ does not depend on the momenta, it reduces to the standard Riemannian case. Besides, from the expression of the Poisson brackets (91), it can be verified that such symmetries give rise to conserved charges $\xi^{\mu}p_{\mu}$, i.e., quantities that Poisson-commute with the Hamiltonian:
$\\{\xi^{\mu}p_{\mu},H\\}=0.$ (102)
These are the charges that, at an algebraic level, can generate translations,
boosts, and rotations, for instance.
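For instance, in the flat undeformed limit $H=p_{0}^{2}-p_{1}^{2}$, the boost charge $\xi^{\mu}p_{\mu}$ with $\xi=(x^{1},x^{0})$ Poisson-commutes with the Hamiltonian, as the following minimal sympy sketch (illustrative only) confirms:

```python
import sympy as sp

x0, x1, p0, p1 = sp.symbols('x0 x1 p0 p1')
xs, ps = (x0, x1), (p0, p1)

def poisson(F, G):  # Poisson bracket of Eq. (91)
    return sum(sp.diff(F, xs[m])*sp.diff(G, ps[m])
               - sp.diff(G, xs[m])*sp.diff(F, ps[m]) for m in range(2))

Ham = p0**2 - p1**2        # undeformed limit of Eq. (45)
charge = x1*p0 + x0*p1     # xi^mu p_mu for the boost xi = (x1, x0)
print(sp.simplify(poisson(charge, Ham)))  # -> 0, i.e., Eq. (102) holds
```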
### V.2 Hamilton–q-de Sitter (Cotangent Bundle Case)
As an example, we rely on the results presented in Ref. Barcaroli:2015xda , which are likewise inspired by the $q$-de Sitter Hamiltonian (45). In this case, the Hamilton metric, defined by Equation (88), reads:
$\displaystyle h^{\mu\nu}_{H}(x,p)=\begin{pmatrix}1&-\ell p_{1}(1+2Hx^{0})\\\
-\ell p_{1}(1+2Hx^{0})&-(1+2Hx^{0})(1+\ell p_{0})\end{pmatrix}\,,$ (103)
which, as can be seen, acquires a much simpler form than the rainbow-Finsler one (83) due to the more direct way in which it is calculated.
The non-linear connection can be read from Equation (92) and can be cast in a
matrix form due to its simplicity:
$\displaystyle O_{\mu\nu}(x,p)=\begin{pmatrix}H\ell p_{1}^{2}&Hp_{1}\\\
Hp_{1}&Hp_{0}(1-\ell p_{0})\end{pmatrix}\,.$ (104)
As expected, it coincides with the Finsler case of Section IV.2, Eqs. (84)-(87), in the Riemannian limit, i.e., when $\ell=0$.
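The metric (103) and the connection (104) can also be obtained mechanically. The following sympy sketch reproduces them directly from Equations (88), (91), and (92) to first order in $\ell$ and $H$. Note that the sketch takes $H(x,p)=p_{0}^{2}-p_{1}^{2}(1+2Hx^{0})(1+\ell p_{0})$, i.e., the sign of the $Hx^{0}$ term is the one that matches Eqs. (103) and (104); conventions for this sign differ among the cited references, so this choice should be read as an assumption of the sketch:

```python
import sympy as sp

x0, x1, p0, p1, ell, Hc = sp.symbols('x0 x1 p0 p1 ell H')
xs, ps = (x0, x1), (p0, p1)

# q-de Sitter-type Hamiltonian; the sign of the 2*H*x0 term is chosen so
# that Eqs. (103)-(104) are reproduced (an assumption, see the text above)
Ham = p0**2 - p1**2*(1 + 2*Hc*x0)*(1 + ell*p0)

def first_order(e):
    # keep terms at most linear in ell and in H (mixed ell*H terms survive)
    e = sp.series(e, ell, 0, 2).removeO()
    return sp.expand(sp.series(e, Hc, 0, 2).removeO())

h_up = sp.Matrix(2, 2, lambda m, n: sp.diff(Ham, ps[m], ps[n])/2)  # Eq. (88)
print(h_up)  # reproduces Eq. (103)
h_dn = h_up.inv().applyfunc(first_order)  # h_{mu nu}, truncated

def poisson(F, G):  # Eq. (91)
    return sum(sp.diff(F, xs[m])*sp.diff(G, ps[m])
               - sp.diff(G, xs[m])*sp.diff(F, ps[m]) for m in range(2))

def nonlin(m, n):  # Eq. (92)
    t = poisson(h_dn[m, n], Ham)
    t += sum(h_dn[m, al]*sp.diff(Ham, xs[n], ps[al]) for al in range(2))
    t += sum(h_dn[n, al]*sp.diff(Ham, xs[m], ps[al]) for al in range(2))
    return first_order(t/4)

print(sp.Matrix(2, 2, nonlin))  # reproduces Eq. (104)
```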
The Hamilton equations of motion can be found from Equation (89) and read:
$\displaystyle\dot{x}^{0}-2p_{0}+\ell p_{1}^{2}(1+2Hx^{0})=0\,,$ (105)
$\displaystyle\dot{x}^{1}+2p_{1}(1+Hx^{0})+2\ell p_{0}p_{1}(1+2Hx^{0})=0\,,$
(106) $\displaystyle\dot{p}_{0}-2Hp_{1}^{2}-2H\ell p_{0}p_{1}^{2}=0\,,$ (107)
$\displaystyle\dot{p}_{1}=0\,.$ (108)
The autoparallel (horizontal) curves of the non-linear connection satisfy (see
Equation (8.2) in Ref. Miron:1994nvt )
$\dot{p}_{\mu}-O_{\nu\mu}\dot{x}^{\nu}=0\,,$ (109)
and, as can be seen from Equation (104) for $O_{\mu\nu}$, the worldlines,
defined from the Hamilton equations of motion, are not autoparallels of the
non-linear connection.
The symmetries have also been analyzed in Ref. Barcaroli:2015xda , where it was noticed that the conserved charges that generate translations and boosts coincide with the results of Ref. Barcaroli:2015eqe , which do not rely on the geometrical approach used in this paper.
## VI The Tangent-Bundle Version of Hamilton Geometry
Endowed with Hamilton equations of motion (89), one has a map between the
momenta and velocities from $\dot{x}^{\mu}=y^{\mu}=\partial H/\partial
p_{\mu}$. When it is possible to invert this map to find $p_{\mu}=p_{\mu}(y)$
(as done in Appendix B of Ref. Barcaroli:2017gvg ), one derives an interesting
map between the cotangent and tangent space version of Hamilton geometry.
Indeed, using this map, a Hamilton metric defined in the tangent bundle reads:
$\displaystyle g_{H}^{\mu\nu}(x,y)\doteq h_{H}^{\mu\nu}(x,p(y))\,.$ (110)
The dual non-linear connection in this case has been discussed in Appendix C
of Ref. Barcaroli:2015xda , and is given by
$N(x,y)^{\mu}{}_{\nu}=2O(x,p(y))_{\nu\alpha}h_{H}^{\alpha\mu}(x,p(y))-(\partial_{\nu}\bar{\partial}^{\mu}H)|_{p=p(y)}\,.$
(111)
Its main property is the preservation of the horizontal tangent spaces of the
cotangent and tangent bundle connected through the kinematical map
$y^{\mu}=\partial H/\partial p_{\mu}$.
With this map, it is possible to define the dual non-linear and $N$-linear
connections, now defined in the tangent bundle. It should be stressed that
although this gives geometrical quantities defined in the tangent bundle, this
does not represent a Finsler geometry, since there is no arc-length functional
and the Hamilton metric is not, in general, 0-homogeneous to start with.
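The lack of 0-homogeneity is immediate to check: under $y\rightarrow\alpha y$, the entries of the metric (114) that are linear in $y$ rescale, as the following short sympy check (illustrative) makes explicit for the off-diagonal entry:

```python
import sympy as sp

x0, y1, ell, Hc, alpha = sp.symbols('x0 y1 ell H alpha', positive=True)
g01 = ell*(Hc*x0*y1 + y1)/2  # off-diagonal entry of Eq. (114)
print(sp.simplify(g01.subs(y1, alpha*y1) - g01))  # nonzero unless alpha = 1
```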
### Hamilton-$\kappa$-Poincaré (Tangent Bundle Case)
The kinematical map $p=p(y)$ is found by inverting the relation $y^{\mu}=\partial H/\partial p_{\mu}$ for the $q$-de Sitter Hamiltonian, and is given by
$\displaystyle p_{0}=\frac{y^{0}}{2}+\ell\frac{(y^{1})^{2}}{8}\,,$ (112)
$\displaystyle
p_{1}=-\frac{y^{1}}{2}+H\frac{x^{0}y^{1}}{2}+\ell\frac{y^{0}y^{1}}{4}\,.$
(113)
The metric in the tangent bundle reads:
$\displaystyle
g^{\mu\nu}_{H}(x,y)=\begin{pmatrix}1&\ell(Hx^{0}y^{1}+y^{1})/2\\\
\ell(Hx^{0}y^{1}+y^{1})/2&-(1+2Hx^{0})(1+\ell y^{0}/2)\end{pmatrix}\,.$ (114)
The dual non-linear connection reads
$\displaystyle N^{\mu}{}_{\nu}(x,y)=\begin{pmatrix}-H\ell(y^{1})^{2}/2&\ell Hy^{0}y^{1}+Hy^{1}\\\ Hy^{1}&-Hy^{0}-3\ell H(y^{1})^{2}/4\end{pmatrix}\,.$ (115)
In Section VII below, some key points of each approach are discussed while
comparing the descriptions of configuration and phase spaces.
## VII Advantages and Difficulties of Each Formalism
The approaches considered (Finsler and Hamilton spaces) each present points that can be regarded as positive or negative. In this Section, we highlight those points that seem most important from the theoretical and phenomenological points of view.
### VII.1 Finsler Geometry
Let us emphasize that we do not give a complete list of positive or negative points here; the points listed represent only our view on the subject under scrutiny, and aspects that we classify one way may be seen by others quite differently.
#### VII.1.1 Advantages
Preservation of the equivalence principle. Due to the presence of an arc-
length functional, the extremizing geodesics of the Finsler function are the
same worldlines of the Hamiltonian, from which the arc-length was derived.
This means that, in the Finslerian language, the equivalence principle is satisfied, insofar as the worldlines are trajectories of free particles in this spacetime. There is a fundamental difference in comparison with the special or general relativistic formulation: these trajectories are now mass-dependent, since the Finsler function and the metric carry the mass of the particle due to Planck-scale effects. Intriguingly, although the metric does
not present a massless limit (which is discussed below), it is possible to
find trajectories of massless particles, which are compatible with the
Hamiltonian formulation, by taking the limit $m\rightarrow 0$ in the geodesic
Equation Amelino-Camelia:2014rga ; Lobo:2016xzq . This finding leads to some
effects due to modifications of the trajectories of particles. For instance,
one of the most explored avenues of quantum gravity phenomenology (maybe
competing with threshold effects) is the time delay with which particles of different energies arrive at a detector after an (almost) simultaneous emission Jacob:2008bw ; Zhu:2022blp (for reviews, see Amelino-Camelia:2008aez ; Addazi:2021xuf ). This kind of experimental investigation is not exhausted, and novelties have arrived with the analysis of sets of gamma-ray bursts and
candidate neutrinos emitted from them in the multimessenger astronomy approach
Amelino-Camelia:2016ohi ; Amelino-Camelia:2022pja .
Preservation of the relativity principle. This formalism allows one to derive
and solve the killing equation, which furnishes infinitesimal symmetry
transformations of the metric. It has been shown in Ref. Amelino-
Camelia:2014rga that generators of these transformations can be constructed
and identified with the transformations generally considered in doubly special relativity. The latter implies a preservation of the relativity principle: inertial frames should assign the same MDR to a given particle, which, in turn, implies that the deformation scale of quantum gravity is observer-independent, i.e., two observers would not assign different values, in the same system of units, to the quantum gravity scale.
This preservation has important phenomenological consequences, such as the
point that the threshold constraints on the quantum gravity parameter do not
apply in the DSR scenario. The reason is that, accompanied by the deformation
of the Lorentz (Poincaré) symmetries, comes a deformation of the composition
law of momenta of particles (for instance $p$ and $q$), such that the nature of interaction vertices does not get modified when transforming from one frame to another:
$\Lambda(p\oplus q)=\Lambda(p)\oplus\Lambda(q)\,,$ (116)
where $\Lambda$ is a deformed Lorentz transformation induced by the killing
vectors and $\oplus$ represents a modified composition of components of the
involved momenta (this covariance condition usually needs a back-reaction on
the boost parameter, but we do not dwell on that here; for more details, see
Majid:2006xn ; Lobo:2021yem and references therein). Threshold constraints,
such as the one placed in Ref. HAWC:2019gui , assume that the composition of momenta is undeformed, although the dispersion relation is modified, as in a Lorentz invariance violation (LIV) scenario. When this is the case, processes that are forbidden in special relativity, such as the decay of the photon into an electron–positron pair, become kinematically allowed above a given threshold energy. The non-observation of such decays allows one to place constraints on the quantum gravity parameter. When the composition law is deformed as well, what generally happens is that these kinds of processes remain forbidden, or the modifications in the threshold energies are so minute that they are unobservable for a quantum gravity parameter of the order of the Planck energy
Lobo:2021yem . This is an important feature of "deforming" instead of
"violating" the Lorentz symmetry.
Preservation of the clock postulate. The availability of an arc-length functional makes it possible to analyze the consequences of identifying it with the proper time of a given particle. If this is the case, then the worldlines or geodesics are just paths that extremize the proper time an observer measures in spacetime, similarly to special relativity. One of the consequences of this feature is the possibility of connecting the time elapsed in the comoving frame of a particle during its lifetime (which is its lifetime at rest) and the coordinate time, which is the one assigned to this phenomenon in laboratory coordinates. Using this expression, one can investigate the relativistic time dilation (responsible for the "twin paradox") or the so-called first clock effect (for further details on the first and also on the second clock effect, which can appear in theories with a non-metricity tensor, see Ref. Lobo:2018zrz ), in which, for instance, the lifetime of a particle is dilated in comparison with the one assigned in the laboratory. Due to Finslerian corrections, the lifetime of a particle in the laboratory would receive Planckian corrections, which is, in fact, a novel avenue of phenomenological investigation currently being carried out Lobo:2020qoa ; Lobo:2021yem through the search for signatures in particle accelerators and cosmic rays.
#### VII.1.2 Difficulties
Absence of a massless rainbow Finsler metric. The Finsler approach emerged as an opportunity to describe in a consistent way the intuition that the quantum spacetime probed by a high-energy particle would present corrections depending on the energy-momentum of the particle itself, which is justified by different approaches to quantum gravity Assanioussi:2014xmz ; Weinfurtner:2008if . Since
then, proposals of rainbow metrics have considered a smooth transition from
massive to massless cases, not only from the point of view of the
trajectories, but from the metric itself. This is not the case for the Finsler
approach presented here. Although the trajectories and symmetries are defined
for both massive and massless cases by considering the $m\rightarrow 0$ limit,
the rainbow metric of Finsler geometry, given by Equation (83), is certainly
not defined for massless particles. The reason is that, when passing from the Hamiltonian to the Lagrangian formalism, we defined an arc-length functional, which is not a legitimate action functional for massless particles. In other words, a crucial step for deriving the Finsler function is the handling of the Lagrange multiplier $\lambda$ of action (4), whose equation can only be solved if the particle is massive, as can be found in Refs. Amelino-Camelia:2014rga ; Lobo:2016xzq ; Lobo:2016lxm ; Lobo:2020qoa . A possibility that has been explored consists of not solving the equation for $\lambda$ and defining a metric that depends on $\lambda$ and on velocities from a Polyakov-like action for free particles (instead of the Nambu–Goto one given by the arc-length), which turned out to lie outside the scope of Finsler geometry Lobo:2016xzq ; Lobo:2016lxm . However, this possibility has not been further
explored beyond preliminary investigations. The issue of the absence of a
massless rainbow-Finsler metric could be circumvented by proposing a different
kind of geometry, which from the very beginning started from the momenta
formulation, like the other possibility described in this paper, namely the
Hamilton geometry.
Definition only through perturbations. The Finsler geometry has been considered in this paper, in this context, at most perturbatively around the quantum-gravity length scale (or inverse energy scale), which may be considered a negative point if one aims at a more fundamental or theoretical formulation. Nevertheless, from the pragmatic perspective of phenomenology, since such effects, if they exist, are minute, the perturbative approach is enough for proposing new effects that could serve as avenues of experimental investigation.
The handling of finite symmetries. Another issue that can be problematic is
the handling of finite symmetries in the Finsler context. To date, the connection between Finsler geometry and quantum gravity phenomenology has not tackled the issue of integrating the symmetries and finding finite versions of deformed Lorentz transformations. Some initial investigations were carried out in Ref. Lobo:2021yem from the momentum space perspective, and further issues are currently being addressed by some authors of the present paper.
### VII.2 Hamilton Geometry
The descriptions of the points in this section will be a bit shorter than the previous ones, because some common points were already described above; we refer the reader to them when that is the case.
#### VII.2.1 Advantages
Presence of a massless rainbow Hamilton metric. Differently from the Finsler case, Hamilton geometry does not need an arc-length functional; it only needs a given Hamiltonian, from which the metric, non-linear connection, and symmetries are derived. This means that the massless limit of the geometrical quantities exists from the very beginning.
Does not require perturbative methods. Another positive point about Hamilton geometry is that one can work with the exact form of the proposed Hamiltonian, without needing to consider perturbations around a certain scale. Instead, independently of the form of the (smooth) dispersion relation that arises from de facto approaches to quantum gravity, the geometry can be handled exactly, as has been considered, e.g., in Refs. Barcaroli:2016yrl ; Barcaroli:2017gvg .
Preservation of the relativity principle and the handling of symmetries. Due to the proximity of this approach to the way the DSR formalism generally handles Planck-scale corrections, i.e., from the point of view of momentum space and Hamilton equations, the handling of symmetries is facilitated in this approach. For instance, it is straightforward to find the conserved charges from the Killing vectors, which generate finite momenta-dependent transformations, without the tedious terms in the denominators of the equations that appear when one works in velocity space, in which Finsler geometry is initially formulated (or without mass terms in the denominator in the Finsler version of the phase space).
Generalization to curved spacetimes. This approach has been considered for more general curved spacetimes, beyond the $q$-de Sitter case exemplified in this paper; for instance, its spherically symmetric and cosmological versions have been explored, giving rise to interesting phenomenological opportunities from the point of view of time delays and gravitational redshift, among others (for some applications of Hamilton geometry in this context, see Pfeifer:2018pty and references therein).
#### VII.2.2 Difficulties
Non-geodesic trajectory. An issue that may be considered problematic is that the worldlines of particles, given by the Hamilton equations, are not geodesics of the non-linear connection; that is, a force term appears in the geodesic equation, in contrast with the Finsler case. This is a general property of Hamilton geometry, as shown in Ref. Barcaroli:2015xda , and is not specific to the $q$-de Sitter case analyzed here.
Absence of the arc-length. Hamilton geometry is not equipped with an arc-length functional, which means that the only geodesics present are those of the non-linear or of the $N$-linear connections; there are no extremizing ones. The absence of a functional that allows one to measure distances in spacetime can be seen as a difficulty of this geometry: if distances cannot be calculated, one may wonder what such a metric means. Even if the norm of a tangent vector can be integrated, this integral is not, in general, parametrization-independent, which is a further drawback of this approach. Besides, the absence of an arc-length limits the phenomenology of the preservation of the clock postulate discussed in the Finsler case.
## VIII Final Remarks
We reviewed two proposals that have been considered as candidates for describing the quantum configuration and phase spaces probed by particles whose kinematics are modified by a length scale identified with the quantum gravity scale.
Finsler geometry starts from a configuration space framework that has applications of its own in biology, thermodynamics, and modified gravity; it finds a natural environment in quantum gravity phenomenology due to its power to describe a scenario in which important principles that guided physics in the twentieth century, such as the relativity principle, are preserved even at the Planckian regime. Besides its traditional description in terms of the pair of spacetime and velocity space (configuration space), we also explored its development in terms of the induced pair of spacetime and momentum space (phase space), which is actually more appropriate for a purely quantum description than the configuration space. Some points that we consider positive and negative, which are consequences of the requirement for using the Finsler language, namely the derivation of an arc-length functional defined on the slit tangent bundle, are discussed in Section VII.
The second case of the present study is the Hamilton geometry, whose properties are derived directly from the Hamiltonian itself, without the need to go through the definition of an arc-length. Actually, in general, the Hamilton metric does not even define a curve-parametrization-invariant length measure, which brings some limitations to phenomenological investigations of this subject in quantum gravity. On the other hand, this construction circumvents some intrinsic difficulties of Finsler geometry, which were also discussed in Section VII.
The goal of this paper was to review some topics of these two important geometries by using kinematical descriptions of particles whose behavior might present departures from special relativity results due to effective quantum gravity influence. We also aimed to highlight some points that we consider under-explored perspectives on the subject, by explicitly presenting geometric quantities that are dual to those in which these approaches were originally formulated, namely the dual metrics and non-linear connections of Finsler and Hamilton geometries in the cotangent and tangent bundles, respectively (the Finslerian dual non-linear connection was proposed in this paper, inspired by definitions in the Hamilton geometry literature).
At least two global points could be considered insufficiently explored or unexplored in this subject. One is the geometry probed by a (non-)interacting multi-particle system. Some challenges of this problem can be found, for instance, in Ref. Hossenfelder:2014ifa , but the relations between the approaches described there and Finsler/Hamilton geometries remain unclear. Another point that remains unexplored is the dynamics of the configuration/phase space in a way that is compatible with quantum gravity phenomenology-inspired approaches. For instance, one could wonder whether there exists a gravitational field theory defined in Finsler or Hamilton spaces that has $q$-de Sitter or other proposals as solutions, and how matter would interact in this scenario. The exploration of this topic might shed light on the one regarding a multi-particle system. These are further challenges that might be subjects of future research in this area and that may help to build a bridge between quantum and modified gravities.
## Acknowledgements
I.P.L. was partially supported by the National Council for Scientific and
Technological Development—CNPq grant 306414/2020-1, and by the grant
3197/2021, Paraíba State Research Foundation (FAPESQ). I.P.L. would like to
acknowledge the contribution of the COST Action CA18108. L.C.N.S. would like
to thank Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
for partial financial support through the research project no. 164762/2020-5.
V.B.B. is partially supported by CNPq through the research project no.
307211/2020-7. P.H.M. and S.A. thank Coordenação de Aperfeiçoamento de Pessoal
de Nível Superior—Brazil (CAPES)—Finance Code 001, for financial support.
G.V.S. thanks Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for financial support. E.R. and G.M. were supported by the PIBIC program of the Federal University of Paraíba.
## Appendix A Dual Finsler Nonlinear Connection
The momenta of a particle in Finsler geometry, given by the following
expression,
$p_{\mu}=m\frac{\partial F}{\partial y^{\mu}}\equiv m\dot{\partial}_{\mu}F\,,$
(117)
define a kinematical map between velocity and momentum variables at each given
point in the base manifold $M$. We refer to such a map as
$\displaystyle\flat:\quad\widetilde{TM}\rightarrow\widetilde{T^{\ast}}M$ (118)
$\displaystyle(x,y)\mapsto\flat(x,y)=(x,m\dot{\partial}F(x,y))=(x,p(x,y))\,.$
(119)
Inspired by the construction of Ref. Barcaroli:2015xda , the condition that a
nonlinear connection in the tangent bundle is dual to one in the cotangent
bundle by a kinematical map, $\flat$, is that such an application maps the
tangent space of the tangent bundle onto the tangent space of the cotangent
bundle. This means that the differential of such a map maps the preferred
basis of one tangent space,
$\delta_{\mu}=\partial_{\mu}-N^{\nu}{}_{\mu}\dot{\partial}_{\nu}$, onto the
other,
$d\,\flat(\delta_{\mu})=\delta^{\prime}_{\mu}=\partial_{\mu}-O_{\mu\nu}\dot{\partial}^{\nu}$.
This means that the action of this differential on a vector
$X=X^{\mu}\partial_{\mu}+\dot{X}^{\mu}\dot{\partial}_{\mu}$ is given by
$\displaystyle d\,\flat_{(x,y)}:\quad T_{(x,y)}\widetilde{TM}$
$\displaystyle\rightarrow T_{\flat(x,y)}\widetilde{T^{*}M}\,,$ (120)
$\displaystyle X=X^{\mu}\partial_{\mu}+\dot{X}^{\mu}\dot{\partial}_{\mu}$
$\displaystyle\mapsto
d\,\flat_{(x,y)}(X)=X^{\mu}d\,\flat_{(x,y)}(\partial_{\mu})+\dot{X}^{\mu}d\,\flat_{(x,y)}(\dot{\partial}_{\mu})$
(121)
$\displaystyle=X^{\mu}(\partial_{\mu}+m\partial_{\mu}\dot{\partial}_{\nu}F\bar{\partial}^{\nu})+m\dot{X}^{\mu}\dot{\partial}_{\mu}\dot{\partial}_{\nu}F\bar{\partial}^{\nu}\,.$
(122)
By acting on the basis vectors
$\delta_{\mu}=\partial_{\mu}-N^{\nu}{}_{\mu}\dot{\partial}_{\nu}$, one finds:
$\displaystyle
d\,\flat_{(x,y)}(\delta_{\mu})=d\,\flat_{(x,y)}(\partial_{\mu})-N^{\nu}{}_{\mu}d\,\flat_{(x,y)}(\dot{\partial}_{\nu})=\partial_{\mu}+m\partial_{\mu}\dot{\partial}_{\nu}F\bar{\partial}^{\nu}-mN^{\nu}{}_{\mu}\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F\bar{\partial}^{\alpha}\,.$
(123)
In order to simplify this expression, the relation
$2g_{\nu\alpha}=\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F^{2}=\dot{\partial}_{\nu}(2F\dot{\partial}_{\alpha}F)$
is used, which leads to
$\dot{\partial}_{\nu}\dot{\partial}_{\alpha}F=\frac{g_{\nu\alpha}-p_{\nu}p_{\alpha}/m^{2}}{F}\,.$
(124)
From this expression, one finds that
$d\,\flat_{(x,y)}(\delta_{\mu})=\partial_{\mu}-m\left[N^{\alpha}{}_{\mu}\frac{(g_{\alpha\nu}-p_{\alpha}p_{\nu}/m^{2})}{F}-\partial_{\mu}\dot{\partial}_{\nu}F\right]\bar{\partial}^{\nu}=\partial_{\mu}+O_{\mu\nu}\bar{\partial}^{\nu}\,,$
(125)
which leads to the dual nonlinear connection,
$O_{\mu\nu}(x,p)=-m\left[N^{\alpha}{}_{\mu}\frac{(g_{\alpha\nu}-p_{\alpha}p_{\nu}/m^{2})}{F}-\partial_{\mu}\dot{\partial}_{\nu}F\right]\Bigg{|}_{(x,y(p))}\,.$
(126)
## References
* (1) Bronstein, Matvei, “Quantum theory of weak gravitational fields,” Gen. Rel. Grav. 44 (2012) 267–283.
* (2) Rovelli, Carlo, “Loop quantum gravity,” Living Rev. Rel. 11 (2008) 5.
* (3) Ambjorn, J. and Jurkiewicz, J. and Loll, R., “Causal Dynamical Triangulations and the Quest for Quantum Gravity,” in Foundations of Space and Time: Reflections on Quantum Gravity, pp. 321–337. 4, 2010. arXiv:1004.0352 [hep-th].
* (4) Amelino-Camelia, Giovanni, “Quantum-Spacetime Phenomenology,” Living Rev. Rel. 16 (2013) 5, arXiv:0806.0339 [gr-qc].
* (5) Addazi, A. and others, “Quantum gravity phenomenology at the dawn of the multi-messenger era—A review,” Prog. Part. Nucl. Phys. 125 (2022) 103948, arXiv:2111.05659 [hep-ph].
* (6) Amelino-Camelia, Giovanni and da Silva, Malú Maira and Ronco, Michele and Cesarini, Lorenzo and Lecian, Orchidea Maria, “Spacetime-noncommutativity regime of Loop Quantum Gravity,” Phys. Rev. D 95 no. 2, (2017) 024028, arXiv:1605.00497 [gr-qc].
* (7) Brahma, Suddhasattwa and Ronco, Michele and Amelino-Camelia, Giovanni and Marciano, Antonino, “Linking loop quantum gravity quantization ambiguities with phenomenology,” Phys. Rev. D 95 no. 4, (2017) 044005, arXiv:1610.07865 [gr-qc].
* (8) Brahma, Suddhasattwa and Ronco, Michele, “Constraining the loop quantum gravity parameter space from phenomenology,” Phys. Lett. B 778 (2018) 184–189, arXiv:1801.09417 [hep-th].
* (9) Majid, S. and Ruegg, H., “Bicrossproduct structure of kappa Poincare group and noncommutative geometry,” Phys. Lett. B 334 (1994) 348–354, arXiv:hep-th/9405107.
* (10) Lukierski, Jerzy and Ruegg, Henri and Nowicki, Anatol and Tolstoi, Valerii N., “Q deformation of Poincare algebra,” Phys. Lett. B 264 (1991) 331–338.
* (11) Lukierski, Jerzy and Nowicki, Anatol and Ruegg, Henri, “New quantum Poincare algebra and k deformed field theory,” Phys. Lett. B 293 (1992) 344–352.
* (12) Majid, S., Foundations of quantum group theory. Cambridge University Press, 2, 2011.
* (13) Amelino-Camelia, Giovanni, “Doubly-Special Relativity: Facts, Myths and Some Key Open Issues,” Symmetry 2 (2010) 230–271, arXiv:1003.3942 [gr-qc].
* (14) Mattingly, David, “Modern tests of Lorentz invariance,” Living Rev. Rel. 8 (2005) 5, arXiv:gr-qc/0502097.
* (15) Liberati, Stefano, “Tests of Lorentz invariance: a 2013 update,” Class. Quant. Grav. 30 (2013) 133001, arXiv:1304.5795 [gr-qc].
* (16) Amelino-Camelia, Giovanni, “Relativity in space-times with short distance structure governed by an observer independent (Planckian) length scale,” Int. J. Mod. Phys. D 11 (2002) 35–60, arXiv:gr-qc/0012051.
* (17) Magueijo, Joao and Smolin, Lee, “Lorentz invariance with an invariant energy scale,” Phys. Rev. Lett. 88 (2002) 190403, arXiv:hep-th/0112090.
* (18) Amelino-Camelia, Giovanni and Freidel, Laurent and Kowalski-Glikman, Jerzy and Smolin, Lee, “The principle of relative locality,” Phys. Rev. D 84 (2011) 084010, arXiv:1101.0931 [hep-th].
* (19) Proutorov, Evgenii and Matsuyama, Naoki and Koibuchi, Hiroshi, “Finsler geometry modeling and monte carlo study of liquid crystal elastomers under electric fields,” Journal of Physics: Condensed Matter 30 no. 40, (2018) 405101.
* (20) Hehl, Friedrich W and Obukhov, Yuri N, Foundations of classical electrodynamics: Charge, flux, and metric, vol. 33. Springer Science & Business Media, 2003.
* (21) Červenỳ, V, “Fermat’s variational principle for anisotropic inhomogeneous media,” Studia geophysica et geodaetica 46 no. 3, (2002) 567–588.
* (22) Pfeifer, Christian, “Finsler spacetime geometry in Physics,” Int. J. Geom. Meth. Mod. Phys. 16 no. supp02, (2019) 1941004, arXiv:1903.10185 [gr-qc].
* (23) Magueijo, Joao and Smolin, Lee, “Gravity’s rainbow,” Class. Quant. Grav. 21 (2004) 1725–1736, arXiv:gr-qc/0305055.
* (24) Weinfurtner, Silke and Jain, Piyush and Visser, Matt and Gardiner, C. W., “Cosmological particle production in emergent rainbow spacetimes,” Class. Quant. Grav. 26 (2009) 065012, arXiv:0801.2673 [gr-qc].
* (25) Assanioussi, Mehdi and Dapor, Andrea and Lewandowski, Jerzy, “Rainbow metric from quantum gravity,” Phys. Lett. B 751 (2015) 302–305, arXiv:1412.6000 [gr-qc].
* (26) Olmo, Gonzalo J., “Palatini Actions and Quantum Gravity Phenomenology,” JCAP 10 (2011) 018, arXiv:1101.2841 [gr-qc].
* (27) Ling, Yi and Li, Xiang and Zhang, Hong-bao, “Thermodynamics of modified black holes from gravity’s rainbow,” Mod. Phys. Lett. A 22 (2007) 2749–2756, arXiv:gr-qc/0512084.
* (28) Lobo, Iarley P. and Santos, Luis C. N. and Bezerra, V. B. and Morais Graça, J. P. and Moradpour, H., “The extended phase space thermodynamics of Planck-scale-corrected Reissner-Nordström-anti-de Sitter black hole,” Nucl. Phys. B 972 (2021) 115568, arXiv:2110.05396 [gr-qc].
* (29) Gorji, M. A. and Nozari, K. and Vakili, B., “Gravity’s rainbow: a bridge between LQC and DSR,” Phys. Lett. B 765 (2017) 113–119, arXiv:1606.00910 [gr-qc].
* (30) Garattini, Remo and Lobo, Francisco S. N., “Gravity’s Rainbow and traversable wormholes,” in 14th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories, vol. 2, pp. 1448–1453. 2017. arXiv:1512.04470 [physics.gen-ph].
* (31) Amirabi, Z. and Halilsoy, M. and Mazharimousavi, S. Habib, “Thin-shell wormholes in rainbow gravity,” Mod. Phys. Lett. A 33 no. 09, (2018) 1850049.
* (32) Bezerra, V. B. and Lobo, I. P. and Mota, H. F. and Muniz, C. R., “Landau Levels in the Presence of a Cosmic String in Rainbow Gravity,” Annals Phys. 401 (2019) 162–173, arXiv:1901.01596 [gr-qc].
* (33) Carvalho, Gabriel G. and Lobo, Iarley P. and Bittencourt, Eduardo, “Extended disformal approach in the scenario of Rainbow Gravity,” Phys. Rev. D 93 no. 4, (2016) 044005, arXiv:1511.00495 [gr-qc].
* (34) Lobo, Iarley P. and Carvalho, Gabriel G., “The geometry of null-like disformal transformations,” Int. J. Geom. Meth. Mod. Phys. 16 no. 11, (2019) 1950180, arXiv:1707.01784 [gr-qc].
* (35) Santos, L. C. N. and Bezerra, V. B., “Electrostatic self-interaction of charged particles in the space-time of a cosmic string in the context of gravity’s rainbow,” Gen. Rel. Grav. 51 no. 11, (2019) 145, arXiv:1911.01996 [gr-qc].
* (36) Bernhard Riemann (auth.), Jürgen Jost (eds.), On the Hypotheses Which Lie at the Bases of Geometry. Classic Texts in the Sciences. Birkhäuser Basel, 1 ed., 2016.
* (37) Paul Finsler (auth.), Über Kurven und Flächen in allgemeinen Räumen. Lehrbücher und Monographien aus dem Gebiete der Exakten Wissenschaften 11. Springer Basel, 1 ed., 1951.
* (38) Bao, David and Chern, S-S and Shen, Zhongmin, An introduction to Riemann-Finsler geometry, vol. 200. Springer Science & Business Media, 2000.
* (39) Hohmann, Manuel and Pfeifer, Christian and Voicu, Nicoleta, “Mathematical foundations for field theories on Finsler spacetimes,” J. Math. Phys. 63 no. 3, (2022) 032503, arXiv:2106.14965 [math-ph].
* (40) Bernal, Antonio and Javaloyes, Miguel Ángel and Sánchez, Miguel, “Foundations of Finsler Spacetimes from the Observers’ Viewpoint,” Universe 6 no. 4, (2020) 55, arXiv:2003.00455 [gr-qc].
* (41) Minguzzi, E., “Light cones in Finsler spacetime,” Commun. Math. Phys. 334 no. 3, (2015) 1529–1551, arXiv:1403.7060 [math-ph].
* (42) Girelli, Florian and Liberati, Stefano and Sindoni, Lorenzo, “Planck-scale modified dispersion relations and Finsler geometry,” Phys. Rev. D 75 (2007) 064015, arXiv:gr-qc/0611024.
* (43) Lobo, Iarley P. and Pfeifer, Christian, “Reaching the Planck scale with muon lifetime measurements,” Phys. Rev. D 103 no. 10, (2021) 106025, arXiv:2011.10069 [hep-ph].
* (44) Raetzel, Dennis and Rivera, Sergio and Schuller, Frederic P., “Geometry of physical dispersion relations,” Phys. Rev. D 83 (2011) 044047, arXiv:1010.1369 [hep-th].
* (45) Rodrigues, Ernesto and Lobo, Iarley P., “Revisiting Legendre transformations in Finsler geometry,” arXiv:2208.11406 [gr-qc].
* (46) Gubitosi, Giulia and Mercati, Flavio, “Relative Locality in $\kappa$-Poincaré,” Class. Quant. Grav. 30 (2013) 145002, arXiv:1106.5710 [gr-qc].
* (47) Miron, Radu and Anastasiei, Mihai, The Geometry of Lagrange Spaces: Theory and Applications. Fundam.Theor.Phys. Springer Netherlands, Dordrecht, 1994.
* (48) Zhongmin Shen, Differential geometry of spray and Finsler spaces. Kluwer Academic Publishers, 1 ed., 2001.
* (49) Amelino-Camelia, Giovanni and Barcaroli, Leonardo and Gubitosi, Giulia and Liberati, Stefano and Loret, Niccoló, “Realization of doubly special relativistic symmetries in Finsler geometries,” Phys. Rev. D 90 no. 12, (2014) 125030, arXiv:1407.8143 [gr-qc].
* (50) Lobo, Iarley P. and Loret, Niccoló and Nettel, Francisco, “Investigation of Finsler geometry as a generalization to curved spacetime of Planck-scale-deformed relativity in the de Sitter case,” Phys. Rev. D 95 no. 4, (2017) 046015, arXiv:1611.04995 [gr-qc].
* (51) Barcaroli, Leonardo and Gubitosi, Giulia, “Kinematics of particles with quantum-de Sitter-inspired symmetries,” Phys. Rev. D 93 no. 12, (2016) 124063, arXiv:1512.03462 [gr-qc].
* (52) Letizia, Marco and Liberati, Stefano, “Deformed relativity symmetries and the local structure of spacetime,” Phys. Rev. D 95 no. 4, (2017) 046007, arXiv:1612.03065 [gr-qc].
* (53) Lobo, Iarley P. and Loret, Niccoló and Nettel, Francisco, “Rainbows without unicorns: Metric structures in theories with Modified Dispersion Relations,” Eur. Phys. J. C 77 no. 7, (2017) 451, arXiv:1610.04277 [gr-qc].
* (54) Amelino-Camelia, Giovanni and Rosati, Giacomo and Bedić, Suzana, “Phenomenology of curvature-induced quantum-gravity effects,” Phys. Lett. B 820 (2021) 136595, arXiv:2012.07790 [gr-qc].
* (55) Lobo, Iarley P. and Pfeifer, Christian and Morais, Pedro H. and Batista, Rafael Alves and Bezerra, Valdir B., “Two-body decays in deformed relativity,” JHEP 09 (2022) 003, arXiv:2112.12172 [hep-ph].
* (56) Barcaroli, Leonardo and Brunkhorst, Lukas K. and Gubitosi, Giulia and Loret, Niccoló and Pfeifer, Christian, “Hamilton geometry: Phase space geometry from modified dispersion relations,” Phys. Rev. D92 no. 8, (2015) 084053, arXiv:1507.00922 [gr-qc].
* (57) Barcaroli, Leonardo and Brunkhorst, Lukas K. and Gubitosi, Giulia and Loret, Niccoló and Pfeifer, Christian, “Planck-scale-modified dispersion relations in homogeneous and isotropic spacetimes,” Phys. Rev. D95 no. 2, (2017) 024036, arXiv:1612.01390 [gr-qc].
* (58) Barcaroli, Leonardo and Brunkhorst, Lukas K. and Gubitosi, Giulia and Loret, Niccoló and Pfeifer, Christian, “Curved spacetimes with local $\kappa$-Poincaré dispersion relation,” Phys. Rev. D 96 no. 8, (2017) 084010, arXiv:1703.02058 [gr-qc].
* (59) Pfeifer, Christian, “Redshift and lateshift from homogeneous and isotropic modified dispersion relations,” Phys. Lett. B 780 (2018) 246–250, arXiv:1802.00058 [gr-qc].
* (60) Nozari, Kourosh and Gorji, M. A. and Hosseinzadeh, V. and Vakili, B., “Natural Cutoffs via Compact Symplectic Manifolds,” Class. Quant. Grav. 33 no. 2, (2016) 025009, arXiv:1405.4083 [gr-qc].
* (61) Jacob, Uri and Piran, Tsvi, “Lorentz-violation-induced arrival delays of cosmological particles,” JCAP 01 (2008) 031, arXiv:0712.2170 [astro-ph].
* (62) Zhu, Jie and Ma, Bo-Qiang, “Lorentz-violation-induced arrival time delay of astroparticles in Finsler spacetime,” Phys. Rev. D 105 no. 12, (2022) 124069, arXiv:2206.07616 [gr-qc].
* (63) Amelino-Camelia, Giovanni and D’Amico, Giacomo and Rosati, Giacomo and Loret, Niccoló, “In-vacuo-dispersion features for GRB neutrinos and photons,” Nature Astron. 1 (2017) 0139, arXiv:1612.02765 [astro-ph.HE].
* (64) Amelino-Camelia, Giovanni and Di Luca, Maria Grazia and Gubitosi, Giulia and Rosati, Giacomo and D’Amico, Giacomo, “Could quantum gravity slow down neutrinos?,” arXiv:2209.13726 [gr-qc].
* (65) Majid, Shahn, “Algebraic approach to quantum gravity. II. Noncommutative spacetime,” arXiv:hep-th/0604130.
* (66) HAWC Collaboration, Albert, A. and others, “Constraints on Lorentz Invariance Violation from HAWC Observations of Gamma Rays above 100 TeV,” Phys. Rev. Lett. 124 no. 13, (2020) 131101, arXiv:1911.08070 [astro-ph.HE].
* (67) Lobo, I. P. and Romero, C., “Experimental constraints on the second clock effect,” Phys. Lett. B 783 (2018) 306–310, arXiv:1807.07188 [gr-qc].
* (68) Hossenfelder, Sabine, “The Soccer-Ball Problem,” SIGMA 10 (2014) 074, arXiv:1403.2080 [gr-qc].
# Identifying regime switches through Bayesian wavelet estimation: evidence
from flood detection in the Taquari River Valley
Flávia C. Motta (Department of Statistics, Federal University of São Carlos, São Carlos, SP, Brazil; Institute of Mathematics and Computer Sciences, University of São Paulo, São Carlos, SP, Brazil)
Michel H. Montoril (Department of Statistics, Federal University of São Carlos, São Carlos, SP, Brazil)
(May 2023)
###### Abstract
Two-component mixture models have proved to be a powerful tool for modeling
heterogeneity in several cluster analysis contexts. However, most methods
based on these models assume a constant behavior for the mixture weights,
which can be restrictive and unsuitable for some applications. In this paper,
we relax this assumption and allow the mixture weights to vary according to
the index (e.g., time) to make the model more adaptive to a broader range of
data sets. We propose an efficient MCMC algorithm to jointly estimate both
component parameters and dynamic weights from their posterior samples. We
evaluate the method’s performance by running Monte Carlo simulation studies
under different scenarios for the dynamic weights. In addition, we apply the
algorithm to a time series that records the level reached by a river in southern Brazil. The Taquari River is a water body whose frequent flood inundations have caused extensive damage to riverside communities. Implementing a dynamic mixture model allows us to properly describe the flood regimes for the areas most affected by these phenomena.
## 1 Introduction
In several data analysis problems, we want to cluster observations into two groups. For instance, in many clinical studies, the goal is to classify patients according to whether a disease is absent or present (see Hall and Zhou, 2003, Rindskopf and Rindskopf, 1986, Hui and Zhou, 1998). In contamination problems found in astronomy investigations, on the other hand, the aim is to separate the objects of interest, called members (e.g., stars), from foreground/background objects contaminating the sample, known as contaminants (see Walker et al., 2009). In genetics, studies based on microarray data are usually aimed at detecting differentially expressed genes under two conditions, e.g., “healthy tissue versus diseased tissue” (see Bordes et al., 2006).
To address these scenarios of bimodal data sets, two-component mixture models have been shown to be excellent alternatives for clustering observations within the group that better describes their features (Patra and Sen, 2016). In this context, a mixture model with two components assumes that the sample of observations $y_{1},\dots,y_{n}$ is, in fact, the realization of a random variable $Y$ that belongs to a population composed of two subpopulations, known as mixture components. Thus, at each point $t$, $t=1,\dots,n$, $Y$ is fitted according to one of the mixture components, dictated by a mixture weight $\alpha$.
This setting may be very restrictive to some data sets. For instance, in
epidemiological studies that evaluate the response to medications, the
probability of classifying a patient in the group of “disease present” must be
allowed to vary across time so that the longitudinal effect of the treatment
can be properly measured. The same issue arises in quality control problems,
where the probability of the supervised system operating in a failure-free
regime is also not constant over time. In order to classify those features
properly, under a mixture model assumption, the mixture weight should be
allowed to vary according to the index (which could be time or location). In
other words, it would be appropriate for the mixture weight to present a
dynamic behavior.
Assuming dynamic mixture weights for mixture models is an extension that has already been applied in different areas, from traffic flow applications (see Nagy et al., 2011) to investigations in genetics (see Montoril et al., 2019, 2021). As discussed in Montoril et al. (2021), this generalization is similar to the extension of Hidden Markov Models (HMM) into non-homogeneous Hidden Markov Models (NHMM), first described by Hughes and Guttorp (1994). In both scenarios, one generalizes the model by considering unobserved varying probabilities. In the case of mixture models, those dynamic probabilities are the mixture weights, whereas, in HMM, they are the transition probabilities. It is important to emphasize that, although connected, dynamic mixture weights and transition probabilities are different things.
Considering a “non-homogeneous” structure for the mixture model implies that, besides estimating the dynamic mixture weights, one also needs to estimate the component parameters, which increases the challenge. For instance, in Montoril et al. (2019), from a frequentist approach, the authors rely on wavelets to estimate the dynamic weights, transforming the data in order to deal with a nonparametric heteroscedastic regression. Nonetheless, their procedure depends on assuming known means and variances for the mixture components, which, in practice, may be unrealistic.
In this work, unlike the aforementioned paper, the leading motivation is to provide a Bayesian approach that estimates not only the dynamic mixture weights but also the component parameters of a two-component mixture model. To accomplish this goal, we propose an efficient Gibbs sampling algorithm, which allows the distribution of the posterior draws to be used for inference purposes. Regarding the dynamic mixture weights, we use the data augmentation method by Albert and Chib (1993) and incorporate Bayesian wavelet denoising techniques to estimate the dynamic behavior of the mixture weight. We do this to exploit the good properties of wavelets in curve estimation.
Wavelets are families of basis functions that can be used to represent other functions, signals, and images as a series of successive approximations (Härdle et al., 2012, Abramovich et al., 2000). In statistical applications, these mathematical tools have been successfully used to solve problems in nonparametric regression (see Donoho and Johnstone, 1994, Cai and Brown, 1999); density estimation (see Donoho, 1993a, Donoho et al., 1996, Hall and Patil, 1995); time series analysis (see, e.g., Morettin, 1996, Priestley, 1996, Percival and Walden, 1999); among many other areas. There is a vast literature that provides a review of wavelets in statistics (see, e.g., Vidakovic, 1999, Ogden, 1997).
In this paper, wavelet bases are applied to enable the estimation of the
dynamic mixture weights. To review the mathematical background and the
terminology associated with the wavelet theory, in the following section, we
provide a short introduction to the wavelet basis functions; the discrete
wavelet transform (DWT); and, the Bayesian approach for denoising in a
wavelet-based scenario. The remainder of the paper is organized as follows. In
Section 3, we describe the dynamic mixture model considered in this paper and
give details related to the MCMC sampling scheme constructed to perform the
estimation. In Section 4, we present some numerical experiments. We first
conduct Monte Carlo simulations to evaluate the method in a controlled
setting. Then, we apply the MCMC algorithm to a river data set to identify
periods when flood inundations occurred.
## 2 Wavelets
In this work, we use the term wavelets to refer to a system of orthonormal
basis functions for $L_{2}([0,1])$ or $L_{2}(\mathbb{R})$. The bases are
generated by dyadic translations and dilations of the functions
$\varphi(\cdot)$ and $\psi(\cdot)$, known, respectively, as the scaling and
wavelet functions. These systems of integer-translates and dilates are given
by
$\displaystyle\varphi_{j_{0}k}(t)$
$\displaystyle=2^{j_{0}/2}\varphi(2^{j_{0}}t-k),\quad k\in\mathbb{Z},$
$\displaystyle\psi_{jk}(t)$ $\displaystyle=2^{j/2}\psi(2^{j}t-k),\quad
j,k\in\mathbb{Z}.$
Thus, for any integer $j_{0}$ and $J$, a periodic function $f(t)\in
L_{2}([0,1])$ can be approximated in $L_{2}$-sense as the projection onto a
multiresolution space $V_{J}$:
$f(t)=\sum\limits_{k=0}^{2^{j_{0}}-1}c_{j_{0}k}\varphi_{j_{0}k}(t)+\sum\limits_{j=j_{0}}^{J-1}\sum\limits_{k=0}^{2^{j}-1}d_{jk}\psi_{jk}(t),$
where $c_{j_{0}k}$’s are known as scaling coefficients and $d_{jk}$’s are
called detail coefficients. The former are associated with the coarsest
resolution level in which $f(t)$ is decomposed, $j_{0}$. As a result, they
capture the gross structure of $f(t)$. The detail coefficients, on the other
hand, being linked to finer resolution levels, can capture local information
about $f(t)$. Put simply, in moving from a coarser resolution level $j$ to a
finer $j+1$, we are increasing the resolution at which a function is
approximated, thus the expansion coefficients become more descriptive about
the local features of $f(t)$.
In practice, we access $f(t)\in L_{2}([0,1])$ through a grid of points in time
or space in which $f$ is applied. Therefore, consider
$\bm{f}=(f(1/n),f(2/n),\dots,f(n/n))^{T}$ to be a vector of samples of $f(t)$
on an equispaced grid of $n$ points, with $n=2^{J}$, for some positive integer
$J$. To obtain the scaling and detail coefficients that approximate $\bm{f}$,
we perform the discrete wavelet transform (DWT) of $\bm{f}$. In matrix
notation, the DWT of $\bm{f}$ is
$\bm{\theta}=\bm{Wf},$ (1)
where $\bm{\theta}=(c_{00},d_{00},\bm{d}_{1}^{T},\dots,\bm{d}_{J-1}^{T})^{T}$
is a vector of size $n$, having both scaling and detail coefficients
$\bm{d}_{j}=(d_{j0},d_{j1},\dots,d_{j2^{j}-1})^{T}$, and $\bm{W}$ is the DWT
matrix with $(jk,i)$ entry given by $W_{jk,i}\sqrt{n}\approx\psi_{jk}(i/n)=2^{j/2}\psi(2^{j}i/n-k)$, $k=0,\dots,2^{j}-1$, $j=1,\dots,J-1$ (Abramovich et al., 1998). By
orthogonality, the multiplication $\bm{W^{T}\theta}$ recovers the signal
$\bm{f}$. This transformation from wavelet coefficients to fitted values is
known as the inverse discrete wavelet transform (IDWT).
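As a concrete illustration of the DWT/IDWT pair in (1), the following Python sketch uses the PyWavelets package (an assumption on the software stack; any implementation of Mallat’s algorithm would do). With periodized boundary handling and a dyadic sample size, the transform is orthogonal, so the IDWT recovers the signal exactly; the wavelet family below is merely illustrative.

```python
# DWT/IDWT round trip: theta = W f and f = W^T theta, as in Equation (1).
import numpy as np
import pywt

J = 8
n = 2 ** J                                    # dyadic sample size, n = 2^J
t = np.arange(1, n + 1) / n
f = np.sin(4 * np.pi * t) + (t > 0.5)         # smooth signal with one jump

# wavedec returns [scaling coefficients, detail coefficients per level]
theta = pywt.wavedec(f, 'coif3', mode='periodization')
f_rec = pywt.waverec(theta, 'coif3', mode='periodization')

assert np.allclose(f, f_rec)                  # orthogonality: exact recovery
```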
One of the main advantages provided by the DWT is the sparse representation
generally achieved. As shown by Donoho (1993b), wavelets are unconditional
bases for a range of function spaces, such as Hölder and Sobolev spaces, as
well as spaces suitable for representing functions of ‘bounded variation’. As
an aside, it is also worth mentioning that, using Mallat’s pyramid algorithm (Mallat, 1989), the DWT and IDWT are performed requiring only
$\mathcal{O}(n)$ operations, which makes them very efficient in terms of
computational speed and storage. These properties help to explain why wavelet
bases are excellent tools to address problems of data analysis. In the
following section, we present a brief review of handling the denoising problem
within the wavelet domain, emphasizing the Bayesian framework due to its
central role in the estimation process of this paper.
### 2.1 Bayesian wavelet denoising
Consider the nonparametric regression model
$\bm{y}=\bm{f}+\bm{e},$ (2)
where $\bm{y}=(y_{1},\dots,y_{n})^{T}$ is the vector of observed values,
$\bm{f}=(f(1/n),\dots,f(n/n))^{T}$ is the function of interest applied to a
grid of $n$ equally spaced points, and $\bm{e}=(e_{1},\dots,e_{n})^{T}$ is a
vector of zero-mean random variables. For most applications, $e_{t}$’s are
independent and identically distributed normal random variables with zero mean
and constant variance $\sigma^{2}$. The goal of nonparametric regression is to
recover the unknown function $f$ from the noisy observations $\bm{y}$.
With that in mind, Donoho and Johnstone (1994) propose to transform the
observations $\bm{y}$ to the wavelet domain, shrink the noisy wavelet
coefficients or even equal them to zero, based on some threshold rule, and
then estimate $\bm{f}$ by applying the IDWT to the regularized coefficients.
This method is known in the literature as wavelet shrinkage. Therefore, let $n$
be a power of two, $n=2^{J}$ for some positive integer $J$. Then, we can
represent (2) in the wavelet domain as
$\bm{d}^{*}=\bm{\theta}+\bm{\varepsilon},$ (3)
where $\bm{d}^{*}=\bm{Wy}$, $\bm{\theta}=\bm{Wf}$, and
$\bm{\varepsilon}=\bm{We}$, with $\bm{W}$ being the DWT matrix.
From a Bayesian perspective, the wavelet shrinkage technique consists in
assigning a prior distribution to each wavelet coefficient of the unknown
function. The idea is that, by choosing a prior able to capture the sparseness
associated with most wavelet decompositions, we can estimate $\bm{\theta}$, by
imposing some Bayes rule on the resulting posterior distribution of the
wavelet coefficients. Then, applying the IDWT to the estimated $\bm{\theta}$
gives us an estimation of $\bm{f}$.
One of the most appropriate prior choices for modeling wavelet coefficients
are the spike and slab priors. First consolidated within Bayesian variable
selection methods (George and McCulloch, 1993), these kinds of prior are a
mixture between two components: one that concentrates its mass at values close
to zero or even in zero (Dirac delta) and another whose mass is spread over a
wide range of possible values for the unknown parameters. Choosing this
mixture as prior to the distribution of wavelet coefficients allows the first
component, known as spike, to capture the null wavelet coefficients, while the
second component, called slab, describes the coefficients associated with the
unknown function.
A spike and slab prior frequently assigned to wavelet coefficients is the
mixture between a point mass at zero and a Gaussian distribution (see, e.g.,
Abramovich et al., 1998). In this scenario, each detail wavelet coefficient
is distributed following
$\pi_{j}\,\text{N}(0,\upsilon_{j}^{2})+(1-\pi_{j})\,\delta_{0}(\theta_{jk}),$ (4)
$k=0,1,\dots,2^{j}-1$, $j=0,1,\dots,J-1$, with $\delta_{0}$ being a point mass
at zero. The prior specification is usually completed by assigning a diffuse
prior to the scaling coefficient at the coarsest level $c_{00}$. Thus, the
sample scaling coefficient obtained from the DWT of the data estimates
$c_{00}$ (Abramovich et al., 1998).
Under the prior (4), the posterior distribution for each
detail coefficient is also a mixture between a Gaussian distribution and
$\delta_{0}$, given by
$\displaystyle\begin{split}\theta_{jk}|d_{jk}^{*}&\sim\pi_{\text{post}}\text{N}\left(\frac{\upsilon_{j}^{2}}{1+\upsilon_{j}^{2}}d_{jk}^{*},\frac{\upsilon_{j}^{2}}{1+\upsilon_{j}^{2}}\right)+(1-\pi_{\text{{post}}})\delta_{0}(\theta_{jk}),\\\
\pi_{\text{post}}&=\frac{\pi_{j}g_{\upsilon_{j}^{2}}(d_{jk}^{*})}{\pi_{j}g_{\upsilon_{j}^{2}}(d_{jk}^{*})+(1-\pi_{j})\phi(d_{jk}^{*})},\\\
\end{split}$ (5)
$k=0,1,\dots,2^{j}-1$, $j=0,1,\dots,J-1$, where $\phi$ denotes the standard
normal density and $g_{\upsilon_{j}^{2}}$ denotes the convolution between the
slab component in (4) (in this case
$\text{N}(0,\upsilon_{j}^{2})$) and $\phi$. Using $\gamma$ to denote the slab
density and $\star$ to denote the convolution operator, we can write
$g=\gamma\star\phi$. It should be stressed that, as shown by Abramovich et
al. (1998), using the posterior medians as the pointwise estimates of
$\bm{\theta}$ yields a thresholding rule. In other words, we are able to equal
the estimated noisy coefficients to zero.
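To make this thresholding behavior concrete, the following Python sketch (with unit noise variance and illustrative hyperparameter values; the function name is ours) computes the posterior median under (5): small empirical coefficients are mapped exactly to zero, while large ones are only shrunk.

```python
# Posterior median under the spike-and-slab posterior (5), unit noise variance.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def posterior_median(d_star, pi_j, v2):
    shrink = v2 / (1.0 + v2)
    mu1, s1 = shrink * d_star, np.sqrt(shrink)       # slab branch of (5)
    g = norm.pdf(d_star, scale=np.sqrt(1.0 + v2))    # g = N(0, v2) convolved with phi
    pi_post = pi_j * g / (pi_j * g + (1 - pi_j) * norm.pdf(d_star))
    # posterior CDF: mass (1 - pi_post) at zero plus pi_post * N(mu1, s1^2)
    cdf = lambda u: pi_post * norm.cdf(u, mu1, s1) + (1 - pi_post) * (u >= 0)
    if cdf(-1e-12) <= 0.5 <= cdf(1e-12):             # CDF jumps across 1/2 at 0
        return 0.0
    return brentq(lambda u: cdf(u) - 0.5, -100, 100)

for d in (0.5, 1.0, 2.0, 4.0):
    print(d, round(posterior_median(d, pi_j=0.5, v2=4.0), 3))
# small d is thresholded to exactly 0.0; large d is shrunk towards zero
```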
In the Empirical Bayes thresholding method by Johnstone and Silverman (2005a, 2005b), the authors propose replacing the Gaussian component in (4) with heavy-tailed distributions, such as
the Laplace density. This replacement intends to provide larger estimates for
the non-null coefficients than those obtained from Gaussian distributions. In
this scenario, considering the Laplace density as the slab component, the
prior for each detail wavelet coefficient can be written as
$\pi_{j}\,\gamma_{a}(\theta_{jk})+(1-\pi_{j})\,\delta_{0}(\theta_{jk}),$ (6)
$k=0,1,\dots,2^{j}-1$, $j=0,1,\dots,J-1$, where $\gamma_{a}(x)$ denotes the
Laplace density with scale parameter $a>0$, i.e.,
$\gamma_{a}(x)=\frac{a}{2}\exp(-a|x|),\quad x\in\mathbb{R}.$ (7)
The thresholding method of Johnstone and Silverman (2005a, 2005b) is called Empirical Bayes because the hyperparameters $\pi_{j}$ and $a$
are chosen empirically from the data, using a marginal maximum likelihood
approach. Thus, for each resolution level $j$ of the wavelet transform, the
arguments $\pi_{j}$ and $a$ that maximize the marginal log-likelihood are
selected and plugged back into the prior. Then, the estimation of
$\bm{\theta}$ is carried out with either posterior medians, posterior means,
or other estimators. Under these circumstances, the posterior distribution is
given by
$\displaystyle\begin{split}\theta_{jk}|d_{jk}&\sim\pi_{\text{post}}f_{1}(\theta_{jk}|d_{jk})+(1-\pi_{\text{post}})\delta_{0}(\theta_{jk}),\\\
\pi_{\text{post}}&=\frac{\pi_{j}g_{a}(d_{jk}^{*})}{\pi_{j}g_{a}(d_{jk}^{*})+(1-\pi_{j})\phi(d_{jk}^{*})},\\\
\end{split}$ (8)
$k=0,1,\dots,2^{j}-1$, $j=0,1,\dots,J-1$, with $f_{1}(\theta_{jk}|d_{jk})$
being the non-null mixture component and $g_{a}=\gamma_{a}\star\phi$. It can
be shown that $f_{1}(\theta_{jk}|d_{jk})$ is a mixture of two truncated normal
distributions. Define $f_{\text{TN}}(x|\mu,\sigma,\alpha,\beta)$ to be the
density of a truncated normal distribution with location parameter $\mu$,
scale parameter $\sigma$, minimum value $\alpha$ and maximum value $\beta$.
Then, with a slight abuse of notation, we can write
$f_{1}(\theta_{jk}|d_{jk})$ as
$\displaystyle\begin{split}f_{1}(\theta_{jk}|d_{jk})&=\eta\times
f_{\text{TN}}\left(\theta_{jk}\biggl{\rvert}\frac{d_{jk}}{\sigma_{j}}-a,1,0,+\infty\right)\\\
&\quad+(1-\eta)\times
f_{\text{TN}}\left(\theta_{jk}\biggl{\rvert}\frac{d_{jk}}{\sigma_{j}}+a,1,-\infty,0\right),\end{split}$
(9)
where
$\displaystyle\eta=\frac{\exp{(-a\frac{d_{jk}}{\sigma_{j}})}\Phi(\frac{d_{jk}}{\sigma_{j}}-a)}{\exp{(a\frac{d_{jk}}{\sigma_{j}})}\tilde{\Phi}(\frac{d_{jk}}{\sigma_{j}}+a)+\exp{(-a\frac{d_{jk}}{\sigma_{j}})}\Phi(\frac{d_{jk}}{\sigma_{j}}-a)},$
with $\Phi$ denoting the standard normal cumulative function, and
$\tilde{\Phi}=1-\Phi$.
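For later use in the sampling scheme of Section 3, a draw from the non-null component (9) amounts to choosing one of the two truncated normals with probability $\eta$. A minimal Python sketch (illustrative names; not hardened against overflow in the exponentials for large $d_{jk}/\sigma_{j}$) is:

```python
# Draw theta from f1 in Equation (9): a two-component truncated-normal mixture.
import numpy as np
from scipy.stats import norm, truncnorm

def sample_f1(d, sigma, a, rng):
    x = d / sigma
    num = np.exp(-a * x) * norm.cdf(x - a)       # numerator of eta
    den = np.exp(a * x) * norm.sf(x + a) + num   # norm.sf(.) = 1 - Phi(.)
    eta = num / den
    if rng.random() < eta:
        # positive branch: N(x - a, 1) truncated to (0, +infinity)
        return truncnorm.rvs(-(x - a), np.inf, loc=x - a, random_state=rng)
    # negative branch: N(x + a, 1) truncated to (-infinity, 0)
    return truncnorm.rvs(-np.inf, -(x + a), loc=x + a, random_state=rng)

rng = np.random.default_rng(0)
print(sample_f1(d=2.5, sigma=1.0, a=0.5, rng=rng))
```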
## 3 The model
Let $y_{1},\dots,y_{n}$ be a random sample from the dynamic Gaussian mixture
model
$\displaystyle\begin{split}y_{t}&=(1-z_{t})x_{1t}+z_{t}x_{2t},\\\
x_{kt}|\mu_{k},\tau_{k}^{2}&\sim\text{N}(\mu_{k},\tau_{k}^{-2}),\quad
k=1,2,\\\ z_{t}|\alpha_{t}&\sim\text{Bern}(\alpha_{t}),\quad
t=1,\dots,n,\end{split}$ (10)
where the $z_{t}$’s are allocation variables that indicate to which mixture component the observations $y_{t}$ belong. Each $z_{t}$ has a Bernoulli distribution with parameter $\alpha_{t}$, the mixture weight, which has a dynamic behavior. In (10), the component parameters $\mu_{k}$ and
$\tau_{k}^{2}$, $k=1,2$, and the dynamic mixture weights $\alpha_{t}$,
$t=1,\dots,n$, are parameters to be estimated.
Following Albert and Chib (1993), we introduce a data augmentation approach
by associating an auxiliary variable $l_{t}$ to each allocation variable
$z_{t}$. In the original work, $l_{t}=\bm{x}_{t}^{T}\bm{\theta}+e_{t}$ and
$e_{t}\sim\text{N}(0,1)$, where $\bm{{x}_{t}}$ is a vector of $p$ known
covariates and $\bm{\theta}$ is a vector of $p$ unknown parameters. In greater
detail, $z_{t}=1$, if $l_{t}>0$, and $z_{t}=0$, otherwise. However, unlike in
Albert and Chib, (1993), where the design matrix $\bm{X}$ in the probit
regression corresponds to the covariates related to $\alpha_{t}$, in this
paper, $\bm{X}=\bm{W}^{T}$, where $\bm{W}$ is the DWT matrix. Thus, for every
$t=1,\dots,n$, we have
$\displaystyle\begin{split}l_{t}&=\bm{x}_{t}^{T}\bm{\theta}+e_{t},\\\
e_{t}&\sim\text{N}(0,1),\\\ \end{split}$ (11)
where $\bm{x}_{t}$ corresponds to the $t$-th column of matrix $\bm{W}$ and
$\bm{\theta}=(c_{00},d_{00},\bm{d}_{1}^{T},\dots,\bm{d}_{J-1}^{T})^{T}$ is the
vector of wavelet coefficients, such that $n=p=2^{J}$. Therefore, the dynamic
mixture weight $\alpha_{t}$, which is the probability of success of $z_{t}$,
is given by the binary regression model,
$\alpha_{t}=\Phi(\bm{x}_{t}^{T}\bm{\theta}),$
where $\Phi$ is the standard Gaussian cumulative function.
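Note that, given a draw of $\bm{\theta}$, the whole vector of weights can be computed at once, since $\bm{W}^{T}\bm{\theta}$ is simply the IDWT of the coefficient vector. A short Python sketch (PyWavelets, with an illustrative wavelet choice):

```python
# alpha_t = Phi((W^T theta)_t): wavelet coefficients -> dynamic mixture weights.
import pywt
from scipy.stats import norm

def alpha_from_theta(theta_coeffs, wavelet='coif3'):
    # pywt.waverec performs the IDWT, i.e., multiplication by W^T
    return norm.cdf(pywt.waverec(theta_coeffs, wavelet, mode='periodization'))
```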
### 3.1 Bayesian estimation
In this paper, the estimation of both component parameters and dynamic mixture
weights is performed through a Gibbs sampling algorithm. By giving conjugate
prior distributions to the parameters, we sample from their full conditional
posterior distributions and make inferences about the parameter values (e.g., point estimates and credible intervals). In this section, we first present the full
conditional posterior distributions from which we draw the parameters of (10).
Then, we detail the MCMC algorithm built to perform the sampling.
In (10), since we are mostly interested in the estimation of the mixture
weights, we assume that the sample $\bm{y}=(y_{1},\dots,y_{n})^{T}$ is a time
series whose dependence structure is determined by the dynamic behavior of
$\alpha_{t}$’s. In this setting, given the component parameters and the
dynamic mixture weights, the observations $y_{t}$’s are conditionally
independent, and we have
$p(\bm{y}|\bm{\mu},\bm{\tau^{2}},\bm{z})=\prod_{t=1}^{n}p(y_{t}|z_{t},\bm{\mu},\bm{\tau^{2}})$.
Thus, the complete-data likelihood function
$p(\bm{y}|\bm{\mu},\bm{\tau^{2}},\bm{z})$ is given by
$\displaystyle\prod\limits_{k=1}^{2}\left(\frac{\tau_{k}^{2}}{2\pi}\right)^{T_{k}/2}\exp{\left[-\frac{\tau_{k}^{2}}{2}\sum\limits_{t:z_{t}=k-1}(y_{t}-\mu_{k})^{2}\right]},$
where $T_{k}=\#\{t:z_{t}=k-1,\,t=1,2,\dots,n\}$ and
$s_{k}=\sum\limits_{t:z_{t}=k-1}y_{t}$ for $k=1,2$. For the complete-data
Bayesian estimation of $\bm{\mu}=(\mu_{1},\mu_{2})^{T}$ and
$\bm{\tau^{2}}=(\tau^{2}_{1},\tau^{2}_{2})^{T}$,
$p(\bm{y}|\bm{\mu},\bm{\tau^{2}},\bm{z})$ is combined with prior distributions
to obtain the posteriors. A common issue that arises in the Bayesian
estimation of mixture models is the invariance of the mixture likelihood
function under the relabelling of the mixture components, known as label
switching. To address this problem in our approach, we adopt the simple
constraint $\mu_{1}<\mu_{2}$ and reorder the pairs $(\mu_{k},\tau_{k}^{2})$
according to this restriction in the MCMC sampling scheme.
Following the usual practice of assigning independent prior distributions to
the component parameters (see Escobar and West, 1995, Richardson and Green, 2002), we assume
$p(\bm{\mu},\bm{\tau^{2}})=p(\mu_{1})p(\tau_{1}^{2})p(\mu_{2})p(\tau_{2}^{2})$
and place the following priors on $\mu_{k}$ and $\tau^{2}_{k}$, $k=1,2$,
$\displaystyle\mu_{k}\sim\text{N}(b_{0k},B_{0k}),$ (12)
$\displaystyle\tau_{k}^{2}\sim\Gamma(c_{0k},C_{0k}).$ (13)
For the sake of simplicity, hereafter we denote by $[\dots]$ the set of all
remaining variables to be considered for the posterior in use. Hence, under
the conjugate priors (12) and (13), one obtains the conditional posterior
distributions for $\mu_{k}$ and $\tau_{k}^{2}$,
$\displaystyle\mu_{k}|[\dots]\sim\text{N}(b_{k},B_{k}),$ (14)
$\displaystyle\tau_{k}^{2}|[\dots]\sim\Gamma(c_{k},C_{k}),$ (15)
where
$\displaystyle\begin{split}B_{k}&=(B_{0k}^{-1}+\tau_{k}^{2}T_{k})^{-1},\\\
b_{k}&=B_{k}(\tau_{k}^{2}s_{k}+B_{0k}^{-1}b_{0k}),\\\
\end{split}\qquad\begin{split}C_{k}&=C_{0k}+\frac{\sum\limits_{t:z_{t}=k-1}(y_{t}-\mu_{k})^{2}}{2},\\\
c_{k}&=c_{0k}+\frac{T_{k}}{2}.\end{split}$
It is worth stressing that assuming the mixture weights to have a dynamic
behavior does not interfere with the full conditional posteriors of the
component parameters, because they are calculated as in the case of the
ordinary (static) mixture model.
Given the observations $\bm{y}$, the component parameters $\bm{\mu}$,
$\bm{\tau^{2}}$ and $\bm{\alpha}=(\alpha_{1},\dots,\alpha_{n})^{T}$, the
$z_{t}$’s are conditionally independent and
$p(z_{t}=1|\bm{y},\bm{\mu},\bm{\tau^{2}},\bm{\alpha})\propto\alpha_{t}f_{N}(y_{t}|\mu_{2},\tau_{2}^{-2})$.
Thus, one can easily show that, for each $t=1,\dots,n$, the full conditional
posterior of $z_{t}$ is given by
$\displaystyle\begin{split}z_{t}|[\dots]&\sim\text{Bern}(\beta_{t}),\\\
\beta_{t}&=\frac{\alpha_{t}f_{N}(y_{t}|\mu_{2},\tau_{2}^{-2})}{\alpha_{t}f_{N}(y_{t}|\mu_{2},\tau_{2}^{-2})+(1-\alpha_{t})f_{N}(y_{t}|\mu_{1},\tau_{1}^{-2})}.\end{split}$
(16)
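A compact Python sketch of these conditional draws, Equations (14)-(16), is given below; the function and variable names are ours, and, for brevity, the prior hyperparameters $B_{0k}$, $c_{0k}$, and $C_{0k}$ are taken to be shared across components.

```python
# One Gibbs update of (mu_k, tau2_k) via (14)-(15) and of z_t via (16).
import numpy as np

def norm_pdf(y, mu, tau2):                 # Gaussian density with precision tau2
    return np.sqrt(tau2 / (2 * np.pi)) * np.exp(-0.5 * tau2 * (y - mu) ** 2)

def update_components(y, z, alpha, mu, tau2, b0, B0, c0, C0, rng):
    for k in (0, 1):                       # k = 0, 1 <-> components 1, 2 (z = k)
        sel = (z == k)
        T_k, s_k = sel.sum(), y[sel].sum()
        B_k = 1.0 / (1.0 / B0 + tau2[k] * T_k)                    # Eq. (14)
        mu[k] = rng.normal(B_k * (tau2[k] * s_k + b0[k] / B0), np.sqrt(B_k))
        C_k = C0 + 0.5 * ((y[sel] - mu[k]) ** 2).sum()            # Eq. (15)
        tau2[k] = rng.gamma(c0 + T_k / 2.0, 1.0 / C_k)            # scale = 1/rate
    if mu[1] < mu[0]:                      # relabel to enforce mu_1 < mu_2
        mu, tau2 = mu[::-1].copy(), tau2[::-1].copy()
    f1, f2 = norm_pdf(y, mu[0], tau2[0]), norm_pdf(y, mu[1], tau2[1])
    beta = alpha * f2 / (alpha * f2 + (1 - alpha) * f1)           # Eq. (16)
    z = (rng.random(y.size) < beta).astype(int)
    return mu, tau2, z
```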
The latent variables introduced in (11) are unknown. However, given the vector
of wavelet coefficients $\bm{\theta}$ and the allocation data
$\bm{z}=(z_{1},\dots,z_{n})^{T}$, we can use the structure of the MCMC
algorithm to draw $l_{1},\dots,l_{n}$ from their posterior distribution, which
is
$\displaystyle\begin{split}l_{t}|[\dots]&\sim\text{N}(\bm{x}_{t}^{T}\bm{\theta},1)\text{
truncated at left by 0 if }z_{t}=1,\\\
l_{t}|[\dots]&\sim\text{N}(\bm{x}_{t}^{T}\bm{\theta},1)\text{ truncated at
right by 0 if }z_{t}=0.\end{split}$ (17)
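These truncated-normal draws vectorize directly; a minimal Python sketch (where Wt_theta is our name for the current vector $\bm{W}^{T}\bm{\theta}$):

```python
# Data augmentation step, Equation (17): l_t ~ N(x_t^T theta, 1), truncated at 0.
import numpy as np
from scipy.stats import truncnorm

def sample_latent(z, Wt_theta, rng):
    # truncnorm uses standardized bounds: (bound - loc) / scale, with scale = 1
    lo = np.where(z == 1, -Wt_theta, -np.inf)   # z_t = 1: support (0, +inf)
    hi = np.where(z == 1, np.inf, -Wt_theta)    # z_t = 0: support (-inf, 0)
    return truncnorm.rvs(lo, hi, loc=Wt_theta, random_state=rng)
```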
For the vector of parameters $\bm{\theta}$, Albert and Chib (1993) derived
the posterior distribution of $\bm{\theta}$ given $\bm{z}$ and $\bm{l}$ under
diffuse and Gaussian priors. In this work, on the other hand, $\bm{\theta}$ is
a vector of wavelet coefficients. As a result, we need a sparsity inducing
prior able to address the noise $e_{t}$ in (11). Thus, following the
discussion in Section 2.1, we suggest using spike and slab priors for the
components of vector $\bm{\theta}$. In this scenario, we assume that the
entries of $\bm{\theta}$ are mutually independent. For $t=2^{j}+k+1$,
$k=0,\dots,2^{j}-1$ and $j=0,\dots,J-1$, this kind of prior can be specified
as
$\theta_{t}\sim(1-\pi_{j})\delta_{0}(\cdot)+\pi_{j}\gamma(\cdot),$ (18)
where we consider $\gamma$ to be either the Gaussian distribution or the
Laplace distribution as presented in (4) and in (6), respectively. Following Abramovich et al. (1998),
the prior specification is completed by assigning a diffuse prior on the
scaling coefficient at the coarsest level $c_{00}$, in the first entry of
vector $\bm{\theta}$.
Under (18), the posterior distribution of $\theta_{t}$ is given by
$\displaystyle\begin{split}\theta_{t}|[\dots]&\sim(1-\pi_{\text{{post}}})\delta_{0}(\theta_{t})+\pi_{\text{{post}}}f_{1}(\theta_{t}|\bm{w}_{t}^{T}\bm{l}),\\\
\pi_{\text{{post}}}&=\frac{\pi_{j}g(\bm{w}_{t}^{T}\bm{l})}{\pi_{j}g(\bm{w}_{t}^{T}\bm{l})+(1-\pi_{j})\phi(\bm{w}_{t}^{T}\bm{l})},\end{split}$
(19)
where $\bm{w}_{t}$ is a column-vector corresponding to the $t$-th row of
matrix $\bm{W}$, $f_{1}(\theta_{t}|\bm{w}_{t}^{T}\bm{l})$ is the posterior
non-null mixture component and $g$ is the convolution between $\gamma$ and the
standard normal distribution $\phi$, $g=\gamma\star\phi$.
Regarding the hyperparameters of the spike and slab priors, that is, the
sparsity parameter $\pi_{j}$ and the variance $\upsilon_{j}^{2}$ (Gaussian
component) or the scale parameter $a$ (Laplace component), we follow the
approach in Johnstone and Silverman (2005a, 2005b) and estimate them jointly by maximizing the marginal log-likelihood function,
which is given by
$\sum\limits_{i=1+2^{j}}^{2^{j+1}}\log\{(1-\pi_{j})\phi(\bm{w}_{i}^{T}\bm{l})+\pi_{j}g(\bm{w}_{i}^{T}\bm{l})\}.$
These values are then used in (19) to sample the vector $\bm{\theta}$ in the
MCMC procedure, which is detailed in Algorithm 1.
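For the Gaussian-slab case, this level-wise maximization can be sketched in a few lines of Python (the function name, starting values, and bounds are illustrative); with the Laplace slab, only the convolution density $g$ changes. The full sampling scheme is then summarized in Algorithm 1 below.

```python
# Level-wise marginal maximum likelihood for (pi_j, v_j^2), Gaussian slab case:
# each statistic w_i^T l at level j is marginally
# (1 - pi_j) N(0, 1) + pi_j N(0, 1 + v_j^2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def select_hyperparameters(w_level):
    """w_level: the statistics w_i^T l, i = 1 + 2^j, ..., 2^(j+1), of one level."""
    def neg_loglik(params):
        pi_j, v2 = params
        lik = (1 - pi_j) * norm.pdf(w_level) \
            + pi_j * norm.pdf(w_level, scale=np.sqrt(1.0 + v2))
        return -np.log(lik).sum()
    res = minimize(neg_loglik, x0=[0.5, 1.0], method='L-BFGS-B',
                   bounds=[(1e-4, 1.0 - 1e-4), (1e-4, 1e4)])
    return res.x  # (pi_j, v_j^2), plugged back into the prior
```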
Algorithm 1 Gibbs sampling algorithm - Data augmentation
1:Choose number of iterations $N$.
2:Specify initial values for
$\bm{\mu}^{(0)},\,{\bm{\tau^{2}}}^{(0)},\,\bm{z}^{(0)}=(z_{1}^{(0)},\dots,z_{n}^{(0)})^{T}$
and $\bm{\alpha}^{(0)}$.
3:for $i\leftarrow 1$ to $N$ do
4: Sample $\mu_{1}^{(i)}\sim p(\mu_{1}|[\dots])$. $\triangleright$ See (14)
5: Sample ${\tau_{1}^{2}}^{(i)}\sim p(\tau_{1}^{2}|[\dots])$. $\triangleright$
See (15)
6: Sample $\mu_{2}^{(i)}\sim p(\mu_{2}|[\dots])$. $\triangleright$ See (14)
7: Sample ${\tau_{2}^{2}}^{(i)}\sim p(\tau_{2}^{2}|[\dots])$. $\triangleright$
See (15)
8: if $\mu_{2}<\mu_{1}$ then
9: Permute the labeling of pairs $(\mu_{k}^{(i)},{\tau_{k}^{2}}^{(i)})$.
10: end if
11: Sample $z_{t}^{(i)}\sim p(z_{t}|[\dots])$, for $t=1,\dots,n$.
$\triangleright$ See (16)
12: Sample $l_{t}^{(i)}\sim p(l_{t}|[\dots])$, for $t=1,\dots,n$.
$\triangleright$ See (17)
13: Select $\upsilon_{j}^{2}\,/\,a$ and $\pi_{j}$ by marginal maximum
likelihood.
14: Sample $\theta_{t}^{(i)}\sim p(\theta_{t}|[\dots])$, for $t=1,\dots,n$.
$\triangleright$ See (19)
15: Calculate $\bm{\alpha}^{(i)}=\Phi(\bm{W^{T}\theta})$. $\triangleright$
$\bm{W}$ is the matrix form of the DWT.
16:end for
As discussed in Section 2.1, using (18) as prior for $\theta_{t}$ allows the
posterior medians to act like thresholding rules, equating to zero noisy
coefficients. Because of this, we adopt the absolute loss, whose Bayes estimator is the posterior median, for the numerical experiments performed using the MCMC method described in Algorithm 1.
## 4 Numerical Experiments
In this section, we illustrate the estimation process discussed in the former
sections by conducting Monte Carlo experiments and applying it to a river
quota data set to identify flood regimes. In both studies, we implement
Algorithm 1 running 6,000 iterations, discarding the first 1,000 as burn-in
and performing thinning every 5 draws. We consider the following independent
priors for the component parameters: $\mu_{1}\sim N(q_{1},s^{2})$,
$\tau_{1}^{2}\sim\Gamma(0.01,0.01)$, $\mu_{2}\sim N(q_{3},s^{2})$, and
$\tau_{2}^{2}\sim\Gamma(0.01,0.01)$, where $q_{1}$ and $q_{3}$ are the first
and third quartiles, respectively, of the observed data and $s^{2}$ is the
sample variance. The purpose of using the data statistics is to reduce
subjectivity, and, by adopting the quartiles, to segregate the data into two
groups.
Concerning the wavelet bases used to perform the transforms, we use the
coiflet basis with six vanishing moments. It is important to highlight that, according to additional simulation studies, other Daubechies wavelet bases provide results similar to those achieved by this specific coiflet basis. We do not present these supplementary analyses due to space limitations.
### 4.1 Monte Carlo simulations
In our simulated investigations, we generate the artificial data sets by
mixing two normally distributed samples of size 1,024, as defined in (10). In
this case, we set the following values for the component parameters:
$\mu_{1}=0$, $\mu_{2}=2$, $\tau_{1}^{2}=4$ and $\tau_{2}^{2}=4$. Concerning
the dynamic mixture weights, we employ three different curves for
$\alpha_{t}$: sinusoidal, blocks, and bumps, with the first being defined as
$\alpha_{t}=0.4\,\cos(2\pi(t+\pi))+0.5$, and the last two being rescaled test
functions introduced by Donoho and Johnstone (1994).
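For reference, one replicate of the sinusoidal scenario can be generated as in the Python sketch below (we assume the index $t$ is rescaled to $(0,1]$; recall that in (10) the precision $\tau_{k}^{2}=4$ corresponds to a standard deviation of $0.5$).

```python
# One artificial data set from model (10) with the sinusoidal mixture weight.
import numpy as np

rng = np.random.default_rng(2023)
n = 1024
t = np.arange(1, n + 1) / n                          # rescaled index in (0, 1]
alpha = 0.4 * np.cos(2 * np.pi * (t + np.pi)) + 0.5  # dynamic mixture weight
z = rng.random(n) < alpha                            # z_t ~ Bernoulli(alpha_t)
y = np.where(z, rng.normal(2.0, 0.5, n),             # component 2: N(2, 1/4)
                rng.normal(0.0, 0.5, n))             # component 1: N(0, 1/4)
```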
For all three behaviors of $\alpha_{t}$, we run 1,000 Monte Carlo replicates.
Additionally, we regard both spike and slab priors, discussed in Section 2.1,
for the distribution of the wavelet coefficients, namely: the spike and slab
prior with Gaussian slab (SSG), and the spike and slab prior with Laplace slab
(SSL). Hereafter, we use the acronyms SSG and SSL to refer to these priors.
As mentioned in Section 3.1, the point estimates are the medians of the MCMC
chains for each Monte Carlo replicate. To appraise the performance of the
estimation as a whole, we calculate the average of these point estimates and
their 95% HPD intervals. The results for the component parameters are
presented in Table 1 and Table 2. It is worth noting that the method, under both priors, performs satisfactorily, with some estimates even coinciding with the parameter values, which, in turn, are encompassed by the HPD intervals in every $\alpha_{t}$ scenario.
Regarding the dynamic mixture weights, Figure 1 shows the results. For the
sinusoidal scenario, we see that the method, considering both SSG and SSL priors, succeeds in mimicking the curve’s shape. Although the bumps and blocks functions are less smooth than the sinusoidal one, the method can still satisfactorily estimate their curves. In fact, for the bumps, the point estimates not only follow the sharp shape of the function but also capture the null values correctly. For the blocks scenario, the estimates properly mimic the discontinuity regions, and the HPD intervals succeed at encompassing the entire curve.
Table 1: Averages of the point estimates (95% HPD credible intervals) for the component parameters $\mu_{1},\tau_{1}^{2},\mu_{2}$ and $\tau_{2}^{2}$, based on 1,000 replications of data sets, considering the SSG prior for $\bm{\theta}$.
$\alpha_{t}$’s curve | $\mu_{1}$ = 0 | $\tau_{1}^{2}$ = 4 | $\mu_{2}$ = 2 | $\tau_{2}^{2}$ = 4
---|---|---|---|---
Sinusoidal | 0.00 (-0.04;0.06) | 4.00 (3.58;4.65) | 2.00 (1.95;2.04) | 4.00 (3.40;4.59)
Bumps | 0.00 (-0.04;0.02) | 4.01 (3.59;4.38) | 1.90 (1.60;2.15) | 3.62 (1.06;6.45)
Blocks | 0.00 (-0.04;0.06) | 4.06 (3.41;4.71) | 2.00 (1.95;2.06) | 4.00 (3.50;4.63)
Table 2: Averages of the point estimates (95% HPD credible intervals) for the component parameters $\mu_{1},\tau_{1}^{2},\mu_{2}$ and $\tau_{2}^{2}$, based on 1,000 replications of data sets, considering the SSL prior for $\bm{\theta}$.
$\alpha_{t}$’s curve | $\mu_{1}$ = 0 | $\tau_{1}^{2}$ = 4 | $\mu_{2}$ = 2 | $\tau_{2}^{2}$ = 4
---|---|---|---|---
Sinusoidal | 0.00 (-0.05;0.05) | 4.05 (3.50;4.62) | 2.00 (1.95;2.04) | 3.99 (3.49;4.50)
Bumps | 0.00 (-0.04;0.03) | 3.96 (3.34;4.53) | 1.89 (1.43;2.22) | 3.66 (0.71;6.56)
Blocks | 0.02 (-0.15;0.05) | 3.91 (3.40;5.60) | 1.95 (1.28;2.07) | 3.85 (0.82;4.76)
Figure 1: Estimates of the $\alpha_{t}$’s provided by SSG prior (left); and
SSL prior (right). The curves assigned to $\alpha_{t}$ are, respectively: the
sinusoidal (top), the bumps (middle), and the blocks (bottom). The full lines
correspond to the $\alpha_{t}$’s curve, the dashed lines correspond to the
average of the pointwise estimates and the shaded areas correspond to the 95%
HPD intervals.
### 4.2 Taquari quota data set
Part of the Taquari-Antas Hydrographic Basin (TAHB) in the state of Rio Grande
do Sul (south of Brazil), the Taquari River is located in the upper domain of
the Baixo Taquari-Antas Valley, a region that has been affected by an
increasing number of extreme rainfall events in recent decades (Tognoli et al., 2021). As a result, on many occasions, the rain excess is not drained
efficiently and floods riverside regions. This phenomenon is aggravated in
urban areas, where the human occupation of floodplains and the soil
impermeability contribute to reducing the infiltration capacity and
overloading the drainage system, leading to flood inundations (Kurek, 2016).
As reported by Oliveira et al. (2018), Encantado is one of the cities
adjacent to the course of the Taquari River most susceptible to fluvial
inundations. The geomorphological and topographical characteristics of
Encantado’s land favor the water accumulation and restrict its drainage
(Oliveira et al., 2018). Furthermore, the urbanization of areas with high
flood vulnerability in this municipality contributes to intensifying the
occurrence of flood inundations (Kurek, 2016).
Because of these circumstances, we propose applying Algorithm 1 to a time
series of the Taquari River quota to estimate the probability of an inundation
regime in Encantado’s urban areas. A river quota is the height of the water
body, conventionally measured in centimeters (cm), at a given region of the
riverbank. The data set corresponds to the records of Encantado’s fluviometric
station identified by the code 86720000. The monthly time series of this
station comes from the Hidroweb system, an integrated platform of the National
Water Resources Management System (SINGREH) available at
https://www.snirh.gov.br/hidroweb/serieshistoricas. Figure 2 shows a map of
Encantado, highlighting the station used in this study.
Figure 2: Location map of the fluviometric station in the city of Encantado.
In the upper-right corner, the Taquari-Antas Hydrographic Basin in Rio Grande
do Sul state, south of Brazil.
To validate the estimated probabilities, we use a report from the Brazilian
Geological Survey (CPRM) (Peixoto and Lamberty, 2019) that records the months
when floods occurred in Encantado. Therefore, we can check whether the
estimates of the mixture weight properly describe the two flood regimes, no
inundation and inundation, for each month. It is worth highlighting that since inundations
can last for a couple of days or even more, there are no records of the
specific days when these events took place, only the months. Because of that,
and considering that the model is a mixture of two Gaussian distributions, we
use the monthly average of the Taquari quota to estimate the probability
associated with flood inundations. The period analyzed was from May 2004 to
December 2014, consisting of 128 observations. Figure 3 presents this data
set.
Table 3 shows the point estimates for the component parameters that describe
each flood regime. Note that the results provided by the method under the SSG
prior are similar to those achieved when the SSL prior is assigned to the
distribution of wavelet coefficients. Concerning the dynamic mixture weights,
Figure 4 shows the estimates considering both priors for $\bm{\theta}$. By
analyzing the results, we see that using the SSL prior allows estimating
higher peaks for the probabilities related to inundation periods than using
the SSG prior. In fact, under a Bayes classifier, the method with the SSG
prior detects neither the months when flood episodes were reported nor the
change points ($\{t:\alpha_{t}=0.5\}$).
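The classification rule used here can be sketched in a few lines; which mixture component represents the inundation regime, and the example estimates, are assumptions of this illustration.

```python
import numpy as np

def classify_regimes(alpha_hat):
    """Bayes classifier on the estimated dynamic weights: flag month t as
    an inundation month when the inundation regime's weight 1 - alpha_t
    exceeds 1/2 (alpha_t = 0.5 marks a change point). Which component is
    the inundation regime is an assumption of this sketch."""
    return (1 - np.asarray(alpha_hat)) > 0.5

# Illustrative point estimates for five consecutive months:
print(classify_regimes([0.95, 0.90, 0.40, 0.30, 0.85]))
# -> [False False  True  True False]
```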
In summary, the method provides results consistent with the data on flood
inundations in Encantado available in other works and reports (see Peixoto
and Lamberty, 2019; Tognoli et al., 2021). In addition, choosing the Laplace
density in the spike and slab prior tends to provide dynamic weight estimates
more capable of detecting floods.
Figure 3: Monthly average of Taquari’s river quota (cm) from May 2004 to December 2014.

Table 3: Medians (95% HPD credible intervals) for the component parameters $\mu_{1},\tau_{1}^{2},\mu_{2}$ and $\tau_{2}^{2}$ of the Taquari quota data set, based on the MCMC samples.

Parameters | SSG prior | SSL prior
---|---|---
$\mu_{1}$ | 227.07 (210.09; 242.89) | 220.60 (206.25; 236.28)
$\tau_{1}^{2}$ | 2.30e-4 (1.54e-4; 3.15e-4) | 2.58e-4 (1.77e-4; 3.45e-4)
$\mu_{2}$ | 405.01 (316.38; 483.35) | 400.20 (355.72; 439.54)
$\tau_{2}^{2}$ | 1.14e-4 (2.56e-5; 3.42e-4) | 1.04e-4 (3.65e-5; 1.85e-4)
Figure 4: Estimates of the $\alpha_{t}$’s of the Taquari quota data provided
by SSG prior (left); and SSL prior (right). The full (black) lines correspond
to the point estimates (medians) and the dashed (blue) lines mark the months
when flood inundations were reported by Peixoto and Lamberty (2019).
## 5 Conclusion
This paper presents an approach to identify regime switches in bimodal data
sets. We use a two-component mixture model whose mixture weight varies
according to some index, like time. This adaptation makes the model more
flexible and adaptive to a broader range of clustering and classification
problems. Furthermore, we use wavelet bases to estimate the dynamic behavior
of the mixture weight due to their excellent properties when it comes to
curves’ estimation. However, unlike other approaches in the literature that
also rely on wavelets (see Montoril et al.,, 2019), here we consider a
Bayesian framework and propose estimating the dynamic weights and the
component parameters jointly through an efficient Gibbs sampling algorithm.
We analyze the performance of this MCMC algorithm by conducting Monte Carlo
experiments and illustrate the approach with an application to a river quota
data set. Results from the simulations show that the method provides good
estimates for the component parameters and the dynamic weights even when the
function behind $\alpha_{t}$’s behavior is rougher. Additionally, the
estimation performance using SSG prior is similar to the performance achieved
when SSL prior is employed. The same does not apply to the results obtained in
the river quota data set. For this application, we notice that implementing
the method under the SSG prior to the wavelet coefficients yields smaller
values for the probabilities associated with inundations occurrence than the
estimates provided by using the SSL prior. This is likely because the Gaussian
distribution does not have heavy tails, unlike the Laplace distribution.
## References
* Abramovich et al., (2000) Abramovich, F., Bailey, T. C., and Sapatinas, T. (2000). Wavelet analysis and its statistical applications. Journal of the Royal Statistical Society: Series D (The Statistician), 49(1):1–29.
* Abramovich et al., (1998) Abramovich, F., Sapatinas, T., and Silverman, B. W. (1998). Wavelet thresholding via a Bayesian approach. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 60(4):725–749.
* Albert and Chib, (1993) Albert, J. H. and Chib, S. (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669–679.
* Bordes et al., (2006) Bordes, L., Delmas, C., and Vandekerkhove, P. (2006). Semiparametric estimation of a two-component mixture model where one component is known. Scandinavian Journal of Statistics, 33(4):733–752.
* Cai and Brown, (1999) Cai, T. and Brown, L. D. (1999). Wavelet estimation for samples with random uniform design. Statistics & Probability Letters, 42(3):313–321.
* (6) Donoho, D. L. (1993a). Nonlinear wavelet methods for recovery of signals, densities, and spectra from indirect and noisy data. In Proceedings of Symposia in Applied Mathematics, pages 173–205. American Mathematical Society.
* (7) Donoho, D. L. (1993b). Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115.
* Donoho et al., (1996) Donoho, D. L., Johnstone, I. M., Kerkyacharian, G., and Picard, D. (1996). Density estimation by wavelet thresholding. The Annals of Statistics, 24(2):508–539.
* Donoho and Johnstone, (1994) Donoho, D. L. and Johnstone, J. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455.
* Escobar and West, (1995) Escobar, M. D. and West, M. (1995). Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90(430):577–588.
* George and McCulloch, (1993) George, E. I. and McCulloch, R. E. (1993). Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881–889.
* Hall and Patil, (1995) Hall, P. and Patil, P. (1995). Formulae for mean integrated squared error of nonlinear wavelet-based density estimators. The Annals of Statistics, 23(3):905–928.
* Hall and Zhou, (2003) Hall, P. and Zhou, X.-H. (2003). Nonparametric estimation of component distributions in a multivariate mixture. The Annals of Statistics, 31(1):201–224.
* Härdle et al., (2012) Härdle, W., Kerkyacharian, G., Picard, D., and Tsybakov, A. (2012). Wavelets, approximation, and statistical applications, volume 129. Springer Science & Business Media.
* Hughes and Guttorp, (1994) Hughes, J. P. and Guttorp, P. (1994). A class of stochastic models for relating synoptic atmospheric patterns to regional hydrologic phenomena. Water Resources Research, 30(5):1535–1546.
* Hui and Zhou, (1998) Hui, S. L. and Zhou, X.-H. (1998). Evaluation of diagnostic tests without gold standards. Statistical Methods in Medical Research, 7(4):354–370.
* (17) Johnstone, I. and Silverman, B. W. (2005a). EbayesThresh: R programs for empirical Bayes thresholding. Journal of Statistical Software, 12:1–38.
* (18) Johnstone, I. M. and Silverman, B. W. (2005b). Empirical Bayes selection of wavelet thresholds. The Annals of Statistics, 33(4):1700–1752.
* Kurek, (2016) Kurek, R. K. M. (2016). Analysis of floods in the vale do taquari/rs as subsidy the development of a forecast model. Master’s thesis, Federal University of Santa Maria.
* Mallat, (1989) Mallat, S. G. (1989). A theory for multiresolution signal decomposition: the wavelet representation. IEEE transactions on pattern analysis and machine intelligence, 11(7):674–693.
* Montoril et al., (2021) Montoril, M. H., Correia, L. T., and Migon, H. S. (2021). Bayesian estimation of dynamic weights in Gaussian mixture models.
* Montoril et al., (2019) Montoril, M. H., Pinheiro, A., and Vidakovic, B. (2019). Wavelet-based estimators for mixture regression. Scandinavian Journal of Statistics, 46(1):215–234.
* Morettin, P. A. (1996). From Fourier to wavelet analysis of time series. In Prat, A., editor, COMPSTAT, pages 111–122, Heidelberg. Physica-Verlag HD.
* Nagy et al., (2011) Nagy, I., Suzdaleva, E., Kárný, M., and Mlynářová, T. (2011). Bayesian estimation of dynamic finite mixtures. International Journal of Adaptive Control and Signal Processing, 25(9):765–787.
* Ogden, (1997) Ogden, R. T. (1997). Essential wavelets for statistical applications and data analysis. Birkhäuser Boston Inc.
* Oliveira et al., (2018) Oliveira, G. G. d., Eckhardt, R. R., Haetinger, C., and Alves, A. (2018). Caracterização espacial das áreas suscetíveis a inundações e enxurradas na bacia hidrográfica do rio taquari-antas. Geociências, 37(4):849–863.
* Patra and Sen, (2016) Patra, R. K. and Sen, B. (2016). Estimation of a two-component mixture model with applications to multiple testing. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 78(4):869–893.
* Peixoto and Lamberty, (2019) Peixoto, C. A. B. and Lamberty, D. (2019). Setorização de áreas de alto e muito alto risco a movimentos de massa, enchentes e inundações: Encantado, rio grande do sul. Technical report, CPRM, Porto Alegre.
* Percival and Walden, (1999) Percival, D. B. and Walden, A. T. (1999). Wavelet Methods for Time Series Analysis. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
* Priestley, (1996) Priestley, M. B. (1996). Wavelets and time-dependent spectral analysis. Journal of Time Series Analysis, 17(1):85–103.
* Richardson and Green, (2002) Richardson, S. and Green, P. J. (2002). On Bayesian Analysis of Mixtures with an Unknown Number of Components (with discussion). Journal of the Royal Statistical Society: Series B (Methodological), 59(4):731–792.
* Rindskopf and Rindskopf, (1986) Rindskopf, D. and Rindskopf, W. (1986). The value of latent class analysis in medical diagnosis. Statistics in Medicine, 5(1):21–27.
* Tognoli et al., (2021) Tognoli, F. M. W., Bruski, S. D., and Araujo, T. P. d. (2021). Data analysis reveals that extreme events have increased the flood inundations in the Taquari River valley, southern Brazil. Latin American Data in Science, 1(1):16–25.
* Vidakovic, (1999) Vidakovic, B. (1999). Statistical modeling by wavelets, volume 503. John Wiley & Sons.
* Walker et al., (2009) Walker, M. G., Mateo, M., Olszewski, E. W., Sen, B., and Woodroofe, M. (2009). Clean kinematic samples in dwarf spheroidals: An algorithm for evaluating membership and estimating distribution parameters when contamination is present. The Astronomical Journal, 137(2):3109.
# A Dynamic Model of Performative Human-ML Collaboration: Theory and Empirical
Evidence
Tom Sühr
MPI for Intelligent Systems, Tübingen
Tübingen AI Center
<EMAIL_ADDRESS>
&Samira Samadi
MPI for Intelligent Systems, Tübingen
Tübingen AI Center
<EMAIL_ADDRESS>
&Chiara Farronato
Harvard Business School
Harvard University, CEPR, NBER
<EMAIL_ADDRESS>
###### Abstract
Machine learning (ML) models are increasingly used in various applications,
from recommendation systems in e-commerce to diagnosis prediction in
healthcare. In this paper, we present a novel dynamic framework for thinking
about the deployment of ML models in a performative, human-ML collaborative
system. In our framework, the introduction of ML recommendations changes the
data generating process of human decisions, which are only a proxy to the
ground truth and which are then used to train future versions of the model. We
show that this dynamic process in principle can converge to different stable
points, i.e. where the ML model and the Human+ML system have the same
performance. Some of these stable points are suboptimal with respect to the
actual ground truth. We conduct an empirical user study with 1,408
participants to showcase this process. In the study, humans solve instances of
the knapsack problem with the help of machine learning predictions. This is an
ideal setting because we can see how ML models learn to imitate human
decisions and how this learning process converges to a stable point. We find
that for many levels of ML performance, humans can improve the ML predictions
to dynamically reach an equilibrium performance that is around 92% of the
maximum knapsack value. We also find that the equilibrium performance could be
even higher if humans rationally followed the ML recommendations. Finally, we
test whether monetary incentives can increase the quality of human decisions,
but we fail to find any positive effect. Our results have practical
implications for the deployment of ML models in contexts where human decisions
may deviate from the indisputable ground truth.
## 1 Introduction
Human-ML collaboration is increasingly used in various applications, from
content moderation in social media [1] to predicting diagnoses in healthcare
[2, 3] and making hiring decisions in human resources [4]. Companies that
implement human-ML collaborative systems face three crucial challenges: 1) ML
models learn from past human decisions, which are often only an approximation
to the ground truth (noisy labels); 2) ML models are rolled out to help future
human decisions, affecting the data-generating process of human-ML
collaboration that then influences future updates to ML models (performative
predictions [5]); and 3) the quality of the human-ML collaborative prediction
of the ground truth may change as a function of incentives and other human
factors. These challenges create a dynamic learning process. Without access to
the ground truth, it is often difficult to know whether the learning process
will reach an equilibrium state with a good approximation of the ground truth,
whether it is interrupted at a sub-optimal level, or whether it does not
reach a stable state at all.
For intuition, we can focus on the decision of a healthcare company to develop
and deploy an ML model to predict medical diagnoses from patient visits. The
problem is made difficult by the fact that a doctor’s diagnoses can be wrong,
and it is often too costly or time-consuming to identify the indisputable
ground truth—i.e., the underlying true diagnosis of a patient—so the company
typically uses all diagnoses to train their ML model, without distinction
between good or bad diagnoses. In addition, the company typically evaluates
the algorithm’s performance based on its ability to match those same doctor
diagnoses, potentially replicating their mistakes. The dynamic deployment of
updates to ML models that support doctor diagnoses could lead to a downward
spiral of human+ML performance if the company deploys a bad model and the bad
model adversely affects doctor decisions. Or, it can lead to continuous
improvement until it reaches a stable point that is a good approximation to
the indisputable ground truth. Without (potentially costly) efforts to measure
the ground truth, the company has no way of distinguishing between downward
spirals or continuous improvements.
This raises a multitude of empirical questions regarding the governing
mechanisms of this dynamic system. How do humans improve on ML predictions of
different quality levels, and do financial incentives matter? Will the dynamic
learning process converge to a good equilibrium even without the company
knowing the actual ground truth labels? How do we design an experiment to test
such a system in an empirical context?
#### Contributions.
In this paper, we present a novel framework for thinking about ML deployment
strategies in a performative, human-AI collaborative system. We present a
simple theoretical framework to identify conditions under which ML deployment
strategies converge to stable points that are a good approximation to the
ground truth, and conditions under which there are downward spirals away from
the ground truth. To validate our theory, we provide an empirical study in
which humans solve knapsack problems with the help of machine learning
predictions. We conducted a user study with 1,408 participants, each of whom
solved 10 knapsack problems. Knapsack problems are particularly suited to
explore our questions because we can train machine learning models to
replicate human decisions, while having access to the underlying ground truth
(the optimal solution) to evaluate the learning process. Additionally,
knapsack problems can be hard for humans, making the task not obvious; and
problem instances can be generated and labeled perfectly at negligible cost.
We highlight both theoretical and empirical contributions. Our theoretical
framework introduces the _collaborative characteristic function_ as the
function mapping the performance of ML models with respect to the ground truth
to the performance of humans using those models when making decisions
(human+ML). We define _utility_ the performance of decisions (made by ML
alone, human alone, or human+ML) with respect to the ground truth. We note
that utility cannot be easily quantified for many practical applications.
Finally, we introduce the notion of _collaborative learning paths_, each of
which characterizes a possible dynamic deployment strategy. We show conditions
under which this dynamic system theoretically reaches a stable point of
utility for the firm.
On the empirical side, we show how low-quality ML models can make humans
perform worse than if they had no access to ML recommendations. Still, humans
improve on ML models so that the deployment strategy of ML models with initial
performance between 72% and 92% will converge to a performance that is around
92% of the value of the optimal knapsack solution. Two empirical findings
constrain the system from converging to an even higher performance. First,
humans do not respond to financial incentives for performance. Second, humans
sometimes make decisions that are worse than the ML recommendation, despite
the fact that it is fairly easy for them to compare their solution to the ML
suggestion and pick the best of the two.
Our results have practical implications for the deployment of ML models when
humans are influenced by those models but their decisions deviate from an
unknown ground truth. First, performance metrics of ML models can be
misleading when the learning objective is based on comparisons against human
decisions and those decisions can be wrong. Companies should thus exert
efforts to assess the quality of human decisions and take that into account
when training ML models. For example, in the medical setting, human diagnoses
should be first verified or confirmed by external experts, or patients should
be followed up to confirm the validity of initial diagnoses. At a minimum, ML
models should be trained on subsets of data for which there is enough
confidence that the decisions are correct. Second, our work highlights the
strategic importance of deploying ML models that allow for convergence to a
stable point with higher utility than humans alone. Such convergence is not
guaranteed and, as argued above, difficult to assess. Third, our work calls
for the need to adopt a dynamic approach when deploying algorithms that
interact with human decisions, and those interactions are used for future
model building.
## 2 Related Work
There has been a growing body of work investigating various forms of human-AI
collaboration, ranging from learning-to-defer systems, where a model defers
prediction tasks to humans if its own uncertainty is too high [6, 7, 8], to
AI-assisted decision making, where humans may or may not consult ML
predictions to make a decision [9, 3, 2]. Several alternative decision mechanisms have also been
explored [10, 11]. The application areas range from programming [12, 13], to
healthcare [2, 3] and business consulting [14]. Related work also investigates
factors influencing human-ML collaboration, such as explanations of ML
predictions [15], monetary incentives [16], fairness constraints [17], and
humans’ adaptability to model changes [18]. In this work, for the first time
to the best of our knowledge, we examine the human+ML interaction from a
dynamic perspective, where ML models learn from human decisions that are 1)
the result of previous human+ML collaboration and 2) can arbitrarily deviate
from the underlying ground truth.
This paper is also inspired by an extensive line of work on performative
prediction [5, 19, 20, 21], a theoretical framework in which predictions
influence the outcome they intend to predict. We adapt the ideas of
performative prediction to a context of human-ML collaboration and extend it
in three major ways: 1) In our setting, the model predictions change the
quality of the human-ML labels as a proxy for the ground truth (e.g., a doctor
diagnosis), but the ground truth is held constant (e.g., the true patient
diagnosis); 2) We introduce the concept of utility, to quantify the quality of
a solution with respect to the ground truth. There can be several stable
points with respect to model parameters in the performative prediction
framework, but not all of them are equally good at approximating the
indisputable ground truth; and 3) The ground truth is unknown, and the mapping
between human or ML labels and the ground truth is not fixed. To the best of
our knowledge, we are the first to explore performative predictions where the
company deploying ML models is unaware of the model’s performance relative to
the ground truth, and only knows its similarity to human labels. Our empirical
application is also novel in that it investigates the implications of
performative predictions for human-ML collaboration.
## 3 Problem Statement
We consider a setting in which time is divided into discrete epochs
$t=1,...,T$. At each $t$, a firm deploys a machine learning model
$M_{t}\in\mathcal{M}$ with $M_{t}:\mathcal{X}\rightarrow\mathcal{Y}$. The
model $M_{t}$ predicts a solution $Y\in\mathcal{Y}$ (e.g., a diagnosis) to a
problem $X\in\mathcal{X}$ (e.g. the patient’s symptoms) as a function of past
data. The firm employs expert humans $H\in\mathcal{H}$ with
$H:\mathcal{X}\times\mathcal{Y}\rightarrow\mathcal{Y}$, who solve the problems
with the help of the ML predictions. We will write $M_{t}(X)=Y_{M_{t}}$ and
$H(X,Y_{M_{t}})=Y_{H_{t}}$. We assume that for all $X\in\mathcal{X}$, there
exists an optimal solution $Y^{*}$, which is the indisputable ground truth.
#### The Firm’s Learning Objective.
In many real-world applications, determining the ground truth label $Y^{*}$
can be extremely costly. For example, obtaining the correct medical diagnosis
can often require the knowledge of various specialists (e.g., orthopedists,
pediatricians, neurologists). Even when a single expert is enough, they can
misdiagnose a patient’s symptoms. Yet, in many of these cases, using the human
labels $Y_{H_{t}}$ as a proxy for $Y^{*}$ is the only feasible option to build
ML models. We allow for the quality of $Y_{H_{t}}$ with respect to $Y^{*}$ to
change. This means that two iterations of the ML model, $M_{t}$ and $M_{t+1}$,
are trained on data from two different data generating processes,
$(X,Y_{H_{t-1}})\sim D_{t-1}$ and $(X,Y_{H_{t}})\sim D_{t}$, respectively.
Without access to $Y^{*}$, the only feasible learning objective for a firm
that wants to update its model parameters at time $t$ is the comparison
between the latest human-ML collaborative labels and the new predictions. (We
assume that models at time $t$ are trained exclusively on data from the
previous period $t-1$, although we can generalize our setting to include any
data points from 0 to $t-1$.) For a given loss function
$\mathit{l}:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}$, this objective is
$L(Y_{M_{t}},Y_{H_{t-1}}):=\underset{H\in\mathcal{H}}{\mathbb{E}}\big[\underset{(X,Y_{H_{t-1}})\sim D_{t-1}}{\mathbb{E}}\,\mathit{l}(Y_{M_{t}},Y_{H_{t-1}})\big].$ (1)
The firm wants to minimize the difference between the model predictions at
time $t$ and the human labels at time $t-1$. We can write the firm’s problem
as selecting a model $M_{t}$ to minimize the loss function in Equation 1:
$\underset{M_{t}\in\mathcal{M}}{\text{minimize}}\ \ L(Y_{M_{t}},Y_{H_{t-1}}).$ (2)
For simplicity, we assume that at each time $t$, the firm collects enough data
to perfectly learn the human-ML solution. In other words, with the optimal
model, $L(Y_{M_{t}},Y_{H_{t-1}})=0$. We return to this assumption in Appendix
A.7.
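As an illustration of one deployment cycle under this objective, the following minimal sketch fits $M_{t}$ on the previous period's human labels; the linear least-squares model and the synthetic data stand in for the abstract model class $\mathcal{M}$ and loss $\mathit{l}$, and are not the paper's implementation.

```python
import numpy as np

def fit_model(X, Y_human_prev):
    """One deployment cycle: pick M_t minimizing the empirical surrogate
    loss l(Y_{M_t}, Y_{H_{t-1}}) on the previous period's human labels.
    A linear least-squares model stands in for the abstract class M."""
    coef, *_ = np.linalg.lstsq(X, Y_human_prev, rcond=None)
    return lambda X_new: X_new @ coef

# Period t-1: humans (assisted by the old model) produced labels on X.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Y_human_prev = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

M_t = fit_model(X, Y_human_prev)      # deployed at time t to assist humans
```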
#### Utility.
In our scenario, the firm cannot quantify the true quality of a solution $Y$
with respect to $Y^{*}$. The loss in Equation 2 is just a surrogate for the
loss $L(Y,Y^{*})$, which is impossible or too costly to obtain. The firm thus
defines the human label as "ground truth," and maximizes the similarity
between model and human solutions, without knowing how close the human or ML
solutions are to the indisputable ground truth. In order to evaluate the
firm’s progress in approximating $Y^{*}$, it is useful to define a measure of
utility.
###### Definition 1.
(Utility) Let $d_{X}$ be a distance measure on $\mathcal{Y}$ with respect to a
given $X\in\mathcal{X}$. The function
$\mathbb{U}:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}$ is a utility
function on $\mathcal{X}\times\mathcal{Y}$, if $\forall
X\in\mathcal{X},Y_{min},Y,Y^{\prime},Y^{*}\in\mathcal{Y}$
1. 1.
$\exists
Y_{min}\in\mathcal{Y}:\mathbb{U}(X,Y)\in[\mathbb{U}(X,Y_{min}),\mathbb{U}(X,Y^{*})]$
(bounded)
2. 2.
$\exists\varepsilon>0:|d_{X}(Y,Y^{*})-d_{X}(Y^{\prime},Y^{*})|<\varepsilon\Rightarrow\mathbb{U}(X,Y)=\mathbb{U}(X,Y^{\prime})$
($\varepsilon$-sensitive)
3. 3.
$d_{X}(Y,Y^{*})+\varepsilon<d_{X}(Y^{\prime},Y^{*})\Rightarrow\mathbb{U}(X,Y)>\mathbb{U}(X,Y^{\prime})$
(proximity measure)
The utility of a solution for the firm is maximal if $Y$ is
$\varepsilon$-close to $Y^{*}$ with respect to the underlying problem $X$. The
variable $\varepsilon$ should be interpreted as the threshold below which a
firm perceives no difference between two outcomes, i.e., it does not care
about infinitely small improvements.
#### Collaborative Characteristic Function.
As time $t$ increases, the firm hopes that the distributions $D_{t}$ shift
closer to the optimal distribution $D^{*}$, where $(X,Y)=(X,Y^{*})$. In other
words, for each model’s distance $d$, $d(D_{t},D^{*})>d(D_{t+1},D^{*})$. This
could happen, for example, if humans were able to easily compare available
solutions and pick the one that is closest to the indisputable ground truth.
We can translate this continuous improvement into properties of the human
decision function $H$ as follows: for all $t=1,...,T$ and $X\in\mathcal{X}$,
$\underset{H\in\mathcal{H}}{\mathbb{E}}[\mathbb{U}(X,H(X,Y_{M_{t}}))]=\mathbb{U}(X,Y_{M_{t}})+\delta_{M_{t}}.$
(3)
The firm’s hope is that $\delta_{M_{t}}\geq 0$ for every deployed model $M_{t}$. Effectively,
$\delta_{M_{t}}$ characterizes the human-ML collaboration for all utility
levels of a model. If $\delta_{M_{t}}$ is positive, humans are able to improve
on a ML prediction (and future model iterations will thus get better at
approximating the ground truth). Instead, if $\delta_{M_{t}}$ is negative,
humans will perform worse than the ML recommendations, and future model
iterations will get progressively farther away from the ground truth. We
define the function given by Equation 3 as the collaborative characteristic
function:
###### Definition 2.
(Collaborative Characteristic Function) For a utility function $\mathbb{U}$,
humans $H\in\mathcal{H}$, we define the collaborative characteristic function
$\Delta_{\mathbb{U}}:\mathbb{R}\rightarrow\mathbb{R}$ as follows:
$\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))=\underset{H\in\mathcal{H}}{\mathbb{E}}[\mathbb{U}(X,H(X,Y_{M}))]=\mathbb{U}(X,Y_{M})+\delta_{M}.$
The function $\Delta_{\mathbb{U}}$ can take any arbitrary form. Several
factors can affect $\Delta_{\mathbb{U}}$, e.g., monetary incentives and ML
explanations. In subsequent sections, we empirically approximate one such
function.
#### Collaborative Learning Path and Stable Points.
Although $\Delta_{\mathbb{U}}$ has infinite support, a firm will only
experience a discrete set of utility values achieved by humans with the help
of ML recommendations. We call this the collaborative learning path. It is
characterized by $\Delta_{\mathbb{U}}$, the utility of the first deployed
model $s$, and the number of deployment cycles $T$:
###### Definition 3.
(Collaborative Learning Path) Let $\Delta_{\mathbb{U}}$ be a collaborative
characteristic function, $t=1,...,T\in\mathbb{N}_{\geq 1}$ the number of
deployment cycles and $s=\mathbb{U}(X,Y_{M_{1}})$ the utility of the starting
model. We define the collaborative learning path to be the function
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)=\underset{H\in\mathcal{H}}{\mathbb{E}}\big[\underset{X\in\mathcal{X}}{\mathbb{E}}\,\mathbb{U}(X,H(X,Y_{M_{t}}))\big].$
###### Definition 4.
(Stable Point) A stable point $\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ occurs
at $t$ if for all $t^{\prime}\geq t$,
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t^{\prime})=\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$.
Stable points are states where the utility remains constant in all future
model deployments. If $Y^{*}$ is unique for all $X$, then this is also a
stable point for the distribution shifts. Whether a firm can reach a stable
point on its collaborative learning function depends on the shape of
$\Delta_{\mathbb{U}}$ and the initial model utility $s$. Figure 1 shows two
examples of collaborative characteristic functions and collaborative learning
paths. The 45-degree line includes the points where
$\underset{X,H}{\mathbb{E}}[\mathbb{U}(X,H(X,Y))]=\underset{X}{\mathbb{E}}[\mathbb{U}(X,Y)]$.
Stable points will always lie on this line, because a stable point requires
$\delta_{t}\approx 0$ ($|\delta_{t}|\leq\epsilon$, where $\epsilon$ is defined
in Appendix A.5 and denotes the smallest change in utility that is possible
for a given $\varepsilon$ from Definition 1). If $|\delta_{t}|>\epsilon$, it
indicates that humans’ influence changes labels $Y$ relative to the most
recent ML model, leading to a new data distribution. The model at $t+1$ will
thus differ from $M_{t}$, preventing stability. When the model and human+ML
labels differ, there are two possible cases. First, $\delta_{M_{t}}>\epsilon$,
which implies that the collaborative characteristic function
$\Delta_{\mathbb{U}}$ is above the 45-degree line on that portion of the
domain (Figure 1(a)). In this case, human+ML labels are closer to the
indisputable ground truth than the model alone, which leads to improvements of
subsequent model deployments. Second, if $\delta_{M_{t}}<-\epsilon$, the
collaborative characteristic function is below the 45-degree line (Figure
1(b)). In this case, human+ML labels are further away from the indisputable
ground truth than the model alone, which leads to deterioration of subsequent
model deployments.
(a) Collaborative Improvement
(b) Collaborative Harm
Figure 1: Collaborative Improvement (left): The firm’s collaborative
characteristic function and one collaborative learning path, if humans improve
on the ML solution. The x-axis denotes the model expected utility, the y-axis
denotes expected human+ML utility. The firm deploys a first model with utility
(s). Then humans use the model and improve utility by $\delta_{1}$, leading to
expected human+ML utility (1). The firm learns a new model with utility (b) on
the new data distribution. This is viable under the assumption that the new
model has the same utility as the previous period’s human+ML labels, i.e., we
can move horizontally from (1) to the 45-degree line at (b). Humans can
further improve utility by $\delta_{2}$, which leads to expected utility (2).
The dynamic improvement process continues until it reaches stable point
utility (6-d). Collaborative Harm (right): The firm deploys a model with
expected utility (s) but the humans, when interacting with the model, decrease
utility by $\delta_{1}$, with expected utility (1). The firm will thus learn a
model of utility (b) on the new distribution. The downward spiral continues
until stable point (d).
We present the best-case and worst-case scenarios from Figure 1 as
Propositions 1 and 2 below:
###### Proposition 1.
(Collaborative Improvement) If
$\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))\geq\mathbb{U}(X,Y_{M})$ for all
$M\in\mathcal{M},X\in\mathcal{X}$, then
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ is non-decreasing in $t=1,...,T$,
and for sufficiently large $T$ there exists a $t^{\prime}\in[1,T]$ such that
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t^{\prime})$ is a stable point.
###### Proof.
(sketch) Because $\mathbb{U}$ is bounded, $\delta_{M}$ must be 0 in the
extreme points. Furthermore, because of the $\varepsilon$-sensitivity of
$\mathbb{U}$, the steps $t$ until reaching the maximum utility are also
bounded. It follows that there exists a $t\in\mathbb{N}$ such that
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)-\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t+1)=0$,
which is a stable point. See Appendix A.6 for the complete proof. ∎
###### Proposition 2.
(Collaborative Harm) If
$\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))\leq\mathbb{U}(X,Y_{M})$ for all
$M\in\mathcal{M},X\in\mathcal{X}$, then
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ is non-increasing in $t=1,...,T$,
and for sufficiently large $T$ there exists a $t^{\prime}\in[1,T]$ such that
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t^{\prime})$ is a stable point.
###### Proof.
Similar to the proof of Proposition 1. ∎
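The dynamics in Definitions 3 and 4 and the two propositions can be simulated with a short fixed-point iteration, sketched below; the characteristic function used here is an illustrative choice satisfying Proposition 1, not one estimated from data.

```python
def learning_path(delta_U, s, T=50, eps=1e-3):
    """Collaborative learning path u_1, u_2, ... and its stable point:
    iterate u_{t+1} = Delta_U(u_t) until the change falls below eps,
    the epsilon-sensitivity of the utility function."""
    path = [s]
    for _ in range(T):
        path.append(delta_U(path[-1]))
        if abs(path[-1] - path[-2]) <= eps:
            break
    return path

# Illustrative characteristic function above the 45-degree line, with a
# stable point near u = 0.92 (collaborative improvement):
delta_U = lambda u: min(u + 0.5 * max(0.92 - u, 0.0), 1.0)
print(learning_path(delta_U, s=0.72))  # rises from 0.72 toward ~0.92
```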
In practice, a firm’s collaborative characteristic function can take any
arbitrary shape, with portions above and portions below the 45-degree line. As
long as the function is continuous, at least one stable point exists, and
possibly more. When more than one stable point exist, the firm would like to
reach the stable point with the highest utility (i.e., the highest point of
the characteristic function lying on the 45-degree line). However, since the
firm does not have access to the indisputable ground truth, when it reaches a
stable point it does not know where such point lies on the 45-degree line.
In what follows, we empirically explore a context where 1) the ML model learns
to imitate human decisions, and 2) it is easy for us to identify the
indisputable ground truth. The setting allows us to draw at least a portion of
the collaborative characteristic function, and explore the effects of human
behavior on its shape, particularly the effect of monetary incentives and
alternative selection criteria. We present study participants with instances
of hard knapsack problems to answer the following research questions:
RQ1: How do monetary incentives affect human performance? To keep our
treatment conditions manageable, we explore the effect of different levels of
performance bonuses on $\mathbb{U}(H(X,.))$, i.e., the human performance
without ML recommendations.
RQ2: What is the shape of the human-ML collaborative characteristic function
$\Delta_{\mathbb{U}}$? Here, we hold the performance bonus constant, and test
humans’ effect $\delta_{M}$ on utility for different levels of ML performance.
This will enable us to approximate $\Delta_{\mathbb{U}}$ for a specific task.
## 4 Experimental Setup
In this section, we describe our user study. The goal of our experimental
setup is to simulate an environment in which users work on difficult tasks
with the help of ML. The company responsible for deploying ML models does not
know the optimal solution $Y^{*}$ (e.g., the true patient’s diagnosis), and it
trains ML models to replicate experts’ decisions (doctor diagnoses). To
evaluate how the company’s models perform against $Y^{*}$, we need a setting
in which we, as researchers, know the quality of any solution $Y$ using a
utility function $\mathbb{U}(X,Y)$. This allows us to make absolute quality
assessments of solutions. Note that this is often unattainable in practice, as
we argued in the introduction. The Knapsack Problem is particularly well
suited for this context.
#### The Knapsack Problem.
In our experiment, users solve instances of the knapsack problem. An instance
involves selecting which of $n=18$ items to pack into a knapsack, each with a
weight $w$ and a value $v$. The objective is to maximize value without
exceeding the weight limit $W$ of the knapsack (between 5 and 250). We focus
on the one-dimensional 0-1 knapsack problem, in which participants choose
which items to pack (see Definition 6 in Appendix A.2 for a formal definition). We constrain the
weights, values, and capacity of our instances to integer values, to make them
easier to interpret by humans. We describe the details of the knapsack problem
generation in Appendix A.10.
The knapsack problem has desirable properties for the empirical application of
our dynamic framework. First, users do not require special training—beyond a
short tutorial—to find a solution to the problem. Yet, the task is hard for
humans, especially with a growing number of items [22]. Thus, the optimal
solution $Y^{*}$ is not obvious. Second, we can generate solutions to the
knapsack problem in two ways. The “optimal” solution can be found with dynamic
programming. The “ML” solution can be found by imitating what humans select
and computing the training loss as the difference between the items selected
by participants versus items selected by a model.
This setup allows us to quantify the utility of the proposed solution relative
to the optimal solution. We define utility for the knapsack problem as
follows:
###### Definition 5.
(Economic Performance) For a knapsack instance
$X=((w_{1},\cdots,w_{n}),(v_{1},\cdots,v_{n}),W)$ with optimal solution
$\underset{x_{1},\cdots,x_{n};\,\sum_{i=1}^{n}x_{i}w_{i}\leq W}{\max}\sum_{i=1}^{n}x_{i}v_{i}=:Y^{*}$ and a valid solution $Y$, we call the
function $\mathbb{U}_{\text{Econ}}(X,Y)=\frac{Y}{Y^{*}}$ the economic
performance of $Y$ given $X$.
Appendix A.4 contains details about $\mathbb{U}_{\text{Econ}}(X,Y)$ and
discusses our results using an alternative utility function. Note that there
can be multiple optimal combinations of items to pack, but the optimal value
$Y^{*}$ is unique.
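For concreteness, the following sketch computes the optimal value $Y^{*}$ by dynamic programming (exploiting the integer weights and capacity of the generated instances) and the economic performance of a candidate solution; the three-item instance is illustrative, while the study used $n=18$ items.

```python
def knapsack_optimal_value(weights, values, W):
    """Optimal value Y* of the 0-1 knapsack via dynamic programming
    (the generated instances have integer weights and capacity)."""
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        for c in range(W, w - 1, -1):   # backwards so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]

def economic_performance(weights, values, W, selected):
    """U_Econ of Definition 5: value of a valid solution over the optimum."""
    assert sum(w for w, s in zip(weights, selected) if s) <= W, "overweight"
    y = sum(v for v, s in zip(values, selected) if s)
    return y / knapsack_optimal_value(weights, values, W)

# Tiny illustrative instance (the study used n = 18 items):
weights, values, W = [3, 4, 5], [4, 5, 7], 7
print(economic_performance(weights, values, W, [0, 0, 1]))  # 7/9 ~ 0.78
```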
#### Study Design.
We recruited participants from Prolific (https://www.prolific.com/)
exclusively from the UK to ensure familiarity with the currency and weight
metrics used to describe the knapsack items and monetary incentives in the
study. Appendix A.11 presents screenshots of the web interface for each step
of the study. At the beginning of the study, participants received a tutorial
on the knapsack problem, our web application’s interface, and the payment
structure, described below. After the tutorial, the participants solved two
practice problems and received feedback on their submission’s performance. For
the main task, each participant received 10 knapsack problems generated by
Algorithm 1. For each problem, they had 3 minutes to submit their solution. If
the participant did not actively click on the submit button, the selected
items were automatically submitted at the 3-minute mark. Participants could
take unlimited breaks between problems. At the end of the study, we asked
participants about their demographics, previous experience with the knapsack
problem, and how much effort they put in solving the task.
Every participant received a base payment of £2.00 (approx. $2.50) if they
achieved at least 70% of the value of the optimal solution, averaged across
the 10 knapsack instances they solved. We set the 70% threshold to discourage
participants from randomly selecting items, as randomly-generated solutions
that pick items until reaching the weight capacity have an average
$\mathbb{U}_{\text{Econ}}$ around 60%.
Model | None | q1 | q2 | q3 | q4 | q5 | q6
---|---|---|---|---|---|---|---
Mean $\mathbb{U}_{\text{Econ}}(X,Y)$ | . | 0.717 | 0.800 | 0.844 | 0.884 | 0.899 | 0.920
SD | . | 0.083 | 0.105 | 0.098 | 0.105 | 0.088 | 0.085
No Bonus | N=102 | | | | | |
2-cent Bonus | N=98 | | | | | |
10-cent Bonus∗ | N=100+117 | N=64 | N=78 | N=194 | N=179 | N=70 | N=191
20-cent Bonus | N=96 | | | | | |
Table 1: Matrix of treatment conditions. The columns denote information on
the ML recommendation performance. The rows denote bonus payments for
performance. The numbers of study participants are presented in the relevant
cells. ∗We ran the 10-cent bonus treatment with no ML recommendation twice:
once without a comprehension quiz for the bonus structure (100 participants)
and once with the comprehension quiz (117).
Participants were randomly allocated into four monetary treatments and seven
algorithmic recommendations (see Table 1). All monetary conditions were tested
while users had no access to algorithmic recommendations. Participants in the
No Bonus condition did not receive any additional payments beyond the base
payment. Participants in the 2-cent Bonus condition received an additional
£0.02 for each percentage point of $\mathbb{U}_{\text{Econ}}$ above 70%. For
example, if a participant achieved on average $\mathbb{U}_{\text{Econ}}$ =
85%, they would receive
$\text{\textsterling}2.00+15\times\text{\textsterling}0.02=\text{\textsterling}2.30$.
Participants in the 10-cent Bonus and 20-cent Bonus treatments had similar
incentives for performance, but higher monetary rewards for each additional
percentage point increase in performance (£0.10 and £0.20, respectively).
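The payment rule can be written compactly as below; the assumption that participants below the 70% threshold receive nothing follows the base-payment condition as stated, and the function name is ours.

```python
def payout(mean_u_econ, bonus_per_point=0.10, base=2.00, threshold=0.70):
    """Payment in pounds: base pay requires an average economic performance
    of at least 70%; each percentage point above 70% adds the bonus.
    Returning 0 below the threshold is an assumption of this sketch."""
    if mean_u_econ < threshold:
        return 0.0
    points_above = round((mean_u_econ - threshold) * 100)
    return base + bonus_per_point * points_above

print(payout(0.85, bonus_per_point=0.02))   # 2.00 + 15 * 0.02 = 2.30
```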
We ran the 10-cent Bonus treatment twice. In the second round, we introduced a
comprehension quiz to ensure that our participants understood the payment
structure. Within the 10-cent Bonus treatment with the comprehension quiz, we
randomized access to ML recommendations. Users were randomly allocated to one
of seven ML treatments. The control group had no ML recommendations. The other
six groups had access to recommendations from a progressively better ML model,
as Table 1 shows on each of the last six columns. We picked ML models with
varying degrees of performance to approximate the collaborative characteristic
function from Figure 1. For details on the model training for the ML
recommendations, see Appendix A.8.
## 5 Results
(a) Different bonus incentives.
(b) Different ML recommendations.
Figure 2: Human Performance Across Treatments. Error bars denote 95%
confidence intervals based on standard errors clustered at the user level.
Solid bars denote the average performance of the submitted solution, striped
bars denote the performance if one picked the higher solution between the
submitted solution and the provided ML recommendation.
A total of 1,408 participants completed the study; we removed 119 participants
due to forbidden browser reloads or uses of the browser’s back-button, which
left 1,289 for the analyses below. See Appendix A.9 for an overview of
participants’ demographics. On average, participants’ compensation implied an
hourly wage of £12.17 ($15.22), which is above the UK minimum wage of £11.44.
Please see Appendix A.3 for more payment details.
We start by discussing the null results of monetary performance incentives
(RQ1). Figure 2(a) shows the results of our monetary incentive experiment. On
average, user economic performance without any bonus was 89.7% (light blue
bar). None of the bonus alternatives are statistically distinguishable from
the control group, nor from each other, and their point estimates are all
between 88.6% (for the 20-cent bonus) and 90% (for the 10-cent bonus).
The null effect of monetary incentives is not due to the fact that users did
not understand the bonus structure. To test this hypothesis, we can compare
the performance of users in the two 10-cent bonus treatments without
algorithmic recommendations (third column in Figure 2(a) and first column in
Figure 2(b), both yellow). These two treatments only differ by the fact that
the one in Figure 2(b) had a comprehension quiz for the bonus structure. The
difference in performance between the two treatments is a mere 0.9%, not
statistically different from zero ($p=0.268$, based on standard errors
clustered at the user level).
RQ2: Because monetary incentives do not change human performance, we test the
introduction of ML recommendations with a single bonus structure, the 10-cent
bonus. Figure 2(b) presents the results. Focusing on the solid bars, three
insights are noteworthy. First, comparing the first two columns (yellow and
blue), models with low economic performance seem to lead humans to perform
slightly worse than if they were not supported by ML recommendations (89.4%
versus 90.9%). This comparison is not statistically significant ($p=0.147$),
likely due to low statistical power, but the level difference is not trivial.
Despite this, humans do improve performance relative to the algorithmic
recommendations (89.4% versus 71.8%, $p=1.8\times 10^{-28}$), a result we come back to
in Section 5.1. This suggests that people might reduce their effort in solving
the problem when they have access to recommendations, but they do not
completely eliminate effort (Appendix Figure 18 shows no clear patterns in
time spent per problem across treatment conditions). Second, models with
better performance lead to increases in human performance, as evidenced by the
progressively increasing performance from q1 (72%) to q6 (92%). Third, even if
human performance increases with the performance of the ML recommendation, the
increments in performance are quantitatively fairly small and sometimes
statistically indistinguishable from one another, going from 89.4% when the
model’s performance is 72%, to 92.6% when the model’s performance is
92%. (Regression results, controlling for time taken to solve each problem,
are presented in Appendix Table 3.)
### 5.1 The Results Within Our Theoretical Framework
Figure 3: Empirical Collaborative Characteristic Function. Confidence
intervals are based on standard errors clustered at the participant level.
Figure 3 embeds our empirical results in the theoretical framework presented
in Section 3. On the x-axis, we plot the performance of the six ML models
deployed in our study. On the y-axis, we plot the performance of the solutions
submitted by humans who receive ML recommendations. The six points correspond
to the six ML treatments of Figure 2(b). We linearly interpolate
the estimated points to form an approximation of the collaborative
characteristic function $\Delta_{\mathbb{U}}$ (solid blue line). The curve
suggests that humans improve on the ML recommendations for ML performance
levels between 70% and 92%. The estimated $\delta_{qi}$’s range from 17.5%
($p=1.8\times 10^{-28}$) for $q1$, to 0.5% ($p=0.46$) for $q6$. We denote $q6$ a stable
point since the human improvement is estimated to be small and statistically
indistinguishable from zero. The results imply that, for this portion of the
domain, a firm can deploy a model with below-human performance and still
converge to a stable point with 92% performance in subsequent deployments.
The $\delta_{qi}$ improvements are always positive (or indistinguishable from
zero for q6), but they could have been even larger. Indeed, in this specific
setting, as participants add items to the knapsack, in principle, they can
easily compare the value of their solution to the value of the ML
recommendation (both of which appear at the top of the interface, see Appendix
Figure 15). If humans had picked the highest between their solution and the ML
recommendation, the collaborative characteristic function would have shifted
upward to the dashed green line in Figure 3, and the stable point would have
achieved a higher performance than 92%. The discrepancy between the solid and
dashed lines increases as the ML model improves, suggesting that even in a
straightforward comparison, humans do not follow ML recommendations when it is
in their best interest to do so (the difference can also be seen by comparing
the solid and striped bars in Figure 2(b)). Appendix Figure 19 decomposes the
net effect into two parts. On one hand, as the model performance improves,
humans are more likely to follow its recommendations. On the other, when they
do not follow the ML recommendation, as the model performance improves, it is
much more likely that the submitted solution is inferior compared to the
recommendation. Under both solid and dashed collaborative characteristic
functions, we can imagine possible collaborative learning paths,
$\mathbb{L}_{\Delta_{\mathbb{U}}}$. With this shape of $\Delta_{\mathbb{U}}$,
the deployment decision is simple: all collaborative learning paths will
eventually reach a stable point at above human performance.
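The dashed line corresponds to a simple per-instance post-processing rule, sketched below with illustrative numbers: submit whichever of the participant's solution and the ML recommendation is worth more.

```python
import numpy as np

def post_process(u_human_ml, u_model):
    """Per-instance selection rule behind the dashed line in Figure 3:
    keep whichever of the human+ML solution and the raw ML recommendation
    has the higher economic performance."""
    return np.maximum(np.asarray(u_human_ml), np.asarray(u_model))

# Illustrative per-instance performances for one participant:
print(post_process([0.95, 0.88, 0.90], [0.92, 0.92, 0.92]))
# -> [0.95 0.92 0.92]
```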
## 6 Conclusions
We present a theoretical framework for human-ML collaboration in a dynamic
setting where human labels can deviate from the indisputable ground truth. We
introduce the collaborative characteristic function, which theoretically links
the utility of ML models with respect to the indisputable ground truth, to the
utility of humans using those same ML models to support their decisions. The
collaborative characteristic function allows for multiple collaborative
learning paths, depending on the utility of the initially deployed ML model.
Each of the collaborative learning paths characterizes a possible ML
deployment strategy and its ensuing dynamic learning process. We theoretically
show conditions under which this dynamic system reaches a stable point through
dynamic utility improvement or deterioration. We then present the empirical
results of a large user study, which allows us to estimate points on the
collaborative characteristic function of the knapsack problem. For ML models
of performance between 72% and 92%, our empirical results suggest that the
collaborative characteristic function lies above the 45-degree line. Any
collaborative learning path starting at utility between 72% and 92% will thus
converge to a stable point with utility around 92%. We explore two factors
that can shift the collaborative characteristic function. We find that
monetary incentives do not seem to affect human performance. However, we find
that wherever applicable, a simple post-processing step that picks the best
among available solutions (as is possible for the knapsack problem) can
substantially shift the collaborative characteristic function upward, leading
to stable equilibria of higher utility.
Our work has a number of limitations. On the theoretical side, our
collaborative learning paths assume that the firm is able to perfectly
replicate human+ML performance in future ML models. Appendix A.7 discusses
stability when learning does not exactly replicate previous human+ML
performance. On the empirical side, to reduce costs while maintaining
statistical power, we only randomized monetary incentives without ML
recommendations, and we randomized the quality of ML recommendations while
fixing monetary incentives. Studying the interaction of monetary incentives
and ML performance is an important extension. The null result of monetary
incentives should be interpreted within our context. First, the study
participants received payments above minimum wage, and we only tested
different levels of linear performance bonuses. It would be valuable to extend
our work to evaluate the extent to which alternative base payments or non-
linear bonuses may induce different levels of quality and effort by
participants and thus collaborative characteristic functions of varying
shapes. Our approximation of $\Delta_{\mathbb{U}}$ for the knapsack problem is
naturally incomplete since we did not test every possible level of model
performance. However, the six points of the curve that we empirically estimate
make us fairly comfortable that a linear interpolation is reasonable, at least
for model performances between our minimum and maximum.
Future work could investigate the properties of $\Delta_{\mathbb{U}}$ that
guarantee a unique optimal stable point, both theoretically and empirically.
Provided that researchers have access to the indisputable ground truth,
further empirical investigations of collaborative characteristic functions
could also shed light on the shape of those functions for practically relevant
tasks such as medical diagnoses or hiring decisions. Future work should also
discuss fairness aspects of this framework, e.g., whether or not fair stable
points exist and how a firm can reach them. More generally, we hope this work
generates more interest in studying settings where ML deployments lead to
changes in the data generating process, which have broad managerial and
practical applications.
## References
* [1] Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q Vera Liao, Yunfeng Zhang, and Chenhao Tan. Human-ai collaboration via conditional delegation: A case study of content moderation. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–18, 2022.
* [2] Maia Jacobs, Melanie F Pradier, Thomas H McCoy Jr, Roy H Perlis, Finale Doshi-Velez, and Krzysztof Z Gajos. How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection. Translational psychiatry, 11(1):108, 2021.
* [3] Krishnamurthy Dvijotham, Jim Winkens, Melih Barsbey, Sumedh Ghaisas, Robert Stanforth, Nick Pawlowski, Patricia Strachan, Zahra Ahmed, Shekoofeh Azizi, Yoram Bachrach, et al. Enhancing the reliability and accuracy of ai-enabled diagnosis via complementarity-driven deferral to clinicians. Nature Medicine, 29(7):1814–1820, 2023.
* [4] Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, and Ece Kamar. Investigations of performance and bias in human-ai teamwork in hiring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12089–12097, 2022.
* [5] Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, and Moritz Hardt. Performative prediction. In International Conference on Machine Learning, pages 7599–7609. PMLR, 2020.
* [6] Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri. Learning with rejection. In Algorithmic Learning Theory: 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings 27, pages 67–82. Springer, 2016.
* [7] Mohammad-Amin Charusaie, Hussein Mozannar, David Sontag, and Samira Samadi. Sample efficient learning of predictors that complement humans. In International Conference on Machine Learning, pages 2972–3005. PMLR, 2022.
* [8] Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, and David Sontag. Who should predict? exact algorithms for learning to defer to humans. In International conference on artificial intelligence and statistics, pages 10520–10545. PMLR, 2023.
* [9] Hussein Mozannar, Jimin Lee, Dennis Wei, Prasanna Sattigeri, Subhro Das, and David Sontag. Effective human-ai teams via learned natural language rules and onboarding. Advances in Neural Information Processing Systems, 36, 2024.
* [10] Mark Steyvers, Heliodoro Tejeda, Gavin Kerrigan, and Padhraic Smyth. Bayesian modeling of human–ai complementarity. Proceedings of the National Academy of Sciences, 119(11):e2111547119, 2022.
* [11] Hussein Mozannar, Gagan Bansal, Adam Fourney, and Eric Horvitz. Reading between the lines: Modeling user behavior and costs in AI-assisted programming. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pages 1–16, 2024.
* [12] Arghavan Moradi Dakhel, Vahid Majdinasab, Amin Nikanjam, Foutse Khomh, Michel C Desmarais, and Zhen Ming Jack Jiang. GitHub Copilot AI pair programmer: Asset or liability? Journal of Systems and Software, 203:111734, 2023.
* [13] Hussein Mozannar, Gagan Bansal, Adam Fourney, and Eric Horvitz. When to show a suggestion? Integrating human feedback in AI-assisted programming. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 10137–10144, 2024.
* [14] Fabrizio Dell'Acqua, Edward McFowland, Ethan R Mollick, Hila Lifshitz-Assaf, Katherine Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R Lakhani. Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, 24(24-013), 2023.
* [15] Helena Vasconcelos, Matthew Jörke, Madeleine Grunde-McLaughlin, Tobias Gerstenberg, Michael S Bernstein, and Ranjay Krishna. Explanations can reduce overreliance on AI systems during decision-making. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1):1–38, 2023.
* [16] Nikhil Agarwal, Alex Moehring, Pranav Rajpurkar, and Tobias Salz. Combining human expertise with artificial intelligence: Experimental evidence from radiology. NBER Working Paper No. 31422, 2023.
* [17] Tom Sühr, Sophie Hilgard, and Himabindu Lakkaraju. Does fair ranking improve minority outcomes? Understanding the interplay of human and algorithmic biases in online hiring. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 989–999, 2021.
* [18] Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S Weld, Walter S Lasecki, and Eric Horvitz. Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2429–2437, 2019.
* [19] Celestine Mendler-Dünner, Juan Perdomo, Tijana Zrnic, and Moritz Hardt. Stochastic optimization for performative prediction. Advances in Neural Information Processing Systems, 33:4929–4939, 2020.
* [20] Moritz Hardt, Meena Jagadeesan, and Celestine Mendler-Dünner. Performative power. Advances in Neural Information Processing Systems, 35:22969–22981, 2022.
* [21] Celestine Mendler-Dünner, Frances Ding, and Yixin Wang. Anticipating performativity by predicting from predictions. Advances in neural information processing systems, 35:31171–31185, 2022.
* [22] Carsten Murawski and Peter Bossaerts. How humans solve complex problems: The case of the knapsack problem. Scientific reports, 6(1):34851, 2016.
* [23] David Pisinger. Where are the hard knapsack problems? Computers & Operations Research, 32(9):2271–2284, 2005.
## Appendix A Appendix
### A.1 Data and Code
The code for model training, data generation, the web application for the user
study, and our data analysis and plotting can be found in [retracted for
anonymity]; all files are part of the submission as a .zip file.
### A.2 The Knapsack Problem
###### Definition 6.
(0-1 Knapsack Problem) We call $\max\sum_{i=1}^{n}v_{i}x_{i}\text{ s.t. }\sum_{i=1}^{n}w_{i}x_{i}\leq W\text{ with }x_{i}\in\{0,1\},\ v_{i},w_{i},W\in\mathbb{R}_{+}$ the 0-1 Knapsack Problem.
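A minimal exact solver for this problem, assuming integer weights and capacity as in our instances (the function name is ours):

```python
def solve_knapsack(values, weights, W):
    """Exact 0-1 knapsack via dynamic programming over capacities.

    Assumes integer weights and capacity (as in our instances).
    Returns the optimal total value and the selected item indices.
    """
    n = len(values)
    best = [0] * (W + 1)                  # best[c]: max value at capacity c
    keep = [[False] * (W + 1) for _ in range(n)]
    for i in range(n):
        for c in range(W, weights[i] - 1, -1):  # descending: each item used once
            if best[c - weights[i]] + values[i] > best[c]:
                best[c] = best[c - weights[i]] + values[i]
                keep[i][c] = True
    items, c = [], W
    for i in range(n - 1, -1, -1):        # backtrack the chosen items
        if keep[i][c]:
            items.append(i)
            c -= weights[i]
    return best[W], sorted(items)
```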
### A.3 Payment Details
We calculated the base payment assuming an average time of 19 minutes to
complete the study. The base payment was adjusted upward whenever the median
time to completion was longer than 19 minutes. We adjusted the payment even
though many participants finished our survey but did not enter the completion
code directly afterwards, which sometimes inflated the median time to
completion.
### A.4 Analysis with Optimality
###### Definition 7.
(Optimality) For a knapsack instance $X$ with optimal solution $Y^{*}$ and a valid solution $Y$, we call the function $\mathbb{U}_{\text{Opt}}(X,Y)=\begin{cases}1&\text{if }Y=Y^{*},\\ 0&\text{else}\end{cases}$ the optimality of $Y$ given $X$. Furthermore, we call $\underset{X}{\mathbb{E}}[\mathbb{U}_{\text{Opt}}(X,Y)]$ the optimal solution rate over all $X$.
###### Observation 1.
Economic performance and Optimality are utility functions in the sense of Definition 1.
###### Proof.
We start with the proof that Economic Performance is a utility function. 1)
Economic performance is bounded between 0 (for an empty knapsack) and 1 (for
the optimal value of the knapsack). 2) There exists an $\varepsilon>0$, namely
the minimum value of an item in the knapsack instance; this value is the
smallest possible distance between two solutions that are not equally good.
3) Because $Y$ in our case is the sum of the values of the items in the
knapsack and $Y^{*}$ is the maximum possible value of the knapsack, any
solution that is closer to the optimal solution also has higher economic
performance, because the numerator grows. We chose $\varepsilon$ to be the
minimum item value, so the minimum increase in value between solutions is
fulfilled. In summary, Economic Performance satisfies all three criteria of a
utility function.
We continue with the proof that optimality is a utility function. 1) It is 0
or 1 and thus bounded. 2) If we choose $0<\varepsilon<1$, then
$\varepsilon$-sensitivity is satisfied. 3) This always holds for our choice of
$\varepsilon$. Assume for example $\varepsilon=0.5$; then
$d(1,1)+0.5<d(0,1)$ and $\mathbb{U}(1)>\mathbb{U}(0)$. This statement is true
for all $0<\varepsilon<1$, which is what we specified for $\varepsilon$. ∎
Optimality is the function that indicates whether a solution to a knapsack
problem has the optimal value or not. Figure 4 shows the empirical
collaborative characteristic function for optimality as the utility function.
Humans achieve approximately $20\%$ optimality without ML advice. The effect
of the human on human-ML performance is significant for all models
($p<0.001$). Interestingly, the effect is large even beyond human performance.
Furthermore, for models q1, q2, and q3 with extremely low utility (average
optimality of almost $0\%$), the human effect on the overall outcome is large
and close to human performance. As in Figure 3, the utility gain of rationally
acting humans would have been larger for most models. Our observations suggest
that stable points of optimality would lie above human performance without ML
advice.
Figure 4: Empirical Collaborative Characteristic Function for the "Optimality"
utility function. Confidence intervals are based on standard errors clustered
at the participant level.
### A.5 Comments on the Definition of Utility
We note that $\varepsilon$-sensitivity implies the following:
###### Observation 2.
$\exists\epsilon,\varepsilon>0:|d_{X}(Y,Y^{*})-d_{X}(Y^{\prime},Y^{*})|=\varepsilon\Rightarrow|\mathbb{U}(X,Y)-\mathbb{U}(X,Y^{\prime})|=\epsilon$
This means that there is a minimum utility change that we call $\epsilon$.
### A.6 Proofs of Propositions 1 & 2
#### Proposition 1
(Collaborative Improvement)
If $\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))\geq\mathbb{U}(X,Y_{M})$ for all
$M\in\mathcal{M},X\in\mathcal{X}$, then
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ is non-decreasing with $t=1,...,T$,
and for sufficiently large $T$ there exists a $t^{\prime}\in[1,T]$ such that
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t^{\prime})$ is a stable point.
###### Proof.
Let $t\in\{1,...,T\}$ index the deployments (epochs) that a firm will make.
The firm perfectly learns the data distribution in every epoch; in other
words, we assume that $L(Y_{M_{t}},Y_{H_{t-1}})=0$ for all $t$. Furthermore,
it is $\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))\geq\mathbb{U}(X,Y_{M})$ for
all $M\in\mathcal{M},X\in\mathcal{X}$.
We first show that $\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ is non-decreasing
with $t$. For that, assume that there exists a $t$ for which
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)>\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t+1)$.
But
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t+1)=\underset{X\in\mathcal{X}}{\mathbb{E}}(\mathbb{U}(H(X,Y_{M_{t+1}})))\geq^{\delta_{i}\geq
0}\underset{X\in\mathcal{X}}{\mathbb{E}}(\mathbb{U}(Y_{M_{t+1}}))=^{L(Y_{M_{t+1}},Y_{H_{t}})=0}\underset{X\in\mathcal{X}}{\mathbb{E}}(\mathbb{U}(H(X,Y_{M_{t}})))=\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$,
a contradiction. It follows that $\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ must
be non-decreasing.
Now we show that, for sufficiently large $T$, there exists a
$t^{\prime}\in[1,T]$ such that $\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t^{\prime})$
is a stable point. For this, consider that $\mathbb{U}$ has a maximum
$\mathbb{U}(Y^{*})$ (Property 1 (bounded) of Definition 1) and that there
exists a minimum increment of utility $\epsilon$ (see A.5) in each deployment.
If we do not achieve at least an $\epsilon$ increment in utility, we have
reached a stable point. Thus, we can write the maximum utility as
$\mathbb{U}(Y^{*})=\mathbb{U}(Y_{M_{t}})+N\epsilon$ for some $N$. For
sufficiently large $T$ ($T\geq N+1$), this implies that we have reached the
maximum utility with $\mathbb{L}_{\Delta_{\mathbb{U}}}(s,T)$, and every
deployment beyond that must have equal utility. ∎
#### Proposition 2
(Collaborative Harm)
If $\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))\leq\mathbb{U}(X,Y_{M})$ for all
$M\in\mathcal{M},X\in\mathcal{X}$, then
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t)$ is non-increasing with $t=1,...,T$,
and for sufficiently large $T$ there exists a $t^{\prime}\in[1,T]$ such that
$\mathbb{L}_{\Delta_{\mathbb{U}}}(s,t^{\prime})$ is a stable point.
###### Proof.
Analogous to the proof of Proposition 1. ∎
### A.7 Perfect vs Imperfect learning
In this section we discuss what changes if we loosen the assumption
$L(Y_{M_{t}},Y_{H_{t-1}})=0$. We call this the "perfect learner" assumption
because the firm perfectly learns the human labels from epoch $t-1$ with a
model in epoch $t$. In the following, we consider an imperfect learner such
that $L(Y_{M_{t}},Y_{H_{t-1}})=\sigma$. For now, we limit our discussion to
the cases of Propositions 1 and 2.
In the case of collaborative improvement
($\Delta_{\mathbb{U}}(\mathbb{U}(X,Y_{M}))\geq\mathbb{U}(X,Y_{M})$), the human
will improve on any model that the firm can deploy. However, if
$L(Y_{M_{t}},Y_{H_{t-1}})=\sigma\Rightarrow(\mathbb{U}(Y_{M_{t}})-\mathbb{U}(Y_{M_{t-1}}))<0$,
then our error $\sigma$ is larger than what we gain by putting a human in the
loop. If this holds for all $M_{t}$, then the imperfection has created
collaborative harm and we arrive at the case of Proposition 2, which means we
would still reach a stable point. Consider now a second scenario where
$L(Y_{M_{t}},Y_{H_{t-1}})=\sigma\Rightarrow(\mathbb{U}(Y_{M_{t}})-\mathbb{U}(Y_{M_{t-1}}))>0$
for all $M_{t}$. Then we are still in the scenario of collaborative
improvement, which means that we will reach a stable point. We can see this in
Figure 1: an imperfect learner in a collaborative improvement setting
effectively tilts the green dashed lines toward a negative slope. We would
still reach point 6-d, but would require more iterations than with perfect
learning.
In summary, this is just as much an empirical question as it is for a perfect
learner. The question is how much humans improve the system's performance and,
in the case of an imperfect learner, how much of that improvement gets "eaten
up" by learning errors.
### A.8 Model Training
We release the code required for training our models, our model parameters and
all predictions for the instances together with the instances that
participants saw.
Learning to solve the knapsack problem is a research area in itself; however,
for the small, one-dimensional case of our experiment, it is possible on
consumer hardware. We only train models for knapsack instances with 18 items.
As input features we concatenate the weights $w_{1},...,w_{18}$, the values
$v_{1},...,v_{18}$, the weight constraint $W$, the sum of the weights, and the
sum of the values. Thus, our input dimension is 39. Our goal was to train
models with a broad spectrum of economic performances, not to solve the
knapsack problem perfectly. We used 5 fully connected layers, 4 of them with
ReLU activation functions, and torch.Sigmoid() for our outputs. The output
dimension was 18, and the output value at each index can be interpreted as the
likelihood that the corresponding item belongs to a solution. For more details
on the architecture, see our code.
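A minimal PyTorch sketch of such an architecture is given below; the hidden layer width is an illustrative assumption, not our exact configuration, which is documented in our code.

```python
import torch
import torch.nn as nn

class KnapsackNet(nn.Module):
    """5 fully connected layers, ReLU after the first 4, sigmoid outputs.

    Input (dim 39): 18 weights, 18 values, the capacity W, the sum of
    weights, and the sum of values. Output (dim 18): per-item likelihood
    of belonging to the solution. The hidden width is an illustrative
    assumption, not the exact trained configuration.
    """
    def __init__(self, hidden=128):
        super().__init__()
        dims = [39, hidden, hidden, hidden, hidden, 18]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:          # no ReLU after the output layer
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.sigmoid(self.net(x))
```

Training then minimizes the binary cross-entropy (e.g., nn.BCELoss()) between these outputs and the 0-1 label vector, as described next.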
We want to highlight two important aspects of how we thought about the model
training. First, we did not want to use any prior knowledge that a firm in our
setting could not have either. For example, if we had known the utility of a
knapsack solution (economic performance or optimality), we could have directly
maximized it, and if we had known the optimal solution, we could have used the
distance to the optimal solution as our loss. Instead, we used the binary
cross-entropy between label and prediction as our loss. The label was an
18-dimensional 0-1 vector; if the i-th entry is 1, the i-th item is in the
knapsack, and otherwise not. Thus we simply minimized the differences between
the chosen items in our training data and those of our model. For us, this was
a reasonable analogy for the application context of healthcare, in which every
"item" is a diagnosis or a symptom (e.g., an ICD10 code).
Because our financial budget was limited and we wanted to test multiple
models, we trained all models on optimally solved knapsack instances. It would
also have created a lot of overhead and room for error if we had collected the
data of model q1, then trained q2, and rerun the user study. Training them all
on generated labels made it possible to run more treatments at once. We still
wanted to use ML models instead of solutions produced with dynamic
programming, because we wanted to incorporate the distributional character of
ML predictions (see Figure 8) and study the reaction to different quantiles of
solution quality in greater detail in future work.
However, we had to include two pieces of prior knowledge in order to achieve
better model performance (especially for q5 and q6). First, we sorted the
items by density (value/weight). This is a big advantage in general, but only
a small one for our knapsack instances, because weights and values are
strongly correlated. Second, we normalized weights and values in a
pre-processing step. In our setting, neither operation could have been done by
the firm (what is a normalized symptom?). However, with these minor
modifications we were able to create a larger range of models without massive
resources and still just imitate the "human" label without incorporating
anything into the loss. In a post-processing step, we sorted the items by
their sigmoid outputs and added items to the knapsack until the weight
constraint was reached. From that item selection, we calculated the actual
knapsack values. For more details, please visit our github repository [to be
added after acceptance].
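A minimal sketch of this decoding step (we read it as adding each item in descending score order whenever it still fits; function and variable names are ours):

```python
import numpy as np

def decode_solution(scores, weights, W):
    """Greedy decode: add items in descending score order while they fit.

    `scores` are the 18 sigmoid outputs; returns a 0-1 selection vector.
    """
    selection = np.zeros(len(scores), dtype=int)
    remaining = W
    for i in np.argsort(-np.asarray(scores)):
        if weights[i] <= remaining:
            selection[i] = 1
            remaining -= weights[i]
    return selection
```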
### A.9 Overview statistics
Figure 5 shows an overview of the answers to the demographic questions at the
end of our study. Most participants held an undergraduate degree, were between
25 and 44 years old, and had not heard about the knapsack problem before
completing the study. $50.1\%$ of the participants identified as female,
$48.6\%$ as male, and $0.8\%$ as non-binary or non-gender conforming. $96.8\%$
of the participants had not heard about the knapsack problem before this
study. Figure 6 shows the perceived difficulty of the task as well as the
effort participants reported putting into completing it. Most participants
perceived the task as neutral to hard and (self-reportedly) put in large to
very large effort. Figure 7 shows how much effort people think they would have
spent with or without the help of ML. Participants who had no ML help believed
they would have put less effort into the task, while those who had ML help
reported that they would have put in about as much effort as all participants
currently reported. Future work should investigate these perceptions in
detail.
Figure 5: Highest level of education completed, age group, gender and whether
participants have heard of the knapsack problem before this study.
Figure 6: Perceived difficulty of the task versus the level of effort
participants reported in our study.
Figure 7: How much effort participants thought they would have spent
with/without the help of ML.
### A.10 Generating Hard Knapsack Problems
Knapsack problems where the weights $w_{i}$ and values $v_{i}$ are strongly,
yet imperfectly, correlated [23, 22] tend to be hard to solve. We generate
knapsack instances with strong correlations ($r\in[0.89,1.00]$, mean
$r=.9814$) using Algorithm 1, following the criteria for difficult problems
outlined by [23]. In our experiment, users solve knapsack instances with
$n=18$ items, $W_{min}=5$, $W_{max}=250$. We constrain the weights, values,
and capacity of our instances to integer values, to make them easier to
interpret by humans.
Algorithm 1 Generate hard knapsack instance
number of items $n\geq 0$, knapsack capacity range $W_{min},W_{max}>0$
$W\leftarrow random.uniform.integer(W_{min},W_{max})$
$w\leftarrow random.uniform.integer(1,W,n)$ $\triangleright$ n-dimensional
vector of weights
$i\leftarrow 1$
while $i\leq n$ do
$v_{i}\leftarrow
max(1,random.uniform.integer(w_{i}-\lfloor\frac{W}{10}\rfloor,w_{i}+\lfloor\frac{W}{10}\rfloor))$
$i\leftarrow i+1$
end while
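A Python rendering of Algorithm 1 might look as follows (the function name and the use of the standard random module are our choices):

```python
import random

def generate_hard_instance(n=18, w_min=5, w_max=250):
    """Generate a hard 0-1 knapsack instance following Algorithm 1.

    Values are drawn uniformly from a window of width 2*floor(W/10)
    around the corresponding weight (clipped below at 1), which yields
    the strong but imperfect weight-value correlation.
    """
    W = random.randint(w_min, w_max)              # knapsack capacity
    w = [random.randint(1, W) for _ in range(n)]  # integer weights
    v = [max(1, random.randint(wi - W // 10, wi + W // 10)) for wi in w]
    return v, w, W
```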
### A.11 Survey Design
Figure 8: Distribution of economic performances of solutions by the six models we deployed in our experiment.
Figure 9: Tutorial 1/5
Figure 10: Tutorial 2/5
Figure 11: Tutorial 3/5 (with ML treatment)
Figure 12: Tutorial 4/5 (with 10 cents/ppt monetary incentive)
Figure 13: Tutorial 5/5 (with comprehension quiz)
Figure 14: Feedback to a practice problem
Figure 15: Interface for the main task: 1) the knapsack capacity, 2) sum of weights of selected items, 3) sum of values of selected items, 4) remaining time, 5) items with weights and values, 6) machine learning solution (only visible if the user receives the corresponding treatment). Clicking on gray items adds them to the knapsack if the weight allows it, and clicking on green items removes them from the knapsack. The total weight and value of selected items is shown at the top and automatically updated.
Figure 16: Demographic questions after tasks
Figure 17: Score screen for performance feedback in the end
### A.12 Further Statistical Insights
| | $\mathbb{U}_{\text{Econ}}(X,H(X))$ (1) | $\mathbb{U}_{\text{Econ}}(X,H(X))$ (2) |
|---|---|---|
| Intercept | $0.7620^{***}$ $(0.0184)$ | $0.6957^{***}$ $(0.0408)$ |
| 2-cent bonus | $0.0003$ $(0.0077)$ | $0.1087^{*}$ $(0.0484)$ |
| 10-cent bonus | $0.0011$ $(0.0076)$ | $0.0683$ $(0.0499)$ |
| 20-cent bonus | $-0.0087$ $(0.0079)$ | $0.0787$ $(0.0478)$ |
| log(seconds spent) | $0.0322^{***}$ $(0.0039)$ | $0.0481^{***}$ $(0.0093)$ |
| 2-cent bonus $\cdot$ log(seconds spent) | — | $-0.0260^{*}$ $(0.0112)$ |
| 10-cent bonus $\cdot$ log(seconds spent) | — | $-0.0161$ $(0.0115)$ |
| 20-cent bonus $\cdot$ log(seconds spent) | — | $-0.0210$ $(0.0113)$ |
| N | 3,960 | 3,960 |
| Adj. R2 | 0.0613 | 0.0661 |
| | $\mathbb{U}_{\text{Econ}}(X,H(X,Y))$ | $\mathbb{U}_{\text{Opt}}(X,H(X,Y))$ |
|---|---|---|
| Intercept | $0.8082^{***}$ $(0.0049)$ | $-0.0297$ $(0.0245)$ |
| q1 ($72\%$) | $-0.0131^{**}$ $(0.0044)$ | $-0.0408$ $(0.0260)$ |
| q2 ($80\%$) | $0.0011$ $(0.0043)$ | $-0.0161$ $(0.0283)$ |
| q3 ($84\%$) | $0.0048$ $(0.0049)$ | $-0.0324$ $(0.0198)$ |
| q4 ($88\%$) | $0.0151^{***}$ $(0.0033)$ | $0.0333$ $(0.0214)$ |
| q5 ($90\%$) | $0.0247^{***}$ $(0.0043)$ | $0.0518^{*}$ $(0.0263)$ |
| q6 ($92\%$) | $0.0211^{***}$ $(0.0033)$ | $0.0880^{***}$ $(0.0213)$ |
| log(seconds spent) | $0.0240^{***}$ $(0.0010)$ | $0.0533^{***}$ $(0.0045)$ |
| N | 8,930 | 8,930 |
| Adj. R2 | 0.0733 | 0.0281 |
Table 2: Linear regressions with standard errors clustered at the participant
level. Effect of monetary incentive on $\mathbb{U}_{\text{Econ}}$ of human
solutions (left). Effect of ML recommendations of different performance levels
on $\mathbb{U}_{\text{Econ}}$ and $\mathbb{U}_{\text{Opt}}$ (right). Standard
errors in parentheses. * $p<0.05$, ** $p<0.01$, *** $p<0.001$.
(a) Different bonus incentives.
(b) Different ML recommendations.
Figure 18: Time Spent Across Treatment Conditions
Dependent variable: $\mathbb{U}_{\text{Econ}}(X,H(X))$.
| | (1) | (2) | (3) |
|---|---|---|---|
| Intercept | $0.7620^{***}$ $(0.0184)$ | $0.7676^{***}$ $(0.0276)$ | $0.8082^{***}$ $(0.0049)$ |
| 2-cent bonus | $0.0003$ $(0.0077)$ | | |
| 10-cent bonus | $0.0011$ $(0.0076)$ | | |
| 20-cent bonus | $-0.0087$ $(0.0079)$ | | |
| Comprehension quiz | | $0.0104$ $(0.0076)$ | |
| q1 ($72\%$) | | | $-0.0131^{**}$ $(0.0044)$ |
| q2 ($80\%$) | | | $0.0011$ $(0.0043)$ |
| q3 ($84\%$) | | | $0.0048$ $(0.0049)$ |
| q4 ($88\%$) | | | $0.0151^{***}$ $(0.0033)$ |
| q5 ($90\%$) | | | $0.0247^{***}$ $(0.0043)$ |
| q6 ($92\%$) | | | $0.0211^{***}$ $(0.0033)$ |
| log(seconds spent) | $0.0322^{***}$ $(0.0039)$ | $0.0317^{***}$ $(0.0064)$ | $0.0240^{***}$ $(0.0010)$ |
| N | 3,960 | 2,170 | 8,930 |
| Adj. R2 | 0.0613 | 0.0506 | 0.0733 |
| Included bonus treatments | All | 10-cent | 10-cent |
| Included ML treatments | No ML | No ML | All ML |
| Comprehension quiz | No | Both | Yes |
Table 3: Linear regressions of economic performance $\mathbb{U}_{\text{Econ}}$
on dummies for the various treatment conditions. Column 1 includes all
treatment conditions without ML recommendations and without comprehension
quiz. It tests the difference in performance across different bonus levels.
Column 2 includes the two treatment conditions without ML recommendation and
with 10-cent bonus. The difference between the two treatment conditions is the
presence of a comprehension quiz for the bonus structure. Column 3 includes
all treatments with comprehension quiz and 10-cent bonus. It tests the
difference in performance across ML recommendations with different
performance. Standard errors, in parentheses, are clustered at the participant
level. * $p<0.05$, ** $p<0.01$, *** $p<0.001$.
(a) The rate of ML advice usage increased with better ML performance.
(b) The share of ignored ML recommendations also increased with better performance.
Figure 19: ML usage increased with better ML performance; the share of ignored
ML solutions also increased with better performance.
# Stochastic Dynamics of Noisy Average Consensus: Analysis and Optimization
Tadashi Wadayama and Ayano Nakai-Kasai
Part of this research was presented at IEEE International Symposium on
Information Theory 2022 (ISIT2022) [1]. 1Nagoya Institute of Technology,
Gokiso, Nagoya, Aichi 466-8555, Japan,
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
A continuous-time average consensus system is a linear dynamical system
defined over a graph, where each node has its own state value that evolves
according to a simultaneous linear differential equation. A node is allowed to
interact with neighboring nodes. Average consensus is a phenomenon that the
all the state values converge to the average of the initial state values. In
this paper, we assume that a node can communicate with neighboring nodes
through an additive white Gaussian noise channel. We first formulate the noisy
average consensus system by using a stochastic differential equation (SDE),
which allows us to use the Euler-Maruyama method, a numerical technique for
solving SDEs. By studying the stochastic behavior of the residual error of the
Euler-Maruyama method, we arrive at the covariance evolution equation. The
analysis of the residual error leads to a compact formula for mean squared
error (MSE), which shows that the sum of the inverse eigenvalues of the
Laplacian matrix is the most dominant factor influencing the MSE. Furthermore,
we propose optimization problems aimed at minimizing the MSE at a given target
time, and introduce a deep unfolding-based optimization method to solve these
problems. The quality of the solution is validated by numerical experiments.
## I Introduction
A continuous-time average consensus system is a linear dynamical system defined
over a graph [3]. Each node has its own state value, and it evolves according
to a simultaneous linear differential equation where a node is only allowed to
interact with neighboring nodes. The ordinary differential equation (ODE) at
the node $i(1\leq i\leq n)$ governing the evolution of the state value
$x_{i}(t)$ of the node $i$ is given by
$\displaystyle\frac{dx_{i}(t)}{dt}=-\sum_{j\in{\cal
N}_{i}}\mu_{ij}(x_{i}(t)-x_{j}(t)).$ (1)
The set ${\cal N}_{i}$ denotes the neighboring nodes of node $i$, while the
positive scalar $\mu_{ij}$ denotes the edge weight associated with the edge
$(i,j)$. The same ODE applies to all other nodes as well. These dynamics
gradually decrease the differences between the state values of neighboring
nodes, leading to a phenomenon called average consensus, in which all the
state values converge to the average of the initial state values [2].
The average consensus system has been studied in numerous fields such as
multi-agent control [4], distributed algorithms [5], and formation control
[6]. An excellent survey on average consensus systems can be found in [3].
In this paper, we will examine average consensus systems within the context of
communications across noisy channels, such as wireless networks. Specifically,
we consider the scenario in which nodes engage in local wireless
communication, such as drones flying in the air or sensors dispersed across a
designated area. It is assumed that each node can only communicate with
neighboring nodes via an additive white Gaussian noise (AWGN) channel. The
objective of the communication is to aggregate the information held by all
nodes through the application of average consensus systems. As previously
stated, the consensus value is the average of the initial state values.
In this setting, we must account for the impact of Gaussian noise on the
differential equations. The differential equation for a noisy average
consensus system takes the form:
$\displaystyle\frac{dx_{i}(t)}{dt}=-\sum_{j\in{\cal
N}_{i}}\mu_{ij}(x_{i}(t)-x_{j}(t))+\alpha W_{i}(t),$ (2)
where $W_{i}(t)$ represents an additive white Gaussian process, and $\alpha$
is a positive constant. The noise $W_{i}(t)$ can be considered as the sum of
the noises occurring on the edges adjacent to the node $i$. In a noiseless
average consensus system, it is well-established that the second smallest
eigenvalue of the Laplacian matrix of the graph determines the convergence
speed to the average [5]. The convergence behavior of a noisy system may be
quite different from that of the noiseless system due to the presence of edge
noise. However, the stochastic dynamics of such a system have not yet been
studied. Studies on discrete-time consensus protocols subject to additive
noise can be found in [11][12], but to the best of our knowledge, there are no
prior studies on continuous-time noisy consensus systems.
The main goal of this paper is to study the stochastic dynamics of continuous-
time noisy average consensus system. The theoretical understanding of the
stochastic behavior of such systems will be valuable for various areas such as
multi-agent control and the design of consensus-based distributed algorithms
for noisy environments.
The primary contributions of this paper are as follows. We first formulate the
noisy average consensus system using a stochastic differential equation (SDE)
[7][8]. This SDE formulation facilitates a mathematically rigorous treatment
of noisy average consensus. We then use the Euler-Maruyama (EM) method [7], a
numerical technique for solving SDEs. We derive a closed-form mean squared
error (MSE) formula by analyzing the stochastic behavior of the residual
errors in the EM method, and show that the MSE is dominated by the sum of the
inverse eigenvalues of the Laplacian matrix. However, minimizing the MSE at a
specific target time is a non-trivial task because the objective function
involves the sum of the inverse eigenvalues. To solve this optimization
problem, we propose a deep unfolding-based optimization method.
The outline of the paper is as follows. In Section 2, we introduce the
mathematical notation used throughout the paper, and then provide the
definition and fundamental properties of average consensus systems. In Section
3, we define a noisy average consensus system as a SDE. In Section 4, we
present an analysis of the stochastic behavior of the consensus error and
derive a concise MSE formula. In Section 5, we propose a deep unfolding-based
optimization method for minimizing the MSE at a specified target time.
Finally, in Section 6, we conclude the discussion.
## II Preliminaries
### II-A Notation
The following notation will be used throughout this paper. The symbols
$\mathbb{R}$ and $\mathbb{R}_{+}$ represent the set of real numbers and the
set of positive real numbers, respectively. The one-dimensional Gaussian
distribution with mean $\mu$ and variance $\sigma^{2}$ is denoted by ${\cal
N}(\mu,\sigma^{2})$. The multivariate Gaussian distribution with mean vector
$\bm{\mu}$ and covariance matrix $\bm{\Sigma}$ is represented by ${\cal
N}(\bm{\mu},\bm{\Sigma})$. The expectation operator is denoted by ${\sf
E}[\cdot]$. The notation $\mbox{diag}(\bm{x})$ denotes the diagonal matrix
whose diagonal elements are given by $\bm{x}\in\mathbb{R}^{n}$. The matrix
exponential $\exp(\bm{X})$ of $\bm{X}\in\mathbb{R}^{n\times n}$ is defined by
$\displaystyle\exp(\bm{X})\equiv\sum_{k=0}^{\infty}\frac{1}{k!}\bm{X}^{k}.$
(3)
The Frobenius norm of $\bm{X}\in\mathbb{R}^{n\times n}$ is denoted by
$\|\bm{X}\|_{F}$. The notation $[n]$ denotes the set of consecutive integers
from $1$ to $n$.
### II-B Average Consensus
Let $G\equiv(V,E)$ be a connected undirected graph where $V=[n]$. Suppose that
a node $i\in V$ can be regarded as an agent communicating over the graph $G$.
Namely, a node $i$ and a node $j$ can communicate with each other if $(i,j)\in
E$. We will not distinguish $(i,j)$ and $(j,i)$ because the graph $G$ is
undirected.
Each node $i$ has a state value $x_{i}(t)\in\mathbb{R}$ where $t\in\mathbb{R}$
represents continuous-time variable. The neighborhood of a node $i\in V$ is
represented by
$\displaystyle{\cal N}_{i}\equiv\\{j\in V:(j,i)\in E,i\neq j\\}.$ (4)
Note that the node $i$ is excluded from ${\cal N}_{i}$. For any time $t$, a
node $i\in V$ can access its own state $x_{i}(t)$ and the state values of its
neighborhood, i.e., $x_{j}(t),j\in{\cal N}_{i}$, but cannot access the other
state values.
In this section, we briefly review the basic properties of the average
consensus process [3]. We now assume that the vector of state values
$\bm{x}(t)\equiv(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}$ evolves according to
the simultaneous differential equations
$\displaystyle\frac{dx_{i}(t)}{dt}=-\sum_{j\in{\cal
N}_{i}}\mu_{ij}(x_{i}(t)-x_{j}(t)),\quad i\in[n],$ (5)
where the initial condition is
$\displaystyle\bm{x}(0)=\bm{c}\equiv(c_{1},c_{2},\ldots,c_{n})^{T}\in\mathbb{R}^{n}.$
(6)
The edge weight $\mu_{ij}$ follows the symmetric condition
$\displaystyle\mu_{ij}=\mu_{ji},\quad i\in[n],j\in[n].$ (7)
Let
$\bm{\Delta}\equiv(\Delta_{1},\Delta_{2},\ldots,\Delta_{n})^{T}\in\mathbb{R}_{+}^{n}$
be a degree sequence where $\Delta_{i}$ is defined by
$\displaystyle\Delta_{i}\equiv\sum_{j\in{\cal N}_{i}}\mu_{ij},\quad i\in[n].$
(8)
The continuous-time dynamical system (5) is called an average consensus system
because the state values converge to the average of the initial state values
in the limit $t\rightarrow\infty$, i.e.,
$\displaystyle\lim_{t\rightarrow\infty}\bm{x}(t)=\frac{1}{n}\left(\sum_{i=1}^{n}c_{i}\right)\bm{1}=\gamma\bm{1},$
(9)
where the vector $\bm{1}$ represents $(1,1,\ldots,1)^{T}$ and $\gamma$ is
defined by
$\displaystyle\gamma\equiv\frac{1}{n}\sum_{i=1}^{n}c_{i}.$ (10)
We define the Laplacian matrix $\bm{L}\equiv\\{L_{ij}\\}\in\mathbb{R}^{n\times
n}$ of this consensus system as follows:
$\displaystyle L_{ij}$ $\displaystyle=\Delta_{i},\quad i=j,\ i\in[n],$ (11)
$\displaystyle L_{ij}$ $\displaystyle=-\mu_{ij},\quad i\neq j\mbox{ and
}(i,j)\in E,$ (12) $\displaystyle L_{ij}$ $\displaystyle=0,\quad i\neq j\mbox{
and }(i,j)\notin E.$ (13)
From this definition, a Laplacian matrix satisfies
$\displaystyle\bm{L}\bm{1}$ $\displaystyle=\bm{0},$ (14)
$\displaystyle\mbox{diag}(\bm{L})$ $\displaystyle=\bm{\Delta},$ (15)
$\displaystyle\bm{L}$ $\displaystyle=\bm{L}^{T}.$ (16)
Note that the eigenvalues of the Laplacian matrix $\bm{L}$ are nonnegative
real numbers because $\bm{L}$ is a positive semi-definite symmetric matrix.
Let $\lambda_{1}=0<\lambda_{2}\leq\ldots\leq\lambda_{n}$ be the eigenvalues of
$\bm{L}$ and $\bm{\xi}_{1},\bm{\xi}_{2},\ldots,\bm{\xi}_{n}$ be the
corresponding orthonormal eigenvectors. The first eigenvector
$\bm{\xi}_{1}\equiv(1/\sqrt{n})\bm{1}$ corresponds to the eigenvalue
$\lambda_{1}=0$, so that $\bm{L}\bm{\xi}_{1}=\bm{0}$.
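As a numerical sanity check, the following sketch (ours) constructs the Laplacian of a uniformly weighted cycle graph, the topology used in our experiments, and verifies (14), (16), and $\lambda_{1}=0<\lambda_{2}$:

```python
import numpy as np

def cycle_laplacian(n, mu=1.0):
    """Laplacian of the n-node cycle graph with uniform edge weight mu."""
    L = np.zeros((n, n))
    for i in range(n):
        j = (i + 1) % n                    # the cycle edge (i, j)
        L[i, j] = L[j, i] = -mu
    np.fill_diagonal(L, -L.sum(axis=1))    # degrees Delta_i on the diagonal
    return L

L = cycle_laplacian(10)
lam = np.linalg.eigvalsh(L)                # eigenvalues in ascending order
assert np.allclose(L @ np.ones(10), 0)     # L 1 = 0
assert np.allclose(L, L.T)                 # L = L^T
assert abs(lam[0]) < 1e-10 and lam[1] > 0  # lambda_1 = 0 < lambda_2
```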
By using the notion of the Laplacian matrix, the dynamical system (5) can be
compactly rewritten as
$\displaystyle\frac{d\bm{x}(t)}{dt}=-\bm{L}\bm{x}(t),$ (17)
where the initial condition is $\bm{x}(0)=\bm{c}$. The dynamical behaviors of
the average consensus system (17) are thus characterized by the Laplacian
matrix $\bm{L}$. Since the ODE (17) is a linear ODE, it can be easily solved.
The solution of the ODE (17) is given by
$\displaystyle\bm{x}(t)=\exp(-\bm{L}t)\bm{x}(0),\quad t\geq 0.$ (18)
Let
$\bm{U}\equiv(\bm{\xi}_{1},\bm{\xi}_{2},\ldots,\bm{\xi}_{n})\in\mathbb{R}^{n\times
n}$ where $\bm{U}$ is an orthogonal matrix. The Laplacian matrix $\bm{L}$ can
be diagonalized by using $\bm{U}$, i.e.,
$\displaystyle\bm{L}=\bm{U}\mbox{diag}(\lambda_{1},\ldots,\lambda_{n})\bm{U}^{T}.$
(19)
On the basis of the diagonalization, we have the spectral expansion of the
matrix exponential:
$\displaystyle\exp(-\bm{L}t)$
$\displaystyle=\exp(-\bm{U}\mbox{diag}(\lambda_{1},\ldots,\lambda_{n})\bm{U}^{T}t)$
$\displaystyle=\bm{U}\exp(-\mbox{diag}(\lambda_{1},\ldots,\lambda_{n})t)\bm{U}^{T}$
$\displaystyle=\sum_{i=1}^{n}\exp(-\lambda_{i}t)\bm{\xi}_{i}\bm{\xi}_{i}^{T}.$
(20)
Substituting this into $\bm{x}(t)=\exp(-\bm{L}t)\bm{x}(0)$, we immediately have
$\displaystyle\bm{x}(t)=\frac{1}{n}\bm{1}\bm{1}^{T}\bm{c}+\sum_{i=2}^{n}\exp(-\lambda_{i}t)\bm{\xi}_{i}\bm{\xi}_{i}^{T}\bm{c}.$
(21)
The second term on the right-hand side converges to zero since $\lambda_{k}>0$
for $k=2,3,\ldots,n$. This explains why average consensus happens, i.e., the
convergence to the average of the initial state values (9). The second
smallest eigenvalue $\lambda_{2}$, called the algebraic connectivity [10],
determines the convergence speed because
$\exp(-\lambda_{2}t)\bm{\xi}_{2}\bm{\xi}_{2}^{\sf T}$ decays most slowly among
the terms in the sum.
## III Noisy average consensus system
### III-A SDE formulation
The dynamical model (2) containing a white Gaussian noise process is
mathematically challenging to handle. We therefore follow the common approach
of approximating the white Gaussian process by the standard Wiener process.
Instead of model (2), we focus on the following stochastic differential
equation (SDE) [8]
$d\bm{x}(t)=-\bm{L}\bm{x}(t)dt+\alpha d\bm{b}(t)$ (22)
to study the noisy average consensus system. The parameter $\alpha$ is a
positive real number representing the intensity of the noise. The stochastic
term $\bm{b}(t)$ represents the $n$-dimensional standard Wiener process; the
elements of $\bm{b}(t)=(b_{1}(t),b_{2}(t),\ldots,b_{n}(t))^{T}$ are
independent one-dimensional standard Wiener processes. For a Wiener process
$b(t)$, we have $b(0)=0$, ${\sf E}[b(t)]=0$, and
$\displaystyle b(t)-b(s)\sim{\cal N}(0,t-s),\ 0\leq s\leq t.$ (23)
### III-B Approaches for studying stochastic dynamics
Our primary objective in the following analysis is to investigate the
stochastic dynamics of the noisy average consensus system, focusing on
deriving the mean and covariance of the solution $\bm{x}(t)$ for the SDE (22).
There are two approaches to analyze the system. The first approach relies on
the established theory of Ito calculus [8], which is used to handle stochastic
integrals directly (see Fig. 1). Ito calculus can be applied to derive the
first and second moments of the solution of (22).
Alternatively, the second approach employs the Euler-Maruyama (EM) method [7]
and utilizes its weak convergence property [7]. We adopt the latter approach
in our analysis, as it does not require knowledge of advanced stochastic
calculus once the weak convergence property is accepted. Additionally, this
approach extends naturally to the analysis of discrete-time noisy average
consensus systems. Furthermore, the EM method plays a key role in the
optimization method presented in Section V, and our analysis motivates the use
of the EM method for optimizing the covariance.
Figure 1: Two approaches for deriving the mean and covariance of $\bm{x}(t)$.
This paper follows the lower path using the EM method.
### III-C Euler-Maruyama method
We use the Euler-Maruyama method corresponding to this SDE so as to study the
stochastic behavior of the solution of the SDE (22) defined above. The EM
method is well-known numerical method for solving SDEs [7].
Assume that we need numerical solutions of a SDE in the time interval $0\leq
t\leq T$. We divide this interval into $N$ bins and let $t_{k}\equiv k\eta,\
k=0,1,\ldots,N$ where the interval $\eta$ is given by $\eta\equiv{T}/{N}.$ Let
us define a discretized sample $\bm{x}^{(k)}$ be
$\bm{x}^{(k)}\equiv\bm{x}(t_{k}).$ It should be noted that, the choice of the
width $\eta$ is crucial in order to ensure the stability and the accuracy of
the EM method. A small width leads to a more accurate solution, but requires
more computational time. A large width may be computationally efficient but
may lead to instability in the solution.
The recursive equation of the EM method corresponding to SDE (22) is given by
$\displaystyle\bm{x}^{(k+1)}=\bm{x}^{(k)}-\eta\bm{L}\bm{x}^{(k)}+\alpha\bm{w}^{(k)},\
k=0,1,2,\ldots,N,$ (24)
where each element of
$\bm{w}^{(k)}\equiv(w_{1}^{(k)},w_{2}^{(k)},\ldots,w_{n}^{(k)})^{T}$ follows
$w_{i}^{(k)}\sim{\cal N}(0,\eta).$ In the following discussion, we will use
the equivalent expression [7]:
$\displaystyle\bm{x}^{(k+1)}=\bm{x}^{(k)}-\eta\bm{L}\bm{x}^{(k)}+\alpha\sqrt{\eta}\bm{z}^{(k)},\
k=0,1,2,\ldots,N,$ (25)
where $\bm{z}^{(k)}$ is a random vector following the multivariate Gaussian
distribution ${\cal N}(\bm{0},\bm{I})$. The initial vector $\bm{x}^{(0)}$ is
set to be $\bm{c}$. This recursive equation will be referred to as the Euler-
Maruyama recursive equation.
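A minimal NumPy sketch of this recursion (function name ours):

```python
import numpy as np

def euler_maruyama(L, c, T=10.0, N=100, alpha=0.1, seed=None):
    """Simulate the noisy consensus SDE (22) via the EM recursion (25).

    Returns an (N+1) x n array holding x^(0), ..., x^(N).
    """
    rng = np.random.default_rng(seed)
    eta = T / N                            # discretization width
    n = len(c)
    x = np.empty((N + 1, n))
    x[0] = c
    for k in range(N):
        z = rng.standard_normal(n)         # z^(k) ~ N(0, I)
        x[k + 1] = x[k] - eta * (L @ x[k]) + alpha * np.sqrt(eta) * z
    return x
```

For instance, trajectories such as those in Fig. 2 correspond to calling this function with the cycle-graph Laplacian, $T=10$, $N=100$, $\alpha\in\{0,0.1\}$, and $\bm{c}$ drawn from ${\cal N}(\bm{0},\bm{I})$.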
Figure 2 presents a solution evaluated with the EM method. A cycle graph with
10 nodes and degree sequence $\bm{d}=(2,2,\ldots,2)$ is assumed. The initial
value is randomly initialized as $\bm{x}(0)\sim{\cal N}(0,\bm{I})$. We can
confirm that the state values converge to the average value $\gamma$ in the
noiseless case (left). On the other hand, the state vector fluctuates around
the average in the noisy case (right).
Figure 2: Trajectories of $\bm{x}(t_{k})=(x_{1}(t_{k}),\ldots,x_{n}(t_{k}))$
estimated by using the EM method. A cycle graph with 10 nodes was used. The
range $[0,10.0]$ is discretized with $N=100$ points. The consensus average
value is $\gamma=-0.3267$. Left panel: noiseless case $(\alpha=0.0)$; right
panel: noisy case $(\alpha=0.1)$.
## IV Analysis for Noisy average consensus
### IV-A Recursive equation for residual error
In the following, we will analyze the stochastic behavior of the residual
error. This will be the basis for the MSE formula to be presented.
Recall that the initial state vector is
$\bm{c}=(c_{1},c_{2},\ldots,c_{n})^{T}$ and that the average of the initial
values is denoted by $\gamma$. Since the set of eigenvectors
$\\{\bm{\xi}_{1},\ldots,\bm{\xi}_{n}\\}$ of $\bm{L}$ is an orthonormal base,
we can expand the initial state vector $\bm{c}$ as
$\displaystyle\bm{c}=\zeta_{1}\bm{\xi}_{1}+\zeta_{2}\bm{\xi}_{2}+\cdots+\zeta_{n}\bm{\xi}_{n},$
(26)
where the coefficient is obtained by
$\zeta_{i}=\bm{c}^{T}\bm{\xi}_{i}(i\in[n])$. Note that
$\zeta_{1}\bm{\xi}_{1}=\gamma\bm{1}$ holds.
At the initial index $k=0$, the Euler-Maruyama recursive equation becomes
$\displaystyle\bm{x}^{(1)}=\bm{x}^{(0)}-\eta\bm{L}\bm{x}^{(0)}+\alpha\sqrt{\eta}\bm{z}^{(0)}.$
(27)
Substituting (26) into the above equation, we have
$\displaystyle\bm{x}^{(1)}$
$\displaystyle=\bm{x}^{(0)}-\eta\bm{L}(\zeta_{1}\bm{\xi}_{1}+\zeta_{2}\bm{\xi}_{2}+\cdots+\zeta_{n}\bm{\xi}_{n})+\alpha\sqrt{\eta}\bm{z}^{(0)}$
$\displaystyle=\bm{x}^{(0)}-\eta\zeta_{1}\bm{L}\bm{\xi}_{1}-\eta\bm{L}(\bm{x}^{(0)}-\zeta_{1}\bm{\xi}_{1})+\alpha\sqrt{\eta}\bm{z}^{(0)}$
$\displaystyle=\bm{x}^{(0)}-\eta\bm{L}(\bm{x}^{(0)}-\gamma\bm{1})+\alpha\sqrt{\eta}\bm{z}^{(0)},$
(28)
where the equations $\bm{L}\bm{\xi}_{1}=\bm{0}$ and
$\zeta_{1}\bm{\xi}_{1}=\gamma\bm{1}$ are used in the last equality.
Subtracting $\gamma\bm{1}$ from the both sides, we get
$\displaystyle\bm{x}^{(1)}-\gamma\bm{1}=(\bm{I}-\eta\bm{L})(\bm{x}^{(0)}-\gamma\bm{1})+\alpha\sqrt{\eta}\bm{z}^{(0)}.$
(29)
For the index $k\geq 1$, the Euler-Maruyama recursive equation can be written
as
$\displaystyle\bm{x}^{(k+1)}=(\bm{I}-\eta\bm{L})\bm{x}^{(k)}+\alpha\sqrt{\eta}\bm{z}^{(k)}.$
(30)
Subtracting $\gamma\bm{1}$ from the both sides, we have
$\displaystyle\bm{x}^{(k+1)}-\gamma\bm{1}=(\bm{I}-\eta\bm{L})\bm{x}^{(k)}-\gamma\bm{1}+\alpha\sqrt{\eta}\bm{z}^{(k)}.$
(31)
By using the relation $(\bm{I}-\eta\bm{L})\gamma\bm{1}=\gamma\bm{1},$ we can
rewrite the above equation as
$\displaystyle\bm{x}^{(k+1)}-\gamma\bm{1}$
$\displaystyle=(\bm{I}-\eta\bm{L})\bm{x}^{(k)}-(\bm{I}-\eta\bm{L})\gamma\bm{1}+\alpha\sqrt{\eta}\bm{z}^{(k)}$
$\displaystyle=(\bm{I}-\eta\bm{L})(\bm{x}^{(k)}-\gamma\bm{1})+\alpha\sqrt{\eta}\bm{z}^{(k)}.$
(32)
It can be confirmed that the above recursion (32) is consistent with the
initial equation (29). We summarize the above argument as the following lemma.
###### Lemma 1
Let $\bm{e}^{(k)}\equiv\bm{x}^{(k)}-\gamma\bm{1}$ be the residual error at
index $k$. The evolution of the residual error of the EM method is described
by
$\displaystyle\bm{e}^{(k+1)}=(\bm{I}-\eta\bm{L})\bm{e}^{(k)}+\alpha\sqrt{\eta}\bm{z}^{(k)}$
(33)
for $k\geq 0$.
The residual error $\bm{e}^{(k)}$ denotes the error between the average vector
$\gamma\bm{1}$ and the state vector $\bm{x}^{(k)}$ at time index $k$. By
analyzing the statistical behavior of $\bm{e}^{(k)}$, we can gain insight into
the stochastic properties of the dynamics of the noisy consensus system.
### IV-B Asymptotic mean of residual error
Let $\bm{x}\sim{\cal N}(\bm{\mu},\bm{\Sigma})$. Recall that the vector
obtained by the linear map $\bm{y}=\bm{A}\bm{x}$ with
$\bm{A}\in\mathbb{R}^{n\times n}$ also follows a Gaussian distribution, i.e.,
$\displaystyle\bm{y}\sim{\cal N}(\bm{A}\bm{\mu},\bm{A}\bm{\Sigma}\bm{A}^{T}).$
(34)
If two Gaussian vectors $\bm{a}\sim{\cal N}(\bm{\mu}_{a},\bm{\Sigma}_{a})$ and
$\bm{b}\sim{\cal N}(\bm{\mu}_{b},\bm{\Sigma}_{b})$ are independent, the sum
$\bm{z}=\bm{a}+\bm{b}$ is also Gaussian, i.e.,
$\displaystyle\bm{z}\sim{\cal
N}(\bm{\mu}_{a}+\bm{\mu}_{b},\bm{\Sigma}_{a}+\bm{\Sigma}_{b}).$ (35)
In the recursive equation (33), it is evident that $\bm{e}^{(1)}$ follows a
multivariate Gaussian distribution because
$\displaystyle\bm{e}^{(1)}=(\bm{I}-\eta\bm{L})(\bm{c}-\gamma\bm{1})+\alpha\sqrt{\eta}\bm{z}^{(0)}$
(36)
is the sum of a constant vector and a Gaussian random vector. From the above
properties of Gaussian random vectors, the residual error vector
$\bm{e}^{(k)}$ follows the multivariate Gaussian distribution ${\cal
N}(\bm{\mu}^{(k)},\bm{\Sigma}^{(k)})$ where the mean vector $\bm{\mu}^{(k)}$
and the covariance matrix $\bm{\Sigma}^{(k)}$ are recursively determined by
$\displaystyle\bm{\mu}^{(k+1)}$
$\displaystyle=(\bm{I}-\eta\bm{L})\bm{\mu}^{(k)},$ (37)
$\displaystyle\bm{\Sigma}^{(k+1)}$
$\displaystyle=(\bm{I}-\eta\bm{L})\bm{\Sigma}^{(k)}(\bm{I}-\eta\bm{L})^{T}+\alpha^{2}\eta\bm{I}$
(38)
for $k\geq 0$ where the initial values are formally given by
$\displaystyle\bm{\mu}^{(0)}$ $\displaystyle=\bm{c}-\gamma\bm{1},$ (39)
$\displaystyle\bm{\Sigma}^{(0)}$ $\displaystyle=\bm{O}.$ (40)
Solving the recursive equation, we can get the asymptotic mean formula as
follows.
###### Lemma 2
Suppose that $T>0$ is given. The asymptotic mean at $N\rightarrow\infty$ is
given by
$\displaystyle\lim_{N\rightarrow\infty}\bm{\mu}^{(N)}=\exp(-\bm{L}T)(\bm{c}-\gamma\bm{1}).$
(41)
(Proof) The mean recursion is given as
$\bm{\mu}^{(k)}=(\bm{I}-\eta\bm{L})^{k}(\bm{c}-\gamma\bm{1})$ for $k\geq 1$.
Recall that the eigenvalue decomposition of $\bm{L}$ is given by
$\bm{L}=\bm{U}\mbox{diag}(\lambda_{1},\ldots,\lambda_{n})\bm{U}^{T}$. From
$\displaystyle\bm{I}-\eta\bm{L}=\bm{U}(\bm{I}-\eta\mbox{diag}(\lambda_{1},\ldots,\lambda_{n}))\bm{U}^{T},$
(42)
we have
$\displaystyle(\bm{I}-\eta\bm{L})^{k}=\bm{U}\mbox{diag}((1-\eta\lambda_{1})^{k},\ldots,(1-\eta\lambda_{n})^{k})\bm{U}^{T}.$
(43)
This implies, from the definition of the exponential function,
$\displaystyle\lim_{N\rightarrow\infty}\left(\bm{I}-\frac{T}{N}\bm{L}\right)^{N}=\exp(-\bm{L}T),$
(44)
where $\eta=T/N$. Applying this limit to the mean recursion yields the claim.
It is easy to confirm that the claim of this lemma is consistent with the
continuous solution of the noiseless case (18). Namely, in the limit
$\alpha\rightarrow 0$, the state evolution of the noisy system converges to
that of the noiseless system.
### IV-C Asymptotic covariance of residual error
We here discuss the asymptotic behavior of the covariance matrix
$\bm{\Sigma}^{(N)}$ at the limit of $N\rightarrow\infty$.
###### Lemma 3
Suppose that $T>0$ is given. The asymptotic covariance matrix at
$N\rightarrow\infty$ is given by
$\displaystyle\lim_{N\rightarrow\infty}\bm{\Sigma}^{(N)}=\bm{U}\mbox{diag}\left(\alpha^{2}T,\theta_{2},\theta_{3},\ldots,\theta_{n}\right)\bm{U}^{T},$
(45)
where $\theta_{i}$ is defined by
$\displaystyle\theta_{i}\equiv\frac{\alpha^{2}}{2\lambda_{i}}\left(1-e^{-2\lambda_{i}T}\right).$
(46)
(Proof) Recall that
$\displaystyle\bm{I}-\eta\bm{L}=\bm{U}\mbox{diag}(1,1-\eta\lambda_{2},\ldots,1-\eta\lambda_{n})\bm{U}^{T}.$
(47)
Let
$\bm{\Sigma}^{(k)}=\bm{U}\mbox{diag}(s_{1}^{(k)},\ldots,s_{n}^{(k)})\bm{U}^{T}$.
A spectral representation of the covariance evolution (38) is thus given by
$\displaystyle\mbox{diag}(s_{1}^{(k+1)},\ldots,s_{n}^{(k+1)})=\mbox{diag}(s_{1}^{(k)},s_{2}^{(k)}(1-\eta\lambda_{2})^{2},\ldots,s_{n}^{(k)}(1-\eta\lambda_{n})^{2})+\alpha^{2}\eta\bm{I},$
(48)
where $s_{i}^{(0)}=0$. The first component follows the recursion
$s_{1}^{(k+1)}=s_{1}^{(k)}+\alpha^{2}\eta$, and thus we have
$s_{1}^{(N)}=\alpha^{2}\eta N=\alpha^{2}T$. The other components follow
$\displaystyle
s_{i}^{(k+1)}=s_{i}^{(k)}(1-\eta\lambda_{i})^{2}+\alpha^{2}\eta.$ (49)
Let us consider the characteristic equation of (49) which is given by
$\displaystyle s=s(1-\eta\lambda_{i})^{2}+\alpha^{2}\eta.$ (50)
The solution of the equation is given by
$\displaystyle s=\frac{\alpha^{2}\eta}{1-(1-\eta\lambda_{i})^{2}}.$ (51)
The above recursive equation (49) thus can be transformed as
$\displaystyle s_{i}^{(k+1)}-s=(s_{i}^{(k)}-s)(1-\eta\lambda_{i})^{2}.$ (52)
From the above equation, $s_{i}^{(N)}$ can be solved as
$\displaystyle s_{i}^{(N)}=s+(s_{i}^{(0)}-s)(1-\eta\lambda_{i})^{2N}.$ (53)
Taking the limit $N\rightarrow\infty$ with $\eta=T/N$, and noting that
$s=\alpha^{2}\eta/(2\eta\lambda_{i}-\eta^{2}\lambda_{i}^{2})\rightarrow\alpha^{2}/(2\lambda_{i})$
while $(1-\eta\lambda_{i})^{2N}\rightarrow e^{-2\lambda_{i}T}$, we have
$\displaystyle\lim_{N\rightarrow\infty}s_{i}^{(N)}=\frac{\alpha^{2}}{2\lambda_{i}}\left(1-e^{-2\lambda_{i}T}\right).$
(54)
We thus have the claim of this lemma.
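Lemma 3 can also be checked numerically by iterating the covariance evolution (38) and comparing with the closed form (45); the sketch below (ours, with an illustrative cycle-graph setup) does exactly that.

```python
import numpy as np

# Cycle-graph Laplacian (n = 10), illustrative parameters.
n, alpha, T, N = 10, 0.2, 10.0, 4000
L = np.zeros((n, n))
for i in range(n):
    L[i, (i + 1) % n] = L[(i + 1) % n, i] = -1.0
np.fill_diagonal(L, 2.0)

# Iterate the covariance evolution (38) from Sigma^(0) = 0.
eta = T / N
A = np.eye(n) - eta * L
Sigma = np.zeros((n, n))
for _ in range(N):
    Sigma = A @ Sigma @ A.T + alpha**2 * eta * np.eye(n)

# Closed form (45): alpha^2 T for lambda_1 = 0, theta_i otherwise.
lam, U = np.linalg.eigh(L)
safe = np.clip(lam, 1e-12, None)           # avoid division by zero
theta = np.where(lam > 1e-9,
                 alpha**2 / (2 * safe) * (1 - np.exp(-2 * lam * T)),
                 alpha**2 * T)
Sigma_inf = U @ np.diag(theta) @ U.T
print(np.max(np.abs(Sigma - Sigma_inf)))   # close to 0 for large N
```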
### IV-D Weak convergence of Euler-Maruyama method
As previously noted, the asymptotic mean (41) is consistent with the
continuous solution. The weak convergence property of the EM method [7] allows
us to obtain the moments of the error at time $t$.
We will briefly explain the weak convergence property. Suppose an SDE of the
form:
$\displaystyle d\bm{x}(t)=\phi(\bm{x}(t))dt+\psi(\bm{x}(t))d\bm{b}(t).$ (55)
If $\phi$ and $\psi$ are bounded and Lipschitz continuous, then the finite
order moment estimated by the EM method converges to the exact moment of the
solution $\bm{x}(t)$ at the limit $N\rightarrow\infty$ [7]. This property is
called the weak convergence property. In our case, the SDE (22) has bounded
and Lipschitz continuous coefficient functions, i.e.,
$\phi(\bm{x})=-\bm{L}\bm{x}$ and $\psi(\bm{x})=\alpha$. Hence, we can employ
the weak convergence property in our analysis.
Suppose $\bm{x}(t)$ is a solution of the SDE (22) with the initial condition
$\bm{x}(0)=\bm{c}$. Let $\bm{\mu}(t)$ be the mean vector of the residual error
$\bm{e}(t)=\bm{x}(t)-\gamma\bm{1}$ and $\bm{\Sigma}(t)$ be the covariance
matrix of $\bm{e}(t)$.
###### Theorem 1
For a positive real number $t>0$, the mean and the covariance matrix of the
residual error $\bm{e}(t)$ are given by
$\displaystyle\bm{\mu}(t)$ $\displaystyle=\exp(-\bm{L}t)(\bm{c}-\gamma\bm{1})$
(56) $\displaystyle\bm{\Sigma}(t)$
$\displaystyle=\bm{U}\mbox{diag}\left(\alpha^{2}t,\theta_{2},\theta_{3},\ldots,\theta_{n}\right)\bm{U}^{T}.$
(57)
(Proof) Due to the weak convergence property of the EM method, the first and
second moments of the error converge to the asymptotic mean and covariance of
the EM method [7], i.e.,
$\displaystyle\bm{\mu}(T)$
$\displaystyle=\lim_{N\rightarrow\infty}\bm{\mu}^{(N)}$ (58)
$\displaystyle\bm{\Sigma}(T)$
$\displaystyle=\lim_{N\rightarrow\infty}\bm{\Sigma}^{(N)},$ (59)
where $N$ and $T$ are related by $T=\eta N$. Applying Lemmas 2 and 3 and
replacing the variable $T$ by $t$ provides the claim of the theorem.
### IV-E Mean squared error
In the following, we assume that the initial state vector $\bm{c}$ follows
the Gaussian distribution ${\cal N}(\bm{0},\bm{I})$.
In this setting, $\bm{\mu}(t)$ also follows a multivariate Gaussian
distribution with mean vector $\bm{0}$ and covariance matrix
$\bm{Q}(t)\bm{Q}(t)^{T}$, where
$\displaystyle\bm{Q}(t)\equiv\exp(-\bm{L}t)\left(\bm{I}-\frac{1}{n}\bm{1}\bm{1}^{T}\right)$
(60)
because $\bm{\mu}(t)$ can be rewritten as
$\displaystyle\bm{\mu}(t)$
$\displaystyle=\exp(-\bm{L}t)(\bm{c}-\gamma\bm{1})=\exp(-\bm{L}t)\left(\bm{I}-\frac{1}{n}\bm{1}\bm{1}^{T}\right)\bm{c}.$
(61)
By using the result of Theorem 1, we immediately have the following corollary
indicating the MSE formula.
###### Corollary 1
The mean squared error (MSE)
$\displaystyle{\sf MSE}(t)\equiv{\sf E}[\|\bm{x}(t)-\gamma\bm{1}\|_{2}^{2}]$
(62)
is given by
$\displaystyle{\sf MSE}(t)$
$\displaystyle=\alpha^{2}t+\frac{\alpha^{2}}{2}\sum_{i=2}^{n}\frac{1-e^{-2\lambda_{i}t}}{\lambda_{i}}+\mbox{tr}(\bm{Q}(t)\bm{Q}(t)^{T}).$
(Proof) We can rewrite $\bm{x}(t)$ as:
$\displaystyle\bm{x}(t)=\gamma\bm{1}+\bm{Q}(t)\bm{c}+\bm{w},$ (63)
where $\bm{w}\sim{\cal N}(\bm{0},\bm{\Sigma}(t))$, and $\bm{w}$ and $\bm{c}$
are independent. We thus have
$\displaystyle{\sf MSE}(t)$
$\displaystyle=\mbox{tr}(\bm{\Sigma}(t))+\mbox{tr}(\bm{Q}(t)\bm{Q}(t)^{T})$
$\displaystyle=\alpha^{2}t+\frac{\alpha^{2}}{2}\sum_{i=2}^{n}\frac{1-e^{-2\lambda_{i}t}}{\lambda_{i}}+\mbox{tr}(\bm{Q}(t)\bm{Q}(t)^{T})$
(64)
due to Theorem 1.
Since the term $\mbox{tr}(\bm{Q}(t)\bm{Q}(t)^{T})$ decreases exponentially
with $t$, $\mbox{tr}(\bm{\Sigma}(t))$ dominates ${\sf MSE}(t)$ for
sufficiently large $t$. In that regime, the MSE is well approximated by the
asymptotic MSE (AMSE)
$\displaystyle{\sf MSE}(t)\simeq{\sf
AMSE}(t)\equiv\alpha^{2}t+\frac{\alpha^{2}}{2}\sum_{i=2}^{n}\frac{1}{\lambda_{i}}$
(65)
because $\mbox{tr}(\bm{Q}(t)\bm{Q}(t)^{T})$ is negligible and
$1-e^{-2\lambda_{i}t}$ is well approximated by $1$. We can observe that the
sum of the inverse eigenvalues $\sum_{i=2}^{n}({1}/{\lambda_{i}})$ of the
Laplacian matrix determines the intercept of ${\sf AMSE}(t)$. In other words,
the graph topology influences the stochastic error behavior through the
sum of inverse eigenvalues of the Laplacian matrix.
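Both formulas translate directly into code; the sketch below (ours) evaluates them from the Laplacian, using SciPy's matrix exponential for the $\mbox{tr}(\bm{Q}(t)\bm{Q}(t)^{T})$ term.

```python
import numpy as np
from scipy.linalg import expm

def mse_theory(L, t, alpha):
    """Theoretical MSE(t) of Corollary 1 for c ~ N(0, I)."""
    n = L.shape[0]
    lam = np.linalg.eigvalsh(L)[1:]        # the nonzero eigenvalues
    tr_sigma = alpha**2 * t \
        + 0.5 * alpha**2 * np.sum((1 - np.exp(-2 * lam * t)) / lam)
    Q = expm(-L * t) @ (np.eye(n) - np.ones((n, n)) / n)
    return tr_sigma + np.trace(Q @ Q.T)

def amse_theory(L, t, alpha):
    """Asymptotic MSE: alpha^2 t + (alpha^2 / 2) * sum_i 1/lambda_i."""
    lam = np.linalg.eigvalsh(L)[1:]
    return alpha**2 * t + 0.5 * alpha**2 * np.sum(1.0 / lam)
```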
Figure 3 presents a comparison of ${\sf MSE}(t)$ evaluated by the EM method
(25) and the formula in (64). In this experiment, the cycle graph with 10
nodes is used. The values of ${\sf AMSE}(t)$ are also included in Fig. 3. We
can see that the theoretical values of ${\sf MSE}(t)$ and estimated values by
the EM method are quite close.
Figure 3: Comparison of MSE: the label Euler-Maruyama represents ${\sf
MSE}(t)$ estimated using samples generated by the EM method. Theoretical ${\sf
MSE}(t)$ represents the values evaluated by (64). Theoretical ${\sf AMSE}(t)$
represents the values of ${\sf AMSE}(t)$. A cycle graph with 10 nodes with
$\bm{d}=(2,2,\ldots,2)$ is used. The parameter setting is as follows: $N=250$,
$T=10$, $\alpha=0.2$. $5000$ samples are generated by the EM method for
estimating ${\sf MSE}(t)$.
## V Minimization of mean squared error
### V-A Optimization Problems A and B
In the previous section, we demonstrated that the MSE can be expressed in
closed-form. It is natural to optimize the edge weights $\\{\mu_{ij}\\}$ in
order to decrease the value of the MSE. The optimization of the edge weights
is equivalent to the optimization of the Laplacian matrix $\bm{L}$. There
exist several related works that aim to achieve a similar goal for noise-free
systems. For example, Xiao and Boyd [5] proposed a method to minimize the
second eigenvalue to achieve the fastest convergence to the average. They
formulated the optimization problem as a convex optimization problem, which
can be solved efficiently. Kishida et al. [13] presented a deep unfolding-
based method for optimizing time-dependent edge weights, yet these methods are
not applicable to systems with noise. Optimizing the MSE may be a non-trivial
task as it involves the sum of the inverse eigenvalues of the Laplacian
matrix.
In this subsection, we will present two optimization problems of edge weights.
#### V-A1 Optimization problem A
Assume that a degree sequence $\bm{d}\in\mathbb{R}_{+}^{n}$ is given in
advance. Optimization problem A is the minimization of ${\sf MSE}(t^{*})$
under the given degree sequence, where $t^{*}$ is a predetermined target time.
The precise formulation of the problem is given as follows:
$\displaystyle\mbox{minimize }{\sf MSE}(t^{*})$ subject to:
$\displaystyle\bm{L}=\{L_{ij}\}\in\mathbb{R}^{n\times n}$ (66)
$\displaystyle\bm{L}=\bm{L}^{T}$ (67)
$\displaystyle\bm{L}\bm{1}=\bm{0}$ (68)
$\displaystyle\|\mbox{diag}(\bm{L})-\bm{d}\|_{2}<\theta$ (69)
$\displaystyle L_{ij}=0,\ (i,j)\notin E.$ (70)
The constraint (67) is imposed for the symmetry of the edge weights,
$\mu_{ij}=\mu_{ji}$ for $(i,j)\in E$. The row sum constraint (68) is needed
for satisfying (8). The constraint (69) means that the diagonal of $\bm{L}$
should be close enough to the given degree sequence; the positive constant
$\theta$ can be seen as a tolerance parameter.
One way to interpret optimization problem A is to consider the graph $G$ as
representing the wireless connections between terminals $i\in[n]$. The degree
sequence $\bm{d}=(d_{1},d_{2},\ldots,d_{n})$ can be seen as an allocated total
receive power, i.e., terminal $i$ can receive the neighboring signals up to
the total power $d_{i}$. If an average consensus protocol is used in such a
wireless network for specific applications, it is desirable to optimize ${\sf
MSE}(t^{*})$ while satisfying the power constraint.
#### V-A2 Optimization problem B
Assume that a real constant $D\in\mathbb{R}_{+}$ is given in advance.
Optimization problem B is the minimization of ${\sf MSE}(t^{*})$ under the
constraint that the diagonal sum (trace) of the Laplacian matrix $\bm{L}$ is
close to $D$. The formulation is given as follows:
$\displaystyle\mbox{minimize }{\sf MSE}(t^{*})$ subject to:
$\displaystyle\bm{L}=\{L_{ij}\}\in\mathbb{R}^{n\times n}$ (71)
$\displaystyle\bm{L}=\bm{L}^{T}$ (72)
$\displaystyle\bm{L}\bm{1}=\bm{0}$ (73)
$\displaystyle\left|\sum_{i=1}^{n}L_{ii}-D\right|<\theta$ (74)
$\displaystyle L_{ij}=0,\ (i,j)\notin E.$ (75)
Following the interpretation above, the power allocation is also optimized in
this problem.
### V-B Minimization based on deep-unfolded EM method
Advances in deep neural networks have had a strong impact on the design of
algorithms for communications and signal processing [14, 15, 16]. Deep
unfolding can be seen as a very effective way to improve the convergence of
iterative algorithms. Gregor and LeCun introduced the Learned ISTA (LISTA)
[21]. Borgerding et al. proposed variants of AMP and VAMP with trainable
capability [19, 20]. Trainable ISTA (TISTA) [23] is another trainable sparse
signal recovery algorithm with fast convergence. TISTA requires only a small
number of trainable parameters, which provides a fast and stable training
process. Another advantage of deep unfolding is the relatively high
interpretability of its learning results.
The concept behind deep unfolding is rather simple. We embed trainable
parameters into the original iterative algorithm and then unfold the signal-
flow graph of the algorithm. Standard supervised training techniques from deep
learning, such as stochastic gradient descent (SGD) and back propagation, can
then be applied to the unfolded signal-flow graph to optimize the trainable
parameters.
The combination of deep unfolding and differential equation solvers [24] is an
active area of research in scientific machine learning. The technique is,
however, not limited to applications within scientific machine learning. In
this subsection, we introduce an optimization
algorithm that is based on the deep-unfolded EM method. The central idea is to
use a loss function that approximates ${\sf MSE}(t^{*})$. By using a
stochastic gradient descent approach with this loss function, we can obtain a
near-optimal solution for both optimization problems A and B. The proposed
method can be easily implemented using any modern neural network framework
that includes an automatic differentiation mechanism. The following
subsections will provide a more detailed explanation of the proposed method.
#### V-B1 Mini-batch for optimization
In the optimization process described below, a number of mini-batches are
randomly generated. A mini-batch consists of
${\cal M}\equiv\{(\bm{c}_{1},\gamma_{1}),(\bm{c}_{2},\gamma_{2}),\ldots,(\bm{c}_{K},\gamma_{K})\}.$
(76)
The size parameter $K$ is called the mini-batch size. The initial value
vectors $\bm{c}_{i}$ follow a Gaussian distribution, i.e., $\bm{c}_{i}\sim{\cal
N}(\bm{0},\bm{I})$ $(i\in[K])$. The corresponding average values are obtained
by $\gamma_{i}\equiv(1/n)\bm{c}_{i}^{T}\bm{1}$.
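For concreteness, mini-batch generation can be sketched in a few lines of Julia; the function name and interface below are illustrative, not taken from the paper.

```julia
# A sketch of mini-batch generation following Eq. (76).
function make_minibatch(n::Int, K::Int)
    C = [randn(n) for _ in 1:K]         # initial vectors c_i ~ N(0, I)
    gamma = [sum(c) / n for c in C]     # gamma_i = (1/n) c_i' * 1
    return C, gamma
end
```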
#### V-B2 Loss function for Optimization problem A
The loss function corresponding to a mini-batch ${\cal M}$ is given by
$E_{\cal M}(\bm{L})\equiv\frac{1}{K}\sum_{i=1}^{K}\|\bm{\chi}(\bm{c}_{i})-\gamma_{i}\bm{1}\|^{2}_{2}+P_{A}(\bm{L}),$
(77)
where $\bm{\chi}(\bm{c}_{i})\equiv\bm{x}^{(N)}$ is the random variable given
by the Euler-Maruyama recursion
$\bm{x}^{(k+1)}=\bm{x}^{(k)}-\eta\bm{L}\bm{x}^{(k)}+\alpha\sqrt{\eta}\bm{z}^{(k)},\
k=0,1,\ldots,N-1,$ (78)
with $\bm{x}^{(0)}=\bm{c}_{i}$. The first term of the loss function can be
regarded as an approximation of ${\sf MSE}(t^{*})$:
$\frac{1}{K}\sum_{i=1}^{K}\|\bm{\chi}(\bm{c}_{i})-\gamma_{i}\bm{1}\|^{2}_{2}\simeq{\sf MSE}(t^{*})$
(79)
for sufficiently large $K$ and $T=t^{*}$.
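A minimal Julia sketch of the recursion (78) is given below; the step size $\eta=T/N$ is our assumption, chosen so that $N$ steps reach the target time $T=t^{*}$, consistent with the discretization used in the experiments of Sec. VI.

```julia
# A sketch of the Euler-Maruyama recursion (78), assuming eta = T/N.
function chi(c, L, T, N, alpha)
    eta = T / N
    x = copy(c)                                       # x^(0) = c
    for _ in 1:N
        z = randn(length(x))                          # noise vector z^(k)
        x = x .- eta .* (L * x) .+ alpha * sqrt(eta) .* z
    end
    return x                                          # chi(c) = x^(N)
end
```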
The function $P_{A}(\bm{L})$ is a penalty function corresponding to the
constraints (67)–(70) defined by
$P_{A}(\bm{L})\equiv\rho_{1}\|\bm{L}-\bm{L}^{T}\|_{F}^{2}+\rho_{2}\|\bm{L}\bm{1}\|_{2}^{2}+\rho_{3}\|\mbox{diag}(\bm{L})-\bm{d}\|_{2}^{2}+\rho_{4}\|\bm{L}\odot\bm{M}\|_{F}^{2},$
(80)
where $\bm{M}=\\{M_{ij}\\}$ is the mask matrix defined by
$M_{ij}\equiv\left\{\begin{array}{cc}1,&(i,j)\notin E\\ 0,&\mbox{otherwise}.\end{array}\right.$
(83)
The operator $\odot$ represents the Hadamard matrix product. The positive
constants $\rho_{i}$ $(i\in[4])$ control the relative strength of each penalty
term. The first term of the penalty function corresponds to the symmetry
constraint (67). The term $\|\bm{L}\bm{1}\|_{2}^{2}$ is the penalty term for
the row-sum constraint (68). The third term
$\|\mbox{diag}(\bm{L})-\bm{d}\|_{2}^{2}$ is included for the degree
constraint. The last term $\|\bm{L}\odot\bm{M}\|_{F}^{2}$ forces $L_{ij}$ to
be very small if $(i,j)\notin E$.
Due to these penalty terms in $P_{A}(\bm{L})$, violations of the constraints
(67)–(70) are suppressed during the optimization process.
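The penalty can be computed directly from Eq. (80); the following Julia sketch assumes the mask matrix `M` of Eq. (83) and a container `rho` holding the four penalty coefficients.

```julia
using LinearAlgebra

# A sketch of the penalty P_A(L) of Eq. (80).
function penalty_A(L, d, M, rho)
    n = size(L, 1)
    return rho[1] * norm(L - L')^2 +          # symmetry constraint (67)
           rho[2] * norm(L * ones(n))^2 +     # row-sum constraint (68)
           rho[3] * norm(diag(L) - d)^2 +     # degree constraint (69)
           rho[4] * norm(L .* M)^2            # support constraint (70)
end
```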
#### V-B3 Loss function for Optimization problem B
For Optimization problem B, we use almost the same loss function:
$E_{\cal M}(\bm{L})\equiv\frac{1}{K}\sum_{i=1}^{K}\|\bm{\chi}(\bm{c}_{i})-\gamma_{i}\bm{1}\|^{2}_{2}+P_{B}(\bm{L}).$
(84)
In this case, we use the penalty function matched to the feasible conditions
of Optimization problem B:
$P_{B}(\bm{L})\equiv\rho_{1}\|\bm{L}-\bm{L}^{T}\|_{F}^{2}+\rho_{2}\|\bm{L}\bm{1}\|_{2}^{2}+\rho_{3}\left(\sum_{i=1}^{n}L_{ii}-D\right)^{2}+\rho_{4}\|\bm{L}\odot\bm{M}\|_{F}^{2}.$
(85)
The third term of $P_{B}(\bm{L})$ corresponds to the diagonal sum condition
(74).
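Reusing the notation of the sketch above, $P_{B}$ only replaces the third penalty term:

```julia
# A sketch of P_B(L) of Eq. (85); tr(L) is the diagonal sum.
penalty_B(L, D, M, rho) =
    rho[1] * norm(L - L')^2 + rho[2] * norm(L * ones(size(L, 1)))^2 +
    rho[3] * (tr(L) - D)^2 + rho[4] * norm(L .* M)^2
```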
#### V-B4 Optimization process
The optimization process is summarized in Algorithm 1. This optimization
algorithm is mainly based on the deep-unfolded Euler-Maruyama (DU-EM) method
for approximating ${\sf MSE}(t^{*})$. The initial value of the matrix $\bm{L}$
is the $n\times n$ zero matrix $\bm{O}^{n\times n}$. The main loop can be
regarded as a stochastic gradient descent method minimizing the loss values.
The update of $\bm{L}$ (line 5) can be done by any optimizer, such as the Adam
optimizer. The gradient of the loss function (line 4) can be easily evaluated
using the automatic differentiation mechanism included in recent neural
network frameworks such as TensorFlow, PyTorch, JAX, and Flux.jl with Julia.
The block diagram of Algorithm 1 is shown in Fig. 4.
Algorithm 1 Optimization process using DU-EM method
0: graph $G$, tolerance $\theta$, degree sequence $\bm{d}$ or degree sum $D$
0: Laplacian matrix $\bm{L}_{out}$
1: Let $\bm{L}\equiv\bm{O}^{n\times n}$
2: for $i=1$ to $I$ do
3: Generate a mini-batch ${\cal M}$ randomly.
4: Compute the gradient of the loss function $\displaystyle\bm{g}\equiv\nabla
E_{\cal M}(\bm{L})$
5: The matrix $\bm{L}$ is updated by using $\bm{g}$.
6: end for
7: $\bm{L}_{out}\equiv\mbox{round}_{\theta,*}(\bm{L})$
Figure 4: Block diagram of optimization process in Algorithm 1. The core of
the algorithm is the DU-EM method for approximating ${\sf MSE}(t^{*})$.
Several standard deep learning techniques such as back propagation and
stochastic gradient descent can be applied to update the trainable matrix
$\bm{L}$.
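For illustration, the main loop of Algorithm 1 can be sketched in Julia as follows. The sketch reuses `make_minibatch`, `chi`, and `penalty_A` defined above, uses Zygote (the automatic differentiation engine underlying Flux.jl) for line 4, and substitutes plain gradient descent for the Adam optimizer; the rounding of line 7 is applied separately.

```julia
using Zygote

# A sketch of the main loop of Algorithm 1 (Optimization problem A).
function optimize_laplacian(n, M, d, rho; T=4.0, N=250, alpha=0.3,
                            K=25, iters=3000, lr=0.01)
    L = zeros(n, n)                                   # line 1: L = O
    for _ in 1:iters                                  # lines 2-6
        C, gamma = make_minibatch(n, K)               # line 3: mini-batch
        loss(A) = sum(norm(chi(c, A, T, N, alpha) .- g)^2
                      for (c, g) in zip(C, gamma)) / K + penalty_A(A, d, M, rho)
        grad = Zygote.gradient(loss, L)[1]            # line 4: autodiff
        L = L - lr * grad                             # line 5: SGD update
    end
    return L                                          # rounding (line 7) omitted
end
```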
The stochastic optimization process outlined in Algorithm 1 is unable to
guarantee that the obtained solution will be strictly feasible. To ensure
feasibility, it is necessary to search for a feasible solution that is near
the result obtained by optimization. This is accomplished by using the round
function $\mbox{round}_{\theta,*}(\cdot)$ at line 7 of Algorithm 1.
The specific details for the round function used for optimization problem A
are outlined in Algorithm 2. The first step in the algorithm,
$\bm{L}\equiv(\bm{L}_{in}+\bm{L}_{in}^{T})/2$, ensures that the resulting
matrix is symmetric. The nested loop from line 2 to line 7 is used to enforce
the degree constraint and the constraint $L_{ij}=0\ (i,j)\notin E$. The single
loop from line 9 to line 11 is implemented to satisfy the constraint
$\bm{L}\bm{1}=\bm{0}$. The output of the round function
$\mbox{round}_{\theta,\bm{d}}(\cdot)$ guarantees that the constraints
(67)-(70) of optimization problem A are strictly satisfied. A similar round
function can be constructed for optimization problem B, which is presented in
Algorithm 3.
Algorithm 2 Round function $\mbox{round}_{\theta,\bm{d}}(\cdot)$ for Opt.
prob. A
0: Matrix $\bm{L}_{in}$, degree sequence $\bm{d}$, threshold value $\theta$
0: Laplacian matrix $\bm{L}_{out}$ satisfying (67)–(70)
1: Let $\bm{L}\equiv(\bm{L}_{in}+\bm{L}_{in}^{T})/2$
2: for $i=1$ to $n$ do
3: $L_{ii}\equiv d_{i}$
4: for $j=1$ to $n$ do
5: If $(i,j)\notin E$, then let $L_{ij}\equiv 0$
6: end for
7: end for
8: $\bm{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{n})^{T}\equiv\bm{L}\bm{1}$
9: for $i=1$ to $n$ do
10: Let $L_{ii}\equiv L_{ii}-\epsilon_{i}$
11: end for
12: if $\|\mbox{diag}(\bm{L})-\bm{d}\|_{2}\geq\theta$ then
13: Quit with declaration “optimization failed”
14: end if
15: Output $\bm{L}_{out}\equiv\bm{L}$
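A Julia sketch of this round function is shown below; the edge set `E` is assumed to be a collection of index pairs $(i,j)$, and the diagonal is skipped in the masking step since it is set by the degree constraint.

```julia
# A sketch of the round function of Algorithm 2 (Opt. prob. A).
function round_A(L_in, d, theta, E)
    n = length(d)
    L = Matrix((L_in + L_in') / 2)            # line 1: symmetrize
    for i in 1:n
        L[i, i] = d[i]                        # line 3: enforce the degrees
        for j in 1:n
            if i != j && (i, j) ∉ E
                L[i, j] = 0.0                 # line 5: zero outside E
            end
        end
    end
    eps_vec = L * ones(n)                     # line 8: row-sum errors
    for i in 1:n
        L[i, i] -= eps_vec[i]                 # line 10: restore L*1 = 0
    end
    norm(diag(L) - d) < theta || error("optimization failed")  # lines 12-14
    return L                                  # line 15
end
```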
Algorithm 3 Round function $\mbox{round}_{\theta,D}(\cdot)$ for Opt. prob. B
0: Matrix $\bm{L}_{in}$, degree sum $D$, threshold value $\theta$
0: Laplacian matrix $\bm{L}_{out}$
1: Let $\bm{L}\equiv(\bm{L}_{in}+\bm{L}_{in}^{T})/2$
2: for $i=1$ to $n$ do
3: for $j=1$ to $n$ do
4: If $(i,j)\notin E$, then let $L_{ij}\equiv 0$
5: end for
6: end for
7: $\bm{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{n})^{T}\equiv\bm{L}\bm{1}$
8: for $i=1$ to $n$ do
9: Let $L_{ii}\equiv L_{ii}-\epsilon_{i}$
10: end for
11: if $\left|\sum_{i=1}^{n}L_{ii}-D\right|\geq\theta$ then
12: Quit with declaration “optimization failed”
13: end if
14: Output $\bm{L}_{out}\equiv\bm{L}$
## VI Numerical results
### VI-A Choice of Number of bins for EM-method
In the previous sections, we proposed a DU-based optimization method. This
section presents the results of numerical experiments. For these experiments,
we used the automatic differentiation mechanism provided by Flux.jl [25] on
the Julia language [26].
Before discussing the optimization of the MSE, we first examine the choice of
the number of bins, $N$. A small $N$ is beneficial for computational
efficiency, but it may lead to an inaccurate estimation of the MSE. In this
subsection, we compare Monte Carlo estimates of the MSE obtained by the EM
method.
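Concretely, such a Monte Carlo estimate can be obtained by averaging the quantity in Eq. (79) over independent runs of the `chi` recursion sketched in Sec. V-B; the number of trials below is an illustrative choice.

```julia
# A sketch of the Monte Carlo MSE estimate based on Eq. (79).
function mc_mse(L, T, N, alpha; trials=10_000)
    n = size(L, 1)
    acc = 0.0
    for _ in 1:trials
        c = randn(n)
        gamma = sum(c) / n
        acc += norm(chi(c, L, T, N, alpha) .- gamma)^2
    end
    return acc / trials
end
```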
The Karate graph is a well-known graph of a small social network. It
represents the relationships between 34 members of a karate club at a
university. The graph consists of 34 nodes, which represent the members of the
club, and 78 edges, which represent the relationships between the members.
Figure 5 compares three cases, i.e., $N=100,250,1000$. No visible difference
can be observed in the range from $T=0$ to $T=5$. In the following
experiments, we will use $N=250$ for the EM method.
Figure 5: MSE values estimated by Monte Carlo method based on the EM method.
Karate graph ($n=34$) and its unweighted Laplacian is assumed.
### VI-B Petersen graph (Optimization problem A)
The Petersen graph is a 3-regular graph with $n=10$ nodes (Fig. 6(b)). In this
subsection, we examine the behavior of our optimization algorithm for
${\sf MSE}(t^{*})$ on the Petersen graph.
Figure 6: Small graphs: (a) Cycle graph, (b) Petersen graph, (c) House graph.
The adjacency matrix $\bm{A}\equiv\{A_{ij}\}\in\mathbb{R}^{n\times n}$ of a
graph $G\equiv(V,E)$ is defined by
$A_{ij}\equiv\left\{\begin{array}{cc}1,&(i,j)\in E\\ 0,&\mbox{otherwise}.\end{array}\right.$
(88)
The unweighted Laplacian matrix $\bm{L}$ is defined by
$\bm{L}\equiv\bm{D}-\bm{A},$ (89)
where the degree matrix $\bm{D}=\{D_{ij}\}$ is a diagonal matrix with $D_{ii}$
being the degree of node $i$. Namely, the unweighted Laplacian corresponds to
the case where $\mu_{ij}=\mu_{ji}=1$ for any $(i,j)\in E$.
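In Julia, the unweighted Laplacian can be built in one line from the adjacency matrix:

```julia
using LinearAlgebra

# L = D - A, following Eqs. (88)-(89).
unweighted_laplacian(A) = Diagonal(vec(sum(A, dims=2))) - A
```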
In the following discussion, let $\bm{L}_{P}$ be the unweighted Laplacian
matrix of the Petersen graph. We consider Optimization problem A with the degree
sequence $\bm{d}\equiv\mbox{diag}(\bm{L}_{P})=(3,3,\ldots,3)$.
The parameter setting is as follows. The mini-batch size is set to $K=25$. The
noise intensity is $\alpha=0.3$. The penalty coefficients are
$\rho_{1}=\rho_{2}=\rho_{3}=\rho_{4}=10$. For time discretization, we use
$T=4,N=250$. The number of iterations for an optimization process is set to
3000. The tolerance parameter is set to $\theta=0.1$. In the optimization
process, we used the Adam optimizer with a learning rate of 0.01.
The loss values during an optimization process of Algorithm 1 are presented in
Fig. 7. In the initial stages of the optimization process, the loss value is
relatively high since the initial $\bm{L}$ is set to the zero matrix, which
means that the system cannot achieve average consensus. The loss value
decreases monotonically until around iteration 700, after which it fluctuates
in the range $700\leq k\leq 3000$. The graph shows that the matrix
$\bm{L}$ in Algorithm 1 is being updated appropriately and that the loss
value, which approximates ${\sf MSE}(t^{*})$, is decreasing.
Figure 7: Loss values in an optimization process: Optimization prob. A for
Petersen graph
Let us denote the Laplacian matrix obtained by the optimization process as
$\bm{L}^{*}$. Table I summarizes several important quantities regarding
$\bm{L}^{*}$. The top 4 rows of Table I indicate that $\bm{L}^{*}$ is
certainly a feasible solution satisfying (67)–(70) because we set
$\theta=0.1$. This numerical result confirms that the round function
$\mbox{round}_{\theta,\bm{d}}(\cdot)$ works appropriately. The last row of
Table I shows that $\bm{L}^{*}$ is very close to the unweighted Laplacian
matrix $\bm{L}_{P}$. Since the Petersen graph is regular and highly symmetric, it
is conjectured that $\bm{L}_{P}$ is the optimal solution for Optimization
problem A. Thus, the closeness between $\bm{L}_{P}$ and $\bm{L}^{*}$ can be
seen as a convincing result.
TABLE I: Several quantities regarding the optimization result $\bm{L}^{*}$

| Quantity | Value |
|---|---|
| $\|\bm{L}^{*}-\bm{L}^{*T}\|_{F}$ | 0 |
| $\|\bm{L}^{*}\bm{1}\|_{2}$ | 0 |
| $\|\bm{L}^{*}\odot\bm{M}\|_{F}$ | 0 |
| $\|\mbox{diag}(\bm{L}^{*})-\bm{d}\|_{2}$ | $5.74\times 10^{-3}$ |
| $\|\bm{L}_{P}-\bm{L}^{*}\|_{F}$ | 0.188 |
The MSE values of the optimization result $\bm{L}^{*}$ and the unweighted
Laplacian matrix $\bm{L}_{P}$ are presented in Fig. 8. These values are
evaluated by the MSE formula (64). No visible difference can be seen between
the two curves. This means that Algorithm 1 successfully found a good solution for
Optimization problem A in this case.
Figure 8: Petersen graph: MSE values of the optimization result $\bm{L}^{*}$
and the unweighted Laplacian matrix $\bm{L}_{P}$.
### VI-C Karate graph (Optimization problem A)
We here consider Optimization problem A on the Karate graph. Let $\bm{L}_{K}$
be the unweighted Laplacian matrix of the Karate graph. The target degree
sequence is set to $\bm{d}\equiv\mbox{diag}(\bm{L}_{K})=(16,9,10,\ldots,12,17)$.
The parameter setting for an optimization process is given as follows. The
mini-batch size is set to $K=50$. The noise intensity is set to $\alpha=0.3$.
The penalty coefficients are $\rho_{1}=\rho_{2}=\rho_{3}=\rho_{4}=10$. We use
$T=2,N=250$ for DU-EM method. The number of iterations for an optimization
process is set to 5000. The tolerance is set to $\theta=0.1$. In the
optimization process, we used the Adam optimizer with a learning rate of 0.01.
Let $\bm{L}^{*}$ be the Laplacian matrix obtained by an optimization
process. The matrix $\bm{L}^{*}$ is a feasible solution satisfying all the
constraints (67)–(70). For example, we have
$\|\mbox{diag}(\bm{L}^{*})-\bm{d}\|_{2}=0.0894<0.1$. Figure 9 presents the
absolute values of non-diagonal elements in $\bm{L}_{K}$ and $\bm{L}^{*}$.
By definition, the absolute value of a non-zero non-diagonal element of
$\bm{L}_{K}$ is one (left panel). On the other hand, we can observe that the
non-diagonal elements of $\bm{L}^{*}$ take absolute values in the range $0$ to
$1.5$ (right panel).
Figure 9: Absolute values of non-diagonal elements in $\bm{L}_{K}$ and
$\bm{L}^{*}$: (left panel) Laplacian matrix ${\bm{L}}_{K}$ of the Karate
graph, (right panel) The Laplacian matrix $\bm{L}^{*}$ obtained by an
optimization process.
We present the MSE values of the optimization result $\bm{L}^{*}$ and the
unweighted Laplacian matrix $\bm{L}_{K}$ in Fig.10. These values are evaluated
by the MSE formula (64). It can be seen that the optimized Laplacian
$\bm{L}^{*}$ provides smaller MSE values. In this case, an appropriate
assignment of the weights $\mu_{ij}$ improves the noise immunity of the
system. The inverse
eigenvalue sums of the Laplacian matrices $\bm{L}_{K}$ and $\bm{L}^{*}$ are
13.83 and 13.41, respectively. In this case, the optimization process of
Algorithm 1 can successfully provide a feasible Laplacian matrix with smaller
inverse eigenvalue sum. As shown in (65), the inverse eigenvalue sum
determines the behavior of ${\sf MSE}(t)$.
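For reference, the inverse eigenvalue sum can be computed as follows, assuming a connected graph so that only the smallest Laplacian eigenvalue is zero and is excluded from the sum.

```julia
# Sum of inverse nonzero eigenvalues of a (symmetric) Laplacian matrix.
inv_eigen_sum(L) = sum(1 ./ eigvals(Symmetric(Matrix(L)))[2:end])
```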
Figure 10: Karate graph: MSE values of $\bm{L}^{*}$
($\bm{d}=\mbox{diag}(\bm{L}_{K})$) and the unweighted Laplacian matrix
$\bm{L}_{K}$.
### VI-D House graph (Optimization problem B)
The house graph (Fig.6(c)) is a small irregular graph with 5 nodes defined by
the adjacency matrix:
$\displaystyle\bm{A}=\begin{pmatrix}0&1&1&0&0\\\ 1&0&0&1&0\\\ 1&0&0&1&1\\\
0&1&1&0&1\\\ 0&0&1&1&0\\\ \end{pmatrix}.$ (90)
We thus have the unweighted Laplacian $\bm{L}_{H}$ of the house graph as
$\displaystyle\bm{L}_{H}=\begin{pmatrix}2&-1&-1&0&0\\\ -1&2&0&-1&0\\\
-1&0&3&-1&-1\\\ 0&-1&-1&3&-1\\\ 0&0&-1&-1&2\\\ \end{pmatrix},$ (91)
where the diagonal sum of $\bm{L}_{H}$ is 12.
We performed two optimizations, for $D=12$ and $D=24$. The parameter setting
is almost the same as the one used in the previous subsection; the only
difference is that we use $\rho_{3}=0.1$ as the diagonal-sum penalty constant.
As results of the optimization processes, we obtained the two Laplacian
matrices $\bm{L}^{*}_{12}$ $(D=12)$ and $\bm{L}^{*}_{24}$ $(D=24)$:
$\displaystyle\bm{L}^{*}_{12}=\begin{pmatrix}2.29&-1.05&-1.23&0&0\\\
-1.05&2.29&0&-1.24&0\\\ -1.23&0&2.70&-0.44&-1.03\\\
0&-1.24&-0.44&2.71&-1.03\\\ 0&0&-1.03&-1.03&2.06\\\ \end{pmatrix}$ (92)
$\displaystyle\bm{L}^{*}_{24}=\begin{pmatrix}4.80&-2.70&-2.09&0&0\\\
-2.70&4.79&0&-2.08&0\\\ -2.09&0&4.81&-0.37&-2.35\\\
0&-2.08&-0.37&4.85&-2.40\\\ 0&0&-2.35&-2.40&4.74\\\ \end{pmatrix}$ (93)
The diagonal sums of $\bm{L}^{*}_{12}$ and $\bm{L}^{*}_{24}$ are 12.04 and
23.99, respectively. Comparing $\bm{L}^{*}_{12}$ with $\bm{L}_{H}$, the
diagonal elements of $\bm{L}^{*}_{12}$ are flatter:
$\displaystyle\mbox{diag}(\bm{L}^{*}_{12})$
$\displaystyle=(2.29,2.29,2.70,2.71,2.06)^{T},$ (94)
$\displaystyle\mbox{diag}(\bm{L}_{H})$ $\displaystyle=(2,2,3,3,2)^{T}.$ (95)
The MSE values of the optimization result $\bm{L}^{*}_{12},\bm{L}^{*}_{24}$
and the unweighted Laplacian matrix $\bm{L}_{H}$ are shown in Fig.11. We can
observe that $\bm{L}_{12}^{*}$ achieves slightly smaller MSE values compared
with the unweighted Laplacian matrix $\bm{L}_{H}$. The Laplacian matrix
$\bm{L}^{*}_{24}$ provides much smaller MSE values than those of $\bm{L}_{H}$.
The sums of inverse eigenvalues are $1.64,1.59,0.82$ for $\bm{L}_{H}$,
$\bm{L}_{12}^{*}$, and $\bm{L}_{24}^{*}$, respectively.
Figure 11: House graph: MSE values of $\bm{L}^{*}_{12},\bm{L}^{*}_{24}$ and
the unweighted Laplacian matrix $\bm{L}_{H}$.
### VI-E Barabási-Albert (BA) random graphs (Optimization problem B)
As an example of random scale-free networks, we here consider Barabási-Albert
random graphs [27], which are generated by a preferential attachment
mechanism. The number of edges connecting a new node to existing nodes is set
to 5.
In this experiment, we generated an instance of a Barabási-Albert random graph
with 50 nodes. The unweighted Laplacian of the instance is denoted by
$\bm{L}_{B}$. The sum of the diagonal elements of $\bm{L}_{B}$ is $450$. The
parameter setting for optimization is the same as the one used in the previous
subsection, except that $D=450$. The output of the optimization algorithm is
referred to as $\bm{L}^{*}$.
Figure 12 presents the MSE values of the original unweighted Laplacian
$\bm{L}_{B}$ and the optimization output $\bm{L}^{*}$. We can observe that the
optimized MSE values are substantially smaller than those of the unweighted
Laplacian $\bm{L}_{B}$. The sums of inverse eigenvalues for $\bm{L}^{*}$ and
$\bm{L}_{B}$ are $6.44$ and $7.16$, respectively.
Figure 13 illustrates the diagonal elements of $\bm{L}^{*}$ and $\bm{L}_{B}$.
It can be observed that the distribution of the values of $\bm{L}^{*}$ is
almost flat, although the values of $\bm{L}_{B}$ vary from 5 to 21. This
observation is consistent with the tendency observed for the house graph in
the previous subsection.
Figure 12: Barabási-Albert random graph: MSE values of $\bm{L}^{*}$ and the
unweighted Laplacian matrix $\bm{L}_{B}$. Figure 13: Comparison of diagonal
elements of $\bm{L}^{*}$ and $\bm{L}_{B}$.
## VII Conclusion
In this paper, we have formulated a noisy average consensus system through an
SDE. This formulation allows for an analytical study of the stochastic
dynamics of the system. We derived a formula for the evolution of the
covariance under the EM method. Through the weak convergence property, we
established Theorem 1 and derived an MSE formula that provides the MSE at time $t$.
Analysis of the MSE formula reveals that the sum of inverse eigenvalues for
the Laplacian matrix is the most significant factor impacting the MSE
dynamics. To optimize the edge weights, a deep unfolding-based technique is
presented. The quality of the solution has been validated by numerical
experiments.
It is important to note that the theoretical understanding gained in this
study will also provide valuable perspective on consensus-based distributed
algorithms in noisy environments. In addition, the methodology for
optimization proposed in this paper is versatile and can be adapted for
various algorithms operating on graphs. The exploration of potential
applications will be an open area for further studies.
## Acknowledgement
This study was supported by JSPS Grant-in-Aid for Scientific Research (A)
Grant Number 22H00514. The authors thank Prof. Masaki Ogura for letting us
know the related work [11] on discrete-time average consensus systems.
## References
* [1] T. Wadayama and A. Nakai-Kasai, “Continuous-time noisy average consensus system as Gaussian multiple access channel,” IEEE International Symposium on Information Theory, (ISIT) 2022.
* [2] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Trans. Automat. Contr., vol. 49, no. 9, pp. 1520–1533, Sept. 2004.
* [3] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
* [4] W. Ren and R. W. Beard, “Consensus seeking in multi-agent systems under dynamically changing interaction topologies,” IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 655–661, 2005.
* [5] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems and Control Letters, vol. 53, pp. 65–78, 2004.
* [6] W. Ren, “Consensus strategies for cooperative control of vehicle formations,” IET Control Theory and Applications, vol.2, pp. 505–512, 2007.
* [7] P. E. Kloeden and E. Platen, “Numerical solution of stochastic differential equations,” Springer-Verlag, 1991.
* [8] B. Oksendal, “Stochastic differential equations: an introduction with applications,” Springer, 2010.
* [9] C. Godsil and G. F. Royle, “Algebraic graph theory,” Springer, 2001.
* [10] F. Chung, “Spectral graph theory,” American Mathematical Society, 1997.
* [11] A. Jadbabaie and A. Olshevsky, “On performance of consensus protocols subject to noise: Role of hitting times and network structure,” IEEE 55th Conference on Decision and Control (CDC), pp. 179-184, 2016.
* [12] R. Rajagopal and M. J. Wainwright, “Network-based consensus averaging with general noisy channels,” IEEE Trans. Signal Process., vol. 59, no. 1, pp. 373–385, Jan. 2011.
* [13] M. Kishida, M. Ogura, Y. Yoshida, and T. Wadayama, “Deep learning-based average consensus,” IEEE Access, vol. 8, pp. 142404 - 142412, 2020.
* [14] B. Aazhang, B. P. Paris and G. C. Orsak, “Neural networks for multiuser detection in code-division multiple-access communications,” IEEE Trans. Comm., vol. 40, no. 7, pp. 1212-1222, Jul. 1992.
* [15] E. Nachmani, Y. Be'ery, and D. Burshtein, “Learning to decode linear codes using deep learning,” 2016 54th Annual Allerton Conf. Comm., Control, and Computing, 2016, pp. 341-346.
* [16] T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” IEEE Trans. Cog. Comm. Net., vol. 3, no. 4, pp. 563-575, Dec. 2017.
* [17] Y. A. LeCun, L. Bottou, G. B. Orr, and K. R. Müller, “Efficient backprop,” in Neural networks: Tricks of the trade, G. B. Orr and K. R. Müller, Eds. Springer-Verlag, London, UK, 1998, pp. 9-50.
* [18] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533-536, Oct. 1986.
* [19] M. Borgerding and P. Schniter, “Onsager-corrected deep learning for sparse linear inverse problems,” 2016 IEEE Global Conf. Signal and Inf. Process. (GlobalSIP), Washington, DC, Dec. 2016, pp. 227-231.
* [20] M. Borgerding, P. Schniter, and S. Rangan, “AMP-inspired deep networks for sparse linear inverse problems,” IEEE Trans. Signal Process., vol. 65, no. 16, pp. 4293-4308, Aug. 2017.
* [21] K. Gregor, and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. 27th Int. Conf. Machine Learning, pp. 399-406, 2010.
* [22] A. Balatsoukas-Stimming and C. Studer, “Deep Unfolding for Communications Systems: A Survey and Some New Directions,” 2019 IEEE International Workshop on Signal Processing Systems (SiPS), pp. 266-271, 2019.
* [23] D. Ito, S. Takabe, and T. Wadayama, “Trainable ISTA for sparse signal recovery,” IEEE Transactions on Signal Processing, vol. 67, no. 12, pp. 3113-3125, 2019.
* [24] C. Rackauckas, Y. Ma, J. Martensen, C. Warner, K. Zubov, R. Supekar, D. Skinner, A. Ramadhan, and A. Edelman, “Universal differential equations for scientific machine learning,” arXiv:2001.04385, 2020.
* [25] M. Innes, “Flux: Elegant machine learning with Julia,” Journal of Open Source Software, 2018.
* [26] J. Bezanson, S. Karpinski, V. B. Shah, and A. Edelman, “Julia: A fast dynamic language for technical computing,” arXiv preprint arXiv:1209.5145, 2012.
* [27] R. Albert and A.-L. Barabási, “Statistical mechanics of complex networks,” Rev. Mod. Phys., vol. 74, pp. 47–97, 2002.
# Discontinuous phase transition from ferromagnetic to oscillating states in a
nonequilibrium mean-field spin model
Laura Guislain Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
Eric Bertin Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
###### Abstract
We study a nonequilibrium ferromagnetic mean-field spin model exhibiting a
phase with spontaneous temporal oscillations of the magnetization, on top of
the usual paramagnetic and ferromagnetic phases. This behavior is obtained by
introducing dynamic field variables coupled to the spins through non-
reciprocal couplings. We determine a nonequilibrium generalization of the
Landau free energy in terms of the large deviation function of the
magnetization and of an appropriately defined smoothed stochastic time
derivative of the magnetization. While the transition between paramagnetic and
oscillating phase is continuous, the transition between ferromagnetic and
oscillating phases is found to be discontinuous, with a coexistence of both
phases, one being stable and the other one metastable. Depending on parameter
values, the ferromagnetic points may either be inside or outside the limit
cycle, leading to different transition scenarios. The stability of these
steady states is determined from the large deviation function. We also show
that in the coexistence region, the entropy production has a pronounced
maximum as a function of system size.
## I Introduction
A number of systems driven far from equilibrium are known to exhibit
spontaneous collective oscillations. This is the case for instance for coupled
oscillators, like the Kuramoto model with distributed frequencies [1, 2], or
in models of identical coupled noisy oscillators [3, 4]. Interestingly,
spontaneous oscillations have also been reported in systems composed of a
large number of coupled units which individually do not oscillate in the
absence of interaction. Standard examples include different types of chemical
oscillators [5], and recent experimental and theoretical studies have also
reported spontaneous oscillations in populations of biological cells [6, 7],
assemblies of active particles with non-reciprocal interactions [8, 9],
biochemical clocks [10, 11, 12], droplets in a fluid binary mixture [13],
models of population dynamics [14, 15], socio-economic models [16, 17] or
nonequilibrium spin systems [18, 19, 20].
At the deterministic level of description, valid in the infinite system size
limit, the spontaneous emergence of oscillations can be described as a Hopf
bifurcation [21] in the framework of dynamical system theory. However, many
situations of experimental relevance involve mesoscopic systems for which
fluctuations cannot be neglected, as in the case of biochemical clocks for
instance [22]. An important consequence of the presence of fluctuations is
that the coherence time of oscillations becomes finite [23, 24, 25, 26, 27].
At a phenomenological level, the onset of oscillations in a fluctuating system
may be described as a stochastic Hopf bifurcation [28, 29]. Yet, a deeper
understanding would require casting this phenomenon in the general framework
of nonequilibrium phase transitions, by explicitly connecting the collective
level of description to the microscopic dynamics. One may in particular
interpret the onset of oscillations in a large system of interacting degrees
of freedom by extending to far-from-equilibrium systems the thermodynamic
framework of phase transition introduced at equilibrium. Along this line, the
entropy production density has been shown to play the role of a generalized
thermodynamic potential, with a discontinuous derivative at the onset of
spontaneous oscillations [30, 31, 32, 14, 33, 34, 11, 35, 36, 37]. Another
approach to phase transitions consists in introducing order parameters
associated with spontaneously broken symmetries [38]. At a mean-field level of
description, one may then introduce a Landau free energy and determine its
expansion close to the phase transition. While this approach has been
originally designed for equilibrium systems, several recent works have
extended it to different types of nonequilibrium situations in the context of
spin models, to describe relaxation effects [39, 40], or the driving by an
oscillatory field or by multiple heat baths [41]. Based on a large deviation
theory approach, the spontaneous transition from a paramagnetic to an
oscillating state has been recently described in a nonequilibrium Landau
framework [42].
In this paper, we extend the results of Ref. [42] by considering within the
same nonequilibrium Landau framework the transition from a ferromagnetic state
to a state of spontaneous collective oscillations. We study a mean-field spin
model where spins are coupled to dynamic fields in a non-reciprocal way,
resulting in a breaking of detailed balance which allows for the onset of
oscillations in some parameter range. The presence of spontaneous oscillations
may be interpreted as an instance of a non-reciprocal phase transition [43,
44]. Both spin and field variables are also subjected to ferromagnetic
couplings, with different values. The phase transition is characterized by
determining a large deviation function of the magnetization and of a
stochastic variable playing the role of a smoothed time derivative of the
magnetization. The transition from the ferromagnetic state to the oscillating
state is found to be discontinuous, with a coexistence of the two phases in
the transition region. The large deviation function allows us to determine
which phase is stable or metastable. We also characterize finite size effects
in terms of entropy production.
The paper is organized as follows. The model is defined in Sec. II, the method
is presented in Sec. III and the main results of Ref. [42] on the continuous
phase transition from paramagnetic to oscillating states are summarized and
extended in Sec. IV. Then, Sec. V characterizes a first scenario of
discontinuous phase transition from ferromagnetic to oscillating states,
whereby the limit cycle surrounds the ferromagnetic points. A second scenario,
in which ferromagnetic points stand outside the limit cycle, is studied in
Sec. VI. Conclusions and perspectives are summarized in Sec. VII.
## II Model description
### II.1 Definition of the model
We consider a generalization of the kinetic mean-field Ising model with
ferromagnetic interactions introduced in [42], and sharing some similarities
with related models having two spin populations [45, 18] or subjected to a
feedback control [19, 46]. The model involves $2N$ microscopic variables: $N$
spins $s_{i}=\pm 1$ and $N$ fields $h_{i}=\pm 1$. We define the magnetization
$m$ and average field $h$ as
$m=\frac{1}{N}\sum_{i=1}^{N}s_{i}\,,\qquad
h=\frac{1}{N}\sum_{i=1}^{N}h_{i}\,.$ (1)
The stochastic dynamics consists in randomly flipping a single spin $s_{i}=\pm
1$ with rate $w_{s}^{\pm}$, or a single field $h_{i}=\pm 1$ with rate
$w_{h}^{\pm}$. In a mean-field spirit, the flipping rates $w_{s}^{\pm}$ and
$w_{h}^{\pm}$ are independent of $i$, and depend only on $m$ and $h$,
$w_{s}^{\pm}(m,h)=\frac{1}{1+e^{\beta\Delta E_{s}^{\pm}(m,h)}},\quad
w_{h}^{\pm}=\frac{1}{1+e^{\beta\Delta E_{h}^{\pm}(m,h)}},$ (2)
with $\beta=T^{-1}$ the inverse temperature and $\Delta E_{s,h}^{\pm}(m,h)$
the variation of $E_{s,h}(m,h)$ when flipping a spin $s_{i}=\pm 1$ or a field
$h_{i}=\pm 1$, where
$E_{s}(m,h)=-N\left(\frac{J_{1}}{2}m^{2}+\frac{J_{2}}{2}h^{2}+mh\right),$ (3)
$E_{h}(m,h)=E_{s}(m,h)+\mu Nhm\,.$ (4)
When $\mu=0$, $E_{h}=E_{s}$ and the transition rates satisfy detailed balance
with respect to the equilibrium probability distribution $P_{\rm eq}\propto
e^{-\beta E_{s}}$. Detailed balance is broken as soon as $\mu\neq 0$, and
$\mu$ can thus be interpreted as a parameter controlling the distance to
equilibrium. For fixed values of the interactions $J_{1}$ and $J_{2}$, the
temperature $T$ and the distance to equilibrium $\mu$ are the two control
parameters of the model.
We denote as $\mathcal{C}=(s_{1},\dots,s_{N},h_{1},\dots,h_{N})$ the
microscopic configuration of the system. Flips of the variables $s_{i}$ and
$h_{i}$ are encoded into formal transition rates
$W(\mathcal{C}^{\prime}|\mathcal{C})$ from a configuration $\mathcal{C}$ to a
configuration $\mathcal{C}^{\prime}$. The probability $P(\mathcal{C},t)$ of a
configuration $\mathcal{C}$ at time $t$ evolves according to the master
equation
$\partial_{t}P(\mathcal{C},t)=\sum_{\mathcal{C}^{\prime}(\neq\mathcal{C})}\big{[}W(\mathcal{C}|\mathcal{C}^{\prime})P(\mathcal{C}^{\prime},t)-W(\mathcal{C}^{\prime}|\mathcal{C})P(\mathcal{C},t)\big{]}.$
(5)
### II.2 Phase diagram in the deterministic limit
We first study the bifurcation diagram of the system obtained in the
deterministic limit $N\to\infty$. We compute the time derivatives
$d_{t}\langle m\rangle$ and $d_{t}\langle h\rangle$ using the master equation
Eq. (5), where $\langle
x\rangle=\sum_{\mathcal{C}}m(\mathcal{C})P(\mathcal{C})$ for any observable
$x$. We assume that the law of large numbers holds in the limit $N\to\infty$
so that $\langle f(m,h)\rangle\to f(m,h)$ for any regular function $f$.
Finally we obtain a set of deterministic equations on $m(t)$ and $h(t)$ (see
Appendix A):
$\frac{dm}{dt}=-m+\tanh[\beta(J_{1}m+h)],$ (6)
$\frac{dh}{dt}=-h+\tanh[\beta(J_{2}h+(1-\mu)m)].$ (7)
We explore regimes where the magnetization $m(t)$ may exhibit oscillations. In
dynamical systems theory, a limit cycle may generally be described in the
plane defined by a variable and its time derivative; thus we introduce a new
variable $\dot{m}=dm/dt$. The set of deterministic equations then becomes
$\displaystyle\frac{dm}{dt}=\dot{m}\,,\qquad\frac{d\dot{m}}{dt}=Y(m,\dot{m})$
(8)
where $Y(m,\dot{m})$ has a lengthy expression, given in Appendix B [see Eq.
(104)]. $Y(m,\dot{m})$ satisfies the symmetry $Y(-m,-\dot{m})=-Y(m,\dot{m})$.
To study the fixed points of Eq. (8) and their stability, we decompose
$Y(m,\dot{m})$ into a $\dot{m}$-independent contribution
$Y(m,0)=-V^{\prime}(m)$ (9)
[see Appendix B, Eq. (105) for its explicit expression] and a
$\dot{m}$-dependent contribution
$\dot{m}g(m,\dot{m})=Y(m,\dot{m})-Y(m,0)\,,$ (10)
which defines the function $g(m,\dot{m})$. From Eq. (8), the fixed points
$(m,\dot{m})=(m_{0},0)$ satisfy $Y(m_{0},0)=0$, and thus $V^{\prime}(m_{0})=0$
according to Eq. (9). One finds in particular that $(m,\dot{m})=(0,0)$ is
always a fixed point of the system, because $V^{\prime}(0)=0$ by symmetry.
Linearizing the dynamics around a fixed point $(m_{0},0)$, $m=m_{0}+\delta m$,
$\dot{m}=\delta\dot{m}$, one has from Eq. (8)
$\frac{d}{dt}\begin{pmatrix}\delta m\\\
\delta\dot{m}\end{pmatrix}=\mathbf{M}\begin{pmatrix}\delta m\\\
\delta\dot{m}\end{pmatrix}$ (11)
with
$\mathbf{M}=\begin{pmatrix}0&1\\\
-V^{\prime\prime}(m_{0})&g(m_{0},0)\end{pmatrix}.$ (12)
The linear stability of the fixed point $(m_{0},0)$ is determined by the sign
of the eigenvalues of the matrix $\mathbf{M}$,
$\lambda_{\pm}=\frac{1}{2}g(m_{0},0)\left(1\pm\sqrt{1-\frac{4V^{\prime\prime}(m_{0})}{g(m_{0},0)^{2}}}\right).$
(13)
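Numerically, the stability of a fixed point can be checked directly from Eq. (13); in the sketch below, `Vpp` stands for $V^{\prime\prime}(m_{0})$ and `g0` for $g(m_{0},0)$.

```julia
# A sketch of the stability eigenvalues of Eq. (13).
function stability_eigs(Vpp, g0)
    s = sqrt(complex(1 - 4Vpp / g0^2))
    lam_p = (g0 / 2) * (1 + s)
    lam_m = (g0 / 2) * (1 - s)
    return lam_p, lam_m   # stable iff both real parts are negative
end
```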
The fixed point $(m_{0},0)$ is stable if both $\lambda_{+}$ and $\lambda_{-}$
are negative (or have a negative real part), implying that
$V^{\prime\prime}(m_{0})>0$ and $g(m_{0},0)<0$. We see in particular from Eq.
(13) that the fixed point $(m_{0},0)$ becomes unstable when $g(m_{0},0)$ is
positive. We define the critical temperature $T_{c}=(J_{1}+J_{2})/2$ and the
dimensionless deviation from $T_{c}$,
$\varepsilon=\frac{T_{c}-T}{T_{c}}\,.$ (14)
Using expression (104) of $Y(m,\dot{m})$, we get for $m_{0}=0$ and small
$\varepsilon$ that $g(0,0)=a_{0}\varepsilon$, with $a_{0}=2T/T_{c}$. We also
have $V^{\prime\prime}(0)=(\mu-\mu_{l}(T))/T^{2}$, where we define
$\mu_{l}(T)$ as
$\mu_{l}(T)=1-(J_{1}-T)(J_{2}-T).$ (15)
Hence, the fixed point $(m,\dot{m})=(0,0)$ is linearly stable for $T>T_{c}$
[$\varepsilon<0$, implying $g(0,0)<0$] provided that $\mu>\mu_{l}(T)$
[implying $V^{\prime\prime}(0)>0$], and unstable otherwise. We define
$\mu_{c}=\mu_{l}(T_{c})$.
Two examples of stability diagrams, obtained from the numerical evaluation of
the fixed points and their stability [given by the sign of the eigenvalues of
Eq. (13)], are shown in Fig. 1 for different values of $J_{1}$ and $J_{2}$.
Trajectories and existence of limit cycles are obtained from the numerical
integration of Eqs. (6) and (7). A stable paramagnetic fixed point [denoted as
P in Fig. 1(a,b)] is found at high enough temperature, while this point
becomes unstable at low temperature. For low values of $(T,\mu)$, two
symmetric ferromagnetic stable fixed points (F) are observed. At low $T$ and
high $\mu$, an oscillating behavior (O) is observed. The transition lines
between the three different behaviors meet at a tricritical point ($T_{c}$,
$\mu_{c}$), see Fig. 1(a,b).
Figure 1: Phase diagram of the deterministic dynamics Eq. (8) obtained
numerically for (a) $J_{1}=0.6$, $J_{2}=0.4$ and (b) $J_{1}=J_{2}=0.5$. Three
distinct behaviors are observed: a stable paramagnetic point (P), two stable
ferromagnetic points (F), and an oscillating phase (O). On the thick dashed
and dotted lines, the limit cycle and the ferromagnetic points coexist. On the
blue dotted line, the ferromagnetic points are inside the limit cycle (Type-I
coexistence) and on the red dashed line the ferromagnetic points are outside
the limit cycle (Type-II coexistence). When the two lines meet, the
bifurcation is heteroclinic. The orange line corresponds to $\mu_{l}(T)$ [Eq.
(15)]: solid for $T>T_{c}$, where it represents the limit of stability of
the paramagnetic point, and dotted for $T<T_{c}$ as an indication. (c)-(f)
Examples of trajectories $m(t)$ and phase space ($m(t),\dot{m}(t))$: (c),(d)
Type-I coexistence for $J_{1}=0.6$, $J_{2}=0.4$, $T/T_{c}=0.9$ and
$(\mu-\mu_{c})/\mu_{c}=-3.5\times 10^{-3}$ (blue dot in (a)); (e),(f) Type-II
coexistence for $J_{1}=J_{2}=0.5$, $T/T_{c}=0.9$ and
$(\mu-\mu_{c})/\mu_{c}=8\times 10^{-3}$ (red dot in (b)).
Depending on the value of $\mu$, the bifurcation from the paramagnetic point
to a limit cycle at $T_{c}$ which occurs for $\mu>\mu_{c}$ can either be
continuous (supercritical Hopf bifurcation) or discontinuous (subcritical Hopf
bifurcation) [42]. It is generically continuous when the couplings $J_{1}$ and
$J_{2}$ are positive. The transition from the ferromagnetic points to a limit
cycle is discontinuous [except for the particular values $J_{2}=\pm 2+J_{1}$]
and we observe small regions of the parameter space where the ferromagnetic
points and the limit cycle coexist. In Fig. 1, they are represented by thick
dotted and dashed lines. Depending on the values of the parameters ($T$,
$J_{1}$ and $J_{2}$), the ferromagnetic points can be either inside or outside
the limit cycle, which leads to a topological classification of the transition
into two different types. In the following, we call a discontinuous transition
of Type I the case where the ferromagnetic points are inside the limit cycle,
and a discontinuous transition of Type II the case where the ferromagnetic points
are outside the limit cycle. A discontinuous transition of Type I is typically
found close to $T_{c}$ for $J_{1}>J_{2}$ (under additional assumptions that
are specified below), as illustrated for $J_{1}=0.6$ and $J_{2}=0.4$ by the
dotted blue line in Fig. 1(a). An example of trajectory $m(t)$ and phase space
plot $(m(t),\dot{m}(t))$ is represented in Fig. 1(c),(d). A discontinuous
transition of Type II is found for $J_{1}\leq J_{2}$ (under additional
assumptions that are specified below) and $T<T_{c}$, see Fig. 1(b) for
$J_{1}=J_{2}=0.5$. The corresponding trajectory $m(t)$ and its phase space
representation $(m(t),\dot{m}(t))$ are plotted in Fig. 1(e),(f). The farther
from $T_{c}$, the closer the ferromagnetic points and the limit cycle are.
Note that in the case $J_{1}=0.6$ and $J_{2}=0.4$, one observes that for lower
temperatures, the ferromagnetic points sit outside the limit cycle [dashed red
line in Fig. 1(a)], similarly to the behavior displayed in Fig. 1(e), (f). At
the point where the dotted blue line meets the dashed red line, a limit cycle
joining the two ferromagnetic points and with an infinite period appears when
the ferromagnetic points loose stability, corresponding to a heteroclinic
bifurcation.
Note also that for $J_{2}=\pm 2+J_{1}$, the transition is neither of Type I
nor Type II, but a continuous transition from the ferromagnetic points to the limit
cycle corresponding to a heteroclinic bifurcation. We do not study this
particular case in this paper, but a comment on the specificity of this case
is made in Sec. VI.5.
### II.3 Close to the tricritical point
Figure 2: Sign of $v_{1}(T_{c},\mu_{c})$ from Eq. (18) in the plane ($J_{1}$,
$J_{2}$). In regions with $v_{1}>0$, the discontinuous transition between
ferromagnetic and oscillating phases in the $(T,\mu)$ phase diagram is of Type
I: a limit cycle coexists with ferromagnetic points located inside the cycle
(see Fig. 1(a) and Sec. V). In the case $v_{1}=0$ with $J_{1}=J_{2}$ (red
line), the discontinuous transition between ferromagnetic and oscillating
phases in the $(T,\mu)$ phase diagram is of Type II: a limit cycle coexists
with ferromagnetic points located outside the cycle (see Fig. 1(b) and Sec.
VI). The case $v_{1}=0$ with $J_{1}\neq J_{2}$ is not discussed in details in
this work.
In the following, we focus on the transition close to $T_{c}$, i.e., for small
$\varepsilon$ in order to use a perturbative theory. We consider that $m$ is
small such that only the first orders of the series expansion of $V(m)$ are
necessary. One finds at order $m^{4}$ for $V(m)$ and at order $m^{2}$ for
$g(m,0)$ that
$V(m)=\frac{\mu-\mu_{l}(T)}{2T^{2}}m^{2}+\frac{v_{1}(T,\mu)}{4}m^{4}+V_{0},$ (16)
$g(m,0)=a_{0}\varepsilon-a_{1}m^{2},$ (17)
where $V_{0}$ is at this stage an arbitrary constant, and where
$v_{1}(T,\mu)$, $a_{0}$ and $a_{1}$ are given in Appendix B. In particular one
has $a_{0},a_{1}>0$ and
$v_{1}(T_{c},\mu_{c})=\frac{(J_{1}-J_{2})[4-(J_{1}-J_{2})^{2}]}{12(J_{1}+J_{2})}\,.$
(18)
The sign of $v_{1}(T_{c},\mu_{c})$, which plays a key role in the behavior of
the model, thus depends on the relative values of $J_{1}$ and $J_{2}$ (see
Fig. 2).
When $v_{1}>0$, ferromagnetic points exist for $\mu<\mu_{l}(T)$, and are given
by
$m_{0}^{2}=\frac{\mu_{l}(T)-\mu}{T^{2}v_{1}}\,,$ (19)
i.e., nonzero solutions of the equation $V^{\prime}(m_{0})=0$. According to
Eq. (13), and given that $V^{\prime\prime}(m_{0})>0$, ferromagnetic points are
linearly stable when $g(m_{0},0)<0$, which corresponds to
$\mu<\mu_{F}(T)\equiv\mu_{l}(T)-\frac{\varepsilon a_{0}T^{2}v_{1}}{a_{1}}\,.$
(20)
Numerically, one observes that before ferromagnetic points become linearly
unstable upon increasing $\mu$, they coexist over a tiny parameter range with
a limit cycle that surrounds them. An example is given for $J_{1}=0.6$ and
$J_{2}=0.4$ in Fig. 3, which displays the ferromagnetic points and the
extension of the limit cycle as a function of $\mu$ at fixed $\varepsilon$
[Fig. 3(a)], as well as the corresponding trajectories $m(t)$ [Fig. 3(b)] and
the coexisting trajectories in the phase space $(m,\dot{m})$ [Fig. 3(c)].
Unlike for smaller values of $T$, we observe that close to the tricritical
point, the ferromagnetic points and the limit cycle are well separated. We
also observe in this regime that the two symmetries $m\,\mapsto\,-m$ and
$\dot{m}\,\mapsto\,-\dot{m}$ are separately valid to a good approximation,
while they were previously valid only under the simultaneous transformation
$(m,\dot{m})\,\mapsto\,(-m,-\dot{m})$.
When $v_{1}\leq 0$, higher order terms in the expansion of $V(m)$ are
necessary to obtain the ferromagnetic points and their stability. Numerically,
we observe two scenarios: the first one is a heteroclinic bifurcation, the
ferromagnetic points lose stability and a limit cycle with an infinite period
arises. This is the case in particular for $J_{2}=\pm 2+J_{1}$, for which
$v_{1}(T_{c},\mu_{c})=0$. The second scenario is that before disappearing, the
ferromagnetic points coexist with a small elliptic limit cycle. This is the
case for $J_{1}=J_{2}=0.5$, see Fig. 4 where an example of the evolution with
$\mu$ (at fixed $\varepsilon$) of the ferromagnetic fixed points and of the
limit cycle are displayed, together with examples of trajectories $m(t)$. When
$v_{1}(T_{c},\mu_{c})<0$, one finds numerically that the ferromagnetic phase
and the paramagnetic phase coexist for $T\gtrsim T_{c},\mu\gtrsim\mu_{l}(T)$,
so that $(T_{c},\mu_{c})$ is no longer a tricritical point, whereas for
$v_{1}(T_{c},\mu_{c})=0$ which is verified for $J_{1}=J_{2}$ and for
$J_{2}=\pm 2+J_{1}$, one finds that the three transition lines meet at
$(T_{c},\mu_{c})$ (see Fig. 1 for $J_{1}=J_{2}=0.5$ for an example of
bifurcation diagram). In the following, we focus on the case where
$v_{1}(T_{c},\mu_{c})\geq 0$ where the three phases meet at the critical point
$(T_{c},\mu_{c})$, in order to perform a perturbative analysis close to the
tricritical point.
Figure 3: Type-I coexistence close to the tricritical point, corresponding to
ferromagnetic points inside the limit cycle, for
$\varepsilon=(T_{c}-T)/T_{c}=10^{-4}$ and $J_{1}=0.6$, $J_{2}=0.4$. (a) Values
of $m$ for the ferromagnetic points (orange lines) and for the limit cycle
(blue shaded area) along the transition. At $\mu=\mu_{F}(T)$ [Eq. (20)]
ferromagnetic points become linearly unstable. (b) Examples of trajectories
for $(\mu-\mu_{F})/\mu_{F}=-5\times 10^{-7}$. (c) Corresponding phase space
representation in the plane ($m$, $\dot{m}$). Figure 4: Type-II coexistence
close to the tricritical point, corresponding to ferromagnetic points outside
the limit cycle, for $\varepsilon=10^{-3}$ and $J_{1}=J_{2}=0.5$. (a) Values
of $m$ for the ferromagnetic points (orange lines) and for the limit cycle
(blue shaded area) along the transition. At $\mu=\mu_{F}(T)$ [here,
$(\mu_{F}-\mu_{c})/\mu_{c}\approx 1.5\times 10^{-5}$] the ferromagnetic points
become unstable. (b) Examples of trajectories for
$(\mu-\mu_{F})/\mu_{F}=-5\times 10^{-6}$. (c) Corresponding phase space
representation in the plane ($m$, $\dot{m}$).
## III Generalized Landau theory
Figure 5: Trajectories $m(t)$: the dashed blue and orange lines correspond to
deterministic trajectories and the green line to a trajectory for a finite
system size $N=5000$ obtained from stochastic numerical simulations.
Parameters: $J_{1}=J_{2}=0.5$, $\varepsilon=5\times 10^{-2}$ and
$(\mu-\mu_{c})/\mu_{c}=3.88\times 10^{-3}$. Jumps between noisy oscillatory
states and ferromagnetic states are observed.
The deterministic limit provides knowledge on the different stable fixed
points or limit cycles that are present in the system. However, it lacks
information on the behavior of the system at finite size $N$, such as the
macroscopic fluctuations around the stable points or cycles.
Most importantly, in the case of coexistence of solutions in the limit
$N\to\infty$, the deterministic approach fails to predict which solution is
the most stable one at finite but large size $N$. In addition, for moderate
size $N$, one observes jumps between noisy oscillatory states and
ferromagnetic states, as illustrated in Fig. 5. A statistical description of
such a situation where the ferromagnetic points and the limit cycle are both
linearly stable in the deterministic limit would thus be useful. We briefly
recall in this section the nonequilibrium generalization of the Landau theory
developed in [42], which allows for a description of phase transitions to
oscillating states.
### III.1 Stochastic time derivative $\dot{m}$
We first introduce a new variable $\dot{m}$ that plays the role of a smoothed
time derivative of the magnetization for finite-size systems. Following [42],
we formally define the stochastic derivative $\dot{m}(\mathcal{C})$ of the
magnetization $m(\mathcal{C})$ as
$\dot{m}(\mathcal{C})=\sum_{\mathcal{C}^{\prime}(\neq\mathcal{C})}\left[m\left(\mathcal{C}^{\prime}\right)-m\left(\mathcal{C}\right)\right]W(\mathcal{C}^{\prime}|\mathcal{C}),$
(21)
such that on average $d\langle m\rangle/dt=\langle\dot{m}\rangle$. Eq. (21)
thus associates with each microscopic configuration $\mathcal{C}$ an
observable $\dot{m}(\mathcal{C})$, which is a smoothed time derivative of $m$
because it is averaged over all possible transitions
$\mathcal{C}\to\mathcal{C}^{\prime}$, for a fixed configuration $\mathcal{C}$.
The advantage of this definition is that fluctuations of $\dot{m}$ are
typically on the same scale as that of $m$, which is a key property for the
large deviation approach described below. Taking instead the time derivative
of $m\big{(}\mathcal{C}(t)\big{)}$ would lead to diverging, white-noise-like
fluctuations which are not appropriate to develop a generalized Landau theory.
Under the mean-field assumption, the formal transition rate
$W(\mathcal{C}^{\prime}|\mathcal{C})$ can be reexpressed in terms of the
flipping rates $w_{s}^{\pm}(m,h)$ to flip a spin $s_{i}=\pm 1$ defined in Eq.
(2). When flipping a spin $s_{i}=\pm 1$, the magnetization change is given by
$m(\mathcal{C}^{\prime})-m(\mathcal{C})=\mp 2/N$. Since there are $N(1\pm
m)/2$ possibilities to choose a spin $s_{i}=\pm 1$, one finds:
$\dot{m}=(1-m)w_{s}^{-}(m,h)-(1+m)w_{s}^{+}(m,h).$ (22)
Using the expression (2) of the flipping rates $w_{s}^{\pm}(m,h)$, Eq. (22)
becomes:
$\dot{m}=-m+\tanh[\beta(J_{1}m+h)].$ (23)
Note that the functional relation $\dot{m}(m,h)$ turns out to be identical to
the functional relation (6) obtained in the deterministic limit $N\to\infty$.
However, Eq. (23) is valid for any finite $N$, and the variables $m$ and $h$
are here stochastic variables.
### III.2 Large deviation function
At finite size $N$, the dynamics of the system is determined by the master
equation (5). Instead of considering $P(\mathcal{C})$ which involves $2^{2N}$
configurations $\mathcal{C}=\\{s_{1},\dots,s_{N},h_{1},\dots,h_{N}\\}$, we
consider $P_{N}(m,\dot{m})$ the joint stationary probability density of the
global observables $m$ and $\dot{m}$. The variations of $m$ and $\dot{m}$
during a transition scale as $1/N$. We introduce $\mathbf{d}_{k}$ such that
$(\Delta m,\Delta\dot{m})=\pm\mathbf{d}_{k}/N$ with $k=1$ when flipping a spin
$s_{i}=\pm 1$ and $k=2$ when flipping a field $h_{i}=\pm 1$, so that we have:
$\textbf{d}_{1}=\left(-2,\,2-2\beta J_{1}+2\beta J_{1}(m+\dot{m})^{2}\right),$ (24)
$\textbf{d}_{2}=-\left(0,\,-2\beta+2\beta(m+\dot{m})^{2}\right).$ (25)
We note $NW_{k}^{\pm}$ the coarse-grained transition rates from a
configuration $(m,h)$ to $(m^{\prime},h^{\prime})$, with $m^{\prime}=m\mp 2/N$
and $h^{\prime}=h$ if $k=1$, and $m^{\prime}=m$ and $h^{\prime}=h\mp 2/N$ if
$k=2$. One has:
$W_{1}^{\pm}=\frac{(1\pm m)/2}{1+\exp[\pm 2\beta(J_{1}m+h)]},\qquad W_{2}^{\pm}=\frac{(1\pm h)/2}{1+\exp[\pm 2\beta(J_{2}h+(1-\mu)m)]}.$ (26)
The coarse-grained master equation governing the evolution of
$P_{N}(m,\dot{m})$ reads [42]:
$\partial_{t}P_{N}(m,\dot{m})=N\sum_{k,\sigma}\bigg[-W_{k}^{\sigma}(m,\dot{m})P_{N}(m,\dot{m})+W_{k}^{\sigma}\left((m,\dot{m})-\frac{\sigma\textbf{d}_{k}}{N}\right)P_{N}\left((m,\dot{m})-\frac{\sigma\textbf{d}_{k}}{N}\right)\bigg],$ (27)
where $k=1,2$ and $\sigma=\pm$. For large $N$, the stationary joint
distribution $P_{N}(m,\dot{m})$ takes a large deviation form [47]
$P_{N}(m,\dot{m})\underset{N\to\infty}{\sim}\exp[-N\phi(m,\dot{m})]\,,$ (28)
a property justified by the theory of Markov jump processes with vanishing
jump size [48]. Beside providing information on the fluctuations at finite
system size, the large deviation function (or rate function) $\phi(m,\dot{m})$
determines the macroscopic phase of the system. Linearly stable solutions of
the deterministic equations correspond to local minima of the large deviation
function. When two or more linearly stable solutions are present in the
deterministic equations, the global minimum of $\phi$ gives the macroscopic
phase of the system (i.e., the most stable one).
Injecting the large deviation form (28) into Eq. (27) gives, to order
$(\nabla\phi)^{2}$,
$\dot{m}\partial_{m}\phi+Y(m,\dot{m})\partial_{\dot{m}}\phi+D_{11}(\partial_{m}\phi)^{2}+2D_{12}\partial_{m}\phi\partial_{\dot{m}}\phi+D_{22}(\partial_{\dot{m}}\phi)^{2}=0\,,$ (29)
with
$(\dot{m},Y(m,\dot{m}))=\sum_{k,\sigma}-\sigma\mathbf{d}_{k}W_{k}^{\sigma}\,.$
(30)
The function $Y(m,\dot{m})$ is found to be the same function as the one
introduced in the deterministic limit in Eq. (8). We introduce
$\mathbf{D}=\\{D_{ij}(m,\dot{m})\\}$ as
$\mathbf{D}\equiv\sum_{k,\sigma}\mathbf{d}_{k}\mathbf{d}_{k}^{T}W_{k}^{\sigma},$
(31)
whose explicit expression is given in Appendix B.
We use the decomposition introduced in Eq. (10),
$Y(m,\dot{m})=-V^{\prime}(m)+\dot{m}g(m,\dot{m})\,$ (32)
and we focus, in this paper, on obtaining the large deviation function in
regions where a fixed point ($m_{0},0$) looses stability in the deterministic
limit, i.e., where $g(m_{0},0)$ changes sign, in order to use a perturbative
framework in terms of the small parameter
$u_{0}\equiv g(m_{0},0).$ (33)
We assume $\nabla\phi=O(u_{0})$ since quadratic terms in $\nabla\phi$ have to
balance the contribution in $u_{0}\dot{m}\partial_{\dot{m}}\phi$. At order
$u_{0}$, Eq. (29) reduces to
$\dot{m}\partial_{m}\phi-V^{\prime}(m)\partial_{\dot{m}}\phi=0.$ (34)
The general solution of Eq. (34) reads [42]
$\phi(m,\dot{m})=f\big{(}H(m,\dot{m})\big{)}+f_{0}$ (35)
where the function $H(m,\dot{m})$ takes a form similar to a Hamiltonian,
$H(m,\dot{m})=\frac{\dot{m}^{2}}{2}+V(m)\,.$ (36)
The minimum value of $V(m)$ is set to $V=0$, and $f$ is at this stage an
arbitrary function, satisfying for convenience $f(0)=0$. The constant $f_{0}$
in Eq. (35) ensures that the minimal value of $\phi(m,\dot{m})$ is zero.
Contributions of order $u_{0}^{2}$ to Eq. (29) yield a
condition determining the derivative $f^{\prime}(H)$ (see [42] for a detailed
derivation)
$f^{\prime}(H)=-\frac{\int_{m_{1}}^{m_{2}}dm\,\sqrt{2[H-V(m)]}\,g\left(m,\sqrt{2[H-V(m)]}\right)}{\int_{m_{1}}^{m_{2}}dm\,\bigg{[}D_{11}\frac{V^{\prime}(m)^{2}}{\sqrt{2[H-V(m)]}}+2D_{12}V^{\prime}(m)+D_{22}\sqrt{2[H-V(m)]}\bigg{]}}\,,$
(37)
where $m_{1}$ and $m_{2}$ are such that $V(m_{1})=V(m_{2})=H$ and $V(m)\leq H$
for $m_{1}\leq m\leq m_{2}$.
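Equation (37) can be evaluated numerically once $V$, $V^{\prime}$, $g$ and the diffusion coefficients are known; a midpoint-rule sketch is given below (the $1/\sqrt{\cdot}$ endpoint singularity of the $D_{11}$ term is integrable, and midpoints avoid evaluating it at $m_{1}$ and $m_{2}$).

```julia
# A sketch of the quadrature for f'(H) in Eq. (37); V, Vp = V', g, D11,
# D12, D22 are assumed to be supplied as functions, and (m1, m2) are the
# turning points with V(m1) = V(m2) = H.
function fprime_of_H(H; V, Vp, g, D11, D12, D22, m1, m2, n=4000)
    dm = (m2 - m1) / n
    num = 0.0
    den = 0.0
    for k in 1:n
        m = m1 + (k - 0.5) * dm                 # midpoint of the k-th bin
        p = sqrt(max(2 * (H - V(m)), eps()))    # p plays the role of mdot
        num += p * g(m, p) * dm
        den += (D11(m, p) * Vp(m)^2 / p + 2 * D12(m, p) * Vp(m) + D22(m, p) * p) * dm
    end
    return -num / den
end
```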
The form Eq. (35) of the large deviation function $\phi(m,\dot{m})$ can be
interpreted as giving a statistical weight to deterministic trajectories
determined by the Hamiltonian dynamics
$\frac{dm}{dt}=\frac{\partial
H}{\partial\dot{m}},\qquad\frac{d\dot{m}}{dt}=-\frac{\partial H}{\partial m},$
(38)
valid at order $\varepsilon$, where the Hamiltonian $H(m,\dot{m})$ is defined
in Eq. (36). Denoting $m_{0}$ a minimum of $V(m)$ (we assume here for
simplicity that $V(m)$ has a single minimum or two symmetric minima) the case
$H=V(m_{0})$ corresponds to a fixed point ($m_{0},0$) of the deterministic
dynamics, whereas values $H>V(m_{0})$ correspond to closed orbits, and thus to
oscillations. The most probable value of $H$, and thus the macroscopically
observed behavior, is determined by the global minimum $H^{*}$ of $f(H)$. Note
that the method used here follows similar lines as the determination of
nonequilibrium potentials in dissipative dynamical systems [49, 50, 51].
When the three phases meet at the critical point $(T_{c},\mu_{c})$, the
ferromagnetic points, denoted $\pm m_{0}$, have a small amplitude
($m_{0}\leq\varepsilon$), so that $g(m_{0},0)\sim\varepsilon$ with
$\varepsilon=(T_{c}-T)/T_{c}$. Therefore, close to the critical point, i.e.,
for small $\varepsilon$, the framework described above can be used to obtain
the large deviation function $\phi(m,\dot{m})$ and thus the probability
density $P_{N}(m,\dot{m})$ for large $N$.
## IV Continuous transition from a paramagnetic phase to an oscillating phase
In this section, we briefly recall and extend results presented in [42] for
the continuous transition observed when $J_{1}$ and $J_{2}$ are positive, at
$T=T_{c}$, for $\mu>\mu_{c}$, from a high-$T$ paramagnetic phase to a low-$T$
oscillating phase [vertical green line in Fig. 1(a), (b)], corresponding to a
Hopf bifurcation at the deterministic limit. Above $T_{c}$ ($\varepsilon<0$),
the system is in a paramagnetic phase, whereas below $T_{c}$ ($\varepsilon>0$)
it is in an oscillating phase.
### IV.1 Transition from a paramagnetic phase to an elliptic limit cycle
#### IV.1.1 Large deviation function
Figure 6: Large deviation function around the paramagnetic-oscillating
transition. (a) Examples of $f(H)$ for $\varepsilon=10^{-3}$ (blue curve) and
for $\varepsilon=-10^{-3}$ (orange curve). The inset represents the shape of
$V(m)$. (b) Colormap of $\phi(m,\dot{m})$ in the space $(m,\dot{m})$ for
$\varepsilon=10^{-3}$. Scalings with $\varepsilon$: $m\sim\varepsilon^{1/2}$,
$\dot{m}\sim\varepsilon^{1/2}$, $H\sim\varepsilon$ and
$f=\phi\sim\varepsilon^{2}$. Parameters: $J_{1}=0.6$, $J_{2}=0.4$,
$(\mu-\mu_{c})/\mu_{c}=1$.
Since $m$ and $\dot{m}$ are small in the paramagnetic phase, we use a
power-series expansion of $V(m)$ and $g(m,\dot{m})$ in $m$ and $\dot{m}$. At
the lowest order required to describe the transition, one has
$V(m)=\frac{\mu-\mu_{l}(T)}{2T^{2}}m^{2},$ (39)
$g(m,\dot{m})=a_{0}\varepsilon-a_{1}m^{2}-a_{2}m\dot{m}-a_{3}\dot{m}^{2},$ (40)
where $\varepsilon=(T_{c}-T)/T_{c}$; $\mu_{l}(T)$ and $a_{0}$ were introduced
previously in Eq. (17) and their expressions are recalled in Appendix B along
with the expressions of $a_{1}$ and $a_{2}$, which are all positive
quantities. Compared to Eq. (16), we only keep the quadratic term in $V(m)$,
which is positive around $T_{c}$ for $\mu>\mu_{c}=\mu_{l}(T_{c})$. Higher
order terms are necessary only when the quadratic term is negative, in order
to describe ferromagnetic order, or when the quadratic term is small, which is
discussed in the next section. An illustration of the quadratic potential
$V(m)$ is plotted in the inset of Fig. 6(a).
In the deterministic limit, the oscillating phase appears for $\varepsilon>0$
when the paramagnetic point $(0,0)$ loses stability. The small perturbative
parameter $u_{0}$ introduced in Sec. III [see Eq. (33)] is proportional to
$\varepsilon$, since here $m_{0}=0$ and $u_{0}=g(0,0)=a_{0}\varepsilon$ from
Eq. (40). Hence the formalism introduced in the previous section to obtain the
large deviation function $\phi(m,\dot{m})=f(H)$ can be used to describe the
phase transition for small $\varepsilon$, when the perturbative approach is
valid.
We find from Eq. (37), after integration,
$f(H)=-\varepsilon aH+bH^{2},$ (41)
with
$a=\frac{T^{2}a_{0}}{T^{2}D_{22}+D_{11}[\mu-\mu_{l}(T)]}$ (42)
and
$b=\frac{a_{1}T^{4}+3T^{2}(\mu-\mu_{l})a_{3}}{4(\mu-\mu_{l})[T^{2}D_{22}+D_{11}(\mu-\mu_{l})]}.$
(43)
When $\varepsilon<0$, $f(H)$ is minimal for $H=0$, which corresponds to the
paramagnetic phase. When $\varepsilon>0$, $f(H)$ has a minimum in
$H^{*}=\varepsilon a/2b$, see Fig. 6(a) for examples of $f(H)$ around the
paramagnetic-oscillating transition. The equation $H(m,\dot{m})=H^{*}$
describes an ellipse in the phase space $(m,\dot{m})$ as depicted in Fig.
6(b). The period $\tau$ of a limit cycle described by
$V(m)+\dot{m}^{2}/2=H^{*}$ is obtained as
$\tau=2\int_{-m^{*}}^{m^{*}}\frac{dm}{\dot{m}}=\sqrt{2}\int_{-m^{*}}^{m^{*}}\frac{dm}{\sqrt{H^{*}-V(m)}}\,,$
(44)
where $m^{*}$ is such that $H^{*}=V(m^{*})$. Using expression (39) of $V(m)$,
we find
$\tau=\frac{2\sqrt{2}\pi T}{\sqrt{\mu-\mu_{l}(T)}}.$ (45)
#### IV.1.2 Order parameters
The paramagnetic to oscillating phase transition is characterized by two order
parameters, $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$, where
$\langle x\rangle=\int dmd\dot{m}P_{N}(m,\dot{m})\,x(m,\dot{m})$, and where
the observable $x$ stands for $m^{2}$ or $\dot{m}^{2}$ [or, more generally,
any even function $x(m,\dot{m})$]. Using Eqs. (28) and (35),
$P_{N}(m,\dot{m})$ can be approximated by its properly normalized large
deviation form,
$P_{N}(m,\dot{m})\approx\frac{\exp\big[-Nf\big(H(m,\dot{m})\big)\big]}{\int dm^{\prime}d\dot{m}^{\prime}\,\exp\big[-Nf\big(H(m^{\prime},\dot{m}^{\prime})\big)\big]}.$
(46)
Then from Eq. (36), $\int d\dot{m}$ can be replaced by $\int
dH/\sqrt{2[H-V(m)]}$, so that $\langle x\rangle$ becomes, making the
integration intervals explicit:
$\langle x\rangle=\frac{\int_{-1}^{1}dm\int_{V(m)}^{\infty}\frac{dH}{\sqrt{H-V(m)}}\,x\,e^{-Nf(H)}}{\int_{-1}^{1}dm\int_{V(m)}^{\infty}\frac{dH}{\sqrt{H-V(m)}}\,e^{-Nf(H)}}\,.$
(47)
From Eqs. (39) and (41) one can then compute the values of $\langle
m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ using Eq. (47). In the limit of
large system sizes, in the paramagnetic phase ($\varepsilon<0$),
$\langle\dot{m}^{2}\rangle=1/(|\varepsilon|aN)$, which vanishes in the limit
$N\to\infty$. By contrast, in the oscillating phase ($\varepsilon>0$),
$\langle\dot{m}^{2}\rangle=H^{*}\sim\varepsilon$ is constant in the limit
$N\to\infty$. At the transition, $\varepsilon=0$,
$\langle\dot{m}^{2}\rangle=1/\sqrt{\pi bN}$. For smaller system sizes
$N\ll|\varepsilon|^{-2}$, $f(H)$ can be approximated as $f(H)=bH^{2}$ so that
one finds
$\langle\dot{m}^{2}\rangle\sim N^{-1/2},$ (48)
which is independent of $\varepsilon$.
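These regimes can be checked directly by a numerical evaluation of the order parameters. The following is a minimal sketch (not part of the original study): it samples the large deviation form (46) on a grid in the $(m,\dot{m})$ plane, with the quadratic potential (39) and the quadratic $f(H)$ of Eq. (41); the coefficients `a`, `b` and the curvature `kappa` are illustrative placeholders, not the values following from Eqs. (42) and (43).

```python
import numpy as np

# Sketch: order parameters from the large deviation measure, Eq. (46),
# with V(m) quadratic [Eq. (39)] and f(H) = -eps*a*H + b*H^2 [Eq. (41)].
# a, b, kappa and N are illustrative placeholders.
a, b, kappa = 1.0, 1.0, 1.0
N = 1e5

def order_parameters(eps, L=0.5, n=2001):
    m = np.linspace(-L, L, n)
    md = np.linspace(-L, L, n)
    M, Md = np.meshgrid(m, md)
    H = 0.5 * Md**2 + 0.5 * kappa * M**2      # Eq. (36)
    f = -eps * a * H + b * H**2               # Eq. (41)
    w = np.exp(-N * (f - f.min()))            # unnormalized P_N, Eq. (46)
    return (M**2 * w).sum() / w.sum(), (Md**2 * w).sum() / w.sum()

for eps in (-1e-2, 0.0, 1e-2):
    m2, md2 = order_parameters(eps)
    # expect md2 close to 1/(|eps|aN), 1/sqrt(pi*b*N), eps*a/(2b) respectively
    print(eps, m2, md2)
```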
### IV.2 Non-elliptic limit cycle near the tricritical point
#### IV.2.1 Large deviation function
Figure 7: Large deviation function close to the tricritical point. (a) Example
of $f(H)$ from Eq. (52); the inset represents the shape of $V(m)$. (b)
Colormap of $\phi(m,\dot{m})$ in the space $(m,\dot{m})$. Note the different
scalings $m\sim\varepsilon^{1/2}$ and $\dot{m}\sim\varepsilon$. Parameters:
$J_{1}=0.6$, $J_{2}=0.4$, $\mu=\mu_{c}$ and $\varepsilon=10^{-3}$.
The three phases (paramagnetic, oscillating and ferromagnetic) meet at the
tricritical point $T=T_{c}$ and $\mu=\mu_{c}$. We now look at how the limit
cycle changes when approaching the tricritical point at $\mu=\mu_{c}$ for
$T<T_{c}$. We have
$\mu_{c}-\mu_{l}(T)=T_{c}^{2}\varepsilon^{2}\,,$ (49)
such that at $\mu=\mu_{c}$, the quadratic term $\propto(\mu_{c}-\mu_{l})m^{2}$
in the expression of $V(m)$ scales as $\varepsilon^{2}m^{2}$. Keeping only the
quadratic term in $V(m)$, we would find $m\sim\varepsilon^{1/2}$ by following
the same reasoning as in Sec. IV.1.1. Hence, the quadratic term becomes of
order $\varepsilon^{3}$ whereas the next-order term $\propto m^{4}$ scales as
$\varepsilon^{2}$, so that the assumption of neglecting the $m^{4}$-term in
the expansion of $V(m)$ is inconsistent. To obtain the right behavior at
$\mu=\mu_{c}$, it is thus necessary to include the contribution of $m^{4}$ in
$V(m)$. At $\mu=\mu_{c}$, we have
$V(m)=\frac{\varepsilon^{2}T_{c}^{2}}{2T^{2}}m^{2}+\frac{v_{1}(T,\mu)}{4}m^{4},$
(50)
with $v_{1}(T,\mu)$ given in Appendix B. In the following, we assume that the
scaling $m\sim\varepsilon^{1/2}$ remains valid close to $\mu_{c}$, and we
check below that the assumption is consistent. [Note that the reason why the
scaling $m\sim\varepsilon^{1/2}$ will eventually prove valid is different from
the one mentioned in the previous paragraph, which relies only on the
quadratic term in $V(m)$]. Under this assumption, the term in
$\varepsilon^{2}m^{2}\sim\varepsilon^{3}$ in Eq. (50) is negligible compared
to the term of order $m^{4}$. Hence, to leading order, we can use the simple
form
$V(m)=\frac{v_{1}(T,\mu)}{4}m^{4}.$ (51)
We also consider that $v_{1}(T_{c},\mu_{c})>0$ as discussed in Sec. II.2 (see
Fig. 2 for possible values of $J_{1}$, $J_{2}$) and that $v_{1}(T,\mu_{c})$
remains positive for small $\varepsilon$. An example of $V(m)$ is plotted in
the inset of Fig. 7(a). Using the same expression of $g(m,\dot{m})$ as before,
Eq. (40), we obtain from Eq. (37) after integration,
$f(H)=-\frac{\varepsilon a_{0}}{D_{22}}H+cH^{3/2}$ (52)
with
$c=\frac{8\,\Gamma\left(\frac{3}{4}\right)^{4}a_{1}}{5\pi^{2}D_{22}\sqrt{v_{1}}}\,$
(53)
where $\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}dt$. One finds that $f(H)$ has
a minimum for
$H^{*}=\left(\frac{2\varepsilon a_{0}}{3cD_{22}}\right)^{2}$ (54)
[see Fig. 7(a)] corresponding to a limit cycle in the phase space
$(m,\dot{m})$. However, $H$ is no longer quadratic in $m$, because of the
quartic form (51) of $V(m)$. Hence the limit cycle is no longer elliptic, see
Fig. 7(b) for a colormap of $\phi(m,\dot{m})$ in the phase space
$(m,\dot{m})$. We recall that for $\mu\gg\mu_{c}$, one had
$m\sim\dot{m}\sim\varepsilon^{1/2}$, whereas now one finds distinct scalings
$m\sim\varepsilon^{1/2}$ and $\dot{m}\sim\varepsilon$. These scalings are
obtained by using $H^{*}\sim\varepsilon^{2}$ from Eq. (54), and the expression
(36) of $H(m,\dot{m})$ together with the quartic form (51) of $V(m)$. Finally,
evaluating the oscillation period using Eq. (44), we find
$\tau=\frac{4\sqrt{\pi}\,\Gamma\!\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{4}\right)(v_{1}H^{*})^{1/4}}\,.$
(55)
As $H^{*}\sim\varepsilon^{2}$, the period diverges as $\varepsilon^{-1/2}$
when $\varepsilon\to 0$.
Figure 8: (a) $P_{N}(m,\dot{m})$ obtained from stochastic numerical
simulations. (b) Theoretical $P_{N}(m,\dot{m})$ evaluated by including
leading-order corrections, as given in Eq. (60). Parameters:
$\varepsilon=10^{-2}$, $\mu=\mu_{l}(T)$, $N=10^{7}$.
### IV.3 Comparison with stochastic simulations and need for higher order
corrections
The method used to obtain analytically the large deviation function, developed
in Sec. III, relies on two main assumptions: $N$ is large and $\varepsilon$ is
small. We now compare the analytical results with numerical simulations of the
stochastic spin model. We use the Gillespie algorithm [52] to simulate the
stochastic dynamics with the rates given by Eq. (26) for a time-interval
$\tau$. The initial condition for the simulations are $m=0$ and $h=0$. In this
algorithm, time-steps are of $O(N^{-1})$ such that the number of steps
required to have $t=1$ is of $O(N)$. To observe an non-elliptic limit cycle
close to $T_{c}$, one needs to have $N|f(H^{*})|\gg 1$ which corresponds to
$N\varepsilon^{3}\gg 1$ [see Eqs. (52) and Eq. (54)]. For example, a value
$\varepsilon=10^{-3}$ would require simulations with at least $N\sim 10^{11}$.
To obtain data with converged statistics depicting the transition, we run
simulations for larger $\varepsilon$, where the approximations made in Sec. III
are no longer expected to be quantitatively valid. We now discuss the notable
differences observed in numerical simulations due to larger $\varepsilon$
values and smaller system sizes $N$.
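For reference, the overall structure of such a simulation is sketched below. This is a generic Gillespie loop for a Markov jump process with a finite set of transition channels; the `rates` function is a placeholder standing in for the actual rates of Eq. (26), and the toy birth-death rates used in the example are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(state, rates, jumps, t_max):
    """Generic Gillespie loop: rates(state) returns the rate of each
    transition channel, jumps[k] the state increment of channel k.
    Waiting times are exponential with the total rate, so each step
    advances time by O(1/N) when the total rate is O(N)."""
    t, traj = 0.0, [(0.0, state.copy())]
    while t < t_max:
        r = rates(state)
        R = r.sum()
        if R == 0.0:                       # absorbing state
            break
        t += rng.exponential(1.0 / R)      # waiting time
        k = rng.choice(len(r), p=r / R)    # which channel fires
        state = state + jumps[k]
        traj.append((t, state.copy()))
    return traj

# Toy illustration (NOT the rates of Eq. (26)): N spins with n_up of them
# up, flip rates proportional to the number of spins in each orientation.
N = 1000
jumps = [np.array([+1]), np.array([-1])]              # n_up -> n_up +/- 1
rates = lambda s: np.array([N - s[0], s[0]], float)   # up-flips, down-flips
traj = gillespie(np.array([N // 2]), rates, jumps, t_max=5.0)
m = [s[0] * 2.0 / N - 1.0 for _, s in traj]           # magnetization m(t)
```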
In Fig. 8, we plot $P_{N}(m,\dot{m})$ obtained from stochastic simulations for
$(T_{c}-T)/T_{c}=\varepsilon=10^{-2}$ and for $\mu=\mu_{l}(T)$. We take
$\mu=\mu_{l}(T)$ (instead of $\mu=\mu_{c}$) so that the term in front of
$m^{2}$ in $V(m)$ is exactly zero. Significant discrepancies are observed
between the simulation results and the theoretical predictions of the
perturbative approach described in Sec. III. From leading order calculations
in $\varepsilon$, we obtain that $\phi(m,\dot{m})=f(H)$ with
$H=V(m)+\dot{m}^{2}/2$ which has, in particular, two consequences. First, $m$
and $\dot{m}$ are decoupled, and the symmetries $m\,\mapsto\,-m$ and
$\dot{m}\,\mapsto\,-\dot{m}$ hold independently. Second, the probability
$P_{N}(m,\dot{m})$ is uniform along the limit cycle, corresponding in a
dynamical view (and in the deterministic limit) to a constant ‘speed’ along
the limit cycle. Neither of these characteristics is observed in the
stochastic simulations, see Fig. 8(a).
These differences come from higher-order corrections, in $\varepsilon$ and in
$N^{-1}$, to the probability density $P_{N}(m,\dot{m})$. Similar corrections
were studied in [49] in the context of noisy dynamical systems. We now give an
example of the first corrections for $\mu=\mu_{l}(T)$. The detailed steps of
the derivation are given in [53]. To perform a systematic
$\varepsilon$-expansion, we introduce rescaled variables $\tilde{H}=H/H^{*}$
and $x=m/m_{0}$ with $H^{*}$ given in Eq. (54) and
$m_{0}=(4H^{*}/v_{1})^{1/4}$ consistently with Eq. (19). At lowest order in
$\varepsilon$, from Eq. (52), one finds
$\phi(m,\dot{m})\sim\varepsilon^{3}\tilde{\phi}_{1}(x,\tilde{H})$, where
$\tilde{\phi}_{1}$ is a rescaled function independent of $\varepsilon$. We
introduce $C(m,\dot{m})$ the correction of $O(N^{0})$ to $\ln
P_{N}(m,\dot{m})$,
$P_{N}(m,\dot{m})\propto\exp[-N\phi(m,\dot{m})+C(m,\dot{m})],$ (56)
and we expand $\phi(m,\dot{m})$ and $C(m,\dot{m})$ in power series of
$\varepsilon^{1/2}$ [53],
$\phi(m,\dot{m})=\varepsilon^{3}\sum_{i=1}^{\infty}\varepsilon^{(i-1)/2}\tilde{\phi}_{i}(x,\tilde{H})$
(57)
and
$C(m,\dot{m})=\sum_{i=0}^{\infty}\varepsilon^{i/2}\tilde{C}_{i}(x,\tilde{H}).$
(58)
Injecting these expressions into the master equation on $P_{N}(m,\dot{m})$,
one finds equations on $\tilde{\phi}_{i}$ and $\tilde{C}_{i}$ at each order
$i$ [53]. At the lowest order, we find
$\phi_{1}(m,\dot{m})=\varepsilon^{3}\tilde{\phi}_{1}(x,\tilde{H})=f(H)$ [Eq.
(52)] and
$C_{0}(m,\dot{m})=\varepsilon^{1/2}\tilde{C}_{0}(x,\tilde{H})=\varepsilon^{1/2}c_{0}$,
with $c_{0}$ a constant fixed by the normalization of $P_{N}(m,\dot{m})$. For
$\tilde{\phi}_{2}(x,\tilde{H})$ and $\tilde{C}_{1}(x,\tilde{H})$ one finds:
$\tilde{\phi}_{2}(x,\tilde{H})=A\left(1-\sqrt{\tilde{H}}\right)\tilde{H}x\left[a_{0}\,{}_{2}F_{1}\left(-\frac{1}{2},\frac{1}{4},\frac{5}{4},x^{4}\right)-\frac{2}{3}\frac{D_{22}}{\alpha}x^{2}\,{}_{2}F_{1}\left(-\frac{1}{2},\frac{3}{4},\frac{7}{4},x^{4}\right)\right]\,,$
(59)
$\tilde{C}_{1}(x,\tilde{H})=B\tilde{H}^{1/4}\left[-6a_{0}x\sqrt{1-x^{4}}+30a_{0}\sqrt{v_{1}}\,x\,{}_{2}F_{1}\left(-\frac{1}{2},\frac{1}{4},\frac{5}{4},x^{4}\right)-8\frac{D_{22}}{\alpha}x^{3}\,{}_{2}F_{1}\left(\frac{1}{2},\frac{5}{4},\frac{7}{4},x^{4}\right)\right]\,,$
where $\alpha$, $A$ and $B$ depend on $v_{1}$, $a_{0}$ and $a_{1}$ and are
given in Appendix B, and ${}_{2}F_{1}(a,b,c,x)$ denotes the hypergeometric
function. The correction $\tilde{\phi}_{2}$ changes the orientation and the
shape of the limit cycle, whereas the correction $\tilde{C}_{1}$ breaks the
uniformity of the probability $P_{N}(m,\dot{m})$ along the limit cycle. We
plot in Fig. 8 the following expression of $P_{N}(m,\dot{m})$ that includes
leading corrections,
$P_{N}(m,\dot{m})=\exp[-N(\phi_{1}+\phi_{2})+C_{0}+C_{1}]$ (60)
with the definitions
$\phi_{i}(m,\dot{m})=\varepsilon^{3+(i-1)/2}\tilde{\phi}_{i}(x,\tilde{H})$ and
$C_{i}(m,\dot{m})=\varepsilon^{i/2}\tilde{C}_{i}(x,\tilde{H})$. The main
features of the probability density obtained from the simulations are captured
by these leading corrections.
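As a practical remark, the corrected form (60) is straightforward to evaluate numerically, since the hypergeometric functions entering Eq. (59) are standard. The fragment below is a sketch valid for $|x|<1$; the constants $A$, $B$, $\alpha$, as well as $a_{0}$, $D_{22}$ and $v_{1}$, are the Appendix B quantities and are replaced here by placeholder values.

```python
import numpy as np
from scipy.special import hyp2f1

# Placeholder constants; the actual A, B, alpha, a0, D22, v1 are given in
# Appendix B of the paper.
A, B, alpha, a0, D22, v1 = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0

def phi2_tilde(x, Ht):
    """Correction \\tilde{phi}_2(x, Htilde) of Eq. (59), for |x| < 1."""
    return (A * (1.0 - np.sqrt(Ht)) * Ht * x
            * (a0 * hyp2f1(-0.5, 0.25, 1.25, x**4)
               - (2.0/3.0) * (D22/alpha) * x**2 * hyp2f1(-0.5, 0.75, 1.75, x**4)))

def C1_tilde(x, Ht):
    """Correction \\tilde{C}_1(x, Htilde) of Eq. (59), for |x| < 1."""
    return (B * Ht**0.25
            * (-6.0 * a0 * x * np.sqrt(1.0 - x**4)
               + 30.0 * a0 * np.sqrt(v1) * x * hyp2f1(-0.5, 0.25, 1.25, x**4)
               - 8.0 * (D22/alpha) * x**3 * hyp2f1(0.5, 1.25, 1.75, x**4)))
```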
## V Type-I discontinuous transition between ferromagnetic and oscillating
phases
In this section, we investigate the properties, near the tricritical point
$(T_{c},\mu_{c})$, of the ferromagnetic to oscillating phase transition where
a limit cycle appears around the ferromagnetic points, referred to as Type-I
coexistence. This case corresponds to $v_{1}(T_{c},\mu_{c})>0$, see Fig. 2 as well
as the phase diagram of Fig. 1(a) and the trajectories displayed in Figs. 1(c)
and 1(e).
### V.1 Large deviation function and phase diagram
#### V.1.1 Validity of the perturbative approach
We start from the generic expansion of the potential $V(m)$ given in Eq. (16),
recalled here for clarity,
$V(m)=\frac{\mu-\mu_{l}(T)}{2T^{2}}m^{2}+\frac{v_{1}(T,\mu)}{4}m^{4}+V_{0},$
(61)
where $V_{0}$ is chosen such that $V(m)\geq 0$ and its minimal value is zero.
An example of $V(m)$ for $\mu<\mu_{l}(T)$ is given in Fig. 9. We recall that
$g(m,\dot{m})$ is given in Eq. (40). As discussed in Sec. II.3, ferromagnetic
points $m_{0}^{2}=(\mu_{l}(T)-\mu)/T^{2}v_{1}$ exist for $\mu<\mu_{l}(T)$, and
are locally stable for $\mu\leq\mu_{F}$ with
$\mu_{F}(T)=\mu_{l}(T)-\varepsilon a_{0}T^{2}v_{1}/a_{1}$. In this section, we
focus on the region where the ferromagnetic points lose stability
($\mu\approx\mu_{F}$), thus:
$\mu_{l}(T)-\mu\sim\varepsilon\,$ (62)
and one has $m_{0}^{2}\sim\varepsilon$ and thus
$u_{0}=g(m_{0},0)\sim\varepsilon$. For small $\varepsilon=(T_{c}-T)/T_{c}$,
the main assumption made in Sec. III, i.e., that $u_{0}$ is small, is
verified, and we can use the method developed in this section to obtain the
large deviation function and study the phase transition from a ferromagnetic
phase to an oscillating phase.
#### V.1.2 Typical $f(H)$ and phase diagram
Figure 9: Large deviation function for Type-I discontinuous transition between
ferromagnetic and oscillating phases. (a) $f(H)$ determined numerically from
Eq. (37) (full line); the dashed line corresponds to local approximations
described in Sec. V.1.3. Inset: shape of $V(m)$ with two minima. (b) Colormap
of $\phi(m,\dot{m})$ in the plane $(m,\dot{m})$. Note the different scalings
$m\sim\varepsilon^{1/2}$ and $\dot{m}\sim\varepsilon$. Parameters:
$J_{1}=0.6$, $J_{2}=0.4$, $(\mu-\mu_{c})/\mu_{c}=-3.18\times 10^{-5}$ and
$\varepsilon=10^{-3}$.
In general, except for particular cases such as the one described in the
previous section, one cannot obtain explicit analytical expressions of
$f^{\prime}(H)$ from Eq. (37), and one needs to evaluate the integrals in
Eq. (37) numerically to determine $f(H)$. An example of $f(H)$, for
$\mu\leq\mu_{l}(T)$, numerically obtained from Eq. (37), is plotted in Fig.
9(a). We observe that $f(H)$ has two local minima: one in $H=0$ corresponding
to the ferromagnetic points $m=m_{0}$ and $\dot{m}=0$ [since $V(m_{0})=0$],
and one for $H=H^{*}>0$ corresponding to a limit cycle in the phase space
$(m,\dot{m})$. We numerically obtain (not shown) that
$H^{*}\sim\varepsilon^{2}.$ (63)
An example of colormap of $\phi(m,\dot{m})=f\big{(}H(m,\dot{m})\big{)}$ in the
phase space $(m,\dot{m})$ is displayed in Fig. 9(b). Here, the most stable
phase is the oscillating phase as $f(0)>f(H^{*})$. Contrary to Sec. IV, no
analytical expression of $H^{*}$ is available in the present case.
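A minimal quadrature sketch of this numerical procedure is given below. It evaluates $f^{\prime}(H)$ from Eq. (37) for orbits above the barrier ($H>V(0)$, the limit-cycle branch) and integrates it to obtain $f(H)$, assuming the quartic potential (61) and the expansion (40) of $g$ with illustrative coefficients; the diffusion coefficients are frozen to $D_{11}=D_{12}=0$ and $D_{22}=1$, which removes the integrable endpoint singularity of the denominator (for $D_{11}\neq 0$ a regularizing change of variable would be needed).

```python
import numpy as np
from scipy.integrate import quad, cumulative_trapezoid
from scipy.optimize import brentq

# Sketch: f'(H) from Eq. (37) and f(H) by quadrature, for the quartic
# potential of Eq. (61) and g of Eq. (40). All coefficient values below
# are illustrative placeholders, not the Appendix B expressions.
eps, delta, T, v1 = 1e-3, 1e-5, 0.5, 0.1     # delta = mu_l(T) - mu > 0
a0, a1, a2, a3 = 1.0, 1.0, 0.5, 0.5
D22 = 1.0                                    # D11 = D12 = 0 for simplicity

V0 = delta**2 / (4 * T**4 * v1)              # shift so that min V = 0
V = lambda m: -delta/(2*T**2)*m**2 + 0.25*v1*m**4 + V0     # Eq. (61)
g = lambda m, md: a0*eps - a1*m**2 - a2*m*md - a3*md**2    # Eq. (40)
m0 = np.sqrt(delta / (T**2 * v1))            # ferromagnetic points

def f_prime(H):
    """Eq. (37) for an orbit above the barrier V(0), turning points +-m2."""
    m2 = brentq(lambda m: V(m) - H, m0, 10*m0 + 10.0)
    md = lambda m: np.sqrt(max(2.0*(H - V(m)), 0.0))
    num = quad(lambda m: md(m)*g(m, md(m)), -m2, m2)[0]
    den = quad(lambda m: D22*md(m), -m2, m2)[0]
    return -num/den

Hs = np.linspace(1.001*V(0.0), 1e-6, 300)
fp = np.array([f_prime(H) for H in Hs])
f = cumulative_trapezoid(fp, Hs, initial=0.0)    # f(H) up to a constant
H_star = Hs[np.argmin(f)]                        # minimum on this branch
```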
The transition from the ferromagnetic phase to the oscillating phase takes
place when $f(0)=f(H^{*})$; we denote by $\mu_{t}(T)$ the value of $\mu$ at the
transition. The value of $H$ jumps from $H=0$ to the nonzero value $H^{*}$ at
the transition, meaning that the latter is discontinuous. We obtain
numerically that $(\mu_{c}-\mu_{t})/\mu_{c}\sim\varepsilon$ where
$\varepsilon=(T_{c}-T)/T_{c}$ [see Fig. 10 with
$(\mu_{c}-\mu_{F})/\mu_{c}\sim\varepsilon$].
From the numerical determination of $f(H)$, one obtains a phase diagram in the
space $(\varepsilon,\mu)$ with the determination of the different phases:
ferromagnetic phase (F), oscillating phase (O) or the phase where both
coexist, with one being more stable than the other. A close-up of the phase
diagram near the tricritical point for $J_{1}=0.6$ and $J_{2}=0.4$ is plotted
in Fig. 10, where we represent $(\mu-\mu_{F})/\mu_{F}$, with $\mu_{F}$ given
in Eq. (20), in order to visualize the different phases. The limits of
existence of the ferromagnetic and oscillating states can also be obtained in
the deterministic limit, but for the determination of $\mu_{t}$ (which
characterizes the most stable phase) it is necessary to consider finite system
sizes using the large deviation approach.
Figure 10: Close-up of the phase diagram of Fig. 1(a) near the tricritical
point, for $J_{1}=0.6$ and $J_{2}=0.4$, in the reduced parameters
$\varepsilon=(T_{c}-T)/T_{c}$ and $(\mu-\mu_{F}(T))/\mu_{F}(T)$, where
$\mu_{F}$ is defined in Eq. (20) and
$(\mu_{c}-\mu_{F}(T))/\mu_{c}\sim\varepsilon$. O corresponds to the
oscillating phase, F to the ferromagnetic phase. In the hatched area, both
phases are locally stable. The most stable phase, given by the global minima
of $f(H)$, is either the ferromagnetic phase [(O)+F] or the oscillating phase
[O+(F)], separated by the transition line $\mu_{t}$. $\mu_{l}(T)$ corresponds
to the limit of existence of the ferromagnetic points in the deterministic
limit, $\mu_{F}$ to their linear stability limit, and $\mu_{O}$ to the limit
of existence of the oscillating state at deterministic level.
#### V.1.3 Local analytical expressions of $f(H)$
In most cases, keeping only the first orders of the series expansions of
$V(m)$ and $g(m,\dot{m})$ is not enough to obtain an analytical expression of
$f^{\prime}(H)$. Still, local approximations can be obtained. Near a minimum
$m_{0}$ of $V(m)$, a quadratic expansion of $V$ gives
$f(H)=f_{F}(H)\equiv-\frac{g(m_{0},0)}{D_{22}+2v_{1}m_{0}^{2}D_{11}}H,$ (64)
where $f_{F}(H)$ stands for the local approximate expression of $f(H)$ in the
ferromagnetic state. We recover that the point $(m,\dot{m})=(m_{0},0)$ is
stable when $g(m_{0},0)<0$. This expression of $f(H)$ is valid for small $H$
only. For $H\gg H^{*}$, we recover that $f(H)\approx cH^{3/2}$, with $c$
defined in Eq. (53). For intermediate values of $H$ ($H\sim H^{*}$), we do not
have an analytical expression of $f(H)$. However the regime $H\gg H^{*}$ is
similar to the one obtained for $\mu=\mu_{c}$, which suggests that the form of
$f(H)$ obtained for $\mu\approx\mu_{c}$ in Eq. (52) remains approximately
valid up to a redefinition of coefficient values. One can perform a local fit
of the form $f(H)=f_{O}(H)+f(H^{*})$ with
$f_{O}(H)=\tilde{c}\left(H^{3/2}-\frac{3}{2}\sqrt{H^{*}}H+\frac{1}{2}H^{*3/2}\right),$
(65)
where the parameters $\tilde{c}$ and $H^{*}$ are fitted on the numerically
evaluated $f(H)$ to get a local approximation of $f(H)$ near $H^{*}$ (see Fig.
9 for an example of a fit of $f(H)$ close to its minimum). The functional form
(65) provides a reasonable description of the large $H$ behavior of $f(H)$,
and is more accurate than a simple parabolic fit around the minimum $H=H^{*}$.
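The fit itself is routine least squares; a self-contained sketch is shown below, with synthetic data standing in for the numerically evaluated $f(H)$ near its minimum (in practice one would feed in the output of the quadrature sketch above).

```python
import numpy as np
from scipy.optimize import curve_fit

def f_O_model(H, c_tilde, H_star, f_star):
    """Local form of Eq. (65) plus the fitted offset f(H*)."""
    return c_tilde*(H**1.5 - 1.5*np.sqrt(H_star)*H + 0.5*H_star**1.5) + f_star

# Synthetic stand-in for the numerically evaluated f(H) near its minimum;
# magnitudes are illustrative.
rng = np.random.default_rng(1)
H = np.linspace(0.5e-4, 2.0e-4, 60)
f_num = f_O_model(H, 2.0, 1.0e-4, 1.0e-12) + 1e-15*rng.standard_normal(H.size)

p0 = [1.0, H[np.argmin(f_num)], f_num.min()]
bounds = ([0.0, 0.0, -np.inf], [np.inf, np.inf, np.inf])  # keep H* >= 0
(c_tilde, H_star, f_star), _ = curve_fit(f_O_model, H, f_num, p0=p0,
                                         bounds=bounds)
```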
### V.2 Scalings of order parameters with system size at the transition
#### V.2.1 Large-$N$ scaling at $\mu=\mu_{t}$
Using the two local approximations of $f(H)$ given in Eqs. (64) and (65), we
study the behaviors of $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$
in the large-$N$ limit when approaching the critical point $\varepsilon=0$
where the three phases meet. In the ferromagnetic phase ($\mu<\mu_{t}$), using
Eq. (64) and $g(m_{0},0)=a_{0}\varepsilon-a_{1}m_{0}^{2}$, one finds in the
large-$N$ limit,
$\langle m^{2}\rangle=m_{0}^{2}=\frac{\mu_{l}(T)-\mu}{T^{2}v_{1}},$ (66)
$\langle\dot{m}^{2}\rangle=\frac{D_{22}+2v_{1}m_{0}^{2}D_{11}}{(a_{1}m_{0}^{2}-a_{0}\varepsilon)N}.$
(67)
We recover the results of the deterministic limit for $N\to\infty$, $\langle
m^{2}\rangle=m_{0}^{2}\sim\varepsilon$ and $\langle\dot{m}^{2}\rangle=0$. For
large but finite $N$, we obtain that
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{-1}N^{-1}$. In the oscillating
phase ($\mu>\mu_{t}$), $f(H)$ is minimal in $H^{*}$, so that for large enough
$N$ we can replace $e^{-Nf(H)}$ by $\delta(H-H^{*})$ in Eq. (47), yielding
$\langle m^{2}\rangle=\frac{\int_{-m^{*}}^{m^{*}}dm\,\frac{m^{2}}{\sqrt{H^{*}-V(m)}}}{\int_{-m^{*}}^{m^{*}}dm\,\frac{1}{\sqrt{H^{*}-V(m)}}},$
(68)
$\langle\dot{m}^{2}\rangle=\frac{\int_{-m^{*}}^{m^{*}}dm\,\sqrt{2(H^{*}-V(m))}}{\int_{-m^{*}}^{m^{*}}dm\,\frac{1}{\sqrt{2(H^{*}-V(m))}}},$
(69)
where $m^{*}$ is such that $H^{*}=V(m^{*})$. Both $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ reach constant values at large $N$. We obtain
numerically that $\langle m^{2}\rangle\sim\varepsilon$ and
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{2}$, which are the same scalings as
those observed for the non-elliptic limit cycle at $\mu=\mu_{c}$ (see Sec.
IV.2).
#### V.2.2 Moderate-$N$ scaling at $\mu=\mu_{t}$
Figure 11: Order parameters $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ as a function of system size $N$, for
$\varepsilon\in[10^{-3},10^{-4},7\times 10^{-6},6\times 10^{-7},4.6\times
10^{-8},3.6\times 10^{-9}]$ at $\mu=\mu_{t}(\varepsilon)$. Order parameters
are evaluated numerically from Eqs. (37) and (47). (a) $\langle m^{2}\rangle$
vs $N$. (b) $\langle\dot{m}^{2}\rangle$ vs $N$. The red dashed line
corresponds to the scaling predictions for moderate values of $N$ from Eqs.
(70) and (71), with $\langle m^{2}\rangle\sim N^{-1/3}$ and
$\langle\dot{m}^{2}\rangle\sim N^{-2/3}$. (c) $\langle m^{2}\rangle/\varepsilon$
and (d) $\langle\dot{m}^{2}\rangle/\varepsilon^{2}$
vs the rescaled system size $N/\varepsilon^{-3}$, showing data collapse.
Parameters: $J_{1}=0.6$, $J_{2}=0.3$.
Unlike for large values of $N$, for intermediate values of $N$, $\langle
m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ are found not to depend much on
$\varepsilon$ and on $\mu$.
In Figs. 11(a) and 11(b), we plot $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ for different $\varepsilon$ at the transition,
$\mu=\mu_{t}(\varepsilon)$. We observe that $\langle m^{2}\rangle\sim
N^{-1/3}$ and $\langle\dot{m}^{2}\rangle\sim N^{-2/3}$ for moderate $N$
values. These scaling behaviors can be understood as follows. The expression
of the average $\langle x\rangle$ contains an integral over $H$ with the
factor $\exp[-Nf(H)]$, see Eq. (47). For moderate $N$, the integral is
dominated by the $H^{3/2}$-term in $f(H)$. This can be justified by performing
the change of variable $z=NH^{3/2}$ in the integral. One then finds that
higher order powers of $H$ in $f(H)$ are negligible when $N\gg 1$, while
linear contributions in $H$ are also negligible as long as
$\sqrt{H^{*}}N^{1/3}\ll 1$, i.e., $N\ll(H^{*})^{-3/2}$, where $H^{*}$ is small
for small $\varepsilon$ [see Eq. (63)]. The integral is then dominated by the
contribution of the region $z\sim 1$, which corresponds to $H\gg H^{*}$.
In addition, Eq. (47) also contains an integral over $m$. Due to the factor
$[H-V(m)]^{-1/2}$ in the integrals, the values of $m$ that contribute the most
are those where $V(m)\approx v_{1}m^{4}/4$. Similarly to the argument above, this can be
justified by performing the change of variable $z^{\prime}=m/H^{1/4}$ in the
integral with the expression of $V(m)$ given in Eq. (61). One then finds that
higher order powers of $m$ are negligible when $H\ll 1$ and the quadratic
order is negligible when $v_{0}(\mu-\mu_{l})\ll H^{1/4}$ which is verified for
$H\gg H^{*}$ as $\mu-\mu_{l}\sim\varepsilon$ [Eq. (62)] and
$H^{*}\sim\varepsilon^{2}$ [Eq. (63)]. Using these two approximations on
$f(H)$ and $V(m)$, we find:
$\langle
m^{2}\rangle=\frac{4\Gamma\left(\frac{5}{6}\right)\Gamma\left(\frac{3}{4}\right)^{4}}{\pi^{5/2}\sqrt{v_{1}}c^{1/3}}N^{-1/3},$
(70)
and
$\langle\dot{m}^{2}\rangle=\frac{4\Gamma\left(\frac{7}{6}\right)}{3\sqrt{\pi}c^{2/3}}N^{-2/3},$
(71)
where $c$ is given in Eq. (53). We plot these quantities in red in Figs. 11(a)
and 11(b) alongside the numerical values obtained from Eq. (47) using the
numerical evaluation of $f(H)$ from Eq. (37).
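The moderate-$N$ expressions (70) and (71) can also be checked against a direct grid evaluation of the large deviation measure, keeping only $f(H)=cH^{3/2}$ and $V(m)=v_{1}m^{4}/4$, as in the sketch below; the values of $N$, $c$ and $v_{1}$ are illustrative.

```python
import numpy as np
from scipy.special import gamma

N, c, v1 = 1e6, 1.0, 1.0                       # illustrative values
m = np.linspace(-0.8, 0.8, 1601)
md = np.linspace(-0.2, 0.2, 1601)
M, Md = np.meshgrid(m, md)
H = 0.5*Md**2 + 0.25*v1*M**4                   # Eq. (36), quartic V
w = np.exp(-N*c*H**1.5)                        # moderate-N weight, f = c H^{3/2}
m2 = (M**2*w).sum()/w.sum()
md2 = (Md**2*w).sum()/w.sum()

# Predicted prefactors, Eqs. (70) and (71)
m2_th = 4*gamma(5/6)*gamma(3/4)**4/(np.pi**2.5*np.sqrt(v1)*c**(1/3))*N**(-1/3)
md2_th = 4*gamma(7/6)/(3*np.sqrt(np.pi)*c**(2/3))*N**(-2/3)
print(m2, m2_th, md2, md2_th)
```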
We denote by $N^{*}$ the crossover value of $N$ between the moderate- and large-$N$
regimes. The crossover takes place when the value of $\langle m^{2}\rangle\sim
N^{-1/3}$ in the moderate-$N$ approximation is comparable to the one in the
large-$N$ approximation, $\langle m^{2}\rangle\sim\varepsilon$. One thus finds
that $N^{*}$ behaves as
$N^{*}\sim\varepsilon^{-3}.$ (72)
[Note that, according to the integration argument above,
$N^{*}\sim(H^{*})^{-3/2}$, implying $H^{*}\sim\varepsilon^{2}$, consistently
with Eq. (63)]. A similar argument for $\dot{m}$ yields the same scaling for
$N^{*}$: in the moderate-$N$ regime ($N\ll N^{*}$),
$\langle\dot{m}^{2}\rangle\sim N^{-2/3}$, while in the large-$N$ approximation
($N\gg N^{*}$), one has $\langle\dot{m}^{2}\rangle\sim\varepsilon^{2}$ in the
oscillating phase and $\langle\dot{m}^{2}\rangle\sim\varepsilon^{-1}N^{-1}$ in
the ferromagnetic phase, which both give a crossover at
$N^{*}\sim\varepsilon^{-3}$.
Focusing on the oscillating phase, these scaling behaviors of $\langle
m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ can be encompassed into two
scaling functions
$\langle m^{2}\rangle=\varepsilon\,\mathcal{F}_{m}(\varepsilon^{3}N),\quad\langle\dot{m}^{2}\rangle=\varepsilon^{2}\,\mathcal{F}_{\dot{m}}(\varepsilon^{3}N),$
(73)
with asymptotic behaviors $\mathcal{F}_{m}(x)\sim x^{-1/3}$ and
$\mathcal{F}_{\dot{m}}(x)\sim x^{-2/3}$ for $x\to 0$, while both functions go
to constant values for $x\to\infty$. Figs. 11(c) and 11(d) show the data
collapse obtained by plotting the rescaled variables $\langle
m^{2}\rangle/\varepsilon$ and $\langle\dot{m}^{2}\rangle/\varepsilon^{2}$
versus the rescaled system size $N/\varepsilon^{-3}$, for different values of
$\varepsilon$ at $\mu=\mu_{t}(\varepsilon)$.
### V.3 Detailed study of the crossover regime
We reported above two distinct scaling regimes $N\ll N^{*}$ and $N\gg N^{*}$
of the observables $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ as a
function of system size $N$ for $\mu=\mu_{t}(\varepsilon)$, and we identified
the scaling with $\varepsilon$ of the crossover size $N^{*}$. We now
investigate in more detail the behavior of these observables in the crossover
regime $N\sim N^{*}$, focusing on the effect of variations of $\mu$
close to the transition value $\mu_{t}$, for a fixed $\varepsilon$. We find in
particular that in the crossover regime, $\langle m^{2}\rangle$ has a non-
monotonic behavior as a function of $N$, whose details significantly depend on
$\mu$. The behavior of $\langle\dot{m}^{2}\rangle$, while monotonic as a
function of $N$, is found to strongly depend on $\mu$.
#### V.3.1 Influence of $\mu$ on the crossover regime
Figure 12: (a) $\langle m^{2}\rangle$ and (b) $\langle\dot{m}^{2}\rangle$ as a
function of system size $N$, for different values of $\mu$ close to $\mu_{c}$,
corresponding to $(\mu_{c}-\mu)/\mu_{c}\in[4.0,3.6,3.3,3.1,2.5]\times 10^{-5}$
from darker to lighter colors. The transition between ferromagnetic and
oscillating states takes place at $\mu_{t}$ given by
$(\mu_{c}-\mu_{t})/\mu_{c}\approx 3.2\times 10^{-5}$. Parameters: $J_{1}=0.6$,
$J_{2}=0.4$, and $\varepsilon=10^{-3}$. Order parameters are evaluated
numerically in the same way as in Fig. 11.
As mentioned above, the observables $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ are seen to have a very weak dependence on
$\varepsilon$ and $\mu$ in the moderate-$N$ regime. For large $N$, the value
of $\langle\dot{m}^{2}\rangle$ is found to be significantly different in the
oscillating phase $(\mu>\mu_{t})$ where
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{2}$ and in the ferromagnetic phase
$(\mu<\mu_{t})$ where $\langle\dot{m}^{2}\rangle\sim\varepsilon^{-1}N^{-1}$.
In contrast, the value of $\langle m^{2}\rangle$ is similar in both phases,
with $\langle m^{2}\rangle\sim\varepsilon$. In Fig. 12, $\langle m^{2}\rangle$
and $\langle\dot{m}^{2}\rangle$ are plotted as a function of system size $N$
in the crossover regime $N\sim N^{*}$, for different values of $\mu$ across
the transition, keeping $\varepsilon$ fixed. After an initial decay for $N\ll
N^{*}$, we observe that $\langle m^{2}\rangle$ slightly increases before
reaching a constant value, as expected in the limit $N\to\infty$. For some
values of $\mu$, like for $(\mu-\mu_{c})/\mu_{c}=-3.3\times 10^{-5}$, a second
decay is observed before reaching the asymptotic constant value. The behavior
of $\langle\dot{m}^{2}\rangle$ is significantly different from that of
$\langle m^{2}\rangle$ as the large-$N$ limit yields two different behaviors
in the ferromagnetic or in the oscillating phase. We observe that for
$\mu<\mu_{t}$, $\langle\dot{m}^{2}\rangle$ first reaches a plateau for a
significant range of $N$, before steeply decreasing to eventually reach the
large-$N$ scaling $\langle\dot{m}^{2}\rangle\sim N^{-1}$.
#### V.3.2 Interpretation as a finite-size phase coexistence
The observed non-trivial behaviors can be given a simple interpretation in
terms of finite-size phase coexistence and metastability. For a finite-size
system, a metastable state has a finite probability to be visited, and this
probability decreases exponentially with system size. Based on this idea, we
introduce a simple decomposition of average values into contributions of each
phase, and show that such a decomposition is sufficient to account for most of
the observed behaviors.
Figure 13: (a) $\langle m^{2}\rangle$ and (b) $\langle\dot{m}^{2}\rangle$ vs
system size $N$. In black, data from Fig. 12 for
$(\mu_{c}-\mu)/\mu_{c}=3.3\times 10^{-5}$. The dashed lines correspond to
$\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ computed for $f=f_{F}$
[Eq. (64)] (orange) and for $f=f_{O}$ [Eq. (65)] (blue). The plain red lines
correspond to $\langle x(m,H)\rangle_{\mathrm{approx}}$ defined in Eq. (74).
Here, $f(H^{*})=1.1\times 10^{-12}$.
Starting from the expression of an average observable given in Eq. (47), we
split the semi-axis $H\geq 0$ into two regions, separated by the value $H_{0}$
corresponding to the local maximum of $f(H)$. For small $H$, i.e., near the
ferromagnetic points, $f(H)$ is linear [see Eq. (64)]. For $H$ around $H^{*}$,
we write $f(H)=f_{O}(H)+f(H^{*})$ with $f_{O}$ from Eq. (65) where
$\tilde{c}$, $H^{*}$ and $f(H^{*})$ are parameters fitted on the numerically
evaluated $f(H)$. We consider $N$ large enough so that the integration
interval can be extended to the entire real axis due to the rapidly decaying
factor $\exp[-Nf(H)]$. Then, for any quantity $x(m,H)$, we use the approximate
expression of the average value $\langle x(m,H)\rangle$,
$\langle x(m,H)\rangle_{\mathrm{approx}}=\frac{\langle x\rangle_{F}+\langle x\rangle_{O}\,C_{1}\sqrt{N}e^{-Nf(H^{*})}}{1+C_{1}\sqrt{N}e^{-Nf(H^{*})}}$ (74)
with
$C_{1}=\frac{1}{\sqrt{N}}\frac{\int_{-1}^{1}dm\int_{V(m)}^{\infty}dH\frac{1}{\sqrt{H-V(m)}}e^{-Nf_{O}(H)}}{\int_{-1}^{1}dm\int_{V(m)}^{\infty}dH\frac{1}{\sqrt{H-V(m)}}e^{-Nf_{F}(H)}}.$
(75)
In Eq. (74), $\langle x\rangle_{F}$ (resp. $\langle x\rangle_{O}$) corresponds
to the ‘pure-state’ average computed in the ferromagnetic state with
$f(H)=f_{F}(H)$ defined in Eq. (64) [resp. $f(H)=f_{O}(H)$ in the oscillating
state, see Eq. (65)]. We obtain
$C_{1}=-\frac{4g(m_{0},0)m_{0}\sqrt{v_{1}}H^{*1/4}}{\sqrt{3\pi\tilde{c}}(D_{22}+2v_{1}m_{0}^{2}D_{11})}\int_{m^{*}}^{1}\frac{dm}{\sqrt{H^{*}-V(m)}},$
(76)
with $m^{*}$ such that $V(m^{*})=H^{*}$. The oscillating phase has a
contribution weighted with the factor
$C_{1}\sqrt{N}e^{-Nf(H^{*})}/(1+C_{1}\sqrt{N}e^{-Nf(H^{*})})$. When
$f(H^{*})>0$ (i.e., the ferromagnetic phase is the most stable one), the
contribution of the oscillating phase disappears at large $N$ but is
non-negligible for $N\sim 1/f(H^{*})$. In Fig. 13, the ‘pure-state’ averages
$\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ evaluated using either
$f=f_{F}$ for the ferromagnetic state, or $f=f_{O}$ for the oscillating state,
as well as the ‘mixed-state’ approximation Eq. (74) are compared to the values
of $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ obtained from the
numerically evaluated $f(H)$ (same data as in Figs. 11 and 12). $\langle
m^{2}\rangle$ increases due to the influence of the oscillating phase, as
$\langle m^{2}\rangle$ is higher for $f=f_{O}$ than for $f=f_{F}$; then, for
larger $N$, it decreases to its expected value in the ferromagnetic state.
When $f(H^{*})<0$ (i.e., the oscillating phase is the most stable one), we
observe a monotonic increase between the moderate-$N$ decay and the asymptotic
large-$N$ value (see Fig. 12). The influence of the oscillating phase is even
more pronounced for $\langle\dot{m}^{2}\rangle$, as we observe that for
moderate $N$, $\langle\dot{m}^{2}\rangle$ is almost constant and equal to the
value expected in the oscillating state (obtained using $f=f_{O}$), before
eventually steeply decreasing when $Nf(H^{*})\sim 1$.
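The decomposition (74) is simple to use in practice. The sketch below, with purely illustrative inputs, reproduces the plateau of $\langle\dot{m}^{2}\rangle$ at its oscillating-state value followed by the steep decrease once $Nf(H^{*})\gtrsim 1$.

```python
import numpy as np

def mixed_average(xF, xO, C1, N, f_star):
    """Two-phase decomposition of Eq. (74): pure ferromagnetic and
    oscillating averages, weighted by the relative statistical weight
    C1*sqrt(N)*exp(-N*f(H*)) of the oscillating state."""
    w = C1*np.sqrt(N)*np.exp(-N*f_star)
    return (xF + xO*w)/(1.0 + w)

# Illustrative inputs: <mdot^2> ~ 1/N in the ferromagnetic state, constant
# in the oscillating state; f(H*) > 0, i.e., the ferromagnet is most stable.
N = np.logspace(4, 10, 200)
md2 = mixed_average(xF=1e-2/N, xO=1e-6, C1=1.0, N=N, f_star=1e-8)
# md2 stays near 1e-6 (plateau) until N*f_star ~ 1, then drops to xF
```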
#### V.3.3 Comparison with stochastic simulations
Figure 14: Comparison with stochastic simulations of the spin model. (a)
$\langle m^{2}\rangle$ and (b) $\langle\dot{m}^{2}\rangle$ obtained from
stochastic numerical simulations, as a function of $N$, for
$(\mu_{c}-\mu)/\mu_{c}\in[4.44,4.04,3.64,3.24]\times 10^{-3}$ (from darker to
lighter colors). Parameters: $J_{1}=0.6$, $J_{2}=0.4$, and
$\varepsilon=10^{-1}$.
In Fig. 14, $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ obtained
from stochastic simulations of the spin model are plotted for different system
sizes $N$, for $\varepsilon=(T_{c}-T)/T_{c}=10^{-1}$. Qualitatively, the
moderate- and large-$N$ regimes are visible in the data. However, the decay of
$\langle m^{2}\rangle$ in the moderate-$N$ regime is significantly slower than
the theoretically predicted behavior $\langle m^{2}\rangle\sim N^{-1/3}$. This
is most likely due to the fact that $\varepsilon$ is not small enough to enter
the asymptotic low-$\varepsilon$ regime, as discussed below. The moderate-$N$
decay of $\langle\dot{m}^{2}\rangle$ seems better described by the theoretical
prediction $\langle\dot{m}^{2}\rangle\sim N^{-2/3}$, although significant
deviations are also visible. In the large-$N$ regime,
$\langle\dot{m}^{2}\rangle$ reaches a constant value $\sim\varepsilon^{2}$ in
the oscillating phase ($\mu>\mu_{t}$), or decreases as
$\varepsilon^{-1}N^{-1}$ in the ferromagnetic phase ($\mu<\mu_{t}$). The
transition between the moderate- and large-$N$ regimes takes place around
$N^{*}\approx 10^{3}\approx\varepsilon^{-3}$, as expected.
To understand the discrepancies found between stochastic simulations data and
theoretical predictions, we note that the main approximation made to obtain
the power laws $\langle m^{2}\rangle\sim N^{-1/3}$ and
$\langle\dot{m}^{2}\rangle\sim N^{-2/3}$ is the assumption
$\sqrt{H^{*}}N^{1/3}\ll 1$ (see Sec. V.2.2), which is not valid here for
moderate $N$ values. Moreover, we showed in Sec. IV.3 that there are
discrepancies between the theory and the simulations for low values of
$\varepsilon$ when $N$ is not large enough. We discuss this issue in more
detail in the next subsection.
#### V.3.4 Discussion on the low-$\varepsilon$ and large-$N$ approximations
Figure 15: (a) $P_{N}(m,\dot{m})$ obtained from stochastic simulations of the
spin model for $N=5\times 10^{5}$. (b) Trajectories ($m(t),\dot{m}(t))$ in the
phase space $(m,\dot{m})$ obtained in the deterministic limit. The color
corresponds to $v(m,\dot{m})^{-1}$, where $v(m,\dot{m})$ is the local speed
along the limit cycle, and can be interpreted as the local density
$p(m,\dot{m})\propto v(m,\dot{m})^{-1}$. The two black dots are added for
visual reference and correspond to the ferromagnetic points. Parameters:
$J_{1}=0.6$, $J_{2}=0.4$, $\varepsilon=10^{-1}$ and
$(\mu_{c}-\mu)/\mu_{c}=3.5\times 10^{-3}$. In the deterministic limit, both
the limit cycle and the ferromagnetic points are linearly stable.
In Fig. 14, we compared $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$
obtained from the theoretical results of Sec. III with stochastic simulations
of the spin model. We observed that the behavior is qualitatively the same,
but without quantitative agreement. We now discuss the
low-$\varepsilon$ approximation and its consequences. We plot in Fig. 15(a) an
example of $P_{N}(m,\dot{m})$ obtained from numerical simulations for
$N=5\times 10^{5}$, $\varepsilon=10^{-1}$ and $(\mu_{c}-\mu)/\mu_{c}=3.5\times
10^{-3}$. In the deterministic limit, both the limit cycle and the
ferromagnetic points are linearly stable for these parameter values. We
observe a significant difference with the results obtained in this section,
similar to what was observed in Sec. IV.3: there is no individual symmetry
$m\,\mapsto\,-m$ or $\dot{m}\,\mapsto\,-\dot{m}$, and the probability density
is not constant along the limit cycle. In Sec. IV, we showed that higher order
corrections [both in $\phi(m,\dot{m})$ and $C(m,\dot{m})$] to the large
deviation form of $P_{N}(m,\dot{m})$ may account for discrepancies between
analytical predictions and numerical results of stochastic simulations. We
recall that corrections in $\varepsilon$ to the large deviation function
$\phi$ lead to changes in the shape of the limit cycle, and are responsible
for the breaking of the individual reversal symmetry in $m$ and $\dot{m}$. In
contrast, corrections in $N^{-1}$ and $\varepsilon$ to the large deviation
function, given by the function $C(m,\dot{m})$ [Eq. (56)], break the
uniformity of the probability density along the limit cycle. In Sec. IV, we
computed the first corrections analytically for $\mu=\mu_{l}(T)$. However,
these corrections are more complicated to compute for any $\mu$, and we thus
propose here a different way to determine corrections to the large deviation
function. In the deterministic limit $N\to\infty$, Eq. (56) can be rewritten
in the form
$P_{N}(m,\dot{m})\underset{N\to\infty}{\to}\exp[C(m,\dot{m})]\,\delta(\phi(m,\dot{m})),$
(77)
and the probability density on the limit cycle can be obtained from the local
speed
$v(m,\dot{m})=[\left(dm/dt\right)^{2}+\left(d\dot{m}/dt\right)^{2}]^{1/2},$
(78)
since $P(m,\dot{m})\propto v(m,\dot{m})^{-1}$. The deterministic limit
provides information on the location of the minima of the function
$\phi(m,\dot{m})$, which corresponds to the limit cycle, as well as the value
of $C(m,\dot{m})$ on the limit cycle. To obtain these corrections, we compute
the trajectory $(m(t),\dot{m}(t))$ in the deterministic limit [using Eq. (8)]
and we plot, in Fig. 15(b) the trajectories in the phase space $(m,\dot{m})$
where the color can be interpreted as the local density $P(m,\dot{m})\propto
v(m,\dot{m})^{-1}$ along the limit cycle. We observe the same shape of limit
cycle as in the stochastic simulations, and we recover a higher probability
density near the axis $\dot{m}=0$ (close to the ferromagnetic points), in
qualitative agreement with numerical results.
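The construction just described is illustrated by the sketch below: a deterministic trajectory of $\ddot{m}=-V^{\prime}(m)+\dot{m}\,g(m,\dot{m})$ [Eqs. (8) and (32)] is integrated, and the density along the attractor is taken as the inverse of the local speed, Eq. (78). For simplicity the example uses an illustrative single-well potential with a Van der Pol-type $g$, which guarantees a limit cycle; for the spin model one would instead use the $V$ and $g$ built from the Appendix B coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative choices (not the spin-model coefficients): harmonic V'(m)
# and a Van der Pol-type pumping term g, ensuring a stable limit cycle.
dV = lambda m: m
g = lambda m, md: 0.1*(1.0 - m**2)

def rhs(t, y):
    m, md = y
    return [md, -dV(m) + md*g(m, md)]        # Eqs. (8) and (32)

sol = solve_ivp(rhs, (0.0, 400.0), [0.1, 0.0], max_step=0.05)
tail = sol.t > 300.0                          # discard the transient
m, md = sol.y[:, tail]
speed = np.hypot(md, -dV(m) + md*g(m, md))    # local speed v, Eq. (78)
density = 1.0/np.maximum(speed, 1e-12)        # P(m, mdot) ~ 1/v on the cycle
```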
### V.4 Entropy production
Figure 16: Entropy production $\Sigma$ vs. $N$, for different values of $\mu$
such that
$(\mu_{c}-\mu)/\mu_{c}\in[4.0,3.6,3.4,3.33,3.29,3.28,3.275,3.272,3.271]\times
10^{-5}$ from darker to lighter colors. The inset represents $N_{\Sigma}$, the
value of $N$ for which $\Sigma$ is maximal, as a function of $\mu-\mu_{t}$.
The dashed line indicates an exponent $-1$. The entropy production is
evaluated numerically from Eq. (80) where the averages are computed as in Fig.
11. Parameters: $J_{1}=0.6$, $J_{2}=0.4$, and $\varepsilon=10^{-3}$;
$(\mu_{c}-\mu_{t})/\mu_{c}=3.271\times 10^{-5}$.
Beyond the order parameter $\langle\dot{m}^{2}\rangle$, the transition to an
oscillating phase may also be characterized thermodynamically as a transition
from microscopic to macroscopic irreversibility [31, 37], by introducing the
entropy production density $\sigma=\Sigma/N$ in the limit $N\to\infty$, where
the steady-state entropy production $\Sigma$ identifies with the entropy flux
[54, 55],
$\Sigma=\frac{1}{2}\sum_{\mathcal{C},\mathcal{C}^{\prime}}\big{[}W(\mathcal{C}^{\prime}|\mathcal{C})P(\mathcal{C})-W(\mathcal{C}|\mathcal{C}^{\prime})P(\mathcal{C}^{\prime})\big{]}\,\ln\frac{W(\mathcal{C}^{\prime}|\mathcal{C})}{W(\mathcal{C}|\mathcal{C}^{\prime})}\,.$
(79)
We briefly investigate the influence of the bistability of the system on the
entropy production. In the large-$N$ and small-$\varepsilon$ limits, one has
(see Appendix C and [42])
$\frac{\Sigma}{N}=\left[1+(T-J_{1})^{2}\right]\left\langle\dot{m}^{2}\right\rangle+T^{2}\left\langle V^{\prime}(m)^{2}\right\rangle.$ (80)
In the paramagnetic or ferromagnetic phase, one finds $\Sigma=O(N^{0})$,
and in an oscillating phase $\Sigma=O(N)$. Using Eq. (80), we compute the
entropy production numerically for different system sizes; the results are
plotted in Fig. 16. For large $N$, we recover that $\Sigma$ is independent of
$N$ in the ferromagnetic phase, whereas $\Sigma\sim N$ in the oscillating
phase. However, for moderate values of $N$, one has $\Sigma\sim N^{1/3}$, due
to the scaling $\langle\dot{m}^{2}\rangle\sim N^{-2/3}$ obtained in Sec. V.3.
Note that this scaling is different from the scaling of the usual transition
to an oscillating phase with an elliptic limit cycle, where one finds for
moderate $N$, $\Sigma\sim N^{1/2}$, as a consequence of the scaling
$\langle\dot{m}^{2}\rangle\sim N^{-1/2}$ [see Eq. (48)]. In the ferromagnetic
phase, due to the influence of the oscillating phase, the entropy production
increases before decreasing steeply to its constant value. Interestingly,
this ‘overshoot’ effect is still present for $\mu$ slightly below $\mu_{O}$
(see Fig. 10), that is, when the limit cycle no longer exists at the
deterministic level. In this situation, the fluctuations described by the
large deviation function keep track of the nearby existence of the limit cycle
in parameter space, and are still able to generate a non-monotonic behavior.
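As a practical note, the numerical evaluation of Eq. (80) reduces to two averages over the large deviation measure (46); a grid-based sketch is given below, where $f$, $V$ and the integration windows are supplied by the caller and the example values are illustrative.

```python
import numpy as np

def entropy_production(N, T, J1, f_of_H, V, dV, Lm, Lmd, n=1201):
    """Sigma from Eq. (80), with <mdot^2> and <V'(m)^2> computed over the
    large deviation measure exp[-N f(H)] of Eq. (46)."""
    m = np.linspace(-Lm, Lm, n)
    md = np.linspace(-Lmd, Lmd, n)
    M, Md = np.meshgrid(m, md)
    f = f_of_H(0.5*Md**2 + V(M))
    w = np.exp(-N*(f - f.min()))
    w /= w.sum()
    md2 = (Md**2*w).sum()
    dV2 = (dV(M)**2*w).sum()
    return N*((1.0 + (T - J1)**2)*md2 + T**2*dV2)

# Illustrative call with the quadratic f(H) of Eq. (41)
V = lambda m: 0.5*m**2
dV = lambda m: m
f_of_H = lambda H: -1e-2*H + H**2
print(entropy_production(N=1e5, T=0.45, J1=0.6, f_of_H=f_of_H,
                         V=V, dV=dV, Lm=0.5, Lmd=0.5))
```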
We introduce $N_{\Sigma}$, the value of $N$ at which the entropy production is
maximal. The evolution of $N_{\Sigma}$ with $\mu-\mu_{t}$, where $\mu_{t}$ is
the value of $\mu$ at the transition between the oscillating and the
ferromagnetic phases, is plotted in the inset of Fig. 16 over a range of small
values of $\mu-\mu_{t}$. Numerical data can be approximately described by a
power-law decay $N_{\Sigma}\sim(\mu-\mu_{t})^{-1}$, although no theoretical
prediction is available to support this scaling relation. Accordingly, for $N$
larger than $N_{\Sigma}$, the entropy production drops by an amount
$\Delta\Sigma\sim N_{\Sigma}\sim(\mu-\mu_{t})^{-1}$, before reaching its
asymptotic constant value.
## VI Type-II discontinuous transition between ferromagnetic and oscillating
phases
In this section, we investigate the properties of the transition of Type II
between the ferromagnetic and oscillating phases, near the tricritical point
$(T_{c},\mu_{c})$. In this case, obtained for $J_{1}=J_{2}$, a small, almost
elliptic limit cycle around the center is observed. The Type II scenario is
illustrated in the phase diagram of Fig. 1(b) and on the trajectories of Figs.
1(d) and 1(f). All figures in this section are obtained with
$J_{1}=J_{2}=0.5$.
### VI.1 Large deviation function and phase diagram
#### VI.1.1 Validity of the perturbative approach
For $J_{1}=J_{2}$, one has $\mu_{c}=1$ and $T_{c}=J_{1}$. The main difference
with the previous case is that $v_{1}(T,\mu)$, the factor in front of $m^{4}$
in $V(m)$ [Eq. (61)], vanishes at $(T_{c},\mu_{c})$. To leading order
in an expansion in $\varepsilon$ and $\mu-\mu_{c}$, we have
$v_{1}(T,\mu)=-\frac{J^{2}}{12}\varepsilon-\frac{1}{4J^{2}}(\mu-\mu_{c})$ (81)
with $\varepsilon=(T_{c}-T)/T_{c}$. Corrections to Eq. (81) include terms
proportional to $(\mu-\mu_{c})^{2}$, $\varepsilon(\mu-\mu_{c})$ and
$\varepsilon^{2}$. Numerically, we observe that the transition between the
oscillating and ferromagnetic phases takes place for
$(\mu-\mu_{c})\ll\varepsilon$ so that the term in $\mu-\mu_{c}$ can be
neglected and we write $v_{1}=-\alpha\varepsilon$ with $\alpha=J^{2}/12$. As
$v_{1}<0$, higher order terms in the expansion of $V(m)$ are necessary to
compensate for the term $-\varepsilon m^{4}$ and thus to describe the
ferromagnetic points. Numerically, we observe ferromagnetic points whose
amplitude goes to zero with $\varepsilon$. The coefficients of the terms
proportional to $m^{6}$ and $m^{8}$ in the expansion of $V(m)$ obtained from
Eq. (105) scale as $\varepsilon$ for small $\varepsilon$, which would give
ferromagnetic points independent of $\varepsilon$ in this limit, if only these
terms were retained. Expanding $V(m)$ further, we find that the coefficient of
the term proportional to $m^{10}$ is independent of $\varepsilon$. Assuming
that the ferromagnetic point $m_{0}$ results from the balance of the terms in
$m^{4}$ and in $m^{10}$, i.e., $\varepsilon m_{0}^{4}\sim m_{0}^{10}$, yields
$m_{0}\sim\varepsilon^{1/6}$. We now check a posteriori that neglecting the
terms in $m^{6}$ and $m^{8}$ was valid. For $m\sim\varepsilon^{1/6}$, one has
$\varepsilon m^{6}\sim\varepsilon^{2}$ and
$\varepsilon m^{8}\sim\varepsilon^{7/3}$, which are both much smaller than the
term $m^{10}\sim\varepsilon^{5/3}$ for $\varepsilon\to 0$; neglecting these
terms was therefore justified for $m\sim m_{0}$. We thus
write the following minimal form for $V(m)$,
$V(m)=\frac{\mu-\mu_{l}(T)}{2T^{2}}m^{2}-\frac{\alpha\varepsilon}{4}m^{4}+\frac{v_{4}}{10}m^{10}+V_{0}$
(82)
where $v_{4}=J^{2}/81$, and $V_{0}$ is such that $V(m)\geq 0$ with minimal
value zero. An example of the shape of $V(m)$ is given in Fig.
17(a). Until now, we have considered only $V(m)$ with one or two minima,
whereas now it can have three of them. As we now show, this has important
consequences which make this case of interest, and quite different from the
previous ones. We found that the ferromagnetic points are
$m_{0}\sim\varepsilon^{1/6}$ when $\mu-\mu_{c}\ll\varepsilon$, so that
$u_{0}=g(m_{0},0)\sim\varepsilon^{1/3}$ is small. Thus, for small
$\varepsilon$ and close to $\mu_{c}$, one can use the perturbative method
described in Sec. III to obtain the large deviation function.
#### VI.1.2 Typical $f(H)$ and phase diagram
Figure 17: Type-II transition between ferromagnetic and oscillating states.
(a) Potential $V(m)$; the dashed lines correspond to $m=\pm m_{l}$. (b)
Representation of the different areas in the plane $(m,\dot{m})$. (c) $f(H)$
in the different areas defined in Eq. (83). (d) Colormap of $\phi(m,\dot{m})$
in the plane $(m,\dot{m})$. Note the different scalings
$m\sim\varepsilon^{1/2}$ and $\dot{m}\sim\varepsilon$. Parameters:
$J_{1}=J_{2}=0.5$, $(\mu-\mu_{l})/\mu_{l}=4.55\times 10^{-4}$ and
$\varepsilon=10^{-2}$.
The main difference with the previous case is as follows. As illustrated in
Fig. 17(a), the condition $V(m)=H$ may correspond to six values of $m$ instead
of only two or four previously, when considering values of $H$ close to the
local minima of $V(m)$ with $m\neq 0$ (ferromagnetic points). In the
definition of $f^{\prime}(H)$ given in Eq. (37), we integrate between $m_{1}(H)$
and $m_{2}(H)$ such that $V(m_{1})=V(m_{2})=H$ and $V(m)\leq H$ for
$m\in[m_{1},m_{2}]$. Thus, for a given value of $H$, $f$ can have different
values depending on the range of values of $m$ over which the integral is
computed. We denote by $m_{l}$ the positive value of $m$ at which $V(m)$ has a
local maximum. In the phase space $(m,\dot{m})$ there are four different areas,
which are represented in Fig. 17(b). A first area around the center
corresponds to small $m$ and $\dot{m}$, where $|m|<m_{l}$ and
$H(m,\dot{m})<V(m_{l})$; it is denoted area 1 and is represented in green. Two
symmetric domains situated around the ferromagnetic points, where $|m|>m_{l}$
and $H(m,\dot{m})<V(m_{l})$, correspond to area 2 and are represented in red.
A last area, for higher values of $H$, is denoted area 3 and is represented in blue.
We define three different functions $f$, one for each area:
$\phi(m,\dot{m})=\left\{\begin{array}{ll}f_{1}(H(m,\dot{m}))&\text{if }|m|<m_{l}\text{ and }H(m,\dot{m})<V(m_{l})\\ f_{2}(H(m,\dot{m}))&\text{if }|m|>m_{l}\text{ and }H(m,\dot{m})<V(m_{l})\\ f_{3}(H(m,\dot{m}))&\text{otherwise.}\end{array}\right.$ (83)
As $f$ is defined up to a constant in every area, we impose that $f_{1}(0)=0$
and we assume $f(H)$ to be continuous at the border between two different
areas. In Fig. 17(c), (d), examples of $f(H)$ and $\phi(m,\dot{m})$ are
plotted. A limit cycle around the center and two ferromagnetic points are
locally stable. Once again, the most stable phase is given by the global
minimum of $f$, here the oscillating phase.
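The piecewise definition (83) is straightforward to operationalize; a small sketch classifying phase-space points into the three areas is given below (the potential $V$, e.g. that of Eq. (82), is supplied by the caller).

```python
def area_index(m, md, m_l, V):
    """Area of a phase-space point (m, mdot) in the sense of Eq. (83)."""
    H = 0.5*md**2 + V(m)              # Eq. (36)
    if H >= V(m_l):
        return 3                      # orbits above the separatrix
    return 1 if abs(m) < m_l else 2   # around the center / a ferro point
```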
From the numerical determination of $f(H)$ and its minima, one obtains the
phase diagram in the parameter space ($T,\mu$). We plot in Fig. 18 the phase
diagram for $J_{1}=J_{2}$ close to the tricritical point $(T_{c},\mu_{c})$. We
introduce the line $\mu_{F}$ indicating the existence of the ferromagnetic
points and the line $\mu_{O}$ indicating the existence of the oscillating
state. We also introduce $\mu_{t}(T)$, the value of $\mu$ at the transition,
such that $f_{2}(H(m_{0},0))=f_{1}(H^{*})$ (where $m_{0}$ corresponds to the
ferromagnetic point and $H^{*}$ is where $f_{1}(H)$ is minimal). Numerically,
we obtain (see Fig. 18) that
$(\mu_{t}-\mu_{c})/\mu_{c}\sim\varepsilon^{4/3}.$ (84)
Indeed, the transition takes place almost at the point where the ferromagnetic
points disappear, meaning that $\mu_{t}\approx\mu_{F}$. The ferromagnetic points
disappear when the term in $m^{2}$ balances the $m^{4}$ term in $V(m)$ at
$m_{0}\sim\varepsilon^{1/6}$, so that
$(\mu_{F}-\mu_{l})m_{0}^{2}\sim\varepsilon m_{0}^{4}$. As $\mu_{l}\sim\mu_{c}$
[Eq. (49)], this gives $(\mu_{F}-\mu_{c})/\mu_{c}\sim\varepsilon^{4/3}$. We
observe that when the ferromagnetic phase and the oscillating phase coexist,
the ferromagnetic phase is almost always the most stable one.
#### VI.1.3 Approximate local analytical expressions of $f(H)$
To go beyond the numerical evaluation of $f(H)$, we now try to obtain an
approximate analytical expression of $f(H)$, which will be helpful in
particular to determine the scaling regimes of the order parameters $\langle
m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$. As in Sec. V, one cannot obtain
a full analytic expression of $f(H)$, and we thus focus on local
approximations. For an expression of $f(H)$ around the ferromagnetic points, a
quadratic expansion of $V$ around one of its local minima $m_{0}$ gives
$f_{2}(H)=f_{F}(H)+\bar{f}_{2}$ where
$f_{F}(H)=-\frac{g(m_{0},0)}{D_{22}+6v_{4}m_{0}^{8}D_{11}}\left[H-V(m_{0})\right],$
(85)
and $\bar{f}_{2}$ is a constant such that $f$ is continuous in $H=V(m_{l})$.
We note here that, unlike in the previous section, $H(m_{0},0)\neq 0$ at the
ferromagnetic points, since $V(m_{0})$ can now be nonzero when the global
minimum of $V$ is at $m=0$.
For the area around the center, the leading term of $V(m)$ is the $m^{2}$
term, leading to the same $f(H)$ as for the Hopf bifurcation [see Eq. (41)],
$f_{1}(H)=f_{O}(H)+\bar{f}_{1}$ where
$f_{O}(H)=-\varepsilon aH+bH^{2}+\frac{(\varepsilon a)^{2}}{b}$ (86)
with $a$ and $b$ given in Eqs. (42) and (43), and $\bar{f}_{1}$ a constant
chosen such that $f(H)$ is continuous in $H=V(m_{l})$. The minimum of
$f_{O}(H)$ corresponds to an elliptic limit cycle around the center, as
depicted in Fig. 17(d), with
$H^{*}=\frac{a\varepsilon}{2b}\sim\varepsilon[\mu-\mu_{l}(T)]\,,$ (87)
as $b\sim[\mu-\mu_{l}(T)]^{-1}$ from Eq. (43). At the transition
($\mu=\mu_{t}$), we find [Eqs. (49) and (84)],
$\mu_{t}-\mu_{l}(T)\sim\varepsilon^{4/3}$ (88)
and thus $H^{*}\sim\varepsilon^{7/3}$.
Figure 18: Close-up of the phase diagram of Fig. 1(b) for $J_{1}=J_{2}=0.5$
for $T<T_{c}$. $\mu_{F}$ corresponds to the limit of existence of the
ferromagnetic points (obtained from $V(m)$), and $\mu_{O}$ of the limit cycle
(obtained in the deterministic limit). The transition between the F and O
phases takes place for $\mu=\mu_{t}$. In the hatched area both the
ferromagnetic points and the limit cycle are local minima of $f(H)$. As
$\mu_{F}$ and $\mu_{t}$ are very close to each other, in most of the
coexistence region, the ferromagnetic phase is the most stable one. Scalings
of transition lines: $(\mu_{t}-\mu_{c})/\mu_{c}\sim\varepsilon^{4/3}$,
$(\mu_{F}-\mu_{c})/\mu_{c}\sim\varepsilon^{4/3}$ and
$(\mu_{O}-\mu_{c})/\mu_{c}\sim\varepsilon^{2}$, with
$\varepsilon=(T_{c}-T)/T_{c}$.
In the phase diagram of Fig. 18, we observe that the transition line $\mu_{t}$
is very close to the line $\mu_{F}$ indicating the limit of existence of the
ferromagnetic points. Hence in most of the coexistence region, the
ferromagnetic phase is the most stable phase. This can be explained with the
following argument. Around the ferromagnetic points,
$f^{\prime}(H)\sim\varepsilon^{1/3}$ whereas near the limit cycle,
$f^{\prime}(H)\sim\varepsilon$. The slope of $f$ near the ferromagnetic points
is much steeper than around the limit cycle (see Fig. 17 for an example of
$f$). When the area around the ferromagnetic points exists, it rapidly becomes
the global minimum of $f$ when varying $\mu$ at fixed $\varepsilon$.
### VI.2 Multiple scalings of order parameters with $N$ at the transition
In Figs. 19(a) and 19(b) we plot $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ as a function of $N$ at the transition, for
$\mu=\mu_{t}(\varepsilon)$. We observe three different regimes depending on
the values of $N$ and $\varepsilon$, which are described below.
#### VI.2.1 Large-$N$ scaling at $\mu=\mu_{t}$
Using the local approximations, we obtain the behaviors of $\langle
m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ in the large-$N$ limit when
approaching the critical point $\varepsilon=0$ where the three phases meet. In
the ferromagnetic phase, using Eq. (64), we find that $\langle
m^{2}\rangle=m_{0}^{2}\sim\varepsilon^{1/3}$ and
$\langle\dot{m}^{2}\rangle=\frac{D_{22}+6v_{4}m_{0}^{8}D_{11}}{-g(m_{0},0)N},$
(89)
so that $\langle\dot{m}^{2}\rangle\sim\varepsilon^{-1/3}N^{-1}$ as
$|g(m_{0},0)|\sim m_{0}^{2}\sim\varepsilon^{1/3}$. Due to the $m^{10}$ term
in $V(m)$, we obtain power laws in $\varepsilon$ with critical exponents quite
different from the corresponding values previously obtained. In the
oscillating phase, one finds $\langle m^{2}\rangle\sim\varepsilon$ similarly
to the elliptic limit cycle obtained in Sec. IV, and
$\langle\dot{m}^{2}\rangle\sim\varepsilon[\mu_{t}-\mu_{l}(T)]$. One has
$\mu_{t}-\mu_{l}\sim\varepsilon^{4/3}$ [Eq. (88)] such that one finds
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{7/3}$ in the oscillating phase.
#### VI.2.2 Moderate-$N$ scaling at $\mu=\mu_{t}$
Figure 19: Order parameters $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ as a function of the system size $N$, for
$\varepsilon\in[10^{-4},10^{-6},10^{-8},10^{-10}]$ from lighter to darker
colors at $\mu=\mu_{t}(\varepsilon)$. Order parameters are evaluated
numerically from Eqs. (37) and (47). (a) $\langle m^{2}\rangle$ vs $N$. (b)
$\langle\dot{m}^{2}\rangle$ vs $N$. The red dashed line corresponds to the
theoretical prediction for low $N$ [Eqs. (90) and (91)], with a scaling of
$N^{-1/6}$ for $\langle m^{2}\rangle$ and $N^{-5/6}$ for
$\langle\dot{m}^{2}\rangle$. (c) $\langle m^{2}\rangle$ rescaled by
$\varepsilon^{1/3}$ and (d) $\langle\dot{m}^{2}\rangle$ rescaled by
$\varepsilon^{5/3}$ versus rescaled system size $N/\varepsilon^{-2}$,
highlighting the crossover between the moderate and intermediate-$N$ regimes.
(e) $\langle m^{2}\rangle$ rescaled by $\varepsilon$ and (f)
$\langle\dot{m}^{2}\rangle$ rescaled by $\varepsilon^{7/3}$ versus rescaled system
size $N/\varepsilon^{-10/3}$, highlighting the crossover to the large-$N$
regime. Parameters: $J_{1}=J_{2}=0.5$.
For moderate values of $N$, we observe that $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ decrease with $N$ and are independent of
$\varepsilon$. Similarly to Sec. V.2.2, for low values of $N$ we can keep only
the first $\varepsilon$-independent term in $V(m)$, namely here
$V(m)=v_{4}m^{10}/10$. Using Eq. (37), this approximation gives
$f(H)=AH^{6/5}$ [see also Appendix D], leading to
$\displaystyle\langle m^{2}\rangle\sim N^{-1/6},$ (90)
$\displaystyle\langle\dot{m}^{2}\rangle\sim N^{-5/6},$ (91)
where the exact asymptotic relations including prefactors are given in
Appendix D, and are plotted in dashed red lines in Fig. 19(a,b).
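These two exponents can be checked numerically. The minimal sketch below assumes the pseudo-Hamiltonian takes the form $H=\dot{m}^{2}/2+v_{4}m^{10}/10$ and sets $A=v_{4}=1$, which suffices since only the $N$-dependence matters here:

```python
import numpy as np

# Moments of P_N(m, mdot) ~ exp(-N * A * H**(6/5)), with the assumed
# pseudo-Hamiltonian H = mdot^2/2 + v4*m^10/10 and A = v4 = 1.
A, v4 = 1.0, 1.0

def moments(N, ngrid=1201, width=4.0):
    # Grid extents follow the expected scalings m ~ N^(-1/12), mdot ~ N^(-5/12),
    # so the weight stays well resolved for every N.
    m = np.linspace(-width * N**(-1 / 12), width * N**(-1 / 12), ngrid)
    md = np.linspace(-width * N**(-5 / 12), width * N**(-5 / 12), ngrid)
    M, MD = np.meshgrid(m, md, indexing="ij")
    H = 0.5 * MD**2 + v4 * M**10 / 10
    w = np.exp(-N * A * H**(6 / 5))
    return (M**2 * w).sum() / w.sum(), (MD**2 * w).sum() / w.sum()

for N in (1e2, 1e4, 1e6):
    m2, md2 = moments(N)
    # Both rescaled combinations should be approximately N-independent.
    print(f"N={N:.0e}: <m^2> N^(1/6) = {m2 * N**(1 / 6):.4f}, "
          f"<mdot^2> N^(5/6) = {md2 * N**(5 / 6):.4f}")
```

The two printed combinations are approximately constant in $N$, consistent with Eqs. (90) and (91).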
#### VI.2.3 Intermediate-$N$ scaling at $\mu=\mu_{t}$
For intermediate values of $N$, we observe that for all $\varepsilon$,
$\langle m^{2}\rangle\sim N^{-1/2}$ and $\langle\dot{m}^{2}\rangle\sim
N^{-1/2}$. Indeed, for values of $N$ such that $f(H(m_{l}))\sim N^{-1}$, where $m_{l}$ is the positive junction point between the different areas (see Fig. 17), the ferromagnetic areas are small and the main contribution to the integrals comes from $m$ in area 1 (in the center). In this area, one has $f(H)\sim f_{O}(H)$, where $f_{O}(H)$ is given in Eq. (86). The leading
correction in $N$ of $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ is
given by
$\langle\dot{m}^{2}\rangle=\frac{\mu-\mu_{l}(T)}{T^{2}}\langle
m^{2}\rangle=H^{*}+\frac{e^{-NbH^{*2}}}{2\sqrt{\pi bN}},$ (92)
with $H^{*}=a\varepsilon/(2b)$ [Eq. (87)]. We recall that
$b\sim(\mu-\mu_{l})^{-1}\sim\varepsilon^{-4/3}$ [Eq. (88)] at the transition,
thus we find $\langle m^{2}\rangle\sim\varepsilon^{-2/3}N^{-1/2}$ and
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{2/3}N^{-1/2}$ for
$N\varepsilon^{2}\ll 1$. We denote by $N_{1}$ the crossover value of $N$ between the moderate- and intermediate-$N$ regimes, and by $N_{2}$ the crossover value between the intermediate- and large-$N$ regimes.
For $N\ll N_{1}$, $\langle m^{2}\rangle\sim N^{-1/6}$ while for $N_{1}\ll N\ll
N_{2}$, $\langle m^{2}\rangle\sim\varepsilon^{-2/3}N^{-1/2}$, so that
$N_{1}\sim\varepsilon^{-2}$. The same argument holds for
$\langle\dot{m}^{2}\rangle$: for $N\ll N_{1}$, $\langle\dot{m}^{2}\rangle\sim
N^{-5/6}$ and for $N_{1}\ll N\ll N_{2}$,
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{2/3}N^{-1/2}$, also implying
$N_{1}\sim\varepsilon^{-2}$. These different scaling behaviors for $\langle
m^{2}\rangle$ around the first crossover regime $N\sim N_{1}$ can be
encompassed into a single scaling function
$\langle m^{2}\rangle=\varepsilon^{1/3}\,\mathcal{F}_{m,1}(\varepsilon^{2}N),$
(93)
where $\mathcal{F}_{m,1}(x)$ asymptotically behaves as
$\mathcal{F}_{m,1}(x)\sim x^{-1/6}$ for $x\to 0$ and $\mathcal{F}_{m,1}(x)\sim
x^{-1/2}$ for $x\to\infty$. In a similar way, $\langle\dot{m}^{2}\rangle$ can
be expressed in terms of a scaling function,
$\langle\dot{m}^{2}\rangle=\varepsilon^{5/3}\,\mathcal{F}_{\dot{m},1}(\varepsilon^{2}N),$
(94)
with asymptotic behaviors $\mathcal{F}_{\dot{m},1}(x)\sim x^{-5/6}$ for $x\to
0$ and $\mathcal{F}_{\dot{m},1}(x)\sim x^{-1/2}$ for $x\to\infty$. We plot in
Fig. 19(c,d) $\langle m^{2}\rangle/\varepsilon^{1/3}$ and
$\langle\dot{m}^{2}\rangle/\varepsilon^{5/3}$ as a function of
$N/\varepsilon^{-2}$, which is proportional to the rescaled system size
$N/N_{1}$. As expected, the different curves corresponding to different values
of $\varepsilon$ collapse for moderate up to intermediate values of $N$.
We now turn to the second crossover $N\sim N_{2}$ between the intermediate-
and large-$N$ regimes. For $N_{1}\ll N\ll N_{2}$, one has $\langle
m^{2}\rangle\sim\varepsilon^{-2/3}N^{-1/2}$ while for $N\gg N_{2}$, $\langle
m^{2}\rangle\sim\varepsilon$. Balancing the two contributions thus gives
$N_{2}\sim\varepsilon^{-10/3}$ (note that $N_{2}\gg N_{1}$). The same argument
holds for $\langle\dot{m}^{2}\rangle$: for $N_{1}\ll N\ll N_{2}$, one finds
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{2/3}N^{-1/2}$ and for $N\gg N_{2}$,
$\langle\dot{m}^{2}\rangle\sim\varepsilon^{7/3}$, which also gives
$N_{2}\sim\varepsilon^{-10/3}$. These scaling behaviors of $\langle
m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ can be encompassed into two
scaling functions
$\langle
m^{2}\rangle=\varepsilon\,\mathcal{F}_{m,2}(\varepsilon^{10/3}N),\quad\langle\dot{m}^{2}\rangle=\varepsilon^{7/3}\,\mathcal{F}_{\dot{m},2}(\varepsilon^{10/3}N),$
(95)
with asymptotic behaviors
$\mathcal{F}_{m,2}(x)\sim\mathcal{F}_{\dot{m},2}(x)\sim x^{-1/2}$ for $x\to
0$, while both functions go to constant values for $x\to\infty$. In Fig.
19(e,f), we plot $\langle m^{2}\rangle/\varepsilon$ and
$\langle\dot{m}^{2}\rangle/\varepsilon^{7/3}$ as a function of the rescaled
system size $N/\varepsilon^{-10/3}\sim N/N_{2}$. As expected, for different
values of $\varepsilon$, the different curves collapse for intermediate up to
large values of $N$.
### VI.3 Crossover between intermediate and large $N$ regimes
We reported above three distinct scaling regimes, separated by $N_{1}$ and
$N_{2}$, for the observables $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ as a function of the system size $N$ for
$\mu=\mu_{t}(\varepsilon)$, and we identified the scalings with $\varepsilon$
of the crossover sizes $N_{1}$ and $N_{2}$. We now investigate in more detail
the behavior of the observables in the crossover regimes, focusing on the
effect of the variable $\mu$ close to the transition value $\mu_{t}$ for a
fixed $\varepsilon$. We numerically find that in the moderate- and
intermediate-$N$ regimes, the two observables depend only weakly on $\mu$,
unlike for large-$N$ values. Thus, we now focus on the dependence of the
observables $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ on $\mu$
for a fixed $\varepsilon$, in the crossover regime between intermediate- and
large-$N$ values ($N\sim N_{2}$).
#### VI.3.1 Influence of $\mu$ on the second crossover regime
Figure 20: (a) $\langle m^{2}\rangle$ and (b) $\langle\dot{m}^{2}\rangle$ as a
function of the system size $N$, for
$(\mu-\mu_{c})/\mu_{c}\in[8.310,8.340,8.348,8.350]\times 10^{-7}$ from darker
to lighter colors. Parameters: $J_{1}=J_{2}=0.5$ and $\varepsilon=10^{-4}$.
The transition takes place at $(\mu_{t}-\mu_{c})/\mu_{c}\approx 8.349\times
10^{-7}$. Order parameters are evaluated numerically in the same way as in
Fig. 19. The black dashed line corresponds to $\langle
x(m,H)\rangle_{\mathrm{approx}}$ defined in Eq. (96).
In the large-$N$ limit, the observables $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ are discontinuous at the transition, whereas for
moderate and intermediate-$N$ values, they do not depend much on the value of
$\mu$. In Fig. 20, $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ are
plotted as a function of system size $N$ for different values of $\mu$ along
the transition, at fixed $\varepsilon$. Like in Sec. V, we observe jumps in
$\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$ in the ferromagnetic
phase ($\mu<\mu_{t}$) at a finite system size, which takes place for higher
values of $N$ when approaching the transition. The two values of $\langle
m^{2}\rangle$ in the different phases are nonzero whereas
$\langle\dot{m}^{2}\rangle$ goes from a nonzero value (in the oscillating
phase) to a value decreasing as $N^{-1}$. The main difference with Sec. V is
that the jump in $\langle m^{2}\rangle$ is now much more pronounced because of
its different dependence on $\varepsilon$. Indeed, we had in Sec. V that
$\langle m^{2}\rangle\sim\varepsilon$ in both phases, whereas now $\langle
m^{2}\rangle\sim\varepsilon$ in the oscillating phase and $\langle
m^{2}\rangle\sim\varepsilon^{1/3}$ in the ferromagnetic phase, leading for
small $\varepsilon$ to a strong mismatch of $\langle m^{2}\rangle$ between the
two phases.
#### VI.3.2 Approximation in terms of phase coexistence
Here again, the observed behaviors can be given a simple interpretation in
terms of finite-size phase coexistence and metastability. Hence, as in Sec. V,
we introduce a simple decomposition of average values into contributions of
each phase. For any quantity $x(m,H)$, we introduce the approximate average
value obtained by taking into account the statistical weight of each phase,
$\langle x(m,H)\rangle_{\rm approx}=\frac{\langle x\rangle_{F}+\langle
x\rangle_{O}C_{2}\sqrt{N}e^{-N(\bar{f}_{2}-\bar{f}_{1})}}{1+C_{2}\sqrt{N}e^{-N(\bar{f}_{2}-\bar{f}_{1})}}$
(96)
where the ‘pure-state’ averages $\langle x\rangle_{F}$ and $\langle
x\rangle_{O}$ are respectively obtained from the ferromagnetic state large
deviation function $f_{F}(H)$ given in Eq. (85), and from the oscillating
state large deviation function $f_{O}(H)$ given in Eq. (86); $\bar{f}_{i}$ is
the minimum of $f_{i}(H)$ ($i=1$, $2$). The constant $C_{2}$, whose expression
is similar to that of the constant $C_{1}$ given in Eq. (75), but with the expressions for $f_{F}$ and $f_{O}$ of this section instead of those found in Sec. V, now becomes
$C_{2}=\frac{-g(m_{0},0)}{(D_{22}+6v_{4}m_{0}^{8}D_{11})}\sqrt{\frac{6\pi
v_{4}}{bv_{0}}}m_{0}^{4}\,.$ (97)
In Sec. V, we could not obtain a local expression of $f_{O}(H)$ in the
oscillating phase, so we used fitting parameters for the coefficients
$\tilde{c}$ and $H^{*}$. Here, both $f_{F}$ and $f_{O}$ are known
analytically. The only quantity which is not known analytically and is a
fitted parameter, obtained from the numerical evaluation of $f(H)$, is
$\bar{f}_{2}-\bar{f}_{1}$.
This decomposition is plotted for different values of $\mu$ in Fig. 20 in
black dashed lines. The jump in $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ is well described by this simple decomposition.
When increasing $N$, the influence of the oscillating phase dominates until
$f(H^{*})\sim N^{-1}$.
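Numerically, Eq. (96) is straightforward to evaluate once its ingredients are known; a minimal sketch, where the pure-state averages, $C_{2}$ and the fitted gap $\bar{f}_{2}-\bar{f}_{1}$ are assumed to be computed elsewhere:

```python
import numpy as np

def x_approx(x_F, x_O, C2, delta_fbar, N):
    """Phase-coexistence average of Eq. (96).

    x_F, x_O   : pure-state averages <x>_F and <x>_O
    C2         : prefactor given in Eq. (97)
    delta_fbar : fitted gap f2bar - f1bar
    """
    w = C2 * np.sqrt(N) * np.exp(-N * delta_fbar)  # statistical weight of O
    return (x_F + x_O * w) / (1.0 + w)
```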
#### VI.3.3 Comparison with stochastic simulations
Figure 21: Comparison with stochastic simulations of the spin model. (a)
$\langle m^{2}\rangle$ and (b) $\langle\dot{m}^{2}\rangle$ obtained from
numerical simulations as a function of the system size $N$, for
$(\mu-\mu_{c})/\mu_{c}\in[2,3,4]\times 10^{-3}$ (from darker to lighter
colors). Parameters: $J_{1}=J_{2}=0.5$ and $\varepsilon=5\times 10^{-2}$.
We plot in Fig. 21 $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$
computed from stochastic simulations for different system sizes and different
$\mu$, with $\varepsilon=(T_{c}-T)/T_{c}=5\times 10^{-2}$. We observe the
different expected behaviors described above. At large $N$, a jump of $\langle
m^{2}\rangle$ is observed between the ferromagnetic phase (high values of
$\langle m^{2}\rangle\sim\varepsilon^{1/3}$ for low values of $\mu-\mu_{c}$)
and the oscillating phase (low values of $\langle m^{2}\rangle\sim\varepsilon$
for higher values of $\mu-\mu_{c}$). A jump of $\langle\dot{m}^{2}\rangle$ is
also observed: $\langle\dot{m}^{2}\rangle$ is constant and of order
$\varepsilon^{7/3}\sim 10^{-3}$ for sufficiently high values of $\mu-\mu_{c}$,
while it decreases as $N^{-1}$ for lower values of $\mu-\mu_{c}$.
Similarly to Sec. V, we are able to describe the qualitative behavior of the
observables $\langle m^{2}\rangle$ and $\langle\dot{m}^{2}\rangle$. However,
for the restricted range of values of $\varepsilon$ and $N$ accessible in
stochastic simulations, we are not able to reach a quantitative agreement with
analytical predictions obtained in the small-$\varepsilon$, large-$N$ limit.
We now briefly discuss the effect of considering finite values of
$\varepsilon$ and $N$.
Figure 22: (a) $P_{N}(m,\dot{m})$ obtained from stochastic simulations for
$N=2\times 10^{4}$. (b) Trajectories ($m(t),\dot{m}(t))$ in the phase space
$(m,\dot{m})$ obtained in the deterministic limit. The color corresponds to
$v(m,\dot{m})^{-1}$ where $v(m,\dot{m})$ is the local velocity, as the local
density $p(m,\dot{m})\sim v(m,\dot{m})^{-1}$. The dots are added for visual purposes and correspond to the ferromagnetic points. Parameters:
$J_{1}=J_{2}=0.5$, $\varepsilon=5\times 10^{-2}$ and
$(\mu-\mu_{c})/\mu_{c}=3.981\times 10^{-3}$.
In Fig. 22(a), we plot an example of $P_{N}(m,\dot{m})$ obtained from
stochastic simulations for $\varepsilon=5\times 10^{-2}$ and $N=2\times 10^{4}$, for a
value of $\mu$ where both the limit cycle and the ferromagnetic points are
linearly stable in the deterministic limit. We observe important discrepancies
with the theory described in Sec. VI.1, which assumed a small elliptic limit
cycle in the center with uniform probability along the cycle, and
ferromagnetic points outside the cycle. From a dynamical viewpoint, a non-
uniform probability along the cycle in an ensemble approach means that an
individual system goes along the cycle at a non-uniform speed. Here, as shown
in Fig. 22(a), the limit cycle is hardly visible, meaning the probability is
strongly non-uniform along the cycle. To understand this result, we plot in
Fig. 22(b) trajectories, obtained in the deterministic limit, in the phase
space $(m,\dot{m})$ where the color codes for $v(m,\dot{m})^{-1}$, with
$v(m,\dot{m})=[\left(dm/dt\right)^{2}+\left(d\dot{m}/dt\right)^{2}]^{1/2}$ the
local speed on the cycle. The quantity $v(m,\dot{m})^{-1}$ is proportional to the
local probability density $p(m,\dot{m})$ along the limit cycle. We observe
that the limit cycle is not elliptic, and that the speed $v(m,\dot{m})$ is far
from uniform along the cycle. The probability density $p(m,\dot{m})$ is much
higher close to the ferromagnetic points, in qualitative agreement with
stochastic simulations. This discrepancy with theoretical predictions derived
in Sec. VI.1 comes from both the small-$\varepsilon$ and large-$N$
approximations made to obtain analytical results. The small-$\varepsilon$
approximation gives an elliptic limit cycle, and the large-$N$ one gives a
constant speed along the limit cycle [see Sec. IV.3 and Sec. V].
### VI.4 Entropy production
Figure 23: Entropy production $\Sigma$ vs. $N$, for
$(\mu-\mu_{c})/\mu_{c}\in[6,7.2,8.0,8.2,8.3,8.330,8.343,8.348,8.352]\times
10^{-7}$ from darker to lighter colors. The entropy production is evaluated
numerically from Eq. (80). Inset: $N_{\Sigma}$, the value of $N$ where
$\Sigma$ is maximal, as a function of $\mu-\mu_{t}$, with
$(\mu_{t}-\mu_{c})/\mu_{c}=8.352\times 10^{-7}$. A slope $-1.3$ is indicated
by a dashed line, as a guide to the eye. Parameters: $J_{1}=J_{2}=0.5$ and
$\varepsilon=10^{-4}$.
Similarly to Sec. V, we discuss the behavior of the entropy production Eq.
(80) with system size at the transition, which is plotted in Fig. 23 for
different values of $\mu$ across the transition ($\mu\approx\mu_{t}$). For low
values of $N$, the entropy production increases as $N^{1/6}$ as
$\langle\dot{m}^{2}\rangle\sim N^{-5/6}$. For larger values of $N$ and for
$\mu>\mu_{t}$, $\Sigma\sim N$ as expected in an oscillating phase. For
$\mu<\mu_{t}$, as in Sec. V, we observe that $\Sigma$ increases like in the
oscillating phase before steeply decreasing to a constant value. This behavior
is similar to the one observed in Sec. V and is a consequence of the proximity
of the oscillating phase. We again denote as $N_{\Sigma}$ the value of $N$
when $\Sigma$ is maximal. In the inset of Fig. 23, we plot $N_{\Sigma}$ as a
function of the distance to the transition $\mu-\mu_{t}$, showing an
approximate power-law divergence of $N_{\Sigma}$ when $\mu-\mu_{t}\to 0$. As a
result, getting closer to the transition, the drop of $\Sigma$ takes place for
larger $N$ and thus becomes bigger, since it eventually decays to
approximately the same asymptotic large-$N$ value for all $\mu-\mu_{t}$.
### VI.5 Comment on the continuous transition for $J_{2}=\pm 2+J_{1}$
In this section, we investigated the properties of the discontinuous
transition of Type II between the ferromagnetic phase and the oscillating
phase taking place for $J_{1}=J_{2}$ where $v_{1}(T_{c},\mu_{c})=0$. As seen
in Fig. 2, the condition $v_{1}(T_{c},\mu_{c})=0$ is also satisfied for
$J_{2}=\pm 2+J_{1}$, and it would thus be natural to also study the transition
in this case. However, in the deterministic limit we find for $J_{2}=\pm
2+J_{1}$ that the transition between the ferromagnetic phase and the
oscillating phase is of a different type: the transition is continuous, and
the ferromagnetic points turn into a limit cycle with an infinite period at the
transition. We now comment further on the difference between the two cases
$J_{1}=J_{2}$ and $J_{2}=\pm 2+J_{1}$ and why the method presented in this
paper does not allow for a characterization of continuous transitions between
ferromagnetic points and a limit cycle with infinite period.
In Sec. VI.1, we computed the large deviation function locally using Eq. (37) and
assumed that the large deviation function is continuous in $m=m_{l}$ in order
to obtain the large deviation function in all three areas, and thus in the
whole plane ($m,\dot{m}$). However, an important issue is that at the point
where the different areas meet, $m=m_{l}$, we have $V^{\prime}(m_{l})=0$ and
$g(m_{l},0)\neq 0$. Thus, the assumption $V^{\prime}(m)\gg\dot{m}g(m,\dot{m})$
made to split the different orders of Eq. (LABEL:eq:phi:quadrat) is not valid
close to $m=m_{l}$. Therefore, we expect to get corrections close to $m_{l}$
which are not taken into account here. When the ferromagnetic points and the
limit cycle are well separated and far from $m_{l}$, which is true for
$J_{1}=J_{2}$ as we have $m\sim\varepsilon^{1/6}$ for the ferromagnetic phase
and $m\sim\varepsilon^{1/2}$ for the oscillating phase, the corrections close
to $m_{l}$ do not affect the qualitative behavior at the transition, and thus
the method presented in this section describes well the phase transition.
However, if one or both of them are close to $m_{l}$, corrections that are not taken into account in this paper, and which would require a different approach, become necessary.
For $J_{2}=\pm 2+J_{1}$, we find that the $m^{6}$-term in $V(m)$ is
independent of $\varepsilon$ (whereas for $J_{1}=J_{2}$ it scales as
$\varepsilon$) and thus is enough to describe the ferromagnetic points, which
turn out to scale as $m\sim\varepsilon^{1/2}$ (as seen by balancing the terms
in $\varepsilon m^{4}$ and $m^{6}$). If we blindly apply the method described
in this section, we find a limit cycle with $m\sim\varepsilon^{1/2}$ in
between the ferromagnetic points, which have the same scaling. The limit cycle
is very close to the point $m=m_{l}$ where $V^{\prime}(m_{l})=0$ and where the
theory breaks down. Furthermore, from the definition of the period of the
limit cycle Eq. (44), we find that when $H^{*}\approx V(m_{l})$, the period is
very large, and diverges when $H^{*}=V(m_{l})$. Thus, by applying the method
without enough care, we would still recover some qualitative properties of the
transition: the ferromagnetic points and the limit cycle are very close to
each other and the period of the limit cycle is very large. Yet, this would
not be a correct description of the transition as we would find a
discontinuous transition instead of a continuous one as observed numerically.
## VII Conclusion
We have considered a mean-field spin model with a dynamics breaking detailed
balance due to the non-reciprocal couplings between spins and auxiliary
dynamic fields. The presence of ferromagnetic interactions between spins on
one side, and between dynamic fields on the other side, allows for the
presence of both ferromagnetic and spontaneously oscillating phases. We have
characterized in detail the transition between these two phases, showing that
it is discontinuous with the coexistence of both ferromagnetic and oscillating
states, one state being stable and the other one metastable. The relative
stability of both states is determined by a large deviation function,
generalizing the Landau free energy, that we evaluated explicitly in different
cases thanks to a perturbative framework. Two main scenarios are discussed, depending on whether the ferromagnetic points turn out to be inside or outside the limit
cycle. In addition, we found that the entropy production is peaked as a
function of system size, leading to a maximally dissipative system for an
optimal finite system size.
A natural generalization of this work may be to try to extend these results
beyond mean-field, by considering finite-dimensional systems, with the goal of formulating a Ginzburg-Landau theory extending the present Landau framework
based on a large deviation principle. Such a theory might then be amenable to
a renormalization group treatment, extending the results of [3, 4] which
considered the synchronization of coupled oscillators. Here, we have started
from more basic ingredients, in the sense that the microscopic degrees of
freedom of the model (i.e., the spins and dynamic fields) do not oscillate in
the absence of interaction. Connecting these types of models to previous
results obtained on the synchronization transition is thus an interesting
challenge for future work.
###### Acknowledgements.
L. G. acknowledges funding from the French Ministry of Higher Education and
Research.
## Appendix A Derivation of the deterministic evolution equations
In this appendix, we derive the deterministic evolution equations Eqs. (6) and
(7) from the microscopic spin and field dynamics. The dynamics of the system
is determined from the master equation [see Eq. (5)]. As the average is
defined as $\langle x\rangle=\sum_{\mathcal{C}}x(\mathcal{C})P(\mathcal{C})$,
we find after rearranging terms,
$\displaystyle d_{t}\langle
m\rangle=\sum_{\mathcal{C}}P(\mathcal{C})\sum_{\mathcal{C}^{\prime}\neq\mathcal{C}}\left[m(\mathcal{C}^{\prime})-m(\mathcal{C})\right]W(\mathcal{C}^{\prime}|\mathcal{C}),$
(98) $\displaystyle d_{t}\langle
h\rangle=\sum_{\mathcal{C}}P(\mathcal{C})\sum_{\mathcal{C}^{\prime}\neq\mathcal{C}}\left[h(\mathcal{C}^{\prime})-h(\mathcal{C})\right]W(\mathcal{C}^{\prime}|\mathcal{C}),$
(99)
with the shorthand notation $d_{t}\equiv d/dt$. From a configuration
$\mathcal{C}=\{s_{1},...,s_{N},h_{1},...,h_{N}\}$ with magnetization
$m=N^{-1}\sum_{i=1}^{N}s_{i}$ and average field $h=N^{-1}\sum_{i=1}^{N}h_{i}$,
there are $(1\pm m)N/2$ possibilities to flip a spin $s_{i}=\pm 1$ and $(1\pm
h)N/2$ possibilities to flip a field $h_{i}=\pm 1$. For a flip of a spin
$s_{i}=\pm 1$, $m(\mathcal{C}^{\prime})-m(\mathcal{C})=\mp 2/N$ and for a flip
of a field $h_{i}=\pm 1$, $h(\mathcal{C}^{\prime})-h(\mathcal{C})=\mp 2/N$.
Thus, using the definition of the transition rates given in the main text [Eq.
(2)], we find:
$\displaystyle d_{t}\langle m\rangle$
$\displaystyle=\big{\langle}-m+\tanh[\beta(J_{1}m+h)]\,\big{\rangle},$ (100)
$\displaystyle d_{t}\langle h\rangle$
$\displaystyle=\big{\langle}-h+\tanh[\beta(J_{2}h+(1-\mu)m)]\,\big{\rangle}.$
(101)
Assuming that the law of large numbers applies in the limit $N\to\infty$, $m$
and $h$ obey the following deterministic equations:
$\displaystyle d_{t}m$ $\displaystyle=-m+\tanh[\beta(J_{1}m+h)],$ (102)
$\displaystyle d_{t}h$ $\displaystyle=-h+\tanh[\beta(J_{2}h+(1-\mu)m)].$ (103)
These deterministic equations can be used to determine the macroscopic phase
when a single solution exists for given values of the control parameters
$\beta=T^{-1}$ and $\mu$. When two solutions exist, the most stable one has to
be determined from the large deviation function approach, as explained in the
main text.
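For illustration, Eqs. (102) and (103) are easily integrated numerically; the sketch below uses the couplings $J_{1}=J_{2}=0.5$ of the figures, while the values of $T$ and $\mu$ are placeholders to be adjusted according to the phase diagram of Fig. 1(b):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Deterministic mean-field dynamics, Eqs. (102)-(103).
J1, J2 = 0.5, 0.5        # couplings used in the paper's figures
T, mu = 0.4, 1.01        # illustrative placeholder values
beta = 1.0 / T

def rhs(t, y):
    m, h = y
    return [-m + np.tanh(beta * (J1 * m + h)),
            -h + np.tanh(beta * (J2 * h + (1.0 - mu) * m))]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.0], max_step=0.1)
m, h = sol.y
# Sustained oscillations of m(t) indicate the oscillating phase; relaxation
# to a nonzero fixed point indicates the ferromagnetic phase.
```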
## Appendix B Values of the different functions and coefficients of the model
In this appendix, we give the values of the different functions and
coefficients introduced in the main text. The function $Y(m,\dot{m})$
introduced in Eq. (8) as $d_{t}\dot{m}=Y(m,\dot{m})$ has been split into a
$\dot{m}$-independent part, $-V^{\prime}(m)=Y(m,0)$, and a $\dot{m}$-dependent
part $\dot{m}g(m,\dot{m})=Y(m,\dot{m})-Y(m,0)$. From Eqs. (6) and (7), we
have:
$\displaystyle Y(m,\dot{m})=$ $\displaystyle\beta J_{1}m+(-1+\beta
J_{1})\dot{m}-\tanh^{-1}(m+\dot{m})+\beta\tanh[J_{2}\tanh^{-1}(m+\dot{m})+\beta(1-\mu-
J_{1}J_{2})m]$ (104)
$\displaystyle+(m+\dot{m})^{2}\left[\tanh^{-1}(m+\dot{m})-\beta\tanh[J_{2}\tanh^{-1}(m+\dot{m})+\beta(1-\mu-
J_{1}J_{2})m]-\beta J_{1}(m+\dot{m})\right]\,,$ $V^{\prime}(m)=-\beta
J_{1}m+\beta
J_{1}m^{3}+(1-m^{2})\tanh^{-1}(m)-\beta(1-m^{2})\tanh[J_{2}\tanh^{-1}(m)+\beta(1-\mu-
J_{1}J_{2})m]\,.$ (105)
In the main text, we introduced the following expansions of $V(m)$ and
$g(m,\dot{m})$ [see Eqs. (16) and (17)],
$V(m)=\frac{v_{0}}{2}m^{2}+\frac{v_{1}}{4}m^{4},$ (106)
and
$g(m,\dot{m})=a_{0}\varepsilon-a_{1}m^{2}-a_{2}m\dot{m}-a_{3}\dot{m}^{2}.$
(107)
The coefficients appearing in these expansions are given by
$\displaystyle v_{0}=(\mu-1)/T^{2}+(1-J_{1}/T)(1-J_{2}/T),$ (108)
$\displaystyle\begin{aligned}
v_{1}=&-2/3+(2J_{2}+3J_{1})/3T-(\mu-1+J_{1}J_{2})/T^{2}\\\
&-(\mu-1-J_{2}T+J_{1}J_{2})^{3}/3T^{4},\end{aligned}$ (109)
$\displaystyle\varepsilon=(T_{c}-T)/T_{c},$ (110) $\displaystyle
a_{0}=2T_{c}/T,$ (111) $\displaystyle\begin{aligned}
a_{1}&=-2+(2J_{2}+J_{2}^{3}+3J_{1})/T\\\
&+J_{2}(-1+J_{1}J_{2}+\mu)^{2}/T^{3}\\\
&-2(1+J_{2}^{2})(-1+J_{1}J_{2}+\mu)/T^{2},\end{aligned}$ (112)
$\displaystyle\begin{aligned} a_{2}&=-2+2\beta J_{2}+\beta J_{2}^{3}+3\beta
J_{1}\\\ &+\beta^{2}(1+J_{2}^{2})(-1+J_{1}J_{2}+\mu),\end{aligned}$ (113)
$\displaystyle a_{3}=-2/3+(2J_{2}+J_{2}^{3}+3J_{1})/3T.$ (114)
In addition, we introduced in Eq. (31) the coefficients $D_{11}(m,\dot{m})$,
$D_{12}(m,\dot{m})=D_{21}(m,\dot{m})$ and $D_{22}(m,\dot{m})$, which read:
$\displaystyle D_{11}=1-m(m+\dot{m}),$ (115) $\displaystyle
D_{12}=[1-m(m+\dot{m})][-1+\beta J_{1}(1-(m+\dot{m})^{2})]$ (116)
$\displaystyle\begin{aligned} D_{22}&=[1-m(m+\dot{m})][-1+\beta
J_{1}(1-(m+\dot{m})^{2})]^{2}\\\
&+[1-h(h+\dot{h})]\beta^{2}[1-(m+\dot{m})^{2}]^{2}\end{aligned}$ (117)
with
$h=T\tanh^{-1}(m+\dot{m})-J_{1}m$ (118)
and
$\dot{h}(m,\dot{m})=T\frac{Y(m,\dot{m})+\dot{m}}{1-(m+\dot{m})^{2}}-J_{1}\dot{m}.$
(119)
In the main text, we use the coefficients evaluated at $m=0$ and $\dot{m}=0$,
which simplify to:
$\displaystyle D_{11}(0,0)=1,$ (120) $\displaystyle D_{12}(0,0)=-1+J_{1}/T,$
(121) $\displaystyle D_{22}(0,0)=1/T^{2}+(J_{1}/T-1)^{2}.$ (122)
## Appendix C Entropy production
In steady-state, the entropy production can be identified with the entropy
flux, which is defined from the microscopic configurations
$\mathcal{C}=\{s_{1},...,s_{N},h_{1},...,h_{N}\}$ as
$\Sigma=\sum_{\mathcal{C},\mathcal{C}^{\prime}}W(\mathcal{C}^{\prime}|\mathcal{C})P(\mathcal{C})\,\ln\frac{W(\mathcal{C}^{\prime}|\mathcal{C})}{W(\mathcal{C}|\mathcal{C}^{\prime})}\,.$
(123)
Note that Eq. (123) is equivalent to the definition (80) given in the main
text, up to a symmetrization of expression (123). To compute the entropy
production, we aim at changing the sum over the configurations into integrals
over $m$ and $\dot{m}$. We thus replace $\sum_{\mathcal{C}}$ by $\int
dmd\dot{m}\sum_{\mathcal{C}\in S(m,\dot{m})}$, where $S(m,\dot{m})$ denotes
the ensemble of configurations $\mathcal{C}$ with $m(\mathcal{C})=m$ and
$\dot{m}(\mathcal{C})=\dot{m}$. We transform the integral over
$\mathcal{C}^{\prime}$ into a sum over all possible transitions. We recall
that spin reversals are labelled with $k=1$ and field reversals with $k=2$,
keeping track of the sign $\sigma=\pm$ of the variable prior to reversal. We
denote as $W_{k}^{\sigma}$ the coarse-grained transition rates given in Eq.
(26) and $n_{k}^{\sigma}$ the fraction of possible transitions,
$n_{1}^{\pm}=\frac{1}{2}(1\pm m),\quad n_{2}^{\pm}=\frac{1}{2}(1\pm h).$ (124)
We find
$\Sigma=N\sum_{k,\sigma}\left\langle
W_{k}^{\sigma}(\mathbf{x})\ln\left[\frac{W_{k}^{\sigma}(\mathbf{x})\,n_{k}^{-\sigma}(\mathbf{x}+\frac{\sigma\mathbf{d}_{k}}{N})}{W_{k}^{-\sigma}(\mathbf{x}+\frac{\sigma\mathbf{d}_{k}}{N})\,n_{k}^{\sigma}(\mathbf{x})}\right]\right\rangle$
(125)
with the shorthand notation $\mathbf{x}=(m,\dot{m})$. To further simplify
notations, we make the dependence on $\mathbf{x}$ implicit in what follows. At
leading order in $N$, we find
$\Sigma=N\sum_{k}\left\langle\left(W_{k}^{+}-W_{k}^{-}\right)\ln\left[\frac{W_{k}^{+}n_{k}^{-}}{W_{k}^{-}n_{k}^{+}}\right]\right\rangle.$
(126)
We define
$\Sigma_{k}=N\left\langle\left(W_{k}^{+}-W_{k}^{-}\right)\ln\left[\frac{W_{k}^{+}n_{k}^{-}}{W_{k}^{-}n_{k}^{+}}\right]\right\rangle,$
(127)
such that $\Sigma=\Sigma_{1}+\Sigma_{2}$.
For $k=1$, we have $W_{1}^{+}-W_{1}^{-}=\dot{m}/2$ and
$\ln\left[\frac{W_{k}^{+}n_{k}^{-}}{W_{k}^{-}n_{k}^{+}}\right]=2\tanh^{-1}(m+\dot{m}),$
(128)
thus we find that
$\Sigma_{1}=N\left\langle\dot{m}\tanh^{-1}(m+\dot{m})\right\rangle.$ (129)
In the main text, we showed that close to a transition, one generically has a
scaling behavior $\dot{m}\sim\varepsilon^{\alpha}$ (with $\alpha>0$ an
exponent depending on the specific transition considered) and $\langle
m\dot{m}\rangle=0$. Thus, keeping the lowest order in $\varepsilon$, we find
$\Sigma_{1}/N=\langle\dot{m}^{2}\rangle.$ (130)
For $k=2$, we recall that we have
$h(m,\dot{m})=-J_{1}m+T\tanh^{-1}[m+\dot{m}]$ and we introduce
$\dot{h}=-h+\tanh[\beta(J_{2}h+(1-\mu)m)]$, the equivalent of $\dot{m}$ for
the fields variables $h_{i}$, given in Eq. (119) as a function of $m$ and
$\dot{m}$. We find $\Sigma_{2}=N\langle\dot{h}\tanh^{-1}(h+\dot{h})\rangle$.
As spins and fields play symmetric roles, one also has $\langle
h\dot{h}\rangle=0$ close to a transition. Using that
$Y(m,\dot{m})=-V^{\prime}(m)$ and $\langle V^{\prime}(m)\dot{m}\rangle=0$ at
the lowest order in $\varepsilon$, we find
$\frac{\Sigma_{2}}{N}=\langle
T^{2}V^{\prime}(m)^{2}+(T-J_{1})^{2}\dot{m}^{2}\rangle.$ (131)
Gathering contributions, one finds
$\frac{\Sigma}{N}=\left[1+(T-J_{1})^{2}\right]\left\langle\dot{m}^{2}\right\rangle+T^{2}\left\langle
V^{\prime}(m)^{2}\right\rangle$ (132)
at leading order in $\varepsilon$ and $N$.
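Equation (132) also provides a direct estimator of the entropy production per site from steady-state samples; a minimal sketch, assuming arrays m and mdot of sampled values and a callable Vprime:

```python
import numpy as np

def entropy_production_per_site(m, mdot, Vprime, T, J1):
    # Leading-order estimator of Sigma/N from Eq. (132).
    return (1.0 + (T - J1)**2) * np.mean(mdot**2) + T**2 * np.mean(Vprime(m)**2)
```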
## Appendix D Moderate-$N$ approximation of Sec. VI
In this appendix, we give the exact expressions of $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ for moderate values of $N$ for the Type-II
discontinuous transition of Sec. VI. For moderate values of $N$, only large
values of $V(m)$ contribute, thus we consider that $V(m)=v_{4}m^{10}/10$.
Using Eq. (37), we find
$f(H)=AH^{6/5}\,,$ (133)
with
$A=\frac{9\Gamma\left(\frac{3}{10}\right)^{2}a_{1}}{(2^{6}5^{4}v_{4})^{1/5}\sqrt{\pi}\Gamma\left(\frac{1}{10}\right)}\,.$
(134)
Thus, using the definition of $\langle m^{2}\rangle$ and
$\langle\dot{m}^{2}\rangle$ from Eq. (47) we find:
$\displaystyle\langle m^{2}\rangle=\frac{c_{m}}{(a_{1}v_{4})^{1/6}}N^{-1/6},$
(135)
$\displaystyle\langle\dot{m}^{2}\rangle=c_{\dot{m}}\frac{v_{4}^{1/6}}{a_{1}^{5/6}}N^{-5/6},$
(136)
with
$\displaystyle
c_{m}=\frac{5^{1/3}\Gamma\left(\frac{3}{10}\right)\Gamma\left(\frac{2}{3}\right)}{3\times
2^{2/5}\pi}\left(\frac{\Gamma\left(\frac{13}{10}\right)}{\Gamma\left(\frac{11}{10}\right)}\right)^{5/6}\\!\left(\frac{\Gamma\left(\frac{9}{5}\right)}{\Gamma\left(\frac{8}{5}\right)}\right)^{1/6}\\!\approx
0.53,$ (137) $\displaystyle
c_{\dot{m}}=\frac{5^{2/3}\Gamma\left(\frac{3}{5}\right)\Gamma\left(\frac{4}{3}\right)\left(\Gamma\left(\frac{11}{10}\right)\Gamma\left(\frac{9}{5}\right)\right)^{5/6}}{2\sqrt{\pi}\Gamma\left(\frac{8}{5}\right)^{11/6}\Gamma\left(\frac{13}{10}\right)^{5/6}}\approx
1.33.$ (138)
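The quoted numerical values can be reproduced directly from the Gamma-function expressions of Eqs. (137) and (138):

```python
from math import gamma, pi, sqrt

c_m = (5**(1 / 3) * gamma(3 / 10) * gamma(2 / 3) / (3 * 2**(2 / 5) * pi)
       * (gamma(13 / 10) / gamma(11 / 10))**(5 / 6)
       * (gamma(9 / 5) / gamma(8 / 5))**(1 / 6))
c_mdot = (5**(2 / 3) * gamma(3 / 5) * gamma(4 / 3)
          * (gamma(11 / 10) * gamma(9 / 5))**(5 / 6)
          / (2 * sqrt(pi) * gamma(8 / 5)**(11 / 6) * gamma(13 / 10)**(5 / 6)))
print(round(c_m, 2), round(c_mdot, 2))  # 0.53 1.33
```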
## References
* Acebrón _et al._ [2005] J. A. Acebrón, L. L. Bonilla, C. J. Pérez Vicente, F. Ritort, and R. Spigler, Rev. Mod. Phys. 77, 137 (2005).
* Gupta _et al._ [2014] S. Gupta, A. Campa, and S. Ruffo, J. Stat. Mech.: Theor. Exp. , R08001 (2014).
* Risler _et al._ [2004] T. Risler, J. Prost, and F. Jülicher, Phys. Rev. Lett. 93, 175702 (2004).
* Risler _et al._ [2005] T. Risler, J. Prost, and F. Jülicher, Phys. Rev. E 72, 016130 (2005).
* Nicolis [1986] G. Nicolis, Rep. Prog. Phys. 49, 873 (1986).
* Kamino _et al._ [2017] K. Kamino, Y. Kondo, A. Nakajima, M. Honda-Kitahara, K. Kaneko, and S. Sawai, Proc. Natl. Acad. Sci. USA 114, E4149 (2017).
* Wang and Tang [2019] S.-W. Wang and L.-H. Tang, Nat. Commun. 10, 5613 (2019).
* Saha _et al._ [2020] S. Saha, J. Agudo-Canalejo, and R. Golestanian, Phys. Rev. X 10, 041009 (2020).
* You _et al._ [2020] Z. You, A. Baskaran, and M. C. Marchetti, Proc. Natl. Acad. Sci. USA 117, 19767 (2020).
* Cao _et al._ [2015] Y. Cao, H. Wang, Q. Ouyang, and Y. Tu, Nat. Phys. 11, 772 (2015).
* Nguyen _et al._ [2018] B. Nguyen, U. Seifert, and A. C. Barato, J. Chem. Phys. 149, 045101 (2018).
* Aufinger _et al._ [2022] L. Aufinger, J. Brenner, and F. C. Simmel, Nat. Commun. 13, 2852 (2022).
* Devailly _et al._ [2015] C. Devailly, C. Crauste-Thibierge, A. Petrosyan, and S. Ciliberto, Phys. Rev. E 92, 052312 (2015).
* Andrae _et al._ [2010] B. Andrae, J. Cremer, T. Reichenbach, and E. Frey, Phys. Rev. Lett. 104, 218102 (2010).
* Duan _et al._ [2019] D. Duan, B. Niu, and J. Wei, Chaos, Solitons and Fractals 123, 206 (2019).
* Gualdi _et al._ [2015] S. Gualdi, J.-P. Bouchaud, G. Cencetti, M. Tarzia, and F. Zamponi, Phys. Rev. Lett. 114, 088701 (2015).
* Yi _et al._ [2015] S. D. Yi, S. K. Baek, G. Chevereau, and E. Bertin, J. Stat. Mech.: Theor. Exp. , P11001 (2015).
* Collet _et al._ [2016] F. Collet, M. Formentin, and D. Tovazzi, Phys. Rev. E. 94, 042139 (2016).
* De Martino and Barato [2019] D. De Martino and A. C. Barato, Phys. Rev. E 100, 062123 (2019).
* Dai Pra _et al._ [2020] P. Dai Pra, M. Formentin, and P. Guglielmo, J. Stat. Phys. 179, 690 (2020).
* Crawford [1991] J. D. Crawford, Rev. Mod. Phys. 63, 991 (1991).
* Fei _et al._ [2018] C. Fei, Y. Cao, Q. Ouyang, and Y. Tu, Nat. Commun. 9, 1434 (2018).
* Gaspard [2002] P. Gaspard, J. Chem. Phys. 117, 8905 (2002).
* Barato and Seifert [2016] A. C. Barato and U. Seifert, Phys. Rev. X 6, 041053 (2016).
* Barato and Seifert [2017] A. C. Barato and U. Seifert, Phys. Rev. E 95, 062409 (2017).
* Oberreiter _et al._ [2022] L. Oberreiter, U. Seifert, and A. C. Barato, Phys. Rev. E 106, 014106 (2022).
* Remlein _et al._ [2022] B. Remlein, V. Weissmann, and U. Seifert, Phys. Rev. E 105, 064101 (2022).
* Sagués _et al._ [2007] F. Sagués, J. M. Sancho, and J. García-Ojalvo, Rev. Mod. Phys. 79, 829 (2007).
* Xu _et al._ [2020] H.-Y. Xu, Y.-P. Luo, J.-W. Wu, and M.-C. Huang, Physica D 411, 132612 (2020).
* Crochik and Tomé [2005] L. Crochik and T. Tomé, Phys. Rev. E 72, 057103 (2005).
* Xiao _et al._ [2008] T. J. Xiao, Z. Hou, and H. Xin, J. Chem. Phys. 129, 114506 (2008).
* Xiao _et al._ [2009] T. Xiao, Z. Hou, and H. Xin, J. Phys. Chem. B 113, 9316 (2009).
* Barato and Hinrichsen [2012] A. C. Barato and H. Hinrichsen, J. Phys. A: Math. Theor. 45, 115005 (2012).
* Tomé and de Oliveira [2012] T. Tomé and M. J. de Oliveira, Phys. Rev. Lett. 108, 020601 (2012).
* Noa _et al._ [2019] C. E. F. Noa, P. E. Harunari, M. J. de Oliveira, and C. E. Fiore, Phys. Rev. E 100, 012104 (2019).
* Martynec _et al._ [2020] T. Martynec, S. H. L. Klapp, and S. A. M. Loos, New J. Phys. 22, 093069 (2020).
* Seara _et al._ [2021] D. S. Seara, B. B. Machta, and M. P. Murrell, Nat. Commun. 12, 392 (2021).
* Le Bellac [1992] M. Le Bellac, _Quantum and Statistical Field Theory_ (Oxford Science Publications, Oxford, 1992).
* Meibohm and Esposito [2022] J. Meibohm and M. Esposito, Phys. Rev. Lett. 128, 110603 (2022).
* Holtzman and Raz [2022] R. Holtzman and O. Raz, Commun. Phys. 5, 280 (2022).
* Aron and Chamon [2020] C. Aron and C. Chamon, SciPost Phys. 8, 074 (2020).
* Guislain and Bertin [2023] L. Guislain and E. Bertin, Phys. Rev. Lett. 130, 207102 (2023).
* Fruchart _et al._ [2021] M. Fruchart, R. Hanai, P. Littlewood, and V. Vitelli, Nature 592, 363 (2021).
* Martin _et al._ [2023] D. Martin, D. S. Seara, Y. Avni, M. Fruchart, and V. Vitelli, (2023), arXiv:2307.08251.
* Collet [2014] F. Collet, J. Stat. Phys. 157, 1301 (2014).
* Sinelschikov _et al._ [2023] D. Sinelschikov, A. Poggialini, M. F. Abbate, and D. De Martino, Emergence of collective self-oscillations in minimal lattice models with feedback (2023), arXiv:2306.01823 .
* Touchette [2009] H. Touchette, Phys. Rep. 478, 1 (2009).
* Knessl _et al._ [1985] C. Knessl, B. J. Matkowsky, Z. Schuss, and C. Tier, SIAM J. Appl. Math. 46, 1006 (1985).
* Graham and Tél [1987] R. Graham and T. Tél, Phys. Rev. A 35, 1328 (1987).
* Graham and Tél [1984] R. Graham and T. Tél, J. Stat. Phys. 35, 729 (1984).
* Graham [1989] R. Graham, J. Stat. Phys. 54, 1207 (1989).
* Gillespie [2007] D. T. Gillespie, Annu. Rev. Phys. Chem. 58, 35 (2007).
* [53] See Supplemental Material at XXX.
* Schnackenberg [1976] J. Schnackenberg, Rev. Mod. Phys. 48, 571 (1976).
* Gaspard [2004] P. Gaspard, J. Stat. Phys. 117, 599 (2004).
Supplementary Information: Discontinuous phase transition from ferromagnetic
to oscillating states in a nonequilibrium mean-field spin model
## Corrections to the large deviation function for $\mu=\mu_{l}$
Stochastic simulations are made for values of $\varepsilon$ and $N$ for which
the assumptions of small $\varepsilon$ and large $N$ are not fully satisfied.
# Statistical estimation of full-sky radio maps from 21cm array visibility
data
using Gaussian Constrained Realisations
Katrine A. Glasscock1 (0000-0001-6894-0902; E-mail<EMAIL_ADDRESS>, KAG), Philip Bull1,2 (0000-0001-5668-3101), Jacob Burba1 (0000-0002-8465-9341), Hugh Garsden1 (0009-0001-3949-9342), Michael J. Wilensky1 (0000-0001-7716-9312)
1Jodrell Bank Centre for Astrophysics, University of Manchester, Manchester M13 9PL, UK
2Department of Physics and Astronomy, University of Western Cape, Cape Town 7535, South Africa
(Accepted XXX. Received YYY; in original form ZZZ; 2024)
###### Abstract
An important application of next-generation wide-field radio interferometers
is making high dynamic range maps of radio emission. Traditional deconvolution
methods like CLEAN can give poor recovery of diffuse structure, prompting the
development of wide-field alternatives like Direct Optimal Mapping and
$m$-mode analysis. In this paper, we propose an alternative Bayesian method to
infer the coefficients of a full-sky spherical harmonic basis for a drift-scan
telescope with potentially thousands of baselines. This method can precisely encode
the uncertainties and correlations between the parameters used to build the
recovered image. We use Gaussian Constrained Realisations (GCR) to efficiently
draw samples of the spherical harmonic coefficients, despite the very large
parameter space and extensive sky-regions of missing data. Each GCR solution
provides a complete, statistically-consistent gap-free realisation of a full-
sky map conditioned on the available data, even when the interferometer’s
field of view is small. Many realisations can be generated and used for
further analysis and robust propagation of statistical uncertainties. In this
paper, we present the mathematical formalism of the spherical harmonic GCR-
method for radio interferometers. We focus on the recovery of diffuse emission
as a use case, along with validation of the method against simulations with a
known diffuse emission component.
###### keywords:
reionisation, observations, large scale structure of universe – methods:
statistical – techniques: interferometric
## 1 Introduction
Through measurements of the Cosmic Microwave Background (CMB), significant
progress has been made in studying the very early Universe (Planck
Collaboration et al., 2020), but the subsequent epochs up to and including the
Epoch of Reionisation still leave much to be discovered. A promising method
to explore these intermediate epochs of cosmic history is the redshifted 21 cm
line from neutral hydrogen. The signal can be used as a cosmological probe of
structure formation, as it traces the distribution and evolution of the
neutral hydrogen that fills the early Universe and forms the first stars and
galaxies during Cosmic Dawn and also provides an inverse tracer of the
reionisation of the intergalactic medium during the later Epoch of
Reionisation (EoR) (Pritchard & Loeb, 2012; Liu & Shaw, 2020). Importantly,
this tracer is redshift dependent, providing line-of-sight distance
information that adds an extra dimension compared to the CMB (Furlanetto et
al., 2006; Morales & Wyithe, 2010; Liu & Shaw, 2020).
Studies of the 21 cm signal are often divided into either measuring the
spatially averaged _global signal_, or measuring the statistical properties
of the spatial fluctuations in the brightness temperature field. The 21 cm
signal from the above-mentioned epochs is redshifted to the range $z\sim
27-6$, corresponding to frequencies of $50$–$200\,\mathrm{MHz}$. A
number of experiments have been built with the purpose of searching for this
signal, and many have already reported upper limits. A non-exhaustive list
includes the Murchison Widefield Array; MWA (Bowman et al., 2013; Tingay et
al., 2013; Wayth et al., 2018), the Donald C. Backer Precision Array to Probe
the Epoch of Reionisation; PAPER (Parsons et al., 2012; Ali et al., 2015), The
LOw Frequency Array; LOFAR (van Haarlem et al., 2013; Patil et al., 2017), and
the Hydrogen Epoch of Reionization Array; HERA (DeBoer et al., 2017; The HERA
Collaboration et al., 2022).
The biggest technical challenge to date is the proper handling of foreground
contaminants and how they are distorted by the receiving instrumentation. The
dominant contaminant is typically the diffuse emission that is present at all
parts of the sky, and is roughly a factor of around $10^{5}$ times brighter
than the underlying 21 cm signal (Liu & Shaw, 2020). The main component of
diffuse emission is the Galactic synchrotron emission, but there are also
contributions from extragalactic free-free emission, bright Galactic radio
point sources, and unresolved extragalactic point sources (Santos et al.,
2005). In addition there may be as-yet unknown diffuse components, e.g.
associated with apparent excess background emission as claimed by ARCADE 2
(Fixsen et al., 2011; Singal et al., 2010) and OVRO-LWA (Dowell & Taylor,
2018), or potential dark matter annihilation signals (Evoli et al., 2014;
Lopez-Honorez et al., 2016). Being able to robustly separate known but
uncertain Galactic and extra-galactic components from novel sources of
emission would greatly aid the physical interpretation of all of these
effects, particularly as highly sensitive but complex next-generation arrays
such as SKAO come online.
The issue of modelling diffuse emission is especially important for close-
packed radio arrays. Sparse interferometers such as the VLA or SKA-MID are
essentially blind to large-scale emission, as the antenna configuration
resolves out large angular scales. Compact arrays with smaller antennas, however, are designed to make high dynamic range measurements on large angular scales, which means the bright diffuse emission becomes an unavoidable issue.
Imaging with interferometers is complicated, particularly for diffuse
emission, as the effect of the interferometer response (its point spread
function; PSF) must be deconvolved. For regular close-packed arrays, there is
often highly incomplete coverage of the $(u,v)$-plane, since there are many
baselines of the same length. This ‘redundancy’ of the baselines results in
artefacts like grating lobes in the PSF (Dillon & Parsons, 2016).
The typical lack of uniform $(u,v)$-coverage has given rise to the use of
specialised deconvolution algorithms, where the so-called _dirty image_ is
deconvolved to result in a more uniform image with imaging artifacts removed
or suppressed. Many traditional pipelines make use of the CLEAN algorithm,
which works by iteratively removing and replacing bright sources in the dirty
image with their calculated point spread functions (PSFs) to get rid of the
point source side lobes (Högbom, 1974; Cornwell, 2009). CLEAN,
however, does not correct for diffuse emission and additionally the statistics
of the resulting image are not well known, particularly as CLEAN acts as a
non-linear transformation of the data. Some improvements have been made to the
CLEAN algorithm through Multi-Scale CLEAN (Cornwell, 2008) and WSCLEAN
(Offringa & Smirnov, 2017), which can be used in either (or both) multi-scale
and multi-frequency mode. Frequency dependence is introduced by dividing the
full bandwidth into several output channels and then treating them
individually. In multi-scale mode, further iterations are introduced using
sub-loops and going through the scales one-by-one by convolving the dirty
image with the corresponding space-kernel. Even so, both methods still build
upon the same CLEAN principle of using maxima to replace sources with an ideal
PSF response.
An alternative approach is Direct Optimal Mapping (DOM), which uses a maximum
likelihood method to estimate the beam-weighted map of the sky using a pixel basis (Dillon et al., 2015; Xu et al., 2022). DOM is thus a one-shot process
that estimates both the mean and the covariance in a lossless way, i.e.
without losing information on model parameters. The model is built around a
measurement matrix $\boldsymbol{\mathbf{A}}$ that maps a pixelated sky map to
the visibilities. The choice of $\boldsymbol{\mathbf{A}}$ can be problematic
however, and regularisation is needed in order to overcome degeneracies and
ensure uniqueness of the map solutions that are obtained by ‘inverting’ the
measuring matrix. Furthermore, the noise covariance $\boldsymbol{\mathbf{N}}$
is also affected by the measurement matrix, requiring a choice for how to
beam-weight the recovered map. While in principle the DOM formulation accounts
for the whole sky, much smaller faceted maps are typically used to reduce the
computational costs.
Like DOM, $m$-mode analysis is also typically implemented as a maximum
likelihood estimator that will produce an estimate of the mean and covariance
of the sky, although now in a spherical harmonic basis instead (Shaw et al.,
2014, 2015; Eastwood et al., 2018). This method is developed specifically for
drift-scan telescopes, taking advantage of simplifications that arise after
applying a spherical transform to both the sky intensity and beam function, as
well as performing a Fourier transform of the visibilities along the time
direction, into $m$-mode space. As with DOM, proper care must be taken to
ensure uniqueness of the solutions, so the inverse of the beam transfer
function is replaced with the Moore-Penrose pseudo-inverse. Both DOM and
$m$-mode analysis are maximum likelihood estimators, and therefore only
provide moments of the underlying posterior distribution of the sky model
parameters (although see Chapter 6 of Kriele (2022), which recasts $m$-mode
analysis as a Bayesian linear model).
In this paper, we introduce a statistical method that, like $m$-mode analysis,
is based on a spherical harmonic description of the sky (although we do not
use an $m$-mode transform here). We use an explicitly Bayesian treatment of
the map reconstruction problem, introducing a prior to the system which
provides a regularisation, particularly in regions of missing data, and using
the Gaussian Constrained Realisation (GCR) method to make sampling of the
large number of spherical harmonic coefficients tractable. Each sample vector
drawn from the posterior is a valid realisation of the whole sky, without any
missing data regions, and we can treat it like we would treat ‘ideal’ data
without noise etc. Uncertainties can be propagated by passing multiple
realisations through subsequent processing steps and then inspecting their
distribution. By using an explicit forward model, we can also avoid ad-hoc
beam weighting steps, which makes interpretation of the results more
straightforward. Furthermore, this sampler can be incorporated in a larger
‘Gibbs sampling’ framework that also samples other sky and instrumental
parameters (e.g. Eriksen et al., 2008; Kennedy et al., 2023; CHIME
Collaboration, 2023).
Naturally, calculating multiple samples will require more computational power
than solving once for the maximum likelihood (or for the maximum _a
posteriori_) solution. We propose some approaches to reduce the computational
burden, such as specialising to the drift-scan telescope case and using Wigner
$\mathfrak{D}$-matrices to account for the sky rotating above the instrument.
This removes the need to explicitly calculate the visibility response function
(which maps the spherical harmonic coefficients to the visibilities) at
multiple times.
The paper is structured as follows: In Section 2 we build and present the
visibility response model and cover the Bayesian methods of Wiener filtering
and Gaussian Constrained Realisations. Both methods are related to Gibbs sampling, a future prospect of this work. Section 3 describes the diffuse emission foreground model as well as the visibility simulation and sampling parameters. In Section 4 we present the performance for our reference (standard) case as well as a comparative analysis of various noise and prior levels, together with ten other simulation scenarios including varying the field of view of the array. Finally, in Section 5 we conclude.
## 2 Bayesian sampling of diffuse foregrounds
In this section we develop the mathematical formalism to statistically sample
spherical harmonic modes on the whole sky given radio interferometer
visibility data.
### 2.1 Data model
The data model in this work is based on a single diffuse component modelled as
a sum of spherical harmonic modes at each frequency. Ultimately, this will
only be one component of a more comprehensive sky model involving
contributions from point sources, the EoR signal, etc. (which do not need to
be modelled using a spherical harmonic basis). It would also be possible to
define a particular functional form for the frequency dependence and interpret
the spherical harmonic coefficients as amplitudes of this spectral template,
e.g. by defining the specific intensity of the sky as
$\displaystyle I(\nu,\theta,\phi)$ $\displaystyle=\sum_{\ell,m}a_{\ell
m}Y_{\ell m}(\theta,\phi)f(\nu)$ (1)
with frequency dependence $f(\nu)$. A common choice of frequency dependence is
a power law with spectral index $\beta$. These scenarios involve only simple modifications to the single-component (and per-frequency) model that we consider here, however, and so for the sake of simplicity we will not consider them further.
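Written out for a pivot frequency $\nu_{0}$ (introduced here purely for illustration), such a power-law template would read
$f(\nu)=\left(\frac{\nu}{\nu_{0}}\right)^{\beta}.$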
We begin by writing out an expression for the complex visibilities observed by
an interferometer. A baseline formed by antennas $i,j$ and separated by vector $\boldsymbol{\mathbf{b}}$ roughly probes the sky brightness temperature at Fourier mode $\boldsymbol{\mathbf{u}}=\boldsymbol{\mathbf{b}}/\lambda$ (Liu & Shaw, 2020). The visibility that the interferometer measures on that baseline $\boldsymbol{\mathbf{b}}$ is then given by,
$\displaystyle
V_{ij}(\nu,t)=\int\text{d}^{2}\Omega\,A_{i}(\nu,\boldsymbol{\mathbf{\theta}})A_{j}^{*}(\nu,\boldsymbol{\mathbf{\theta}})\,I(\nu,\boldsymbol{\mathbf{\theta}})\,e^{-2\pi i\,\boldsymbol{\mathbf{u}}(\nu)\cdot\boldsymbol{\mathbf{\theta}}}+n_{ij},$
(2)
where $A_{i}$ and $A_{j}^{*}$ are the E-field beams for each antenna, $I$ is
the specific intensity of the sky, the exponential term describes the fringes
where $\boldsymbol{\mathbf{\theta}}$ is the topocentric coordinates of the
sources, and $n_{ij}$ is the baseline dependent noise.
The per-frequency spherical harmonic expansion of the sky model is given as,
$\displaystyle I(\nu,\theta,\phi)=\sum_{\ell,m}a_{\ell m}(\nu)Y_{\ell
m}(\theta,\phi),$ (3)
where $\theta$ and $\phi$ are the declination and right ascension in
equatorial coordinates respectively. For a typical drift scan array the motion
of the sky is especially simple in an equatorial coordinate system, as it is a
simple right ascension (or LST) rotation.
There are many different conventions on which $a_{\ell m}$ modes to include,
when working with spherical harmonics. For the sake of reducing computational
resources, it makes sense to manipulate the $a_{\ell m}$-vector to contain
only the minimal required modes to solve the system. Firstly, it should be
noted that even though the visibilities are complex entities, the actual sky
needs to be a real field. The following symmetry relation for the $a_{\ell m}$
modes must be satisfied,
$\displaystyle{a_{\ell m}}=(-1)^{m}a^{*}_{l,-m},$ (4)
or, split into real and imaginary parts,
$\displaystyle a_{\ell,+m}^{\rm re}=(-1)^{m}a_{\ell,-m}^{\rm re}\quad{\rm
and}\quad a_{\ell,+m}^{\rm im}=(-1)^{m+1}a_{\ell,-m}^{\rm im}\,.$ (5)
This means that only $m\geq 0$ modes are necessary and all negative $m$ modes
are naturally excluded from the $a_{\ell m}$-vector.
Generally, the spherical harmonic coefficients are complex-valued. This can
present complications when dealing with vectors of $a_{\ell m}$-values
numerically. We therefore split the $a_{\ell m}$-values into their real- and
imaginary parts. The $a_{\ell m}$-vector now consists first of all the real-
and then the imaginary modes. Moreover, since the $m=0$ imaginary parts will
always be zero in order to satisfy the (anti-)symmetry condition of Eq. 5,
these spherical harmonic modes are removed while sampling and injected back in
afterwards. For a given $\ell_{\text{max}}$ this leads to a total number of
$a_{\ell m}$ modes of $N_{\text{modes}}=({\ell_{\text{max}}}+1)^{2}$.
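A minimal sketch of this packing convention follows; the dictionary-keyed layout is illustrative only, not the convention of any particular library:

```python
import numpy as np

def pack_alm(alm, lmax):
    """Pack complex a_lm modes (m >= 0 only, keyed by (l, m)) into one real vector."""
    re = [alm[(l, m)].real for l in range(lmax + 1) for m in range(l + 1)]
    # The m = 0 imaginary parts vanish for a real sky, so they are dropped
    # here and injected back as zeros after sampling.
    im = [alm[(l, m)].imag for l in range(lmax + 1) for m in range(1, l + 1)]
    x = np.concatenate([re, im])
    assert x.size == (lmax + 1)**2  # N_modes = (lmax + 1)^2
    return x
```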
We next define the visibility response operator, $\delta V_{ij}^{\ell
m}(\nu,t)$, which gives the projection from a spherical harmonic vector to a
set of radio interferometer visibilities. The visibility response is dependent
on the antenna array configuration and location, which means the primary beam
function is also absorbed into this operator. The full visibility will then be
the visibility response function applied to the $a_{\ell m}$ modes summed over
all $(\ell,m)$-values,
$\displaystyle V_{ij}=\sum_{\ell m}\delta V_{ij}^{\ell m}(\nu,t)\,a_{\ell m},$
(6)
where the visibility response is defined as,
$\displaystyle\delta V_{ij}^{\ell
m}(\nu,t)=\int\text{d}^{2}\Omega\,A_{i}(\nu,\boldsymbol{\mathbf{\theta}})A_{j}^{*}(\nu,\boldsymbol{\mathbf{\theta}})\,Y_{\ell
m}(\alpha,\delta)\,e^{-2\pi i\,\boldsymbol{\mathbf{u}}_{ij}(\nu)\cdot\boldsymbol{\mathbf{\theta}}}.$
(7)
In this expression, $\boldsymbol{\mathbf{\theta}}$ are in topocentric
coordinates (e.g. local altitude/azimuth), and
$\alpha=\alpha(\boldsymbol{\mathbf{\theta}},t)$ and
$\delta=\delta(\boldsymbol{\mathbf{\theta}},t)$ are the RA and Dec coordinates
corresponding to a given topocentric pointing $\boldsymbol{\mathbf{\theta}}$
at local sidereal time $t$. This operator can be computed ahead of time for a
given array configuration, which determines the available baseline vectors
$\mathbf{u}$ and set of antenna E-field beams $A_{i}$.
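As an illustration, a brute-force quadrature version of Eq. (7) for a single $(\ell,m)$ mode and one baseline could look as follows; all inputs (pixelised beams, per-pixel solid angle, coordinate arrays) are assumptions about one possible discretisation, not the pipeline used in this work:

```python
import numpy as np
from scipy.special import sph_harm

def delta_V(l, m, beam_i, beam_j, u_ij, theta_hat, ra, dec, domega):
    """One element of the visibility response, Eq. (7).

    beam_i, beam_j : complex E-field beams at the sky pixels
    u_ij           : baseline vector in wavelengths, shape (3,)
    theta_hat      : topocentric unit vectors of the pixels, shape (npix, 3)
    ra, dec        : equatorial pixel coordinates in radians
    domega         : solid angle per pixel
    """
    ylm = sph_harm(m, l, ra, np.pi / 2 - dec)        # Y_lm(alpha, delta)
    fringe = np.exp(-2j * np.pi * theta_hat @ u_ij)  # interferometric fringe
    return np.sum(beam_i * np.conj(beam_j) * ylm * fringe) * domega
```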
### 2.2 Wigner $\mathfrak{D}$-matrix formalism
In the above section, the visibility response is defined as a function of both
time and frequency. Instead of simulating the visibility response operator for
each time separately, we can choose a single reference time and apply a
rotation to get the response at any other desired LST. This is made simpler by
choosing the spherical harmonic basis to align with the rotation axis of the
sky as seen by a drift-scanning telescope, i.e. by defining the spherical
harmonic basis in equatorial coordinates. In this case, a simple azimuthal
rotation around the celestial axis implements the mapping between RA and LST,
while the declination stays constant.
We begin by defining the rotation matrix $\mathcal{R}_{\ell m\,\ell^{\prime}m^{\prime}}$ that transforms the spherical harmonic modes at a reference sidereal time $t_{\text{ref}}$ to the updated sidereal time,
$\displaystyle a_{\ell m}(t)$ $\displaystyle=\mathcal{R}_{\ell
m\ell^{\prime}m^{\prime}}(t)\,a_{\ell^{\prime}m^{\prime}}(t_{\text{ref}}).$
(8)
For spherical harmonics, this is given by the Wigner $\mathfrak{D}$-matrix,
which is a unitary matrix in an irreducible representation of the SO(3) group
(Tung, 1985). The spherical harmonics transform as
$\displaystyle Y_{\ell}^{m}(\theta^{\prime},\phi^{\prime})$
$\displaystyle=\sum_{m^{\prime}=-\ell}^{\ell}Y_{\ell}^{m^{\prime}}(\theta,\phi)\,{\mathfrak{D}}_{m^{\prime}m}^{\ell}(\alpha,\beta,\gamma),$
(9)
where ${\mathfrak{D}}_{mm^{\prime}}^{\ell}(\alpha,\beta,\gamma)$ is the
$\mathfrak{D}$-matrix given by the three Euler angles (that describe
sequential rotations around three axes). Note that for any rotation with $\mathfrak{D}$-matrices, a spherical harmonic of degree $\ell$ and order $m$ transforms into a linear combination of spherical harmonics of the same degree $\ell$. The $\mathfrak{D}$-matrix itself is given as
$\displaystyle{\mathfrak{D}}^{\ell}_{m^{\prime}m}(\alpha,\beta,\gamma)$
$\displaystyle=e^{-im^{\prime}\alpha}\,d^{\ell}_{m^{\prime}m}(\beta)\,e^{-im\gamma},$
(10)
where $d^{\ell}_{m^{\prime}m}(\beta)$ are the corresponding reduced Wigner
matrices. A table of the (small) $d$-matrices can be found in Dong (1994).
Note the lack of mixing between $\ell$ modes.
For a zenith-pointing drift-scan telescope, the sky rotation in a time
interval $t-t_{\rm ref}$ can be mapped directly to an azimuthal rotation
angle. The full visibility given in Eq. 6 can then be written as,
$\displaystyle V_{ij}(\nu,t)=\sum_{\ell m}\delta V_{ij}^{\ell
m}(\nu,t_{\text{ref}})\,\sum_{m^{\prime}}{\mathfrak{D}}^{\ell}_{m\,m^{\prime}}(t)\,a_{\ell
m^{\prime}}(t_{\rm ref}),$ (11)
where $\delta V_{ij}^{\ell m}(\nu,t_{\text{ref}})$ is the pre-computed
visibility response at a reference time $t_{\text{ref}}$,
${\mathfrak{D}}^{\ell}_{m\,m^{\prime}}(t)$ is the appropriate
$\mathfrak{D}$-matrix for a given LST, and $a_{\ell m^{\prime}}$ are the
spherical harmonic modes of the sky at $t_{\text{ref}}$. If we let $p$ and $q$
label the frequency and time samples, $\nu_{p}$ and $t_{q}$, we can rewrite
the above in index notation as
$\displaystyle V_{ijpq}={X}_{ijpq\ell m}\,\,a_{\ell m},$ (12)
where repeated indices denote summation, and ${X}_{ijpq\ell m}$ is the
combined operator for the two sums in Eq. 11. The combined visibility response
and $\mathfrak{D}$-matrices now form the full
$\boldsymbol{\mathbf{X}}$-operator, which is linear and equivalent to that of
Eq. 6.
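For the special case needed here, a rotation about the celestial (z-)axis, Eq. 10 is diagonal: ${\mathfrak{D}}^{\ell}_{m^{\prime}m}(\alpha,0,0)=e^{-im\alpha}\,\delta_{m^{\prime}m}$, so advancing the sky by an LST interval reduces to a per-$m$ phase factor. A minimal sketch, assuming the $a_{\ell m}$ are stored as a complex array in healpy ordering (the sign of the phase depends on the rotation convention and should be validated against a full rotation code):

```python
import numpy as np
import healpy as hp

def advance_alm_by_lst(alm, lmax, dlst_hours):
    """Apply the diagonal z-axis Wigner D-matrix, a_lm -> exp(-i m alpha) a_lm,
    with alpha the rotation angle for the given LST interval."""
    alpha = 2.0 * np.pi * dlst_hours / 24.0   # 24 h of LST = full rotation
    ell, m = hp.Alm.getlm(lmax)               # (l, m) for each healpy alm index
    return alm * np.exp(-1j * m * alpha)
```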
### 2.3 Wiener filter – the maximum a posteriori solution
Now, having established our visibility model based on the linear operator
$\boldsymbol{\mathbf{X}}$ for the visibility response using a spherical
harmonic basis, we want to obtain the posterior distribution for the
spherical harmonic modes given the data, noise, and priors, in order to
generate samples of the diffuse emission sky. The steps outlined in this
section and the next are steps of increasing complexity in a Bayesian
hierarchy, ultimately preparing for implementing this model into a full Gibbs
sampling scheme. Here, we go from the Wiener filter solution, a simple maximum a posteriori (MAP) solution, to drawing independent samples from the Gaussian Constrained Realisation (GCR) equation.
Thus, as a first step in the Bayesian hierarchy, we find the Wiener filter solution, which also acts as the central value for drawing the realisations.
From Bayes’s theorem we can get the conditional distribution on the $a_{\ell
m}$ modes,
$\displaystyle
P(\boldsymbol{\mathbf{a}}\,|\,\boldsymbol{\mathbf{d}},\boldsymbol{\mathbf{N}},\boldsymbol{\mathbf{a}}_{0},\boldsymbol{\mathbf{S}})\propto\,$
$\displaystyle
P(\boldsymbol{\mathbf{d}}|\,\boldsymbol{\mathbf{a}},\boldsymbol{\mathbf{N}})\,P(\boldsymbol{\mathbf{a}}|\,\boldsymbol{\mathbf{a}}_{0},\boldsymbol{\mathbf{S}}),$
(13)
when conditioning on the known covariances of the signal
$\boldsymbol{\mathbf{S}}$ and noise $\boldsymbol{\mathbf{N}}$, the data-vector
$\boldsymbol{\mathbf{d}}$, and the mean of the prior on the $a_{\ell m}$ modes
$\boldsymbol{\mathbf{a}}_{0}$. For simplification of the notation we have
dropped the $(\ell,m)$-indices and simply denote the $a_{\ell m}$-vector as
$\boldsymbol{\mathbf{a}}$ as well as dropping the baseline indices ($i,j$). It
is assumed that generally the signal prior and covariance will be independent
of the data and noise covariance. For simplicity we are keeping
$\boldsymbol{\mathbf{S}}$ fixed, however, in a full Gibbs sampling scheme it
would be possible to sample $\boldsymbol{\mathbf{S}}$ as well. Furthermore, we
assume that the noise is Gaussian, thus rewriting Eq. 13 as:
$\displaystyle
P(\boldsymbol{\mathbf{a}}\,|\,\boldsymbol{\mathbf{d}},\boldsymbol{\mathbf{N}},\boldsymbol{\mathbf{a}}_{0},\boldsymbol{\mathbf{S}})\propto
e^{-(\boldsymbol{\mathbf{d}}-\boldsymbol{\mathbf{Xa}})^{\dagger}\boldsymbol{\mathbf{N}}^{-1}(\boldsymbol{\mathbf{d}}-\boldsymbol{\mathbf{Xa}})}e^{-(\boldsymbol{\mathbf{a}}-\boldsymbol{\mathbf{a}}_{0})^{\dagger}\boldsymbol{\mathbf{S}}^{-1}(\boldsymbol{\mathbf{a}}-\boldsymbol{\mathbf{a}}_{0})}.$
(14)
For a Gaussian distribution the maximum of the posterior distribution is the same as the maximum of the log-posterior, hence we can find the Wiener filter solution by setting the partial derivative of the exponent of Eq. 14 with respect to the signal, $\boldsymbol{\mathbf{a}}$, to zero:
$\displaystyle\left.\frac{\partial}{\partial\boldsymbol{\mathbf{a}}}\right|_{\boldsymbol{\mathbf{a}}=\boldsymbol{\mathbf{\hat{a}}}}\left(-(\boldsymbol{\mathbf{d}}-\boldsymbol{\mathbf{Xa}})^{\dagger}\boldsymbol{\mathbf{N}}^{-1}(\boldsymbol{\mathbf{d}}-\boldsymbol{\mathbf{Xa}})-(\boldsymbol{\mathbf{a}}-\boldsymbol{\mathbf{a}}_{0})^{\dagger}\boldsymbol{\mathbf{S}}^{-1}(\boldsymbol{\mathbf{a}}-\boldsymbol{\mathbf{a}}_{0})\right)=0,$
(15)
resulting in,
$\displaystyle\boldsymbol{\mathbf{d}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{X}}+\boldsymbol{\mathbf{a_{0}}}^{\dagger}\boldsymbol{\mathbf{S}}^{-1}=\boldsymbol{\mathbf{a}}^{\dagger}\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}+\boldsymbol{\mathbf{a}}^{\dagger}\boldsymbol{\mathbf{S}}^{-1}.$
(16)
Now, taking advantage of the fact that covariance matrices are Hermitian, we can take the conjugate transpose of the entire expression and rearrange to obtain the Wiener filter solution,
$\displaystyle\left[\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{X}}+\boldsymbol{\mathbf{S}}^{-1}\right]\boldsymbol{\mathbf{a}}_{\textup{wf}}=\left(\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{d}}+\boldsymbol{\mathbf{S}}^{-1}\boldsymbol{\mathbf{a}}_{0}\right).$
(17)
Returning to Eq. 14: the product of two multivariate Gaussians is itself a multivariate Gaussian. By completing the square it can be shown that the posterior is proportional to a multivariate Gaussian with inverse covariance matrix given as
$\displaystyle\boldsymbol{\mathbf{\Sigma}}^{-1}=\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{X}}+\boldsymbol{\mathbf{S}}^{-1},$
(18)
and where the Wiener filter solution acts as the mean,
$\displaystyle\boldsymbol{\mathbf{\hat{a}}}=\boldsymbol{\mathbf{\Sigma}}\left(\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{d}}+\boldsymbol{\mathbf{S}}^{-1}\boldsymbol{\mathbf{a}}_{0}\right).$
(19)
Even though the Wiener filter is the maximum a posteriori solution, it is
generally a biased estimator as described in more detail in Kennedy et al.
(2023). Furthermore, the Wiener filter is a summary statistic but we are
interested in generating actual samples that are complete and statistically-
consistent realisations of the full-sky map. For this, the Wiener filter can
instead act as the central value for drawing the samples.
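For small mode counts, Eqs. 17–19 can be checked with a direct dense solve. The sketch below is illustrative only (`X`, `d`, `N_inv`, `S_inv` and `a0` are placeholder arrays); the actual implementation uses a conjugate-gradient solver on the realified system (App. A):

```python
import numpy as np

def wiener_filter(X, d, N_inv, S_inv, a0):
    """Solve Eq. 17: [X^dag N^-1 X + S^-1] a_wf = X^dag N^-1 d + S^-1 a0.
    The left-hand side is the inverse posterior covariance of Eq. 18."""
    lhs = X.conj().T @ N_inv @ X + S_inv
    rhs = X.conj().T @ N_inv @ d + S_inv @ a0
    return np.linalg.solve(lhs, rhs)
```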
Figure 1: The absolute values of the visibility response operator $X_{\text{re}}$ on a log-scale and with dimensions (Nmodes) $\times$ (NLSTs $\times$ Nfreq $\times$ Nbl). Here it is shown for $\ell_{\text{max}}=20$, LST $=$ 0–8 $\mathrm{h}$, $\nu=$ 100–101 $\mathrm{MHz}$ and 10 close-packed antennas that form 45 baselines, as shown in Fig. 2. The darker it is, the less that frequency/LST/baseline contributes to measuring that $(\ell,m)$-value. Note that this is the _real-part only_ of $X$ (the imaginary part shows similar structure) but it covers both the real and imaginary parts of the $a_{\ell m}$ modes.
### 2.4 Gaussian Constrained Realisations
In order to draw samples from the full conditional distribution
$P(\boldsymbol{\mathbf{a}}\,|\,\boldsymbol{\mathbf{d}},\boldsymbol{\mathbf{N}},\boldsymbol{\mathbf{a}}_{0},\boldsymbol{\mathbf{S}})$,
we can take advantage of the fact that we can describe our model as a
multivariate Gaussian distribution. The Wiener filter solution described by
its covariance and mean given in Eqs. 18 and 19 acts as a first step in the hierarchy, to which we can add random normal realisation terms of the signal, $\boldsymbol{\mathbf{\omega}}_{a}$, and noise, $\boldsymbol{\mathbf{\omega}}_{d}$, correctly scaled by their respective covariances to draw the constrained realisations $\boldsymbol{\mathbf{a}}_{\textup{cr}}$ (Eriksen et al., 2008). This leads to the Gaussian Constrained Realisation (GCR) equation,
$\displaystyle\left[\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{X}}+\boldsymbol{\mathbf{S}}^{-1}\right]\boldsymbol{\mathbf{a}}_{\textup{cr}}=\left(\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{d}}+\boldsymbol{\mathbf{S}}^{-1}\boldsymbol{\mathbf{a}}_{0}+\boldsymbol{\mathbf{S}}^{-\frac{1}{2}}\boldsymbol{\omega}_{a}+\boldsymbol{\mathbf{X}}^{\dagger}\boldsymbol{\mathbf{N}}^{-\frac{1}{2}}\boldsymbol{\mathbf{\omega}}_{d}\right).$
(20)
In expectation, Eq. 20 reduces back to the mean $\langle\boldsymbol{\mathbf{a}}_{\textup{cr}}\rangle=\boldsymbol{\mathbf{\hat{a}}}$, since the unit-variance Gaussian vectors $\boldsymbol{\mathbf{\omega}}_{a}$ and $\boldsymbol{\mathbf{\omega}}_{d}$ have zero mean. By solving Eq. 20
repeatedly we can draw independent samples of the $a_{\ell m}$ modes on the
sky consistent with the given data vector $\boldsymbol{\mathbf{d}}$, chosen
signal prior $\boldsymbol{\mathbf{a}}_{0}$, the visibility response operator
$\boldsymbol{\mathbf{X}}$, and the covariances $\boldsymbol{\mathbf{S}}$ and
$\boldsymbol{\mathbf{N}}$. For the sake of keeping computational costs low, we
have kept to 100 samples per set of parameters for this paper.
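A single constrained realisation then only adds the two random terms to the right-hand side of the Wiener filter solve sketched in Sec. 2.3. The version below assumes a realified system (all quantities real, cf. App. A) and diagonal $\boldsymbol{\mathbf{N}}$ and $\boldsymbol{\mathbf{S}}$, so the matrix square roots are element-wise:

```python
import numpy as np

def draw_gcr_sample(X, d, N_inv, S_inv, a0, rng):
    """Solve Eq. 20 for one constrained realisation a_cr; repeated calls
    with fresh omega vectors give independent samples."""
    omega_a = rng.standard_normal(S_inv.shape[0])
    omega_d = rng.standard_normal(d.shape[0])
    lhs = X.T @ N_inv @ X + S_inv
    rhs = (X.T @ N_inv @ d + S_inv @ a0
           + np.sqrt(np.diag(S_inv)) * omega_a           # S^{-1/2} omega_a
           + X.T @ (np.sqrt(np.diag(N_inv)) * omega_d))  # X^dag N^{-1/2} omega_d
    return np.linalg.solve(lhs, rhs)

# e.g. 100 samples, as used throughout this paper:
# samples = [draw_gcr_sample(X, d, N_inv, S_inv, a0, np.random.default_rng(i))
#            for i in range(100)]
```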
In the case studied in this paper, data is only missing outside of the FOV defined by the primary beam and the horizon, both of which are dealt with through the visibility response function. In case of actual gaps in the data,
for instance due to RFI flagging, one can define a set of weights
$\boldsymbol{\mathbf{w}}$ to redefine the inverse noise covariance,
$\displaystyle\boldsymbol{\mathbf{N}}_{\boldsymbol{\mathbf{w}}}^{-1}=\boldsymbol{\mathbf{ww}}^{T}\circ\boldsymbol{\mathbf{N}}^{-1},$
(21)
where $\circ$ denotes element-wise multiplication. Using this weighted noise
covariance in Eq. 20 means we automatically zero the contribution from the
data inside the mask. The signal covariance, however, does not necessarily go
to zero inside the masked regions and thereby _takes over_ the signal
estimation in the absence of information from the likelihood function. Note
that this is not equivalent to drawing samples from the prior distribution and
simply filling-in the masked regions. The samples generated within the flagged
regions will be informed by both the prior and data from the un-flagged
regions.
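As a one-line illustration of Eq. 21, with `w` a per-sample weight vector (e.g. 0 for RFI-flagged visibilities and 1 otherwise):

```python
import numpy as np

def weighted_noise_inverse(w, N_inv):
    """Eq. 21: N_w^{-1} = (w w^T) o N^{-1}, with 'o' the element-wise
    (Hadamard) product; flagged samples then drop out of the likelihood
    terms in Eq. 20 automatically."""
    return np.outer(w, w) * N_inv
```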
In the future this process will be done within the context of Gibbs sampling
(Geman & Geman, 1984), which is a Bayesian method to sample directly from the
joint posterior of more complicated high-dimensional distributions. In this
instance, we focus only on the distribution for the spherical harmonic
coefficients, and use a conjugate gradient solver to obtain
$\boldsymbol{\mathbf{a}}_{\textup{cr}}$ from the GCR equation. Most solvers do not handle complex quantities well, so to overcome this we found it necessary to ‘realify’ the full system as described in App. A.
When implemented into a Gibbs sampling scheme, it will enable us to also
sample the signal covariance $\boldsymbol{\mathbf{S}}$. Gibbs sampling is an iterative method that samples from each conditional distribution in turn and thereby effectively samples from the joint posterior. At each iteration it updates the conditional variables with the samples obtained from the previous step,
$\displaystyle\boldsymbol{\mathbf{a}}_{i+1}\leftarrow P\!\left(\boldsymbol{\mathbf{a}}\,|\,\boldsymbol{\mathbf{d}},\boldsymbol{\mathbf{N}},\boldsymbol{\mathbf{a_{0}}},\boldsymbol{\mathbf{S}}_{i}\right),$ (22a)
$\displaystyle\boldsymbol{\mathbf{S}}_{i+1}\leftarrow P\!\left(\boldsymbol{\mathbf{S}}\,|\,\boldsymbol{\mathbf{a}}_{i+1}\right),$ (22b)
where $\leftarrow$ represents generating samples through evaluating the
conditional distribution and $i$ indexes the Gibbs iteration number. First, the ${a_{\ell m}}$ modes are sampled; we note that the conditional distribution takes the same form as in Eq. 13. We can therefore use the same
arguments as before; we are still dealing with a multivariate Gaussian
distribution and therefore end up at the GCR equation of Eq. 20. The sample
obtained from Eq. 22a already contains the conditioning on the noise
covariance $\boldsymbol{\mathbf{N}}$, data $\boldsymbol{\mathbf{d}}$, and
prior mean $\boldsymbol{\mathbf{a_{0}}}$, so there is no need to condition
again on these parameters when sampling $\boldsymbol{\mathbf{S}}$ in Eq. 22b.
We leave a full exploration of sampling the signal covariance (which encodes
the spherical harmonic angular power spectra) to later work.
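A sketch of what this future two-step scheme would look like, reusing the `draw_gcr_sample` solve from Sec. 2.4; `sample_S`, `S_init` and `n_gibbs` are hypothetical placeholders for the deferred signal-covariance step and its configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
S = S_init                                            # initial signal covariance
for i in range(n_gibbs):
    S_inv = np.linalg.inv(S)
    a = draw_gcr_sample(X, d, N_inv, S_inv, a0, rng)  # Eq. 22a: GCR solve
    S = sample_S(a)                                   # Eq. 22b: deferred to later work
```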
Figure 2: The standard array layout used in the majority of the simulations, consisting of 10 close-packed antennas with a diameter of $14\text{\,}\mathrm{m}$ and a separation of $14.6\text{\,}\mathrm{m}$.
## 3 Simulations and sky model
The structure of the simulated visibility response function has already been
laid out in Sec. 2.1 where it is also made clear that the simulation is
dependent on which LSTs are chosen, $\ell_{\text{max}}$, frequency $\nu$,
baseline orientation and length $b$, and dish size $\theta_{D}$. This section
will describe how the simulations were obtained as well as the standard
parameters used for the reference results, which from here on will be referred
to as the _standard_ case. More realisations have been drawn using variations of both the simulation/observation parameters and the noise and prior levels. The details of these parameter variations are explained in Sections 4.2–4.4.
### 3.1 Precomputing the visibility response operator
To simulate the visibility response function we used the
hydra (https://github.com/HydraRadio/) code, which builds upon matvis
(Kittiwisit et al., 2023) to simulate the visibility per $a_{\ell m}$-mode as
well as per baseline, frequency, and LST. The absolute values of the real part of the visibility response operator, $\boldsymbol{\mathbf{X}}_{\rm re}$, are shown in Fig. 1, which also shows how the operator’s dimensions are packed. The
visibility response is only simulated for Stokes-I. We have assumed the most
general case where we want to solve for a different set of ${a_{\ell m}}$
modes for each frequency channel. The simulations cover the narrow bandwidth
of $100$–$102\text{\,}\mathrm{MHz}$ in just two channels, in order to keep the
problem small for the time being. The LST range spans
$0$–$8\text{\,}\mathrm{h}$ in 10 steps. The primary beam model is calculated
using pyuvsim (https://github.com/RadioAstronomySoftwareGroup/pyuvsim), which
provides an analytic Gaussian beam model assumed to be identical for each
antenna, with width $\theta\simeq\lambda/b$, where $b$ is the length of the baseline $\boldsymbol{\mathbf{b}}$. As standard, the distance between each antenna is $14.6\text{\,}\mathrm{m}$ (HERA-like) and the dishes themselves are modelled as hyperbolic dishes of diameter $\theta_{\text{D}}=14\text{\,}\mathrm{m}$ with no side-lobes. The model is
simple, but has been specifically chosen so that we can clearly understand and
validate the results. The impact of different beams is an interesting study in
its own right, which we defer to future work.
The standard array layout used in the simulations is a small close-packed redundant array to represent a subsection of the full HERA array. A redundant layout benefits from sampling the same modes many times, thus increasing sensitivity especially to the relatively large angular scales relevant for EoR experiments. In this configuration we consider only 10 receivers, as shown in Fig. 2. The subset is sufficiently small to keep computational costs low, and with its resulting 45 baselines it still offers multiple redundant baseline groups and variation in both direction and baseline length. As with HERA, the array used in the simulations is a drift-scan instrument pointing towards zenith, placed at latitude $\sim-31^{\circ}$. From Fig. 1 it is already clear that some baselines contribute more than others. For instance, the baseline of antennas $(3,6)$ is not very responsive, which is the long E-W $43.8\text{\,}\mathrm{m}$ baseline (Fig. 2), whereas all the short $14.6\text{\,}\mathrm{m}$ baselines contribute significantly more.
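A small sketch of the layout bookkeeping: 10 antennas give $10\times 9/2=45$ baselines, and rounding the baseline vectors groups the redundant ones. The positions below are a hypothetical hexagonal-row stand-in for the exact Fig. 2 coordinates:

```python
import numpy as np
from itertools import combinations

spacing = 14.6  # antenna separation in metres
# hypothetical close-packed rows standing in for the Fig. 2 layout
ants = np.array([(c * spacing + (r % 2) * spacing / 2,
                  r * spacing * np.sqrt(3) / 2)
                 for r in range(3) for c in range(4)])[:10]
pairs = list(combinations(range(10), 2))
assert len(pairs) == 45                          # 45 baselines, as in the text
groups = {}
for i, j in pairs:
    key = tuple(np.round(ants[j] - ants[i], 3))  # baseline vector as group key
    groups.setdefault(key, []).append((i, j))
# len(groups) < 45: several redundant groups sample the same sky modes
```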
### 3.2 Diffuse emission sky model
As we aim to sample the diffuse emission foregrounds specifically, no other
foregrounds are included in the foreground model. For the purely diffuse
emission foreground sky model we use the Global Sky Model (GSM2016; Zheng et
al., 2017) as implemented by pyGDSM (Price, 2016). GSM2016 is based on a principal component analysis of a large set of multi-frequency datasets (spanning $10\text{\,}\mathrm{MHz}$ to $5\text{\,}\mathrm{THz}$), combined with a blind separation into physical components that reveals five components identified as synchrotron emission, free-free emission, cold dust, warm dust, and the CMB anisotropy. The sky model is provided as frequency-dependent HEALPix maps, of which we use only the GSM2016 map at the reference frequency of $100\text{\,}\mathrm{MHz}$. The foreground map is then passed to healpy to get the true $a_{\ell m}$ modes, $\boldsymbol{\mathbf{a}}_{\text{true}}$. As standard, the resolution is set to nside $=128$, which corresponds to a HEALPix angular pixel size of $0.46^{\circ}$. The maximum mode of the operator, however, is set to ${\ell_{\text{max}}}=20$, which corresponds to $\sim 9^{\circ}$. The true sky is used as input
for the full data model, which is obtained by applying the visibility response
to the true $a_{\ell m}$ modes.
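A sketch of this step, assuming the pyGDSM interface (the class name varies between versions, e.g. GlobalSkyModel2016 vs. GlobalSkyModel16, and should be checked against the installed package):

```python
import healpy as hp
from pygdsm import GlobalSkyModel2016   # assumed class name; see lead-in

gsm = GlobalSkyModel2016(freq_unit='MHz')
sky_map = gsm.generate(100.0)                   # GSM2016 map at 100 MHz
sky_map = hp.ud_grade(sky_map, nside_out=128)   # nside = 128 (0.46 deg pixels)
a_true = hp.map2alm(sky_map, lmax=20)           # complex a_lm up to lmax = 20
```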
### 3.3 Data and noise model
The data input for the analysis in this work is based on the visibility
response simulations. In the future, this will be done with real visibility
data instead, but for now simulations with a known diffuse sky component serve
as a means of validation. Here, we use the visibility response operator
$\boldsymbol{\mathbf{X}}$ to map the true-sky spherical harmonic coefficient
vector $\boldsymbol{\mathbf{a}}_{\rm true}$ into visibilities and add a noise
vector $\boldsymbol{\mathbf{n}}$ to the simulated visibilities,
$\displaystyle\boldsymbol{\mathbf{d}}=\boldsymbol{\mathbf{X}}\boldsymbol{\mathbf{a}}_{\text{true}}+\boldsymbol{\mathbf{n}}.$
(23)
The noise on the data model is added as uncorrelated Gaussian random noise,
$\displaystyle\boldsymbol{\mathbf{n}}\sim\mathcal{N}(0,\boldsymbol{\mathbf{N}}),$
(24)
given by the noise covariance $\boldsymbol{\mathbf{N}}$ with components
$N_{ij}$ modelled by the simulated auto-correlation visibilities $V_{ii}$ and
$V_{jj}$ as given by the radiometer equation,
$\displaystyle N_{ij}=\sigma_{ij}^{2}=\frac{V_{ii}V_{jj}}{\Delta t\Delta\nu}.$
(25)
Since the foregrounds are spectrally smooth, there is no need for very high
spectral resolution and we choose $\Delta\nu=1\text{\,}\mathrm{MHz}$. The time resolution is set to $\Delta t=60\text{\,}\mathrm{s}$, both to avoid
issues with sky rotation (smearing) and to still ensure good signal-to-noise
ratio.
Lastly, we have applied a $10\%$ prior on the ${a_{\ell m}}$ values. To
implement this, we have defined a prior covariance $\boldsymbol{\mathbf{S}}$
that is diagonal, with values
${S}_{nm}=(0.1\,a_{\text{true},n})^{2}\delta_{nm}$. The prior mean,
$\boldsymbol{\mathbf{a}}_{0}$, is not set equal to
$\boldsymbol{\mathbf{a}}_{\text{true}}$ however, as this would effectively be
inputting information about the correct answer into the inference ahead of
time, which is not realistic. Instead, we draw values for
$\boldsymbol{\mathbf{a}}_{0}$ from a Gaussian distribution centered on the
true $a_{\ell m}$ values, with the same prior covariance, i.e.
$\boldsymbol{\mathbf{a}}_{0}\sim\mathcal{N}(\boldsymbol{\mathbf{a}}_{\text{true}},\boldsymbol{\mathbf{S}})$.
This ensures that the prior mean that we input in the GCR equation (Eq. 20) is
set consistently with the chosen prior model (GSM2016), while deviating from
the true values of the parameters (as would be the case in a real analysis).
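Putting Eqs. 23–25 and the prior choice together, a minimal sketch in the realified picture; `X`, `a_true` and the per-visibility auto-correlation products `V_auto_prod` ($=V_{ii}V_{jj}$) are placeholders assumed precomputed (Sec. 3.1):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, dnu = 60.0, 1e6                        # 60 s integrations, 1 MHz channels
sigma2 = V_auto_prod / (dt * dnu)          # Eq. 25: noise variance per sample
d = X @ a_true + np.sqrt(sigma2) * rng.standard_normal(X.shape[0])   # Eq. 23
S_diag = (0.1 * a_true) ** 2               # diagonal 10% prior covariance
a0 = a_true + np.sqrt(S_diag) * rng.standard_normal(a_true.shape)    # a0 ~ N(a_true, S)
```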
Figure 3: The real- and imaginary parts of the difference of the mean of 100
samples of $a_{\ell m}$ modes from the GCR solver and the true sky using the
_standard_ configuration and parameters. Any _outliers_ from the central region are marked with $\blacktriangle$, and it is notable that these occur more frequently in the low-$\ell$ region. As described in Sec. 2.1 the $m=0$
imaginary modes ($\times$) should always be zero and the GCR solver therefore
does not solve for this.
Figure 4: Maps generated from the estimated spherical harmonic modes on the
sky using the _standard_ configuration and parameters. _Upper left:_ The true
sky given by pyGSM with ${\ell_{\text{max}}}=20$ and nside$=128$. Note that
the rippled structure comes from the true sky and not from the GCR solver.
_Upper right:_ The estimated sky based on the mean of 100 samples from the GCR
solver. The spherical harmonic modes can be seen in Fig. 3. _Bottom left:_
Fractional difference between the mean and the true sky adjusted to show
differences $<10\%$, which coincides with the beam region; see also Fig. 5 for a close-up of the region. _Bottom right:_ The standard deviation of the
difference of the estimated and true sky.
## 4 Results
In the following we show results of the samples obtained under various parameter configurations of the sampler itself and by investigating
the effects of different observation scenarios. The purpose of this analysis
is to demonstrate the basic behaviours of the method and to validate that it
can recover the true sky to a reasonable level. The comparative analysis uses
simulations in order to understand the behaviour we would expect from real
data as a function of different LST coverage, the effects of effectively
truncating the visibility response operator at different levels of
$\ell_{\text{max}}$, and the impact of increasing the FoV of the array. We
also dive into a stress-test of the prior vs. likelihood levels, making sure
that neither is too broad nor too narrow.
Before going into the comparative analysis, we show in more detail in Sec. 4.1
the results of our chosen reference – or, _standard_ – case, using the general
parameters described in Sec. 3, considering not only the fractional difference
between the posterior mean and true sky map but also looking at the recovered
$a_{\ell m}$ modes, followed by the full comparative analysis in Secs.
4.2–4.3. Unless otherwise stated, we draw 100 samples per scenario, as we
deemed this sufficient for calculating the relevant statistics of the
recovered spherical harmonic modes. As a precautionary check of this
assumption, we also consider a high-sample size case of
$N_{\text{samples}}\approx 3500$. Finally, Sec. 4.4 is dedicated to a closer
study of the special case of increasing the field of view by decreasing the
diameter of the antennas to $\theta_{\text{D}}=3\text{\,}\mathrm{m}$, which results in a FWHM at $100\text{\,}\mathrm{MHz}$ of $\sim 60^{\circ}$. Having a greater sky coverage should improve the sensitivity to lower $\ell$ modes. A large FoV can be
achieved with instruments like the MWA.
Figure 5: Cartesian projection of the fractional difference from the true sky using the _standard_ configuration and parameters. The RA and dec ranges have been narrowed to focus on the primary beam region. The FWHM of $12.3^{\circ}$ is illustrated in the corner of the figure. The white horizontal lines indicate a FWHM distance from the centre of the primary beam at $-31.7^{\circ}$. The simulated LST range is $0$–$8\text{\,}\mathrm{h}$, corresponding to an RA range of $0^{\circ}$–$120^{\circ}$. The centre of the beam region has fractional differences to the true sky of $<5\%$ and at the bounds $<10\%$.
Figure 6: Approximate range of $\ell$-value sensitivity for several baselines, as a function of frequency. The baselines displayed here are the five unique baseline lengths of the 10-dish standard configuration. The width (shaded regions) is given by the FWHM of the beam, $\delta\ell\simeq\pi/{\rm FWHM}$, shown here for the HERA-like case with FWHM $=12.3^{\circ}$. The two horizontal lines indicate ${\ell_{\text{max}}}=30$ (top, dark purple) and ${\ell_{\text{max}}}=20$ (bottom, magenta), and the black vertical line is set at $\nu=100\text{\,}\mathrm{MHz}$, the frequency used for the simulations in this work.
### 4.1 Realisations of the standard configuration
Figure 7: Cartesian projection of the fractional difference between the
posterior mean and true sky for multiple sets of samples with varied levels of
noise covariance $\boldsymbol{\mathbf{N}}$, prior covariance
$\boldsymbol{\mathbf{S}}$, prior mean $\boldsymbol{\mathbf{a}}_{0}$, and
increased sample size compared to the _standard_ case. The fractional
difference is defined as
$(\mu({\boldsymbol{\mathbf{a}}}_{\textup{cr}})-\boldsymbol{\mathbf{a}}_{\textup{true}})/\boldsymbol{\mathbf{a}}_{\textup{true}}$,
where $\mu$ indicates the sample mean. The two horizontal lines indicate a
FWHM distance from the centre of the beam.
The results from the Gaussian constrained realisations of the $a_{\ell m}$
modes for the standard case are shown in Figs. 3, 4, and 5. First, we define
the true-sky subtracted mean as the mean of the spherical harmonic
coefficients sampled by the GCR method minus the true-sky (input
model) spherical harmonic modes from pyGDSM. In Fig. 3 the true-sky subtracted
mean of 100 realisations of the $a_{\ell m}$ modes is shown with corresponding
error bars defined by the standard deviations of the samples. To be able to
show the details of the region around a difference of zero, the figure is
cropped to $\pm 100$, with any modes outside of this region marked with a $\blacktriangle$ together with their values. Since the $m=0$ imaginary modes are
artificially injected back in with a value of 0 (cf. Eq. 5), they are without
error bars but have been included and marked with $\times$ in the figure for
clarity. It is clear that the higher order modes are easiest to constrain and
that the lower order modes are far from the true sky.
In Fig. 6 the $\ell$-mode sensitivity given the baseline length and frequency
is shown. The shortest, and most frequent, baseline length in the standard
configuration is $b=14.6\text{\,}\mathrm{m}$, corresponding to an $\ell_{\textup{min}}\sim(\pi b)/\lambda\sim 15$ at a frequency of $100\text{\,}\mathrm{MHz}$. Note that at this frequency the baseline length
(in metres) roughly corresponds to the $\ell$-value. Thus we are not expecting
to be sensitive to $\ell\sim$ few, since modes with
$\Delta\theta\gtrsim\lambda/b_{\rm min}$ are resolved out by the
interferometer, and so we should not expect the data to constrain them. The
recovered values of these low-$\ell$ modes are instead prior-driven.
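A quick numerical check of this estimate, using the $\ell\sim\pi b/\lambda$ scaling together with the $\delta\ell\simeq\pi/{\rm FWHM}$ smearing width of Fig. 6:

```python
import numpy as np

c = 299_792_458.0
lam = c / 100e6                              # ~3 m at 100 MHz
dl = np.pi / np.radians(12.3)                # smearing width for FWHM = 12.3 deg
for b in [14.6, 25.3, 29.2, 43.8]:           # unique baseline lengths (m)
    print(f"b = {b:4.1f} m  ->  ell ~ {np.pi * b / lam:5.1f} +/- {dl:.1f}")
# b = 14.6 m gives ell ~ 15.3, matching the l_min ~ 15 quoted above
```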
In fact, Fig. 3 shows the spherical harmonic coefficients, each of which integrates information across the whole sky. Since the observed region is small (only $\sim$ 8.6% of the sky), the recovered $a_{\ell m}$ values are generally mostly determined by the prior even when the noise level is very low in the observed region; we suspect this is because the prior-dominated area is much larger. Hence, plotting the statistics of the recovered $a_{\ell m}$ values directly is not a very sensitive test of this method.
Instead, it is much more useful to look at the recovered sky in map-space, as
the difference between the observed and unobserved regions is much more clearly laid out. In Fig. 4 we show the maps of the true input sky (top left) given as
described in Sec. 3.2 compared to the recovered map defined by the mean of the
posterior (top right). It is noted that the recovered map shows lower values
for the large scale modes (just as the ${a_{\ell m}}$-values in Fig. 3)
compared to the true sky map, which we expect is due to a random fluctuation
rather than a bias. Fig. 4 also shows the fractional difference between these
two maps (bottom left) and the standard deviation of the recovered sky (bottom
right). The fractional difference between the posterior mean and true sky is
lowest within the primary beam region and the standard deviation is much
smaller in this area too, thus demonstrating the specific patch of sky that is
directly observed versus that constrained by the prior. This explains how we
are still able to recover most of the galactic structure, despite the majority
of it being outside the observed region of the sky. Note that between RA $=150^{\circ}$ and RA $=180^{\circ}$ there is another small low-residual region in the fractional difference map; however, this does not
appear in the standard deviation map. This low residual is simply due to a
random fluctuation rather than the true sky being well recovered in this
region, and it is therefore ignored.
Now focusing on the primary beam region alone, we show a Cartesian projection
of the fractional difference between the posterior mean and true sky in Fig.
5. The region is defined to be within a FWHM on either side of the central
latitude of $-31.7^{\circ}$. For the standard case with the antenna diameter of $\theta_{\text{D}}=14\text{\,}\mathrm{m}$, the FWHM is $12.3^{\circ}$. For the standard case
the sky is recovered to within 5% close to the centre of the beam region and
to within 10% within the FWHM bounds.
### 4.2 Noise and prior levels
Figure 8: The absolute value of the fractional difference between the posterior mean and true sky, taken as a slice through the centre of the primary beam region at dec $=-31.7^{\circ}$. Artefacts due to the pixelisation have been smoothed away by applying a pair of moving average filters to each of these curves. For all runs in this plot the LST range is $0$–$8\text{\,}\mathrm{h}$, corresponding to RA values of $0^{\circ}$–$120^{\circ}$. The light shaded region covers the FWHM of $12.3^{\circ}$ and the dark shaded region is outside of the primary beam. Within the LST range (white region) the standard and A1 cases have the lowest fractional difference between the posterior mean and true sky.
Figure 9: Cartesian projection of the fractional difference between the posterior mean and true sky for runs with variations to the standard observation/simulation parameters, such as increasing ${\ell_{\text{max}}}$, adding antennas for longer baselines, increasing the LST range, and increasing the cadence of observations. The fractional difference is defined as $(\mu({\boldsymbol{\mathbf{a}}}_{\textup{cr}})-\boldsymbol{\mathbf{a}}_{\textup{true}})/\boldsymbol{\mathbf{a}}_{\textup{true}}$, where $\mu$ indicates the sample mean. The two horizontal lines indicate a FWHM distance from the centre of the beam.
Figure 10: The absolute value of the fractional difference between the posterior mean and true sky, taken as a slice through the centre of the primary beam region at dec $=-31.7^{\circ}$. Artefacts due to the pixelisation have been smoothed away by applying a pair of moving average filters to each of these curves. Results B4–B5 have different LST/RA ranges, as indicated by the additional x-axes. Note that for B2, only the first half of the LST/RA range is shown, to match the $8\text{\,}\mathrm{h}$ span of the other runs. The light shaded region covers the FWHM of $12.3^{\circ}$ and the dark shaded region is outside of the primary beam. It is noticeable that all three ${\ell_{\text{max}}}=30$ results have higher fractional differences to the true sky than the ${\ell_{\text{max}}}=20$ results.
Table 1: Overview of all the cases analysed in this paper and their labels, along with which parameters are varied. A “—” indicates that the parameter is set as in the standard case. The labels are numbered after the order of appearance in Figs. 7 and 9.
| $N_{\rm samples}$ | Noise covariance | Prior covariance | Prior mean
---|---|---|---|---
A1 | $\sim 3500$ | — | — | —
A2 | — | — | — | $\boldsymbol{\mathbf{a}}_{0}=0$
A3 | — | $\boldsymbol{\mathbf{N}}\times 10^{4}$ | — | $\boldsymbol{\mathbf{a}}_{0}=0$
A4 | — | $\boldsymbol{\mathbf{N}}\times 10^{2}$ | — | —
A5 | — | $\boldsymbol{\mathbf{N}}\times 10^{2}$ | $\boldsymbol{\mathbf{S}}^{-1}=0$ | —
A6 | — | — | $\boldsymbol{\mathbf{S}}\times 10^{2}$ | —
A7 | — | $\boldsymbol{\mathbf{N}}\times 10^{4}$ | — | —
A8 | — | $\boldsymbol{\mathbf{N}}\times 10^{4}$ | $\boldsymbol{\mathbf{S}}^{-1}=0$ | —
A9 | — | — | $\boldsymbol{\mathbf{S}}\times 10^{4}$ | —
| ${\ell_{\text{max}}}$ | Number of LSTs | LST range |
B1 | ${\ell_{\text{max}}}=30$ | — | — |
B2 | — | $N_{\rm LST}=20$ | $0$–$16\text{\,}\mathrm{h}$ |
B3 | — | $N_{\rm LST}=20$ | $0$–$8\text{\,}\mathrm{h}$ |
B4 | — | $N_{\rm LST}=10$ | $16$–$24\text{\,}\mathrm{h}$ |
B5 | — | $N_{\rm LST}=10$ | $8$–$16\text{\,}\mathrm{h}$ |
| $N_{\rm ants}$ | Dish diameter | FWHM |
C1 | — | $3\text{\,}\mathrm{m}$ | $57.3^{\circ}$ |
In this section we explore how varying the noise covariance $\mathbf{N}$,
prior covariance $\mathbf{S}$, and prior mean $\boldsymbol{\mathbf{a}}_{0}$
will affect the results from the GCR-sampler compared to our chosen standard
case. The motivation for this is to ensure that a sufficient balance is reached between the data and prior information, and that the prior is not
dominating the system. The results of the various noise- and prior level runs
can be seen in Figs. 7 and 8. All the various runs have been labelled in order of their appearance in Fig. 7, and these labels will be used throughout the
paper. A full overview of all labels can also be seen in Table 1.
Before varying the noise and prior-levels, we examine whether using many more
samples ($\approx 3500$) would significantly change the results (case A1), in
order to check that only 100 samples are enough to produce statistically valid results. The fractional difference between the posterior mean and true sky for the A1 result is shown in Fig. 7, along with the standard case re-displayed for comparison. It is clear that the fractional differences of the two cases look almost identical. For even closer comparison we take a slice of the
centre of the beam of the absolute values of the fractional difference to the
true value, as shown in Fig. 8. As the figure shows, even when comparing the
fractional difference slices, the two cases only have a slight difference at
${\rm RA}\sim 10^{\circ}$, thus emphasising that
these two results are almost identical, which (together with the full beam
region fractional difference) justifies the use of 100 samples as adequate for
our purposes.
Next, we set the inverse prior covariance to zero, which may be interpreted as
the case where only the likelihood term is used and the prior is neglected. We
then multiply the noise covariance by a factor of $10^{2}$ and $10^{4}$, cases A5 and A8 respectively. This is done in order to confirm that removing
the information provided by the likelihood and/or prior does indeed lead to no
recovery of information of the sky, i.e. we demonstrate that the good recovery
in other cases is not simply due to the structure of the sky model or similar.
The fractional difference between the posterior mean and true sky for the A5
case ($\boldsymbol{\mathbf{S}}^{-1}=0,\mathbf{N}\times 10^{2}$) shows a slight
hint of the primary beam structure, but the high noise levels wash out the data and there is no prior information to assist the recovery of the sky. When
increasing the noise further in the A8 case
($\boldsymbol{\mathbf{S}}^{-1}=0,\mathbf{N}\times 10^{4}$), the recovered sky is lost entirely and only noise is left. In order to see how well the sampler
can recover the sky with no prior information, an extra run (not shown) was
made with standard noise level (but keeping the inverse prior covariance at
zero), which resulted in a fractional difference between the posterior mean
and true sky very similar to that of case A9 (i.e. where $\mathbf{S}\times 10^{4}$). Essentially, the very high value of the prior covariance $\mathbf{S}$
is the same as saying the inverse prior covariance
$\boldsymbol{\mathbf{S}}^{-1}$ is closer to zero. It is therefore not that
surprising that these two results would be similar. For both these cases the
prior is now so broad and uncertain that the only well-recovered part is the
observed region.
The prior drives the good recovery outside of the observed region, which is
very clear when comparing for instance case A7 ($\mathbf{S}=$ standard,
$\mathbf{N}\times 10^{4}$) to case A8 ($\mathbf{S}^{-1}=0,\mathbf{N}\times
10^{4}$). Without any prior information nothing is recovered – but with the
prior most of the sky is recovered well to within $\sim 10\%$. However, A7
lacks the primary beam structure that we saw in the standard case, so the
recovery here is due to the prior driving the solution since the broad noise
covariance is washing out the data. When the noise is lowered again in case A4
to $\mathbf{N}\times 10^{2}$ (while keeping $\mathbf{S}=$ standard), the
central beam structure starts to reappear. The good recovery of the sky
outside of the observed region is without a doubt driven by the prior – as the
data cannot say anything about this region anyway – but the _very_ good
recovery of the observed region is due to the data. In the standard case the
well-recovered sky is further limited to be close to the observed region. The
fact that the spherical harmonics are continuous means the sky will still be
constrained by the data a little outside of the observed region. This explains
why the standard deviation in Fig. 4 smoothly goes from a low to a high value.
An important test is to make sure that our choice of prior mean,
$\boldsymbol{\mathbf{a}}_{0}$, is not over-informing the sampler. Comparing
the A2 ($\mathbf{a}_{0}=0,\mathbf{N}=$ standard) and A3
($\mathbf{a}_{0}=0,\mathbf{N}\times 10^{4}$) cases to the standard case in
Fig. 7, it initially looks like setting the prior mean to zero results in a
poorly recovered sky. Indeed, this is the case if the noise is also increased
(as in A3), although generally increasing the noise by a factor of $10^{4}$ has yielded poor results (A7, A8, A9) and does not seem to be due to the change of the prior mean alone. Instead, when the noise level is kept at the standard level (case A2), the true sky is still well-recovered near the centre of the beam (i.e. recovery is not biased), but the residual is larger beyond the beam FWHM than in the standard case. Again, taking a slice of the absolute fractional difference at the central declination only (around dec $=-31.7^{\circ}$, Fig. 8) shows that the two $\mathbf{a}_{0}=0$ cases are not performing as well as the standard case. However, especially for the A2 case, at RA $=60^{\circ}$–$100^{\circ}$ it performs similarly to or (very) slightly better than the standard case. In most of our runs we have
a 10% prior with a prior mean scattered around the true value.
This is somewhat conservative (we are not assuming a ’correct’ prior mean and
thereby not enforcing the true solution), but there is still the question of
how sensitive the results are to prior choices. What the
$\boldsymbol{\mathbf{a}}_{0}=0$ runs (A2, A3) show is what happens when our
prior is typically wrong by $10\sigma$.
### 4.3 Effect of changing array and observing configurations
The performance of the sampler is affected by which spherical harmonic modes
are available to it. This can be tested by changing the ${\ell_{\text{max}}}$
or for instance by including longer baselines to increase sensitivity to
higher $\ell$ modes, as can also be gauged from Fig. 6. Since most of the
diffuse foreground power comes from the Galaxy, it is also interesting to
examine how changing the LST range will affect how well the sky is recovered.
All the test cases in the following section have been labelled in a similar
style to the previous section, now based on the order of appearance in Fig. 9.
A full overview of all labels can be seen in Table 1.
To capture regions of the entire HERA strip we examine LST ranges of $8$–$16\text{\,}\mathrm{h}$ and $16$–$24\text{\,}\mathrm{h}$ on top of the $0$–$8\text{\,}\mathrm{h}$ LST range; cases B5 and B4, and the standard case, respectively. The B4 case covers the brightest regions of the galactic diffuse emission, which can be seen in the RA range of $240^{\circ}$–$360^{\circ}$ (the reader's right-hand side) on the top-left plot of Fig. 4. We also examine the effects of increasing the data set to $N_{\textup{LST}}=20$, either by doubling the LST range and keeping the same cadence (case B2) – or, vice versa (case B3). The former results in greater sky coverage and the latter results in more information on the same sky region. The standard case cadence is one integration of duration $60\text{\,}\mathrm{s}$ per 48 minutes, equivalent to 10 samples evenly spread over an 8-hour LST range. Note, since the sky rotates $15^{\circ}\,\mathrm{h}^{-1}$, the beam-crossing time of the HERA-like array is $\sim 49\text{\,}\mathrm{min}$ given the FWHM of $12.3^{\circ}$. In this section we go over the
results from each of these cases and how they affect the resulting recovered
sky.
For the standard case we use ${\ell_{\text{max}}}=20$ because it is a
reasonable minimum for testing purposes that includes the shortest (most
sensitive) HERA baseline, see Fig. 6. In case B1, we see from Fig. 10 that
increasing ${\ell_{\text{max}}}$ to 30 results in a significantly degraded
fractional residual compared to ${\ell_{\text{max}}}$ of 20. While increasing
${\ell_{\text{max}}}$ brings the $b=25.3\text{\,}\mathrm{m}$ baseline group into the _directly_ constrained $\ell$ range (see Fig. 6), each baseline is actually sensitive to a broad range of $\ell$-values, as for example the primary beam contributes to smearing of the $\ell$-mode sensitivity. For instance, the $b=29.2\text{\,}\mathrm{m}$ baseline (antennas $(0,2)$) in Fig.
1 is clearly sensitive to $\ell$-values down to around $\ell\sim 8$ and the
$(0,5)$-baseline, which is a shorter $25.3\text{\,}\mathrm{m}$ baseline,
contributes mostly at $\ell\gtrsim 5$ but still has some visibility response
below this level. These baselines will therefore already contribute to some
degree at the ${\ell_{\text{max}}}=20$ level. However, when we increase the
${\ell_{\text{max}}}$ to 30 the total number of modes more than doubles, from
441 to 961. This stretches the available signal-to-noise of the data much
farther than in the ${\ell_{\text{max}}}=20$ case.
Next, we vary the LST ranges. First, we keep to 10 time steps and 8 hr ranges
(as in the standard case), but shift the ranges to cover different parts of
the sky. Fig. 9 shows that the range of $8$–$16\text{\,}\mathrm{h}$ (B5) performs better than both the standard $0$–$8\text{\,}\mathrm{h}$ range and the B4 case ($16$–$24\text{\,}\mathrm{h}$), which covers the brightest part of the galactic emission. As before, we inspect the slice at the central declination for a closer comparison, see Fig. 10. It is clear from the figure that within their respective observed regions all three sets of LST ranges actually perform comparably well, with only small differences. As one moves closer to the edge of the observed region, however, the B5 case shows some improvement over the standard case – and, especially, when compared to the B4 case. The difference in recovery level can simply be due to the shifting of the sampling points. For all the runs the simulated data are the same and share one specific realisation of the noise. Areas that have less noise will therefore always
have a higher signal-to-noise and should perform better.
Increasing the data volume should also help constrain the sky better. One way
to do this is to double the number of observation times from
$N_{\textup{LST}}=10$ to $20$, either by doubling the range and keeping the
cadence the same (case B2) or by increasing the cadence within the same LST
range (case B3). At first glance both results seem to perform similarly to or perhaps slightly better than the standard case, see Fig. 9. The recovered signal will be worse at the edges of the observed region. The B2 case is twice the size and encompasses the far edge of the B3 case at LST $=8\text{\,}\mathrm{h}$, and will naturally be better recovered at this LST than the B3 case. We will therefore limit the comparison to the $0$–$8\text{\,}\mathrm{h}$ range for the fractional residual slice in Fig. 10. The B3 result mostly shows improvement over the standard case, with the exception of RA $=20^{\circ}$–$40^{\circ}$, although it is exceeded by the performance of the B2 case, which not only performs better than the standard case but also remains closer to the truth at the lower edge ($0\text{\,}\mathrm{h}$) of the observed region. This hints that
increasing the observation time range (and thus increasing the size of the
observable sky) has a larger impact on the recovery of the sky than doubling
the data we have inside the same region. This can also be explained with a
larger observable sky being sensitive to larger scales (lower spherical
harmonic modes).
### 4.4 Improving sensitivity to larger scales
Figure 11: The real and imaginary parts of the difference of the mean of 100 samples of $a_{\ell m}$ modes from the GCR solver and the true sky, with the diameter of the receivers set to $\theta_{D}=3\text{\,}\mathrm{m}$, thereby increasing the field of view. As described in Sec. 2.1, the $m=0$ imaginary modes ($\times$) should always be zero and the GCR solver therefore does not solve for these. All residual ${a_{\ell m}}$ modes now lie within the $\pm 100$ boundaries and, including the uncertainties, are consistent with zero, except the zeroth $\ell$-mode, which still has some loss of power.
Figure 12: Resulting maps from the estimated spherical harmonic modes on the
sky with the diameter of the receivers set to $\theta_{D}=3\text{\,}\mathrm{m}$, thereby increasing the field of view.
_Upper left:_ The true sky given by pyGSM with ${\ell_{\text{max}}}=20$ and
nside$=128$. Note that the rippled structure comes from the true sky and not
from the GCR solver. _Upper right:_ The estimated sky based on the mean of 100
samples from the GCR solver. The spherical harmonic modes can be seen in Fig.
11. _Bottom left:_ Fractional difference between the mean and the true sky
adjusted to show differences $<10\%$ (i.e. highlighting improvements over the
prior). The beam region is no longer clear cut as in the standard case.
_Bottom right:_ The standard deviation of the difference of the estimated and
true sky. The lowest noise levels are now larger than with the standard case,
but overall noise is lower.
One reason that the larger scales are more difficult to constrain comes from the array layout itself, as discussed above. The prior helps break the degeneracy of the low-$\ell$ modes, which can otherwise only be determined as a linear combination (i.e. they are degenerate, as there are no sufficiently short baselines to resolve them individually). It is clear, however, from the $a_{\ell m}$ modes presented in Fig. 3 that these modes are still difficult to constrain up to $\ell\sim 5$. Following the same logic as above, where the observable sky is increased to increase sensitivity to the larger scales, we can decrease the size of the dishes to increase the field of view (FoV). With the HERA-like dishes the FWHM $=12.3^{\circ}$. Altering the array to have a FoV more similar to that of OVRO-LWA (Eastwood et al., 2018) or MWA (Yoshiura et al., 2021), the diameter of the receivers is changed to $3\text{\,}\mathrm{m}$, now with a ${\rm FWHM}=57.3^{\circ}$ (case C1). A broader FoV now means that the smearing around the $\ell$ modes in Fig. 6 is much narrower ($\delta\ell\simeq\pi/{\rm FWHM}$), thus more clearly picking out the specific $\ell$ modes. Unlike when we increase the LST range, we expand the observable sky not only in the E-W direction but also N-S. As increasing the FoV by more than $4\times{\rm FWHM}$ of the standard case is a much more radical change than those made in Sec. 4.3, the C1 results are presented in more detail in Figs. 11 and 12, although now leaving out the Cartesian projections of the fractional residual, since the beam is now so wide that a close-up of the primary beam region is no longer relevant. Starting with the true-sky subtracted mean of the recovered ${a_{\ell m}}$ modes in Fig. 11, it is noticeable that there are no longer any _outliers_ outside the displayed region, as was the case in Fig. 3 with the standard configuration. The zeroth mode is still the most difficult to constrain and is again underestimated, although not as severely as earlier. Earlier it was argued that the ${a_{\ell m}}$ modes are not the most representative performance indicator of the code, since they are all-sky quantities and the observed region is only a narrow strip, but now that we have much larger coverage it is clear that full-sky features such as the ${a_{\ell m}}$ modes can be well-constrained.
By eye, it is now almost impossible to tell the difference between the true
sky map and the mean of the recovered sky in Fig. 12. Taking the fractional
difference between the two maps reveals a very different structure than
earlier. Now, almost all parts of the sky are recovered within $10\%$ with the
Galaxy showing the smallest residuals. It is striking that the primary beam
structure no longer shows up in either the fractional difference map or the
standard deviation. Since the noise covariance $\mathbf{N}$ and subsequently
the noise $n_{ij}$ on the data is defined by the auto-visibilities as per Eq.
25, the noise level will be affected by whether the Galactic emission is
within the beam. Since the FoV is now so large, the noise has increased with
the extra power from the Galaxy as most of this falls within the observed
area.
Additional runs have been made where the noise was reduced by factors of $10^{-1}$ and $10^{-2}$ respectively (not shown), and not only does this
improve the estimate of the ${a_{\ell m}}$ modes, it is also clear that the
area of the sky with the largest fractional residual is the furthest region
from the observed region, corresponding to the same area in Fig. 12 where the
standard deviation is higher. Decreasing the noise also reduced the standard
deviation inside the observed region to the same level as the standard case.
Earlier, in Sec. 4.2, we showed that reducing the noise level could lead to
worse recovery of the sky. Since we are dealing with constrained realisations,
the solution in the prior-dominated region can still depend on the data-
dominated region (and vice-versa), as the spherical harmonic basis functions
connect the two. The influence of the data on the prior-dominated region is
minimal if the noise covariance is very narrow. For the $3\text{\,}\mathrm{m}$
beam case it is not as big an issue, since the large FoV makes up for it.
Before, the beam was also very narrow, so when the noise covariance was made narrow, it did not allow the prior to contribute as much, which made it difficult to recover the sky even at the edges of the observed region. Now that the FoV is larger, the prior is not as crucial for determining full-sky
features and we can reduce the noise without losing the constraining power.
## 5 Conclusions
For next-generation wide-field radio interferometer arrays, particularly those
targeting the 21 cm fluctuations at low and high redshift, the biggest
challenge remains proper handling of foreground contamination following its
distortion/modulation by the instrument. To improve on this issue, it is
crucial to be able to make accurate maps of the radio emission on both small
and large scales from the measured visibilities. Traditional deconvolution
algorithms such as CLEAN (and its extensions) can struggle to properly recover
diffuse emission, which is the dominant foreground contaminant on large
angular scales. To address this, methods such as Direct Optimal Mapping and
$m$-mode analysis have been developed, both of which are maximum likelihood
estimators that focus on accurately recovering wide-field maps of the emission, i.e. on scales where the curvature of the sky and the shape of the primary beam become important.
In this work we have presented an alternative approach to recovering diffuse
foregrounds from visibilities – by constructing a parametric model of the
emission, represented by spherical harmonics on the sky, and drawing samples
from the joint posterior of the spherical harmonic coefficients given the
visibility data and a choice of prior. A linear model can be constructed by
writing down an operator that encodes the response of the interferometer (and
thus the measured visibilities) to each spherical harmonic mode. This operator
includes all the relevant instrumental degrees of freedom. We can then
estimate the joint posterior distribution of all the coefficients by
repeatedly solving the Gaussian constrained realisation (GCR) equation, an
extension of the Wiener filter that includes random realisation terms. This
can be solved efficiently even for a very large number of coefficients, making
this large inference problem computationally tractable. Furthermore, each
sample of coefficients is a complete realisation of the spherical harmonic
coefficients, and therefore the full sky (i.e. with no gaps or mask) that is
statistically-consistent with the data. This ultimately allows us to recover
the diffuse emission signal in a statistically robust manner while also
avoiding difficulties with missing data in subsequent steps of the data
analysis pipeline.
After presenting the mathematical formalism for this sampler, the primary aim
of this paper was to validate the method by applying it to a suite of
simulations. We tested the performance of the spherical harmonic sampler by
comparing the recovered sky, defined as the mean of the sky maps of the
samples, to the (known) true sky modelled using pyGSM. For the analysis we
chose a standard set of parameters to act as a reference for the various tests
of noise and prior uncertainty levels, sample size, ${\ell_{\text{max}}}$, the
specific LST range of the observations, and the influence of the field of
view. The _standard case_ is based on 100 realisations with a $10\%$ prior
centred on the true ${a_{\ell m}}$-modes, and the noise is given by a Gaussian
distribution with zero mean and covariance defined by the auto-visibilities
through the radiometer equation. The standard case uses a HERA-like
close-packed redundant array of 10 antennas and covers the LST range
$0$–$8\,$h. The dishes are 14 m in diameter (FWHM $=12.3^{\circ}$),
which means the directly observed sky is a narrow $24.6^{\circ}$ (dec) by
$120^{\circ}$ (RA) strip covering only $\sim 8.6\%$ of the full sky.
The standard case performs well within the observed region and recovers the
true sky to within $10\%$ within a distance of 1 FWHM from the centre and to
within $5\%$ at the centre of the beam. When increasing the number of samples
to $N_{\textup{samples}}\approx 3500$, we found only minimal changes to the
result, allowing us to keep computational costs low by continuing with 100
samples per scenario. Next, the impact of the noise level on recovery was
studied. When the noise variance is increased by a factor of $>10^{2}$, the
noise dominates over the sky signal and degrades the recovery. At this noise
level, the samples only show diffuse sky structure if there is also a prior
to drive the solution. For the standard
case we chose a $10\%$ prior, as the pyGSM model should describe the diffuse
emission sky to roughly within this margin. If the prior covariance is
increased, however, the prior quickly becomes too broad and uncertain and the
only well-recovered part is the very centre of the observed region. Including
a prior helps break the degeneracy of the low-$\ell$ modes. It was also found
that even if the prior mean is set to zero (a highly pessimistic assumption),
the sky can still be recovered, although to a slightly worse degree than the
standard case. This shows the benefit of including a prior; the boundary
region between observed and unobserved sky is constrained by both the prior
and the data, while regions of missing data are still filled with a
statistically plausible realisation. We conclude that, for the standard case,
the prior is not overly informative, and contributes about the same as the
data to the diffuse emission recovery around the observed region.
For close-packed arrays like HERA, the most abundant baselines are the
shortest ones, which have lengths $<30\,$m. This justifies
choosing a relatively low ${\ell_{\text{max}}}$ for our tests. When increasing
the ${\ell_{\text{max}}}$ from 20 to 30, the number of modes in the ${a_{\ell m}}$-vector, $({\ell_{\text{max}}}+1)^{2}$, increases significantly from 441 to 961, more than doubling the number of parameters we are trying to constrain, and thus stretching the
available signal-to-noise across more degrees of freedom. A more informative
prior would help to balance this increase in the size of the parameter space.
Most of the diffuse emission on the sky originates from the Galaxy and is not
uniformly distributed. This results in field-dependent effects on the recovery
of the signal. For instance, the recovery was best for the $8$–$16\,$h region,
whereas the region covering the brightest part of the Galaxy ($16$–$24\,$h)
was recovered worst, due largely to
the extra noise given by the auto-visibilities. When increasing the data
volume we expect the results to improve, for instance if we increase the
cadence and take more data samples within the same LST range, or, if we keep
the cadence but increase the range. When doubling the number of data points
within the range, there was only a slight improvement to the recovered sky,
suggesting that there is not much information to be gained by making more
measurements of nearby (correlated) pointings, e.g. within a beam width of one
another. In contrast, doubling the range itself not only improved the results
in the centre, but also at the boundaries of the LST range, as additional sky
coverage led to improved constraints on the spherical harmonic modes across
the board.
Ultimately, the biggest impact on our simulated results occurred by increasing
the field of view. This was done by decreasing the effective antenna diameter
to $3\,$m, thus increasing the FWHM from $12.3^{\circ}$ to
$57.3^{\circ}$. For the large-FoV case, the
spherical harmonic coefficients now all coincide with the true sky within
their given uncertainties except the zeroth mode. The standard deviation map
of the large-FoV results no longer shows the clear strip of the observed sky,
as almost all of the modes are now well-constrained. Since the noise is
defined by the auto-visibilities, and this setup now directly observes most of
the Galaxy, the noise level is larger than the standard case. However, as the
FoV is now so large, we are also less dependent on the prior as there are
fewer gaps of information to fill in, and the boundary region (between
directly observed and totally unobserved regions) is wider.
The results presented in this paper are based on a handful of simplified
example cases, using a reduced-size array and only two frequencies for the
visibility response operator. A simple extension of this work would be to
rerun the analysis while simulating the visibility response for more
frequencies or, alternatively, the frequency dependence could be included via
a parametric model, e.g. power law with spectral index $\beta$. Likewise, we
have only considered cases of ${\ell_{\text{max}}}\leq 30$, but in reality a
higher angular resolution would be desirable if a larger portion of the array
is considered (i.e. including more of the longer baselines). Finally, the Gaussian
constrained realisation method has been developed with inclusion into a Gibbs
sampling framework in mind, whereby beam, 21cm signal, and point source
foreground model parameters would also be sampled. A simpler and more focused
Gibbs sampling scheme would also enable us to sample the signal covariance
$\boldsymbol{\mathbf{S}}$ (e.g. as parametrised by the angular power
spectrum), as well as the spherical harmonic coefficients themselves.
## Acknowledgements
This result is part of a project that has received funding from the European
Research Council (ERC) under the European Union’s Horizon 2020 research and
innovation programme (Grant agreement No. 948764; KAG, PB, JB, MJW). We
acknowledge use of the following software: matplotlib (Hunter, 2007), numpy
(van der Walt et al., 2011), and scipy (Virtanen et al., 2020). This work used
the DiRAC@Durham facility managed by the Institute for Computational Cosmology
on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was
funded by BEIS capital funding via STFC capital grants ST/P002293/1,
ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant
ST/R000832/1. DiRAC is part of the National e-Infrastructure.
## Data Availability
The Python code used to produce the results in this paper is available from
https://github.com/HydraRadio/Hydra. The Jupyter notebook used for the data
analysis and to produce plots is available from
https://github.com/katrinealice/sph_harm_GCR_analysis. Simulated data are
available on request.
## References
* Ali et al. (2015) Ali Z. S., et al., 2015, ApJ, 809, 61
* Bowman et al. (2013) Bowman J. D., et al., 2013, PASA, 30, e031
* CHIME Collaboration (2023) CHIME Collaboration 2023, The Astrophysical Journal, 947, 16
* Cornwell (2008) Cornwell T. J., 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 793–801
* Cornwell (2009) Cornwell T. J., 2009, A&A, 500, 65
* DeBoer et al. (2017) DeBoer D. R., et al., 2017, Publications of the Astronomical Society of the Pacific, 129, 045001
* Dillon & Parsons (2016) Dillon J. S., Parsons A. R., 2016, The Astrophysical Journal, 826, 181
* Dillon et al. (2015) Dillon J. S., et al., 2015, Phys. Rev. D, 91, 023002
* Dong (1994) Dong R., 1994, Nuclear Magnetic Resonance of Liquid Crystals. Partially ordered systems, Springer-Verlag, https://books.google.co.uk/books?id=tAbwAAAAMAAJ
* Dowell & Taylor (2018) Dowell J., Taylor G. B., 2018, The Astrophysical Journal Letters, 858, L9
* Eastwood et al. (2018) Eastwood M. W., et al., 2018, The Astronomical Journal, 156, 32
* Eriksen et al. (2008) Eriksen H. K., Jewell J. B., Dickinson C., Banday A. J., Górski K. M., Lawrence C. R., 2008, ApJ, 676, 10
* Evoli et al. (2014) Evoli C., Mesinger A., Ferrara A., 2014, J. Cosmology Astropart. Phys, 2014, 024
* Fixsen et al. (2011) Fixsen D. J., et al., 2011, The Astrophysical Journal, 734, 5
* Furlanetto et al. (2006) Furlanetto S. R., Oh S. P., Briggs F. H., 2006, Physics Reports, 433, 181
* Geman & Geman (1984) Geman S., Geman D., 1984, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, 721
* Högbom (1974) Högbom J. A., 1974, A&AS, 15, 417
* Hunter (2007) Hunter J. D., 2007, Computing in Science Engineering, 9, 90
* Kennedy et al. (2023) Kennedy F., Bull P., Wilensky M. J., Burba J., Choudhuri S., 2023, The Astrophysical Journal Supplement Series, 266, 23
* Kittiwisit et al. (2023) Kittiwisit P., et al., 2023, arXiv e-prints, p. arXiv:2312.09763
* Kriele (2022) Kriele M. A., 2022, PhD thesis, Eindhoven University of Technology, Curtin University, ASTRO3D
* Liu & Shaw (2020) Liu A., Shaw J. R., 2020, Publications of the Astronomical Society of the Pacific, 132, 062001
* Lopez-Honorez et al. (2016) Lopez-Honorez L., Mena O., Moliné Á., Palomares-Ruiz S., Vincent A. C., 2016, J. Cosmology Astropart. Phys, 2016, 004
* Morales & Wyithe (2010) Morales M. F., Wyithe J. S. B., 2010, ARA&A, 48, 127
* Offringa & Smirnov (2017) Offringa A. R., Smirnov O., 2017, Monthly Notices of the Royal Astronomical Society, 471, 301–316
* Parsons et al. (2012) Parsons A., Pober J., McQuinn M., Jacobs D., Aguirre J., 2012, The Astrophysical Journal, 753, 81
* Patil et al. (2017) Patil A. H., et al., 2017, The Astrophysical Journal, 838, 65
* Planck Collaboration et al. (2020) Planck Collaboration et al., 2020, A&A, 641, A1
* Price (2016) Price D. C., 2016, 2.0.0, Astrophysics Source Code Library, 1603.013
* Pritchard & Loeb (2012) Pritchard J. R., Loeb A., 2012, Reports on Progress in Physics, 75, 086901
* Santos et al. (2005) Santos M. G., Cooray A., Knox L., 2005, ApJ, 625, 575
* Shaw et al. (2014) Shaw J. R., Sigurdson K., Pen U.-L., Stebbins A., Sitwell M., 2014, ApJ, 781, 57
* Shaw et al. (2015) Shaw J. R., Sigurdson K., Sitwell M., Stebbins A., Pen U.-L., 2015, Phys. Rev. D, 91, 083514
* Singal et al. (2010) Singal J., Stawarz L., Lawrence A., Petrosian V., 2010, Monthly Notices of the Royal Astronomical Society, 409, 1172–1182
* The HERA Collaboration et al. (2022) The HERA Collaboration et al., 2022, The Astrophysical Journal, 924, 51
* Tingay et al. (2013) Tingay S. J., et al., 2013, Publications of the Astronomical Society of Australia, 30, e007
* Tung (1985) Tung W., 1985, Group Theory in Physics. World Scientific, https://books.google.co.uk/books?id=O89tgpOBO04C
* Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261
* Wayth et al. (2018) Wayth R. B., et al., 2018, Publications of the Astronomical Society of Australia, 35, e033
* Xu et al. (2022) Xu Z., et al., 2022, The Astrophysical Journal, 938, 128
* Yoshiura et al. (2021) Yoshiura S., et al., 2021, Monthly Notices of the Royal Astronomical Society, 505, 4775–4790
* Zheng et al. (2017) Zheng H., et al., 2017, MNRAS, 464, 3486
* van Haarlem et al. (2013) van Haarlem M. P., et al., 2013, A&A, 556, A2
* van der Walt et al. (2011) van der Walt S., Colbert S. C., Varoquaux G., 2011, Computing in Science Engineering, 13, 22
## Appendix A Further “realification”
Inherently, visibilities are complex quantities, and so are the spherical
harmonic coefficients. Many standard linear solvers, however, are not set up to
handle complex numbers. The first step to make the system real (i.e.
_realify_) has already been taken in Sec. 2.1 by reordering the spherical
harmonic coefficient vector, $\boldsymbol{\mathbf{a}}_{\ell m}$, to contain
the values of first the real- and then the imaginary part of the spherical
harmonic coefficients. This leaves us with a fully real-valued vector to solve
for, denoted $\boldsymbol{\mathbf{a}}_{\rm cr}$ (constrained realisations). A
real-valued spherical harmonic coefficient vector does not automatically
result in real-valued visibilities however. Instead, the visibility response
is still complex valued, now containing a complex visibility response per
real- and per imaginary part of the $\boldsymbol{\mathbf{a}}_{\rm cr}$-vector.
Further _realification_ of the system is therefore needed, while ensuring
that the mixing of real and imaginary parts that arises when multiplying
complex numbers together is handled correctly.
First, we define vectors (bold, lower case) and matrices (bold, upper case) in
this real-valued system as
$\displaystyle\widetilde{\boldsymbol{\mathbf{v}}}=\begin{pmatrix}\boldsymbol{\mathbf{v}}_{\rm re}\\ \boldsymbol{\mathbf{v}}_{\rm im}\end{pmatrix},\quad\widetilde{\boldsymbol{\mathbf{M}}}=\begin{pmatrix}\boldsymbol{\mathbf{M}}_{\rm re}&-\boldsymbol{\mathbf{M}}_{\rm im}\\ \boldsymbol{\mathbf{M}}_{\rm im}&\boldsymbol{\mathbf{M}}_{\rm re}\end{pmatrix}$ (26)
with the Hermitian conjugate of the matrix
$\widetilde{\boldsymbol{\mathbf{M}}}$ given as
$\displaystyle\widetilde{\boldsymbol{\mathbf{M}}}^{\dagger}=\begin{pmatrix}\boldsymbol{\mathbf{M}}_{\rm re}^{T}&\boldsymbol{\mathbf{M}}_{\rm im}^{T}\\ -\boldsymbol{\mathbf{M}}_{\rm im}^{T}&\boldsymbol{\mathbf{M}}_{\rm re}^{T}\end{pmatrix},$ (27)
where the superscript $T$ denotes the transpose and the minus sign has been
swapped due to complex conjugation.
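For concreteness, here is a small hypothetical helper illustrating the block structure of Eqs. (26)–(27); it checks numerically that multiplying realified quantities reproduces the complex product, and that the ordinary transpose of the realified matrix implements the Hermitian conjugate.

```python
import numpy as np

def realify_vec(v):
    """Stack real and imaginary parts, as in Eq. (26)."""
    return np.concatenate([v.real, v.imag])

def realify_mat(M):
    """2x2 block form [[Re, -Im], [Im, Re]], as in Eq. (26)."""
    return np.block([[M.real, -M.imag], [M.imag, M.real]])

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
v = rng.normal(size=3) + 1j * rng.normal(size=3)

# The realified product equals the realification of the complex product,
# and the plain transpose realises the Hermitian conjugate of Eq. (27).
assert np.allclose(realify_mat(M) @ realify_vec(v), realify_vec(M @ v))
assert np.allclose(realify_mat(M).T, realify_mat(M.conj().T))
```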
With this we can redefine the complex-valued visibility response
$\boldsymbol{\mathbf{X}}$ into a purely real-valued version,
$\displaystyle\widetilde{\boldsymbol{\mathbf{X}}}=\begin{pmatrix}\boldsymbol{\mathbf{X}}_{\textup{re}}&-\boldsymbol{\mathbf{X}}_{\textup{im}}\\ \boldsymbol{\mathbf{X}}_{\textup{im}}&\boldsymbol{\mathbf{X}}_{\textup{re}}\end{pmatrix},\quad\widetilde{\boldsymbol{\mathbf{X}}}^{\dagger}=\begin{pmatrix}\boldsymbol{\mathbf{X}}_{\textup{re}}^{T}&\boldsymbol{\mathbf{X}}_{\textup{im}}^{T}\\ -\boldsymbol{\mathbf{X}}_{\textup{im}}^{T}&\boldsymbol{\mathbf{X}}_{\textup{re}}^{T}\end{pmatrix}\,,$ (28)
and we define the new noise covariance as a diagonal matrix
$\displaystyle\widetilde{\boldsymbol{\mathbf{N}}}=\begin{pmatrix}\boldsymbol{\mathbf{N}}/2&0\\ 0&\boldsymbol{\mathbf{N}}/2\end{pmatrix}\,.$ (29)
The spherical harmonic coefficient vector is already constructed to be
real-valued, as described in Sec. 2.1, and so are the prior mean
$\boldsymbol{\mathbf{a}}_{0}$ and $\boldsymbol{\mathbf{\omega}}_{a}$,
$\displaystyle\widetilde{\boldsymbol{\mathbf{a}}}=\begin{pmatrix}\boldsymbol{\mathbf{a}}_{\rm cr}\\ 0\end{pmatrix},\quad\widetilde{\boldsymbol{\mathbf{a}}}_{0}=\begin{pmatrix}\boldsymbol{\mathbf{a}}_{0}\\ 0\end{pmatrix},\quad\widetilde{\boldsymbol{\mathbf{\omega}}}_{a}=\begin{pmatrix}\boldsymbol{\mathbf{\omega}}_{a}\\ 0\end{pmatrix}\,.$ (30)
Lastly, the signal covariance is defined in Sec. 3.3 by the real-valued
$\boldsymbol{\mathbf{a}}_{\text{true}}$-vector and so is already real-valued
itself. We therefore define the new signal covariance as
$\displaystyle\widetilde{\boldsymbol{\mathbf{S}}}=\begin{pmatrix}\boldsymbol{\mathbf{S}}&0\\ 0&0\end{pmatrix}\,.$ (31)
Using these definitions, we can derive the Gaussian constrained realisation
equation once again, and we are left with the final _realified_ GCR equation,
$\displaystyle\left[\boldsymbol{\mathbf{S}}^{-1}+2\boldsymbol{\mathbf{X}}_{\textup{re}}^{T}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{X}}_{\textup{re}}+2\boldsymbol{\mathbf{X}}_{\textup{im}}^{T}\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{X}}_{\textup{im}}\right]\boldsymbol{\mathbf{a}}_{\textup{cr}}=\boldsymbol{\mathbf{X}}_{\textup{re}}^{T}\left(2\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{d}}_{\textup{re}}+\sqrt{2}\boldsymbol{\mathbf{N}}^{-\frac{1}{2}}(\boldsymbol{\mathbf{\omega}}_{d})_{\textup{re}}\right)+\boldsymbol{\mathbf{X}}_{\textup{im}}^{T}\left(2\boldsymbol{\mathbf{N}}^{-1}\boldsymbol{\mathbf{d}}_{\textup{im}}+\sqrt{2}\boldsymbol{\mathbf{N}}^{-\frac{1}{2}}(\boldsymbol{\mathbf{\omega}}_{d})_{\textup{im}}\right)+\boldsymbol{\mathbf{S}}^{-1}\boldsymbol{\mathbf{a}}_{0}+\boldsymbol{\mathbf{S}}^{-\frac{1}{2}}\boldsymbol{\mathbf{\omega}}_{a}.$ (32)
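As a numerical sanity check on the factors of two (a sketch with hypothetical sizes, not the Hydra implementation), applying the realified operator of Eq. (28) with the block noise covariance of Eq. (29) reproduces the left-hand-side structure of Eq. (32):

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_modes = 6, 4
X = rng.normal(size=(n_vis, n_modes)) + 1j * rng.normal(size=(n_vis, n_modes))
N = np.diag(rng.uniform(0.5, 2.0, size=n_vis))

# Realified response (Eq. 28) and block noise covariance (Eq. 29)
Xt = np.block([[X.real, -X.imag], [X.imag, X.real]])
Nt = np.block([[N / 2, np.zeros_like(N)], [np.zeros_like(N), N / 2]])

# Upper-left block of Xt^T Nt^-1 Xt, i.e. the part acting on a_cr (Eq. 30)
lhs_block = (Xt.T @ np.linalg.inv(Nt) @ Xt)[:n_modes, :n_modes]

# Factor-of-two form appearing on the left-hand side of Eq. (32)
N_inv = np.linalg.inv(N)
lhs_direct = 2 * X.real.T @ N_inv @ X.real + 2 * X.imag.T @ N_inv @ X.imag
assert np.allclose(lhs_block, lhs_direct)
```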
|
# Ultra diffuse galaxies in the Hydra I cluster from the LEWIS Project: Phase-
Space distribution and globular cluster richness
Duncan A. Forbes,1 Jonah Gannon1, Enrichetta Iodice2, Michael Hilker5, Goran
Doll2,3, Chiara Buttitta2, Antonio La Marca6,7, Magda Arnaboldi5, Michele
Cantiello4, G. D’Ago8, Jesus Falcon Barroso15,16, Laura Greggio9, Marco
Gullieuszik9, Johanna Hartke12,13, Steffen Mieske10, Marco Mirabile4,14,
Roberto Rampazzo9, Marina Rejkuba5, Marilena Spavone2, Chiara Spiniello11,
Giulio Capasso2
1 Centre for Astrophysics & Supercomputing, Swinburne University, Hawthorn,
VIC 3122, Australia
2 INAF - Astronomical Observatory of Capodimonte, Salita Moiariello 16,
I-80131, Naples, Italy
3 University of Naples “Federico II”, C.U. Monte Sant’Angelo, Via Cinthia,
80126, Naples, Italy
4 INAF - Astronomical Observatory of Abruzzo, Via Maggini, 64100, Teramo,
Italy
5 European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching
bei Muenchen, Germany
6 SRON Netherlands Institute for Space Research, Landleven 12, 9747 AD
Groningen, The Netherlands
7 Kapteyn Astronomical Institute, University of Groningen, Postbus 800, 9700
AV Groningen, The Netherlands
8 Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge,
CB3 0HA
9 Osservatorio Astronomico di Padova, Via dell’Osservatorio 8, 36012 Asiago
(VI), Italy
10 European Southern Observatory, Alonso de Cordova 3107, 7630355 Vitacura,
Santiago, Chile
11 Sub-department of Astrophysics, University of Oxford, Denys Wilkinson
Building, Keble Road, Oxford OX1 3RH, United Kingdom
12 Finnish Centre for Astronomy with ESO (FINCA), FI-20014 University of
Turku, Finland
13 Tuorla Observatory, Department of Physics and Astronomy, FI-20014
University of Turku, Finland
14 Gran Sasso Science Institute, L’Aquila, Italy
15 INAF - Astronomical Observatory of Padova, Vicolo dell’Osservatorio 5,
I-35122 Padova, Italy
15 Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna,
Tenerife, Spain
16 Departamento de Astrofísica, Universidad de La Laguna, E-38200 La Laguna,
Tenerife, Spain. E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Although ultra diffuse galaxies (UDGs) are found in large numbers in clusters
of galaxies, the role of the cluster environment in shaping their low surface
brightness and large sizes is still uncertain. Here we examine a sample of
UDGs in the Hydra I cluster (D = 51 Mpc) with new radial velocities obtained
as part of the LEWIS (Looking into the faintest with MUSE) project using
VLT/MUSE data. Using a phase-space, or infall diagnostic, diagram we compare
the UDGs to other known galaxies in the Hydra I cluster and to UDGs in other
clusters. The UDGs, along with the bulk of regular Hydra I galaxies, have low
relative velocities and are located near the cluster core, and are thus
consistent with very early infall into the cluster. Combining with literature data, we do
not find the expected trend of GC-rich UDGs associated with earlier infall
times. This result suggests that quenching mechanisms other than cluster
infall should be further considered, e.g. quenching by strong feedback or in
cosmic sheets and filaments. Tidal stripping of GCs in the cluster environment
also warrants further modelling.
###### keywords:
galaxies: star clusters: general — galaxies: haloes — galaxies: structure —
galaxies: photometry
## 1 Introduction
The possible formation pathways of ultra-diffuse galaxies (UDGs) have been a
subject of an ongoing vigorous debate since 2015, when a population of these
extremely diffuse galaxies was identified in the Coma cluster using the
Dragonfly Telephoto Array (van Dokkum et al., 2015). Existing in all
environments, they are most common in clusters with several hundred found in
the Coma cluster (Yagi et al., 2016; Alabi et al., 2018). This significant
contribution to our ‘census of galaxies’ has prompted numerous simulation
studies and accompanying predictions (see Sales et al. (2020) and references
therein). These simulations can be broadly placed in two categories: internal
processes (e.g. episodic supernova feedback) or external (e.g. tidal effects
in a dense environment). Some combination of both processes may be operating
along with past galaxy infall (and subsequent quenching) into clusters.
UDGs have low surface brightnesses (they are defined to have central values in
the g band of $\mu$ $>$ 24 mag. per sq. arcsec) so that spectroscopic studies
of them push even 8–10 m class telescopes, with efficient low surface
brightness instruments such as KCWI on Keck or MUSE on the VLT, to their limits.
While strictly speaking dwarf galaxies with M∗ $<10^{9}$ M⊙, UDGs are unlike
classical dwarfs as they have extreme sizes with effective radii Re $>$ 1.5
kpc (i.e. comparable to the disk of the Milky Way with Re $\sim$3.5 kpc). They
also reveal another unexplained feature, with some hosting up to ten times
more globular clusters (GCs) than classical dwarf galaxies of the same
luminosity (Forbes et al., 2020). Their very existence in clusters and their
generally old stellar populations suggests that some may be protected within
an overly massive dark matter halo. The latter is supported by the correlation
between GC numbers and host galaxy halo mass for normal galaxies, e.g. Burkert
& Forbes (2020).
In the standard picture of dwarf galaxy evolution (Mistani et al., 2016),
dwarfs that fell into clusters at early times will have experienced intense
star formation, prior to, or at the start of, infall (which is also expected
to give rise to a high fraction of stars in bound star clusters). This is
followed by quenching of any further star formation as the infall proceeds.
Both of these effects would lead to a high fraction of GCs relative to their
field stars (Mistani et al., 2016; Ramos-Almendares et al., 2020). Indeed
trends of GC richness and [$\alpha$/Fe] ratios with clustercentric radius
provide some observational support for this interpretation (Peng et al., 2008;
Liu et al., 2016; Romero-Gómez et al., 2023). This early-infall, or biasing,
has been invoked for UDGs by Carleton et al. (2021) who include cluster tidal
effects within the IllustrisTNG simulation and simplified GC formation
physics. Similar to classical dwarfs, they predict that early-infall UDGs
should be rich in GCs. Based on a semi-empirical model, Trujillo-Gomez et al.
(2022) also predict that galaxies near the cluster core form more GCs.
Using phase-space, or infall diagnostic, diagrams of the type proposed by Rhee
et al. (2017) one can investigate whether GC richness depends on UDG cluster
infall time. No trend between GC richness and very early infall times might
suggest that GC formation and quenching occurred before cluster infall. While
low mass galaxies typically quench at late times, there is a considerable
range in quenching times with some low mass galaxies quenching at z $\sim$ 2
or 10.5 Gyr ago (Moster et al., 2020). Quenching at early times via stellar
feedback (Stinson et al., 2013) may be one possibility. This early quenching
applied to UDGs has been described by Danieli et al. (2022). Another
possibility may be quenching via the interaction with cosmic sheets or
filaments (Pasha et al., 2023). A first attempt at this sort of infall
analysis applied to GCs was presented in Gannon et al. (2022) for several UDGs
in the Coma and Perseus clusters. No clear signal was found but the sample was
small with just over a dozen UDGs and with a bias towards GC-rich UDGs.
In this Letter we examine the infall diagnostic diagram for a new sample of
UDGs in the Hydra I cluster (A1060; D = 51 $\pm$ 4 Mpc). The Hydra I cluster
appears to be fairly dynamically relaxed (Ventimiglia et al., 2011) but also
reveals hints of substructures (Lima-Dias et al., 2021; La Marca et al.,
2022a), an infalling group of galaxies (Arnaboldi et al., 2012), and evidence
for ram pressure stripping (Wang et al., 2021; Iodice et al., 2021). The
observed UDGs are located near the cluster core and the northern subgroup,
with all lying within 0.3 virial radii (R200) of the Hydra I cluster
centre. Each was observed using MUSE on the VLT as part of the ongoing LEWIS
(Looking into the faintEst WIth MUSE) project. Details of the project,
including galaxy radial velocities, positions, GC counts etc, are given in
Paper I by Iodice et al. (2023, in press). The GC counts are based on deep,
optical multi-filter imaging with the VST as part of the VEGAS project (Iodice
et al., 2020) and will be updated after the full analysis of the MUSE data.
Here we explore the distribution of UDGs in phase space and investigate
whether they reveal any trend in this space with their GC richness. We also
include similar data for UDGs in other nearby clusters. For the Hydra I
cluster we adopt the same parameters as used by La Marca et al. (2022b), i.e.
cz = 3683 $\pm$ 46 km s$^{-1}$, $\sigma$ = 724 $\pm$ 31 km s$^{-1}$, and virial
parameters R200 = 1.6 Mpc, M200 = 2 $\times$ 10$^{14}$ h$^{-1}$ M⊙, and take its centre
as NGC 3311 (RA = 159.17842, Dec = –27.528339). These values are similar to
those found by Lima-Dias et al. (2021) who recently studied Hydra I galaxies
out to the virial radius.
Figure 1: Infall diagnostic diagram for non-UDG (giants, dwarfs and LSB
galaxies) and UDGs in the Hydra I cluster. The diagram shows the relative
line-of-sight velocity of each galaxy normalised by the cluster velocity
dispersion against the projected radius normalised by the virial radius.
Regions of the diagram are shaded according to their infall times from the
cosmological simulations of Rhee et al. (2017) as indicated in the legend. The
plot shows that most UDGs and non-UDG galaxies of the Hydra I cluster lie
within the very early infall zone – the simulations indicating that around
half of the galaxies in this zone have been part of the cluster for at least
6.45 Gyr.
## 2 Infall Diagnostic Diagram for Hydra I Cluster Galaxies
Rhee et al. (2017) carried out cosmological simulations of several galaxy
clusters and examined the resulting distribution of galaxies in phase-space
(i.e. velocity of the galaxy relative to the mean cluster velocity normalised
by the cluster velocity dispersion versus the galaxy’s clustercentric radius
normalised by the cluster virial radius). Based on the infall time of
galaxies, they divided this diagram into several infall zones, ranging from
those that fell into the cluster at very early times, to those that are yet to
fall in. Thus the location of galaxies in this diagram provides an ‘infall
diagnostic’ which is statistical in nature and additional scatter is
introduced when using 2D projected radii (as is the case for observational
data). For example, the ‘very early infall’ (or ancient infaller) zone in the
simulation is occupied by a slight majority (52%) of galaxies that have
resided in the cluster for more than 6.45 Gyr. Projection effects mean that
the true clustercentric radius for some galaxies is larger in 3D than observed
in 2D. For most galaxies, however, the true 3D radius should be within a
factor of two of the projected one.
In Fig. 1 we show such an infall diagnostic diagram for all galaxies in the
Hydra I cluster out to half the virial radius. This includes giant and dwarf
galaxies from the study of Christlein & Zabludoff (2003) plus the addition of
UDGs and 3 low surface brightness (LSB) galaxies that have UDG-like sizes but
are slightly brighter from Iodice et al. (2023, in press). We find that the
bulk of the non-UDG Hydra I galaxies are located within the ‘very early
infall’ zone. The simulation of Rhee et al. (2017) predicts that just over
half of these would have been part of the cluster for at least 6.45 Gyr. There
are also galaxies located in later infall zones and three galaxies that may
lie outside of the cluster with large relative velocities – these could be
backsplash galaxies (having passed through the cluster) or simply galaxies
that are yet to fall into the cluster.
If we examine giant and classical dwarf galaxies separately (divided at MR =
–18 or mR = 15.5) there is no clear difference between them in terms of their
infall properties. Compared to the UDGs they appear to scatter to higher
relative velocities on average. A more quantitative measure of the differences
in their infall properties can be obtained from the product of their relative
velocity from the cluster mean and their radial position: $\Delta$V/$\sigma$
$\times$ R/R200. Restricting to R $<$ 0.3 R200, as probed by the imaging, we find mean
values (and errors on the mean) of $\Delta$V/$\sigma$ $\times$ R/R200 = 0.83 ($\pm$
0.07) $\times$ 0.15 ($\pm$ 0.01) = 0.12 ($\pm$ 0.02) for giant galaxies and
0.88 ($\pm$ 0.06) $\times$ 0.16 ($\pm$ 0.01) = 0.14 ($\pm$ 0.02) for classical
dwarfs. For the UDGs the mean value is $\Delta$V/$\sigma$ $\times$ R/R200 = 0.80
($\pm$ 0.17) $\times$ 0.16 ($\pm$ 0.02) = 0.13 ($\pm$ 0.04); a sketch of this
calculation is given below. This indicates that UDGs are similarly concentrated
in phase-space to the other cluster galaxies. Also, while UDGs have a similar
distribution in clustercentric radius, their velocities are closer to the
cluster mean than either the giants or classical dwarfs. We note that Lima-Dias
et al. (2021) also found passive early-type galaxies to be concentrated in the
cluster core. The LSB galaxies in Fig. 1 are found in a range of infall zones,
from early to late infall.
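The quoted products can be reproduced with simple error propagation. The sketch below assumes independent errors on the two means and adds the fractional errors in quadrature; the uncertainties quoted above may instead be computed from the scatter of the per-galaxy products, so differences at the 0.01 level are expected.

```python
import numpy as np

def product_with_error(x, dx, y, dy):
    """p = x * y, with fractional errors added in quadrature (independence assumed)."""
    p = x * y
    return p, p * np.hypot(dx / x, dy / y)

# Mean Delta V / sigma and R / R200 values quoted in the text
samples = {
    "giants":           (0.83, 0.07, 0.15, 0.01),
    "classical dwarfs": (0.88, 0.06, 0.16, 0.01),
    "UDGs":             (0.80, 0.17, 0.16, 0.02),
}
for name, (v, dv, r, dr) in samples.items():
    p, dp = product_with_error(v, dv, r, dr)
    print(f"{name}: {p:.2f} +/- {dp:.2f}")
# giants: 0.12 +/- 0.01; classical dwarfs: 0.14 +/- 0.01; UDGs: 0.13 +/- 0.03
```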
As might be expected from their inner cluster position, our UDGs were among
the earliest inhabitants of the cluster, infalling at least 6.45 Gyr ago
according to simulations of Rhee et al. (2017). They would be expected to have
star formation (SF) histories that indicate early quenching. A preliminary
analysis by Iodice et al. (2023, submitted) for one UDG (UDG11) indicates an
old age of $\sim$10 Gyr, suggestive of early quenching. Future analysis will
also include the [$\alpha$/Fe] ratios which appears to be a sensitive
indicators of SF histories for low mass galaxies (see Ferre-Mateu et al. 2023,
submitted for results on UDGs in other clusters and Romero-Gómez et al.
(2023), for dwarf galaxies in the Fornax cluster). We note that the study of
Lima-Dias et al. (2021) found 88% of Hydra I galaxies (with log M∗ $>$ 8.5) to
be quenched, i.e. no sign of ongoing star formation.
Figure 2: Infall diagnostic diagram for only UDGs in the Hydra I, Coma, Virgo
and Perseus clusters. Regions of the diagram are shaded according to their
infall times from the cosmological simulations of Rhee et al. (2017). As per
the legend, UDGs in different clusters are denoted by different symbols.
Symbols are outlined in red (if GC-rich) or blue (if GC-poor), and without an
outline if the GC properties are unknown. See main text for discussion of
selection effects in the UDG samples. Globular cluster (GC) rich UDGs are not
predominantly found in the very early infall region; indeed, the data suggest
that very early infall UDGs tend to be GC-poor.
## 3 Infall Diagnostic Diagram for UDGs in Several Clusters
In Fig. 2 we show the UDGs from the Hydra I cluster along with those from the
literature, coded by globular cluster (GC) richness. Total GC counts for
the Hydra I UDGs are determined by Iodice et al. (2020) and La Marca et al.
(2022b) and listed again in Paper I (Iodice et al. 2023, submitted). Literature
data come from Gannon et al. (2022) and the recent work of Toloba et al.
(2023). The GC counts are almost exclusively based on imaging (i.e. lacking
radial velocities) and we follow Gannon et al. (2022) in assigning a somewhat
arbitrary separation between rich and poor GC systems at 20 GCs. This
corresponds to a halo mass of 10$^{11}$ M⊙ using the scaling relation of Burkert &
Forbes (2020). Below 20 GCs the scaling relation is less predictive of halo
mass due to increased scatter. By this definition, all of the UDGs in the
Hydra I cluster are GC-poor (ranging from no GCs for several UDGs to 15 GCs
for UDG3), and this is unlikely to change significantly when the full set of
MUSE spectroscopic data is available. Given the relatively small stellar mass
range of the Hydra I UDGs, a fixed GC number corresponds closely to a fixed GC
system total mass per host galaxy stellar mass. If we assume the same average
GC mass of 10$^{5}$ M⊙, this ratio is $<$1.2% for all of the observed Hydra I
UDGs (see the sketch below). While some Coma cluster UDGs also have a ratio
$<$1.2%, the majority have much higher ratios, with up to $\sim$10% of the
galaxy stellar mass in their GC system; see figure 4 of Forbes et al. (2020).
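To spell out the arithmetic behind these thresholds, the sketch below encodes the 20 GCs to 10$^{11}$ M⊙ correspondence as a per-GC halo mass, and the GC mass fraction uses the same mean GC mass of 10$^{5}$ M⊙ assumed above; the stellar mass in the example is an illustrative value, not a measurement.

```python
M_GC_MEAN = 1e5        # assumed mean GC mass [Msun], as in the text
M_HALO_PER_GC = 5e9    # [Msun per GC]: encodes 20 GCs <-> 1e11 Msun

def halo_mass(n_gc):
    """Halo mass implied by the total GC number (less reliable below ~20 GCs)."""
    return M_HALO_PER_GC * n_gc

def gc_mass_fraction(n_gc, m_star):
    """Fraction of the host's stellar mass locked up in its GC system."""
    return n_gc * M_GC_MEAN / m_star

print(f"{halo_mass(20):.1e} Msun")           # 1.0e+11 Msun: the rich/poor boundary
print(f"{gc_mass_fraction(15, 1.5e8):.1%}")  # 1.0% for 15 GCs and an illustrative M* = 1.5e8 Msun
```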
Before interpreting Fig. 2 there are various caveats and selection effects
that should be borne in mind. Firstly, we note that some of the literature UDGs
lack firm GC counts and their rich/poor status is on the basis of a visual
estimate only (Gannon et al., 2022). Secondly, the literature sample is
subject to sample selection effects. The Coma cluster sample of UDGs comes
from studies that have focused on GC-rich galaxies or they have focused on a
narrow range in clustercentric radius (i.e. around 0.12 R/R200 in the Coma
cluster). Observations of the Perseus cluster UDGs have so far avoided the
cluster inner regions. The Virgo UDG sample is relatively small and mostly GC-
poor. In terms of selection bias, the Hydra I UDGs are the closest to being
a representative sample of UDGs in the cluster; however, only the inner 0.3
R/R200 was imaged in Iodice et al. (2020). Thus, we may be missing the late
infalling UDGs. We note that La Marca et al. (2022b) estimated a total UDG
population out to the virial radius of 48 $\pm$ 10 and so many outer region
UDGs, which may be late infallers, remain to be studied.
The UDG infall diagram does not clearly show GC-rich UDGs to be located in
earlier infall zones as might be expected in the standard picture of dwarf
galaxy quenching due to infall which leads to richer GC systems (as described
in the Introduction). Indeed, the opposite trend may be present, such that in
the very early infall region there are 13 GC-poor UDGs and 5 GC-rich ones,
whereas outside of this region (but within 0.5 R/R200) there are only 6 GC-
poor and 6 GC-rich UDGs. Again, we caution that selection and projection
effects make conclusions tentative.
## 4 Discussion
Alabi et al. (2018) used the phase-space diagram to investigate the infall
epoch of UDGs, classical dwarfs and other galaxies in the Coma cluster (a
massive, dynamically relaxed cluster). Similar to the Hydra I cluster, they
saw little difference between classical dwarfs and the giant galaxies. For the
UDGs, they identified both early and late infallers. A similar situation might
be present for Hydra I UDGs if outer region UDGs were probed. Alabi et al.
(2018) did not include GC richness in their study.
Given the lack of a clear signal for ‘infall bias’ in the GC richness of UDGs
alternatives should be further investigated. As noted in the Introduction,
quenching at very early times prior to cluster infall should be considered.
For such UDGs, we would expect very old ages, low metallicities (similar to
the metal-poor subpopulation of GCs) and high alpha overabundances (indicative
of rapid star formation). A high fraction of mass in GCs relative to field
stars might also be expected. A UDG in the NGC 5846 group (NGC5846_UDG1),
discovered in VEGAS imaging (Forbes et al., 2019), may be an example of such a
failed galaxy, having a remarkable 13% of its current stellar mass in the form
of GCs (Danieli et al., 2022). As noted above, the observed Hydra I UDGs (from
the inner cluster regions) all have less than 1.2% of their stellar mass in
GCs.
Another possibility is that the Hydra I UDGs are GC-poor because they have
been tidally stripped from their host galaxy. This tidal stripping would have
to remove most of the dark matter halo before any GCs, since the dark matter
is more radially extended than GC systems. Continued stripping would be
expected to remove GCs and stars in roughly equal proportions since the radial
extent of GC systems for UDGs closely follows that of the galaxy stars. As
well as operating in clusters, tidal stripping of UDGs may occur in galaxy
groups. We note that UDGs in the field do tend to be GC-poor (Jones et al.,
2023); however, this is unlikely to be due to tidal effects and more likely
reflects some internal process.
The Hydra I UDGs are generally well-fit by a single Sersic profile; however, a
few show hints of asymmetries that might point to a tidal interaction (Iodice
et al., 2020; La Marca et al., 2022a). For the one UDG examined in detail by
Iodice et al. (2023, submitted) there is some evidence for an isophotal twist
in the MUSE data. This might indicate tidal interaction (or a triaxial
potential). Furthermore, a Hydra I UDG first identified by Misgeld et al.
(2008) reveals a clear S-shape indicative of ongoing tidal interaction (Koch
et al., 2012). In the case of Coma cluster UDGs, Mowla et al. (2017) looked
specifically for signs of tidal features via position angle twists in a
stacked sample, finding no evidence for such twists.
Sales et al. (2020) have simulated UDGs in clusters of similar mass to Hydra I
using Illustris-TNG100. They identify two types of UDGs in clusters, i.e.
Tidal-UDGs and Born-UDGs (see also Jiang et al. 2019). The Tidal-UDGs were
originally massive galaxies (up to 10$^{10}$ M⊙) that have been tidally stripped of
stars and puffed-up by the cluster. Born-UDGs were formed as UDGs outside of
the cluster and more recently entered the cluster. Thus Tidal-UDGs dominate
the inner $\sim$0.5R/R200 since they were accreted at early times, while Born-
UDGs dominate the outer regions with some only recently falling into the
cluster. We remind the reader that we only probe out to 0.3R/R200 in Hydra I.
The Sales et al. (2020) model would also predict, on average, higher
metallicities, older ages and lower internal velocity dispersions, at a given
stellar mass, for their Tidal-UDGs compared to the Born-UDGs. These stellar
population, kinematic, GC colour and dark matter content predictions for
Tidal-UDGs can be tested when the full LEWIS dataset is available.
## 5 Conclusions
As part of the LEWIS project (Iodice et al. 2023, in press) we obtained new
VLT/MUSE observations of the radial velocities of UDGs in the Hydra I cluster
(at D = 51 Mpc). Here we examine the location of Hydra I UDGs in infall phase-
space diagrams based on simulations of cluster galaxies. We find all of the
observed UDGs (and 3 low surface brightness galaxies) to be associated with
the cluster. From comparison with the Rhee et al. (2017) simulations, we
conclude that most giants, classical dwarfs and UDGs fell into the Hydra I
cluster long ago, with UDGs being among the earliest infallers. Projection
effects in observations and the statistical nature of the infall diagnostic
diagram limit our ability to determine the true fraction of ancient infallers.
Nevertheless we might expect UDGs in the Hydra I cluster to reveal old stellar
populations consistent with early quenching.
We also compare Hydra I UDGs with their counterparts in the Coma, Perseus and
Virgo clusters in terms of their GC richness. If very early infall into a
cluster is associated with enhanced GC richness (as has been suggested for
classical dwarf galaxies) then such a trend is expected. The data from these
clusters do not show a clear trend of GC richness with earlier infall times,
indeed the data suggest the opposite trend. If verified by larger and more
complete samples, then UDGs may be quenched by a different mechanism than that
thought to operate on classical dwarf galaxies. As more data for UDGs is
acquired, trends, or the lack thereof, may become more apparent in an infall
diagnostic diagram. A future analysis of star formation histories will give an
indication of when quenching occurred for the Hydra I UDGs. Once the full
dataset of the LEWIS project is available we will be able to test other
mechanisms, such as pre-infall quenching and/or tidal stripping, and their
possible role in shaping UDGs and their globular cluster systems.
## Acknowledgements
We wish to thank the anonymous referee for their comments. We thank A.
Romanowsky, L. Buzzo, L. Haacke and O. Gerhard for useful suggestions. This
work is based on visitor mode observations collected at the European Southern
Observatory (ESO) La Silla Paranal Observatory and collected at the European
Southern Observatory under ESO programmes 099.B-0560(A) and 108.222P. INAF
authors acknowledge financial support for the VST project (P.I.: P. Schipani).
DAF thanks the ARC for support via DP220101863. Parts of this research were
supported by the Australian Research Council Centre of Excellence for All Sky
Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
MC acknowledges support from the INAF-EDGE program (PI Leslie K. Hunt). J. F-B
acknowledges support through the RAVET project by the grant
PID2019-107427GB-C32 from the Spanish Ministry of Science, Innovation and
Universities (MCIU), and through the IAC project TRACES which is partially
supported through the state budget and the regional budget of the Consejería
de Economía, Industria, Comercio y Conocimiento of the Canary Islands
Autonomous Community.
## Data Availability
Raw data is available from the ESO archive.
## References
* Alabi et al. (2018) Alabi A., et al., 2018, MNRAS, 479, 3308
* Arnaboldi et al. (2012) Arnaboldi M., Ventimiglia G., Iodice E., Gerhard O., Coccato L., 2012, A&A, 545, A37
* Burkert & Forbes (2020) Burkert A., Forbes D. A., 2020, AJ, 159, 56
* Carleton et al. (2021) Carleton T., Guo Y., Munshi F., Tremmel M., Wright A., 2021, MNRAS, 502, 398
* Christlein & Zabludoff (2003) Christlein D., Zabludoff A. I., 2003, ApJ, 591, 764
* Danieli et al. (2022) Danieli S., et al., 2022, ApJ, 927, L28
* Forbes et al. (2019) Forbes D. A., Gannon J., Couch W. J., Iodice E., Spavone M., Cantiello M., Napolitano N., Schipani P., 2019, A&A, 626, A66
* Forbes et al. (2020) Forbes D. A., Alabi A., Romanowsky A. J., Brodie J. P., Arimoto N., 2020, MNRAS, 492, 4874
* Gannon et al. (2022) Gannon J. S., et al., 2022, MNRAS, 510, 946
* Iodice et al. (2020) Iodice E., et al., 2020, A&A, 642, A48
* Iodice et al. (2021) Iodice E., et al., 2021, A&A, 652, L11
* Jiang et al. (2019) Jiang F., Dekel A., Freundlich J., Romanowsky A. J., Dutton A. A., Macciò A. V., Di Cintio A., 2019, MNRAS, 487, 5272
* Jones et al. (2023) Jones M. G., et al., 2023, ApJ, 942, L5
* Koch et al. (2012) Koch A., Burkert A., Rich R. M., Collins M. L. M., Black C. S., Hilker M., Benson A. J., 2012, ApJ, 755, L13
* La Marca et al. (2022a) La Marca A., et al., 2022a, A&A, 659, A92
* La Marca et al. (2022b) La Marca A., et al., 2022b, A&A, 665, A105
* Lima-Dias et al. (2021) Lima-Dias C., et al., 2021, MNRAS, 500, 1323
* Liu et al. (2016) Liu Y., et al., 2016, ApJ, 818, 179
* Misgeld et al. (2008) Misgeld I., Mieske S., Hilker M., 2008, A&A, 486, 697
* Mistani et al. (2016) Mistani P. A., et al., 2016, MNRAS, 455, 2323
* Moster et al. (2020) Moster B. P., Naab T., White S. D. M., 2020, MNRAS, 499, 4748
* Mowla et al. (2017) Mowla L., van Dokkum P., Merritt A., Abraham R., Yagi M., Koda J., 2017, ApJ, 851, 27
* Pasha et al. (2023) Pasha I., Mandelker N., van den Bosch F. C., Springel V., van de Voort F., 2023, MNRAS, 520, 2692
* Peng et al. (2008) Peng E. W., et al., 2008, ApJ, 681, 197
* Ramos-Almendares et al. (2020) Ramos-Almendares F., Sales L. V., Abadi M. G., Doppel J. E., Muriel H., Peng E. W., 2020, MNRAS, 493, 5357
* Rhee et al. (2017) Rhee J., Smith R., Choi H., Yi S. K., Jaffé Y., Candlish G., Sánchez-Jánssen R., 2017, ApJ, 843, 128
* Romero-Gómez et al. (2023) Romero-Gómez J., et al., 2023, MNRAS, 522, 130
* Sales et al. (2020) Sales L. V., Navarro J. F., Peñafiel L., Peng E. W., Lim S., Hernquist L., 2020, MNRAS, 494, 1848
* Stinson et al. (2013) Stinson G. S., Brook C., Macciò A. V., Wadsley J., Quinn T. R., Couchman H. M. P., 2013, MNRAS, 428, 129
* Toloba et al. (2023) Toloba E., et al., 2023, arXiv e-prints, p. arXiv:2305.06369
* Trujillo-Gomez et al. (2022) Trujillo-Gomez S., Kruijssen J. M. D., Reina-Campos M., 2022, MNRAS, 510, 3356
* Ventimiglia et al. (2011) Ventimiglia G., Arnaboldi M., Gerhard O., 2011, A&A, 528, A24
* Wang et al. (2021) Wang J., et al., 2021, ApJ, 915, 70
* Yagi et al. (2016) Yagi M., Koda J., Komiyama Y., Yamanoi H., 2016, ApJS, 225, 11
* van Dokkum et al. (2015) van Dokkum P. G., Abraham R., Merritt A., Zhang J., Geha M., Conroy C., 2015, ApJ, 798, L45
|
# Falcon 2.0: An Entity and Relation Linking Tool over Wikidata
Ahmad Sakor<EMAIL_ADDRESS>L3S Research Center and TIB, University of
HannoverHannoverGermany , Kuldeep Singh<EMAIL_ADDRESS>Cerence
GmbH and Zerotha ResearchP.O. Box 1212AachenGermany , Anery Patel
<EMAIL_ADDRESS>TIB, University of HannoverHannoverGermany and Maria-
Esther Vidal<EMAIL_ADDRESS>L3S Research Center and TIB, University of
HannoverHannoverGermany
(2020)
###### Abstract.
The Natural Language Processing (NLP) community has significantly contributed
to solutions for entity and relation recognition from natural language text,
and for linking them to proper matches in Knowledge Graphs (KGs).
Considering Wikidata as the background KG, there are still limited tools to
link knowledge within the text to Wikidata. In this paper, we present Falcon
2.0, the first joint entity and relation linking tool over Wikidata. It
receives a short natural language text in the English language and outputs a
ranked list of entities and relations annotated with the proper candidates in
Wikidata. The candidates are represented by their Internationalized Resource
Identifier (IRI) in Wikidata. Falcon 2.0 resorts to rules of English
morphology for the recognition task (e.g., N-Gram tiling and N-Gram splitting),
and then to an optimization approach for the linking task. We have empirically
studied the performance of Falcon 2.0 on Wikidata and concluded that it
outperforms all the existing baselines. Falcon 2.0 is open source and can be
reused by the community; all the required instructions for Falcon 2.0 are
well-documented in our GitHub repository (https://github.com/SDM-TIB/falcon2.0).
We also demonstrate an online API, which can be used without any technical expertise.
Falcon 2.0 and its background knowledge bases are available as resources at
https://labs.tib.eu/falcon/falcon2/.
NLP, Entity Linking, Relation Linking, Background Knowledge, English
morphology, DBpedia, and Wikidata
Published in the Proceedings of the 29th ACM International Conference on
Information and Knowledge Management (CIKM '20), October 19–23, 2020, Virtual
Event, Ireland. Copyright: rights retained. DOI: 10.1145/3340531.3412777.
ISBN: 978-1-4503-6859-9/20/10. CCS concepts: Information systems – Resource
Description Framework (RDF); Information systems – Information extraction.
## 1\. Introduction
Entity Linking (EL), also known as Named Entity Disambiguation (NED), is a
well-studied research domain for aligning unstructured text to its structured
mentions in various knowledge repositories (e.g., Wikipedia, DBpedia (Auer et
al., 2007), Freebase (Bollacker et al., 2008) or Wikidata (Vrandecic, 2012)).
Entity linking comprises two sub-tasks. The first task is Named Entity
Recognition (NER), in which an approach aims to identify entity labels (or
surface forms) in an input sentence. Entity disambiguation is the second sub-
task of linking entity surface forms to semi-structured knowledge
repositories. With the growing popularity of publicly available knowledge
graphs (KGs), researchers have developed several approaches and tools for EL
task over KGs. Some of these approaches implicitly perform NER and directly
provide mentions of entity surface forms in the sentences to the KG (often
referred to as end-to-end EL approaches) (Delpeuch, 2019). Other attempts
(e.g., Yamanda et al. (Yamada et al., 2016a), DCA (Yang et al., 2019))
consider recognized surface forms of the entities as additional inputs besides
the input sentence to perform entity linking. Irrespective of the input format
and underlying technologies, the majority of the existing attempts (Röder et
al., 2018) in the EL research are confined to well-structured KGs such as
DBpedia or Freebase (the latter is now deprecated, with no further updates
possible). These KGs rely on a well-defined process to extract information
directly from Wikipedia infoboxes. They do not provide direct access to the
users to add/delete the entities or alter the KG facts. Wikidata, on the other
hand, also allows users to edit Wikidata pages directly, add newer entities,
and define new relations between the objects. Wikidata is hugely popular as a
crowdsourced collection of knowledge. Since its launch in 2012, over 1 billion
edits have been made by users across the world
(https://www.wikidata.org/wiki/Wikidata:Statistics).
#### Motivation, Approach, and Contributions.
We motivate our work by the fact that despite the vast popularity of Wikidata,
there are limited attempts to target entity and relation linking over
Wikidata. For instance, there are over 20 entity linking tools/APIs for
DBpedia (Singh et al., 2018b; Röder et al., 2018), which are available as
APIs. To the best of our knowledge, there exists only one open-source API for
Wikidata entity linking (i.e., OpenTapioca (Delpeuch, 2019)). Furthermore,
there is no tool over Wikidata for relation linking, i.e., linking predicate
surface forms to their corresponding Wikidata mentions. In this paper, we
focus on providing Falcon 2.0, a reusable resource API for joint entity and
relation linking over Wikidata. In our previous work, we proposed Falcon
(Sakor et al., 2019), a rule-based approach yet effective for entity and
relation linking on short text (questions in this case) over DBpedia. In
general, the Falcon approach has two novel concepts: 1) a linguistic-based
approach that relies on several English morphology principles such as
tokenization, and N-gram tiling; 2) a local knowledge base which serves as a
source of background knowledge (BK). This knowledge base is a collection of
entities from DBpedia. We resort to the Falcon approach for developing Falcon
2.0. Our aim here is to study whether or not the Falcon approach is agnostic
to underlying KG; hence, we do not claim novelty in the underlying
_linguistic-based approach_ for Falcon 2.0. Further, we investigate the
concerns related to robustness, emerging failures, and bottlenecks. We
introduce Falcon 2.0 based on the methodology employed in the first version.
Our tool is the first joint entity and relation linking tool for Wikidata. Our
novel contributions briefly lie in two aspects:
1. (1)
Falcon 2.0: The first resource for joint entity and relation linking over
Wikidata. Falcon 2.0 relies on fundamental principles of English morphology
(tokenization and compounding) and links entity and relation surface forms in
a short sentence to its Wikidata mentions. Falcon 2.0 is available as an
online API and can be accessed at https://labs.tib.eu/falcon/falcon2/. Falcon
2.0 is also able to recognize entities in keywords such as Barack Obama, where
there is no relation. We empirically evaluate Falcon 2.0 on three datasets
tailored for Wikidata. According to the observed results, Falcon 2.0
significantly outperforms all the existing baselines. For ease of use, we
integrate the Falcon API (https://labs.tib.eu/falcon/) into Falcon 2.0. This
option is available in case Wikipedia contains an equivalent entity (Wikidata
is a superset of DBpedia). The Falcon 2.0 API has already received over half a
million hits from February 2020 to the time of paper acceptance (excluding
self-access of the API while performing the evaluation), which shows its
growing usability. A usage sketch of the API is given after this list.
2. (2)
Falcon 2.0 Background KG: We created a new background KG for Falcon 2.0 from
Wikidata. We extracted 48,042,867 Wikidata entities from its public dump
and aligned these entities with the aliases present in Wikidata. For example,
Barack Obama is a Wikidata entity, Wiki:Q76 (https://www.wikidata.org/wiki/Q76).
We created a mapping between the label (Barack Obama) of Wiki:Q76 and its
aliases such as President Obama, Barack Hussein Obama, and Barry Obama, and
stored it in the background knowledge base. We implemented a similar alignment
for 15,645 properties/relations of Wikidata. The background knowledge base is
an indexed graph and can be queried. The resource is also available at a
persistent URI for further reuse (https://doi.org/10.6084/m9.figshare.11362883).
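As an illustration of how the resource can be consumed, the snippet below posts a question to the public endpoint. The endpoint path, the `mode=long` parameter, and the response keys follow the repository documentation at the time of writing and should be treated as assumptions that may change.

```python
import json
import requests  # third-party package: pip install requests

# Endpoint and parameters as documented in the Falcon 2.0 repository (assumed current)
URL = "https://labs.tib.eu/falcon/falcon2/api?mode=long"

payload = {"text": "Who painted The Starry Night?"}
resp = requests.post(URL, headers={"Content-Type": "application/json"},
                     data=json.dumps(payload))
resp.raise_for_status()
result = resp.json()

# Expected (per the docs): ranked lists of Wikidata IRIs for entities and relations
print(result.get("entities_wikidata"))
print(result.get("relations_wikidata"))
```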
The rest of this paper is organized as follows: Section 2 reviews the state-
of-the-art, and the following Section 3 describes our two resources and
approach to build Falcon 2.0. Section 4 presents experiments to evaluate the
performance of Falcon 2.0. Section 5 presents the importance and impact of
this work for the research community. The availability and sustainability of
resources is explained in Section 6, and its maintenance related discussion is
presented in Section 7. We close with the conclusion in Section 8.
## 2\. Related Work
Several surveys provide a detailed overview of the advancements of the
techniques employed in entity linking over KGs (Shen et al., 2015; Balog,
2018). Various reading lists (Ji, 2019), online forums
(http://nlpprogress.com/english/entity_linking.html) and GitHub repositories
(https://github.com/sebastianruder/NLP-progress/blob/master/english/entity_linking.md)
track the progress in the
domain of entity linking. Initial attempts in EL considered Wikipedia as an
underlying knowledge source. The research field has matured and the SOTA
nearly matches human-level performance (Raiman and Raiman, 2018). With the
advent of publicly available KGs such as DBpedia, Yago, and Freebase, the
focus has shifted towards developing EL over knowledge graphs. The
developments in Deep Learning have introduced a range of models that carry out
both NER and NED as a single end-to-end step (Kolitsas et al., 2018; Ganea and
Hofmann, 2017). NCEL (Cao et al., 2018) learns both local and global features
from Wikipedia articles, hyperlinks, and entity links to derive joint
embeddings of words and entities. These embeddings are used to train a deep
Graph Convolutional Network (GCN) that integrates all the features through a
Multi-layer Perceptron. The output is passed through a Sub-Graph Convolution
Network, which finally resorts to a fully connected decoder. The decoder maps
the output states to linked entities. The BI-LSTM+CRF model (Inan and
Dikenelli, 2018) formulates entity linking as a sequence learning task in
which the entity mentions form a sequence whose length equals that of the
sequence of output entities. Albeit precise, deep learning approaches demand _high-
quality_ training annotations, which are not extensively available for
Wikidata entity linking (Cetoli et al., 2019; Mulang et al., 2020).
There is concrete evidence in the literature that the machine learning-based
models trained over generic datasets such as WikiDisamb30 (Ferragina and
Scaiella, 2010), and CoNLL (YAGO) (Hoffart et al., 2011) do not perform well
when applied to short texts. Singh et al. (Singh et al., 2018b) evaluated more
than 20 entity linking tools over DBpedia for short text (e.g., questions) and
concluded that issues like capitalization of surface forms, implicit entities,
and multi-word entities affect the performance of EL tools in a short input
text. Sakor et al. (Sakor et al., 2019) addresses specific challenges of short
texts by applying a rule-based approach for EL over DBpedia. In addition to
linking entities to DBpedia, Sakor et al. also provides DBpedia IRIs of the
relations in a short text. EARL (Banerjee et al., [n.d.]) is another tool that
proposes a traveling salesman algorithm-based approach for joint entity and
relation linking over DBpedia. To the best of our knowledge, EARL and Falcon
are the only available tools that provide both entity and relation linking.
Entity linking over Wikidata is a relatively new domain. Cetoli et al. (Cetoli
et al., 2019) propose a neural network-based approach for linking entities to
Wikidata. The authors also align an existing Wikipedia corpus-based dataset to
Wikidata. However, this work only targets entity disambiguation and assumes
that the entities are already recognized in the sentences. Arjun (Mulang et
al., 2020) is the latest work for Wikidata entity linking. It uses an
attention-based neural network for linking Wikidata entity labels. OpenTapioca
(Delpeuch, 2019) is another attempt that performs end-to-end entity linking
over Wikidata; it is the closest to our work even though OpenTapioca does not
provide Wikidata Ids of relations in a sentence. OpenTapioca is also available
as an API and is utilized as our baseline. S-Mart (Yang and Chang, 2015) is a
tree-based structured learning framework based on multiple additive regression
trees for linking entities in a tweet. The model was later adapted for linking
entities in questions. VCG (Sorokin and Gurevych, 2018) is another attempt: a
unifying network that models contexts of variable granularity to extract
features for end-to-end entity linking. However, Falcon 2.0 is the
first tool for joint entity and relation linking over Wikidata.
## 3\. Falcon 2.0 - A Resource
In this section, we describe Falcon 2.0 in detail. First, the architecture of
Falcon 2.0 is presented. Next, we discuss the background knowledge (BK) used
to match the surface forms in the text to the resources in a specific KG. In
the paper's scope, we define "short text" as grammatically correct questions
(up to 15 words).
### 3.1. Architecture
Figure 1. The Falcon 2.0 Architecture. The boxes highlighted in grey are
reused from Falcon (Sakor et al., 2019); they contain a linguistic pipeline
for recognizing and linking entity and relation surface forms. The boxes in
white are our additions to the Falcon pipeline, which build a resource for
Wikidata entity and relation linking; these white boxes constitute what we
refer to as the BK specific to Wikidata. The text search engine contains the
alignment of Wikidata entity/relation labels along with the entity and
relation aliases. It is used for generating potential candidates for entity
and relation linking. RDF triple store is a local copy of Wikidata triples
containing all entities and predicates.
The Falcon 2.0 architecture is depicted in Figure 1. Falcon 2.0 receives short
input texts and outputs a set of entities and relations extracted from the
text; each entity and relation in the output is associated with a unique
Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to
BK and a catalog of rules for performing entity and relation linking. The BK
combines Wikidata labels and their corresponding aliases. Additionally, it
comprises alignments between nouns and entities in Wikidata. Alignments are
stored in a text search engine, while the knowledge source is maintained in an
RDF triple store accessible via a SPARQL endpoint. The rules that represent
the English morphology are in a catalog; a forward chaining inference process
is performed on top of the catalog during the extraction and linking tasks.
Falcon 2.0 also comprises several modules that identify and link entities and
relations to Wikidata. These modules implement POS Tagging, Tokenization &
Compounding, N-Gram Tiling, Candidate List Generation, Matching & Ranking,
Query Classifier, and N-Gram Splitting and are reused from the implementation
of Falcon.
### 3.2. Background Knowledge
Figure 2. Falcon 2.0 Background Knowledge is built by converting labels of
entities and relations in Wikidata into pairs of alignments. It is part of
the text search engine (cf. Figure 1).
Wikidata contains over 52 million entities and 3.9 billion facts (in the form
of subject-predicate-object triples). Since Falcon 2.0 background knowledge
only depends on labels, a significant portion of this extensive information is
not useful for our approach. Hence, we only extract the entity and relation
labels to create a local background KG, a.k.a. the "alias background knowledge
base". For example, the entity United States of America
(https://www.wikidata.org/wiki/Q30) in Wikidata has the natural language label
'United States of America' and several other aliases (or known_as labels) such
as "the United States of America, America, U.S.A., the U.S., United States,
etc.". We extended our
background KG with this information from Wikidata. Similarly, for relation
labels, the background KG is enriched with known_as labels to provide synonyms
and derived word forms. For example, the relation spouse
(https://www.wikidata.org/wiki/Property:P26) in Wikidata has the label spouse,
and the other known_as labels are husband, wife, married to, wedded to,
partner, etc. This variety of synonyms for each relation empowers Falcon 2.0
to match the surface form in the text to a relation in Wikidata. Figure 2
illustrates the process of building background knowledge.
### 3.3. Catalog of Rules
Falcon 2.0 is a rule-based approach. A catalog of rules is predefined to
extract entities and relations from the text. The rules are based on the
English morphological principles and borrowed from Sakor et al. (Sakor et al.,
2019). For example, Falcon 2.0 excludes all verbs from the entity candidate
list based on the rule verbs are not entities. Similarly, the N-Gram tiling
module in the Falcon 2.0 architecture resorts to the rule: entities with only
stopwords between them are one entity. Another such rule, When -> date,
Where -> place, resolves the ambiguity of matching the correct relation when
the short text is a question, by looking at the question headword. For
example, given the two questions When did Princess Diana die? and Where did
Princess Diana die?, the relation died can refer to either the place of death
or the year of death. The question headword (When/Where) is the only insight
to resolve the ambiguity here. When the question word is where, Falcon 2.0
matches only relations that have a place as their range.
### 3.4. Recognition
The extraction phase in Falcon 2.0 consists of three modules: POS tagging,
tokenization & compounding, and N-Gram tiling. The input of this phase is a
natural language text. The output of the phase is the list of surface forms
related to entities or relations.
Part-of-speech (POS) Tagging receives a natural language text as an input. It
tags each word in the text with its related tag, e.g., noun, verb, and adverb.
This module differentiates between nouns and verbs to enable the application
of the morphological rules from the catalog. The output of the module is a
list of the pairs of (word, tag).
Tokenization & Compounding builds the tokens list by removing the stopwords
from the input and splitting verbs from nouns. For example, if the input is
What is the operating income for Qantas, the output of this module is a list
of three tokens [operating, income, Qantas].
N-Gram Tiling module combines tokens with only stopwords between them, relying
on one of the rules from the catalog of rules. For example, if we consider the
previous module's output as input for the n-gram tiling module, the operating
and income tokens will be combined into one token. The output of the module is
a list of two tokens [operating income, Qantas].
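A toy rendition of the recognition phase is sketched below; the stopword list and the simplified tiling criterion (merging adjacent surviving tokens that share a POS class) are our own assumptions for illustration, whereas Falcon 2.0 obtains POS tags from a tagger and tiles across stopword-only gaps.

```python
# Toy sketch of the recognition phase; POS tags are supplied by hand here.
STOPWORDS = {"what", "is", "the", "for", "a", "an", "of"}

def tokenize_and_compound(tagged):
    """Drop stopwords, keeping content-word tokens with their POS tags."""
    return [(w, t) for w, t in tagged if w.lower() not in STOPWORDS]

def ngram_tiling(tokens):
    """Merge consecutive surviving tokens that share a POS class (a
    simplification of 'entities with only stopwords between them are
    one entity')."""
    tiled = []
    for word, tag in tokens:
        if tiled and tiled[-1][1] == tag:
            tiled[-1] = (tiled[-1][0] + " " + word, tag)
        else:
            tiled.append((word, tag))
    return [w for w, _ in tiled]

tagged = [("What", "WP"), ("is", "VBZ"), ("the", "DT"),
          ("operating", "NN"), ("income", "NN"), ("for", "IN"),
          ("Qantas", "NNP")]
tokens = tokenize_and_compound(tagged)   # [operating, income, Qantas]
print(ngram_tiling(tokens))              # ['operating income', 'Qantas']
```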
### 3.5. Linking
This phase consists of four modules: candidate list generation, matching &
ranking, relevant rule selection, and n-gram splitting.
Candidate List Generation receives the output of the recognition phase. The
module queries the text search engine for each token. Then, tokens will have
an associated candidate list of resources. For example, the retrieved
candidate list of the token operating income is [(P3362, operating income),
(P2139, income), (P3362, operating profit)], where the first element of each
pair is the Wikidata predicate identifier and the second is the label,
associated with that predicate, which matches the query "operating income".
Matching & Ranking ranks the candidate list received from the candidate list
generation module and matches candidates’ entities and relations. Since, in
any KG, the facts are represented as triples, the matching and ranking module
creates triples consisting of the entities and relationships from the
candidates’ list. Then, for each pair of entity and relation, the module
checks if the triple exists in the RDF triple store (Wikidata). The checking
is done by executing a simple ASK query over the RDF triple store. For each
triple, the module increases the rank of the involved relations and entities.
The output of the module is the ranked list of the candidates.
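The existence check can be issued as a SPARQL ASK query. A minimal sketch using the SPARQLWrapper library follows; the local endpoint URL is a placeholder, and the example IDs (wd:Q30, wdt:P26) are the ones introduced in Section 3.2.

```python
# Sketch of the existence check behind Matching & Ranking, assuming a
# local Wikidata SPARQL endpoint (URL below is a placeholder).
from SPARQLWrapper import SPARQLWrapper, JSON

def triple_exists(entity_id, relation_id,
                  endpoint="http://localhost:8890/sparql"):
    """ASK whether any triple <entity relation ?o> is in the store."""
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"""
        PREFIX wd:  <http://www.wikidata.org/entity/>
        PREFIX wdt: <http://www.wikidata.org/prop/direct/>
        ASK {{ wd:{entity_id} wdt:{relation_id} ?o . }}
    """)
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["boolean"]

# Each candidate (entity, relation) pair whose triple exists gets its
# rank incremented, e.g. (IDs taken from the examples in Section 3.2):
# if triple_exists("Q30", "P26"):
#     rank[("Q30", "P26")] += 1
```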
Relevant Rule Selection interacts with the matching & ranking module by
suggesting rank increases for some candidates based on the catalog of rules.
One such suggestion considers the question headword to resolve the ambiguity
between two relations based on the ranges of the relations in the KG.
N-Gram Splitting module is used if none of the triples tested in the matching
& ranking module exists in the triple store, i.e., when the compounding
performed in the tokenization & compounding module combined two separate
entities. The module splits the tokens from the right side and
separated entities. The module splits the tokens from the right side and
passes the tokens again to the candidate list generation module. Splitting the
tokens from the right side resorts to one of the fundamentals of the English
morphology; the compound words in English have their headword always towards
the right side (Williams, 1981).
Text Search Engine stores all the alignments of the labels. A simple querying
technique over Elasticsearch (Gormley and Tong, 2015) is used as the text
search engine over the background knowledge. It receives a token as an input
and returns all the related resources with labels similar to the received
token.
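A minimal sketch of such a lookup with the Python Elasticsearch client follows; the index name, field name, and result layout are illustrative assumptions, not the actual Falcon 2.0 schema.

```python
# Sketch of candidate generation against the alignment index (index and
# field names are illustrative assumptions).
from elasticsearch import Elasticsearch

def candidate_list(token, index="wikidata_labels", size=50):
    """Retrieve (id, label) pairs whose indexed label matches the token."""
    es = Elasticsearch("http://localhost:9200")
    resp = es.search(index=index, query={"match": {"label": token}}, size=size)
    return [(hit["_source"]["id"], hit["_source"]["label"])
            for hit in resp["hits"]["hits"]]

# candidate_list("operating income")
# -> e.g. [("P3362", "operating income"), ("P2139", "income"), ...]
```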
RDF Triple store is a local copy of the Wikidata endpoint. It consists of all
the RDF triples of Wikidata labeled with the English language. An RDF triple
store is used to check the existence of the triples passed from the Matching &
Ranking module. The RDF triple store keeps around 3.9 billion triples.
## 4\. Experimental Study
We study three research questions: RQ1) What is the performance of Falcon 2.0
for entity linking over Wikidata? RQ2) What is the impact of Wikidata’s
specific background knowledge on the performance of a linguistic approach?
RQ3) What is the performance of Falcon 2.0 for relation linking over Wikidata?
#### Metrics
We report the performance using the standard metrics of Precision, Recall, and
F-measure. Precision is the fraction of relevant resources among the retrieved
resources. Recall is the fraction of relevant resources that have been
retrieved over the total amount of relevant resources. F-Measure or F-Score is
the harmonic mean of precision and recall.
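For concreteness, a per-question computation of these metrics over sets of predicted and gold IRIs might look as follows (a sketch; the exact averaging across questions is not shown).

```python
# Set-based precision, recall, and F-measure for a single question.
def prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                    # correctly linked IRIs
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(prf({"Q30", "P26"}, {"Q30", "P20"}))   # (0.5, 0.5, 0.5)
```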
#### Datasets
We rely on three different question answering datasets namely SimpleQuestion
dataset for Wikidata (Diefenbach et al., 2017), WebQSP-WD (Sorokin and
Gurevych, 2018) and LC-QuAD 2.0 (Dubey et al., 2019). The SimpleQuestion
dataset contains 5,622 test questions which are answerable using Wikidata as
underlying KG. WebQSP-WD contains 1,639 test questions, and LC-QuAD 2.0
contains 6,046 test questions. SimpleQuestion and LC-QuAD 2.0 provide the
annotated gold standard for entities and relations, whereas WebQSP-WD only
provides annotated gold standard for entities. Hence, we evaluated entity
linking performance on three datasets and relation linking performance on two
datasets. Also, SimpleQuestion and WebQSP-WD contain questions with a single
entity and relation, whereas LC-QuAD 2.0 contains mostly complex questions
(i.e., more than one entity and relation).
#### Baselines
OpenTapioca (Delpeuch, 2019): is available as a web API; it provides Wikidata
URIs for entities. We run the OpenTapioca API on all three datasets.
Variable Context Granularity model (VCG) (Sorokin and Gurevych, 2018): is a
unifying network that models contexts of variable granularity to extract
features for mention detection and entity disambiguation. We were unable to
reproduce VCG using the publicly available source code. Hence, we only report
its performance on WebQSP-WD from the original paper (Sorokin and Gurevych,
2018) as we are unable to run the model on the other two datasets for entity
linking. For completeness, we also report the other two baselines provided by
the authors, namely the Heuristic Baseline and Simplified VCG.
S-Mart (Yang and Chang, 2015): was initially proposed to link entities in the
tweets and later adapted for question answering. The system is not open
source, and we adopt its results from (Sorokin and Gurevych, 2018) for the
WebQSP-WD dataset.
No Baseline for Relation Linking: To the best of our knowledge, there is no
baseline for relation linking on Wikidata. One argument could be to run the
existing DBpedia based relation linking tool on Wikidata and compare it with
our performance. We contest this solely because Wikidata is extremely noisy.
For example, in "What is the longest National Highway in the world?", the
entity surface form "National Highway" matches four (4) different entities in
Wikidata that share the same entity label (i.e., "National Highway"). In
comparison, 2,055 other entities contain the full mention in their labels for
the surface form "National Highway". However, in DBpedia, there exists only
one unique label for "National Highway". Hence, any entity or relation linking
tool tailored for DBpedia will face issues on Wikidata (cf. Table 3).
Therefore, to keep the comparison fair, we do not report the resulting biased
and degraded performance. Hence, we report
Falcon 2.0 relation linking performance only to establish new baselines on two
datasets: SimpleQuestion and LC-QuAD 2.0.
Figure 3. Falcon 2.0 API Web Interface.
#### Experimental Details
Falcon 2.0 is extremely lightweight from an implementation point of view. A
laptop with eight cores and 16GB RAM running Ubuntu 18.04 is used for
implementing and evaluating Falcon 2.0. We deployed its web API on a server
with 723GB RAM and 96 cores (Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz)
running Ubuntu 18.04. This publicly available API is used to calculate the
standard evaluation metrics, namely Precision, Recall, and F-score.
### 4.1. Experimental Results
Table 1. Entity linking evaluation results on LC-QuAD 2.0 & SimpleQuestion
datasets. Best values are in bold.
Approach | Dataset | P | R | F
---|---|---|---|---
OpenTapioca (Delpeuch, 2019) | LC-QuAD 2.0 | 0.29 | 0.42 | 0.35
Falcon 2.0 | LC-QuAD 2.0 | 0.50 | 0.56 | 0.53
OpenTapioca (Delpeuch, 2019) | SimpleQuestion | 0.01 | 0.02 | 0.01
Falcon 2.0 | SimpleQuestion | 0.56 | 0.64 | 0.60
OpenTapioca (Delpeuch, 2019) | SimpleQuestion Uppercase Entities | 0.16 | 0.28 | 0.20
Falcon 2.0 | SimpleQuestion Uppercase Entities | 0.66 | 0.75 | 0.70
Table 2. Entity linking evaluation results on the WEBQSP test dataset. Best
values are in bold.
Approach | P | R | F
---|---|---|---
S-MART (Yang and Chang, 2015) | 0.66 | 0.77 | 0.72
Heuristic baseline (Sorokin and Gurevych, 2018) | 0.30 | 0.61 | 0.40
Simplified VCG (Sorokin and Gurevych, 2018) | 0.84 | 0.62 | 0.71
VCG (Sorokin and Gurevych, 2018) | 0.83 | 0.65 | 0.73
OpenTapioca (Delpeuch, 2019) | 0.01 | 0.02 | 0.02
Falcon 2.0 | 0.80 | 0.84 | 0.82
#### Experimental Results 1
In the first experiment described in Table 1, we compare entity linking
performance of Falcon 2.0 on SimpleQuestion and LC-QuAD 2.0 datasets. We first
evaluate the performance on the SimpleQuestion dataset. Surprisingly, we
observe that for the OpenTapioca baseline, the values are approximately 0.0
for Precision, Recall, and F-score. We analyzed the source of errors and found
that out of 5,622 questions, only 246 have entity labels in uppercase letters.
OpenTapioca fails to recognize and link entity mentions written in lowercase
letters. Case sensitivity is a common issue for entity linking tools over
short text, as reported by Singh et al. (Singh et al., 2018a; Singh et al.,
2018b) in a detailed analysis. From the remaining 246 questions, only 70 are
answered correctly by OpenTapioca. Given that OpenTapioca struggles to link
lowercase entity surface forms, we evaluated Falcon 2.0 and OpenTapioca on the
246 questions of SimpleQuestion to provide a fair evaluation for the baseline
(reported as SimpleQuestion Uppercase Entities in Table 1). OpenTapioca
reports an F-score of 0.20 on this subset of SimpleQuestion, whereas Falcon
2.0 reports an F-score of 0.70 on the same subset (cf. Table 1). For LC-QuAD
2.0, OpenTapioca reports an F-score of 0.35 against Falcon 2.0 with an F-score
of 0.53, as reported in Table 1.
#### Experimental Results 2
We report the performance of Falcon 2.0 on the WebQSP-WD dataset in Table 2.
Falcon 2.0 clearly outperforms all other baselines with the highest F-score of
0.82. OpenTapioca demonstrates low performance on this dataset as well.
Experimental results 1 & 2 answer our first research question (RQ1).
#### Ablation Study for Entity Linking and Recommendations
For the second research question (RQ2), we evaluate the impact of Wikidata’s
specific background knowledge on the entity linking performance. We evaluated
Falcon on the WebQSP-WD dataset against Falcon 2.0. We linked Falcon predicted
DBpedia IRIs with corresponding Wikidata IDs using owl:sameAs. We can see in
the Table 3 that Falcon 2.0 significantly outperforms Falcon despite using the
same linguistic driven approach. The jump in Falcon 2.0 performance comes from
Wikidata’s specific local background knowledge, which we created by expanding
Wikidata entities and relations with associated aliases. It also validates the
novelty of Falcon 2.0 when compared to Falcon for the Wikidata entity linking.
We observe that the performance of Falcon 2.0 differs considerably across the
three datasets. For instance, on WebQSP-WD, our F-score is 0.82, whereas, on
LC-QuAD 2.0, the F-score drops to 0.57. The first source of error is the
dataset(s) itself. In both datasets (SimpleQuestion and LC-QuAD 2.0), many
questions are grammatically incorrect.
To validate our claim more robustly, we asked two native English speakers to
check the grammar of 200 random questions on LC-QuAD 2.0. Annotators reported
that 42 out of 200 questions are grammatically incorrect. Many questions have
erroneous spellings of the entity names. For example, ”Who is the country for
head of state of Mahmoud Abbas?” and ”Tell me about position held of Malcolm
Fraser and elected in?” are two grammatically incorrect questions in LC-QuAD
2.0. Similarly, many questions in the SimpleQuestion dataset are also
grammatically incorrect. "where was hank cochran birthed" is one such example
in the SimpleQuestion dataset. Falcon 2.0 resorts to fundamental principles of
English morphology and is therefore limited in recognizing entities in many
grammatically incorrect questions.
We also recognize that the performance of Falcon 2.0 on sentences with minimal
context is limited. For example, in the question ”when did annie open?” from
the WebQSP-WD dataset, the sentential context is shallow. Also, more than one
instance of ”Annie” exists in Wikidata, such as Wiki:Q566892 (correct one) and
Wiki:Q181734. Falcon 2.0 wrongly predicts the entity in this case. In another
example, ”which country is lamb from?”, the correct entity is Wiki:Q6481017
with label ”lamb” in Wikidata. However, Falcon 2.0 returns Wiki:13553878,
which also has a label ”lamb”. In such cases, additional knowledge graph
context shall prove to be useful. Approaches such as (Yang et al., 2019)
introduced a concept of feeding ”entity descriptions” as an additional context
in an entity linking model over Wikipedia. Suppose the extra context in the
form of entity description (1985 English drama film directed by Colin Gregg)
for the entity Wiki:13553878 is provided. In that case, a model may correctly
predict the correct entity ”lamb.” Based on our observations, we propose the
following recommendations for the community to improve the entity linking task
over Wikidata:
* •
Wikidata has inherited challenges of vandalism and noisy entities due to
crowd-authored entities (Heindorf et al., 2016). We expect the research
community to come up with more robust short text datasets for the Wikidata
entity linking without spelling and grammatical errors.
* •
Rule-based approaches come with their limitations when the sentential context
is minimal. However, such methods are beneficial when training data is
unavailable. We recommend a two-step process to target questions with minimal
sentential context: 1) work towards a clean and large Wikidata dataset for
entity linking of short text, which will allow more robust machine learning
approaches to evolve; 2) use entity descriptions from knowledge graphs to
improve the linking process (as in (Yang et al., 2019)).
Table 3. Entity Linking Performance of Falcon vs Falcon 2.0 on WEBQSP-WD. Best
values are in bold.
Approach | P | R | F
---|---|---|---
Falcon (Sakor et al., 2019) | 0.47 | 0.45 | 0.46
Falcon 2.0 | 0.80 | 0.84 | 0.82
Table 4. Relation linking evaluation results on LC-QuAD 2.0 & SimpleQuestion
datasets.
Approach | Dataset | P | R | F
---|---|---|---|---
Falcon 2.0 | LC-QuAD 2.0 | 0.44 | 0.37 | 0.40
Falcon 2.0 | SimpleQuestion | 0.35 | 0.44 | 0.39
Table 5. Sample Questions from the LC-QuAD 2.0 dataset. The table shows five
sample questions and the associated gold standard relations. These sentences
do not include standard sentential relations in the English language. Since
Wikidata is largely authored by the crowd, such uncommon relations are
frequent. Falcon 2.0 is limited in linking such relations, and most results
are empty.
Question | Gold Standard IDs | Gold Standard Labels | Predicted IDs | Predicted Labels
---|---|---|---|---
Which is the global-warming potential of dichlorodifluoromethane? | P2565 | global warming potential | [] | _
What is the AMCA Radiocommunications Licence ID for Qantas? | P2472 | ACMA Radiocommunications Client Number | P275 | copyright license
What is ITIS TSN for Sphyraena? | P815 | ITIS TSN | [] | _
What is the ARICNS for Fomalhaut? | P999 | ARICNS | [] | _
Which is CIQUAL 2017 ID for cheddar? | P4696 | CIQUAL2017 ID | [] | _
#### Experimental Results 3
In the third experiment (for RQ3), we evaluate the relation linking
performance of Falcon 2.0. We are not aware of any other model for relation
linking over Wikidata. Table 4 summarizes relation linking performance. With
this, we established new baselines over two datasets for relation linking on
Wikidata.
#### Ablation Study for Relation Linking and Recommendations
Falcon reported an F-score of 0.43 on LC-QuAD over DBpedia in (Sakor et al.,
2019) whereas Falcon 2.0 reports a comparable relation linking F-score 0.40 on
LC-QuAD 2.0 for Wikidata (cf. Table 4). The wrong identification of the
entities does affect the relation linking performance, and it is the major
source of error in our case for relation linking. Table 5 summarizes a sample
case study for relation linking on five LC-QuAD 2.0 questions. We observe that
the relations present in the questions are highly uncommon and nonstandard,
which is a peculiar property of Wikidata. Falcon 2.0 is limited in linking
such relations. We recommend the following:
* •
Wikidata challenges relation linking approaches by posing a new challenge:
user-created nonstandard relations such as in Table 5. A rule-based approach
like ours faces a clear limitation in linking such relations. Linking user-
created relations in crowd-authored Wikidata is an open question for the
research community.
## 5\. Impact
In August 2019, Wikidata became the first Wikimedia project to cross 1 billion
edits, and it has over 20,000 active editors
(https://www.wikidata.org/wiki/Wikidata:Statistics). A large subset
of the information extraction community has built its research around DBpedia
and Wikidata, targeting different research problems such as KG completion,
question answering, entity linking, and data quality assessment
(Moon et al., 2017; Reinanda et al., 2016; Yang et al., 2013). Furthermore,
entity and relation linking tasks have been studied well beyond information
extraction research, especially NLP and Semantic Web. Despite Wikidata being
hugely popular, there are limited resources for reusing and aligning
unstructured text to Wikidata mentions. Moreover, when it comes to short text,
the performance of existing baselines is limited. We believe the
availability of Falcon 2.0 as a web API along with open source access to its
code will provide researchers an easy and reusable way to annotate
unstructured text against Wikidata. We also believe that a rule-based
approach such as ours, which does not require any training data, is beneficial
for low-resource languages (considering Wikidata is multilingual; see
https://www.wikidata.org/wiki/Help:Wikimedia_language_codes/lists/all).
## 6\. Adoption and Reusability
Falcon 2.0 is open source. The source code is available in our public GitHub:
https://github.com/SDM-TIB/Falcon2.0 for reusability and reproducibility.
Falcon 2.0 is easily accessible via a simple cURL request or using our web
interface. Detailed instructions are provided on our GitHub. It is currently
available for the English language. However, there is no assumption in the
approach or while building the background knowledge base that restricts its
adaptation or extensibility to other languages. The background knowledge of
Falcon 2.0 is available for the community and can be easily reused to generate
candidates for entity linking (Yamada et al., 2016b) or in question answering
approaches such as (Zhang and Zou, 2018). The background knowledge consists of
48,042,867 alignments of Wikidata entities and 15,645 alignments for Wikidata
predicates. The MIT License allows for the free distribution and reuse of
Falcon 2.0. We hope the research community and industry practitioners will use
the Falcon 2.0 resources for various purposes such as linking entities and
relations to Wikidata, annotating unstructured text, and developing resources
for low-resource languages.
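A hypothetical invocation from Python is sketched below; the endpoint URL and the payload/response fields are placeholders, and the actual interface is documented in the GitHub repository linked above.

```python
# Hypothetical API call (the endpoint URL and request/response format
# below are placeholders, not the documented Falcon 2.0 interface).
import requests

FALCON_API = "https://example.org/falcon2/api"   # placeholder endpoint

def link(text):
    """POST a short text and return the parsed JSON response, expected
    to carry Wikidata IRIs of the recognized entities and relations."""
    resp = requests.post(FALCON_API, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return resp.json()

# link("What is the operating income for Qantas?")
```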
## 7\. Maintenance and Sustainability
Falcon 2.0 is a publicly available resource offered by the Scientific Data
Management (SDM) group at TIB, Hannover
(https://www.tib.eu/en/research-development/scientific-data-management/). TIB
is one of the largest libraries for science and technology in the world
(https://www.tib.eu/en/tib/profile/). It actively promotes open access to
scientific artifacts, e.g., research data, scientific literature, non-textual
material, and software. Similar to other publicly maintained repositories of
SDM (https://github.com/SDM-TIB), Falcon 2.0 will be preserved and regularly
updated to fix bugs and include new features. The Falcon 2.0 API will be
sustained on the TIB servers to allow for unrestricted free access.
## 8\. Conclusion and Future Work
We presented the resource Falcon 2.0, a rule-based entity and relation linking
tool able to recognize entities & relations in a short text and link them to
an existing knowledge graph, e.g., DBpedia or Wikidata. Although there are
various approaches for entity & relation linking to DBpedia, Falcon 2.0 is one
of the few tools targeting Wikidata. Thus, given the number of generic and
domain-specific facts that compose Wikidata, Falcon 2.0 has the potential to
impact researchers and practitioners who resort to NLP tools for transforming
semi-structured data into structured facts. Falcon 2.0 is open source. The API
is publicly accessible and maintained on the servers of the TIB labs. Falcon
2.0 has been empirically evaluated on three benchmarks, and the outcomes
suggest that it outperforms the state of the art. Albeit promising, the
experimental results can be improved. In the future, we plan to continue
researching novel techniques that enable adjusting the catalog of rules and
alignments to changes in Wikidata. We further plan to mitigate errors of the
rule-based approach by incorporating machine learning, aiming towards a hybrid
approach.
## 9\. Acknowledgments
This work has received funding from the EU H2020 Project No. 727658 (IASIS).
## References
* (1)
* Auer et al. (2007) Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A Nucleus for a Web of Open Data. In _ISWC_. 722–735.
* Balog (2018) Krisztian Balog. 2018\. _Entity-oriented search_. Springer Open.
* Banerjee et al. ([n.d.]) Debayan Banerjee, Mohnish Dubey, Debanjan Chaudhuri, and Jens Lehmann. [n.d.]. Joint Entity and Relation Linking using EARL. ([n. d.]).
* Bollacker et al. (2008) Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In _ACM SIGMOD_. 1247–1250.
* Cao et al. (2018) Yixin Cao, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018\. Neural Collective Entity Linking. arXiv:1811.08603 http://arxiv.org/abs/1811.08603
* Cetoli et al. (2019) Alberto Cetoli, Stefano Bragaglia, Andrew D O’Harney, Marc Sloan, and Mohammad Akbari. 2019\. A Neural Approach to Entity Linking on Wikidata. In _European Conference on Information Retrieval_. Springer, 78–86.
* Delpeuch (2019) Antonin Delpeuch. 2019\. OpenTapioca: Lightweight Entity Linking for Wikidata. _arXiv preprint arXiv:1904.09131_ (2019).
* Diefenbach et al. (2017) Dennis Diefenbach, Thomas Tanon, Kamal Singh, and Pierre Maret. 2017\. Question answering benchmarks for wikidata.
* Dubey et al. (2019) Mohnish Dubey, Debayan Banerjee, Abdelrahman Abdelkawi, and Jens Lehmann. 2019. Lc-quad 2.0: A large dataset for complex question answering over wikidata and dbpedia. In _International Semantic Web Conference_. Springer, 69–78.
* Ferragina and Scaiella (2010) Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). In _Proceedings of the 19th ACM Conference on Information and Knowledge Management, CIKM 2010, Toronto, Ontario, Canada, October 26-30, 2010_. 1625–1628.
* Ganea and Hofmann (2017) Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep Joint Entity Disambiguation with Local Neural Attention. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017_. 2619–2629.
* Gormley and Tong (2015) Clinton Gormley and Zachary Tong. 2015. _Elasticsearch: The Definitive Guide: A Distributed Real-Time Search and Analytics Engine_. O'Reilly Media, Inc.
* Heindorf et al. (2016) Stefan Heindorf, Martin Potthast, Benno Stein, and Gregor Engels. 2016. Vandalism detection in wikidata. In _Proceedings of the 25th ACM International on Conference on Information and Knowledge Management_. 327–336.
* Hoffart et al. (2011) Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011\. Robust Disambiguation of Named Entities in Text. In _EMNLP 2011_. 782–792.
* Inan and Dikenelli (2018) Emrah Inan and Oguz Dikenelli. 2018. A Sequence Learning Method for Domain-Specific Entity Linking. In _Proceedings of the Seventh Named Entities Workshop_ (Melbourne, Australia). Association for Computational Linguistics, 14–21. http://aclweb.org/anthology/W18-2403
* Ji (2019) Heng Ji. 2019. _Entity Discovery and Linking and Wikification Reading List_. http://nlp.cs.rpi.edu/kbp/2014/elreading.html
* Kolitsas et al. (2018) Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-End Neural Entity Linking. In _Proceedings of the 22nd Conference on Computational Natural Language Learning_. 519–529.
* Moon et al. (2017) Changsung Moon, Paul Jones, and Nagiza F Samatova. 2017\. Learning entity type embeddings for knowledge graph completion. In _Proceedings of the 2017 ACM on conference on information and knowledge management_. 2215–2218.
* Mulang et al. (2020) Isaiah Onando Mulang, Kuldeep Singh, Akhilesh Vyas, Saeedeh Shekarpour, Ahmad Sakor, Maria Esther Vidal, Soren Auer, and Jens Lehmann. 2020. Encoding Knowledge Graph Entity Aliases in an Attentive Neural Networks for Wikidata Entity Linking. _In WISE (to appear)_ (2020).
* Raiman and Raiman (2018) Jonathan Raphael Raiman and Olivier Michel Raiman. 2018. DeepType: multilingual entity linking by neural type system evolution. In _Thirty-Second AAAI Conference on Artificial Intelligence_.
* Reinanda et al. (2016) Ridho Reinanda, Edgar Meij, and Maarten de Rijke. 2016\. Document Filtering for Long-tail Entities. In _Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016_. ACM, 771–780. https://doi.org/10.1145/2983323.2983728
* Röder et al. (2018) Michael Röder, Ricardo Usbeck, and Axel-Cyrille Ngonga Ngomo. 2018\. Gerbil–benchmarking named entity recognition and linking consistently. _Semantic Web_ 9, 5 (2018), 605–625.
* Sakor et al. (2019) Ahmad Sakor, Isaiah Onando Mulang, Kuldeep Singh, Saeedeh Shekarpour, Maria Esther Vidal, Jens Lehmann, and Sören Auer. 2019\. Old is gold: linguistic driven approach for entity and relation linking of short text. In _Proceedings of the 2019 NAACL HLT (Long Papers)_. 2336–2346.
* Shen et al. (2015) W. Shen, J. Wang, and J. Han. 2015. Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions. _IEEE Transactions on Knowledge and Data Engineering_ 27, 2 (2015), 443–460.
* Singh et al. (2018a) Kuldeep Singh, Ioanna Lytra, Arun Sethupat Radhakrishna, Saeedeh Shekarpour, Maria-Esther Vidal, and Jens Lehmann. 2018a. No One is Perfect: Analysing the Performance of Question Answering Components over the DBpedia Knowledge Graph. _arXiv:1809.10044_ (2018).
* Singh et al. (2018b) Kuldeep Singh, Arun Sethupat Radhakrishna, Andreas Both, Saeedeh Shekarpour, Ioanna Lytra, Ricardo Usbeck, Akhilesh Vyas, Akmal Khikmatullaev, Dharmen Punjani, Christoph Lange, Maria-Esther Vidal, Jens Lehmann, and Sören Auer. 2018b. Why Reinvent the Wheel: Let’s Build Question Answering Systems Together. In _Web Conference_. 1247–1256.
* Sorokin and Gurevych (2018) Daniil Sorokin and Iryna Gurevych. 2018. Mixing Context Granularities for Improved Entity Linking on Question Answering Data across Entity Categories. In _Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics_. 65–75.
* Vrandecic (2012) Denny Vrandecic. 2012\. Wikidata: a new platform for collaborative data collection. In _Proceedings of the 21st World Wide Web Conference, WWW 2012, Lyon, France, April 16-20, 2012 (Companion Volume)_. ACM, 1063–1064. https://doi.org/10.1145/2187980.2188242
* Williams (1981) Edwin Williams. 1981\. On the notions "Lexically related" and "Head of a word". _Linguistic inquiry_ 12, 2 (1981), 245–274.
* Yamada et al. (2016a) Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016a. Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation. In _CoNLL 2016_ , Yoav Goldberg and Stefan Riezler (Eds.). ACL, 250–259.
* Yamada et al. (2016b) Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016b. Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation. _CoRR_ abs/1601.01343 (2016).
* Yang et al. (2019) Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, and Xiang Ren. 2019. Learning Dynamic Context Augmentation for Global Entity Linking. In _EMNLP-IJCNLP 2019_ , Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (Eds.). 271–281.
* Yang and Chang (2015) Yi Yang and Ming-Wei Chang. 2015. S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet Entity Linking. In _ACL- IJCNLP (Volume 1: Long Papers)_. 504–513.
* Yang et al. (2013) Zi Yang, Elmer Garduño, Yan Fang, Avner Maiberg, Collin McCormack, and Eric Nyberg. 2013\. Building optimal information systems automatically: configuration space exploration for biomedical information systems. In _22nd ACM CIKM’13, San Francisco, USA_. ACM, 1421–1430.
* Zhang and Zou (2018) Xinbo Zhang and Lei Zou. 2018. IMPROVE-QA: An Interactive Mechanism for RDF Question/Answering Systems. In _Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018_. 1753–1756. https://doi.org/10.1145/3183713.3193555
# Signatures of fragmentation for periodically driven fermions
Somsubhra Ghosh1, Indranil Paul2, and K. Sengupta1 1School of Physical
Sciences, Indian Association for the Cultivation of Science, Kolkata 700032,
India.
2Université Paris Cité, CNRS, Laboratoire Matériaux et Phénomènes Quantiques,
75205 Paris, France.
###### Abstract
We study the possible signatures of prethermal strong Hilbert space
fragmentation (HSF) for one-dimensional (1D) fermions subjected to a periodic
drive. We extend the results of Phys. Rev. Lett. 130, 120401 (2023) to show
the possibility of such fragmentation for a large class of experimentally
relevant drive protocols. Moreover, we demonstrate the persistence of HSF when
the fermion chain is taken away from half-filling. Both these analyses
indicate the robustness of the fragmentation phenomenon reported earlier. We
also provide an alternate derivation of the Floquet Hamiltonian of the driven
chain which yields insight into the generic nested commutator structure of its
higher order terms. Finally, we study the density-density out-of-time-ordered
correlators (OTOCs) of the driven chain both near and away from the special
drive frequencies at which its first order Floquet Hamiltonian exhibits
fragmentation. We show that these OTOCs, for a chain with open boundary
condition, exhibit a distinct periodic unscrambling of information at special
drive frequencies; such unscrambling can therefore serve as a marker of
prethermal HSF. We provide an approximate analytic explanation of the role of
HSF behind such periodic unscrambling and discuss experiments which can detect
signatures of strong HSF in such driven chains.
## I Introduction
Closed quantum systems driven out of equilibrium have become an increasingly
important subject of research in recent years [1, 2, 3]. One of the central
questions in this field pertains to the long-time behavior of local
correlation functions of these systems. In most cases, the behavior of such
correlation functions can be understood from the eigenstate thermalization
hypothesis (ETH) [4, 5]. ETH predicts eventual thermalization, under unitary
Hamiltonian dynamics, of a generic many-body quantum state which can initially
be far from equilibrium; it is one of the central paradigms for understanding
long-time behavior of a generic ergodic many-body system. ETH also holds, with
minor modifications, for periodically driven systems, where the driven system
is ultimately expected to heat up to infinite temperature [6].
ETH relies on the ergodicity of a generic quantum system and is known to fail
when ergodicity is violated. Such ergodicity violation can occur in integrable
models due to the presence of a large number of conserved quantities [1]. In
addition, ETH fails in the presence of strong disorder, which leads to
many-body localization and a consequent violation of ergodicity [7, 8, 9]. A
more subtle and weaker failure of ETH occurs due to emergent symmetry sectors
in otherwise generic quantum systems. The presence of such symmetries
typically leads to a tower of states which are protected from the other
thermal states in the Hilbert space. Thus any quantum dynamics starting from
an initial state which belongs to these sectors cannot thermalize; such states
are often called quantum scars [10, 11, 12, 13, 14]. The violation of ETH in
this case is weak since it only happens if the initial state has large overlap
with the states in the scar sector. The number of such states, for a
one-dimensional (1D)
system of length $L$, is typically ${\rm O}(L)$; they form a tiny fraction of
the total number of states in the Hilbert space which is ${\rm O}(e^{L})$.
Thus these systems display ETH violating dynamics for a small fraction of
initial states.
Another, recently found, violation of ETH occurs in non-integrable quantum
systems due to the presence of kinetic constraints. The Hamiltonian of such
quantum systems, expressed as a matrix in the computational basis, breaks down
into an exponentially large number of dynamically disconnected fragments. The
presence of such a large number of disconnected sectors is to be contrasted
with those arising from conserved global quantities; the latter only lead to
${\rm O}(L)$ disconnected symmetry sectors. This phenomenon is known as strong
Hilbert space fragmentation (HSF) [15, 16, 17, 18, 19]. Such strong
fragmentation naturally breaks ergodicity, since a generic initial state
belonging to a given fragment cannot, under the action of the system
Hamiltonian, explore states in the Hilbert space that belong to other
fragments. Most of the Hamiltonians studied in this context
are 1D spin or fermionic models [15, 16, 17, 18]; however, more recently some
higher-dimensional models have also been shown to exhibit strong HSF [19].
More recently, the generalization of strong HSF to periodically driven quantum
systems has been studied [20]. It has been shown that a periodically driven
fermionic chain can show signatures of Hilbert space fragmentation at special
drive frequencies over a large prethermal timescale; the extent of this
prethermal timescale depends on the drive amplitude and can be quite large in
the large drive amplitude regime. The signatures of such prethermal
fragmentation can be found in the entanglement entropy, autocorrelation
function, and equal-time correlators of such a driven system; each of these
quantities shows a departure from its counterpart in ergodic quantum systems [20],
demonstrating a clear realization of prethermal strong HSF. In addition, such
prethermal HSF in driven quantum systems can lead to interesting oscillatory
dynamics of correlators for certain initial states which has no counterpart
for HSF realized in an equilibrium setting [20].
In this work, we extend the results obtained in Ref. 20 in several ways.
First, we show that such signatures of fragmentation can be obtained for a
much wider class of drive protocols. This makes the prethermal fragmentation
phenomenon much more relevant to standard experiments on ultracold atom
platforms, which we discuss. Second, we provide a comprehensive analysis of the
Floquet Hamiltonian. The analytical expression for the first order Floquet
Hamiltonian, $H_{F}^{(1)}$, derived using Floquet perturbation theory (FPT),
was presented in Ref. 20; here we provide an alternate derivation of the
Floquet Hamiltonian up to second order in perturbation theory. This analysis
provides insight into the commutator structure of the higher order terms that
was not apparent from the previous derivation. It also provides an estimate of
the frequency range over which the first order Floquet Hamiltonian provides a
qualitatively accurate description of the dynamical evolution in the
prethermal regime. Third, we show that the signature of fragmentation persists
when the driven chain is taken away from half-filling. This shows the
robustness of the prethermal fragmentation phenomenon and points out the
possibility of its experimental realization for a wide range of fermion
filling fractions. Finally, we study the density-density out-of-time-ordered correlator
(OTOC) for the driven fermion chain. We show that the behavior of such OTOC is
qualitatively different at special frequencies at which the system exhibits
signatures of prethermal HSF. In particular, for a driven fermion chain with
open boundary condition, we find, starting from an initial ${\mathbb{Z}}_{2}$
state, unscrambling of information manifested through periodic revivals of the
OTOC. We analyze this phenomenon in detail, provide an analytic, albeit
qualitative, understanding of its mechanism, and tie it to the fragmented
structure of the first-order Floquet Hamiltonian, $H_{F}^{(1)}$, of the driven
chain obtained using FPT. Our results thus demonstrate that OTOCs can serve
as markers for prethermal HSF in a driven system.
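For readers who wish to experiment, a minimal exact-diagonalization sketch of a density-density OTOC on a toy undriven chain is given below. The model parameters, the commutator-squared convention $C_{ij}(t)=\langle[\hat{n}_{i}(t),\hat{n}_{j}]^{\dagger}[\hat{n}_{i}(t),\hat{n}_{j}]\rangle$, and the hard-core-boson representation (valid for nearest-neighbor hopping with open boundaries, where the Jordan-Wigner strings cancel) are our own illustrative choices and differ in detail from the driven setup studied in this paper.

```python
# Toy exact-diagonalization sketch of a density-density OTOC on a small
# open chain (static Hamiltonian; substitute a stroboscopic evolution to
# probe the driven case). hbar = 1 throughout.
import numpy as np
from scipy.linalg import expm

L, J, V0 = 6, 1.0, 0.5
dim = 2 ** L
occ = lambda s, i: (s >> i) & 1

# diagonal density operators n_i in the bit-string Fock basis
n_ops = [np.diag([float(occ(s, i)) for s in range(dim)]) for i in range(L)]

H = np.zeros((dim, dim))
for s in range(dim):
    for i in range(L - 1):
        H[s, s] += V0 * occ(s, i) * occ(s, i + 1)           # interaction
        if occ(s, i) != occ(s, i + 1):                      # hopping
            H[s ^ (1 << i) ^ (1 << (i + 1)), s] += -J

# Z2 (Neel-like) initial state: fermions on even sites
psi = np.zeros(dim)
psi[sum(1 << i for i in range(0, L, 2))] = 1.0

def otoc(t, i=0, j=L - 1):
    U = expm(-1j * H * t)
    n_i_t = U.conj().T @ n_ops[i] @ U                       # Heisenberg n_i(t)
    comm = n_i_t @ n_ops[j] - n_ops[j] @ n_i_t
    return np.real(psi.conj() @ comm.conj().T @ comm @ psi)

for t in np.linspace(0.0, 5.0, 6):
    print(f"t = {t:3.1f}   C = {otoc(t):.4f}")
```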
The organization of the rest of this work is as follows. In Sec. II, we
present a derivation of the Floquet Hamiltonian which brings out its nested
commutator structure. Next, in Sec. III, we discuss the different classes of
drive protocols which allow for signatures of prethermal fragmentation and
also derive the higher order Floquet Hamiltonians corresponding to them. This
is followed by Sec. IV, where we demonstrate signatures of prethermal HSF away
from half-filling. Next, in Sec. V, we discuss the behavior of OTOCs in such
driven systems. Finally, we discuss our main results and conclude in Sec. VI.
Some details of the calculation are presented in the Appendices.
## II Formalism
In this section, we outline the derivation of the Floquet Hamiltonian of the
driven fermion chain. Our derivation brings out the nested commutator
structure of the Floquet Hamiltonian and also addresses a more general class
of drive protocols for which the fermion chain exhibits prethermal HSF.
### II.1 Preliminary
Consider a time dependent quantum mechanical system described by the
Hamiltonian
$\mathcal{H}(t)=\mathcal{H}_{0}(t)+\mathcal{H}_{1},$ (1)
where all the time dependence is in the zeroth order term $\mathcal{H}_{0}$.
The term $\mathcal{H}_{1}$, which in the following will be treated
perturbatively, has no explicit time dependence. From the Schrödinger equation
$i\hbar\partial_{t}\psi(t)=[\mathcal{H}_{0}(t)+\mathcal{H}_{1}]\psi(t)$ and
the definition of the time evolution operator $U(t,0)$ via
$\psi(t)=U(t,0)\psi(0)$, we get
$i\hbar\frac{\partial}{\partial
t}U(t,0)=[\mathcal{H}_{0}(t)+\mathcal{H}_{1}]U(t,0).$ (2)
The evaluation of $U(t,0)$ can be broken into two steps. To do so, we write
[21, 22, 23]
$U(t,0)=U_{0}(t,0)W(t,0),$ (3)
where $U_{0}(t,0)$ is the exact time evolution operator in the absence of the
$\mathcal{H}_{1}$ term. The first step, which is simple, is to evaluate
$U_{0}(t,0)$ which is given by
$U_{0}(t,0)=\exp[-\frac{i}{\hbar}\int_{0}^{t}d\tau\mathcal{H}_{0}(\tau)].$ (4)
In the above, the time ordering in front of the exponential can be omitted
since the operators $\mathcal{H}_{0}(\tau)$ at different times commute. The
second step, which is non-trivial, is to compute $W(t,0)$ that encodes the
time evolution due to $\mathcal{H}_{1}$. This is performed perturbatively.
Using $i\hbar\partial_{t}U_{0}(t,0)=\mathcal{H}_{0}(t)U_{0}(t,0)$ and Eqs. (2)
and (3), we get
$i\hbar\frac{\partial}{\partial t}W(t,0)=\mathcal{H}_{p}(t)W(t,0),$ (5)
where
$\mathcal{H}_{p}(t)\equiv U_{0}(t,0)^{-1}\mathcal{H}_{1}U_{0}(t,0).$ (6)
Using Eq. (5), the perturbative expansion is
$\displaystyle W(t,0)$
$\displaystyle=1-\left(\frac{i}{\hbar}\right)\int_{0}^{t}d\tau\mathcal{H}_{p}(\tau)\,+$
$\displaystyle\left(-\frac{i}{\hbar}\right)^{2}\int_{0}^{t}d\tau_{1}\mathcal{H}_{p}(\tau_{1})\int_{0}^{\tau_{1}}d\tau_{2}\mathcal{H}_{p}(\tau_{2})\,+$
$\displaystyle\left(-\frac{i}{\hbar}\right)^{3}\int_{0}^{t}d\tau_{1}\,\mathcal{H}_{p}(\tau_{1})\int_{0}^{\tau_{1}}d\tau_{2}\,\mathcal{H}_{p}(\tau_{2})\int_{0}^{\tau_{2}}d\tau_{3}\,\mathcal{H}_{p}(\tau_{3})$
$\displaystyle+\cdots.$ (7)
Note that, in the above, the operators $\mathcal{H}_{p}(\tau)$ at different
times do not commute.
The above formulation can also be viewed as a series expansion in a rotating
frame for the following reason. Consider a time dependent unitary
transformation $V(t)$ from a laboratory frame to a rotating reference frame
with the initial condition $V(0)=1$. The wavefunction in the rotating frame is
$\psi_{r}(t)=V^{\dagger}(t)\psi(t)$, and an operator in the same frame is
$\mathcal{O}_{r}(t)=V^{\dagger}(t)\mathcal{O}V(t)$, where $\psi(t)$ and
$\mathcal{O}$ are the wavefunction and the operator in the laboratory frame,
respectively. Simultaneously, the Hamiltonian $\mathcal{H}(t)$ in the
laboratory frame transforms to $\mathcal{H}_{r}(t)$ in the rotating frame. By
demanding that $i\hbar\partial_{t}\psi_{r}(t)=\mathcal{H}_{r}(t)\psi_{r}(t)$
we get
$\mathcal{H}_{r}(t)=V^{\dagger}(t)\mathcal{H}(t)V(t)-i\hbar
V^{\dagger}(t)\dot{V}(t),$ (8)
where $\dot{V}\equiv\partial_{t}V(t)$. Furthermore, if we define the time
evolution operator $U_{r}(t_{1},t_{2})$ in the rotating frame by
$i\hbar\partial_{t_{1}}U_{r}(t_{1},t_{2})=\mathcal{H}_{r}(t_{1})U_{r}(t_{1},t_{2})$,
then it is related to that in the laboratory frame by
$U(t_{1},t_{2})=V(t_{1})U_{r}(t_{1},t_{2})V^{\dagger}(t_{2}).$ (9)
The connection between the two formulations is made if we choose the time
dependent unitary transformation to be
$V(t)=\exp[-\frac{i}{\hbar}\int_{0}^{t}d\tau\mathcal{H}_{0}(\tau)],$ (10)
such that the $\mathcal{H}_{0}(t)$ term is “gauged out” in the rotating frame.
In this case $\mathcal{H}_{r}(t)$ coincides with $\mathcal{H}_{p}(t)$ given by
Eq. (6), $U_{r}(t,0)$ with $W(t,0)$, and Eqs. (3) and (9) become identical
with $t_{2}=0$.
However, note that the first formulation is more versatile in the sense that
it can still be used when $\mathcal{H}_{1}$ is the zeroth order term and
$\mathcal{H}_{0}$ is perturbative. In this case, we simply exchange
$\mathcal{H}_{0}(t)\leftrightarrow\mathcal{H}_{1}$ in Eqs. (4) and (6). The
resulting expansion will not match that in the rotating frame.
### II.2 Floquet perturbation theory
Until now the discussion has been general, and it applies to all time
dependent problems. In the particular case of a Floquet system, where the time
dependence is due a periodic external drive, we are interested in the
stroboscopic time evolution operator $U(T,0)$, where $T$ is the period of the
drive. The related Floquet Hamiltonian is defined by
$\mathcal{H}_{F}\equiv\frac{i\hbar}{T}\log
U(T,0)=\frac{i\hbar}{T}\log[U_{0}(T,0)W(T,0)].$ (11)
We suppose that there is a small parameter that justifies the expansion
$W(T,)=1+W_{1}(T)+W_{2}(T)+\cdots$, and correspondingly
$\mathcal{H}_{F}=\mathcal{H}_{F}^{(0)}+\mathcal{H}_{F}^{(1)}+\mathcal{H}_{F}^{(2)}+\cdots$.
Then, using Eq. (7) and after some algebra, the first few terms in the
expansion of the Floquet Hamiltonian are given by
$\displaystyle\mathcal{H}_{F}^{(0)}$ $\displaystyle=\frac{i\hbar}{T}\log
U_{0}(T,0),$ (12) $\displaystyle\mathcal{H}_{F}^{(1)}$
$\displaystyle=\frac{i\hbar}{T}W_{1}(T)=\frac{1}{T}\int_{0}^{T}d\tau\mathcal{H}_{p}(\tau),$
(13) $\displaystyle\mathcal{H}_{F}^{(2)}$
$\displaystyle=\frac{i\hbar}{T}\left[W_{2}(T)-\frac{1}{2}W_{1}(T)^{2}\right]$
$\displaystyle=\frac{-i}{2\hbar
T}\int_{0}^{T}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}\left[\mathcal{H}_{p}(\tau_{1}),\mathcal{H}_{p}(\tau_{2})\right],$
(14) $\displaystyle\mathcal{H}_{F}^{(3)}$
$\displaystyle=\frac{i\hbar}{T}\left[W_{3}(T)-\frac{1}{2}\left(W_{1}(T)W_{2}(T)+W_{2}(T)W_{1}(T)\right)\right.$
$\displaystyle+\left.\frac{1}{3}W_{1}(T)^{3}\right]$
$\displaystyle=-\frac{1}{6\hbar^{2}T}\int_{0}^{T}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}\int_{0}^{\tau_{2}}d\tau_{3}\left\{\left[\mathcal{H}_{p}(\tau_{1}),\left[\mathcal{H}_{p}(\tau_{2}),\mathcal{H}_{p}(\tau_{3})\right]\right]+\left[\left[\mathcal{H}_{p}(\tau_{1}),\mathcal{H}_{p}(\tau_{2})\right],\mathcal{H}_{p}(\tau_{3})\right]\right\}.$
(15)
Eqs. 12-15 indicate the nested commutator structure of the higher-order terms
of the Floquet Hamiltonian; we shall use them for explicit computation of
$H_{F}$ in Sec. III.
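As a numerical illustration of Eqs. (12)-(14), the following sketch compares the exact Floquet Hamiltonian $\frac{i\hbar}{T}\log U(T,0)$ with the first two perturbative terms for a randomly chosen two-level problem (with $\hbar=1$); the toy model is our own choice and is unrelated to the fermion chain studied below.

```python
# Sanity check of the FPT expansion on a toy 2x2 problem (hbar = 1):
# H0(t) = f(t)*B (commuting with itself at all times), H1 small and static.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
T, N = 1.0, 400
dt = T / N
ts = (np.arange(N) + 0.5) * dt            # midpoints of the time slices

B = np.diag([0.0, 1.0])
f = lambda t: 5.0 * np.cos(2 * np.pi * t / T)
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H1 = 0.1 * (X + X.conj().T)               # small Hermitian perturbation

phase = np.cumsum(f(ts)) * dt             # int_0^t f(t') dt'
U0 = [expm(-1j * p * B) for p in phase]
Hp = [U0[k].conj().T @ H1 @ U0[k] for k in range(N)]   # Eq. (6)

U = np.eye(2, dtype=complex)              # exact one-period evolution
for k in range(N):
    U = expm(-1j * (f(ts[k]) * B + H1) * dt) @ U
HF_exact = 1j * logm(U) / T

HF1 = sum(Hp) * dt / T                    # Eq. (13)
HF2 = np.zeros((2, 2), dtype=complex)     # Eq. (14), discretized
for a in range(N):
    for b in range(a):
        HF2 += Hp[a] @ Hp[b] - Hp[b] @ Hp[a]
HF2 *= -1j * dt * dt / (2 * T)

print("||HF_exact - HF1||       =", np.linalg.norm(HF_exact - HF1))
print("||HF_exact - HF1 - HF2|| =", np.linalg.norm(HF_exact - HF1 - HF2))
```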
## III Computation of the Floquet Hamiltonian
In this section we first provide analytical results for higher order terms in
the Floquet Hamiltonian for a cosine drive protocol in Sec. III.1. This is
followed, in Sec. III.2, by a derivation and analysis of the first order
Floquet Hamiltonian $H_{F}^{(1)}$ for a more general drive protocol.
### III.1 Cosine modulation of interaction
Consider a driven system described by Eq. (1) where
$\displaystyle\mathcal{H}_{0}(t)$
$\displaystyle=V_{1}\cos\omega_{D}t\sum_{i}\hat{n}_{i}\hat{n}_{i+1},$ (16)
$\displaystyle\mathcal{H}_{1}$
$\displaystyle=\sum_{i}\left[-J(c^{\dagger}_{i}c_{i+1}+\rm{h.c.})+V_{0}\hat{n}_{i}\hat{n}_{i+1}+V_{2}\hat{n}_{i}\hat{n}_{i+2}\right],$
(17)
with $V_{1}\gg(J,V_{0},V_{2})$. Thus, in the following we treat the
$\mathcal{H}_{0}$ term exactly, and $\mathcal{H}_{1}$ perturbatively.
Following Sec. II, we have, using Eq. (4),
$U_{0}(t,0)=\exp[-i\lambda\hat{B}\sin\omega_{D}t],$ (18)
where $\lambda\equiv V_{1}/(\hbar\omega_{D})$ is a dimensionless parameter and
$\hat{B}\equiv\sum_{j}\hat{n}_{j}\hat{n}_{j+1}$.
The next step is to compute $\mathcal{H}_{p}(t)$ using Eq. (6). As an
intermediate step we find
$\displaystyle\left[\hat{B},\mathcal{H}_{1}\right]$
$\displaystyle=-J\sum_{i}\hat{A}_{i}\left(c^{\dagger}_{i}c_{i+1}-c^{\dagger}_{i+1}c_{i}\right),$
(19) $\displaystyle\left[\hat{B},\left[\hat{B},\mathcal{H}_{1}\right]\right]$
$\displaystyle=-J\sum_{i}\hat{A}_{i}^{2}\left(c^{\dagger}_{i}c_{i+1}+c^{\dagger}_{i+1}c_{i}\right),$
(20)
and so on, where
$\hat{A}_{i}=\hat{n}_{i-1}-\hat{n}_{i+2}.$ (21)
Using these relations we obtain
$\displaystyle\mathcal{H}_{p}(t)$
$\displaystyle=\exp[i\lambda\hat{B}\sin\omega_{D}t]\mathcal{H}_{1}\exp[-i\lambda\hat{B}\sin\omega_{D}t]$
$\displaystyle=\mathcal{H}_{1}+i\lambda\sin\omega_{D}t\left[\hat{B},\mathcal{H}_{1}\right]+\frac{1}{2!}(i\lambda\sin\omega_{D}t)^{2}$
$\displaystyle\times\left[\hat{B},\left[\hat{B},\mathcal{H}_{1}\right]\right]+\frac{1}{3!}(i\lambda\sin\omega_{D}t)^{3}\left[\hat{B},\left[\hat{B},\left[\hat{B},\mathcal{H}_{1}\right]\right]\right]$
$\displaystyle+\cdots$
$\displaystyle=\sum_{i}\left[-J\left(e^{i\lambda\hat{A}_{i}\sin\omega_{D}t}c^{\dagger}_{i}c_{i+1}+e^{-i\lambda\hat{A}_{i}\sin\omega_{D}t}c^{\dagger}_{i+1}c_{i}\right)\right.$
$\displaystyle\left.+V_{0}\hat{n}_{i}\hat{n}_{i+1}+V_{2}\hat{n}_{i}\hat{n}_{i+2}\right].$
(22)
Using the explicit form of $\mathcal{H}_{p}(t)$ it is possible to compute
order by order the Floquet Hamiltonian.
The zeroth order Floquet Hamiltonian $\mathcal{H}_{F}^{(0)}$ vanishes because,
from Eq. (18), we have $U_{0}(T,0)=1$.
To compute the first order Floquet Hamiltonian we use the relation
$\displaystyle I_{1}(\hat{A},\lambda)$
$\displaystyle\equiv\frac{1}{T}\int_{0}^{T}d\tau
e^{i\lambda\hat{A}\sin(\omega_{D}\tau)}$
$\displaystyle=J_{0}(\lambda\hat{A})=(1-\hat{A}^{2})+\hat{A}^{2}J_{0}(\lambda),$
(23)
where $J_{n}(x)$ is the Bessel function of the first kind of integer order,
and the last equality uses the fact that $\hat{A}$ has eigenvalues $0,\pm 1$
together with the evenness of $J_{0}$.
Using this relation and Eq. (13) we get
$\displaystyle\mathcal{H}_{F}^{(1)}$
$\displaystyle=\sum_{i}\left[-JJ_{0}(\lambda\hat{A}_{i})\left(c^{\dagger}_{i}c_{i+1}+{\rm
h.c.}\right)\right.$
$\displaystyle+\left.V_{0}\hat{n}_{i}\hat{n}_{i+1}+V_{2}\hat{n}_{i}\hat{n}_{i+2}\right].$
(24)
If the drive frequency is tuned to $\omega_{m}$ such that
$\lambda_{m}=V_{1}/(\hbar\omega_{m})$ coincides with the $m^{{\rm th}}$ zero
of the Bessel function $J_{0}$, then the corresponding first order Floquet
Hamiltonian is
$\displaystyle\mathcal{H}_{F}^{(1)}(\lambda=\lambda_{m})$
$\displaystyle=\sum_{i}\left[-J(1-\hat{A}_{i}^{2})\left(c^{\dagger}_{i}c_{i+1}+{\rm
h.c.}\right)\right.$
$\displaystyle+\left.V_{0}\hat{n}_{i}\hat{n}_{i+1}+V_{2}\hat{n}_{i}\hat{n}_{i+2}\right].$
(25)
The above defines a model with constrained hopping, where only those hops are
allowed which preserve the total number of nearest-neighbor pairs,
$\hat{N}_{D}\equiv\sum_{i}\hat{n}_{i}\hat{n}_{i+1}$. This model is known to
show strong Hilbert space fragmentation [16].
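The constrained structure of Eq. (25) can be checked numerically. The following sketch, our own illustration on a small open chain (with sites beyond the boundary treated as empty, a convention choice), builds the hops allowed at $\lambda=\lambda_{1}$ and counts the dynamically disconnected fragments of the half-filled sector.

```python
# Count the fragments of H_F^(1) at lambda = lambda_1 on a small chain.
import numpy as np
from itertools import combinations
from scipy.special import jn_zeros
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

lam1 = jn_zeros(0, 1)[0]          # first zero of J_0 (~2.4048); omega_1 = V1/(hbar*lam1)
print(f"lambda_1 = {lam1:.4f}")

L, Nf = 8, 4                      # open chain at half filling
states = [sum(1 << i for i in c) for c in combinations(range(L), Nf)]
index = {s: k for k, s in enumerate(states)}

def occ(s, i):
    """Occupation of site i; sites beyond the open boundary count as empty."""
    return (s >> i) & 1 if 0 <= i < L else 0

# A hop i <-> i+1 survives in Eq. (25) only when A_i = n_{i-1} - n_{i+2} = 0,
# since J_0(lambda_1 * A_i) vanishes for A_i = +-1.
adj = lil_matrix((len(states), len(states)), dtype=int)
for s in states:
    for i in range(L - 1):
        if occ(s, i) != occ(s, i + 1) and occ(s, i - 1) == occ(s, i + 2):
            t = s ^ (1 << i) ^ (1 << (i + 1))
            adj[index[s], index[t]] = 1

n_frag, _ = connected_components(adj.tocsr(), directed=False)
print(f"{len(states)} half-filled states split into {n_frag} fragments")
```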
The second order Floquet Hamiltonian can be broken into two parts
$\mathcal{H}_{F}^{(2)}=\mathcal{H}_{F}^{(2a)}+\mathcal{H}_{F}^{(2b)}$, with
$\mathcal{H}_{F}^{(2a)}=\frac{-i}{2\hbar
T}\int_{0}^{T}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}\left[\tilde{\mathcal{H}}_{p}(\tau_{1})\,,\,\tilde{\mathcal{H}}_{p}(\tau_{2})\right],$
(26)
and
$\displaystyle\mathcal{H}_{F}^{(2b)}$ $\displaystyle=\frac{-i}{2\hbar
T}\int_{0}^{T}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}\left\{\left[\tilde{\mathcal{H}}_{p}(\tau_{1})\,,\,\hat{K}\right]+\left[\hat{K}\,,\,\tilde{\mathcal{H}}_{p}(\tau_{2})\right]\right\}.$
(27)
In the above
$\tilde{\mathcal{H}}_{p}(\tau)\equiv-J\sum_{i}\left(e^{i\lambda\hat{A}_{i}\sin\omega_{D}\tau}c^{\dagger}_{i}c_{i+1}+{\rm
h.c.}\right),$ (28)
and
$\hat{K}=\sum_{i}\left(V_{0}\hat{n}_{i}\hat{n}_{i+1}+V_{2}\hat{n}_{i}\hat{n}_{i+2}\right).$
(29)
The details of the evaluation of the two parts are given in the appendix. The
final result is
$\mathcal{H}_{F}^{(2)}=\frac{2J\mathcal{C}(\lambda)}{\hbar\omega_{D}}\left[\sum_{i}\hat{A}_{i}\left(c^{\dagger}_{i}c_{i+1}-{\rm
h.c.}\right)\,,\,\mathcal{H}_{F}^{(1)}\right],$ (30)
where
$\mathcal{C}(\lambda)\equiv\sum_{k=0}^{\infty}\frac{J_{2k+1}(\lambda)}{2k+1}.$
This concludes our derivation of the Floquet Hamiltonian for the cosine
protocol. We note that ${\mathcal{H}}_{F}^{(2)}$ does not respect the
constrained hopping structure of ${\mathcal{H}}_{F}^{(1)}$ and therefore
destroys HSF in the driven model beyond a prethermal timescale; below this
timescale, ${\mathcal{H}}_{F}^{(1)}$ dominates the dynamics, leading to a
prethermal realization of HSF.
### III.2 An experimentally relevant drive protocol
A possible realization of a standard fermion chain where coherent quantum
dynamics can be studied involves ultracold atom platforms [24, 25]. In such
realizations, both the hopping amplitude and the nearest-neighbor interaction
between the fermions depend on the strength of the external lasers, making it difficult to dynamically alter one while keeping the other fixed. An experimental realization of strong HSF would therefore require a protocol which allows for simultaneous variation of both the hopping and the interaction strength.
To take such simultaneous variations into account, we now consider a fermionic
chain with the Hamiltonian
$\displaystyle H=-J(t)\sum_{j}\left(c_{j}^{\dagger}c_{j+1}+{\rm h.c.}\right)+(V_{0}+V(t))\sum_{j}\hat{n}_{j}\hat{n}_{j+1}+V_{2}\sum_{j}\hat{n}_{j}\hat{n}_{j+2},$ (31)
where $J(t)$ and $V(t)$ are amplitudes of nearest neighbor hopping and
interactions respectively, $V_{2}\ll|V(t)|$ is the amplitude of the second-
neighbor interactions, $c_{j}$ denotes the fermion annihilation operator on
the $j^{\rm th}$ site of the chain, and $\hat{n}_{j}=c_{j}^{\dagger}c_{j}$ is
the fermion density operator.
In what follows, we choose a square pulse protocol so that
$\displaystyle V(t)=\begin{cases}-V_{1}, & t\leq T/2,\\ V_{1}, & T/2<t\leq T,\end{cases}$ (32)
$\displaystyle J(t)=\begin{cases}J_{1}, & t\leq T/2,\\ J_{2}, & T/2<t\leq T,\end{cases}$ (33)
with $V_{1}\gg J_{1},J_{2},V_{0},V_{2}$ so that one can reliably apply FPT to
compute the Floquet Hamiltonian. We note that the protocol given by Eqs. 32
and 33 allows for simultaneous variation of the hopping and the interaction
strengths of the fermions.
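The stroboscopic dynamics generated by this protocol can also be obtained numerically from the exact Floquet operator $U(T,0)=e^{-iH_{B}T/(2\hbar)}e^{-iH_{A}T/(2\hbar)}$, where $H_{A}$ ($H_{B}$) denotes the Hamiltonian during the first (second) half cycle. A minimal sketch follows (Python with numpy/scipy; $\hbar=1$, parameters illustrative, and open boundary conditions are used so that nearest-neighbor hopping carries no Jordan-Wigner string).
\begin{verbatim}
# Exact Floquet operator for the square-pulse protocol of Eqs. (32)-(33).
import numpy as np
from scipy.linalg import expm
from itertools import combinations

L, N = 10, 5
J1, J2, V0, V1, V2 = 0.5, 1.5, 2.0, 20.0, 2.0
w_D = V1 / 2.0                      # special frequency, Eq. (42) with m = 1
T = 2 * np.pi / w_D

basis = list(combinations(range(L), N))
idx = {s: k for k, s in enumerate(basis)}
D = len(basis)

def hamiltonian(J, V):              # Hamiltonian of Eq. (31) for one half cycle
    H = np.zeros((D, D))
    for k, occ in enumerate(basis):
        s = set(occ)
        H[k, k] = ((V0 + V) * sum(i in s and i + 1 in s for i in range(L - 1))
                   + V2 * sum(i in s and i + 2 in s for i in range(L - 2)))
        for i in range(L - 1):      # hop i <-> i+1
            if (i in s) != (i + 1 in s):
                H[idx[tuple(sorted(s ^ {i, i + 1}))], k] = -J
    return H

U = expm(-1j * hamiltonian(J2, V1) * T / 2) @ expm(-1j * hamiltonian(J1, -V1) * T / 2)
print(np.allclose(U @ U.conj().T, np.eye(D)))   # unitarity check
\end{verbatim}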
To obtain an analytic expression for the first-order Floquet Hamiltonian, we
first write the Hamiltonian given by Eq. 31 as $H=H_{0}+H_{1}$ where
$H_{0}=V(t)\sum_{j}\hat{n}_{j}\hat{n}_{j+1}$ and
$\displaystyle H_{1}$ $\displaystyle=$
$\displaystyle-J(t)\sum_{j}\left(c_{j}^{\dagger}c_{j+1}+{\rm
h.c.}\right)+V_{0}\sum_{j}\hat{n}_{j}\hat{n}_{j+1}$ (34)
$\displaystyle+V_{2}\sum_{j}\hat{n}_{j}\hat{n}_{j+2}.$
We then follow the standard procedure and obtain the evolution operator
corresponding to the term $H_{0}$ [21, 22, 23]. This yields
$\displaystyle U_{0}(t,0)=\begin{cases}e^{iV_{1}t\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}, & t\leq T/2,\\ e^{iV_{1}(T-t)\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}, & T/2<t\leq T.\end{cases}$
The Floquet Hamiltonian corresponding to $U_{0}(T,0)$ can be easily read off from Eq. III.2 to be identically zero, $H_{F}^{(0)}=0$.
Next, we consider the effect of the terms in $H_{1}$ using perturbation
theory. The first order contribution to the evolution operator from $H_{1}$ is
given by
$\displaystyle U_{1}(T,0)$ $\displaystyle=$
$\displaystyle\frac{-i}{\hbar}\int_{0}^{T}dt\,U_{0}^{\dagger}(t,0)H_{1}U_{0}(t,0)$
(36)
To obtain an analytic expression for $U_{1}(T,0)$, we first note that the interaction terms in $H_{1}$ (Eq. 34) commute with $U_{0}$. Thus the contribution of these terms to $U_{1}$ is trivially obtained and yields
$\displaystyle U_{1a}(T,0)$ $\displaystyle=$
$\displaystyle\frac{-iT}{\hbar}\left(V_{0}\sum_{j}\hat{n}_{j}\hat{n}_{j+1}+V_{2}\sum_{j}\hat{n}_{j}\hat{n}_{j+2}\right)$
In contrast, the contribution from the hopping term in $H_{1}$ requires a more
detailed analysis. To this end, using Eq. III.2, we write
$\displaystyle
U_{1b}(T,0)=\frac{iJ_{1}}{\hbar}\int_{0}^{T/2}dt\,e^{-iV_{1}t\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}\sum_{j}(c_{j}^{\dagger}c_{j+1}+{\rm h.c.})e^{iV_{1}t\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}$
$\displaystyle+\frac{iJ_{2}}{\hbar}\int_{T/2}^{T}dt\,e^{-iV_{1}T\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}e^{iV_{1}t\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}\sum_{j}(c_{j}^{\dagger}c_{j+1}+{\rm h.c.})e^{-iV_{1}t\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar}e^{iV_{1}T\sum_{j}\hat{n}_{j}\hat{n}_{j+1}/\hbar},$
(38)
where we have used Eqs. 32 and 33.
To evaluate Eq. 38, we note that the hopping from site $j$ to $j+1$ costs energy due to the nearest-neighbor interaction if it changes the number of lattice bonds whose two end sites are both occupied by fermions. This allows us to define an operator
$\displaystyle\hat{A}_{j}$ $\displaystyle=$
$\displaystyle\hat{n}_{j+2}-\hat{n}_{j-1}$ (39)
which takes values $\pm 1$ or $0$ on any site. The hopping of a fermion from a
site $j$ changes the energy due to nearest-neighbor interaction only if
$\hat{A}_{j}\neq 0$. This allows us to write
$\displaystyle
U_{1b}(T,0)=\frac{iJ_{1}}{\hbar}\int_{0}^{T/2}dt\sum_{j}(e^{-iV_{1}t\hat{A}_{j}/\hbar}c_{j}^{\dagger}c_{j+1}+{\rm
h.c.})+\frac{iJ_{2}}{\hbar}\int_{T/2}^{T}dt\sum_{j}(e^{-iV_{1}(T-t)\hat{A}_{j}/\hbar}c_{j}^{\dagger}c_{j+1}+{\rm
h.c.})$ (40)
Carrying out the integrals in Eq. 40 and noting that $\hat{A}_{j}$ can take values $0$ and $\pm 1$, we find [20]
$\displaystyle U_{1b}(T,0)$ $\displaystyle=$
$\displaystyle\frac{iT}{\hbar}J_{s}\sum_{j}\left(\left[(1-\hat{A}_{j}^{2})+\hat{A}_{j}^{2}e^{-iV_{1}\hat{A}_{j}T/(4\hbar)}\frac{\sin
V_{1}T/(4\hbar)}{V_{1}T/(4\hbar)}\right]c_{j}^{\dagger}c_{j+1}+{\rm
h.c.}\right)=\frac{-iT\hat{J}_{c}}{\hbar}$ (41)
where $J_{s}=(J_{1}+J_{2})/2$ and the expression of $\hat{J}_{c}$ can be read
off from Eq. 41. Thus we find that for
$\displaystyle V_{1}$ $\displaystyle=$ $\displaystyle 2m\hbar\omega_{D},$ (42)
where $m\in\mathbb{Z}$, the first order contribution to $U_{1}$ occurs only if $\hat{A}_{j}=0$. This in turn means that the first order evolution operator receives a finite contribution only from a constrained hopping term, which governs fermion hopping in such systems. This leads to a Floquet Hamiltonian that exhibits Hilbert space fragmentation similar to that derived in Ref. 20.
Figure 1: (Color Online) (a) Plot of $S(nT)/S_{p}$ as a function of $n$ at
$\omega_{D}=V_{1}/\hbar$ starting from a random Fock state for different
values of the drive amplitude $V_{1}$. For all values of $V_{1}$, $S(nT)$
saturates to $S_{p}$. (b) Similar to (a) but at the special frequency
$\omega_{D}=V_{1}/2\hbar$. With increase in $V_{1}$, $S(nT)$ saturates to
$S_{p}^{f}$, the Page value of the fragment of $H_{F}^{(1)}$ from which the
initial Fock state is chosen, for $n\leq 200$. (c) Plot of the density-density
autocorrelator $C_{L}(nT)$ as a function of $n$ at $\omega_{D}=V_{1}/2\hbar$
starting from a infinite temperature thermal state. In this case too, the
autocorrelator does not reach its thermal value of zero within the first 500
drive cycles, thus bearing signatures of prethermal HSF. (d) Value of
$C_{L}(nT)$ after $n=5000$ drive cycles as a function of $V_{1}$ and
$\hbar\omega_{D}/V_{1}$. The plot shows two special frequencies at
$\hbar\omega_{D}/V_{1}=0.25$ and $\hbar\omega_{D}/V_{1}=0.5$. The time
evolutions are performed using the exact unitary evolution operators,
corresponding to drive protocols (32) and (33). The system sizes are $L=16$
for plots (a) and (b) and $L=14$ for (c) and (d). For all plots,
$J_{1}=J_{2}/3=0.5$ and $V_{0}=V_{2}=2$.
The derivation of the first order Floquet Hamiltonian from Eqs. 41 and III.2
can be carried out in a straightforward manner [21] and yields
$\displaystyle H_{F}^{(1)}$ $\displaystyle=$
$\displaystyle\hat{J}_{c}+V_{0}\sum_{j}\hat{n}_{j}\hat{n}_{j+1}+V_{2}\sum_{j}\hat{n}_{j}\hat{n}_{j+2}$
(43)
Thus the fragmentation exhibited by $H_{F}^{(1)}$ for this protocol is
identical to that found in Ref. 20. In addition, it also allows for variation
of $J$ which makes the protocol much less restrictive compared to its
counterpart in Ref. 20.
The corresponding dynamical signatures in the half-chain entanglement entropy
and the density-density autocorrelation function are shown in Fig. 1. For
these plots, we use Eqs. 32 and 33 and set the hopping amplitudes to
$J_{1}=0.5$ for the first half of the drive and $J_{2}=1.5$ for the next half
cycle in Fig. 1. Also, we set $V_{0}=V_{2}=2$. Figs. 1(a) and 1(b) show the
evolution of the half-chain entanglement entropy starting from a random Fock
state from the half-filled sector in a chain of length $L=16$ with periodic
boundary condition at $\omega_{D}=V_{1}/\hbar$ (generic frequency) and
$\omega_{D}=V_{1}/2\hbar$ (special frequency satisfying the relation in Eq.
42) respectively. Fig. 1(a) shows that away from the special frequency, for
all values of the drive amplitude, the entanglement entropy $S(nT)$ saturates
to the Page value $S_{p}$ of the half-filled symmetry sector from which the
initial state is chosen, as is expected of ergodic systems. In contrast to
this, Fig. 1(b) shows that at the special frequency, the entanglement entropy
fails to reach $S_{p}$ with increasing drive amplitude within the first $200$
drive cycles. Instead, with increase in drive amplitude, $S(nT)$ saturates to
$S_{p}^{f}$, the Page value of the fragment of $H_{F}^{(1)}$, from which the
initial state is chosen. Both $S_{p}$ and $S_{p}^{f}$ have been computed
analytically and numerically following Ref. 20.
Fig. 1(c) shows similar behavior for the time evolution of the density-density
autocorrelator
$\displaystyle C_{L}(nT)=\langle(n_{L}(nT)-1/2)(n_{L}(0)-1/2)\rangle$ (44)
in an infinite temperature thermal state for a chain of length $L=14$ with
open boundary condition at the special frequency $\omega_{D}=V_{1}/2\hbar$. A
careful look at Eq. 44 reveals that $C_{L}$ also represents the connected
autocorrelator since $\langle n_{L}(0)-1/2\rangle=0$. Thus, in an ergodic
system, $C_{L}(nT)$ is expected to decay to zero at long enough times
signifying loss of any initial memory. However, Fig. 1(c) shows that with
increasing drive amplitude $V_{1}$, $C_{L}$ saturates to a value much higher
than zero at long enough times. This can be explained by considering the fragmented structure of $H_{F}^{(1)}$ and Mazur’s bound on the autocorrelator in the presence of this fragmented structure [20]. In Ref. [20], we showed that the long-time saturation value of the autocorrelator lies above the lower bound predicted by Mazur’s bound. The autocorrelator decays down to zero when the chain is driven away from the special frequencies. Fig. 1(d)
elucidates this by plotting the value of $C_{L}(nT)$ after $5000$ drive cycles
as a function of the drive amplitude and the drive frequency. This plot can
also serve as a “phase diagram” in the drive frequency and drive amplitude
space, where non-zero saturation values of $C_{L}(nT)$ (bright regions in the
color plot) indicate parameter regimes where prethermal fragmentation is
observed.
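A minimal sketch of how such data can be generated (continuing the Python sketch above and reusing basis, D, L and the Floquet operator U defined there):
\begin{verbatim}
# Stroboscopic autocorrelator of Eq. (44) in the infinite-temperature state,
# C_L(nT) = Tr[(n_L(nT) - 1/2)(n_L - 1/2)] / D.
nL = np.diag([0.5 if L - 1 in s else -0.5 for s in map(set, basis)])
M = np.eye(D, dtype=complex)        # M = U^n after n drive cycles
C = []
for n in range(500):
    C.append(np.real(np.trace(M.conj().T @ nL @ M @ nL)) / D)
    M = U @ M
print(C[0], C[-1])   # saturation well above zero signals prethermal HSF
\end{verbatim}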
It is to be noted here that for a given drive amplitude, the rate of
thermalization is faster as compared to that reported in [20]. This is to be
attributed to the asymmetric drive protocol (different values of the hopping
amplitude during the two half-cycles) used here. Due to the asymmetric nature
of the protocol, the lowest non-trivial correction to the constrained
Hamiltonian at the special frequency comes from the second-order Floquet
Hamiltonian, $H_{F}^{(2)}$ as compared to $H_{F}^{(3)}$ in [20]. This, in
turn, results in a shorter thermalization timescale.
## IV Other filling fractions
Figure 2: (Color Online) (a) Plot of the ratio ${\mathcal{D}}_{\rm sub-sector}/{\mathcal{D}}_{\rm sector}$ for $H_{F}^{(1)}$ as a function of $L$ for
$N/L=1/3$ showing exponential reduction with $L$ similar to the half-filling
case. (b) Similar plot for $N/L=1/4$. For both plots, $J=1$ and
$V_{1}/(\hbar\omega_{D})=2m$.
In this section, we study the driven fermion chain away from half-filling to
demonstrate the robustness of the fragmentation signature. To this end, we
consider the driven fermion chain at filling fractions $N/L=1/3,1/4$ (where
$N$ is the fermion number and $L$ is the chain length). In what follows, we
shall use the square-pulse protocol given by $V(t)=-(+)V_{1}$ for
$t\leq(>)T/2$ in accordance with Ref. 20.
We begin our study by analyzing the Hilbert space dimension (HSD) of the
largest fragment of the first order Floquet Hamiltonian (Eq. 13) at $N/L=1/3$
and $1/4$. This is shown in Fig. 2 where the ratio of the HSD of the largest
fragment, ${\mathcal{D}}_{{\rm sub-sector}}$, and the total HSD of the
symmetry sector (one-third filled sector in (a) and one-fourth filled sector
in (b)), ${\mathcal{D}}_{{\rm sector}}$, is plotted as a function of $L$ for
$J=1$ and at a special frequency $V_{1}/(\hbar\omega_{D})=2m$, where $m$ is an
integer. We find a clear signature of exponential decay of this ratio as a function of $L$ for both $1/3$ and $1/4$ filling fractions. This indicates the possibility of signatures of strong HSF in the dynamics of the driven chain at these fillings.
To verify this expectation, we compute the entanglement entropy $S(nT)$ as a
function of $n$ and at different drive frequencies. For an ergodic driven
system, $S$ is expected to increase with $n$ and eventually saturate to its
Page value corresponding to the symmetry sector, $S_{p}$, irrespective of the initial state chosen for the dynamics [27]. In contrast, for a chain which
exhibits HSF, $S(nT)$ is expected to saturate to the Page value of the
fragment to which the initial state belongs: $S\to S_{p}^{f}$. Thus the
saturation value of $S$ is lower; also it depends on the initial state from
which the dynamics originates. This allows one to distinguish between
dynamical behavior of a driven chain with and without strong HSF. A plot of
$S(nT)/S_{p}$, shown in Fig. 3(a) for $N/L=1/3$ and Fig. 3(b) for $N/L=1/4$,
clearly shows the distinction between the behavior of $S$ at and away from the
special frequencies. $S(nT)/S_{p}$ saturates, for both fillings, to unity at large $n$ away from the special frequencies ($V_{1}/(\hbar\omega_{D})=1/2$); in contrast, at the special frequency $V_{1}/(\hbar\omega_{D})=2$, it saturates to a lower value which corresponds to $S_{p}^{f}$ of the respective fragments from which the initial states are chosen.
Figure 3: (Color Online) (a) Growth of entanglement entropy from exact
dynamics for $L=18$ and $N=6$, starting from a randomly chosen Fock state. The
result is averaged over 10 such states chosen from the same fragment of the
first order Floquet Hamiltonian. The dimension of the fragment from which the state is chosen is $1980$. The entanglement entropy is scaled by the Page value $S_{p}$
of the symmetry sector for which $N/L=1/3$. At the special frequencies, EE
saturates to a value less than $S_{p}$ but close to $S_{p}^{f}$ for the
fragment from which the initial state was chosen. (b) Same as in (a) but for
$L=20$ and $N=5$ corresponding to $1/4$ filling. The sector dimension of the
fragment from which the initial state is chosen is $1050$. (c) Plot of
$C_{L}(nT)$ as a function of $n$ at the special frequency
$V_{1}/(\hbar\omega_{d})=2$ and away from it $V_{1}/(\hbar\omega_{D})=1/2$ for
$N/L=1/3$ and $L=18$ The initial state is same as in (a). (d) Same as (c) but
for $L=20$ and $N/L=1/4$; the initial state is same as in (b). For all plots
$J=1$.
In addition, we compute the density-density autocorrelation function given by
Eq. 44. Fig. 3(c) and (d) show the behavior of $C_{L}(nT)$ as a function of
$n$ at and away from the special frequency for $N/L=1/3$ and $1/4$
respectively. We find that in both cases, $C_{L}(nT)$ saturates to a finite value at the special frequency and to zero away from it. Thus these plots confirm the existence of strong HSF, similar to the half-filling sector, in these fermionic chains.
Finally, in Fig. 4, we show the plot of $S(nT)/S_{p}$ starting from several
initial Fock states which belong to different sectors with different values of
$S_{p}^{f}$ for $V_{1}/(\hbar\omega_{D})=2$. We find that in each case, both
at $N/L=1/3$ (Fig. 4(a)) and $1/4$ (Fig. 4(b)), $S(nT)/S_{p}<1$ for large $n$;
moreover, $S(nT)\to S_{p}^{f}$ corresponding to the fragment of $H_{F}^{(1)}$
to which the initial state belongs. This clearly demonstrates the signature of HSF at these filling fractions.
Figure 4: (Color Online) Plot of $S(nT)/S_{p}$ as a function of $n$ at
$V_{1}/(\hbar\omega_{D})=2$ starting from Fock states belonging to different
fragments for (a) $L=18$ and $N=6$ and (b) $L=20$ and $N=5$. The numbers label
the Hilbert space dimensions of the fragments and the dashed lines indicate
the corresponding $S_{p}^{f}$. For all plots $J=1$.
## V Density-density out-of-time ordered correlation function
In this section we analyze the out-of-time ordered correlator (OTOC) for the
driven chain. The basic definitions are outlined in Sec. V.1. This is followed
by numerical study of OTOC for a chain with periodic boundary condition (PBC)
and starting from an infinite temperature thermal state in Sec. V.2. Finally
we study the behavior of OTOC for fermion chains with open boundary condition
(OBC) and starting from the ${\mathbb{Z}}_{2}=|0,1,0,1,\ldots\rangle$ Fock
state in Sec. V.3.
### V.1 Preliminary
The study of OTOC serves as an important tool to diagnose the rate of
propagation of local information in a quantum system [29, 30, 31, 32]. Ergodic
systems are known to exhibit ballistic spread of local information accompanied
by a diffusive front. In case of non-ergodic systems, the behavior of
information propagation, as detected using OTOC, ranges from logarithmic
growth in many-body localized systems [33] to alternate scrambling and
unscrambling in certain integrable systems [34]. For fragmented systems, the
scrambling of information is expected to be slow since the Hamiltonian does
not connect states belonging to different fragments; however, its detailed
features, in the presence of a periodic drive, have not been studied earlier.
To probe the rate of scrambling of information in our system, we study the
temporal (in stroboscopic times) and spatial profile of the OTOC
$\displaystyle F(r,nT)$ $\displaystyle=$
$\displaystyle\langle\tilde{n}_{i}(nT)\tilde{n}_{j}(0)\tilde{n}_{i}(nT)\tilde{n}_{j}(0)\rangle,$
(45)
where $\tilde{n}_{i}=2n_{i}-1$ with $n_{i}$ and $n_{j}$ being the number
density operators at sites $i$ and $j$ respectively and $r=|i-j|$ measures the
distance between these two sites. We take the average with respect to both an
infinite-temperature thermal state and the $|{\mathbb{Z}}_{2}\rangle$ state.
Since the operator $\tilde{n}_{i}$ is hermitian and squares to identity, it
can be shown that the function $F(r,nT)$ is related to the squared commutator
$C(r,nT)$ as
$\displaystyle C(r,nT)$ $\displaystyle=$
$\displaystyle\langle[\tilde{n}_{i}(nT),\tilde{n}_{j}]^{\dagger}[\tilde{n}_{i}(nT),\tilde{n}_{j}]\rangle$
(46) $\displaystyle=$ $\displaystyle 2(1-F(r,nT)).$
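For completeness, this relation follows in one line from hermiticity and $\tilde{n}^{2}=1$:
$\left[\tilde{n}_{i}(nT),\tilde{n}_{j}\right]^{\dagger}\left[\tilde{n}_{i}(nT),\tilde{n}_{j}\right]=2-\tilde{n}_{i}(nT)\tilde{n}_{j}\tilde{n}_{i}(nT)\tilde{n}_{j}-\tilde{n}_{j}\tilde{n}_{i}(nT)\tilde{n}_{j}\tilde{n}_{i}(nT);$
the two four-point terms are hermitian conjugates of each other, so their expectation values sum to $2\,{\rm Re}\,F(r,nT)$, and $F$ is real in the states considered here.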
Cast in this form, it can be argued that as the operator $\tilde{n}_{i}$,
initially localized at site $i$, spreads to the site $j$, the value of
$C(r,nT)$ at this site gradually increases from zero and hence the OTOC,
$F(r,nT)$ decreases from $1$. A higher value of $C(r,nT)$ (i.e. lower value of
$F(r,nT)$) at a given instant of time therefore indicates larger spread of the
local operator ($\tilde{n}_{i}$ in the present case).
### V.2 Infinite-temperature initial state
Figure 5: (Color Online) (a) Profile of the OTOC $F(r,nT)$ in an infinite
temperature thermal state for $L=14$ (half-filled sector) with PBC and
$\omega_{D}=V_{1}/\hbar$. The initial operator $\tilde{n}_{i}$ is localized at
the centre of the chain $i=7$. The index $j$ in the x-axis labels the site of
the chain and $r=|j-7|$ for all plots. A ballistic spread of information is
seen with the value of the OTOC rapidly dropping close to zero as is expected
of a thermalizing system. (b) Same as (a) but at the special frequency
$\omega_{1}^{*}=V_{1}/2\hbar$, where the first-order Floquet Hamiltonian
$H_{F}^{(1)}$ is fragmented. Although there is some information scrambling in
this case due to mixing of states within fragments, the value to which
$F(r,nT)$ reaches at similar times is higher as compared to the thermalizing
case. This implies that the extent of information scrambling is less in this
case, compared to (a). (c) Profile of $F(r,nT)$ at site $j=14$ both at and
away from the special frequency $\omega_{1}^{*}$, illustrating the same point.
(d) Variation of the value of $F(r,nT)$ at the farthest site $j=14$ after
$t=nT=15$ with drive amplitude $V_{1}$ and $\omega_{D}=\omega_{1}^{\ast}$.
With decrease in $V_{1}$, higher-order terms in the pertubation series gain
prominence and enhance information scrambling. For all plots,
$J=1,V_{0}=V_{2}=2$. For (a)-(c), $V_{1}=50$. All the results are obtained
using the exact time-evolution operator.
In this section, we study the spread of OTOC in an infinite-temperature
thermal state for a half-filled chain of length $L=14$ with PBC both at the
special frequency ($\hbar\omega_{1}^{*}=V_{1}/2$, shown in Fig 5(b)) and away
from it ($\omega_{D}=2\omega_{1}^{*}$, shown in Fig 5(a)). The operator is
initially localized at the centre of the chain, i.e. $i=L/2$, in both cases. Fig. 5(a) shows that at a generic frequency, the operator spreads
ballistically. Such a spread can be inferred from the linear variation of $r$,
for sites at which $F(r,t)$ has almost similar values, as a function of $t$.
$F(r,nT)$ quickly falls to a value close to zero, implying that $C(r,nT)$
saturates to a value close to $1$. At the special frequency, however, Fig.
5(b) shows that although the local information reaches the farthest site
almost at the same time as in the previous case, the OTOC saturates to a
higher value as compared to its thermalizing counterpart. This is a direct
consequence of the fact that the first order Floquet Hamiltonian
$H_{F}^{(1)}$, which is fragmented, only allows mixing of the states within a
particular fragment. Although the infinite-temperature thermal initial state
(represented by a density matrix) weighs all the states equally, during time
evolution they can only be connected with states belonging to the same
fragment. As a result, they fail to spread out through the whole Hilbert
space. The information is scrambled only due to mixing between states within
individual fragments; this leads to lower scrambling in the prethermal regime
than that due to ergodic evolution away from the special frequency. Fig. 5(c)
illustrates this fact by plotting the value of $F(r,nT)$ for site $j=14$ both
at and away from the special frequency. As the drive amplitude decreases, the
higher order terms in the perturbation series start dominating, allowing
mixing between different fragments. This enhances information scrambling
leading to a decrease in the value of the OTOC. This is shown in Fig. 5(d), which plots the value of the OTOC at the farthest site ($j=14$) at $nT=15$ as a function of $V_{1}$ for $\omega_{D}=\omega_{1}^{\ast}$. This
shows that information scrambling is suppressed beyond a critical $V_{1}$
where signatures of fragmentation can be found over a long pre-thermal
timescale.
### V.3 $\mathbb{Z}_{2}$ State
Figure 6: (Color Online) (a) Profile of the OTOC $F(r,nT)$ in a
$\mathbb{Z}_{2}$ state for $L=14$ with OBC and at a generic frequency $\omega_{D}=V_{1}/\hbar$. The initial operator $\tilde{n}_{i}$ is localized at one of the edges of the chain, $i=1$. The information reaches the other end of the system ballistically, bearing the signature of thermalizing systems. (b)
Same as (a) but at the special frequency $\omega_{1}^{*}=V_{1}/2\hbar$. The
information after reaching the other end starts unscrambling again. This
alternate scrambling and unscrambling of information continues over a short
timescale, dictated by the quasienergy spectrum of the fragmented
$H_{F}^{(1)}$ as explained in the main text. (c) Long-time oscillations in the
profile of the OTOC at $\omega_{1}^{*}$. (d) Plot of $\chi_{\beta\alpha}(nT)$
(defined below Eq. 51 in the main text) as a function of $t=nT$. The color
codes are: $\chi_{28}$ (blue), $\chi_{38}$ (red), $\chi_{48}$ (brown),
$\chi_{58}$ (pink), $\chi_{68}$ (green) and $\chi_{78}$ (cyan). The red dashed
lines mark integer multiples of $2\pi$. The first two times when these phases
are very close integer multiples of $2\pi$, are marked as $t_{1}^{*}$ and
$t_{2}^{*}$ respectively. These correspond to the first two recurrence times
in (b). For all plots, $J=1,V_{0}=V_{2}=2,V_{1}=50$.
In this section, we study the spatial and temporal profile of the OTOC in a
$|{\mathbb{Z}}_{2}\rangle$ state at the special frequency with OBC. Fig. 6(a)
shows that away from the special frequency, starting from one end of the
chain, the information propagates ballistically to the other end, as is
expected for an ergodic system; $F(r,t)$ monotonically decays to a near-zero value at all sites within $nT\leq 10$. In contrast, as shown in Figs. 6(b) and (c), at the special frequency the behavior of $F(r,nT)$ is quite different and shows signatures of fragmentation. In Fig. 6(b), we find that at short time scales $nT\sim 10$, there are initial fast oscillations which lead to
alternate scrambling and unscrambling of information. Such alternate
scrambling and unscrambling of information is reminiscent of the behavior of
OTOC in integrable systems [34]; however, as we show below, the mechanism for
this phenomenon is different in the present case. Furthermore, over longer
time scales $nT\sim$ 100 - 500, we find slow oscillatory behavior as seen in
Fig. 6(c). As discussed below, this is related to tunneling between two near
degenerate states.
Both the above oscillatory features can be related to the fact that at high
drive amplitude and at the special frequencies, the dynamics is mostly
governed by $H_{F}^{(1)}$ at short and intermediate timescales. To understand
the behavior of $F$, we therefore focus on the fragment of $H_{F}^{(1)}$ (with
OBC) to which $|{\mathbb{Z}}_{2}\rangle$ belongs. For $L=14$, there are $8$
states in this fragment namely
$\displaystyle\mathcal{H}=\{|{\mathbb{Z}}_{2}\rangle,|j_{h}=2\rangle,|j_{h}=4\rangle,|j_{h}=6\rangle,|j_{h}=8\rangle,|j_{h}=10\rangle,|j_{h}=12\rangle,|\bar{\mathbb{Z}}_{2}\rangle\}$
(47)
where $|\bar{\mathbb{Z}}_{2}\rangle=|1,0,1,0,\ldots\rangle$, and $|j_{h}\rangle$ is a state with one hole-defect (where a hole-defect implies two adjacent unoccupied sites) at position $j_{h}$ and zero particle defects (i.e. no two adjacent sites are occupied), viz. $|j_{h}=2\rangle=|1,0,0,1,0,1,0,1,0,1,0,1,0,1\rangle$. Note that the constrained hopping introduces dynamics between these eight states, and $H_{F}^{(1)}$ in this subspace is equivalent to a nearest-neighbor hopping model of a linear
$|\bar{\mathbb{Z}}_{2}\rangle$ form the ends of the chain while
$j_{h}=2,4,\ldots,12$ form the sites in between.
The OTOC at a site $j$ will have the structure
$F(r_{1},nT)=\langle\mathbb{Z}_{2}|\tilde{n}_{1}(nT)\tilde{n}_{j}(0)\tilde{n}_{1}(nT)\tilde{n}_{j}(0)|\mathbb{Z}_{2}\rangle$
(48)
where $r_{1}=|j-1|$. Inserting the complete set of states $|m\rangle$ from
this fragment and noting that the operator $\tilde{n}$ is diagonal in the Fock
basis, this expression reads
$F(r_{1},nT)\approx\sum_{m}(-1)^{j}f^{j}_{m}|\langle
m(nT)|\tilde{n}_{1}|\mathbb{Z}_{2}(nT)\rangle|^{2}$ (49)
where $f^{j}_{m}=\langle m|\tilde{n}_{j}|m\rangle$. Expanding
$|\mathbb{Z}_{2}\rangle$ and $|m\rangle$ in the energy eigenstates of
$H_{F}^{(1)}$:
$|\mathbb{Z}_{2}\rangle=\sum_{\alpha}c_{\alpha}|\phi_{\alpha}\rangle$,
$|m\rangle=\sum_{\beta}c_{\beta}^{m}|\phi_{\beta}\rangle$, Eq. 49 yields
$F(r_{1},nT)\approx\sum_{m}(-1)^{j}f^{j}_{m}g_{m}(nT)$ (50)
where
$g_{m}(nT)=\Big{|}\sum_{\alpha,\beta}c_{\beta}^{m*}c_{\alpha}e^{-i\chi_{\beta\alpha}(t)}N^{1}_{\beta\alpha}\Big{|}^{2}$
(51)
with
$N^{1}_{\beta\alpha}=\langle\phi_{\beta}|\tilde{n}_{1}|\phi_{\alpha}\rangle$
being the matrix element of $\tilde{n}_{1}$ between the energy eigenstates and
$\chi_{\beta\alpha}(t)=(\epsilon_{\alpha}-\epsilon_{\beta})nT/\hbar$.
#### V.3.1 Short and Long time Oscillations
In Eq. 50 the spatial dependence on $r_{1}$ or $j$ is factorized out from the
time dependence $nT$. This implies that the time dependence of the OTOC is
site-independent, i.e. the recurrence time at every site is the same and the
recurrence happens when all the phases $\chi_{\beta\alpha}(t)$ are
approximately close to integer multiples of $2\pi$. We arrange the spectrum
$\epsilon_{1}<\epsilon_{2}<\ldots<\epsilon_{8}$. Numerically, we find that the
matrix elements $N^{1}_{\beta\alpha}$ between the states
$|\phi_{\beta}\rangle;\beta=2,3,\ldots,8$ and
$|\phi_{\alpha}\rangle;\alpha=7,8$ are an order of magnitude higher than the
rest of the off-diagonal matrix elements. This is because the states
$\beta=1,2,\ldots,6$ are mostly made of the six single-hole wavefunctions,
while the states $\alpha=7,8$ are mostly made of the states $\mathbb{Z}_{2}$
and $\bar{\mathbb{Z}}_{2}$. Since the last two states have one extra next-
nearest-neighbor interaction compared to the first six,
$\epsilon_{\alpha\beta}\sim V_{2}$, and this energy scale shows up in the fast
oscillations seen over timescales $nT\sim 10$. Thus, in this relatively short
time, the recurrence is predominantly dictated by the phases
$\chi_{\beta\alpha}(t)$ with $\beta=2,3,\ldots,7$ and $\alpha=8$. Fig. 6(d)
plots these phases as a function of $t=nT$. It can be seen that the recurrence occurs when all these phases are close to $2\pi p_{0}$ (where $p_{0}\in\mathbb{Z}$), as shown in Fig. 6(d). The first two of these times are marked with $t_{1}^{*}$ and $t_{2}^{*}$ in Fig. 6(d). These are not exactly periodic because of the
involvement of multiple phases in the dynamics. It is also to be noted from
Fig. 6(d) that the energies $\epsilon_{7,8}$ (which are mostly linear
combinations of $\mathbb{Z}_{2}$ and $\bar{\mathbb{Z}}_{2}$ Fock states) are
so close that for the short timescale involved, the phase $\chi_{87}(t)$
almost remains close to zero; it does not play much role in determining the
short recurrence time. Thus, from the above discussion it is clear that the
recurrences at short timescales owe their existence to two features in the
model. First, the finite next-nearest-neighbor interaction energy $V_{2}$.
Second, the OBC which allows the single-hole states to be included within the
same fragment as to which the states $\mathbb{Z}_{2}$ and
$\bar{\mathbb{Z}}_{2}$ belong (with PBC, the fragment has only the
$\mathbb{Z}_{2}$ and $\bar{\mathbb{Z}}_{2}$ states).
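The spectral structure invoked above can be checked with a minimal numerical sketch of the effective eight-site tight-binding model; here we assume, as a simplification, a uniform effective hopping $-J$ between adjacent fragment states and an extra onsite energy $V_{2}$ on the two end ($\mathbb{Z}_{2}$-like) states.
\begin{verbatim}
# Effective 8-state model for the Z2 fragment (L = 14, OBC): a tight-binding
# chain with uniform hopping -J and extra energy V2 on the two end states.
import numpy as np

J, V2 = 1.0, 2.0
H8 = -J * (np.eye(8, k=1) + np.eye(8, k=-1))
H8[0, 0] = H8[7, 7] = V2
eps = np.linalg.eigvalsh(H8)        # ascending energies eps_1 ... eps_8
print("Omega =", eps[7] - eps[6])   # small splitting of the top doublet
print("eps_7 - eps_6 =", eps[6] - eps[5])   # of order V2
\end{verbatim}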
In Fig. 7(a), we show the comparison between results for $F(r_{1},nT)$
obtained from exact dynamics (solid lines) and the analytical estimate
obtained from $H_{F}^{(1)}$ in Eq. 50 (dashed lines) for some representative
sites $j=1,5,14$. It can be seen that the first two recurrence times at
$t_{1}^{*}=8.07,t_{2}^{*}=16.21$ are well approximated by Eq. 50.
The phase $\chi_{87}(t)$ manifests itself only at longer time scales of
$nT\sim$ 100 - 500. As seen in Fig. 6(c), over this time scale $F(r_{1},nT)$
oscillates from values nearly one to nearly minus one with frequency $\Omega$,
where $\Omega=(\epsilon_{8}-\epsilon_{7})/\hbar$. In Fig. 7(b) and (c) we show
that these oscillations can be explained using Eq. 50 by considering the $8$ states belonging to this fragment of $H_{F}^{(1)}$. In Appendix B, we show that a four-state ansatz can be used to arrive at this result for a high next-nearest-neighbor interaction strength, when the $\mathbb{Z}_{2}$,
$\bar{\mathbb{Z}}_{2}$ states are well separated in energy from the remaining
$|j_{h}\rangle$ states.
Figure 7: (Color Online) (a) Comparison of exact and approximate estimate (Eq.
50) of $F(r_{1},nT)$ for $j=1$ (green solid, brown dashed), $j=5$ (cyan solid,
red dashed) and $j=14$ (pink solid, black dashed) sites. The solid lines
represent results obtained from exact dynamics and dashed lines represent
approximate estimates obtained using the fragment of $H_{F}^{(1)}$. The first
two recurrence times are in good agreement, emphasizing the role of
fragmentation in the scrambling and unscrambling behavior observed in Fig.
6(b). For $j=1$, the agreement between approximate and exact numerical result
is nearly perfect and the green solid and the brown dashed lines are almost on
top of each other. (b), (c) Similar comparison for the long time oscillations
for $j=1,14$, following the color scheme used in (a). (b) Plot of exact
numerical result for $j=1,14$. (c) Plot of approximate estimate for $j=1,14$.
#### V.3.2 Spatial profile of OTOC
The spatial dependence in the profile of the OTOC appears through the term
$h_{jm}=(-1)^{j}f^{j}_{m}$ in Eq. 50. The initial linear increase in Fig. 6(b)
for $nT\leq 5$ can be explained by focusing on this term. The profile of
$h_{jm}$ for odd $j$’s reads
$h_{jm}=\begin{pmatrix}1&-1&-1&-1&-1&-1&-1&-1\\ 1&1&-1&-1&-1&-1&-1&-1\\ 1&1&1&-1&-1&-1&-1&-1\\ 1&1&1&1&-1&-1&-1&-1\\ 1&1&1&1&1&-1&-1&-1\\ 1&1&1&1&1&1&-1&-1\\ 1&1&1&1&1&1&1&-1\end{pmatrix}$ (52)
where the rows label the odd sites $j=1,3,5,\ldots 13$ and the columns label
the different Fock states $|m\rangle$ in this fragment, in the same order as
in Eq. 47. The time dependent functions $g_{m}(nT)$ are positive definite at
all times. As $j$ increases, the number of $g_{m}(nT)$’s having positive
weights increases linearly as is evident from Eq. 52. Thus, the shift of
$F(r_{1},nT)$ from $1$ happens progressively at a later time as $j$ increases.
It is also useful to note that $f^{2k-1}_{m}=-f^{2k}_{m}$ for all $k\text{ and
}m$, so that $h_{2k-1,m}=h_{2k,m}$ and hence at any given instant of time,
$F(2k-1,nT)=F(2k-2,nT)$. Thus the even sites $j$ show the same behavior as the odd sites.
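The sign matrix of Eq. 52 can be reproduced directly from the eight Fock states of Eq. 47; a minimal sketch (Python, with 1-based site labels as in the text):
\begin{verbatim}
# Rebuild h_{jm} = (-1)^j <m| n~_j |m> for odd j from the fragment states.
L = 14
def jh_state(jh):                   # one hole defect on sites (jh, jh+1)
    return tuple(1 if ((i < jh and i % 2 == 1) or (i > jh + 1 and i % 2 == 0))
                 else 0 for i in range(1, L + 1))

frag = [tuple([0, 1] * 7)] + [jh_state(j) for j in range(2, 13, 2)] \
       + [tuple([1, 0] * 7)]        # same ordering as in Eq. 47
for j in range(1, L, 2):            # odd sites j = 1, 3, ..., 13
    print([(-1) ** j * (2 * m[j - 1] - 1) for m in frag])
\end{verbatim}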
## VI Discussion
In this work, we studied the dynamics of a periodically driven Fermi chain and extended the study of prethermal HSF in these systems, undertaken in Ref. 20, in several ways.
First, we have studied the existence of such prethermal HSF beyond half-filling in such chains. We found a prethermal HSF phase for several other filling fractions such as $N/L=1/4$ and $1/3$. This demonstrates the robustness of the prethermal HSF phase in such driven chains.
Second, we provide a derivation of the first and second order Floquet Hamiltonians in such driven systems in an alternative manner. Our derivation brings out the commutator structure of the Floquet Hamiltonian; in particular, we find that the second order term in the Floquet Hamiltonian, $H_{F}^{(2)}$, can be expressed as a commutator of a constrained current operator $\sum_{j}\hat{A}_{j}(c_{j}^{\dagger}c_{j+1}-{\rm h.c.})$ with $H_{F}^{(1)}$. We expect
similar commutation relations to hold for higher order terms in $H_{F}$; this
sheds light on the symmetry content of the higher order terms in the Floquet
Hamiltonian for the cosine drive protocol.
Third, we extend our analysis to experimentally relevant and slightly more
complicated drive protocols. In a typical experiment, involving ultracold
atoms, the interaction strength between fermions and their hopping strength
are both controlled by intensities of the applied lasers. Consequently,
experimentally relevant protocols must allow change of both the hopping
amplitudes and interaction strength. We show that the prethermal HSF is stable
for a large class of such drives and chart out a phase diagram for the special
frequencies at which it occurs.
Finally, we study the behavior of density-density OTOC for such driven
systems. Our study shows that such OTOCs can serve as detectors of such
prethermal HSF in two distinct ways. First, irrespective of the boundary
condition used, the OTOC $F(r,t)$ for a finite fermion chain driven at the
special frequency and starting from a ${\mathbb{Z}}_{2}$ initial state,
exhibits a larger long-time value than when driven away from such frequencies.
In addition, it also exhibits oscillations with very large periodicity at the
special frequencies that have the same origin as the correlation functions
discussed in Ref. 20. In contrast, no such oscillations are found when one is
away from the special frequency; the OTOC monotonically decreases to zero.
Second, for fermion chains with open boundary condition and driven at special
frequencies, we find periodic scrambling and unscrambling of information which
is in sharp contrast to standard behavior of OTOCs in driven ergodic systems.
Such behavior was found earlier for integrable spin chains [34]; however, its origin for systems with prethermal HSF is quite different and can be tied to the localization of the driven system within a group of Fock states with the same dipole number ($n_{d}=0$). For chains with open boundary condition, there
are $O(L)$ such states which govern the dynamics up to a long prethermal time
scale leading to periodic scrambling and unscrambling. This phenomenon is
qualitatively different from the behavior of OTOC away from the special
frequencies where it monotonically decays due to fast spread of the driven
system through the Hilbert space; it is also different for a chain with PBC
with two Fock states (${\mathbb{Z}}_{2}$ and ${\bar{\mathbb{Z}}_{2}}$) in the
$n_{d}=0$ sector where no such unscrambling is found.
Experimental verification of our results would require the realization of an isolated Fermi chain. A possible scenario for this is a $1D$ fermion system with nearest-neighbor hopping and local interactions realized using ultracold fermions in an optical lattice. We propose to drive this with the experimentally relevant protocol discussed in this work; this can be achieved by varying the strength of the lasers used to generate the lattice [24, 25]. The simplest measurement would involve measuring $\langle\hat{n}_{d}\rangle$ as a function of time. We predict that the dynamics of $\langle n_{d}\rangle$ starting from the ${\mathbb{Z}}_{2}$ state for such a chain will be approximately constant (and close to zero) for a long prethermal timescale when the system is driven at the special frequency. This is to be contrasted with the behavior of $\langle n_{d}\rangle$ away from the special frequency, which should exhibit rapid dynamics at short timescales.
In conclusion, we have studied several aspects of prethermal HSF in a driven
Fermi chain. Our results have shown the robustness of this phenomenon by
confirming its existence for different, experimentally relevant, drive
protocols and also when the system is away from half-filling. In addition we
have demonstrated that OTOCs may serve as a marker for such prethermal HSF;
they exhibit periodic scrambling and unscrambling for fermion chains with open
boundary condition driven at the special frequency.
Acknowledgement: SG acknowledges CSIR, India for support through project
09/080(1133)/2019-EMR-I. IP thanks Edouard Boulat for discussions. KS thanks
DST, India for support through SERB project JCB/2021/000030 and Arnab Sen for
discussions.
## Appendix A Computation of $\mathcal{H}_{F}^{(2)}$ for cosine protocol
The second order Floquet Hamiltonian can be broken into two parts
$\mathcal{H}_{F}^{(2)}=\mathcal{H}_{F}^{(2a)}+\mathcal{H}_{F}^{(2b)}$. From
Eq. (26) we get
$\displaystyle\mathcal{H}_{F}^{(2a)}$ $\displaystyle=\frac{-iJ^{2}}{2\hbar
T}\sum_{i,j}\sum_{m,n}B_{m,n}\left[J_{m}(\lambda\hat{A}_{i})c^{\dagger}_{i}c_{i+1}+J_{m}(-\lambda\hat{A}_{i})\right.$
$\displaystyle\times\left.c^{\dagger}_{i+1}c_{i}\,,\,J_{n}(\lambda\hat{A}_{j})c^{\dagger}_{j}c_{j+1}+J_{n}(-\lambda\hat{A}_{j})c^{\dagger}_{j+1}c_{j}\right],$
(53)
where
$B_{m,n}\equiv\int_{0}^{T}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}\,e^{im\omega\tau_{1}}e^{in\omega\tau_{2}}$
(54)
The evaluation of the above integrals yields
$\displaystyle B_{m,n}$
$\displaystyle=\frac{T^{2}}{2}\delta_{m,0}\delta_{n,0}+\frac{T}{im\omega}\delta_{n,0}(1-\delta_{m,0})$
$\displaystyle-\frac{T}{in\omega}\delta_{m,0}(1-\delta_{n,0})+\frac{T}{in\omega}(1-\delta_{n,0})\delta_{m,-n}.$
(55)
Due to the commutator structure of Eq. (53), the first and the fourth terms above do not contribute. The second and the third terms are non-zero and equal. Next, due to the $1/m$ factor in the second term, only odd integers $m$ contribute. We get
$\displaystyle\mathcal{H}_{F}^{(2a)}$
$\displaystyle=-\frac{2J^{2}}{\hbar\omega}\sum_{k=0}^{\infty}\frac{1}{2k+1}\left[\sum_{i}J_{2k+1}(\lambda\hat{A}_{i})\right.$
$\displaystyle\times\left.\left(c^{\dagger}_{i}c_{i+1}-{\rm
h.c.}\right)\,,\,\sum_{j}J_{0}(\lambda\hat{A}_{j})\left(c^{\dagger}_{j}c_{j+1}+{\rm
h.c.}\right)\right].$ (56)
Noting that $\hat{A}_{i}$ can only take values 0, 1, -1, we have the relation
$J_{2k+1}(\lambda\hat{A}_{i})=\hat{A}_{i}J_{2k+1}(\lambda).$
Using the above we get
$\displaystyle\mathcal{H}_{F}^{(2a)}$
$\displaystyle=-\frac{2J^{2}}{\hbar\omega}\mathcal{C}(\lambda)\left[\sum_{i}\hat{A}_{i}\left(c^{\dagger}_{i}c_{i+1}-{\rm
h.c.}\right)\right.\,,\,$
$\displaystyle\left.\sum_{j}J_{0}(\lambda\hat{A}_{j})\left(c^{\dagger}_{j}c_{j+1}+{\rm
h.c.}\right)\right],$ (57)
where
$\mathcal{C}(\lambda)\equiv\sum_{k=0}^{\infty}\frac{J_{2k+1}(\lambda)}{2k+1}.$
For the second term $\mathcal{H}_{F}^{(2b)}$ we note that
$\int_{0}^{T}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}\tilde{\mathcal{H}}_{p}(\tau_{2})=\int_{0}^{T}d\tau_{1}\int_{\tau_{1}}^{T}d\tau_{2}\tilde{\mathcal{H}}_{p}(\tau_{1}).$
Using the above relation and Eq. (III.1) we get
$\displaystyle\mathcal{H}_{F}^{(2b)}$ $\displaystyle=\frac{i}{2\hbar
T}\int_{0}^{T}d\tau(T-2\tau)\left[\tilde{\mathcal{H}}_{p}(\tau)\,,\,\hat{K}\right]$
$\displaystyle=\frac{-iJ}{2\hbar
T}\sum_{i,m}\int_{0}^{T}d\tau(T-2\tau)e^{im\omega\tau}$
$\displaystyle\times\left[J_{m}(\lambda\hat{A}_{i})c^{\dagger}_{i}c_{i+1}+J_{m}(-\lambda\hat{A}_{i})c^{\dagger}_{i+1}c_{i}\,,\,\hat{K}\right]$
For the $\tau$-integral above we use the relation
$\int_{0}^{T}d\tau(T-2\tau)e^{im\omega\tau}=-\frac{2T}{im\omega}(1-\delta_{m,0}).$
The appearance of the factor $1/m$ in the above implies that, again, only the
Bessel functions with odd indices contribute. This gives
$\mathcal{H}_{F}^{(2b)}=\frac{2J\mathcal{C}(\lambda)}{\hbar\omega}\left[\sum_{i}\hat{A}_{i}\left(c^{\dagger}_{i}c_{i+1}-{\rm
h.c.}\right)\,,\,\hat{K}\right].$ (58)
Eqs. (57) and (58) together imply Eq. (30) in the main text.
## Appendix B 4-state ansatz for the long-time OTOC oscillations
Figure 8: (Color Online) Comparison of exact results for OTOC $F(r_{1},nT)$
with that obtained from Eq. 61 and 63, starting from a
$|\mathbb{Z}_{2}\rangle$ state. (a) Green (light-colored) line is obtained
from exact dynamics for $j=1$, (b) Pink (light-colored) line is obtained from
exact dynamics for $j=14$, (c) Brown (dark-colored) is obtained from the 4-state ansatz, given by Eq. 59, for $j=1$. (d) Black (dark-colored) is obtained from the 4-state ansatz for $j=14$. For all the plots, $L=14$, $V_{0}=2$, $V_{2}=6$ and $V_{1}=50$ with $\omega_{D}=V_{1}/2\hbar$.
In this section, we discuss a simplification of Eq. 50 when a high value of
the next-nearest-neighbor interaction $V_{2}$ is considered. In this case, the
$\{|\mathbb{Z}_{2}\rangle,|\bar{\mathbb{Z}}_{2}\rangle\}$ states, which have one extra next-nearest-neighbor pair, are well separated in energy from the other $|j_{h}\rangle$ states. Thus, the extent of hybridization between the $\{|\mathbb{Z}_{2}\rangle,|\bar{\mathbb{Z}}_{2}\rangle\}$ states and the $|j_{h}\rangle$ states is small, and hence the eigenfunctions of $H_{F}^{(1)}$ can be written down in terms of only a few Fock states, as we show below.
We assume that the two highest energy levels $|\phi_{7}\rangle$ and
$|\phi_{8}\rangle$ are mostly symmetric and anti-symmetric combinations of
$\mathbb{Z}_{2}$ and $\bar{\mathbb{Z}}_{2}$ with very small contributions from
the two “nearest” $|j_{h}\rangle$ states, i.e. $|j_{h}=2\rangle$ and
$|j_{h}=12\rangle$. By “nearest”, we refer to states which can be connected to
$\mathbb{Z}_{2},\bar{\mathbb{Z}}_{2}$ by one constrained hop. We also consider
two more states $|\phi_{5,6}\rangle$ which are orthogonal to
$|\phi_{7,8}\rangle$ and have energies $\epsilon_{5,6}$. We assume these last
two states to be nearly degenerate i.e.
$\epsilon_{5}\approx\epsilon_{6}=E_{0}$ and the splitting between the two
highest states $\epsilon_{8}-\epsilon_{7}=\Omega\ll V_{2}$,
$\epsilon_{7}-\epsilon_{6}=V_{2}$. Thus
$\displaystyle|\phi_{8}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\mathcal{C}(|\mathbb{Z}_{2}\rangle-|\bar{\mathbb{Z}}_{2}\rangle)+\frac{1}{\sqrt{2}}\mathcal{S}(|2\rangle-|12\rangle)$
$\displaystyle|\phi_{7}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\mathcal{C}(|\mathbb{Z}_{2}\rangle+|\bar{\mathbb{Z}}_{2}\rangle)+\frac{1}{\sqrt{2}}\mathcal{S}(|2\rangle+|12\rangle)$
$\displaystyle|\phi_{6}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\mathcal{S}(|\mathbb{Z}_{2}\rangle-|\bar{\mathbb{Z}}_{2}\rangle)-\frac{1}{\sqrt{2}}\mathcal{C}(|2\rangle-|12\rangle)$
$\displaystyle|\phi_{5}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\mathcal{S}(|\mathbb{Z}_{2}\rangle+|\bar{\mathbb{Z}}_{2}\rangle)-\frac{1}{\sqrt{2}}\mathcal{C}(|2\rangle+|12\rangle)$
(59)
where $\mathcal{C}=\cos{\theta}$ and $\mathcal{S}=\sin{\theta}$ with $\theta$
being a phenomenological parameter to be determined from diagonalization.
Inverting these relations, the time evolved states read
$\displaystyle|\mathbb{Z}_{2}(t)\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left[\mathcal{C}e^{-iV_{2}t}(|\phi_{7}\rangle+|\phi_{8}\rangle
e^{-i\Omega t})\right.$
$\displaystyle\left.+\mathcal{S}(|\phi_{5}\rangle+|\phi_{6}\rangle)\right]$
$\displaystyle|\bar{\mathbb{Z}}_{2}(t)\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left[\mathcal{C}e^{-iV_{2}t}(|\phi_{7}\rangle\right.$
$\displaystyle\left.-|\phi_{8}\rangle e^{-i\Omega
t})+\mathcal{S}(|\phi_{5}\rangle-|\phi_{6}\rangle)\right]$
$\displaystyle|2(t)\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left[\mathcal{S}e^{-iV_{2}t}(|\phi_{7}\rangle\right.$
$\displaystyle\left.+|\phi_{8}\rangle e^{-i\Omega
t})-\mathcal{C}(|\phi_{5}\rangle+|\phi_{6}\rangle)\right]$
$\displaystyle|12(t)\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\left[\mathcal{S}e^{-iV_{2}t}(|\phi_{7}\rangle\right.$
(60) $\displaystyle\left.-|\phi_{8}\rangle e^{-i\Omega
t})-\mathcal{C}(|\phi_{5}\rangle-|\phi_{6}\rangle)\right]$
where we have set the reference energy $E_{0}=0$. Using these 4 states, Eq. 49
for sites $j=1,14$ can be simplified as
$\displaystyle F(0,nT)=f_{1}(nT)-f_{2}(nT)$ $\displaystyle
F(13,nT)=f_{1}(nT)+f_{2}(nT)$ (61)
where
$\displaystyle f_{1}(nT)$ $\displaystyle=$
$\displaystyle|\langle\mathbb{Z}_{2}(nT)|\tilde{n}_{1}|\mathbb{Z}_{2}(nT)\rangle|^{2}-|\langle\bar{\mathbb{Z}}_{2}(nT)|\tilde{n}_{1}|\mathbb{Z}_{2}(nT)\rangle|^{2}$
$\displaystyle f_{2}(nT)$ $\displaystyle=$ $\displaystyle|\langle
2(nT)|\tilde{n}_{1}|\mathbb{Z}_{2}(nT)\rangle|^{2}+|\langle
12(nT)|\tilde{n}_{1}|\mathbb{Z}_{2}(nT)\rangle|^{2}$
Using the time-evolved states in Eq. 60, we obtain
$\displaystyle f_{1}(t)$ $\displaystyle=$
$\displaystyle\mathcal{C}^{8}\cos{2\Omega
t}-2\mathcal{C}^{6}\mathcal{S}^{2}\sin{\Omega
t}\big{(}\sin{(V_{2}+\Omega)t}-\sin{V_{2}t}\big{)}+2\mathcal{C}^{4}\mathcal{S}^{4}\left[2\cos{\Omega
t}-1+2\big{(}1-\cos{(V_{2}+\Omega)t}-\cos{V_{2}t}\big{)}^{2}\right]$
$\displaystyle-$ $\displaystyle
4\mathcal{S}^{2}\mathcal{C}^{2}\left(\mathcal{S}^{4}+\mathcal{C}^{4}\cos{\Omega
t}\right)\left(1-\cos{(V_{2}+\Omega)t}-\cos{V_{2}t}\right)+\mathcal{S}^{8}$
$\displaystyle f_{2}(t)$ $\displaystyle=$
$\displaystyle\mathcal{S}^{2}\mathcal{C}^{2}\left[\mathcal{S}^{4}\big{(}\cos{(V_{2}+\Omega)t}-\cos{V_{2}t}\big{)}^{2}+\left(\mathcal{C}^{2}\sin{\Omega
t}+\mathcal{S}^{2}(\sin{(V_{2}+\Omega)t}-\sin{V_{2}t})\right)^{2}+\left(\sin{(V_{2}+\Omega)t}+\sin{V_{2}t}\right)^{2}\right]$
(63) $\displaystyle+$
$\displaystyle\mathcal{S}^{2}\mathcal{C}^{2}\left[\mathcal{S}^{2}-\mathcal{C}^{2}\cos{\Omega
t}+(2\mathcal{C}^{2}-1)\left(\cos{(V_{2}+\Omega)t}+\cos{V_{2}t}-1\right)\right]^{2}$
For $V_{2}\gg J$, $\theta\ll 1$, retaining terms up to quadratic order in $\theta$ yields
$\displaystyle F(0,t)$ $\displaystyle=$
$\displaystyle\left(1-4\theta^{2}\right)\cos{2\Omega
t}+6\theta^{2}-2\theta^{2}\cos{\Omega t}$ $\displaystyle+$ $\displaystyle
4\theta^{2}\cos{V_{2}t}(\cos{\Omega t}-1)$ $\displaystyle F(13,t)$
$\displaystyle=$ $\displaystyle\left(1-4\theta^{2}\right)\cos{2\Omega
t}-6\theta^{2}-6\theta^{2}\cos{\Omega t}$ (64) $\displaystyle+$ $\displaystyle
4\theta^{2}\cos{V_{2}t}(3\cos{\Omega t}+1)$
We compare in Fig. 8 the exact results and those obtained from Eqs. 61 and 63 for sites $j=1$ and $j=14$, where we find good agreement between the two. The
parameters chosen are $V_{0}=2,V_{2}=6,V_{1}=50$ and
$\omega_{D}=V_{1}/2\hbar$.
## Appendix C OTOC in $\mathbb{Z}_{2}$ state with PBC
Figure 9: (Color Online) Profile of the OTOC $F(r,nT)$ in a $\mathbb{Z}_{2}$
state for $L=14$ with PBC and $\omega_{D}=V_{1}/2\hbar$. The initial operator
is localized at $i=1$. This too shows alternate scrambling and unscrambling
with a period $\tau\sim 300$. The parameters are $V_{0}=10V_{2}=2$, $V_{1}=20$.
In this appendix, we consider the evolution of the profile of the OTOC
starting from a $|\mathbb{Z}_{2}\rangle$ state with PBC at the special
frequency. Fig. 9 shows that in this case too, there is alternate scrambling
and unscrambling for $nT\leq 1200$ and $V_{0}=10V_{2}=2,V_{1}=20$. In the
following we show that these oscillations can be interpreted as tunneling back
and forth between the states $\mathbb{Z}_{2}$ and $\bar{\mathbb{Z}}_{2}$ when
the system is close to the fragmented limit. Such tunneling events were
reported in Ref. [20]. This can be understood by noting that for PBC, the
$|\mathbb{Z}_{2}\rangle$ and $|\bar{\mathbb{Z}}_{2}\rangle$ states are
degenerate frozen states for $H_{F}^{(1)}$. This degeneracy is lifted by
higher-order hopping terms and for exact $H_{F}$, there are two eigenstates
which are symmetric and anti-symmetric combinations of the
$|\mathbb{Z}_{2}\rangle$ and $|\bar{\mathbb{Z}}_{2}\rangle$ states, viz.
$\chi_{+}=\frac{1}{\sqrt{2}}(|\mathbb{Z}_{2}\rangle+|\bar{\mathbb{Z}}_{2}\rangle),\quad\chi_{-}=\frac{1}{\sqrt{2}}(|\mathbb{Z}_{2}\rangle-|\bar{\mathbb{Z}}_{2}\rangle).$
The energy splitting between these two states is given by
$\Omega=\epsilon_{-}-\epsilon_{+}$. This implies the time evolutions
$\displaystyle|\mathbb{Z}_{2}(t)\rangle$
$\displaystyle=e^{i\epsilon_{-}t}\left[(e^{i\Omega
t}+1)|\mathbb{Z}_{2}\rangle+(e^{i\Omega
t}-1)|\bar{\mathbb{Z}}_{2}\rangle\right]/2,$
$\displaystyle|\bar{\mathbb{Z}}_{2}(t)\rangle$
$\displaystyle=e^{i\epsilon_{-}t}\left[(e^{i\Omega
t}-1)|\mathbb{Z}_{2}\rangle+(e^{i\Omega
t}+1)|\bar{\mathbb{Z}}_{2}\rangle\right]/2.$ (65)
Inserting an approximate complete set comprising the states $\mathbb{Z}_{2}$ and $\bar{\mathbb{Z}}_{2}$ in Eq. 48 of the main text, we obtain
$\displaystyle
F(r_{1},t)\approx\left|\langle\mathbb{Z}_{2}|\tilde{n}_{1}(t)|\mathbb{Z}_{2}\rangle\right|^{2}-\left|\langle\mathbb{Z}_{2}|\tilde{n}_{1}(t)|\bar{\mathbb{Z}}_{2}\rangle\right|^{2}$
$\displaystyle=\left|\langle\mathbb{Z}_{2}(t)|\tilde{n}_{1}|\mathbb{Z}_{2}(t)\rangle\right|^{2}-\left|\langle\mathbb{Z}_{2}(t)|\tilde{n}_{1}|\bar{\mathbb{Z}}_{2}(t)\rangle\right|^{2},$
(66)
where the last line follows from passing from the Heisenberg to the Schrödinger picture. Note that the above equation already captures an important aspect of Fig. 9, namely that $F(r_{1},t)$ is mostly independent of $r_{1}$ at this timescale.
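Spelling out the intermediate step: using $\tilde{n}_{1}|\mathbb{Z}_{2}\rangle=-|\mathbb{Z}_{2}\rangle$, $\tilde{n}_{1}|\bar{\mathbb{Z}}_{2}\rangle=+|\bar{\mathbb{Z}}_{2}\rangle$ and Eq. 65, the two matrix elements entering Eq. 66 are
$\langle\mathbb{Z}_{2}(t)|\tilde{n}_{1}|\mathbb{Z}_{2}(t)\rangle=-\cos\Omega t,\qquad\langle\mathbb{Z}_{2}(t)|\tilde{n}_{1}|\bar{\mathbb{Z}}_{2}(t)\rangle=-i\sin\Omega t.$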
Combining these, we get
$F(r_{1},t)\approx\cos(2\Omega t).$ (67)
Thus, at stroboscopic times $t=nT$, where $n\Omega T$ is close to an integer multiple of $\pi$, the OTOC is close to one, while when $n\Omega T$ is close to a half-integer multiple of $\pi$, the OTOC is close to minus one.
## References
* [1] A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys. 83, 863 (2011).
* [2] L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, Adv. Phys. 65, 239 (2016).
* [3] M. Bukov, L. D’Alessio, and A. Polkovnikov, Adv. Phys. 64, 139 (2014).
* [4] J. M. Deutsch, Phys. Rev. A 43, 2046 (1991); M. Srednicki, Phys. Rev. E 50, 888 (1994); M. Srednicki, J. Phys. A 32, 1163 (1999).
* [5] M. Rigol, V. Dunjko, M. Olshanii, Nature 452, 854 (2008); P. Reimann, Phys. Rev. Lett. 101, 190403 (2008).
* [6] L. D’Alessio and M. Rigol, Phys. Rev. X 4, 041048 (2014).
* [7] D. M. Basko, I. L. Aleiner, and B. L. Altshuler, Ann. Phys. 321, 1126 (2006).
* [8] R. Nandkishore and D. Huse, Ann. Rev. Cond. Mat. 6, 15 (2015).
* [9] T. Kohlert, S. Scherg, X. Li, H. P. Lüschen, S. Das Sarma, I. Bloch, and M. Aidelsburger, Phys. Rev. Lett. 122, 170403 (2019).
* [10] A. Chandran, T. Iadecola, V. Khemani, R. Moessner, Annual Review of Condensed Matter Physics 14, 443 (2023).
* [11] C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, Nat. Phys. 14, 745 (2018); S. Moudgalya, N. Regnault, and B. A. Bernevig, Phys. Rev. B 98, 235156 (2018).
* [12] W. W. Ho, S. Choi, H. Pichler, and M. D. Lukin, Phys. Rev. Lett. 122, 040603 (2019); N. Shiraishi, J. Stat. Mech. 08313 (2019).
* [13] S. Choi, C. J. Turner, H. Pichler, W. W. Ho, A. A. Michailidis, Z. Papić, M. Serbyn, M. D. Lukin, and D. A. Abanin, Phys. Rev. Lett. 122, 220603 (2019); T. Iadecola, M. Schecter, and S. Xu, Phys. Rev. B 100, 184312 (2019).
* [14] P. A. McClarty, M. Haque, A. Sen and J. Richter, Phys. Rev. B 102, 224303(2020); D. Banerjee and A. Sen Phys. Rev. Lett. 126, 220601(2021); S. Biswas, D. Banerjee, and A. Sen, SciPost Phys. 12, 148 (2022).
* [15] V. Khemani, M. Hermele and R. Nandkishore, Phys. Rev. B 101, 174204 (2020); P. Sala, T. Rakovszky, R. Verresen, M. Knap and F. Pollmann, Phys. Rev. X 10, 011047 (2020).
* [16] T. Rakovszky, P. Sala, R. Verresen, M. Knap, and F. Pollmann, Phys. Rev. B 101, 125126 (2020); Z.-C. Yang, F. Liu, A. V. Gorshkov, and T. Iadecola, Phys. Rev. Lett. 124, 207602 (2020); G. De Tomasi, D. Hetterich, P. Sala, and F. Pollmann, Phys. Rev. B 100, 214313 (2019); P. Frey, L. Hackl, and S. Rachel, arXiv:2209.11777 (unpublished); D. Vu, K. Huang, X. Li, and S. Das Sarma, Phys. Rev. Lett. 128, 146601 (2022).
* [17] S. Moudgalya and O. I. Motrunich, Phys. Rev. X 12, 011050 (2022); D. T. Stephen, O. Hart, and R. M. Nandkishore, arXiv:2209.03966 (unpublished); D. Hahn, P. A. McClarty, D. J. Luitz, SciPost Phys. 11, 074 (2021); N. Regnault and B. A. Bernevig, arXiv:2210.08019 (unpublished); T. Kohlert, S. Scherg, P. Sala, F. Pollmann, B. H. Madhusudhana, I. Bloch, and M. Aidelsburger, arXiv:2106.15586 (unpublished).
* [18] B. Mukherjee, D. Banerjee, K. Sengupta, and A. Sen, Phys. Rev. B 104, 155117 (2021); P. Brighi, M. Ljubotina, and M. Serbyn, arXiv:2210.15607 (unpublished).
* [19] A. Chattopadhyay, B. Mukherjee, K. Sengupta, and A. Sen, SciPost Phys. 14, 146 (2023); J Lehmann, P Sala, F Pollmann, and T Rakovszky, SciPost Phys. 14, 140 (2023); Y. H. Kwan, P. H. Wilhelm, S. Biswas, and S. A. Parameswaran, arXiv:2304.02669 (unpublished).
* [20] S. Ghosh, I. Paul, and K. Sengupta, Phys. Rev. Lett. 130, 120401 (2023).
* [21] A. Sen, D. Sen, and K. Sengupta, J. Phys. Condens. Matter 33, 443003 (2021).
* [22] A. Soori and D. Sen, Phys. Rev. B 82, 115432 (2010).
* [23] T. Bilitewski and N. R. Cooper, Phys. Rev. A 91, 063611 (2015).
* [24] I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
* [25] For a recent review, see J. Dobrzyniecki and T. Sowinski, Adv. Quantum Technol. 3, 2000010 (2020).
* [26] R. Ghosh, D. Das and K. Sengupta, JHEP 05, 172 (2021).
* [27] D. N. Page, Phys. Rev. Lett. 71, 1291 (1993); L. Vidmar and M. Rigol, Phys. Rev. Lett. 119, 220603 (2017).
* [28] P. Mazur, Physica 43, 533 (1969); J. Sirker, SciPost Phys. Lect. Notes 17, 1 (2020).
* [29] A. I. Larkin and Y. N. Ovchinnikov, Soviet J. Exp. Theor. Phys. 28, 1200 (1969); I. L. Aleiner, L. Faoro, and L. B. Ioffe, Ann. Phys. (N. Y.) 375, 378 (2016).
* [30] A. Nahum, S. Vijay, and J. Haah, Phys. Rev. X 8, 021014 (2018); C. W. von Keyserlingk, T. Rakovszky, F. Pollmann, and S. L. Sondhi, Phys. Rev. X 8, 021013 (2018); S. Xu and B. Swingle, Phys. Rev. X 9, 031048 (2019).
* [31] M. Garttner, J. G. Bohnet, A. Safavi-Naini, M. L. Wall, J. J. Bollinger, and A. M. Rey, Nat. Phys. 13, 781 (2017).
* [32] For a recent review, see I. García-Mata, R. A. Jalabert, and D. A. Wisniacki, Scholarpedia 18, 55237 (2023).
* [33] Y. Chen, arXiv:1608.02765 (unpublished); R. Fan, P. Zhang, H. Shen, and H. Zhai, Science Bulletin 62, 707 (2017).
* [34] S. Sur and D. Sen, arXiv:2210.15302 (unpublished); ibid., arXiv:2310.12226 (unpublished).
# Steps Minimize Dissipation in Rapidly Driven Stochastic Systems
Steven Blaber <EMAIL_ADDRESS> Dept. of Physics, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
Miranda D. Louwerse, Dept. of Chemistry, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
David A. Sivak <EMAIL_ADDRESS> Dept. of Physics, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
###### Abstract
Micro- and nano-scale systems driven by rapid changes in control parameters
(control protocols) dissipate significant energy. In the fast-protocol limit,
we find that protocols that minimize dissipation at fixed duration are
universally given by a two-step process, jumping to and from a point that
balances jump size with fast relaxation. Jump protocols could be exploited by
molecular machines or thermodynamic computing to improve energetic efficiency,
and implemented in nonequilibrium free-energy estimation to improve accuracy.
The birth of thermodynamics as a modern science can be traced to Sadi Carnot’s
study of the design principles for energetically efficient heat engines
described in _Reflections on the Motive Power of Fire_ Carnot et al. (1960).
In classical thermodynamics, minimum-dissipation protocols are important in
the design of macroscopic heat engines describing, for example, adiabatic (no
heat loss) and quasistatic (infinitely slow) compression of gas by a piston.
Nearly 200 years later, the field of stochastic thermodynamics Jarzynski
(2011); Seifert (2012) similarly studies the design principles governing the
ability to dynamically vary control parameters and perform work at minimum
energetic cost (minimum dissipation), but now in micro- and nano-scale
fluctuating systems. Minimum-dissipation protocols inform our understanding of
the design principles of biological molecular machines Brown and Sivak (2017,
2019) and are of practical use to single-molecule experiments Tafoya et al.
(2019), free-energy estimation Shenfeld et al. (2009); Geiger and Dellago
(2010); Blaber and Sivak (2020a); Chodera et al. (2011); Minh and Chodera
(2009), and thermodynamic computing Conte et al. (2019); Proesmans et al.
(2020).
In contrast to macroscopic systems that are well described by averages of
thermodynamic quantities, the stochastic fluctuations in microscopic systems
are large relative to the mean and cannot be ignored. The work done on a
stochastic system by a control protocol is a fluctuating quantity that depends
on the entire protocol history, making it particularly difficult to optimize.
General optimization requires minimizing over all possible paths through
control-parameter space, which cannot be solved in general Schmiedl and
Seifert (2007). Despite advances relating optimal-control to optimal-transport
theory, even numerical optimizations are still limited to relatively simple
systems Aurell et al. (2011).
Although general solutions are intractable, we have gained considerable
insight into minimum-dissipation protocols by considering simple systems. For
example, Schmiedl and Seifert Schmiedl and Seifert (2007) showed that for a
Brownian particle diffusing in a harmonic potential with time-dependent
minimum or stiffness, minimum-dissipation protocols exhibit jump
discontinuities. It was posited that jumps in control parameters are a general
feature of minimum-dissipation protocols, and they have since been observed in
a number of different systems Gomez-Marin et al. (2008); Then and Engel
(2008); Esposito et al. (2010).
More general insight can be gained by approximating the mean work in relevant
limits. For slow protocols, linear-response theory yields an approximation for
the mean work, from which the approximate minimum-dissipation protocol can be
calculated Sivak and Crooks (2012). Despite its success, the linear-response
formalism relies on near-equilibrium approximations that break down for fast
protocols, miss key features of the exact minimum-dissipation protocol (e.g.,
jumps in control parameters), and for short duration can perform worse than
naive (constant-velocity) protocols Blaber and Sivak (2020b).
While minimum-dissipation protocols for slowly driven systems are relatively
well understood, comparatively little is known about rapidly driven systems.
In this work we focus on fast protocols and find a universal design principle:
the minimum-dissipation protocol consists of jumps at the beginning and end of
the protocol, spending the entire duration at the control-parameter value that
optimally balances the _initial force-relaxation rate_ (IFRR) (7b) with the
jump size (13). Our results are physically intuitive, apply to a wide range of
physical systems, and generalize easily to multidimensional control. To
illustrate this, we calculate the minimum-dissipation protocols in a diverse
set of systems described by Fokker-Planck or master-equation dynamics with
single- (Fig. 1) or multi-dimensional control (Fig. 4). Combining our results
with known minimum-dissipation protocols in the slow limit Sivak and Crooks
(2012), we demonstrate that a simple interpolation scheme produces protocols
that reduce dissipation at all speeds (Fig. 3).
_Derivation_.—Consider a thermodynamic system described by dynamics of the
form
$\displaystyle\frac{\partial p_{\Lambda}(x,t)}{\partial
t}=L[x,\boldsymbol{\lambda}(t)]\,p_{\Lambda}(x,t)\ ,$ (1)
where $p_{\Lambda}(x,t)$ is the probability distribution over microstates $x$
at time $t$ given the control protocol $\Lambda$, and
$L[x,\boldsymbol{\lambda}(t)]$ is the operator describing the system’s time
evolution. $L$ is the Liouvillian for Hamiltonian, drift/diffusion operator
for Fokker-Planck, and the transition-rate matrix for master-equation
dynamics. The system is in contact with a heat bath at temperature $T$ such
that the equilibrium probability distribution over microstates $x$ at fixed
control parameters $\boldsymbol{\lambda}$ is
$\pi(x|\boldsymbol{\lambda})\equiv\exp\\{\beta[F(\boldsymbol{\lambda})-U(x,\boldsymbol{\lambda})]\\}$,
for internal energy $U(x,\boldsymbol{\lambda})$ and free energy
$F(\boldsymbol{\lambda})\equiv-k_{\rm B}T\,\ln\sum_{x}\exp[-\beta
U(x,\boldsymbol{\lambda})]$, where $\beta\equiv(k_{\rm B}T)^{-1}$ for Boltzmann's
constant $k_{\rm B}$. The average excess work $W_{\rm ex}\equiv W-\Delta F$ by
an external agent changing control parameters $\boldsymbol{\lambda}$ according
to protocol $\Lambda$ is
$\displaystyle\langle W_{\rm ex}\rangle_{\Lambda}=-\int_{0}^{\Delta
t}\mathrm{d}t\,\frac{\mathrm{d}\boldsymbol{\lambda}^{\rm
T}}{\mathrm{d}t}\langle\delta{\bf f}(t)\rangle_{\Lambda}\ ,$ (2)
where a bold symbol denotes a column vector and superscript ${\rm T}$ the
transpose. ${\bf f}\equiv-\partial U/\partial\boldsymbol{\lambda}$ are the
forces conjugate to the control parameters, and $\delta{\bf f}\equiv{\bf
f}-\langle{\bf f}\rangle_{\rm eq}$ the deviations from the equilibrium
averages. Angle brackets $\langle\cdots\rangle_{\Lambda}$ denote a
nonequilibrium ensemble average over the control-parameter protocol $\Lambda$.
Here we hold fixed the initial ($\boldsymbol{\lambda}_{\rm i}$) and final
($\boldsymbol{\lambda}_{\rm f}$) control parameters, consistent with
nonequilibrium free-energy estimation Ritort et al. (2002); Gore et al.
(2003); Shenfeld et al. (2009); Geiger and Dellago (2010); Kim et al. (2012);
Blaber and Sivak (2020b); Schindler et al. (2020); Aldeghi et al. (2018); Kuhn
et al. (2017); Ciordia et al. (2016); Wang et al. (2015); Gapsys et al.
(2015); Chodera et al. (2011); Minh and Chodera (2009) but distinct from
optimizations that constrain the initial and final probability distributions
Proesmans et al. (2020); Zhang (2020).
If the total duration $\Delta t$ is short compared to the system’s natural
relaxation time $\tau$ (a fast protocol), expanding the probability
distribution in $\Delta t/\tau$ around an initial equilibrium distribution
gives
$\displaystyle p_{\Lambda}(x,s)=\pi(x|\boldsymbol{\lambda}_{\rm
i})+p^{1}_{\Lambda}(x,s)\frac{\Delta
t}{\tau}+\mathcal{O}\left[\left(\frac{\Delta t}{\tau}\right)^{2}\right]\ ,$
(3)
for $s\equiv t/\Delta t$ and first-order coefficient $p^{1}_{\Lambda}(x,s)$.
Plugging (3) up to $\mathcal{O}(\Delta t/\tau)$ into (1) gives
$\displaystyle\frac{\partial p^{1}_{\Lambda}(x,s)}{\partial
s}\approx\mathcal{L}[x,\boldsymbol{\lambda}(s)]\,\pi(x|\boldsymbol{\lambda}_{\rm
i})\ ,$ (4)
with $\mathcal{L}\equiv\tau L$ the dimensionless time-evolution operator.
Solving for $p^{1}_{\Lambda}(x,s)$ and substituting into (3) yields
$\displaystyle p_{\Lambda}(x,s)\approx\pi(x|\boldsymbol{\lambda}_{\rm
i})+\frac{\Delta
t}{\tau}\int_{0}^{s}\mathrm{d}s^{\prime}\mathcal{L}[x,\boldsymbol{\lambda}(s^{\prime})]\,\pi(x|\boldsymbol{\lambda}_{\rm
i})\ .$ (5)
Multiplying by conjugate forces ${\bf f}$, integrating over microstates $x$,
and changing the time variable back to $t$ gives
$\displaystyle\langle{\bf f}(t)\rangle_{\Lambda}\approx\langle{\bf
f}\rangle_{\boldsymbol{\lambda}_{\rm
i}}+\int_{0}^{t}\mathrm{d}t^{\prime}\,{\bf R}_{\boldsymbol{\lambda}_{\rm
i}}[\boldsymbol{\lambda}(t^{\prime})]\ ,$ (6)
for the _initial force-relaxation rate_ (IFRR)
$\displaystyle{\bf R}_{\boldsymbol{\lambda}_{\rm i}}[\boldsymbol{\lambda}(t)]$
$\displaystyle\equiv\int\mathrm{d}x\ {\bf
f}(x)\,L[x,\boldsymbol{\lambda}(t)]\,\pi(x|\boldsymbol{\lambda}_{\rm i})$ (7a)
$\displaystyle=\frac{\mathrm{d}\langle{\bf
f}\rangle_{\boldsymbol{\lambda}_{\rm
i}}}{\mathrm{d}t}\bigg{|}_{\boldsymbol{\lambda}(t)}\ ,$ (7b)
the rate of change of the conjugate forces at the current control-parameter
values (averaged over the initial equilibrium distribution).
Within this approximation, the average excess work is
$\displaystyle\langle W_{\rm ex}\rangle_{\Lambda}\approx\langle W_{\rm
ex}\rangle_{\boldsymbol{\lambda}_{\rm i}}-\int_{0}^{\Delta
t}\mathrm{d}t\,\frac{\mathrm{d}\boldsymbol{\lambda}^{\rm
T}}{\mathrm{d}t}\int_{0}^{t}\mathrm{d}t^{\prime}\,{\bf
R}_{\boldsymbol{\lambda}_{\rm i}}[\boldsymbol{\lambda}(t^{\prime})]\ .$ (8)
The first RHS term is the excess work during an instantaneous jump between the
initial and final control-parameter values, which equals the _relative
entropy_ $k_{\rm B}TD(\pi_{\rm i}||\pi_{\rm f})\equiv k_{\rm
B}T\int\mathrm{d}x~{}\pi_{\rm i}\ln[\pi_{\rm i}/\pi_{\rm f}]$ between the
initial $\pi_{\rm i}\equiv\pi(x|\boldsymbol{\lambda}_{\rm i})$ and final
$\pi_{\rm f}\equiv\pi(x|\boldsymbol{\lambda}_{\rm f})$ equilibrium probability
distributions Large and Sivak (2019). Integrating (8) by parts gives our main
theoretical result: for sufficiently short duration, the excess work is
$\displaystyle\langle W_{\rm ex}\rangle_{\Lambda}\approx k_{\rm B}TD(\pi_{\rm
i}||\pi_{\rm f})-\int_{0}^{\Delta t}\mathrm{d}t\,{\bf
R}_{\boldsymbol{\lambda}_{\rm i}}^{\rm
T}[\boldsymbol{\lambda}(t)]\,[\boldsymbol{\lambda}_{{\rm
f}}-\boldsymbol{\lambda}(t)]\ .$ (9)
The second RHS term is the first-order correction in $\Delta t$, an
approximation for the saved work $W_{\rm save}\equiv k_{\rm B}TD(\pi_{\rm
i}||\pi_{\rm f})-W_{\rm ex}$ compared to an instantaneous protocol.
_Initial Force-Relaxation Rate_.—The IFRR can be intuitively understood by
considering one-dimensional exponential relaxation. For a discrete jump from
initial control-parameter value $\lambda_{\rm i}$ to an intermediate value
$\lambda$, an exponentially relaxing mean conjugate force obeys
$\displaystyle\langle f(t)\rangle_{\Lambda}=\langle f\rangle_{\lambda}+\left(\langle f\rangle_{\lambda_{\rm i}}-\langle f\rangle_{\lambda}\right)e^{-t/\tau(\lambda)}\ ,$ (10)
where $\tau(\lambda)$ is the relaxation time of the conjugate force. The IFRR
is the initial slope of the mean conjugate force as it relaxes towards
equilibrium (7b), which for exponential relaxation is
$\displaystyle{R}_{{\lambda}_{\rm i}}(\lambda)=\frac{\langle f\rangle_{\lambda}-\langle f\rangle_{\lambda_{\rm i}}}{\tau(\lambda)}\ .$ (11)
Under simple exponential relaxation, $\tau(\lambda)$ is the same relaxation
time defined in Ref. Sivak and Crooks, 2012 for slow protocols, thereby
connecting short- and long-duration control.
For a small control-parameter jump $\lambda-\lambda_{\rm i}$, static linear-response theory, $\langle f\rangle_{\lambda}-\langle f\rangle_{\lambda_{\rm i}}\approx\beta(\lambda-\lambda_{\rm i})\langle\delta f^{2}\rangle_{\lambda_{\rm i}}$, implies that the IFRR further simplifies to
$\displaystyle{R}_{{\lambda}_{\rm i}}(\lambda)\approx\frac{\beta\langle\delta f^{2}\rangle_{\lambda_{\rm i}}(\lambda-\lambda_{\rm i})}{\tau(\lambda)}\ .$ (12)
The IFRR vanishes at the initial control-parameter value and increases with
larger control-parameter jumps, which drive the system further from equilibrium.
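As a concrete check (our own worked example, not part of the original analysis), consider the breathing harmonic trap $U(x,\lambda)=\lambda x^{2}/2$ under overdamped dynamics with unit friction, for which $L[x,\lambda]\,p=\partial_{x}(\lambda x\,p)+k_{\rm B}T\,\partial_{x}^{2}p$ and $f=-x^{2}/2$. Evaluating (7a) by integration by parts gives
$\displaystyle{R}_{{\lambda}_{\rm i}}(\lambda)=\lambda\langle x^{2}\rangle_{\lambda_{\rm i}}-k_{\rm B}T=k_{\rm B}T\left(\frac{\lambda}{\lambda_{\rm i}}-1\right)\ ,$
which vanishes at $\lambda=\lambda_{\rm i}$ and, using $\langle\delta f^{2}\rangle_{\lambda_{\rm i}}=(k_{\rm B}T)^{2}/(2\lambda_{\rm i}^{2})$ and $\tau(\lambda)=1/(2\lambda)$, agrees with (12) to first order in $\lambda-\lambda_{\rm i}$.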
_Minimum-dissipation protocols_.—Equation (9) allows for relatively
straightforward optimization to determine the _short-time efficient protocol_
(STEP), satisfying the Euler-Lagrange equation
$\displaystyle\frac{\partial}{\partial\boldsymbol{\lambda}}\left[{\bf
R}_{\boldsymbol{\lambda}_{\rm i}}^{\rm
T}\left(\boldsymbol{\lambda}\right)\left(\boldsymbol{\lambda}_{{\rm
f}}-\boldsymbol{\lambda}^{\rm
STEP}\right)\right]\bigg{|}_{\boldsymbol{\lambda}^{\rm STEP}}={\bf
R}_{\boldsymbol{\lambda}_{\rm i}}\left(\boldsymbol{\lambda}^{\rm STEP}\right)\
.$ (13)
As an algebraic equation, the solution is a point in control-parameter space,
thus the STEP consists of two jumps: a jump at the start from its initial
value to the optimal value $\boldsymbol{\lambda}^{\rm STEP}$, and at the end
from the optimal value to the final value. The STEP is a jump protocol
provided the time-evolution operator $L$ is independent of time derivatives of
the control parameters. For Fokker-Planck dynamics this is satisfied if the
system is driven by a (generally time-dependent) scalar potential.
To illustrate the two-step minimum-dissipation protocol we have calculated the
STEP for diverse model systems (Fig. 1). In the translating- and breathing-
trap systems described by Fokker-Planck dynamics (Supplemental Material I SM
), the STEP jumps halfway between the two endpoints, consistent with the
results of Ref. Schmiedl and Seifert, 2007. The single-spin-flip and two-state
binding/unbinding reaction systems are described by master-equation dynamics
(Supplemental Material II and III SM ), with STEPs that jump to points that
are respectively larger and smaller than halfway between the initial and final
control-parameter values. Specific jump sizes for the STEP depend on the
functional form of the IFRR, but the minimum-dissipation protocol always
consists of jumps to and from an intermediate control-parameter value.
Figure 1: Short-time efficient protocols (STEPs) for a single spin in a time-
dependent magnetic field (blue dots), Brownian particle in a translating
harmonic potential (black dashed), Brownian particle in a harmonic potential
with time-dependent stiffness (black dashed), and a two-state
binding/unbinding reaction system with variable binding and unbinding rates
controlled by the chemical-potential difference (red dash-dotted).
The STEP jumps to the point in control-parameter space that maximizes the
_short-time power savings_
$\displaystyle P_{\rm save}^{\rm st}(\boldsymbol{\lambda})\equiv{\bf
R}_{\boldsymbol{\lambda}_{\rm i}}^{\rm
T}(\boldsymbol{\lambda})(\boldsymbol{\lambda}_{{\rm f}}-\boldsymbol{\lambda})$
(14)
due to relaxation at intermediate $\boldsymbol{\lambda}$. The STEP balances
fast relaxation rate ${\bf R}_{\boldsymbol{\lambda}_{\rm i}}$ with large final
jump $\boldsymbol{\lambda}_{{\rm f}}-\boldsymbol{\lambda}$. The STEP spends
the duration $\Delta t$ relaxing at $\boldsymbol{\lambda}^{\rm STEP}$, so for
short duration $P_{\rm save}^{\rm st}(\boldsymbol{\lambda}^{\rm{STEP}})\Delta
t$ is the work saved relative to an instantaneous protocol.
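To make (13) and (14) concrete, the following minimal Python sketch (our illustration, not the authors' code) maximizes the short-time power savings on a grid for the translating and breathing traps. The closed-form IFRRs are our own evaluations of (7a) for overdamped dynamics with unit friction and spring constant $k=1$, and both recover the halfway jump point reported for Fig. 1.

```python
import numpy as np

kBT = 1.0
lam_i, lam_f = 1.0, 2.0   # arbitrary endpoint values (assumption)

# IFRRs evaluated from Eq. (7a), overdamped dynamics, unit friction:
ifrr = {
    "translating trap": lambda lam: lam - lam_i,            # k^2 (lam - lam_i), k = 1
    "breathing trap": lambda lam: kBT * (lam / lam_i - 1),  # our derivation
}

lam = np.linspace(lam_i, lam_f, 100001)
for name, R in ifrr.items():
    P_save = R(lam) * (lam_f - lam)     # short-time power savings, Eq. (14)
    lam_step = lam[np.argmax(P_save)]   # solves Eq. (13) on the grid
    print(f"{name}: lam_STEP = {lam_step:.3f} (midpoint = {(lam_i + lam_f) / 2})")
```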
To demonstrate the energetics of the STEP, consider the thermodynamic cycle
consisting of tightening and loosening a harmonic trap (Fig. 2). For a
quasistatic (infinitely slow) protocol, work equals the free-energy
difference, which exactly cancels for a cyclic process. An instantaneous
protocol has an additional contribution, which for tightening (loosening) the
trap equals the relative entropy between the open (closed) and closed (open)
states. The relative entropy is dissipated as heat during equilibration
between tightening and loosening the trap (outer vertical arrows). The STEP
spends the duration relaxing at an intermediate control-parameter value,
resulting in saved work approximated by the area of the rectangle with width
given by the final jump size $\lambda_{{\rm f}}-\lambda^{\rm STEP}$ and height
by ${R}_{{\lambda}_{\rm i}}(\lambda^{\rm STEP})\Delta t$. To maximize the
saved work (rectangle area) the STEP optimally balances the IFRR (determining
the height) with final jump size (width).
Figure 2: Thermodynamic cycle in the force vs. control parameter plane for the
breathing harmonic trap driven by instantaneous (red dotted), STEP (green
dash-dotted), and quasistatic (black dashed) protocols. Arrows denote protocol
direction for transitions between open ($\lambda_{\rm i}$) and closed
($\lambda_{\rm f}$) states, shown schematically. The area under each curve
gives the average work done by the respective protocol, the area under the
quasistatic curve is the free-energy difference $\Delta F_{{\rm
i}\rightarrow{\rm f}}=F_{\rm f}-F_{\rm i}$, the area between the instantaneous
(dotted) and quasistatic (dashed) curves is the relative entropy (e.g.,
$\langle W\rangle_{\lambda_{\rm i}}\ -\Delta F_{{\rm i}\rightarrow{\rm
f}}=k_{\rm B}TD(\pi_{\rm i}||\pi_{\rm f})$), and the area between the STEP
(dash-dotted) and instantaneous (dotted) curves is the saved work $\langle
W_{\rm save}\rangle$ (shaded rectangles). Control-parameter endpoints satisfy
$\lambda_{\rm i}/\lambda_{\rm f}=1/2$, with duration $\Delta t/\tau=2/5$ for
fastest relaxation time $\tau=1/(2\lambda_{\rm f})$.
For a single control parameter, if the duration is sufficiently short the
_gain_ $G_{\rm save}\equiv{\langle W_{\rm save}\rangle_{\Lambda}^{\rm
des}}/{\langle W_{\rm save}\rangle_{\Lambda}^{\rm naive}}$ in saved work by
the STEP is
$\displaystyle G_{\rm save}^{\rm STEP}\approx\frac{\max_{\lambda}[P_{\rm save}^{\rm st}(\lambda)]}{\overline{P_{\rm save}^{\rm st}(\lambda)}}\ ,$ (15)
where an overbar denotes the spatial average $\overline{P_{\rm save}^{\rm
st}(\lambda)}\equiv(\Delta\lambda)^{-1}\int_{\lambda_{\rm i}}^{\lambda_{\rm
f}}\mathrm{d}\lambda~{}P_{\rm save}^{\rm st}(\lambda)$, “naive” the constant-
velocity protocol, and “des” a designed protocol. The gain from a STEP is
greatest when the power savings $P_{\rm save}^{\rm st}(\lambda)$ is sharply
peaked.
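For instance (our own back-of-the-envelope computation, with arbitrary units), for the breathing trap with the endpoint ratio of Fig. 3 the power savings is a parabola vanishing at both endpoints, so (15) gives a short-duration gain of exactly $3/2$:

```python
import numpy as np

kBT, lam_i, lam_f = 1.0, 16.0, 1.0       # endpoints satisfy lam_i/lam_f = 16
lam = np.linspace(lam_f, lam_i, 200001)  # uniform grid between the endpoints
P = kBT * (lam / lam_i - 1) * (lam_f - lam)   # Eq. (14) with our derived IFRR
print(f"short-duration gain (15): {P.max() / P.mean():.3f}")  # ~ 1.5
```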
_Interpolated protocols_.—In order to design a protocol that performs well for
any duration, we combine the STEP (optimal for fast protocols) with the
minimum-dissipation protocol from linear-response theory (optimal for slow
protocols). The linear-response protocol is continuous and when driven by a
single control parameter proceeds at velocity
$\mathrm{d}\lambda/\mathrm{d}t\propto[\zeta(\lambda)]^{-1/2}$, where
$\zeta(\lambda)$ is the generalized friction coefficient Sivak and Crooks
(2012). We assume the shape of the protocol from linear-response theory
remains unchanged (i.e.,
$\mathrm{d}\lambda/\mathrm{d}t\propto[\zeta(\lambda)]^{-1/2}$) but with
initial jump $(\lambda^{\rm STEP}-\lambda_{\rm i})/(1+\Delta t/\tau)^{\alpha}$
and final jump $(\lambda_{\rm f}-\lambda^{\rm STEP})/(1+\Delta
t/\tau)^{\alpha}$, where the constant $\alpha$ controls the crossover from
slow to fast approximations. For our simple systems we empirically find
$\alpha=1$ performs relatively well.
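A minimal sketch of this interpolation (our own construction following the description above; the friction $\zeta(\lambda)\propto\lambda^{-3}$ used for the breathing trap is our linear-response evaluation, an assumption rather than a quoted result):

```python
import numpy as np

def interpolated_protocol(lam_i, lam_f, lam_step, dt_over_tau, zeta,
                          alpha=1.0, n=1000):
    """Linear-response-shaped interior with STEP-scaled endpoint jumps.

    Jumps shrink as (1 + dt/tau)^(-alpha); between them the protocol
    moves with d(lam)/dt proportional to zeta(lam)^(-1/2).
    """
    scale = (1.0 + dt_over_tau) ** (-alpha)
    lam = np.linspace(lam_i + scale * (lam_step - lam_i),
                      lam_f - scale * (lam_f - lam_step), n)
    # time increments dt proportional to sqrt(zeta(lam)) * dlam,
    # normalized here to unit duration between the two jumps
    mid = 0.5 * (lam[1:] + lam[:-1])
    t = np.concatenate(([0.0], np.cumsum(np.sqrt(zeta(mid)) * np.diff(lam))))
    return t / t[-1], lam

# breathing trap, lam: 1 -> 16, STEP point at the midpoint 8.5
t, lam = interpolated_protocol(1.0, 16.0, 8.5, dt_over_tau=1.0,
                               zeta=lambda l: l ** -3.0)
```

For very short durations the jumps approach the full STEP; for long durations they vanish and the continuous linear-response shape is recovered.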
Figure 3 shows the improvement from designed protocols relative to naive
(constant-velocity) for the model system of a breathing harmonic trap. The
difference between naive and designed work (Fig. 3a) shows the expected
asymptotic behavior in the short- and long-duration limits: scaling as $\Delta
t$ (slope of one) for small $\Delta t/\tau$ and $(\Delta t)^{-1}$ (slope of
negative one) for large $\Delta t/\tau$. Both the fast and slow designed
protocols perform worse than naive (the difference is negative) for large and
small $\Delta t/\tau$, respectively. The fast-protocol approximation (9)
breaks down for long duration because the conjugate-force relaxation rate
decreases as the system approaches equilibrium at $\boldsymbol{\lambda}$,
whereas (9) assumes a constant relaxation rate. However, the interpolated
protocol performs well for any duration, and the largest work saved compared
to naive is for intermediate duration. The gain $G_{\rm save}$ quantifies the
relative increase in saved work from designed protocols over the naive one;
a gain greater than one indicates the designed protocol does less work than the
naive. For this system, the largest gain in saved work occurs in the fast
limit (small $\Delta t/\tau$) for the STEP, interpolated, and exact optimal
protocols.
Figure 3: Benefit in the breathing harmonic trap from designed protocols
relative to the naive (constant-velocity) protocol, as a function of the
duration $\Delta t$ scaled by the fastest integral relaxation time $\tau$. The
different designed (“des”) protocols include the exact optimal Schmiedl and
Seifert (2007) (“opt”, solid black), linear-response optimized (“slow opt”,
dashed blue), STEP (“fast opt”, red dots), and interpolated optimal protocol
(“inter opt”, dash-dotted green). (a) Difference between the work done by the
naive (constant-velocity) and designed protocols. (b) Gain $G_{\rm
save}\equiv{\langle W_{\rm save}\rangle_{\Lambda}^{\rm des}}/{\langle W_{\rm
save}\rangle_{\Lambda}^{\rm naive}}$ in saved work. Solid red line in (b)
denotes the short-duration limit (15). Control-parameter endpoints satisfy
$\lambda_{\rm i}/\lambda_{\rm f}=16$, and the interpolated protocol uses
$\alpha=1$ and fastest integral relaxation time $\tau=1/(2\lambda_{\rm i})$
Blaber and Sivak (2020b).
_Multi-dimensional control_.—Optimization of multi-dimensional control
protocols has seen a recent surge in interest, primarily driven by possible
improvements to nonequilibrium free-energy estimates in fast-switching
simulations Chipot and Lelièvre (2011); Dellago and Hummer (2014). Previous
calculations of minimum-dissipation protocols which observed jumps were
limited to one-dimensional optimization. A significant advantage of the
present approximation is that it permits simple multidimensional control-
protocol optimization. Equation (13) implies that for multidimensional control
the STEP consists of jumps to and from the control-parameter point
$\boldsymbol{\lambda}^{\rm STEP}$.
To illustrate, we consider a nine-spin Ising model with frustrated boundary
conditions (Fig. 4) Rotskoff et al. (2017); Venturoli et al. (2009). We use a
2D control parameter $\boldsymbol{h}=(h_{\rm b},h_{\rm g})$ of magnetic fields
applied to the mid-edge spins (Fig. 4a) which initially hold the system in the
spin-down state and reverse during the protocol, driving the system to invert
its magnetization. Supplemental Material IV SM gives model details.
Figure 4: a) Nine-spin ferromagnetic Ising model (internal black spins) with
fixed boundary conditions (external gray spins). The multi-dimensional control
parameter is two external magnetic fields, $h_{\rm b}$ (blue) applied to
horizontal-edge spins and $h_{\rm g}$ (green) applied to vertical-edge spins.
b) Short-time power savings (14) as function of control parameters ($h_{\rm
b}$,$h_{\rm g}$). Red line: naive protocol; red star:
$\boldsymbol{h}^{\rm{STEP}}$ (13). c) Work difference between designed and
naive protocols (dotted red), and its short-duration approximation (9) (solid
red). d) Gain $G_{\rm save}\equiv{\langle W_{\rm save}\rangle_{\Lambda}^{\rm
des}}/{\langle W_{\rm save}\rangle_{\Lambda}^{\rm naive}}$ in saved work for
multi-dimensional STEP relative to naive (dotted red), and its short-duration
limit (15) (solid red). Control-parameter endpoints are $\boldsymbol{h}_{\rm
i}=(-2,-2)$ and $\boldsymbol{h}_{\rm f}=(2,2)$, with duration $\Delta t$ and
fastest relaxation time $\tau=N/k_{0}$, for $N=9$ spins and single-spin flip
attempt rate $k_{0}$.
The power saving (14) vanishes at initial and final control-parameter values,
respectively corresponding to zero relaxation rate and zero final jump size
(Fig. 4b). By jumping past control-parameter regions with small power saving,
the STEP outperforms the naive protocol for short duration, as quantified by
the difference between naive and designed work (Fig. 4c) and the gain in saved
work (Fig. 4d). Indeed, for short duration the STEP more than doubles the work
saved by the naive protocol (i.e., has gain greater than two).
_Discussion_.—We have developed an approximation for work in the fast-protocol
limit (9) that permits straightforward optimization (13) simply from the
initial force-relaxation rate (IFRR), Eq. (7b). We find that jumps are a
universal feature of minimum-dissipation protocols in this fast limit, which
we illustrate with several model systems under single- (Fig. 1) or multi-
dimensional control (Fig. 4). Jumps minimize dissipation for fast protocols
because the relaxation rate is approximately constant, with no diminishing
returns from spending the entire duration at a single control-parameter value.
Therefore, the STEP jumps between the fixed control-parameter endpoints to
spend the entire duration at the control-parameter value that maximizes the
product of the IFRR and the subsequent jump size (14). This breaks down for
slow protocols since with sufficient time at a single control-parameter value,
the relaxation rate decreases over time; indeed, in the slow limit the
minimum-dissipation protocol is continuous Sivak and Crooks (2012). We combine
these two seemingly disparate limits with a simple interpolation scheme,
producing protocols that perform well for any duration (Fig. 3).
One important application of minimum-dissipation protocols is to free-energy
estimation, which aids the design of novel ligands for targeted protein
binding Schindler et al. (2020); Aldeghi et al. (2018); Kuhn et al. (2017);
Ciordia et al. (2016); Wang et al. (2015); Gapsys et al. (2015); Chodera et
al. (2011). Quite generally, the accuracy of free-energy estimates decreases
with increasing dissipation Ritort et al. (2002); Gore et al. (2003); Shenfeld
et al. (2009); Geiger and Dellago (2010); Kim et al. (2012); Blaber and Sivak
(2020b). Based on the results of Ref. Schmiedl and Seifert, 2007, jump
protocols have been used to reduce dissipation and improve free-energy
estimates Geiger and Dellago (2010), but previously it was unknown whether
jumps would always reduce dissipation in these more complex systems, and there
was no simple procedure to find the optimal jump size. The present formalism
demonstrates that jumps are a general feature and gives a simple optimization
procedure applicable to multidimensional control. This makes protocol
optimization tractable for a considerably expanded range of systems.
Although we focused on systems with known equations of motion, the IFRR (7b)
and short-time power savings (14) are easily estimated without detailed
dynamical information: the system only needs to be equilibrated at a single
control-parameter value; the protocol can be very short; the average converges
with few samples; and the STEP is found using standard optimization techniques
applied to (14). The STEP can thus be computed relatively inexpensively,
easing determination of minimum-dissipation protocols in rapidly driven
complex chemical and biological systems. This opens the door to improve the
energetic efficiency in thermodynamic computing Conte et al. (2019); Proesmans
et al. (2020) and the accuracy of nonequilibrium free-energy estimates in
simulations and single-molecule experiments Tafoya et al. (2019); Blaber and
Sivak (2020b); Shenfeld et al. (2009); Ritort et al. (2002).
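For example, the following sketch (our illustration for the breathing trap; the Euler-Maruyama step, sample size, and grid are assumptions, and the estimate carries sampling noise) implements this recipe: equilibrate once at $\lambda_{\rm i}$, run one very short step at each candidate $\lambda$, estimate the IFRR from the initial slope of the mean conjugate force, and maximize (14):

```python
import numpy as np

rng = np.random.default_rng(0)
kBT, lam_i, lam_f = 1.0, 1.0, 2.0
f = lambda x: -0.5 * x**2          # force conjugate to the stiffness

# one-time equilibration at lam_i (Gaussian with variance kBT/lam_i)
x0 = rng.normal(0.0, np.sqrt(kBT / lam_i), size=500_000)

def ifrr_hat(lam, x, dt=5e-3):
    """Sample estimate of R_{lam_i}(lam) from one Euler-Maruyama step."""
    x1 = x - lam * x * dt + np.sqrt(2 * kBT * dt) * rng.normal(size=x.size)
    return (f(x1) - f(x)).mean() / dt

grid = np.linspace(lam_i, lam_f, 21)
P_hat = np.array([ifrr_hat(l, x0) * (lam_f - l) for l in grid])  # Eq. (14)
print("estimated lam_STEP:", grid[np.argmax(P_hat)])  # ~ midpoint 1.5
```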
This work is supported by an SFU Graduate Dean's Entrance Scholarship (SB), an
NSERC CGS Doctoral fellowship (MDL), an NSERC Discovery Grant and Discovery
Accelerator Supplement (DAS), and a Tier-II Canada Research Chair (DAS), and
was enabled in part by support provided by WestGrid (www.westgrid.ca) and
Compute Canada Calcul Canada (www.computecanada.ca). The authors thank John
Bechhoefer, Jannik Ehrich, and Joseph Lucero (SFU Physics) for enlightening
feedback on the manuscript.
## References
* Carnot et al. (1960) S. Carnot, E. Clapeyron, and R. Clausius, in _Reflections on the Motive Power of Fire: and Other Papers on the Second Law of Thermodynamics_, edited by E. Mendoza (New York: Dover Publications, 1960).
* Jarzynski (2011) C. Jarzynski, Annu. Rev. Condens. Matter Phys. 2, 329 (2011).
* Seifert (2012) U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
* Brown and Sivak (2017) A. I. Brown and D. A. Sivak, Physics in Canada 73 (2017).
* Brown and Sivak (2019) A. I. Brown and D. A. Sivak, Chem. Rev. 120, 434 (2019).
* Tafoya et al. (2019) S. Tafoya, S. J. Large, S. Liu, C. Bustamante, and D. A. Sivak, Proc. Natl. Acad. Sci. U.S.A 116, 5920 (2019).
* Shenfeld et al. (2009) D. K. Shenfeld, H. Xu, M. P. Eastwood, R. O. Dror, and D. E. Shaw, Phys. Rev. E 80, 046705 (2009).
* Geiger and Dellago (2010) P. Geiger and C. Dellago, Phys. Rev. E 81, 021127 (2010).
* Blaber and Sivak (2020a) S. Blaber and D. A. Sivak, Phys. Rev. E 101, 022118 (2020a).
* Chodera et al. (2011) J. D. Chodera, D. L. Mobley, M. R. Shirts, R. W. Dixon, K. Branson, and V. S. Pande, Curr. Opin. Struc. Biol. 21, 150 (2011).
* Minh and Chodera (2009) D. D. Minh and J. D. Chodera, J. Chem. Phys. 131, 134110 (2009).
* Conte et al. (2019) T. Conte, E. DeBenedictis, N. Ganesh, T. Hylton, J. P. Strachan, R. S. Williams, A. Alemi, L. Altenberg, G. Crooks, J. Crutchfield, et al., arXiv preprint arXiv:1911.01968 (2019).
* Proesmans et al. (2020) K. Proesmans, J. Ehrich, and J. Bechhoefer, Phys. Rev. Lett. 125, 100602 (2020).
* Schmiedl and Seifert (2007) T. Schmiedl and U. Seifert, Phys. Rev. Lett. 98, 108301 (2007).
* Aurell et al. (2011) E. Aurell, C. Mejía-Monasterio, and P. Muratore-Ginanneschi, Phys. Rev. Lett. 106, 250601 (2011).
* Gomez-Marin et al. (2008) A. Gomez-Marin, T. Schmiedl, and U. Seifert, J. Chem. Phys. 129, 024114 (2008).
* Then and Engel (2008) H. Then and A. Engel, Phys. Rev. E 77, 041105 (2008).
* Esposito et al. (2010) M. Esposito, R. Kawai, K. Lindenberg, and C. Van den Broeck, EPL 89, 20003 (2010).
* Sivak and Crooks (2012) D. A. Sivak and G. E. Crooks, Phys. Rev. Lett. 108, 190602 (2012).
* Blaber and Sivak (2020b) S. Blaber and D. A. Sivak, J. Chem. Phys. 153, 244119 (2020b).
* Ritort et al. (2002) F. Ritort, C. Bustamante, and I. Tinoco, Proc. Natl. Acad. Sci. U.S.A 99, 13544 (2002).
* Gore et al. (2003) J. Gore, F. Ritort, and C. Bustamante, Proc. Natl. Acad. Sci. U.S.A 100, 12564 (2003).
* Kim et al. (2012) S. Kim, Y. W. Kim, P. Talkner, and J. Yi, Phys. Rev. E 86, 041130 (2012).
* Schindler et al. (2020) C. E. Schindler, H. Baumann, A. Blum, D. Böse, H.-P. Buchstaller, L. Burgdorf, D. Cappel, E. Chekler, P. Czodrowski, D. Dorsch, et al., J. Chem. Inf. Model (2020).
* Aldeghi et al. (2018) M. Aldeghi, V. Gapsys, and B. L. de Groot, ACS Cent. Sci. 4, 1708 (2018).
* Kuhn et al. (2017) B. Kuhn, M. Tichý, L. Wang, S. Robinson, R. E. Martin, A. Kuglstatter, J. Benz, M. Giroud, T. Schirmeister, R. Abel, et al., J. Med. Chem 60, 2485 (2017).
* Ciordia et al. (2016) M. Ciordia, L. Pérez-Benito, F. Delgado, A. A. Trabanco, and G. Tresadern, J. Chem. Inf. Model 56, 1856 (2016).
* Wang et al. (2015) L. Wang, Y. Wu, Y. Deng, B. Kim, L. Pierce, G. Krilov, D. Lupyan, S. Robinson, M. K. Dahlgren, J. Greenwood, et al., J. Am. Chem. Soc. 137, 2695 (2015).
* Gapsys et al. (2015) V. Gapsys, S. Michielssens, J. H. Peters, B. L. de Groot, and H. Leonov, in _Molecular Modeling of Proteins_ , edited by A. Kukol (Springer New York, New York, NY, 2015), pp. 173–209.
* Zhang (2020) Y. Zhang, EPL 128, 30002 (2020).
* Large and Sivak (2019) S. J. Large and D. A. Sivak, J. Stat. Mech.: Theory Exp. 2019, 083212 (2019).
* (32) See Supplemental Material at [URL will be inserted by publisher] for Model Details.
* Chipot and Lelièvre (2011) C. Chipot and T. Lelièvre, SIAM J. Appl. Math 71, 1673 (2011).
* Dellago and Hummer (2014) C. Dellago and G. Hummer, Entropy 16, 41 (2014).
* Rotskoff et al. (2017) G. M. Rotskoff, G. E. Crooks, and E. Vanden-Eijnden, Phys. Rev. E 95, 012148 (2017).
* Venturoli et al. (2009) M. Venturoli, E. Vanden-Eijnden, and G. Ciccotti, J Math Chem 45, 188 (2009).
* Glauber (1963) R. J. Glauber, J. Math. Phys. 4, 294 (1963).
# Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm
Liwei Zhang, School of Mathematical Sciences, Dalian University of Technology, 116023 Dalian, China
Yule Zhang, School of Science, Dalian Maritime University, 116085 Dalian, China
Jia Wu, School of Mathematical Sciences, Dalian University of Technology, 116023 Dalian, China
Xiantao Xiao, School of Mathematical Sciences, Dalian University of Technology, 116023 Dalian, China
This paper considers the problem of minimizing a convex expectation function with a set of inequality convex expectation constraints. We propose a stochastic augmented Lagrangian-type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. This algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that this algorithm exhibits $O(K^{-1/2})$ expected convergence rates for both objective reduction and constraint violation if parameters in the algorithm are properly chosen, where $K$ denotes the number of iterations. Moreover, we show that, with high probability, the algorithm has $O(\log(K)K^{-1/2})$ constraint violation bound and $O(\log^{3/2}(K)K^{-1/2})$ objective bound. Numerical results demonstrate that the proposed algorithm is efficient.
stochastic approximation; linearized proximal method of multipliers; expectation constrained stochastic program; expected convergence rate; high probability bound
§ INTRODUCTION
In this paper, we consider the following stochastic optimization problem
\begin{equation}\label{eq:1}
\begin{array}{rl}
\min\limits_{x \in \cC} & f(x):=\mathbb{E}[F(x,\xi)]\\[4pt]
{\rm s.t.} & g_i(x):=\mathbb{E}[G_i(x,\xi)] \leq 0,\ i=1,\ldots,p.\\
\end{array}
\end{equation}
Here, $\cC \subset \R^n$ is a nonempty bounded closed convex set, $\xi$ is a random vector whose probability distribution is supported on $\Xi \subseteq \R^q$, $F: \cC \times \Xi \rightarrow \R$ and $G_i:\cC \times \Xi \rightarrow \R$, $i=1,\ldots, p$. Let $\Phi$ denote the feasible set of problem (<ref>), defined as
\begin{equation}\label{eq:Phi}
\Phi:=\left\{x\in \cC: g_i(x) \leq 0,\ i=1,\ldots,p\right\}.
\end{equation}
We assume that
\[
\mathbb{E}[F(x,\xi)]= \int_{\Xi} F(x,\xi)dP(\xi),\quad \mathbb{E}[G_i(x,\xi)]= \int_{\Xi} G_i(x,\xi)dP(\xi),\ i=1,\ldots, p,
\]
are well defined and finite valued for every $x\in \cC$. Moreover, we assume that the functions $F(\cdot,\xi)$ and $G_i(\cdot,\xi)$ are continuous and convex on $\cC$ for almost every $\xi$. Hence, the expectation functions $f(\cdot)$ and $g_i(\cdot)$ are continuous and convex on $\cC$.
Problems in the form of (<ref>) are standard in stochastic programming <cit.> and also arise frequently in many practical applications such as machine learning <cit.> and finance <cit.>.
A computational difficulty in solving (<ref>) is that the expectation is a multidimensional integral that cannot be computed with high accuracy for large dimension $q$. To handle this issue, a popular approach is the stochastic approximation (SA) technique, which is based on the following general assumptions: (i)
it is possible to generate an i.i.d. sample $\xi^1,\xi^2,\ldots$ of realizations of the random vector $\xi$;
(ii) there is an oracle, which, for any point $(x,\xi)\in \cC \times \Xi$ returns stochastic subgradients $v_0(x,\xi),\ v_1(x,\xi),\ \ldots,\ v_p(x,\xi)$ of $F(x,\xi)$, $G_1(x,\xi),\ \ldots,\ G_p(x,\xi)$ such that
$ v_i(x)=\mathbb{E}[v_i(x,\xi)],\ i=0,1,\ldots,p $
are well defined and are subgradients of $f(\cdot)$, $g_1(\cdot)$, $\ldots$, $g_p(\cdot)$ at $x$, respectively, i.e., $v_0(x)\in \partial f(x)$, $v_i(x) \in \partial g_i(x)$, $i=1,\ldots,p$.
Since the pioneering paper <cit.>, owing to their low memory requirements and cheap per-iteration computational cost, SA-type algorithms have become widely used in stochastic optimization and machine learning, see, e.g. <cit.>. If $f(\cdot)$ is twice continuously differentiable and strongly convex, the classical analysis shows that the SA algorithm exhibits the asymptotically optimal rate of convergence $\mathbb{E}[f(x^k)-f^*]=O(k^{-1})$, where $x^k$ is the $k$th iterate and $f^*$ is the optimal value. An important improvement, developed by <cit.> and <cit.>, shows that larger stepsizes can be adopted in the SA algorithm provided the obtained iterates are averaged. Moreover, <cit.>
show that, without assuming smoothness and strong convexity, a properly modified SA method achieves the convergence rate $O(k^{-1/2})$ and remarkably outperforms the sample average approximation (SAA) approach for a certain class of convex stochastic problems. After the seminal work <cit.>, many significant results have appeared, even for nonconvex stochastic optimization problems; see <cit.> and the references therein.
In all of the works mentioned above, the feasible set is an abstract closed convex set, and none of these SA algorithms are applicable to expectation-constrained problems. The main reason is that the projection $\Pi_{\Phi}$ is easy to compute only when $\Phi$ has a simple structure;
when $\Phi$ is defined by (<ref>), its computation is prohibitive.
As a first attempt to solve expectation-constrained stochastic optimization problems with the stochastic approximation technique, <cit.> introduce a cooperative stochastic approximation (CSA) algorithm for solving (<ref>) with a single expectation constraint ($p=1$), which is a stochastic counterpart of Polyak's subgradient method <cit.>. The authors show that CSA exhibits the optimal $O(1/\sqrt{K})$ rate of expected convergence, where $K$ is a fixed iteration number. In an online fashion, <cit.> propose an algorithm (simply denoted by “YNW”) that can be easily extended to solve (<ref>) with multiple expectation constraints. Under Slater's condition and the assumption that $\cC$ is compact, they show that the algorithm
can achieve $O(1/\sqrt{K})$ expected regret and $O(\log(K)/\sqrt{K})$
high probability regret. <cit.> develops a penalized stochastic gradient (PSG) method and establishes its almost sure convergence and expected convergence rates. PSG can be roughly viewed as a hybrid of the classical penalty method for nonlinear programming and the stochastic quasi-gradient method <cit.> for stochastic composition problems. A stochastic level-set method <cit.>, which ensures a feasible solution path with high probability, is proposed and analyzed.
<cit.> propose a conservative stochastic optimization algorithm (CSOA), which is in a similar primal-dual framework to PSG and YNW. In addition to CSOA, the authors also propose a projection-free algorithm named FW-CSOA, which can deal with the case where the projection $\Pi_{\cC}$ is difficult to compute. <cit.> study an adaptive primal-dual stochastic gradient method (APriD) for solving (<ref>) and establish a convergence rate of $O(1/\sqrt{K})$ in terms of both the objective error and the constraint violation.
All of the above-mentioned methods for solving (<ref>) belong to the family of stochastic first-order algorithms. Although each iteration of a stochastic first-order algorithm is computationally cheap and these methods perform well for certain problems, there is plenty of practical experience and evidence of their convergence difficulties and instability with respect to the choice of parameters.
In recent years, the success of augmented Lagrangian methods for various kinds of functionally constrained optimization problems has been witnessed. <cit.> study an augmented Lagrangian method for multistage stochastic problems. For solving semidefinite programming (SDP) problems, <cit.> consider a Newton-CG augmented Lagrangian method, which is shown to be very efficient even for large-scale SDP problems. <cit.> propose several methods based on the augmented Lagrangian framework for optimization problems with stochastic-order constraints and analyze their convergence. <cit.> study an augmented Lagrangian decomposition method for nonconvex chance-constrained problems, in which a convex subproblem and a 0-1 knapsack subproblem are solved at each iteration.
The aim of this paper is to develop an efficient stochastic approximation-based augmented Lagrangian-type method for solving (<ref>). To the best of our knowledge, such methods are still limited in the literature.
<cit.> propose a stochastic proximal method of multipliers (PMMSopt) for solving problem (<ref>) and show that PMMSopt exhibits $O(K^{-1/2})$ convergence rates for both objective reduction and constraint violation. PMMSopt is partially inspired by the classic proximal method of multipliers <cit.>, which is modeled through an augmented Lagrangian with an extra proximal term.
However, its subproblem is difficult to solve, which makes PMMSopt impractical to implement; hence, no numerical results were given.
In this paper, based on PMMSopt, we propose a stochastic linearized proximal method of multipliers (SLPMM) for solving the stochastic convex optimization problem (<ref>), and analyze its expected convergence rate as well as probability guarantees for both objective reduction and constraint violation. Specifically, at the $k$th iteration of SLPMM, we consider the augmented Lagrangian function $\cL_\sigma^k(x,\lambda)$ of a linearized problem with respect to the stochastic subgradients $v_i(x^k,\xi^k)$, $i=0,1,\ldots,p$. Then, we obtain the next iterate $x^{k+1}$ by solving the problem $\min_{x\in\cC}\cL_\sigma^k(x,\lambda^k)+\frac{\alpha}{2}\|x-x^k\|^2$ and update the Lagrange multiplier. The subproblem is the minimization of a strongly convex (approximately quadratic) function and hence is relatively easy to solve. Assuming that the set $\cC$ is compact, the subgradients are bounded and Slater's condition holds, if the parameters in SLPMM are chosen as $\alpha=\sqrt{K}$ and $\sigma=1/\sqrt{K}$, we show that SLPMM attains an $O(1/\sqrt{K})$ expected convergence rate with respect to both objective reduction and constraint violation. Under certain light-tail assumptions, we also establish large-deviation properties of SLPMM. Numerical results on practical applications such as Neyman-Pearson classification demonstrate that SLPMM performs efficiently and has certain advantages over existing stochastic first-order methods.
The remaining parts of this paper are organized as follows. In Section <ref>, we develop some important properties of SLPMM. In Section <ref>, we establish the expected convergence rate of SLPMM for problem (<ref>). The high-probability guarantees for objective reduction and constraint violation of SLPMM are investigated in Section <ref>. In Section <ref>, we report our numerical results. Finally, we draw conclusions in Section <ref>.
§ ALGORITHMIC FRAMEWORK, ASSUMPTIONS AND AUXILIARY LEMMAS
In this section, we propose a stochastic linearized proximal method of multipliers (SLPMM) for solving problem (<ref>) and establish some important auxiliary lemmas.
Let us define $[t]_+:=\max\{t,0\}$ for any $t\in\R$ and let $[y]_+=\Pi_{\R^p_+}[y]$ denote the projection of $y$ onto $\R^p_+$ for any $y \in \R^p$. We also define $[t]_+^2:=(\max\{t,0\})^2$.
The details of SLPMM are described in Algorithm 1.

Algorithm 1: A stochastic linearized proximal method of multipliers (SLPMM)

Step 0 (Initialization): Choose an initial point $x^0 \in \cC$ and select parameters $\sigma>0$, $\alpha>0$. Set $\lambda^0=0\in\R^p$ and $k=0$.

Step 1: Generate an i.i.d. sample $\xi^k$ of $\xi$ and compute
\begin{equation}\label{eq:xsub}
x^{k+1}= \mathop{\arg\min}_{x \in \cC} \left\{ \cL^k_{\sigma}(x,\lambda^k) +\frac{\alpha}{2}\|x-x^k\|^2\right\},
\end{equation}
where
\[
\cL^k_{\sigma}(x,\lambda) :=F(x^k,\xi^k)+\langle v_0(x^k,\xi^k),x-x^k \rangle
+ \frac{1}{2\sigma}\left[ \sum_{i=1}^p\big[\lambda_i+\sigma\big(G_i(x^k,\xi^k)+ \langle v_i(x^k,\xi^k),x-x^k \rangle\big)\big]_+^2-\|\lambda\|^2\right]
\]
and $v_i(x^k,\xi^k)$, $i=0,1,\ldots,p$, are the corresponding stochastic subgradients.

Step 2: Update the Lagrange multipliers by
\begin{equation}\label{eq:xna1}
\lambda_i^{k+1}=\big[\lambda_i^k+\sigma\big(G_i(x^k,\xi^k)+ \langle v_i(x^k,\xi^k),x^{k+1}-x^k \rangle\big)\big]_+,\ i=1,\ldots,p.
\end{equation}

Step 3: Set $k\leftarrow k+1$ and return to Step 1.
Specifically, at each iteration, we first generate an i.i.d. sample $\xi^k$ and choose the stochastic subgradients $v_i(x^k,\xi^k)$, $i=0,1,\ldots,p$, of $F$ and $G_i$, respectively.
Then, in (<ref>) we obtain $x^{k+1}$ by computing the proximal point of $\cL^k_{\sigma}(x,\lambda)$, which is the augmented Lagrangian function of the linearized problem
\[
\begin{array}{ll}
\min\limits_{x \in \cC} & F(x^k,\xi^k)+\langle v_0(x^k,\xi^k),x-x^k \rangle\\[4pt]
{\rm s.t.} & G_i(x^k,\xi^k)+ \langle v_i(x^k,\xi^k),x-x^k \rangle \leq 0,\ i=1,\ldots,p.\\
\end{array}
\]
Finally, in (<ref>) we update the Lagrange multipliers.
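To fix ideas, here is a minimal Python sketch of the scheme above (our illustration on a toy instance, not the authors' implementation): minimize $\mathbb{E}[\frac{1}{2}\|x-\xi\|^2]$ over a box subject to a single expectation constraint $\mathbb{E}[a^Tx-1]\leq 0$, with the strongly convex subproblem solved by a bound-constrained quasi-Newton method. The data, sample sizes, and solver choice are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, K = 5, 1000
sigma, alpha = 1.0 / np.sqrt(K), float(np.sqrt(K))  # the paper's parameter choice
mu, a = 2.0 * np.ones(n), np.ones(n)                # toy data (assumptions)
bounds = [(-5.0, 5.0)] * n                          # the box C

x, lam, xbar = np.zeros(n), 0.0, np.zeros(n)
for k in range(K):
    xi = mu + rng.normal(size=n)      # i.i.d. sample; F(x, xi) = 0.5*||x - xi||^2
    v0 = x - xi                       # stochastic subgradient of F at x^k
    Gk = a @ x - 1.0 + rng.normal()   # noisy value of G(x, xi) = a^T x - 1
    v1 = a                            # its stochastic gradient (deterministic here)

    def subobj(z):
        # linearized augmented Lagrangian plus proximal term; the constant
        # F(x^k, xi^k) is dropped since it does not affect the minimizer
        lin = Gk + v1 @ (z - x)
        return (v0 @ (z - x)
                + (max(lam + sigma * lin, 0.0) ** 2 - lam**2) / (2 * sigma)
                + 0.5 * alpha * np.sum((z - x) ** 2))

    z = minimize(subobj, x, method="L-BFGS-B", bounds=bounds).x
    lam = max(lam + sigma * (Gk + v1 @ (z - x)), 0.0)   # multiplier update
    x = z
    xbar = (k * xbar + x) / (k + 1)                     # averaged iterate

print("averaged iterate:", np.round(xbar, 3), " a^T x =", round(float(a @ xbar), 3))
```

With $\mu=2\cdot\mathbf{1}$ and $a=\mathbf{1}$, the solution of the underlying problem is $x=0.2\cdot\mathbf{1}$, so the averaged iterate should approach $a^Tx\approx 1$ roughly at the $O(1/\sqrt{K})$ rate established below.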
Denote
\[
G(x,\xi):=(G_1(x,\xi),\ldots, G_p(x,\xi))^T,\quad g(x):=(g_1(x),\ldots, g_p(x))^T.
\]
If we further define
\[
V(x^k,\xi^k):=(v_1(x^k,\xi^k),\ldots, v_p(x^k,\xi^k))^T,
\]
then the multiplier update (\ref{eq:xna1}) can be rewritten as
\[
\lambda^{k+1}=\big[\lambda^k+\sigma\big(G(x^k,\xi^k)+V(x^k,\xi^k)(x^{k+1}-x^k)\big)\big]_+.
\]
In the following, we shall study the convergence of the stochastic process $\{x^k,\lambda^k\}$ generated by SLPMM with respect to the filtration $\cF_k$ (sigma-algebra) which is generated by the random information $\{(\xi^0,\ldots,\xi^{k-1})\}$.
Before that, we introduce the following assumptions.

Assumption 1 (compactness of $\cC$). Let $R>0$ be a positive parameter such that
\[
\|x'-x''\|\leq R,\ \forall x',x'' \in \cC.
\]

Assumption 2 (bounded constraint functions). There exists a constant $\nu_g>0$ such that for each $\xi^k$,
\[
\|G(x,\xi^k)\| \leq \nu_g,\ \forall x \in \cC.
\]

Assumption 3 (bounded stochastic subgradients). There exist constants $\kappa_f>0$ and $\kappa_g>0$ such that for each $\xi^k$,
\[
\|v_0(x,\xi^k)\| \leq \kappa_f, \quad \|v_i(x,\xi^k)\| \leq \kappa_g,\ i=1,\ldots, p,\ \forall x\in \cC.
\]

Assumption 4 (Slater's condition). There exist $\varepsilon_0>0$ and $\widehat x \in \cC$ such that
\[
g_i(\widehat x) \leq -\varepsilon_0, \ i =1,\ldots, p.
\]
Assumption 1 shows that $\cC$ is a compact convex set with diameter $R$. Assumption 2 indicates that the constraint functions $G_i(\cdot,\xi^k)$ are bounded over $\cC$. This assumption is a bit restrictive, but it is required in the analysis of almost all existing stochastic methods for solving problem (<ref>) <cit.>. Assumption 3 requires that the stochastic subgradients $v_i(\cdot,\xi^k)$ are bounded over $\cC$. Assumption 4 is a standard Slater's condition for optimization problems with functional constraints.
The following auxiliary lemma will be used several times in the sequel.
\begin{lemma}
For any $z\in \cC$, we have
\begin{equation}\label{eq:opt-x-1}
\begin{array}{ll}
\displaystyle\langle v_0(x^k,\xi^k),x^{k+1}-x^k \rangle + \frac{1}{2\sigma}\|\lambda^{k+1}\|^2
+ \frac{\alpha}{2} \|x^{k+1}-x^k\|^2 \\[5pt]
\leq \displaystyle\langle v_0(x^k,\xi^k),z-x^k \rangle + \frac{1}{2\sigma}\left[ \sum_{i=1}^p[\lambda^k_i+\sigma (G_i(x^k,\xi^k)+\langle v_i(x^k,\xi^k), z-x^k \rangle)]_+^2\right]\\[15pt]
\quad\quad+ \displaystyle\frac{\alpha}{2}(\|z-x^k\|^2-\|z-x^{k+1}\|^2).
\end{array}
\end{equation}
In particular, if we take $z=x^k$, it yields
\begin{equation}\label{eq:opt-x-2}
\begin{array}{ll}
\displaystyle\langle v_0(x^k,\xi^k),x^{k+1}-x^k \rangle + \frac{1}{2\sigma}\|\lambda^{k+1}\|^2
+ \alpha \|x^{k+1}-x^k\|^2\\[10pt]
\leq \displaystyle\frac{1}{2\sigma}\left[ \sum_{i=1}^p[\lambda^k_i+\sigma G_i(x^k,\xi^k)]_+^2\right].
\end{array}
\end{equation}
\end{lemma}
\proof{Proof.}
By using the optimality conditions, we have from (\ref{eq:xsub}) that $x^{k+1}$ satisfies
\begin{equation}\label{eq:aux-opt}
0\in \nabla_x\cL^k_{\sigma}(x^{k+1},\lambda^k)+\alpha (x^{k+1}-x^k)+\cN_{\cC}(x^{k+1}),
\end{equation}
where $\cN_{\cC}(x^{k+1})$ denotes the normal cone of $\cC$ at $x^{k+1}$ and
\[
\nabla_x\cL^k_{\sigma}(x^{k+1},\lambda^k)=v_0(x^k,\xi^k)+\sum_{i=1}^pv_i(x^k,\xi^k)\cdot[\lambda_i^k+\sigma (G_i(x^k,\xi^k)+ \langle v_i(x^k,\xi^k),x^{k+1}-x^k \rangle)]_+.
\]
Let us now consider the following auxiliary problem
\begin{equation}\label{eq:auxP}
\begin{array}{ll}
\min\limits_{x \in \cC} \,\langle v_0(x^k,\xi^k),x-x^k \rangle+ \frac{1}{2\sigma}\left[ \displaystyle\sum_{i=1}^p[\lambda_i^k+\sigma (G_i(x^k,\xi^k)+ \langle v_i(x^k,\xi^k),x-x^k \rangle)]_+^2\right]\\[15pt]
\quad\quad\quad +\frac{\alpha}{2}(\|x-x^k\|^2-\|x-x^{k+1}\|^2).
\end{array}
\end{equation}
We can easily check that (\ref{eq:auxP}) is a convex optimization problem. Therefore, $\hat{x}$ is an optimal solution to (\ref{eq:auxP}) if and only if
\[
\begin{array}{ll}
0\in &v_0(x^k,\xi^k)+\sum_{i=1}^pv_i(x^k,\xi^k)\cdot[\lambda_i^k+\sigma (G_i(x^k,\xi^k)+ \langle v_i(x^k,\xi^k),\hat{x}-x^k \rangle)]_+\\[6pt]&\quad+\alpha (x^{k+1}-x^k)+\cN_{\cC}(\hat{x}).
\end{array}
\]
Hence, it follows from (\ref{eq:aux-opt}) that $x^{k+1}$ is an optimal solution to (\ref{eq:auxP}), which immediately gives (\ref{eq:opt-x-1}); taking $z=x^k$ then yields (\ref{eq:opt-x-2}).
\Halmos\endproof
In what follows, we estimate an upper bound of $\|x^{k+1}-x^k\|$.
\begin{lemma}\label{lem:aux3}
Let Assumptions 1-3 be satisfied. Then, if the parameters satisfy $2\alpha-p\kappa_g^2\sigma>0$, we have
\[
\|x^{k+1}-x^k\|\leq \frac{2}{2\alpha-p\kappa_g^2\sigma}\left(\kappa_f+ \sqrt{p}\kappa_g\|\lambda^k\|+ \sqrt{p}\nu_g\kappa_g\sigma\right).
\]
\end{lemma}
\proof{Proof.} From (\ref{eq:opt-x-2}) and Assumption 3, we have
\[
\alpha\|x^{k+1}-x^k\|^2\leq \kappa_f\|x^{k+1}-x^k\|+
\frac{1}{2\sigma}\sum_{i=1}^p\left([a_i]_+^2-[b_i]_+^2\right),
\]
in which, for simplicity, we use
\[
a_i:=\lambda_i^k+\sigma G_i(x^k,\xi^k),\quad b_i:=\lambda_i^k+\sigma (G_i(x^k,\xi^k)+\langle v_i(x^k,\xi^k),x^{k+1}-x^k\rangle).
\]
Noticing that
\[
\begin{array}{ll}
[a_i]_+^2-[b_i]_+^2 &\leq (|a_i|+|b_i|)\cdot|a_i-b_i|\\[8pt]
&\leq (2|a_i|+|b_i-a_i|)\cdot|a_i-b_i|\\[8pt]
&\leq 2|\lambda_i^k+\sigma G_i(x^k,\xi^k)|\cdot\sigma\kappa_g\|x^{k+1}-x^k\|+\sigma^2\kappa_g^2\|x^{k+1}-x^k\|^2,
\end{array}
\]
we obtain
\[
2\alpha\|x^{k+1}-x^k\|\leq 2\kappa_f+\sum_{i=1}^p(2\kappa_g|\lambda_i^k+\sigma G_i(x^k,\xi^k)|+\sigma\kappa_g^2\|x^{k+1}-x^k\|).
\]
If $2\alpha-p\kappa_g^2\sigma>0$, it yields
\[
\|x^{k+1}-x^k\|\leq \frac{2}{2\alpha-p\kappa_g^2\sigma}\left(\kappa_f+\sum_{i=1}^p\kappa_g|\lambda_i^k+\sigma G_i(x^k,\xi^k)|\right).
\]
Therefore, from the facts that $\sum_{i=1}^p|\lambda_i^k|\leq\sqrt{p}\|\lambda^k\|$ and \[\sum_{i=1}^p|G_i(x^k,\xi^k)|\leq\sqrt{p}\|G(x^k,\xi^k)\|\leq\sqrt{p}\nu_g,\]
the claim is obtained.
\Halmos\endproof
Under Slater's condition, we derive the following conditional expected estimate of the multipliers.
\begin{lemma}\label{lem:aux4}
Let Assumption 4 be satisfied. Then, for any positive integers $t_1$ and $t_2$ with $t_2 \leq t_1-1$,
\[
\mathbb{E}\left[\langle \lambda^{t_1}, G(\widehat x, \xi^{t_1}) \rangle \,|\, \cF_{t_2}\right]
\leq -\varepsilon_0 \mathbb{E}\left[\|\lambda^{t_1}\| \,|\, \cF_{t_2}\right].
\]
\end{lemma}
\proof{Proof.}
For any $i \in \{1,\ldots,p\}$, noticing that $\lambda^{t_1}_i\in \cF_{t_1}$, that $\cF_{t_2}\subseteq \cF_{t_1}$ for $t_2 \leq t_1-1$, and that $\xi^{t_1}$ is independent of $\cF_{t_1}$, we have
\[
\begin{array}{ll}
\mathbb{E}\left[\lambda^{t_1}_iG_i(\widehat x, \xi^{t_1}) \,|\, \cF_{t_2}\right] &=\mathbb{E} \left[\mathbb{E}\left[\lambda^{t_1}_iG_i(\widehat x, \xi^{t_1}) \,|\, \cF_{t_1}\right]\,|\,\cF_{t_2}\right]\\[10pt]
&=\mathbb{E}\left[\lambda^{t_1}_i\,g_i(\widehat x) \,|\,\cF_{t_2}\right]\\[10pt]
&\leq -\varepsilon_0 \mathbb{E}\left[\lambda^{t_1}_i \,|\, \cF_{t_2}\right],
\end{array}
\]
where the last inequality uses $\lambda^{t_1}_i\geq 0$ and Assumption 4.
Summing the above inequality over $i \in \{1,\ldots,p\}$ yields
\[
\mathbb{E}\left[\langle \lambda^{t_1}, G(\widehat x, \xi^{t_1}) \rangle \,|\, \cF_{t_2}\right]
\leq
-\varepsilon_0 \mathbb{E} \left[ \sum_{i=1}^p \lambda^{t_1}_i \,|\, \cF_{t_2}\right]
\leq -\varepsilon_0 \mathbb{E}\left[\|\lambda^{t_1}\| \,|\, \cF_{t_2}\right],
\]
by using $\sum_{i=1}^p \lambda^{t_1}_i\geq \|\lambda^{t_1}\|$.
\Halmos\endproof
We next present some important relations of $\|\lambda^k\|$.
\begin{lemma}\label{lem:aux5}
Let Assumptions 1--4 be satisfied and let $s > 0$ be an arbitrary integer. Define $\beta_0:=\nu_g+\sqrt{p}\kappa_gR$ and
\begin{equation}\label{eq:theta9}
\vartheta (\sigma,\alpha,s):= \frac{\varepsilon_0\sigma s}{2}+\sigma \beta_0(s-1)+ \frac{\alpha R^2}{\varepsilon_0s}+\frac{2\kappa_f R}{\varepsilon_0}+ \frac{\sigma \nu_g^2}{\varepsilon_0}.
\end{equation}
Then, the following holds:
\begin{equation}\label{eq:6}
|\|\lambda^{k+1}\|-\|\lambda^k\||\leq \sigma \beta_0
\end{equation}
\begin{equation}\label{eq:7}
\mathbb{E}\left [ \|\lambda^{k+s}\|-\|\lambda^k\| \,|\, \cF_k\right]
\leq \left
\{
\begin{array}{ll}
s \sigma \beta_0, & \mbox{if } \|\lambda^k\| < \vartheta (\sigma,\alpha, s),\\[6pt]
-s \displaystyle \frac{\sigma \varepsilon_0}{2}, & \mbox{if } \|\lambda^k\| \geq \vartheta (\sigma,\alpha,s).
\end{array}
\right.
\end{equation}
\end{lemma}
\proof{Proof.}
It follows from Assumptions 1--3, the multiplier update (\ref{eq:xna1}), and the nonexpansiveness of the projection $\Pi_{\R^p_+}(\cdot)$ that
\[
\begin{array}{ll}
|\|\lambda^{k+1}\|-\|\lambda^k\|| &\leq\|\lambda^{k+1}-\lambda^k\| =\|[\lambda^k+\sigma (G(x^k,\xi^k)+V(x^k,\xi^k)(x^{k+1}-x^k))]_+-[\lambda^k]_+\|\\[6pt]
& \leq
\sigma \|G(x^k,\xi^k)+V(x^k,\xi^k)(x^{k+1}-x^k)\|\\[6pt]
& \leq \sigma [\nu_g+\sqrt{p}\kappa_g R],
\end{array}
\]
which implies (\ref{eq:6}). This also gives that $\|\lambda^{k+s}\|-\|\lambda^k\|\leq s \sigma \beta_0$. Hence, we only need to establish the second part in (\ref{eq:7}) under the case $\|\lambda^k\| \geq \vartheta (\sigma,\alpha,s)$.
For a given positive integer $s$, suppose that $\|\lambda^k\| \geq \vartheta (\sigma,\alpha,s)$. For any $l \in \{k,k+1,\ldots,k+s-1\}$, from (\ref{eq:opt-x-1}) and the convexity of $G_i(\cdot,\xi^l)$ one has
\[
\begin{array}{l}
\langle v_0(x^l,\xi^l),x^{l+1}-x^l\rangle + \frac{1}{2\sigma}\|\lambda^{l+1}\|^2+ \frac{\alpha}{2}\|x^{l+1}-x^l\|^2\\[10pt]
\leq \langle v_0(x^l,\xi^l),\widehat x-x^l\rangle + \frac{1}{2\sigma}\left[ \sum_{i=1}^p[\lambda^l_i+\sigma (G_i(x^l,\xi^l)+ \langle v_i(x^l,\xi^l), \widehat x-x^l \rangle)]_+^2\right]\\[10pt]
\quad\quad + \frac{\alpha}{2}(\|\widehat x-x^l\|^2-\|\widehat x-x^{l+1}\|^2)\\[10pt]
\leq \langle v_0(x^l,\xi^l),\widehat x-x^l\rangle + \frac{1}{2\sigma}\|[\lambda^{l}+\sigma G(\widehat x,\xi^l)]_{+}\|^2+\frac{\alpha}{2}(\|\widehat x-x^l\|^2-\|\widehat x-x^{l+1}\|^2)\\[10pt]
\leq \langle v_0(x^l,\xi^l),\widehat x-x^l\rangle + \frac{1}{2\sigma}\|\lambda^{l}+\sigma G(\widehat x,\xi^l)\|^2+\frac{\alpha}{2}(\|\widehat x-x^l\|^2-\|\widehat x-x^{l+1}\|^2).
\end{array}
\]
Rearranging terms and using Assumptions 1--3,
we obtain
\[
\begin{array}{ll}
\frac{1}{2\sigma} \left[\|\lambda^{l+1}\|^2-\|\lambda^l\|^2\right]\\[10pt]
\leq \langle v_0(x^l,\xi^l),\widehat x-x^{l+1}\rangle + \langle \lambda^l,G(\widehat x,\xi^l)\rangle
+ \frac{\sigma}{2}\|G(\widehat x,\xi^l)\|^2\\[10pt]
\quad\quad + \frac{\alpha}{2}(\|\widehat x-x^l\|^2-\|\widehat x-x^{l+1}\|^2)\\[10pt]
\leq \kappa_fR+ \langle \lambda^l,G(\widehat x,\xi^l)\rangle
+ \frac{\sigma}{2}\nu_g^2+ \frac{\alpha}{2}(\|\widehat x-x^l\|^2-\|\widehat x-x^{l+1}\|^2).
\end{array}
\]
Making a summation over $\{k,k+1,\ldots,k+s-1\}$ and taking the conditional expectation, we obtain from
Lemma \ref{lem:aux4} that
\[
\begin{array}{ll}
\frac{1}{2\sigma} \mathbb{E}\left[\|\lambda^{k+s}\|^2-\|\lambda^k\|^2\,|\, \cF_k\right]\\[10pt]
\leq (\kappa_f R + \frac{\sigma}{2}\nu_g^2 )s+ \sum_{l=k}^{k+s-1} \mathbb{E}\left[\langle \lambda^l,G(\widehat x,\xi^l)\rangle\,|\, \cF_k\right]+\frac{\alpha}{2}\|\widehat x-x^k\|^2\\[10pt]
\leq (\kappa_f R + \frac{\sigma}{2}\nu_g^2 )s -\varepsilon_0 \sum_{l=0}^{s-1} \mathbb{E}\left[\|\lambda^{k+l}\|\,|\, \cF_k\right]+\frac{\alpha}{2}R^2
\\[10pt]
\leq (\kappa_f R + \frac{\sigma}{2}\nu_g^2 )s -\varepsilon_0 \sum_{l=0}^{s-1} \mathbb{E}\left[\|\lambda^{k}\|-\sigma\beta_0l \,|\, \cF_k\right]+\frac{\alpha}{2}R^2
\\[10pt]
\quad \quad (\mbox{from } \|\lambda^{k+l}\|\geq \|\lambda^k\|-l\sigma \beta_0)\\[8pt]
\leq (\kappa_f R + \frac{\sigma}{2}\nu_g^2 )s + \varepsilon_0\sigma\beta_0 \frac{s(s-1)}{2}
-\varepsilon_0 s \|\lambda^{k}\| +\frac{\alpha}{2}R^2.
\end{array}
\]
Further, using (\ref{eq:theta9}) and the assumption $\|\lambda^k\| \geq \vartheta (\sigma,\alpha,s)$, we get
\[
\begin{array}{l}
\mathbb{E}\left[\|\lambda^{k+s}\|^2\,|\, \cF_k\right]\\[10pt]
\leq
\|\lambda^k\|^2+2\sigma(\kappa_f R + \frac{\sigma}{2}\nu_g^2 )s
+\varepsilon_0\sigma^2\beta_0s(s-1)-2\varepsilon_0\sigma s \|\lambda^{k}\| +\sigma\alpha R^2\\[10pt]
\leq(\|\lambda^k\|- \frac{\varepsilon_0\sigma}{2}s)^2
+\varepsilon_0\sigma^2 \beta_0s(s-1)+ 2\sigma(\kappa_f R + \frac{\sigma}{2}\nu_g^2 )s+\sigma\alpha R^2-\varepsilon_0\sigma s \|\lambda^{k}\| \\[10pt]
\leq(\|\lambda^k\|- \frac{\varepsilon_0\sigma}{2}s)^2
%- \frac{\varepsilon_0^2\sigma^2}{4}s^2
+\varepsilon_0\sigma s[\sigma \beta_0(s-1)+ \frac{2(\kappa_f R + \frac{\sigma}{2}\nu_g^2 )}{\varepsilon_0}+\frac{\alpha R^2}{\varepsilon_0s}- \vartheta (\sigma,\alpha,s)]\\[10pt]
\leq (\|\lambda^k\|- \frac{\varepsilon_0\sigma}{2}s)^2.
\end{array}
\]
This, together with Jensen's inequality and the fact that $\|\lambda^k\|\geq \vartheta (\sigma,\alpha,s)\geq \frac{\varepsilon_0\sigma}{2}s$, implies that
\[
\mathbb{E}\left[\|\lambda^{k+s}\|\,|\, \cF_k\right]\leq
\|\lambda^k\|- \frac{\varepsilon_0\sigma}{2}s.
\]
The proof is completed.
\Halmos\endproof
Let us make some comments on inequality (\ref{eq:7}). At first sight the statement may look odd. From the proof, we actually show the following:
the inequality $\mathbb{E}[\|\lambda^{k+s}-\lambda^k\| \,|\, \cF_k]\leq s\sigma\beta_0$ holds under the conditions of Lemma \ref{lem:aux5}; in addition, if $\|\lambda^k\|\geq \vartheta(\sigma,\alpha,s)$, the bound can be sharpened to $\mathbb{E}[\|\lambda^{k+s}\|-\|\lambda^k\| \,|\, \cF_k]\leq -s\frac{\sigma\varepsilon_0}{2}$.
However, we state it in the form of (\ref{eq:7}) intentionally:
this is only an intermediate result, and our real purpose is to show that the conditions of the following lemma \citep[Lemma 5]{YMNeely2017} are satisfied for $\|\lambda^k\|$.
\begin{lemma}\label{lem:Yu1}
Let $\{Z_t, t \geq 0\}$ be a discrete time stochastic process adapted to a filtration $\{\cF_t, t\geq
0\}$ with $Z_0 = 0$ and $\cF_0 = \{\emptyset, \Omega\}$. Suppose there exist an integer $t_0 >0$, real constants $\theta>0$, $\delta_{\max}>0$ and $ 0 <\zeta \leq \delta_{\max}$ such that
\[
\begin{array}{rl}
|Z_{t+1}-Z_t| & \leq \delta_{\max},\\[12pt]
\mathbb{E}[Z_{t+t_0}-Z_t\,|\, \cF_t] & \leq \left
\{
\begin{array}{ll}
t_0 \delta_{\max}, & \mbox{if } Z_t < \theta,\\[6pt]
-t_0\zeta, & \mbox{if } Z_t \geq \theta,
\end{array}
\right.
\end{array}
\]
hold for all $t \in \{1,2,\ldots\}.$ Then the following properties are satisfied.
\begin{itemize}
\item[(i)] The following inequality holds,
\begin{equation}\label{eq:aux1}
\mathbb{E}[Z_t] \leq \theta +t_0 \delta_{\max}+t_0 \frac{4 \delta_{\max}^2}{\zeta}\log \left[ \frac{8 \delta_{\max}^2}{\zeta^2} \right],\ \forall t \in \{1,2,\ldots\}.
\end{equation}
\item[(ii)] For any constant $0 < \mu <1$, we have
\[
\Pr\left[Z_t\geq z\right] \leq \mu,\ \forall t \in \{1,2,\ldots\},
\]
where
\begin{equation}\label{eq:aux2}
z=\theta +t_0 \delta_{\max}+t_0 \frac{4 \delta_{\max}^2}{\zeta}\log \left[ \frac{8 \delta_{\max}^2}{\zeta^2}\right]+t_0 \frac{4 \delta_{\max}^2}{\zeta}\log\left( \frac{1}{\mu} \right).
\end{equation}
\end{itemize}
\end{lemma}
It is not difficult to verify that Lemma \ref{lem:aux5} implies that the conditions of Lemma \ref{lem:Yu1} are satisfied with respect to $\|\lambda^k\|$
if we take
\[
\theta=\vartheta (\sigma,\alpha,s),\ \delta_{\max}=\sigma \beta_0,\ \zeta = \frac{\sigma}{2}\varepsilon_0,\ t_0=s.
\]
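Indeed, with this identification the right-hand side of (\ref{eq:aux1}) becomes
\[
\vartheta(\sigma,\alpha,s)+s\sigma\beta_0+s\,\frac{4(\sigma\beta_0)^2}{\sigma\varepsilon_0/2}\log\left[\frac{8(\sigma\beta_0)^2}{(\sigma\varepsilon_0/2)^2}\right]
=\vartheta(\sigma,\alpha,s)+s\sigma\beta_0+\frac{8\beta_0^2}{\varepsilon_0}\log\left(\frac{32\beta_0^2}{\varepsilon_0^2}\right)\sigma s,
\]
and the extra term in (\ref{eq:aux2}) becomes $t_0\frac{4\delta_{\max}^2}{\zeta}\log\left(\frac{1}{\mu}\right)=\frac{8\beta_0^2}{\varepsilon_0}\log\left(\frac{1}{\mu}\right)\sigma s$.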
For simplicity, we define
\[
\psi(\sigma,\alpha,s):= \kappa_0+\kappa_1\frac{\alpha}{s}+\kappa_2\sigma+\kappa_3 \sigma s,\ \phi (\sigma,\alpha,s,\mu):=\psi(\sigma,\alpha,s)+\frac{8\beta_0^2}{\varepsilon_0} \log \left( \frac{1}{\mu}\right)\sigma s,
\]
where $\kappa_0,\kappa_1,\kappa_2,\kappa_3$ are constants given by
\be\label{eq:kappas}
\kappa_0= \frac{2\kappa_f R}{\varepsilon_0},\,\,
\kappa_1= \frac{ R^2}{\varepsilon_0},\,\,
\kappa_2= \frac{ \nu_g^2}{\varepsilon_0}-\beta_0,\,\,
\kappa_3=\left[2\beta_0 + \frac{\varepsilon_0}{2}+ \frac{8\beta_0^2}{\varepsilon_0}\log \frac{32\beta_0^2}{\varepsilon_0^2}\right].
\ee
We can also observe that
$\psi(\sigma,\alpha,s)$ and $\phi (\sigma,\alpha,s,\mu)$ are exactly the same as the right-hand sides of (\ref{eq:aux1}) and (\ref{eq:aux2}), respectively. Therefore, in view of Lemma \ref{lem:aux5}, the following lemma is a direct consequence of Lemma \ref{lem:Yu1}.
\begin{lemma}\label{lem:lambda}
Let Assumptions \ref{assu:compact}--\ref{assu:slater} be satisfied and $s > 0$ be an arbitrary integer. Then, it holds that
\be\label{eq:lambda-Exp}
\Exp[\|\lambda^k\|]\leq \psi(\sigma,\alpha,s).
\ee
Moreover, for any constant $0<\mu<1$, we have
\be\label{eq:lambda-Pr}
\Pr[\|\lambda^k\|\geq \phi(\sigma,\alpha,s,\mu)]\leq\mu.
\ee
\end{lemma}
\section{Expected convergence rates}\label{sec:rates}
In this section, we shall establish the expected convergence rates of SLPMM with respect to constraint violation and objective reduction.
In the following lemma, we derive a bound of the constraints in terms of the averaged iterate $$\hat{x}^K=\frac{1}{K}\sum_{k=0}^{K-1}x^k,$$ where $K$ is a fixed iteration number.
\begin{lemma}\label{lem:cons}
Let Assumptions \ref{assu:compact}-\ref{assu:moment} be satisfied. Then, if the parameters satisfy $2\alpha-p\kappa_g^2\sigma>0$, for each $i=1,\ldots,p$ we have
\[
\Exp[g_i(\hat{x}^K)]\leq\frac{1}{\sigma K}\Exp[\lambda^{K}_i]+ \frac{\kappa_g(\kappa_f+
\sqrt{p}\nu_g\kappa_g\sigma)}{\alpha}+\frac{\sqrt{p}\kappa_g^2}{\alpha K}\sum_{k=0}^{K-1}\Exp[\|\lambda^k\|].
\]
\end{lemma}
\proof{Proof.}
From the definition $\lambda^{k+1}_i=[\lambda^k_i+\sigma ( G_i(x^k,\xi^k)+\langle v_i(x^k,\xi^k), x^{k+1}-x^k\rangle )]_+$, it follows that
\[
\begin{array}{ll}
\lambda^{k+1}_i
&\geq \lambda^k_i+\sigma( G_i(x^k,\xi^k)+\langle v_i(x^k,\xi^k), x^{k+1}-x^k\rangle )\\[10pt]
& \geq \lambda^k_i+\sigma(G_i(x^k,\xi^k)- \kappa_g\|x^{k+1}-x^k\|).
\end{array}
\]
Using Lemma \ref{lem:aux3}, we have
\be\label{eq:a2}
G_i(x^k,\xi^k)\leq\frac{1}{\sigma}(\lambda^{k+1}_i-\lambda_i^k)+\frac{\kappa_g}{\alpha}(\kappa_f+ \sqrt{p}\kappa_g\|\lambda^k\|+
\sqrt{p}\nu_g\kappa_g\sigma).
\ee
Taking conditional expectation with respect to $\cF_k$ yields
\[
g_i(x^k)\leq\frac{1}{\sigma}(\Exp[\lambda^{k+1}_i|\cF_k]-\lambda_i^k)+\frac{\kappa_g}{\alpha}(\kappa_f+ \sqrt{p}\kappa_g\|\lambda^k\|+
\sqrt{p}\nu_g\kappa_g\sigma),
\]
which further gives that
\[
\Exp[g_i(x^k)]\leq\frac{1}{\sigma}(\Exp[\lambda^{k+1}_i]-\Exp[\lambda_i^k])+\frac{\kappa_g}{\alpha}(\kappa_f+ \sqrt{p}\kappa_g\Exp[\|\lambda^k\|]+
\sqrt{p}\nu_g\kappa_g\sigma).
\]
Summing over $\{0,\ldots,K-1\}$ and noticing that $\lambda^0=0$, we obtain
\[
\sum_{k=0}^{K-1} \Exp[g_i(x^k)]\leq
\frac{1}{\sigma}\Exp[\lambda^{K}_i]+ \frac{\kappa_g K(\kappa_f+
\sqrt{p}\nu_g\kappa_g\sigma)}{\alpha}+\frac{\sqrt{p}\kappa_g^2}{\alpha}\sum_{k=0}^{K-1}\Exp[\|\lambda^k\|].
\]
Therefore, from the convexity of $g_i$ and the definition of $\hat{x}^K$ it follows
\[
\begin{array}{ll}
\Exp[g_i(\hat{x}^K)]&\leq\frac{1}{K}\sum_{k=0}^{K-1} \Exp[g_i(x^k)]\\[10pt]
&\leq\frac{\Exp[\lambda^{K}_i]}{\sigma K}+ \frac{\kappa_g(\kappa_f+
\sqrt{p}\nu_g\kappa_g\sigma)}{\alpha
}+\frac{\sqrt{p}\kappa_g^2}{\alpha K}\sum_{k=0}^{K-1}\Exp[\|\lambda^k\|].
\end{array}
\]
The proof is completed.
\Halmos\endproof
In what follows, we present the bound of the objective reduction in terms of the averaged iterate.
\begin{lemma}\label{lem:obj}
Let Assumptions \ref{assu:compact}-\ref{assu:moment} be satisfied. Then, for any $z \in \Phi$,
\[
\Exp[f(\hat{x}^K)]-f(z)\leq\frac{\kappa_f^2}{2\alpha}+\frac{\sigma}{2}\nu_g^2+\frac{\alpha}{2K}R^2.
\]
\end{lemma}
\proof{Proof.} For any $z \in\Phi$, since $v_0(x^k,\xi^k)\in \partial_x F(x^k,\xi^k)$, we have
\[
\langle v_0(x^k,\xi^k), z-x^k \rangle\leq F(z,\xi^k)-F(x^k,\xi^k).
\]
Then, in view of (\ref{eq:opt-x-1}), one has
\begin{equation}\label{eq:h1}
\begin{array}{ll}
& F(x^k,\xi^k) \\[8pt]
& \leq
F(z,\xi^k)+\left[\langle v_0(x^k,\xi^k), x^k-x^{k+1}\rangle- \frac{\alpha}{2}\|x^{k+1}-x^k\|^2\right]\\[12pt]
&\quad + \frac{1}{2\sigma}\left[\|[\lambda^{k}+\sigma (G(x^k,\xi^k)+V(x^k,\xi^k)(z-x^k))]_+\|^2-\|\lambda^k\|^2\right]\\[12pt]
& \quad -\frac{1}{2\sigma}\left[\|\lambda^{k+1}\|^2-\|\lambda^k\|^2\right] + \frac{\alpha}{2}\left[\|z-x^k\|^2-\|z-x^{k+1}\|^2\right].
\end{array}
\end{equation}
From Assumption \ref{assu:moment} and the fact that $\langle x,y\rangle\leq \frac{\alpha}{2}\|x\|^2+\frac{1}{2\alpha}\|y\|^2$, we obtain that
\begin{equation}\label{eq:h2}
\langle v_0(x^k,\xi^k),x^k-x^{k+1}\rangle- \frac{\alpha}{2}\|x^{k+1}-x^k\|^2
\leq \frac{1}{2\alpha}\|v_0(x^k,\xi^k)\|^2
\leq \frac{\kappa_f^2}{2\alpha}.
\end{equation}
For every $i=1,\ldots,p$, we have from $v_i(x^k,\xi^k)\in \partial_x G_i(x^k,\xi^k)$ and $[a]_+^2\leq a^2$ that
\[
[\lambda^k_i+\sigma(G_i(x^k,\xi^k)+\langle v_i(x^k,\xi^k),z-x^k\rangle)]_+^2 \leq [\lambda^k_i+\sigma G_i(z,\xi^k)]^2,
\]
and hence
\[
\|[\lambda^k+\sigma(G(x^k,\xi^k)+V(x^k,\xi^k)(z-x^k))]_+\|^2\leq \|\lambda^k+\sigma G(z,\xi^k)\|^2.
\]
Then, we obtain
\begin{equation}\label{eq:h3}
\begin{array}{ll}
\|[\lambda^k+\sigma(G(x^k,\xi^k)+V(x^k,\xi^k)(z-x^k))]_+\|^2-\|\lambda^k\|^2\\[10pt]
\leq 2\sigma \langle \lambda^k,G(z,\xi^k)\rangle+\sigma^2\|G(z,\xi^k)\|^2.
\end{array}
\end{equation}
Substituting (\ref{eq:h2}) and (\ref{eq:h3}) into (\ref{eq:h1}), we get
\be\label{eq:a8}
\begin{array}{ll}
F(x^k,\xi^k) &
\leq F(z,\xi^k)+ \frac{\kappa_f^2}{2\alpha}-\frac{1}{2\sigma}\left[\|\lambda^{k+1}\|^2-\|\lambda^k\|^2\right]
+ \langle \lambda^k,G(z,\xi^k)\rangle\\[8pt]
&\quad +\frac{\sigma}{2}\|G(z,\xi^k)\|^2
+ \frac{\alpha}{2}\left[\|z-x^k\|^2-\|z-x^{k+1}\|^2\right].
\end{array}
\ee
Taking conditional expectation with respect to $\cF_k$ and noticing that
\[
\Exp[\langle \lambda^k,G(z,\xi^k)\rangle|\cF_k]=\langle \lambda^k,g(z)\rangle\leq 0,
\]
we have
\[
\begin{array}{ll}
f(x^k)-f(z)&\leq \frac{\kappa_f^2}{2\alpha}-\frac{1}{2\sigma}\left[\Exp[\|\lambda^{k+1}\|^2|\cF_k]-\|\lambda^k\|^2\right]\\[8pt]
&\quad+\frac{\sigma\nu_g^2}{2}+ \frac{\alpha}{2}\left[\|z-x^k\|^2-\Exp[\|z-x^{k+1}\|^2|\cF_k]\right],
\end{array}
\]
which further gives
\[
\begin{array}{ll}
\Exp[f(x^k)]-f(z)&\leq \frac{\kappa_f^2}{2\alpha}-\frac{1}{2\sigma}\left[\Exp[\|\lambda^{k+1}\|^2]-\Exp[\|\lambda^k\|^2]\right]\\[8pt]
&\quad+\frac{\sigma\nu_g^2}{2}+ \frac{\alpha}{2}\left[\Exp[\|z-x^k\|^2]-\Exp[\|z-x^{k+1}\|^2]\right].
\end{array}
\]
Making a summation and noticing that $\lambda^0=0$, one has
\[
\sum_{k=0}^{K-1}\Exp[f(x^k)]\leq K\left[f(z)+\frac{\kappa_f^2}{2\alpha}+\frac{\sigma}{2}\nu_g^2\right]+\frac{\alpha}{2}\|z-x^0\|^2.
\]
Therefore, from the convexity of $f$ and the definition of $\hat{x}^K$ it follows
\[
\Exp[f(\hat{x}^K)]\leq\frac{1}{K}\sum_{k=0}^{K-1}\Exp[f(x^k)]\leq f(z)+\frac{\kappa_f^2}{2\alpha}+\frac{\sigma}{2}\nu_g^2+\frac{\alpha}{2K}R^2.
\]
The proof is completed.
\Halmos\endproof
Based on Lemma \ref{lem:cons} and Lemma \ref{lem:obj}, if we take $\alpha=\sqrt{K}$, $\sigma=1/\sqrt{K}$ and $s=\ceil{\sqrt{K}}$, where $\ceil{a}$ denotes the ceiling function that returns the least integer greater than or equal to $a$, the expected convergence rates of SLPMM with respect to constraint violation and objective reduction are shown to be
$O(1/\sqrt{K})$ in the following theorem.
\begin{theorem}\label{th:rate}
Let Assumptions \ref{assu:compact}-\ref{assu:slater} be satisfied, and take $\alpha=\sqrt{K}$ and $\sigma=1/\sqrt{K}$ in Algorithm \ref{alg:SLPMM}, where $K$ is a fixed iteration number. Then, the following statements hold.
\begin{itemize}
\item[(i)] If $K>\max\{1,p\kappa_g^2/2\}$, then we have
\[
\Exp[g_i(\hat{x}^K)]\leq\frac{(1+\sqrt{p}\kappa_g^2)\bar{\kappa}+\kappa_g\kappa_f}{\sqrt{K}}+\frac{(1+\sqrt{p}\kappa_g^2)\kappa_2+\sqrt{p}\nu_g\kappa_g^2}{K},\quad i=1,\ldots,p,
\]
where $\bar{\kappa}:=\kappa_0+\kappa_1
+2\kappa_3$ and $\kappa_0, \kappa_1, \kappa_2, \kappa_3$ are defined in (\ref{eq:kappas}).
\item[(ii)] For all $K\geq 1$,
\[
\Exp[f(\hat{x}^K)]-f(x^*)\leq\frac{\kappa_f^2+\nu_g^2+R^2}{2\sqrt{K}},
\]
where $x^*$ is any optimal solution to (\ref{eq:1}).
\end{itemize}
\end{theorem}
\proof{Proof.}
Consider item (i). If $K>p\kappa_g^2/2$, then $2\alpha-p\kappa_g^2\sigma>0$, so it follows from Lemma \ref{lem:cons} that
\be\label{eq:a1}
\Exp[g_i(\hat{x}^K)]\leq
\frac{1}{\sigma K}\Exp[\lambda^{K}_i]+ \frac{\kappa_g(\kappa_f+
\sqrt{p}\nu_g\kappa_g\sigma)}{\alpha}+\frac{\sqrt{p}\kappa_g^2}{\alpha K}\sum_{k=0}^{K-1}\Exp[\|\lambda^k\|].
\ee
If we take $s=\ceil{\sqrt{K}}$, then from Lemma \ref{lem:lambda} one has
\[
\Exp[\|\lambda^k\|]\leq \psi(\sigma,\alpha,s)=\kappa_0+\kappa_1\frac{\alpha}{s}+\kappa_2
\sigma+\kappa_3 \sigma s\leq \kappa_0+\kappa_1+\frac{\kappa_2}{\sqrt{K}}+2\kappa_3=\bar{\kappa}+\frac{\kappa_2}{\sqrt{K}}.
\]
Therefore, from $\alpha=\sqrt{K}, \sigma=1/\sqrt{K}$ and (\ref{eq:a1}) we have
\[
\Exp[g_i(\hat{x}^K)]\leq \frac{1}{\sqrt{K}}\left(\bar{\kappa}+\frac{\kappa_2}{\sqrt{K}}\right)+\frac{\kappa_g\kappa_f}{\sqrt{K}}+\frac{\sqrt{p}\nu_g\kappa_g^2}{K}+\frac{\sqrt{p}\kappa_g^2}{\sqrt{K}}\left(\bar{\kappa}+\frac{\kappa_2}{\sqrt{K}}\right),
\]
which verifies item (i).
By taking $z=x^*$ in Lemma \ref{lem:obj}, we derive item (ii) since
\[
\Exp[f(\hat{x}^K)]-f(x^*)\leq\frac{\kappa_f^2}{2\alpha}+\frac{\sigma}{2}\nu_g^2+\frac{\alpha}{2K}R^2=\frac{\kappa_f^2+\nu_g^2+R^2}{2\sqrt{K}}.
\]
The proof is completed.
\Halmos\endproof
Let us point out that all of the algorithms in
\citep{YMNeely2017,LanZ2016,ABR2021} have $O(1/\sqrt{K})$ expected convergence rates. However, the algorithm of \citep{YMNeely2017} is an extension of Zinkevich's online algorithm \citep{Zi2003}, which is a variant of the projected gradient method, and the CSA method \citep{LanZ2016} is a stochastic counterpart of Polyak's subgradient method \citep{Polyak1967}. When problem (\ref{eq:1}) reduces to a deterministic problem, these algorithms have at most a linear rate of convergence. In contrast, SLPMM becomes the (linearized) proximal method of multipliers, which has an asymptotically superlinear rate of convergence.
Moreover, the iteration complexity analysis in \citep{LanZ2016} is based on a selection of stepsizes that depends on the parameters $R$, $\kappa_f$ and $\kappa_g$. However, these quantities are not known beforehand when problem (\ref{eq:1}) is posed. Note that in SLPMM the stepsizes $\sigma$ and $\alpha$ are problem-independent.
\section{High probability performance analysis}\label{sec:prob}
In this section, we shall establish the large-deviation properties of SLPMM. By Theorem \ref{th:rate} and Markov's inequality, we have
for all $\rho_c>0$ and $\rho_o>0$ that
\be\label{eq:d1}
\Pr\left[g_i(\hat{x}^K)\leq \rho_c\left(\frac{(1+\sqrt{p}\kappa_g^2)\bar{\kappa}+\kappa_g\kappa_f}{\sqrt{K}}+\frac{(1+\sqrt{p}\kappa_g^2)\kappa_2+\sqrt{p}\nu_g\kappa_g^2}{K}\right)\right]\geq 1-\frac{1}{\rho_c}
\ee
and
\be\label{eq:d2}
\Pr\left[f(\hat{x}^K)-f(x^*)\leq \rho_o\frac{\kappa_f^2+\nu_g^2+R^2}{2\sqrt{K}}\right]\geq 1-\frac{1}{\rho_o}.
\ee
However, these results are very weak.
In the following, we will show that these high probability bounds can be significantly improved.
We introduce the following standard ``light-tail'' assumption, see \citep{Lan2016,LanZ2016,LNSY2019} for instance.
\begin{assumption}\label{assu:lt-cons}
There exists a constant $\sigma_c>0$ such that, for any $x\in \cC$,
\[
\Exp[\exp(\|G_i(x,\xi)-g_i(x)\|^2/\sigma_c^2)]\leq\exp(1),\quad i=1,\ldots,p.
\]
\end{assumption}
From a well-known result \citep[Lemma 4.1]{Lan2020}, under Assumption \ref{assu:lt-cons} one has for any $\rho\geq 0$ and $i=1,\ldots,p$ that
\be\label{eq:a3}
\Pr\left[\frac{1}{K}\sum_{k=0}^{K-1}g_i(x^k)-\frac{1}{K}\sum_{k=0}^{K-1}G_i(x^k,\xi^k)\geq\frac{\rho\sigma_c}{\sqrt{K}}\right]\leq \exp(-\rho^2/3).
\ee
For the sake of readability, we define the following notation:
\[
\theta_1:=\sigma_c+(1+\sqrt{p}\kappa_g^2)\frac{16\beta_0^2}{\varepsilon_0},\quad \theta_2:=\kappa_g\kappa_f+(1+\sqrt{p}\kappa_g^2)(\kappa_0+\kappa_1+2\kappa_3),
\]
\[
\theta_3:=(1+\sqrt{p}\kappa_g^2)\frac{16\beta_0^2}{\varepsilon_0},\quad \theta_4:=\sqrt{p}\nu_g\kappa_g^2+(1+\sqrt{p}\kappa_g^2)\kappa_2,
\]
in which $\beta_0$ is defined in Lemma \ref{lem:aux5}, $\kappa_0, \kappa_1, \kappa_2, \kappa_3$ are defined in (\ref{eq:kappas}) and other parameters are defined in Assumptions \ref{assu:compact}-\ref{assu:lt-cons}.
We are now ready to state the main result on constraint violation.
\begin{theorem}\label{th:cons-pr}
Let Assumptions \ref{assu:compact}-\ref{assu:lt-cons} be satisfied. We take $\alpha=\sqrt{K}$ and $\sigma=1/\sqrt{K}$ in Algorithm \ref{alg:SLPMM}, where $K$ is a fixed iteration number satisfying $K>\max\{1,p\kappa_g^2/2\}$. Then, for any $\rho\geq 0$ and $i=1,\ldots,p$,
\[
\Pr\left[g_i(\hat{x}^K)\leq \frac{\theta_1\rho+\theta_2+\theta_3\log(K+1)}{\sqrt{K}}+\frac{\theta_4}{K}\right]\geq 1-\exp(-\rho^2/3)-\exp(-\rho).
\]
\end{theorem}
\proof{Proof.}
Summing (\ref{eq:a2}) over $\{0,\ldots,K-1\}$ and dividing by $K$, we have
\[
\frac{1}{K}\sum_{k=0}^{K-1}G_i(x^k,\xi^k)\leq \frac{\lambda_i^K}{\sigma K}+\frac{\kappa_g(\kappa_f+\sqrt{p}\nu_g\kappa_g\sigma)}{\alpha}+\frac{\sqrt{p}\kappa_g^2}{\alpha K}\sum_{k=0}^{K-1}\|\lambda^k\|.
\]
Noticing that $\alpha=\sqrt{K}$, $\sigma=1/\sqrt{K}$ and $g_i(\hat{x}^K)\leq \frac{1}{K}\sum_{k=0}^{K-1}g_i(x^k)$, one has
\be\label{eq:a4}
g_i(\hat{x}^K)\leq \frac{1}{K}\sum_{k=0}^{K-1}[g_i(x^k)-G_i(x^k,\xi^k)]+\frac{\lambda_i^K}{\sqrt{K}}+\frac{\kappa_g\kappa_f}{\sqrt{K}}+\frac{\sqrt{p}\nu_g\kappa_g^2}{K}+\frac{\sqrt{p}\kappa_g^2}{K^{3/2}}\sum_{k=0}^{K-1}\|\lambda^k\|.
\ee
We next consider the probability bound of $\lambda^k$. From (\ref{eq:lambda-Pr}), it follows that
\[
\Pr[\|\lambda^k\|\geq \phi(\sigma,\alpha,s,\mu)]\leq\mu,\quad k=0,1\ldots,K.
\]
If we take $s=\ceil{\sqrt{K}}$ and $\mu=\exp(-\rho)/(K+1)$, then
\[
\begin{array}{ll}
\phi(\sigma,\alpha,s,\mu)&= \kappa_0+\kappa_1\frac{\alpha}{s}+\kappa_2\sigma+\kappa_3 \sigma s+\frac{8\beta_0^2}{\varepsilon_0} \log \left( \frac{1}{\mu}\right)\sigma s\\[10pt]
&\leq \kappa_0+\kappa_1+\frac{\kappa_2}{\sqrt{K}}+2\kappa_3+\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K+1))
\end{array}
\]
and hence for all $k=0,1,\ldots,K$,
\be\label{eq:a5}
\Pr[\|\lambda^k\|\geq \kappa_0+\kappa_1+\frac{\kappa_2}{\sqrt{K}}+2\kappa_3+\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K+1))]\leq\frac{\exp(-\rho)}{K+1}.
\ee
Using (\ref{eq:a3}) and (\ref{eq:a5}) in (\ref{eq:a4}), we conclude that
\[
\begin{array}{ll}
\Pr\left[ g_i(\hat{x}^K)\geq \frac{\rho(\sigma_c+(1+\sqrt{p}\kappa_g^2)\frac{16\beta_0^2}{\varepsilon_0})}{\sqrt{K}}+\frac{\kappa_g\kappa_f+(1+\sqrt{p}\kappa_g^2)(\kappa_0+\kappa_1+2\kappa_3)}{\sqrt{K}}\right.\\[15pt]
\quad\quad\left.+\frac{(1+\sqrt{p}\kappa_g^2)\frac{16\beta_0^2}{\varepsilon_0}\log(K+1)}{\sqrt{K}}+\frac{\sqrt{p}\nu_g\kappa_g^2+(1+\sqrt{p}\kappa_g^2)\kappa_2}{K}
\right]\leq \exp(-\rho^2/3)+\exp(-\rho).
\end{array}
\]
The proof is completed.
\Halmos\endproof
In view of Theorem \ref{th:cons-pr}, if we take $\rho=\log(K)$, then we have
\[
\Pr\left[g_i(\hat{x}^K)\leq O\left(\frac{\log(K)}{\sqrt{K}}\right)\right]\geq 1-\frac{1}{K^{2/3}}-\frac{1}{K}.
\]
We next make the following ``light-tail'' assumption with respect to the objective function.
\begin{assumption}\label{assu:lt-obj}
There exists a constant $\sigma_o>0$ such that, for any $x\in \cC$,
\[
\Exp[\exp(\|F(x,\xi)-f(x)\|^2/\sigma_o^2)]\leq\exp(1).
\]
\end{assumption}
Similar to (\ref{eq:a3}), under Assumption \ref{assu:lt-obj} one has for any $\rho\geq 0$ that
\be\label{eq:a6}
\Pr\left[\frac{1}{K}\sum_{k=0}^{K-1}f(x^k)-\frac{1}{K}\sum_{k=0}^{K-1}F(x^k,\xi^k)\geq\frac{\rho\sigma_o}{\sqrt{K}}\right]\leq \exp(-\rho^2/3)
\ee
and
\be\label{eq:a7}
\Pr\left[\frac{1}{K}\sum_{k=0}^{K-1}F(z,\xi^k)-\frac{1}{K}\sum_{k=0}^{K-1}f(z)\geq\frac{\rho\sigma_o}{\sqrt{K}}\right]\leq \exp(-\rho^2/3)
\ee
for all $z\in \cC$.
The following lemma is from \citep[Lemma 9]{YMNeely2017}.
\begin{lemma}\label{lem:Yu2}
Let $\{Z_t,t\geq 0\}$ be a supermartingale adapted to a filtration $\{\cF_t,t\geq 0\}$ with $Z_0=0$ and $\cF_0=\{\emptyset, \Omega\}$, i.e. $\mathbb E[Z_{t+1}\,|\, \cF_t]\leq Z_t$, $\forall t \geq 0$. Suppose there exists a constant $c>0$ such that $\{|Z_{t+1}-Z_t|>c\}\subseteq \{Y_t>0\}$, $\forall t\geq 0$, where each $Y_t$ is adapted to $\cF_t$. Then, for all $z>0$, we have
\[
\Pr[Z_t\geq z] \leq e^{-z^2/(2tc^2)}+ \sum_{j=0}^{t-1} \Pr[Y_j>0],\ \forall t \geq 1.
\]
\end{lemma}
For any fixed $z \in \Phi$, by taking $Z_t:= \sum_{k=0}^{t-1} \langle \lambda^k, G(z, \xi^k) \rangle$ in Lemma \ref{lem:Yu2} we obtain the following lemma.
\begin{lemma}\label{lem:l13}
For any fixed $z \in \Phi$ and an arbitrary constant $c>0$, let $Z_0:=0$ and $Z_t:= \sum_{k=0}^{t-1} \langle \lambda^k, G(z, \xi^k) \rangle$ for $t\geq 1$. Let $\cF_0=\{\emptyset, \Omega\}$ and $Y_t:=\|\lambda^{t}\|-c/\nu_g$ for all $t\geq 0$. Then, for all $\gamma>0$, we have
\[
\Pr[Z_t\geq \gamma] \leq e^{-\gamma^2/(2tc^2)}+ \sum_{j=0}^{t-1} \Pr[Y_j>0],\ \forall t \geq 1.
\]
\end{lemma}
\proof{Proof.} It is simple to check that $\{Z_t\}$ and $\{Y_t\}$ are both adapted to $\{\cF_t, t\geq 0\}$. Now we prove that
$\{Z_t\}$ is a supermartingale. Since
\[
Z_{t+1}=Z_t+\langle \lambda^{t}, G(z, \xi^{t})\rangle,
\]
we have
\[
\begin{array}{ll}
\mathbb{E}[Z_{t+1}\,|\, \cF_t]&=\mathbb{E}[ Z_t+\langle \lambda^{t}, G(z, \xi^{t})\rangle\,|\, \cF_t]\\[8pt]
& =Z_t+\langle \lambda^{t}, \mathbb{E}[G(z, \xi^{t})\,|\, \cF_t]\rangle\\[8pt]
&=Z_t+\langle \lambda^{t}, g(z)\rangle\\[8pt]
&\leq Z_t,
\end{array}
\]
which follows from $\lambda^t\in\cF_t$, $\lambda^t\geq 0$ and $g(z)\leq 0$. Thus, we obtain that $\{Z_t\}$ is a supermartingale.
From Assumption \ref{assu:cons}, we get
\[
|Z_{t+1}-Z_t|=|\langle \lambda^{t}, G(z, \xi^{t}) \rangle| \leq \nu_g\|\lambda^{t}\|.
\]
This implies that $\|\lambda^{t}\|> c/\nu_g$ whenever $|Z_{t+1}-Z_t| >c$, and hence
\[
\{|Z_{t+1}-Z_t| >c\} \subseteq \{Y_t>0\}.
\]
Therefore, we can observe that the conditions of Lemma \ref{lem:Yu2} are satisfied, and hence the claim is obtained.
\Halmos\endproof
Finally, we establish a high probability objective reduction bound in the following theorem.
\begin{theorem}\label{th:obj-pr}
Let Assumptions \ref{assu:compact}-\ref{assu:slater} and \ref{assu:lt-obj} be satisfied. We take $\alpha=\sqrt{K}$ and $\sigma=1/\sqrt{K}$ in Algorithm \ref{alg:SLPMM}, where $K\geq 1$ is a fixed iteration number. Then, for any $\rho\geq 0$,
\[
\begin{array}{ll}
\displaystyle\Pr\left[f(\hat{x}^K)-f(x^*)\leq\sqrt{2\rho}\nu_g\left(\frac{\kappa_0+\kappa_1+2\kappa_3}{\sqrt{K}}+\frac{\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K))}{\sqrt{K}}+\frac{\kappa_2}{K}\right)\right.\\[15pt]
\quad\quad \displaystyle\left.+ \frac{2\sigma_o\rho}{\sqrt{K}}+\frac{\theta_5}{\sqrt{K}} \right]\geq 1-2\exp(-\rho^2/3)-2\exp(-\rho),
\end{array}
\]
where $x^*$ is any fixed optimal solution to (\ref{eq:1}),
$\theta_5:=(\kappa_f^2+\nu_g^2+R^2)/2$, $\beta_0$ is defined in Lemma \ref{lem:aux5} and $\kappa_0, \kappa_1, \kappa_2, \kappa_3$ are defined in (\ref{eq:kappas}).
\end{theorem}
\proof{Proof.}
For any $z\in\Phi$, summing (\ref{eq:a8}) over $\{0,\ldots,K-1\}$ and using the facts that $\lambda^0=0$, $\|G(z,\xi^k)\|^2\leq \nu_g^2$ and $\|z-x^0\|^2\leq R^2$, we have
\[
\frac{1}{K}\sum_{k=0}^{K-1}F(x^k,\xi^k)\leq \frac{1}{K}\sum_{k=0}^{K-1}F(z,\xi^k)+\frac{1}{K}\sum_{k=0}^{K-1}\langle \lambda^k,
G(z, \xi^k)\rangle +\frac{\kappa_f^2+\nu_g^2+R^2}{2\sqrt{K}}.
\]
Then, it follows from $f(\hat{x}^K)\leq \frac{1}{K}\sum_{k=0}^{K-1}f(x^k)$ that
\be\label{eq:b1}
\begin{array}{ll}
f(\hat{x}^K)-f(z) &\leq \displaystyle\frac{1}{K}\sum_{k=0}^{K-1}[f(x^k)-F(x^k,\xi^k)]+\frac{1}{K}\sum_{k=0}^{K-1}[F(z,\xi^k)-f(z)]\\[15pt]
&\quad \displaystyle+\frac{1}{K}\sum_{k=0}^{K-1}\langle \lambda^k,
G(z, \xi^k)\rangle+\frac{\kappa_f^2+\nu_g^2+R^2}{2\sqrt{K}}.
\end{array}
\ee
By Lemma \ref{lem:l13}, for any $c>0$ and $\gamma>0$ we have
\[
\Pr\left[\frac{1}{K}\sum_{k=0}^{K-1}\langle \lambda^k,
G(z, \xi^k)\rangle\geq \frac{\gamma}{K}\right] \leq \exp(-\gamma^2/(2Kc^2))+ \sum_{k=0}^{K-1}\Pr[\|\lambda^k\|\geq c/\nu_g].
\]
Let us take $s=\ceil{\sqrt{K}}$ and $\mu=\exp(-\rho)/K$, then
\[
\phi(\sigma,\alpha,s,\mu)\leq \kappa_0+\kappa_1+\frac{\kappa_2}{\sqrt{K}}+2\kappa_3+\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K)).
\]
If we take $c=\nu_g\phi(\sigma,\alpha,s,\mu)$, then from (\ref{eq:lambda-Pr}) we obtain
\[
\sum_{k=0}^{K-1}\Pr[\|\lambda^k\|\geq c/\nu_g]\leq K\mu=\exp(-\rho).
\]
Moreover, let us take $\gamma=\sqrt{2\rho K}\,c$; then $\exp(-\gamma^2/(2Kc^2))=\exp(-\rho)$ and
\[
\frac{\gamma}{K}=\frac{\sqrt{2\rho}\,c}{\sqrt{K}}\leq \sqrt{2\rho}\nu_g\left(\frac{\kappa_0+\kappa_1+2\kappa_3}{\sqrt{K}}+\frac{\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K))}{\sqrt{K}}+\frac{\kappa_2}{K}\right)
\]
and hence
\be\label{eq:b2}
\begin{array}{ll}
\Pr\left[\frac{1}{K}\sum_{k=0}^{K-1}\langle \lambda^k,
G(z, \xi^k)\rangle\geq \sqrt{2\rho}\nu_g\left(\frac{\kappa_0+\kappa_1+2\kappa_3}{\sqrt{K}}+\frac{\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K))}{\sqrt{K}}+\frac{\kappa_2}{K}\right)\right] \\[15pt]
\leq 2\exp(-\rho).
\end{array}
\ee
Using (\ref{eq:a6}), (\ref{eq:a7}) and (\ref{eq:b2}) in (\ref{eq:b1}), one has
\[
\begin{array}{ll}
\Pr\left[f(\hat{x}^K)-f(z)\geq\frac{2\sigma_o\rho}{\sqrt{K}}+\sqrt{2\rho}\nu_g\left(\frac{\kappa_0+\kappa_1+2\kappa_3}{\sqrt{K}}+\frac{\frac{16\beta_0^2}{\varepsilon_0}(\rho+\log(K))}{\sqrt{K}}+\frac{\kappa_2}{K}\right)\right.\\[15pt]
\quad\quad\left.+\frac{\kappa_f^2+\nu_g^2+R^2}{2\sqrt{K}} \right]\leq 2\exp(-\rho^2/3)+2\exp(-\rho).
\end{array}
\]
The claim is derived by taking $z=x^*$ in the above inequality.
\Halmos\endproof
In view of Theorem \ref{th:obj-pr}, if we take $\rho=\log(K)$, then we have
\[
\Pr\left[f(\hat{x}^K)-f(x^*)\leq O\left(\frac{\log^{3/2}(K)}{\sqrt{K}}\right)\right]\geq 1-\frac{2}{K^{2/3}}-\frac{2}{K}.
\]
In contrast to (\ref{eq:d1}) and (\ref{eq:d2}), we can observe that the results in Theorems \ref{th:cons-pr} and \ref{th:obj-pr} are much sharper.
% Moreover, the established probability bounds in Theorem \ref{th:cons-pr} and \ref{th:obj-pr} are comparable with the related existing work \cite{YMNeely2017}.
\section{Preliminary numerical experiments}\label{sec:num}
In this section, we demonstrate the efficiency of the proposed stochastic linearized proximal method of multipliers on three classes of preliminary test problems. All numerical experiments are carried out using MATLAB R2020a on a desktop computer with an Intel(R) Xeon(R) E-2124G 3.40GHz CPU and 32GB memory. The MATLAB code and test problems can be found at
\url{https://bitbucket.org/Xiantao_Xiao/SLPMM}. All reported time is wall-clock time in seconds.
\subsection{Solving subproblems}\label{sec:subp}
This subsection focuses on solving the subproblem (\ref{xna}) in SLPMM, that is
\[
\begin{array}{l}
x^{k+1}= \argmin\limits_{x \in \cC} \,\left\{ \cL^k_{\sigma }(x,\lambda^k) +\frac{\alpha}{2}\|x-x^k\|^2\right\}.
\end{array}
\]
This problem is equivalent to
\begin{equation}\label{eq:general-subp}
\min_{x \in \cC} \phi(x):=\frac{1}{2}\sum_{i=1}^p[a_i^Tx+b_i]_+^2+\frac{1}{2}\|x\|^2+c^Tx,
\end{equation}
where
\[
a_i:=\sqrt{\frac{\sigma}{\alpha}}v_i(x^k,\xi^k),\ b_i:=\frac{\lambda_i^k}{\sqrt{\sigma\alpha}}+\sqrt{\frac{\sigma}{\alpha}}G_i(x^k,\xi^k)-\left\langle\sqrt{\frac{\sigma}{\alpha}}v_i(x^k,\xi^k),x^k\right\rangle
\]
and $c:=v_0(x^k,\xi^k)/\alpha-x^k$.
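In practice, assembling the data $(a_i,b_i,c)$ from one stochastic sample is a cheap vectorized operation. The following Python/NumPy sketch illustrates this (our own illustration, not the released MATLAB code; the rows of \texttt{V} stack the constraint subgradients $v_i(x^k,\xi^k)$):
\begin{verbatim}
import numpy as np

def subproblem_data(x, lam, G, V, v0, sigma, alpha):
    """Form (A, b, c) of (eq:general-subp): rows of A are the a_i."""
    s = np.sqrt(sigma / alpha)
    A = s * V                              # V: (p, n) stacked subgradients
    b = lam / np.sqrt(sigma * alpha) + s * G - A @ x
    c = v0 / alpha - x
    return A, b, c
\end{verbatim}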
Since $\phi$ is strongly convex, we can apply the following version of Nesterov's accelerated projected gradient method to solve (\ref{eq:general-subp}).
\mbox{}\\[4pt]
{\bf APG}: Nesterov's accelerated projected gradient method for (\ref{eq:general-subp}).
\begin{description}
\item[Step 0 ] Input $x^0\in \cC$ and $\eta>1$. Set $y^0=x^0$, $L_{-1}=1$ and $t:=0$.
\item[Step 1] Compute
\[
x^{t+1}=T_{L_t}(y^t),
\]
where $T_{L}(y):=\Pi_{\cC}[y-\frac{1}{L}\nabla \phi(y)]$, the stepsize $L_t=L_{t-1}\eta^{i_t}$ and $i_t$ is the smallest nonnegative integer that satisfies the following condition
\[
\begin{array}{ll}
\phi(T_{L_{t-1}\eta^{i_t}}(y^t))&\leq \phi(y^t)+\langle\nabla \phi(y^t),T_{L_{t-1}\eta^{i_t}}(y^t)-y^t\rangle\\[10pt]
&\quad+\dfrac{L_{t-1}\eta^{i_t}}{2}\|T_{L_{t-1}\eta^{i_t}}(y^t)-y^t\|^2.
\end{array}
\]
\item[ Step 2] Compute the extrapolation
\[
y^{t+1}=x^{t+1}+\frac{1-\sqrt{\mu/L_t}}{1+\sqrt{\mu/L_t}}\,(x^{t+1}-x^t),
\]
where $\mu$ is the strong convexity modulus of $\phi$ (for (\ref{eq:general-subp}) one may take $\mu=1$).
\item[ Step 3] Set $t:=t+1$ and go to Step 1.
\end{description}
A well-known convergence result of the above method is that, if $\phi$ is $\mu$-strongly convex and $\nabla \phi$ is $L$-Lipschitz continuous, then
$\phi(x^t)-\phi(x^*)\leq O\left((1-\sqrt{\mu/L})^t\right)$. See \citep{Beck2017} for a detailed discussion on this topic. Here, we assume that the set $\cC$ is simple such that the projection $\Pi_{\cC}$ can be efficiently computed. For example, if
\[\cC:=\left\{x\in\R^n:\sum_{i=1}^nx_i=1,\ x\geq 0\right\},\] the projection $\Pi_{\cC}$ can be computed by the method proposed in \citep{WL2015}.
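As an illustration, the following Python/NumPy sketch (our own minimal implementation, not the released MATLAB code) applies APG to (\ref{eq:general-subp}) over the unit simplex; it uses the standard sort-based projection in the spirit of \citep{WL2015} and, for simplicity, the FISTA-type extrapolation sequence in place of the strongly convex momentum of Step 2:
\begin{verbatim}
import numpy as np

def phi_grad(x, A, b, c):
    r = np.maximum(A @ x + b, 0.0)          # [a_i^T x + b_i]_+
    return 0.5 * r @ r + 0.5 * x @ x + c @ x, A.T @ r + x + c

def proj_simplex(y):
    # Euclidean projection onto {x : sum(x) = 1, x >= 0}
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, y.size + 1) > css)[0][-1]
    return np.maximum(y - css[rho] / (rho + 1.0), 0.0)

def apg(A, b, c, x0, eta=2.0, tol=1e-6, max_iter=5000):
    x = y = proj_simplex(x0)
    L, t = 1.0, 1.0
    for _ in range(max_iter):
        fy, gy = phi_grad(y, A, b, c)
        while True:                         # backtracking: L_t = L_{t-1} eta^{i_t}
            z = proj_simplex(y - gy / L)
            d = z - y
            if phi_grad(z, A, b, c)[0] <= fy + gy @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        if np.linalg.norm(d) <= tol:        # stop when ||y^t - T_{L_t}(y^t)|| is small
            return z
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z + ((t - 1.0) / t_new) * (z - x)   # Nesterov extrapolation
        x, t = z, t_new
    return x
\end{verbatim}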
When $\cC$ is $\R^n$ or a polyhedron, the subproblem is equivalent to the following convex quadratic programming (QP) problem
\[
\begin{array}{ll}
\min\limits_{x,y}\quad & \displaystyle\frac{1}{2}\sum_{i=1}^py_i^2+\frac{1}{2}\|x\|^2+c^Tx\\[8pt]
\textrm{s.t.}\quad & a_i^Tx+b_i-y_i\leq 0,\ i=1,2,\ldots,p,\\[5pt]
& x\in\cC,\quad y\geq 0.
\end{array}
\]
In this case, the subproblem can also be solved by a QP solver.
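For instance, a minimal sketch of this QP route, assuming the CVXPY modeling layer and $\cC=\R^n$ (solver choice left to the default):
\begin{verbatim}
import cvxpy as cp

def solve_subproblem_qp(A, b, c):
    p, n = A.shape
    x = cp.Variable(n)
    y = cp.Variable(p, nonneg=True)
    obj = 0.5 * cp.sum_squares(y) + 0.5 * cp.sum_squares(x) + c @ x
    cons = [A @ x + b - y <= 0]            # a_i^T x + b_i <= y_i
    cp.Problem(cp.Minimize(obj), cons).solve()
    return x.value
\end{verbatim}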
Let us also mention that, if $p=1$, the stationary point of the objective function in Problem (\ref{eq:general-subp}) has the closed form
\[
\tilde{x}=\left\{
\begin{array}{ll}
-c,\quad&\mbox{if}\ -a_1^Tc+b_1\leq 0,\\[8pt]
-c-\dfrac{b_1-a_1^Tc}{1+\|a_1\|^2}a_1,\quad&\mbox{otherwise},
\end{array}
\right.
\]
where the second branch follows from solving $(a_1^Tx+b_1)a_1+x+c=0$ via the Sherman--Morrison formula. Then, $\tilde{x}$ is the unique optimal solution if it lies in the interior of $\cC$.
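The second branch can be checked numerically; a small Python/NumPy verification on synthetic data (all values hypothetical):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4
a1, c = rng.normal(size=n), rng.normal(size=n)
b1 = float(a1 @ c) + 0.5            # forces -a1^T c + b1 > 0 (second branch)
xt = -c - (b1 - a1 @ c) / (1.0 + a1 @ a1) * a1
grad = np.maximum(a1 @ xt + b1, 0.0) * a1 + xt + c
print(np.linalg.norm(grad))         # ~ 1e-16: xt is a stationary point
\end{verbatim}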
\subsection{Neyman-Pearson classification}
For a classifier $h$ predicting labels $1$ and $-1$, let us define the type I error (misclassifying class $-1$ as $1$) and type II error (misclassifying class $1$ as $-1$) respectively by
\[
\mbox{type I error}:=\mathbb{E}[\varphi(-bh(a))|b=-1],\quad\mbox{type II error}:=\mathbb{E}[\varphi(-bh(a))|b=1],
\]
where $\varphi$ is some merit function. Unlike conventional binary classification in machine learning, the Neyman-Pearson (NP) classification paradigm learns a classifier by minimizing the type II error while keeping the type I error below a user-specified level $\tau>0$;
see \citep{TFZ2016} and references therein.
Specifically, for a given class $\mathcal{H}$ of classifiers, NP classification solves the following problem
\[
\begin{array}{ll}
\min\limits_{h\in\mathcal{H}}&\mathbb{E}[\varphi(-bh(a))|b=1]\\[5pt]
\mbox{s.t.}\quad &\mathbb{E}[\varphi(-bh(a))|b=-1]\leq \tau.
\end{array}
\]
In what follows, we consider its empirical risk minimization counterpart. Suppose that a labeled training dataset $\{a_i\}_{i=1}^N$ consists of the positive set $\{a^0_i\}_{i=1}^{N_0}$ and the negative set $\{a^1_i\}_{i=1}^{N_1}$. The associated empirical NP classification problem is
\begin{equation}\label{prob:NP}
\begin{array}{ll}
\min\limits_x&f(x):=\frac{1}{N_0}\sum_{i=1}^{N_0}\ell(x^Ta_i^0)\\[8pt]
\mbox{s.t.}\quad &g(x):=\frac{1}{N_1}\sum_{i=1}^{N_1}\ell(-x^Ta_i^1)-\tau\leq 0,
\end{array}
\end{equation}
where $\ell(\cdot)$ is a loss function, e.g., logistic loss $\ell(y):=\log(1+\exp(-y))$.
The datasets tested in our numerical comparison are summarized in Table \ref{table:datasets}. The multi-class datasets have been manually divided into two classes; for example, the MNIST dataset is used for classifying odd versus even digits.
\begin{table}[!htp]
\centering
\caption{Datasets used in Neyman-Pearson classification}
\begin{tabular}{|c||c|c|c|c|}
\hline
Dataset & Data size $N$ & Variables $n$ & {Density} & Reference \\
\hline
%$\mathtt{rcv1}$ & 20242 & 47236 & 0.16\% &\cite{RCV1}\\[2pt]
$\mathtt{gisette}$ & 6000 & 5000 & 12.97\% &\citep{gisette}\\[2pt]
$\mathtt{CINA}$ & 16033 & 132 & 29.56\% &\citep{CINA}\\[2pt]
$\mathtt{MNIST}$ & 60000 & 784 & 19.12\% &\citep{MNIST}\\
\hline
\end{tabular}
\label{table:datasets}
\end{table}
%In Figure \ref{figure:regret}, we show the changes of the \textit{objective regret} and the \textit{regret in constraint violation} of SLPMM for solving the Neyman-Pearson classification problem (\ref{prob:NP}). For all three datasets, it can be seen that the regrets of objective and constraint violation converge rapidly to zero.
In the following experiment, we show the performance of SLPMM compared with CSA \citep{LanZ2016}, PSG \citep{Xiao2019}, YNW \citep{YMNeely2017} and APriD \citep{YX2022}. For all five methods, we use an efficient mini-batch strategy, that is, at each iteration the stochastic gradients of the objective function and the constraint function are computed, respectively, by
\[
v_0^k:=\frac{1}{|\cN_0^k|}\sum_{i\in\cN_0^k}\nabla f_i(x^k),\quad v_1^k:=\frac{1}{|\cN_1^k|}\sum_{i\in\cN_1^k}\nabla g_i(x^k),
\]
where $f_i(x):=\ell(x^Ta_i^0), i=1,\ldots,N_0$ and $g_i(x):=\ell(-x^Ta_i^1), i=1,\ldots,N_1$. Here, the sets $\cN_0^k$ and $\cN_1^k$ are randomly chosen from the index sets $\{1,\ldots,N_0\}$ and $\{1,\ldots,N_1\}$, respectively. The batch sizes $|\cN_0^k|$ and $|\cN_1^k|$ are fixed to $1\%$ of the data sizes $N_0$ and $N_1$, respectively.
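A minimal Python/NumPy sketch of this mini-batch oracle for (\ref{prob:NP}) with the logistic loss (our own illustration; the rows of \texttt{A0} and \texttt{A1} hold the positive and negative samples, and the 1\% batch fraction follows the text):
\begin{verbatim}
import numpy as np

def logloss(z):
    return np.logaddexp(0.0, -z)            # stable log(1 + exp(-z))

def minibatch_oracle(x, A0, A1, tau, rng, frac=0.01):
    i0 = rng.choice(len(A0), max(1, int(frac * len(A0))), replace=False)
    i1 = rng.choice(len(A1), max(1, int(frac * len(A1))), replace=False)
    z0, z1 = A0[i0] @ x, -(A1[i1] @ x)
    F = logloss(z0).mean()                  # stochastic objective value
    G = logloss(z1).mean() - tau            # stochastic constraint value
    # chain rule with l'(z) = -1 / (1 + exp(z))
    v0 = (-1.0 / (1.0 + np.exp(z0))) @ A0[i0] / len(i0)
    v1 = (1.0 / (1.0 + np.exp(z1))) @ A1[i1] / len(i1)
    return F, v0, G, v1
\end{verbatim}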
We choose $x^0=0$ as the initial point. The parameter $\tau$ is set to 1. The parameters in SLPMM are chosen as $\alpha=\sqrt{K}$ and $\sigma=1/\sqrt{K}$. The maximum number of iterations is set to $K=3000$.
In Figures \ref{figure:gisette}, \ref{figure:CINA} and \ref{figure:MNIST}, we show the performance of all methods for solving the empirical NP classification problem with the logistic loss. In each figure, panels (a) and (b) show the evolution of the objective value and the constraint value with respect to \textit{epochs}, and panels (c) and (d) show the same quantities with respect to \textit{cputime}. Here, in (a) and (c) the horizontal dashed line represents a reference optimal objective value computed by the built-in MATLAB function \texttt{fmincon}. Moreover, one epoch denotes a full pass over a dataset. The results are averaged over 10 independent runs.
Generally, we can observe that the behaviors of CSA, PSG and YNW are similar since all of them are stochastic first-order methods.
SLPMM clearly outperforms these three methods when both objective decrease and constraint violation are taken into account. In particular, the results demonstrate that SLPMM converges noticeably faster than CSA and PSG, both with respect to epochs and cputime. Our results also show that PSG usually generates solutions that fail to satisfy the constraint. In contrast, CSA always produces feasible solutions, but the objective values are far from optimal. Finally, the performance of APriD is very different from the others: its overall performance seems better, but the curves of APriD oscillate heavily even when averaged over 10 runs, and the oscillation is much worse in each individual run.
\begin{figure}[htp]
\centering
\setlength{\belowcaptionskip}{-6pt}
\begin{tabular}{cccc}
\subfloat[$\mathtt{objective \slash epochs}$]{
\includegraphics[width=5.5cm]{./results_logloss_gisette_epoch_obj_10_runs.eps}} &
\subfloat[$\mathtt{constraint \slash epochs}$]{
\includegraphics[width=5.5cm]{./results_logloss_gisette_epoch_cons_10_runs.eps}} &\\
\subfloat[$\mathtt{objective \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_logloss_gisette_time_obj_10_runs.eps}} &
\subfloat[$\mathtt{constraint \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_logloss_gisette_time_cons_10_runs.eps}}
\end{tabular}
\caption{Comparison of algorithms on $\mathtt{gisette}$ for Neyman-Pearson classification.}
\label{figure:gisette}
\end{figure}
\begin{figure}[htp]
\centering
\setlength{\belowcaptionskip}{-6pt}
\begin{tabular}{cccc}
\subfloat[$\mathtt{objective \slash epochs}$]{
\includegraphics[width=5.5cm]{./results_logloss_CINA_epoch_obj_10_runs.eps}} &
\subfloat[$\mathtt{constraint \slash epochs}$]{
\includegraphics[width=5.5cm]{./results_logloss_CINA_epoch_cons_10_runs.eps}} &\\
\subfloat[$\mathtt{objective \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_logloss_CINA_time_obj_10_runs.eps}} &
\subfloat[$\mathtt{constraint \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_logloss_CINA_time_cons_10_runs.eps}}
\end{tabular}
\caption{Comparison of algorithms on $\mathtt{CINA}$ for Neyman-Pearson classification.}
\label{figure:CINA}
\end{figure}
\begin{figure}[htp]
\centering
\setlength{\belowcaptionskip}{-6pt}
\begin{tabular}{cccc}
\subfloat[$\mathtt{objective \slash epochs}$]{
\includegraphics[width=5.5cm]{./results_logloss_MNIST_epoch_obj_10_runs.eps}} &
\subfloat[$\mathtt{constraint \slash epochs}$]{
\includegraphics[width=5.5cm]{./results_logloss_MNIST_epoch_cons_10_runs.eps}} &\\
\subfloat[$\mathtt{objective \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_logloss_MNIST_time_obj_10_runs.eps}} &
\subfloat[$\mathtt{constraint \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_logloss_MNIST_time_cons_10_runs.eps}}
\end{tabular}
\caption{Comparison of algorithms on $\mathtt{MNIST}$ for Neyman-Pearson classification.}
\label{figure:MNIST}
\end{figure}
\subsection{Stochastic quadratically constrained quadratic programming}
In this subsection, we consider the following stochastic quadratically constrained quadratic programming problem
\[
\begin{array}{ll}
\min\limits_{x\in\cC} &f(x):=\mathbb{E}\left[\frac{1}{2}x^TA^{(0)}x+(b^{(0)})^Tx-c^{(0)}\right]\\[8pt]
\mbox{s.t.}\quad &g_i(x):=\mathbb{E}\left[\frac{1}{2}x^TA^{(i)}x+(b^{(i)})^Tx+c^{(i)}\right]\leq 0,\ i=1,2,\ldots,p,\\[5pt]
\end{array}
\]
where $A^{(i)}\in\cS_{+}^n$, $b^{(i)}\in\R^n$, $c^{(i)}\in\R$ for $i=0,1,\ldots,p$. Here, $\cS_+^n$ denotes the set of all $n\times n$ positive semidefinite matrices. The expectations are taken with respect to the components of the parameters $\{A^{(i)},b^{(i)},c^{(i)}\}_{i=0}^p$, which are all random variables.
The following numerical example is partially motivated by \cite{CZP2021}. The set $\cC:=\{x\in\R^n:\|x\|\leq R\}$, where $R>0$ is a constant. Let $\widehat{x}\in\R^n$ be a given point with each entry $\widehat{x}_i$ uniformly generated from $\left(-\frac{R}{\sqrt{n}},\frac{R}{\sqrt{n}}\right)$. Let $I_n$ be the identity matrix. For each $i=0,1,\ldots,p$, the random matrix $A^{(i)}=I_n+\Delta_i$, where $\Delta_i$ is a symmetric matrix whose entries are uniformly distributed over $[-0.1,0.1]$. The random vector $b^{(i)}$ is uniformly distributed over $[-1,1]^n$. The random variable $c^{(i)}$ is constructed with a particular purpose: let $h^{(i)}$ be a random variable uniformly distributed over $[0,2i]$, and define $c^{(i)}=-(\frac{1}{2}\widehat{x}^TA^{(i)}\widehat{x}+(b^{(i)})^T\widehat{x}+h^{(i)})$. In this setting, we can easily verify that $g_i(\widehat{x})=-i<0$ for $i=1,\ldots,p$, and hence Slater's condition is satisfied. One can also verify that the optimal solution is $0$ and the optimal value is $\frac{1}{2}\|\widehat{x}\|^2$.
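A Python/NumPy sketch of this random instance generator (our own reading of the construction above; the mirrored upper triangle keeps the entries of $\Delta_i$ uniform on $[-0.1,0.1]$, and $\mathbb{E}[A^{(i)}]=I_n$):
\begin{verbatim}
import numpy as np

def sample_qcqp_data(n, p, xhat, rng):
    """One realization of {A^(i), b^(i), c^(i)}, i = 0, ..., p."""
    out = []
    for i in range(p + 1):
        U = rng.uniform(-0.1, 0.1, (n, n))
        Delta = np.triu(U) + np.triu(U, 1).T   # symmetric, uniform entries
        A = np.eye(n) + Delta
        b = rng.uniform(-1.0, 1.0, n)
        h = rng.uniform(0.0, 2.0 * i) if i > 0 else 0.0   # h^(i) ~ U[0, 2i]
        c = -(0.5 * xhat @ A @ xhat + b @ xhat + h)
        out.append((A, b, c))
    return out

rng = np.random.default_rng(0)
n, p, R = 100, 5, 2.0
xhat = rng.uniform(-R / np.sqrt(n), R / np.sqrt(n), n)
data = sample_qcqp_data(n, p, xhat, rng)
\end{verbatim}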
In this experiment, we compare the performance of SLPMM with PSG, YNW and APriD. At each iteration of the algorithms, we generate the samples of $\{A^{(i)},b^{(i)},c^{(i)}\}_{i=0}^p$ based on the above distributions for function and gradient evaluation. We set $n=100$, $p=5$, $R=2$. The maximum number of iterations is set to $K=1000$. The initial point is set to $x^0=(\sqrt{R/n},\sqrt{R/n},\ldots,\sqrt{R/n})^T$.
The results in terms of time are shown in Figure \ref{figure:QCQP}. From panel (b), which plots the value of $\max_i\{g_i(x^k)\}$, we can see that the iterates of all algorithms satisfy the constraints. From panel (a), we observe that SLPMM is comparable with PSG and clearly outperforms APriD and YNW.
\begin{figure}[htp]
\centering
\setlength{\belowcaptionskip}{-6pt}
\begin{tabular}{cccc}
\subfloat[$\mathtt{objective \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_QCQP_random_time_obj_1_runs.eps}} &
\subfloat[$\mathtt{constraint \slash cputime}$]{
\includegraphics[width=5.5cm]{./results_QCQP_random_time_cons_1_runs.eps}}
\end{tabular}
\caption{Comparison of algorithms on stochastic quadratically constrained quadratic programming.}
\label{figure:QCQP}
\end{figure}
\subsection{Second-order stochastic dominance constrained portfolio optimization}
In this subsection, we consider the following second-order stochastic dominance (SSD) constrained portfolio optimization problem
\[
\begin{array}{ll}
\min &\mathbb{E}[-\xi^Tx]\\[5pt]
\mbox{s.t.}\quad &\mathbb{E}[[\eta-\xi^Tx]_+]\leq\mathbb{E}[[\eta-Y]_+],\quad\forall \eta\in\R,\\[5pt]
&x\in\cC:=\{x\in\R^n:\sum_{i=1}^nx_i=1,\ \bar{x}\geq x\geq 0\},
\end{array}
\]
where $\bar{x}$ is the upper bound and $Y$ stands for the random return of a benchmark portfolio dominated by the target portfolio in the SSD sense. Since it was first introduced by \cite{DR2003}, SSD has been widely used to control risk in financial portfolio optimization \citep{KD2018,Noyan2018}.
\cite{KKU2016} showed that,
if $Y$ is discretely distributed on $\{y_1,y_2,\ldots,y_p\}$, the SSD constrained portfolio optimization problem reduces to
\begin{equation}\label{eq:port-ssd}
\begin{array}{ll}
\min &f(x):=\mathbb{E}[-\xi^Tx]\\[8pt]
\mbox{s.t.}\quad &g_i(x):=\mathbb{E}[[y_i-\xi^Tx]_+]-\mathbb{E}[[y_i-Y]_+]\leq 0,\quad i=1,\ldots,p,\\[8pt]
&x\in \cC:=\{x\in\R^n:\sum_{i=1}^nx_i=1,\ \bar{x}\geq x\geq 0\},
\end{array}
\end{equation}
which is an instance of Problem (\ref{eq:1}).
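For illustration, the one-sample constraint values $G_i(x,\xi)=[y_i-\xi^Tx]_+-\mathbb{E}[[y_i-Y]_+]$ of (\ref{eq:port-ssd}) can be evaluated as in the following Python/NumPy sketch (synthetic data; the benchmark term is precomputed once from the scenarios of $Y$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 8
y = np.sort(rng.normal(0.01, 0.02, p))   # benchmark scenarios y_1, ..., y_p
bench_tail = np.array([np.mean(np.maximum(yi - y, 0.0)) for yi in y])

def ssd_constraints(x, xi):
    # G_i(x, xi) for one sampled asset-return vector xi
    return np.maximum(y - xi @ x, 0.0) - bench_tail

x = np.full(n, 1.0 / n)                  # equally weighted portfolio
xi = rng.normal(0.01, 0.05, n)           # one return sample
print(ssd_constraints(x, xi))
\end{verbatim}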
\cite{DMW2016} proposed several methods for solving SSD constrained optimization problems based on the augmented Lagrangian framework and analyzed their convergence. In particular, the proposed approximate augmented Lagrangian method with exact minimization (PALEM) has some similarities to SLPMM. At each iteration of PALEM, a minimization problem with respect to the augmented Lagrangian function of a reduced problem is solved to obtain $x^k$, and the multiplier $\mu^k$ is updated. They proved that the sequences $\{x^k\}$ and $\{\mu^k\}$ converge to optimal solutions of the primal and dual problems, respectively. Although SLPMM is also built on the augmented Lagrangian framework, as is PALEM, the two methods are quite different. The subproblem at each iteration of SLPMM is a minimization of a linearized augmented Lagrangian function together with a proximal term, which is easier to solve. The sampling strategies also differ: in PALEM, the sample set is updated at each iteration based on the calculation of the expectation of the constraint function, whereas SLPMM requires only one sample at each iteration. Moreover, since in our setting the expectation is assumed to be impossible to calculate, we cannot obtain convergence of the sequence to an optimal solution.
In this experiment, we compare the performance of SLPMM with APriD, PSG, YNW and PALEM to solve Problem (\ref{eq:port-ssd}) on the following four datasets
\[
\{``\texttt{Dax\_26\_3046}",``\texttt{DowJones\_29\_3020}",``\texttt{SP100\_90\_3020}",``\texttt{DowJones\_76\_30000}"\}
\] from \citep{KKU2016}. Take ``\texttt{DowJones\_29\_3020}'' for example: ``\texttt{DowJones}'' stands for the Dow Jones Index, 29 is the number of stocks and 3020 is the number of scenarios, i.e., $n=29, p=3020$. The initial point is set to 0. For PALEM, we use the MATLAB function \texttt{fmincon} to solve the subproblem. For SLPMM, we utilize Nesterov's accelerated projected gradient method (APG) to solve the subproblem (\ref{eq:general-subp}); the stopping criterion of APG is set to $\|y^t-T_{L_t}(y^t)\|\leq 10^{-6}$, and the projection $\Pi_{\cC}$ is computed by the method proposed in \citep{WL2015}. In particular, since the number of constraints of Problem (\ref{eq:port-ssd}) is large, we apply a sampling technique to reduce the computational cost. Specifically, at each iteration, instead of using the whole constraint index set $\{1,\ldots,p\}$ in the augmented Lagrangian function (\ref{augL}), we first randomly sample a subset $I_k\subset\{1,\ldots,p\}$ and then replace $\sum_{i=1}^p$ with $\sum_{i\in I_k}$ in (\ref{augL}). This sampling strategy, which is also used in \citep{Xiao2019}, has proven very efficient in practice. Let us also remark that, by taking an extra expectation with respect to $I_k$, the expected convergence rates of SLPMM coupled with this sampling strategy can be established in a similar way as in Section \ref{sec:rates}. This is also pointed out in \citep[Section 5]{Xiao2019}.
The numerical results are presented in Figure \ref{figure:ssd}. Since the maximum of the $p$ constraint values is always zero (which indicates that the constraints are satisfied), we omit the presentation of constraint violation and only report the change of the objective value with respect to cputime. The horizontal dashed line in each panel represents a reference optimal objective value obtained from \citep{KKU2016}. In general, we can observe that SLPMM has a clear advantage over the other four algorithms. On the dataset ``\texttt{DowJones\_76\_30000}'', which corresponds to a large-scale optimization problem with 30,000 constraints, SLPMM converges to the optimal objective value in less than 4 seconds. We can also observe that SLPMM is very robust and stable on all four datasets.
\begin{figure}[htp]
\centering
\setlength{\belowcaptionskip}{-6pt}
\begin{tabular}{cccc}
\subfloat[\texttt{Dax\_26\_3046}]{
\includegraphics[width=5.5cm]{./results_SSD_DAX_26_3046_time_obj_1_runs.eps}} &
\subfloat[\texttt{DowJones\_29\_3020}]{
\includegraphics[width=5.5cm]{./results_SSD_DowJones_29_3020_time_obj_1_runs.eps}} &\\
\subfloat[\texttt{SP100\_90\_3020}]{
\includegraphics[width=5.5cm]{./results_SSD_SP100_90_3020_time_obj_1_runs.eps}} &
\subfloat[\texttt{DowJones\_76\_30000}]{
\includegraphics[width=5.5cm]{./results_SSD_DowJones_76_30000_time_obj_1_runs.eps}}
\end{tabular}
\caption{Comparison of algorithms for SSD constrained portfolio optimization.}
\label{figure:ssd}
\end{figure}
%\subsection{MIMO Transmit Signal Design with Imperfect CSI}
%Suppose that a base station is equipped with $n$ antennas and it simultaneously transmits $p$ data streams to $p$ users using MIMO signaling based on the estimated channel state information (CSI) $\hat{h}_i\in\mathbb{C}^n$, $i=1,2,\ldots,p$. Let $e_i$ denote the random error between the true channel $h_i$ and the estimated channel $\hat{h}_i$.
%We now consider the following MIMO transmit signal design problem with imperfect CSI \citep{DB2009,LLK2019}
%\min\limits_{Q} &f(Q):=\displaystyle\sum_{i=1}^p\Tr(Q_i)\\[5pt]
%\mbox{s.t.}\quad &\mathbb{E}[G_i(Q,\xi)]\geq r_i,\quad\forall i=1,2,\ldots,p,\\[5pt]
%&Q_i\succeq 0,\quad i=1,2,\ldots,p,
%G_i(Q,\xi):=\log\left(1+\frac{h_i^HQ_ih_i}{\sum_{j\neq i}h_i^HQ_jh_i+\sigma_i^2}\right),\forall i=1,2,\ldots,p,
%$\xi=\{e_1,e_2,\ldots,e_p\}$, $Q=\{Q_1,Q_2,\ldots,Q_p\}$ with each $Q_i$ being the covariance matrix of the transmit signal for user $i$, $\sigma_i^2$ is the variance of the thermal noise at user $i$, $r_i$ is the expected rate requirement for user $i$. Here, $\Tr(Q_i)$ is the trace of the matrix $Q_i$ and $Q_i\succeq 0$ denotes that the matrix $Q_i$ is required to be positive semidefinite. Let us note that the objective function $f(Q)$ is linear and the expectation constraints $\mathbb{E}[G_i(Q,\xi)]\geq r_i$ are nonconvex.
%\cite{LLK2019} proposed a constrained stochastic successive convex approximation (CSSCA) method for solving this nonconvex problem, which is the state-of-the-art algorithm in this field. Let us remark that the successive convex approximation technique is also used by \cite{HYZ2011} to deal with joint chance constrained programs, in which the constraint is also nonconvex. We try to compare SLPMM with CSSCA for a randomly generated test case which can be found on arxiv: https://arxiv.org/abs/1801.08266. In both SLPMM and CSSCA, CVX \citep{cvx} is used to solve the convex subproblem at each iteration.
%The numerical results are presented in Figure \ref{figure:mimo}. Since there are $p$ constraints, we use the maximum of $p$ constraint values to show the constraint violation. Unlike the previous experiments, the results are not promising. Although the objective value of SLPMM is better than CSSCA, the constraint value in SLPMM can not decrease to zero. The possible reason is that, for nonconvex constraint, simply linearizing it and adding to the augmented Lagrangian function may not work. Other surrogate functions should be considered, which is out of the scope of this paper. We leave this as a future topic.
%\includegraphics[width=5.5cm]{./results_MIMO_exam1_time_obj_1_runs.eps}} &
%\caption{Comparison of algorithms for MIMO Transmit Signal Design.}
\section{Conclusion}\label{sec:conclusion}
We presented a hybrid of the stochastic approximation technique and the proximal method of multipliers. It is shown that the expected convergence rates and the large-deviation properties are comparable with those of existing related stochastic methods. On the other hand, the stepsizes of the proposed method are problem-independent. Numerical experiments also demonstrate its superiority over stochastic first-order methods. Thus, both the theoretical and the numerical results suggest that the proposed algorithm is efficient for solving convex stochastic programming with expectation constraints.
However, several valuable questions remain open. It is well known that the deterministic augmented Lagrangian method can achieve superlinear convergence. Therefore, the first question is whether the convergence rates can be improved to match the numerical performance and the rates in the deterministic setting. Secondly, it is worthwhile to consider an inexact method, in which the subproblem is solved inexactly.
Another interesting topic is how to use the techniques in this paper to deal with nonconvex stochastic optimization. The proposed algorithm in the current form is not applicable to solve nonconvex problems, such as chance constrained programs \citep{BSZ2021} and MIMO transmit signal design problem \citep{LLK2019}.
Finally, let us mention that the stochastic algorithms for stochastic optimization can be easily extended to solve online problems, and vice versa, see \citep{YMNeely2017} for instance. Hence, the proposed method can be slightly revised to solve the corresponding online problems.
\ACKNOWLEDGMENT{%
The authors would like to thank the anonymous reviewers and the associate
editor for the valuable comments and suggestions that
helped us to greatly improve the quality of the paper.
}
\bibliographystyle{informs2014}
\bibliography{ref.bib}
\end{document}
|
# $\delta\mathcal{N}$ formalism on the past light-cone
G. Fanizza, G. Marozzi and M. Medeiros
###### Abstract
We apply the gradient expansion approximation to the light-cone gauge,
obtaining a separate universe picture at non-linear order in perturbation
theory within this framework. Thereafter, we use it to generalize the
$\delta\mathcal{N}$ formalism in terms of light-cone perturbations. As a
consistency check, we demonstrate the conservation of the gauge invariant
curvature perturbation on uniform density hypersurface $\zeta$ at the
completely non-linear level. The approach studied provides a self-consistent
framework to connect at non-linear level quantities from the primordial
universe, such as $\zeta$, written in terms of the light-cone parameters, to
late time observables.
## 1 Introduction
Advances in cosmological observations have provided us with high-precision
methods to study the universe [1, 2, 3]. So far, linear cosmological
perturbation theory has been the main tool to describe the early universe,
particularly the primordial seeds that are believed to be produced by quantum
mechanical fluctuations during inflation. These fluctuations grow during the
quasi-exponential expansion epoch and freeze outside the horizon. Later on,
they re-enter the horizon during a power-law expansion epoch, giving rise to
the large-scale structure observed in the universe.
In order to link the gauge invariant quantity that characterizes such
primordial fluctuations with the observations, we need to have a good
understanding of their behaviour outside the horizon. An interesting example
is given by the primordial curvature perturbation on the uniform density
hypersurface $\zeta$, which is expected to be of order $10^{-5}$ at the last-
scattering surface and has been shown to be conserved outside the horizon,
both at the linear [4] and the non-linear level [5]. The first-order treatment
for the primordial fluctuations agrees with the observations of a nearly
Gaussian, scale-invariant power spectrum. Although non-linearities are
expected to be small, they are however unavoidable as a consequence of the
non-linear evolution of the perturbations. Detection of the related non-
Gaussianities can then provide important insights into early universe models,
such as the inflationary ones [6, 7].
The evolution of $\zeta$ is proportional to the non-adiabatic contribution (if
any) in the energy-momentum tensor, as shown in [4, 5]. In the linear
regime, $\zeta$ has been successfully calculated using the $\delta\mathcal{N}$
formalism [8, 9, 10, 11, 12, 13], which has been extended to the exact non-
linear level [5] by applying the first-order gradient expansion directly in
the equations of motion provided by the Arnowitt-Deser-Misner (ADM) formalism
[14]. The first-order gradient expansion, also known as the separate universe
(SU) scheme, describes the universe as a set of FLRW geometries with
independent equations of motion and is a good approximation in the regime of
large comoving wavelengths compared to the horizon [8, 9, 4, 5].
The great advantage of the SU scheme is that the equations of motion within
this approximation have the same form both for the background and perturbed
universe with the exception of the momentum constraint, which vanishes in the
background. As a consequence, one can obtain the non-linear field’s evolution
from the background one by imposing non-linear initial conditions [15].
In [16], this formalism has been generalized to include stochastic effects and
derive, within the framework of the stochastic approach [17] and its relation
with QFT [18, 19], non-perturbative correlation functions for single-field
slow-roll inflation. Further extensions have also studied ultra-slow-roll
inflation [20, 21, 22, 23], allowing for the investigation of primordial
black-hole production [20, 21, 22, 23, 24, 25]. The $\delta\mathcal{N}$
formalism has also been extended to the case when cosmic shear is included to
describe the anisotropic expansion. In such a framework, the evolution of
gravitational waves has been explored both for the case of a Bianchi I
universe [26] and when couplings with external fields are present [27, 28].
A formalism that connects the picture of the primordial universe presented so
far to late-time observables would be highly desirable, especially if such a
connection can account for non-linearities. Since the Geodesic Light-Cone
(GLC) gauge [29] gives the chance of describing light-like observables in the
late universe exactly, such as the redshift and the distance-redshift relation
[29, 30, 31, 32, 33, 34, 35, 36], the galaxy number count [37, 38], the non-
linear corrections to the CMB spectra [39, 40], and also ultra-relativistic
particles [41], this is a natural framework to pursue the aforementioned
program.
Moreover, there has also been recent interest in the GLC gauge application to
the study of backreaction effects from the primordial universe [36, 42, 43,
44]. Although these are very interesting prospects, one still must face the
fact that the evolution of perturbations in the GLC gauge is quite involved
already at linear order (see [44] for an analytical treatment). An alternative
approach to this may be provided by numerical attempts, as done for instance
in [45] for the linearized evolved solution for the gravitational potential on
the past light-cone. In this manuscript, we take a different route by
providing simplified equations of motion using the SU approach on the past
light-cone.
The connection between the primordial origin of inhomogeneities and their
observations has to deal with the fact that the latter are done along our past
light-cone, whereas the primordial universe is usually described using spatial
hypersurfaces. During primordial epochs, these hypersurfaces are naturally
described in terms of uniform field slices. In fact, in a single-field
inflationary scenario, the inflaton is the only clock available. Therefore,
the natural slicing which describes the dynamical space-time evolution is the
one given by uniform inflaton hypersurfaces, which also fixes the time gauge
mode. Another interesting fixing for the time coordinate is the one describing
uniform density slices. This is an interesting fixing because it directly
translates the density perturbations into curvature perturbations, providing
the initial conditions for large-scale structure formation and allowing us to
use the $\delta\mathcal{N}$ formalism to connect current inhomogeneities to
the primordial ones. These two gauge fixings are usually called, respectively,
uniform field gauge (UFG) and uniform density gauge (UDG).
To make contact between these gauges and the GLC one, we recall that, although
the GLC time coordinate is fixed to the time measured by a free-falling
observer, a generalization of this gauge is provided in [44], the so-called
Light-Cone (LC) gauge. In this generalization, the time gauge choice is left
unspecified, allowing us to fix the lapse function for describing the uniform
field and uniform density slicing on the past light-cone. An alternative
approach could be to start from the cosmological perturbation theory on the
past light-cone developed in [36, 42], and then perform the necessary gauge
transformations. A description of the primordial universe in terms of the LC
gauge could be a promising framework to connect non-linearly the late universe
to the primordial one, given in terms of light-cone parameters.
Here, we will take a first step in this direction. In particular, throughout
this manuscript, we will discuss the gradient expansion as performed on the observer's
past light-cone, which allows us to obtain a SU picture and the
$\delta\mathcal{N}$ formalism in terms of light-cone perturbations. We will
provide this both at the fully non-perturbative level, using the LC gauge
[44], and, as a consistency check, at the linear level using the light-cone
perturbation theory [36, 42]. By considering the LC gauge as a non-linear ADM
decomposition (see [44] for more details), we will show that, unlike previous
literature, where the shift vector was a first-order term in the gradient
expansion [15, 5, 26], in the LC gauge the shift vector has to be taken into
account also for the background. This is an important difference, since in
this case the shift vector corresponds to the direction of propagation of the
photon, and it is used to take into account inhomogeneities along the photon
propagation direction. However, we will neglect the spatial derivatives of this
shift vector, since they correspond to light-cone distortion effects which are
expected to be negligible on large scales.
After such implementations, we will show that the SU picture can be realized
in the LC gauge (i.e., we will obtain evolution equations with the same form
for both perturbed and background universe). Furthermore, within the gradient
expansion approximation, we will verify at the fully non-linear level that the
curvature perturbation on uniform density slices $\zeta$ is a conserved
quantity (for adiabatic pressure) also when the light-like slicing of
spacetime is used. This is a sanity check that confirms how the SU picture can
be extended also to the case of the light-cone gauge.
In summary, we will present a novel approach for connecting the primordial
universe with observations along the past light-cone, by developing the
$\delta\mathcal{N}$ formalism in the LC gauge. We will verify that the LC
gauge allows for a non-linear description of the primordial universe in terms
of light-cone parameters, and that the SU picture can be realized in this
gauge by neglecting spatial derivatives of the shift vector.
The manuscript is organized as follows. In Sect. 2 we obtain a generic SU
description where we keep inhomogeneities along the geodesics in terms of an
ADM metric. In Sect. 3 we present the set of light-cone gauges used here and,
with a non-linear diffeomorphism, we show how the standard ADM formalism
relates with the LC gauge. Moreover, we provide the LC gauge fixing condition
in terms of the ADM variables up to non-linear order. Thereafter, we discuss
the SU formalism on the past light-cone, which is also presented for the GLC
gauge. Sect. 4 is devoted to the computation of the non-linear number of
e-folds and its relation to spacetime perturbations, which allow us to obtain
the non-linear scale factor in the LC. We then give a proof for the super-
horizon conservation of $\zeta$, at both linear and non-linear order in
perturbation theory, and first order in the gradient expansion, for adiabatic
fluids. Finally, we provide a generalization of the $\delta\mathcal{N}$
formalism in the LC gauge. In Sect. 5, our main conclusions are summarized and
discussed. In Appendix A we provide the linear $\delta\mathcal{N}$ formalism
as a consistency check of the obtained results.
## 2 Separate universe
Let us begin by introducing a systematic approximation scheme, which can be
used when the wavelength of the perturbations is larger than the physical
horizon. This approximation, widely known as the SU approach [8, 9, 4, 5],
consists of employing the already mentioned gradient expansion perturbative
scheme. This is based on the quantity $\epsilon\equiv k/(aH)$, rather than on
the amplitude of the perturbations. This quantity compares the comoving
wavenumber of a given mode $k/a$ with the expansion rate $H$. As an example,
within this approximation scheme, terms with one spatial derivative will be
first order111As an example in a flat space, under a Fourier transformation,
spatial derivatives give rise to terms proportional to $k$. Here we are
considering that for a quantity $Q$,
$\frac{1}{a}\partial_{i}Q\ll\partial_{t}Q\approx HQ$ [10]. in $\epsilon$. The
first order gradient expansion is known as the SU approach, since in this case
the equations of motion for a local patch of the perturbed space-time have the
same form of the FLRW background ones [15]. Thus, in this view the universe
can be described as a collection of FLRW geometries, each one locally
described by a different scale factor.
This approximation can be particularly interesting to study the super-horizon
evolution of the curvature perturbation $\zeta$ with a light-cone foliation of
the spacetime. In fact, the SU approach is used in [4, 5] to show the
conservation of $\zeta$ on super-horizon scales, for adiabatic pressure, at
linear and non-linear order in perturbation theory.
By applying the gradient expansion to the non-linear ADM formalism [15], one can
provide a SU scheme in the uniform curvature gauge (UCG). It has been shown
that the shift vector has a decaying evolution and, therefore, during the
exponential expansion of the universe it can be considered as a first-order
term in the gradient expansion. Moreover, by analyzing the consistency between
the Hamiltonian and momentum constraints, it has been shown that taking the
momentum constraints into account changes the results only by a decaying
solution [15], and that the additional information in the momentum constraints
is $\mathcal{O}(\epsilon^{3})$ in the gradient expansion. Thereby, for
super-horizon perturbations, the SU scheme is a good approximation.
In this manuscript, we will provide a SU picture for the LC [44] and GLC [29]
gauges. As we will see later, one difference with the previous works is that,
when we consider the LC gauge as an ADM decomposition, the shift vector does
not vanish, not even on the background (see, for instance [44]). In fact, in
the LC gauge, the shift vector describes the direction of observation. On the
other hand, we will neglect the divergence of the shift vector, which
describes the divergence of the direction of observation in the language of
$1+3$ formalism.
### 2.1 ADM formalism
In this section we provide the SU set of equations for generic perturbations.
Firstly, we introduce the ADM splitting and the $1+3$ evolution equations,
then we obtain general conditions which allow a SU evolution of the
perturbations. Thereafter, we show how also the LC gauge satisfies these
conditions. Starting with the ADM metric
$ds_{ADM}^{2}=-\mathcal{M}^{2}dt^{2}+f_{ij}\left(dx^{i}+N^{i}dt\right)\left(dx^{j}+N^{j}dt\right)\,,$
(2.1)
one can prove that, with a suitable choice of the coordinates,
$N^{i}=\mathcal{O}\left(\epsilon\right)$ and therefore
$\partial_{i}N^{i}=\mathcal{O}\left(\epsilon^{2}\right)$. This condition was
assumed in the references [5, 4], and it was proved in [15] considering the
UCG.
Let us now work with the ADM foliation of Eq. (2.1). Thanks to this
description of non-linear general perturbations, made on top of a FLRW
background, we will show how to recover a SU picture even if the shift vector
does not vanish in the background. It rather combines with the time derivative
to provide a derivative along the time-like motion.
The vector $n^{\mu}$ normal to the space-like hypersurfaces $t=const$ is given
by
$n^{\mu}=\frac{\partial^{\mu}t}{\left(-\partial_{\nu}t\partial^{\nu}t\right)^{\frac{1}{2}}}\,,$
(2.2)
which satisfies
$n_{\mu}=-\mathcal{M}\delta_{\mu}^{t}\,,\qquad
n^{\mu}\partial_{\mu}=\frac{1}{\mathcal{M}}\left(\partial_{t}-N^{i}\partial_{i}\right)\,,$
(2.3)
with correspondent induced metric given by
$f_{\mu\nu}=g_{\mu\nu}+n_{\mu}n_{\nu}\,.$ (2.4)
This metric can be used to define the following induced quantities
$E\equiv n^{\mu}n^{\nu}T_{\mu\nu}\,,\quad\quad
p^{\mu}\equiv-f^{\mu\nu}n^{\rho}T_{\nu\rho}\,,\quad\quad S^{\mu\nu}\equiv
f^{\mu\rho}f^{\nu\sigma}T_{\rho\sigma}\,,$ (2.5)
where $E$ is the energy, $p^{\mu}$ is the energy flux (or momentum) and
$S^{\mu\nu}$ is the stress tensor. Then, the standard energy-momentum tensor can
be written as
$T_{\mu\nu}=\rho n_{\mu}n_{\nu}+p_{\mu}n_{\nu}+p_{\nu}n_{\mu}+S_{\mu\nu}\,,$
(2.6)
where $\rho$ and $p=\frac{1}{3}f^{\mu\nu}S_{\mu\nu}$ are respectively the
energy density and the pressure of the given fluid.
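As a quick illustration (a standard check, added here for clarity): for a
perfect fluid comoving with $n^{\mu}$ one has
$E=\rho\,,\qquad\qquad p^{\mu}=0\,,\qquad\qquad S_{\mu\nu}=p\,f_{\mu\nu}\,,$
so that Eq. (2.6) reduces to the familiar form
$T_{\mu\nu}=\left(\rho+p\right)n_{\mu}n_{\nu}+p\,g_{\mu\nu}$.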
Now we have everything that we need to present the decomposed Einstein
equations. These are developed in full details in [44], where the authors have
specialized to the LC gauge as a $1+1+2$ ADM foliation. As a starting point,
we can extract the energy (time-time) and momentum (time-space) constraints
respectively given by (from now on, for a generic tensor $C_{ij}$, we will
denote its trace by $C\equiv f^{ij}C_{ij}$)
${}^{(3)}R+\Theta_{n}^{2}-K_{ij}K^{ij}=\,2E\,,\qquad\qquad-
D_{j}K_{i}^{j}+D_{i}\Theta_{n}=\,p_{i}\,,$ (2.7)
where we defined the extrinsic curvature as
$K_{\mu\nu}\equiv\nabla_{(\mu}n_{\nu)}$. We can then also define the expansion
rate $\Theta_{n}$ as
$\Theta_{n}\equiv f^{\mu\nu}K_{\mu\nu}\,.$ (2.8)
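For instance (a check we spell out; it is not needed for the derivation), on a
flat FLRW background with $\mathcal{M}=1$, $N^{i}=0$ and
$f_{ij}=a^{2}\delta_{ij}$, a direct computation of
$K_{ij}=\nabla_{(i}n_{j)}$ gives $K_{ij}=a\,\partial_{t}a\,\delta_{ij}$, and hence
$\Theta_{n}=f^{ij}K_{ij}=3\,\frac{\partial_{t}a}{a}=3H\,,$
i.e. $\Theta_{n}$ reduces to three times the Hubble rate.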
The evolution of the induced metric $f_{ij}$ and of $K_{ij}$ is then obtained
from the space-space decomposition
$\displaystyle\left(\partial_{t}-\mathcal{L}_{N}\right)f_{ij}$
$\displaystyle=\,2\mathcal{M}K_{ij}\,,$
$\displaystyle\left(\partial_{t}-\mathcal{L}_{N}\right)K_{ij}$
$\displaystyle=\,\mathcal{M}\left[2K_{ik}K_{j}^{k}-K_{ij}\Theta_{n}-{}^{(3)}R_{ij}+S_{ij}-\frac{1}{2}f_{ij}\left(S-E\right)\right]+D_{i}D_{j}\mathcal{M}\,,$
(2.9)
where $\mathcal{L}_{N}$ is the Lie derivative along the field $N^{i}$.
Finally, the equations for the matter sector $\nabla_{\mu}T^{\mu\nu}=0$ are
given by
$\displaystyle\left(\partial_{t}-\mathcal{L}_{N}\right)E$
$\displaystyle=\,-D_{i}\left(\mathcal{M}p^{i}\right)-\mathcal{M}\left(\Theta_{n}E+K_{ij}S^{ij}\right)\,,$
$\displaystyle\left(\partial_{t}-\mathcal{L}_{N}\right)p_{i}$
$\displaystyle=\,-D_{j}\left(S_{i}^{j}\mathcal{M}\right)-\mathcal{M}\Theta_{n}p_{i}-ED_{i}\mathcal{M}\,.$
(2.10)
Let us now follow the decomposition of [26] by extracting the shape-preserving
volume expansion out of the spatial metric. First, we make a conformal re-
scaling of $f_{ij}$ to
$f_{ij}\equiv e^{2\Xi}\hat{f}_{ij}\,,$ (2.11)
by requiring that the determinant $\hat{f}=\text{det}[\hat{f}_{ij}]=1$. In
this way, we can interpret $e^{\Xi}$ as the local effective scale factor.
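Note that, since $\det[\hat{f}_{ij}]=1$, the decomposition (2.11) implies
$\det[f_{ij}]=e^{6\Xi}$, i.e. (an immediate consequence of the definitions,
spelled out here for later convenience)
$\Xi=\frac{1}{6}\ln\det\left[f_{ij}\right]\,,$
so that on a flat FLRW background, where $f_{ij}=a^{2}\delta_{ij}$, one
recovers $e^{\Xi}=a$, consistently with this interpretation.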
Then, we re-scale accordingly also the other quantities
$\displaystyle\hat{A}_{ij}=$
$\displaystyle\,e^{-2\Xi}\left(K_{ij}-\frac{1}{3}f_{ij}\Theta_{n}\right)\,,$
$\,{}^{(3)}\hat{\mathcal{R}}_{ij}=$
$\displaystyle\,e^{-2\Xi}\left({}^{(3)}R_{ij}-\frac{1}{3}\,^{(3)}R\,f_{ij}\right)\,,$
$\displaystyle\hat{\mathcal{S}}_{ij}=$
$\displaystyle\,e^{-2\Xi}\left(S_{ij}-\frac{1}{3}f_{ij}S\right)\,.$ (2.12)
Before applying the decomposition (2.12) to Eqs. (2.9) and (2.10), we define
$\mathcal{M}\frac{d}{d\widetilde{\lambda}}\equiv\left(\partial_{t}-\mathcal{L}_{N}\right)$
and compute the trace of the evolution of $A_{ij}\equiv e^{2\Xi}\hat{A}_{ij}$
$\displaystyle f^{ij}\left(\partial_{t}-\mathcal{L}_{N}\right)A_{ij}=$
$\displaystyle\frac{1}{\mathcal{M}}\left[f^{ij}f_{il}\frac{d}{d\widetilde{\lambda}}\left(f_{jk}A^{lk}\right)+f^{ij}f_{jk}A^{lk}\frac{d}{d\widetilde{\lambda}}f_{il}\right]$
$\displaystyle=$
$\displaystyle\frac{1}{\mathcal{M}}\frac{d}{d\widetilde{\lambda}}\left(f_{lk}A^{lk}\right)+f^{ij}f_{jk}A^{lk}2K_{il}$
$\displaystyle=$ $\displaystyle 2A_{ij}A^{ij}=2\hat{A}_{ij}\hat{A}^{ij}\,,$
(2.13)
where we have used the first of Eqs. (2.9) from the first to the second line,
and the fact that $A_{ij}$ is trace-less from the second to the last line.
Now we can apply the decomposition (2.12) to Eqs. (2.9), using also Eqs. (2.7)
and (2.13), to obtain
$\displaystyle\frac{d\Theta_{n}}{d\widetilde{\lambda}}=$
$\displaystyle-\frac{\Theta_{n}^{2}}{3}-\hat{A}_{ij}\hat{A}^{ij}-\frac{1}{2}\left(E+S\right)+\frac{1}{\mathcal{M}}D^{2}\mathcal{M}\,,$
$\displaystyle\frac{d\Xi}{d\widetilde{\lambda}}=$
$\displaystyle\frac{\Theta_{n}}{3}\,,$
$\displaystyle\frac{d\hat{f}_{ij}}{d\widetilde{\lambda}}=$ $\displaystyle
2\hat{A}_{ij}\,.$ (2.14)
Then the evolution of the trace-less quantity $\hat{A}_{ij}$ is given by
$\frac{d\hat{A}_{ij}}{d\widetilde{\lambda}}=-\frac{1}{3}\Theta_{n}\hat{A}_{ij}+2e^{-2\Xi}A_{ik}A_{j}^{k}+\hat{\mathcal{S}}_{ij}-\,^{(3)}\hat{\mathcal{R}}_{ij}+\frac{1}{\mathcal{M}}\left(D_{i}D_{j}-\frac{1}{3}f_{ij}D^{2}\right)\mathcal{M}\,.$
(2.15)
At this point, we will apply our gradient expansion scheme without any gauge
fixing. Hence, considering that
$n^{\mu}=\frac{1}{\mathcal{M}}\left(1,-N^{i}\right)$, for a generic tensor
$l_{ij}$, it holds that
$\frac{1}{\mathcal{M}}\left(\partial_{t}-\mathcal{L}_{N}\right)l_{ij}=n^{\mu}\partial_{\mu}l_{ij}+l_{ik}\partial_{j}N^{k}+l_{jk}\partial_{i}N^{k}=\frac{d}{d\lambda}l_{ij}+\mathcal{O}\left(\epsilon^{2}\right)\,,$
(2.16)
where we define $n^{\mu}\partial_{\mu}\equiv\frac{d}{d\lambda}$ and then we
have
$\frac{d}{d\widetilde{\lambda}}=\frac{d}{d\lambda}+\mathcal{O}\left(\epsilon^{2}\right)$.
This corresponds to neglecting terms proportional to
$\partial_{i}N^{j}=\mathcal{O}\left(\epsilon^{2}\right)$ in Eq. (2.16), as
also done in [15, 26]. This is our main assumption and comes from the fact
that, due to the spatial derivatives, the momentum constraint of Eq. (2.7) is
a first-order relation, i.e. $p_{i}=\mathcal{O}\left(\epsilon\right)$. Hence,
using Eq. (2.5), we get
$\displaystyle p_{i}=-\frac{1}{\mathcal{M}}\left(T_{it}-N^{j}T_{ij}\right)\,.$
(2.17)
Eq. (2.17) can be satisfied either by the stronger condition
$\displaystyle T_{it}\sim N^{i}=\mathcal{O}\left(\epsilon\right)\,,$ (2.18)
or by the weaker condition that only the combination on its r.h.s. is
$\mathcal{O}(\epsilon)$. Throughout this paper, we will adopt the stronger
condition to justify our claim that
$\partial_{i}N^{j}=\mathcal{O}(\epsilon^{2})$, in agreement with [15, 5, 4,
26].
Moreover, just as done in some previous works [15, 5, 4], we will also neglect
${}^{(3)}R_{ij}\thicksim
R\thicksim\hat{\mathcal{S}}_{ij}=\mathcal{O}\left(\epsilon^{2}\right)$ since
all of these terms contain double spatial derivatives. For what concerns the
anisotropic stress $\hat{\mathcal{S}}_{ij}$, this is given by combinations of
double spatial derivatives acting on the scalar fields in the matter sector.
The condition $\hat{\mathcal{S}}_{ij}=\mathcal{O}(\epsilon^{2})$ was relaxed
in [26] only for Bianchi geometries and in [27, 28] due to the presence of
gauge fields.
With these considerations, we can again decompose the metric evolution
provided by the first of Eqs. (2.9) thanks to Eqs. (2.12). Hence, at first order
in the gradient expansion, we obtain (note that, although Eqs. (2.19) are
very similar to Eqs. (2.14), after the gradient expansion we have replaced
$\frac{d}{d\tilde{\lambda}}=\frac{d}{d\lambda}+\mathcal{O}(\epsilon^{2})$)
$\frac{d\Xi}{d\lambda}=\frac{\Theta_{n}}{3}\,,\qquad\qquad\frac{d\hat{f}_{ij}}{d\lambda}=2\hat{A}_{ij}\,.$
(2.19)
Moreover, we can prove that $\hat{A}_{ij}$ is at least a second order term in
the gradient expansion at every order in perturbation theory. In fact,
following [15], from Eq. (2.15) we get
$\frac{d}{d\lambda}\hat{A}_{ij}=-\frac{1}{3}\Theta_{n}\hat{A}_{ij}+2\hat{A}_{ik}\hat{A}_{j}^{k}+\mathcal{O}\left(\epsilon^{2}\right)\,,$
(2.20)
and we choose a coordinate system such that $A_{ij}$ vanishes on the
background (this is a quite general condition for isotropic spaces and, as we
will show later, it is also the case for an isotropic LC background). Hence,
at $\mathcal{O}(\delta)$ in perturbation theory, we get that
$\hat{A}_{ik}\hat{A}_{j}^{k}\sim\mathcal{O}(\delta^{2})$ and then Eq. (2.20),
together with the first of Eqs. (2.19), becomes
$\frac{d}{d\lambda}\hat{A}_{ij}=-\frac{d\Xi}{d\lambda}\hat{A}_{ij}+\mathcal{O}\left(\delta^{2},\epsilon^{2}\right)\,.$
(2.21)
The latter equation is clearly solved by
$\hat{A}_{ij}\propto e^{-\Xi}\,,$ (2.22)
and proves that the $\mathcal{O}(1)$ term in the gradient expansion of
$\hat{A}_{ij}$ decays when $\Xi$ grows in time. Then it can be neglected.
Therefore, we obtain that $\hat{A}_{ij}$ is at least first order in
$\epsilon$. As a consequence, $\hat{A}_{ik}\hat{A}_{j}^{k}$ is not only second
order in the $\delta$ expansion but also at least
$\mathcal{O}\left(\epsilon^{2}\right)$. Thanks to this result, the proof can
be repeated iteratively to any $n$-th order in perturbation theory, leading
to
$\frac{d}{d\lambda}\hat{A}_{ij}=-\frac{d\Xi}{d\lambda}\hat{A}_{ij}+\mathcal{O}\left(\delta^{n+1},\epsilon^{2}\right)\,.$
(2.23)
The solution in Eq. (2.22) also solves Eq. (2.23), which proves that the
$\mathcal{O}(\epsilon)$ part of $\hat{A}_{ij}$ is decaying as well and can
be neglected. We have thus proven our initial claim that $\hat{A}_{ij}$ is at
least of order $\epsilon^{2}$. Hence, considering the evolution of the spatial
metric in Eqs. (2.19), we have that
$\frac{d}{d\lambda}\hat{f}_{ij}=\mathcal{O}\left(\epsilon^{2}\right)\,,$
(2.24)
at any order in perturbation theory.
The above analysis was performed in [15], also leading to
$N^{i}=\mathcal{O}\left(\epsilon\right)$ when $N^{i}$ vanishes on the
background. For the sake of clarity, we underline that in [15] the UCG has
been fixed and then $\hat{f}_{ij}$ corresponds to the tensor modes. This also
shows that the evolution of the tensor modes can be neglected at linear order
in $\epsilon$.
Finally, our complete set of equations to order
$\mathcal{O}\left(\delta^{n+1},\epsilon^{2}\right)$ is given by the energy and
momentum constraints
$E=\frac{\Theta_{n}^{2}}{3}\,,\qquad\qquad p_{i}=\frac{2}{3}D_{i}\Theta_{n}\,,$ (2.25)
with their respective evolution equations given by
$\frac{dE}{d\lambda}=-\Theta_{n}\left(E+\frac{1}{3}S\right)\,,\qquad\qquad\frac{dp_{i}}{d\lambda}=-\frac{1}{3\mathcal{M}}D_{i}\left(\mathcal{M}S\right)-\Theta_{n}p_{i}\,.$
(2.26)
In order to complete our set of SU equations, we also need the decomposed
spatial metric evolution given by Eqs. (2.19) and (2.24). Moreover, the
expansion rate evolution is given by
$\displaystyle\frac{d\Theta_{n}}{d\lambda}=$
$\displaystyle-\frac{1}{3}\Theta_{n}^{2}-\frac{1}{2}\left(S+E\right)\,.$
(2.27)
As one can easily see, Eqs. (2.19) and (2.24)-(2.27), valid at first order in
the gradient expansion and to all orders in perturbation theory, exactly
correspond to the homogeneous and isotropic background equations if one
neglects the momentum constraint. This proves that the condition
$\left(\partial_{t}-\mathcal{L}_{N}\right)f_{ij}=\frac{1}{\mathcal{M}}\frac{d}{d\lambda}f_{ij}+\mathcal{O}\left(\delta^{n+1},\,\epsilon^{2}\right)\,,$
(2.28)
together with the fact that $\partial_{i}N^{i},\,\hat{\mathcal{S}}_{ij},\,R_{ij}$ and
$R$ are $\mathcal{O}\left(\delta^{n+1},\,\epsilon^{2}\right)$, reproduces the
SU picture and matches previous works [15, 5, 4] if $\lambda=t$ and
$N^{i}\partial_{i}=\mathcal{O}(\epsilon^{2})$.
In the next section, we will specialize this construction to non-linear LC
perturbations on top of a FLRW background. Thanks to the freedom of the
choice of the lapse function in the LC gauge, we will then provide a general
SU formalism. Furthermore, within the synchronous fixing of the lapse
function, our formalism will be extended also to the GLC gauge.
## 3 Light-Cone gauge
Let us now introduce the LC gauge [44]. This is a generalization of the GLC
gauge [29], where the lapse function is left unfixed and is built as a
foliation of the spacetime thanks to a set of four coordinates adapted to the
observed past light-cone. In particular, the proper time of a generic observer
is described by the coordinate $t$. This corresponds to the proper-time of a
free-falling observer when the GLC fixing of the lapse function occurs. The
coordinates $w$ and $\theta^{a}$ satisfy the same properties as in the GLC
gauge, i.e. $w$ describes the observer’s past light-cone and
$\theta^{a}=const$ describes the light-like geodesics. Given this, the non-
linear line element is [44]
$ds_{LC}^{2}=\Upsilon^{2}dw^{2}-2\mathcal{M}\Upsilon
dwdt+\gamma_{ab}\left(d\theta^{a}-U^{a}dw\right)\left(d\theta^{b}-U^{b}dw\right)\,.$
(3.1)
In this case, the vector $n^{\mu}$ in Eq. (2.2) is given by
$n^{\mu}=\left(\frac{1}{\mathcal{M}},\,\frac{1}{\Upsilon},\,\frac{U^{a}}{\Upsilon}\right)\,.$
(3.2)
The advantage of the metric (3.1) is that it simplifies the description of
light-like signals. For instance, the light-like geodesics are exactly solved
by $k_{\mu}=-\omega\delta_{\mu}^{w}$, where $k_{\mu}$ is the four-momentum of
the photon and $\omega$ is its physical frequency. Moreover, for the GLC gauge
where $\mathcal{M}=1$, also the time-like geodesic is exactly solved by
$u_{\mu}=-\partial_{\mu}t$. In this case $u_{\mu}=n_{\mu}$ is the four-
velocity of the geodesic observer and is perpendicular to the three-
dimensional hypersurfaces of $t=const$. This particular choice simplifies the
description of late-time cosmological observables and allows a completely non-
linear description of such observables as a factorization of the metric
entries [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40]. As relevant
examples, the expressions for the cosmological redshift and the angular
distance are given, in an exact way and for an arbitrary geometry, directly as
[29, 33]
$1+z=\frac{\left(u_{\mu}k^{\mu}\right)_{s}}{\left(u_{\nu}k^{\nu}\right)_{o}}=\frac{\Upsilon_{o}}{\Upsilon_{s}}\,,\qquad\qquad
d^{2}_{A}=\frac{\sqrt{\gamma}}{\left(\frac{\text{det}\partial_{\tau}\gamma_{ab}}{4\sqrt{\gamma}}\right)_{o}}\,,$
(3.3)
where $z$ is the redshift of the source, the subscripts $s$ and $o$ stand for
quantities evaluated at the source and observer positions, and $\gamma$ is the
determinant of $\gamma_{ab}$.
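As a quick check (added for the reader's convenience), on a FLRW background
one has $\bar{\Upsilon}=a$, so the first of Eqs. (3.3) immediately reduces to
the familiar relation
$1+z=\frac{a_{o}}{a_{s}}\,.$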
### 3.1 LC gauge shift vector
Here, we will use $N_{LC}^{i}$ (instead of $N^{i}$) to describe the shift
vector of the LC gauge, thus avoiding confusion when we relate $N_{LC}^{i}$
to the standard $N^{i}$; for the LC metric, Latin indices will always refer to
the coordinates $w$ and $\theta^{a}$. Hereafter, we will provide a SU picture
allowing for different lapse function fixings within the LC gauge [44]. This
is a crucial step to obtain the $\delta\mathcal{N}$ formalism on the past
light-cone.
In [44] it was shown that the non-linear LC gauge can be interpreted as a $1+1+2$
ADM decomposition with coordinates $x^{\mu}=\left(t,w,\theta^{a}\right)$. With
this proviso, the shift vector for the first $1+3$ decomposition is given by
$N_{LC}^{i}=-\mathcal{M}\left(\frac{1}{\Upsilon},\frac{U^{a}}{\Upsilon}\right)\,,\qquad\qquad
N^{LC}_{j}=-\Upsilon\delta_{j}^{w}\,.$ (3.4)
The shift vector $N^{i}_{LC}$ is orthogonal to the surfaces at constant $t$
and $w$. Hence, if we recall that the photon four-
momentum in the LC coordinates is
$k^{\mu}=\omega\mathcal{M}^{-1}\Upsilon^{-1}\delta^{\mu}_{t}$, within the 1+3
decomposition we can write the shift vector as
$N^{i}_{LC}=\frac{k^{i}}{\omega}-n^{i}\,.$ (3.5)
Hence, since $n^{\mu}$ is a time-like vector, $N^{i}_{LC}$ can be interpreted
as the space-like component of the propagation direction of an incoming photon
(see [46]). To completely fix the LC gauge, we still have to impose three
conditions, given by
$f_{ww}=\Upsilon^{2}+U^{2}\,,\qquad\qquad f_{wa}=-U_{a}\,,\qquad\qquad
f_{ab}=\gamma_{ab}\,.$ (3.6)
As one can see from Eqs. (3.4), $N_{LC}^{w}=-\frac{1}{a}$ at the background
level, so it does not vanish when $\epsilon\rightarrow 0$. We then have that
$N_{LC}^{w}=\mathcal{O}\left(1\right)$. With this choice of coordinates,
however, we will show that the condition
$\partial_{i}N_{LC}^{i}=\mathcal{O}\left(\epsilon^{2}\right)$ holds. In order
to do so, we first perform a finite background coordinate transformation on
the metric in Eq. (2.1), from the $x^{i}=\left(r,\theta^{a}\right)$
coordinates to the light-cone ones $y^{i}=\left(w,\theta^{a}\right)$, given by
$dr=dw-\frac{dt}{a}\,,\qquad\qquad d\theta^{a}=d\theta^{a}\,,\qquad\qquad
dt=dt\,,$ (3.7)
in order to relate $N^{i}$ to $N_{LC}^{i}$. A direct computation for the
contravariant components of $N^{i}$ returns
$N^{r}=\frac{1}{a}-\frac{\mathcal{M}}{\Upsilon}\,,\qquad\qquad
N^{a}=-\frac{U^{a}}{a}\left(1-\frac{1}{a}+\frac{\mathcal{M}}{\Upsilon}\right)\,,$
(3.8)
or, equivalently, in a covariant form
$N_{r}=-\mathcal{M}\Upsilon+\frac{\Upsilon^{2}+U^{2}}{a}\,,$ (3.9)
and
$N_{a}=-\frac{U_{a}}{a}\,.$ (3.10)
Finally, using Eqs. (3.4) and (3.8), the gradient expansion condition of Eq.
(2.18) given by $N^{i}=\mathcal{O}(\epsilon)$ returns
$\partial_{i}N^{i}=\partial_{i}N_{LC}^{i}=\mathcal{O}\left(\epsilon^{2}\right)$.
Within the gradient expansion, we then have
$\displaystyle\partial_{r}N^{r}=$
$\displaystyle-\partial_{w}\left(\frac{\mathcal{M}}{\Upsilon}\right)=\mathcal{O}\left(\epsilon^{2}\right)\,,$
$\displaystyle\partial_{a}N^{a}=$
$\displaystyle-\frac{1}{a}\left(1-N^{r}\right)\partial_{a}U^{a}+U^{a}\partial_{a}N^{r}=\mathcal{O}\left(\epsilon^{2}\right)\,,$
(3.11)
which indeed show that
$\partial_{w}\Upsilon^{-1}\sim\partial_{w}\mathcal{M}\sim\partial_{a}U^{a}=\mathcal{O}\left(\epsilon^{2}\right)$.
This comes from the fact that both $\mathcal{M}$ and $\Upsilon$ have
background counterparts, therefore, both $\partial_{w}\Upsilon^{-1}$ and
$a^{-1}\partial_{w}\mathcal{M}$ are of order $\epsilon^{2}$. The condition
$\partial_{a}U^{a}=\mathcal{O}(\epsilon^{2})$ is obtained by using the fact
that $N^{r}=\mathcal{O}(\epsilon)$ in the second of Eqs. (3.11).
### 3.2 Separate Light-Cones
As shown in Sect. 2.1, one can still obtain a SU picture when the shift
vector combines with the time derivative into a derivative along the geodesics
and only its spatial derivatives are neglected. This is the case for
$\partial_{i}N^{i}$ in Eq. (2.18) and for $\partial_{i}N_{LC}^{i}$ in Eqs.
(3.11). Additionally, we need the trace-less part of the extrinsic curvature
$\hat{A}_{ij}$ to be negligible in order to obtain the SU scheme for the LC
metric.
The condition for the shift vector is given by
$\partial_{i}N_{LC}^{i}=\partial_{w}\left(\frac{\mathcal{M}}{\Upsilon}\right)+\partial_{a}\left(\frac{\mathcal{M}U^{a}}{\Upsilon}\right)=\mathcal{O}\left(\epsilon^{2}\right)\,,$
(3.12)
which, with Eqs. (2.9), implies
$\frac{d}{d\lambda}f_{ij}=2K_{ij}+\mathcal{O}\left(\epsilon^{2}\right)\,,$
(3.13)
where in $\mathcal{L}_{N}f_{ij}$ we have neglected $\partial_{i}N^{i}$ but not
$N^{i}\partial_{i}f_{jk}$. In fact, the latter combines with
$\partial_{t}f_{ij}$ to reconstruct $\frac{d}{d\lambda}f_{ij}$, following the
general prescription given in Eq. (2.16). One may note the similarity between
Eqs. (3.13) and (2.9): Eq. (2.9) is the version of Eq. (3.13) which is
non-perturbative in the gradient expansion, whereas in Eq. (3.13) we have
expanded the parameter
$\frac{d}{d\tilde{\lambda}}=\frac{d}{d\lambda}+\mathcal{O}(\epsilon^{2})$.
Using Eqs. (3.6) at the background level, i.e.
$f_{ww}=a^{2}\,,\qquad\qquad f_{wa}=0\,,\qquad\qquad
f_{ab}=a^{2}r^{2}\bar{q}_{ab}\,,$ (3.14)
one gets that $df_{ij}/d\lambda=2Hf_{ij}$, where $H$ is the background
expansion rate defined as $H\equiv\bar{\Theta}_{n}/3$, $\bar{\Theta}_{n}$ is
the trace of the extrinsic curvature on the background and
$\bar{q}_{ab}=\text{diag}\left(1,\sin^{2}\theta\right)$. Thus, from Eq. (3.13), we
also see that $\hat{A}_{ij}$ vanishes on the background. Following the same
procedure adopted in Eqs. (2.21) and (2.23), we get that
$\hat{A}_{ij}=\mathcal{O}\left(\epsilon^{2}\right)$ also when the LC gauge is
fixed.
Now, by taking the trace of Eq. (3.13), we obtain
$\displaystyle\Theta_{n}$
$\displaystyle=\,\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}n^{\mu}\right)$
$\displaystyle=\,\frac{1}{\mathcal{M}\Upsilon\sqrt{\gamma}}\frac{d}{d\lambda}\left(\mathcal{M}\Upsilon\sqrt{\gamma}\right)+\partial_{\mu}n^{\mu}$
$\displaystyle=\,\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d}{d\lambda}\left(\Upsilon\sqrt{\gamma}\right)+\frac{1}{\mathcal{M}}\frac{d}{d\lambda}\mathcal{M}+\partial_{\mu}n^{\mu}$
$\displaystyle=\,\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d}{d\lambda}\left(\Upsilon\sqrt{\gamma}\right)+\frac{1}{\mathcal{M}}\left[\partial_{w}\left(\frac{\mathcal{M}}{\Upsilon}\right)+\partial_{a}\left(\frac{\mathcal{M}U^{a}}{\Upsilon}\right)\right]$
$\displaystyle=\,\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d}{d\lambda}\left(\Upsilon\sqrt{\gamma}\right)-\frac{1}{\mathcal{M}}\partial_{i}N_{LC}^{i}\,.$
(3.15)
Thanks to this last equation, within the fixing $\mathcal{M}=1$, we realize
that the difference between $\Theta_{u}\equiv\nabla_{\mu}u^{\mu}$ and
$\Theta_{n}$ is of order $\epsilon^{2}$. We also have, from Eq. (2.19), where
we neglect $\partial_{i}N^{i}_{LC}$ in $\Theta_{n}$,
$\frac{d\,\Xi}{d\lambda}=\frac{1}{3\Upsilon\sqrt{\gamma}}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}+\mathcal{O}\left(\epsilon^{2}\right)\,,$
(3.16)
which preserves the background form.
Considering now the homogeneous and isotropic LC background, namely
$\bar{\Upsilon}\sqrt{\bar{\gamma}}=a^{3}r^{2}$, and also identifying
$\bar{E}\equiv\bar{\rho}$ and $\frac{1}{3}\bar{S}\equiv\bar{p}$, Eqs. (2.25)
return
$\displaystyle\frac{\bar{\Theta}_{n}^{2}}{3}=3H^{2}=3\left(\frac{\partial_{t}a}{a}\right)^{2}=\bar{\rho}\,,\qquad\qquad\text{(background)}$
$\displaystyle\frac{{\Theta_{n}}^{2}}{3}=\frac{1}{3}\left[\frac{1}{(\Upsilon\sqrt{\gamma})}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}\right]^{2}=\rho+\mathcal{O}\left(\delta^{n},\epsilon^{2}\right)\,.\qquad\qquad\text{(non-perturbative)}$ (3.17)
Moreover, Eqs. (2.26) and (2.27) give at the background level
$\partial_{t}\bar{\rho}=\,-3H\left(\bar{\rho}+\bar{p}\right)\,,\qquad\qquad\partial_{t}\bar{\Xi}=\,H\,,\qquad\qquad\partial_{t}H=\,-H^{2}-\frac{1}{6}\left(\bar{\rho}+3\bar{p}\right)\,.$
(3.18)
Here we remark that Eqs. (3.17) mean that the non-linear LC perturbations in a
FLRW universe, at first order in the gradient expansion, do evolve as a set of
glued background universes with a different set of $\left(a,\,\rho,\,p\right)$
in each patch. Within this picture, $\Upsilon\sqrt{\gamma}$ can then be linked
to the effective local scale factor. To complete the set of equations that
have the same form for the background (last of Eqs. (3.18)) and the perturbed
universe, we also have, from Eq. (2.27),
$\frac{d}{d\lambda}\left[\frac{1}{(\Upsilon\sqrt{\gamma})}\frac{d(\Upsilon\sqrt{\gamma})}{d\lambda}\right]=-\frac{1}{3}\left[\frac{1}{(\Upsilon\sqrt{\gamma})}\frac{d(\Upsilon\sqrt{\gamma})}{d\lambda}\right]^{2}-\frac{1}{2}\left(3p+\rho\right)+\mathcal{O}\left(\delta^{n},\epsilon^{2}\right)\,.$
(3.19)
Therefore, the SU scheme holds, with the universe evolving as a set of
homogeneous and isotropic LC backgrounds, where $\lambda$ provides the
evolution of inhomogeneities along $n^{\mu}$. So far we have provided a
consistent SU on the past light-cone in terms of LC gauge entries, which is a
fundamental step to provide the $\delta\mathcal{N}$ formalism on the past
light-cone in the next Sect. 4.
### 3.3 The Geodesic Light-Cone gauge
Let us now apply the SU scheme described in the previous subsection to the
case of the GLC gauge given by Eq. (3.1) with $\mathcal{M}=1$, and show how it
simplifies the evolution of the density perturbations on the past light-cone.
This gauge automatically provides $\nabla_{\mu}u^{\mu}=\nabla_{\mu}n^{\mu}$;
one can then relate the expansion of the 3D hypersurfaces orthogonal to
$n^{\mu}$ to the matter content described in terms of $u^{\mu}$.
Hence, recalling that the comoving four-velocity is given by
$u^{\mu}=\left(1,\Upsilon^{-1},\Upsilon^{-1}U^{a}\right)$ [46, 29], we
have that the expansion of the 3D hypersurfaces is
$\Theta_{u}=\nabla_{\mu}u^{\mu}=\frac{\partial_{t}\Upsilon}{\Upsilon}+\frac{\gamma^{ab}\partial_{t}\gamma_{ab}}{2}+\frac{\gamma^{ab}}{2\Upsilon}\partial_{w}\gamma_{ab}+\frac{1}{\Upsilon}\partial_{a}U^{a}+\frac{U^{a}\gamma^{bc}}{2\Upsilon}\partial_{a}\gamma_{bc}\,,$
(3.20)
which can be re-written in a more suitable form as
$\Theta_{u}=\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}+\partial_{\mu}u^{\mu}\,,$
(3.21)
where $\frac{d}{d\lambda}\equiv u^{\mu}\partial_{\mu}$ accounts for
inhomogeneities along the geodesics and
$\partial_{\mu}u^{\mu}=\partial_{i}N^{i}$ is $\mathcal{O}(\epsilon^{2})$, as
shown in Eqs. (3.11).
An interesting feature of Eq. (3.21) is that the first term contributes both
at the background and at the perturbative level, whereas the last term
contributes only at the perturbative level. We thus obtain a separate universe
description using Eq. (3.21), as intended.
In order to prove the conservation of $\zeta$ when the pressure is adiabatic,
we need to analyze the energy-momentum conservation in the GLC gauge for the
case of a perfect fluid. Starting from
$T_{\mu\nu}=\left(\rho+p\right)u_{\mu}u_{\nu}+g_{\mu\nu}p\,,$ (3.22)
the conservation law along the direction of $u^{\nu}$, i.e.
$u^{\nu}\nabla_{\mu}T_{\,\nu}^{\mu}=0$, exactly returns
$\frac{d\rho}{d\lambda}=-\left(\rho+p\right)\Theta_{u}\,.$ (3.23)
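For completeness, we spell out the (standard) intermediate step: contracting
$\nabla_{\mu}T^{\mu}_{\,\nu}=0$ with $u^{\nu}$ and using $u_{\nu}u^{\nu}=-1$
together with $u_{\nu}u^{\mu}\nabla_{\mu}u^{\nu}=0$, one finds
$u^{\nu}\nabla_{\mu}T^{\mu}_{\,\nu}=-\nabla_{\mu}\left[\left(\rho+p\right)u^{\mu}\right]+\frac{dp}{d\lambda}=-\left(\rho+p\right)\Theta_{u}-\frac{d\rho}{d\lambda}\,,$
whose vanishing yields Eq. (3.23).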
Hence, by using Eq. (3.21), we have
$\displaystyle\frac{d\rho}{d\lambda}=-\left(\rho+p\right)\left[\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}+\partial_{\mu}u^{\mu}\right]\,.$
(3.24)
Eq. (3.24) is a fully non-linear relation between geometry and matter content;
as such, it can be seen as a dynamical equation for the exact density
perturbations. According to what has been outlined so far, within the gradient
expansion, where
$\partial_{\mu}u^{\mu}=\partial_{i}N^{i}=\mathcal{O}\left(\epsilon^{2}\right)$,
Eq. (3.24) can then be written as
$\displaystyle\frac{d\rho}{d\lambda}=$
$\displaystyle-\left(\rho+p\right)\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}+\mathcal{O}(\epsilon^{2})\,.$
(3.25)
In general, pressure and energy density are linked by an equation of state as
$p=q\rho$. The value of $q$ can be time-dependent, according to the specific
era in which the inhomogeneities evolve (as happens, for instance, during the
slow-roll inflationary stage). This makes Eq. (3.25) in general quite
complicated to solve. However, during the late-time epochs (e.g. a radiation,
matter or cosmological constant dominated universe), $q$ is constant. In this
case, Eq. (3.25) becomes
$\displaystyle\frac{d\rho}{d\lambda}=$
$\displaystyle-\rho\frac{1+q}{\Upsilon\sqrt{\gamma}}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}+\mathcal{O}(\epsilon^{2})\,,$
(3.26)
which is exactly solved by
$\rho(\lambda)=A\left(\Upsilon\sqrt{\gamma}\right)^{-(1+q)}(\lambda)+\mathcal{O}(\epsilon^{2})\,,$
(3.27)
where $A$ is a constant. Eq. (3.27) gives the exact link between the geometry
and the energy density in terms of light-cone metric entries. It is also the
starting point to describe non-linear features of inhomogeneities on super-
horizon scales. In fact, in the following we will discuss how
$\Upsilon\sqrt{\gamma}$ relates with the gauge invariant curvature
perturbation $\zeta$ and how both can be computed using the
$\delta\mathcal{N}$ formalism. Furthermore, we will show under which
conditions the quantity $\zeta$ is conserved on super-horizon scales.
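As a simple sanity check of Eq. (3.27) (a verification sketch of ours, not
part of the derivation; the symbols `lam`, `q`, `A` and `V`, the latter a
shorthand for $\Upsilon\sqrt{\gamma}$ along the flow, are our own), one can
confirm symbolically that it solves Eq. (3.26) for constant $q$:

```python
import sympy as sp

lam, q, A = sp.symbols('lambda q A', positive=True)
V = sp.Function('V')(lam)   # shorthand for Upsilon*sqrt(gamma) along the flow

rho = A * V**(-(1 + q))     # candidate solution, Eq. (3.27)
lhs = sp.diff(rho, lam)                      # l.h.s. of Eq. (3.26)
rhs = -(1 + q) * rho * sp.diff(V, lam) / V   # r.h.s. of Eq. (3.26)

print(sp.simplify(lhs - rhs))  # prints 0: the residual vanishes identically
```

On the background, where $\Upsilon\sqrt{\gamma}\propto a^{3}$ at fixed $r$,
Eq. (3.27) also reproduces the familiar scalings $\rho\propto a^{-3}$ for dust
($q=0$) and $\rho\propto a^{-4}$ for radiation ($q=1/3$).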
## 4 Curvature perturbation evolution
In this section we will study the curvature perturbations along the past
light-cone using the SU scheme developed in the previous section. To this aim,
we will start by considering the linear order in perturbation theory, and by
using the scalar-pseudoscalar decomposition developed in [36, 42]. Hence, we
will first review this perturbation theory, following the approach of [4, 5]
to show the conservation of $\zeta$ along the past light-cone. Thereafter, we
red will generalize this proof to non-linear order in the amplitude of the
perturbations and to first order in the gradient expansion, which allow us to
obtain $\zeta$ at this perturbative level in terms of light-cone
perturbations. Finally, we will generalize the $\delta\mathcal{N}$ formalism
on the past light-cone.
### 4.1 Linear evolution and comparison with previous results
In this section, we want to linearize Eq. (3.26). To this aim, we first
recall the linear perturbation theory for the GLC coordinates, presented in
[36, 42]. Firstly, we consider general perturbations, i.e. without fixing the
GLC gauge. The metric and its perturbed inverse are then given by
$g_{\mu\nu}=\bar{g}^{\,GLC}_{\mu\nu}+\delta g_{\mu\nu}=a^{2}\left\{\begin{pmatrix}0&-a^{-1}&\vec{0}\\ -a^{-1}&1&\vec{0}\\ \vec{0}^{\mathbf{T}}&\vec{0}^{\mathbf{T}}&\bar{\gamma}_{ab}\end{pmatrix}+\begin{pmatrix}L&M&V_{b}\\ M&N&\mathcal{U}_{b}\\ V_{a}^{\mathbf{T}}&\mathcal{U}_{a}^{\mathbf{T}}&\delta\gamma_{ab}\end{pmatrix}\right\}\,,$
(4.1)
and
$\delta g^{\mu\nu}=\begin{pmatrix}-\left(a^{2}L+N+2aM\right)&-a^{-1}\left(a^{2}L+aM\right)&-a\left(aV^{a}+\mathcal{U}^{a}\right)\\ -a^{-1}\left(a^{2}L+aM\right)&-L&aV^{a}\\ -a\left(aV^{a}+\mathcal{U}^{a}\right)&aV^{a}&-a^{-2}\delta\gamma^{ab}\end{pmatrix}\,,$
(4.2)
where, following [36, 42], we can decompose $\mathcal{U}_{a},V_{a}$ and
$\delta\gamma_{ab}$ as
$\displaystyle\mathcal{U}_{a}$
$\displaystyle=r^{2}\left(D_{a}u+\tilde{D}_{a}\hat{u}\right)\,,$
$\displaystyle V_{a}$
$\displaystyle=r^{2}\left(D_{a}v+\tilde{D}_{a}\hat{v}\right)\,,$
$\displaystyle\delta\gamma_{ab}$
$\displaystyle=a^{2}r^{2}\left[\left(1+2\nu\right)\bar{q}_{ab}+D_{ab}\mu+\tilde{D}_{ab}\hat{\mu}\right]\,.$
(4.3)
Here, $u,\,v,\,\nu$ and $\mu$ are scalars, and $\hat{u},\,\hat{v}$ and
$\hat{\mu}$ are pseudoscalar degrees of freedom under spatial rotations.
Moreover, the angular derivatives are defined as
$D_{ab}=D_{(a}D_{b)}-\frac{1}{2}q_{ab}D^{2}\,,\qquad\qquad\tilde{D}_{ab}=D_{(a}\tilde{D}_{b)}\,,$
(4.4)
where $\tilde{D}_{a}=\epsilon_{a}^{b}D_{b}$, and $\epsilon_{a}^{b}$ is the
anti-symmetric tensor. We remark that $D_{ab}$ and $\tilde{D}_{ab}$ are trace-
less, so that the trace of $\delta\gamma_{ab}$ is given by the trace of
$q_{ab}=\left(1+2\nu\right)\bar{q}_{ab}$.
At this point, let us also linearize the energy density as
$\rho=\bar{\rho}\left(1+\delta\rho\right)\,.$ (4.5)
In this expansion, however, we keep both $\bar{\rho}$ and $\delta\rho$ as
functions of the time-like parameter $\lambda$. By doing this, we keep
track of the perturbations as projected onto the exact time-like geodesic.
Starting from Eqs. (4.3), we then obtain that
$\sqrt{\gamma}=a^{2}\sqrt{\bar{\gamma}}\,\left(1+2\nu\right)\,.$ (4.6)
At this point, we want to use the metric in Eq. (4.1) to compute the volume
expansion $\Theta_{n}=\nabla_{\mu}n^{\mu}$ of the hypersurfaces orthogonal to
$t=const$, with $n^{\mu}$ defined in Eq. (2.2). The volume expansion is then given by
$\Theta_{n}=\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}\,n^{\mu}\right)=\frac{1}{2}g^{\alpha\beta}\frac{d}{d\lambda}g_{\alpha\beta}+\partial_{\mu}n^{\mu}\,,$
(4.7)
where $n^{\mu}\partial_{\mu}=\frac{d}{d\lambda}$. Using this last equation,
and the fact that $\Theta_{n}=\Theta_{u}+\mathcal{O}(\epsilon^{2})$, as shown
in Eq. (3.15), we then have
$\Theta_{u}=3H\left[1+\frac{1}{2}\left(a^{2}L+N+2aM\right)\right]+\frac{1}{2}\frac{d}{d\lambda}\left(N+4\nu\right)+\mathcal{O}(\epsilon^{2})\,,$
(4.8)
where we are neglecting the terms
$\left(\partial_{w}+\frac{2}{r}\right)\left(N+aM\right)=\mathcal{O}\left(\epsilon^{2}\right)\,,\qquad\qquad
D_{a}\left(\mathcal{U}^{a}+aV^{a}\right)=\mathcal{O}\left(\epsilon^{2}\right)\,,$
(4.9)
since they are proportional to $\nabla_{i}\mathcal{B}^{i}$ in standard
perturbation theory (see [36, 42]). The same happens in the standard approach,
see [4, 5]. The reason why $\nabla_{i}\mathcal{B}^{i}$ can be neglected is that,
in the language of the gradient expansion applied to the ADM metric, this term
is the divergence of the shift vector, hence second order in the gradient
expansion. As a side remark, we underline that Eq. (4.7) is given in terms of
the cosmic time by
$\Theta_{n}=3H-3\partial_{t}\psi+\mathcal{O}(\epsilon^{2})\,,$ (4.10)
which is in agreement with [4].
Let us now consider the energy-momentum tensor conservation projected onto
$u^{\mu}$. From Eq. (3.23), we have that
$\displaystyle\frac{d\rho}{d\lambda}+\left(\rho+p\right)\Theta_{u}=0\,.$
(4.11)
Thanks to Eqs. (4.5) and (4.8), and by using the equation of state $p=q\rho$,
Eq. (4.11) gives
$\displaystyle\frac{d\bar{\rho}}{d\lambda}+3H\bar{\rho}(1+q)=0\,,$
$\displaystyle\frac{d\left(\bar{\rho}\delta\rho\right)}{d\lambda}+\bar{\rho}\left(1+q\right)\left\{3H\left[\delta\rho+\frac{1}{2}\left(a^{2}L+N+2aM\right)\right]+\frac{1}{2}\frac{d}{d\lambda}\left(N+4\nu\right)\right\}+\mathcal{O}(\epsilon^{2})=0\,,$
(4.12)
for the background and perturbed quantities respectively. Let us now fix the
GLC gauge, which satisfies the following conditions [36]
$\displaystyle a^{2}L+N+2aM$ $\displaystyle=0\,,$
$\displaystyle\partial_{w}\left(N+aM\right)$
$\displaystyle=\frac{1}{2}\partial_{w}N=\mathcal{O}(\epsilon^{2})\,,$
$\displaystyle\partial_{a}\left(\mathcal{U}^{a}+aV^{a}\right)$
$\displaystyle=\partial_{a}\mathcal{U}^{a}=\mathcal{O}\left(\epsilon^{2}\right)\,.$
(4.13)
Then, by inserting the first of Eqs. (4.12) into the second one, by using the GLC
gauge conditions given in Eqs. (4.13) and, finally, by integrating over the
affine parameter $\lambda$, we get
$\delta\rho(\lambda)=-\frac{1+q}{2}\left(N+4\nu\right)\,.$ (4.14)
It is worth stressing that we would obtain the same result by linearizing Eq.
(3.27); this shows the self-consistency of our gradient expansion
method.
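To make this explicit (a short check we add here): from
$f_{ww}=\Upsilon^{2}+U^{2}=a^{2}\left(1+N\right)$ one gets
$\Upsilon=a\left(1+N/2\right)$ at linear order which, combined with Eq. (4.6),
gives
$\Upsilon\sqrt{\gamma}=a^{3}\sqrt{\bar{\gamma}}\left(1+\frac{N}{2}+2\nu\right)\,,$
so that linearizing $\rho=A\left(\Upsilon\sqrt{\gamma}\right)^{-(1+q)}$ indeed
returns $\delta\rho=-\frac{1+q}{2}\left(N+4\nu\right)$, in agreement with Eq.
(4.14).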
At first order in the gradient expansion, for a generic adiabatic equation of
state, i.e. $p=p(\rho)$, and by fixing the GLC gauge, we obtain, from Eqs.
(4.8), (4.11) and (4.13), that
$\frac{1}{\rho+p(\rho)}\frac{d\rho}{d\lambda}=-3H-\frac{1}{2}\frac{d\left(N+4\nu\right)}{d\lambda}\,.$
(4.15)
This equation can be integrated between two different hypersurfaces marked by
the values $\lambda_{1}$ and $\lambda_{2}$, and gives
$\frac{1}{6}\left(N+4\nu\right)|_{\lambda_{1}}^{\lambda_{2}}+\bar{\mathcal{N}}(\lambda_{2},\lambda_{1})=-\frac{1}{3}\int_{\rho(\lambda_{1},x^{i})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}\,,$
(4.16)
where the dependence on $x^{i}$ in the integration limits denotes that we
are considering inhomogeneous hypersurfaces. At the same time, we have
defined $\bar{\mathcal{N}}$ as
$\displaystyle\bar{\mathcal{N}}\left(\lambda_{2},\,\lambda_{1}\right)\equiv\int^{\lambda_{2}}_{\lambda_{1}}Hd\lambda\,.$
(4.17)
Moreover, as done for standard perturbations in [5], we want to extract the
background contribution of Eq. (4.16). Therefore, we start by doing this in
the r.h.s. of Eq. (4.16), which can then be written as
$\int_{\rho(\lambda_{1},x^{i})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}=\int_{\bar{\rho}(\lambda_{2})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}-\int_{\bar{\rho}(\lambda_{1})}^{\rho(\lambda_{1},x^{i})}\frac{d\rho}{\rho+p(\rho)}+\int_{\bar{\rho}(\lambda_{1})}^{\bar{\rho}(\lambda_{2})}\frac{d\rho}{\rho+p(\rho)}\,,$
(4.18)
where $\bar{\rho}$ denotes the background value of $\rho$. The last term is
the background contribution, equal to $-3\bar{\mathcal{N}}$ by the background
limit of Eq. (4.16) (see Eq. (4.17)). Therefore, we have
$\frac{1}{6}\left(N+4\nu\right)\left(\lambda_{1}\right)+\frac{1}{3}\int_{\bar{\rho}(\lambda_{1})}^{\rho(\lambda_{1},x^{i})}\frac{d\rho}{\rho+p(\rho)}=\frac{1}{6}\left(N+4\nu\right)\left(\lambda_{2}\right)+\frac{1}{3}\int_{\bar{\rho}(\lambda_{2})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}\,,$
(4.19)
then, the quantity
$\tilde{\zeta}=-\frac{1}{6}\left(N+4\nu\right)-\frac{1}{3}\int_{\bar{\rho}(\lambda)}^{\rho(\lambda,x^{i})}\frac{d\rho}{\rho+p(\rho)}\,,$
(4.20)
is conserved. One interesting aspect of Eq. (4.20) is that the geometrical
terms $N$ and $\nu$, appearing in the conserved quantity $\tilde{\zeta}$, are
precisely the same terms that contribute to the linearized angular distance-
redshift relation in the GLC gauge [36].
Let us now link our results to the standard perturbation theory. We follow the
notation of [36, 42], where the standard metric is given by
$\displaystyle ds^{2}$
$\displaystyle=\,a^{2}\left[-\left(1+2\phi\right)d\eta^{2}-2\mathcal{B}_{i}dx^{i}d\eta+\left(\bar{\gamma}_{ij}+\mathcal{C}_{ij}\right)dx^{i}dx^{j}\right]\,,$
(4.21)
and the SVT decomposition given by
$\displaystyle\mathcal{B}_{i}$ $\displaystyle=B_{i}+\partial_{i}B\,,$
$\displaystyle\mathcal{C}_{ij}$
$\displaystyle=\,-2\bar{\gamma}_{ij}\psi+2D_{ij}E+2\nabla_{(i}F_{j)}+2h_{ij}\,,$
(4.22)
where $B_{i}$ and $F_{i}$ are divergenceless vectors and $h_{ij}$ is a trace-
less and divergenceless tensor. We also have
$\displaystyle
D_{ij}E=\left(\nabla_{(i}\nabla_{j)}-\bar{\gamma}_{ij}\frac{\Delta_{3}}{3}\right)E\,.$
(4.23)
In this case, the trace of $g_{ij}$ is proportional to $-\psi$, and then by
using the relation between standard and GLC perturbations given by [36, 42]
$\psi=-\frac{1}{6}\left(N+4\nu\right)\,,$ (4.24)
we get that
$\tilde{\zeta}=\psi-\frac{1}{3}\int_{\bar{\rho}(\lambda)}^{\rho(\lambda,x^{i})}\frac{d\rho}{\rho+p(\rho)}\,,$
(4.25)
or, equivalently, with the use of Eq. (4.5),
$\tilde{\zeta}=\psi-\frac{1}{3}\int_{\bar{\rho}(\lambda)}^{\bar{\rho}(1+\delta\rho)}\frac{d\rho}{\rho+p(\rho)}\approx\psi-\frac{1}{3}\frac{\bar{\rho}\,\delta\rho}{\bar{\rho}+p(\bar{\rho})}\,,$
(4.26)
where we have expanded at linear order in the density perturbations in the
last equality. Since we are working at first order in the gradient expansion,
where the spatial gauge modes occur at the next-to-leading order, and the time
gauge mode is fixed, the quantity $\tilde{\zeta}$ given in Eq. (4.20) is gauge
invariant. Hence, within this approximation scheme, we may identify it with
the curvature perturbation $\zeta$. For the complete expression of $\zeta$ to
order $\mathcal{O}(\delta,\epsilon^{n})$, see Eqs. (2.27) of [42], where we
also provide its gauge invariance proof in terms of light-cone perturbations.
### 4.2 Non-linear $\zeta$
As we have seen, our non-linear SU approach in the GLC gauge allowed us to
obtain Eq. (3.25) by neglecting the last term in Eq. (3.24), which we have
shown to correspond to the terms neglected in the standard perturbation theory
[4] at linear order. Now, with the aim of going beyond this result, we leave
the lapse function $\mathcal{M}$ unspecified and use the SU approach on the
light-cone to prove the non-linear conservation of the curvature perturbation
in terms of LC parameters. Just as done in the previous subsection, we start
from Eq. (3.23) which gives
$\displaystyle\Theta_{u}=-\frac{1}{\left(\rho+p\right)}\frac{d\rho}{d\lambda}\,.$
(4.27)
Moreover, we consider Eq. (3.15) and the approximation exploited after Eq.
(4.7), namely $\Theta_{n}\equiv\nabla_{\mu}n^{\mu}\simeq\Theta_{u}$. We then
get that
$\displaystyle\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d\left(\Upsilon\sqrt{\gamma}\right)}{d\lambda}=-\frac{1}{\left(\rho+p\right)}\frac{d\rho}{d\lambda}\,.$
(4.28)
We now integrate this equation along $\lambda$ and consider that the pressure
is adiabatic. In this way, one obtains that
$\displaystyle\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{2}}}{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{1}}}\right]=-\int_{\rho\left(\lambda_{1},x^{i}\right)}^{\rho\left(\lambda_{2},x^{i}\right)}\frac{d\rho}{\rho+p\left(\rho\right)}\,.$
(4.29)
At this point, the r.h.s. can be manipulated in the same spirit as in Eq.
(4.18) (see also [4]). By doing so, we get that
$\displaystyle\int_{\rho(\lambda_{1},x^{i})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}$
$\displaystyle=\int_{\bar{\rho}(\lambda_{2})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}-\int_{\bar{\rho}(\lambda_{1})}^{\rho(\lambda_{1},x^{i})}\frac{d\rho}{\rho+p(\rho)}+\int_{\bar{\rho}(\lambda_{1})}^{\bar{\rho}(\lambda_{2})}\frac{d\rho}{\rho+p(\rho)}$
$\displaystyle=\int_{\bar{\rho}(\lambda_{2})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}-\int_{\bar{\rho}(\lambda_{1})}^{\rho(\lambda_{1},x^{i})}\frac{d\rho}{\rho+p(\rho)}+\ln\left(\frac{\bar{\Upsilon}\sqrt{\bar{\gamma}}|_{\lambda_{2}}}{\bar{\Upsilon}\sqrt{\bar{\gamma}}|_{\lambda_{1}}}\right)\,,$
(4.30)
where, from the first to the second line, we have used Eq. (4.29) at the
background level. Now, thanks to Eq. (4.30), we can rewrite Eq. (4.29) as
$\displaystyle\ln\left(\frac{\Upsilon\sqrt{{\gamma}}}{\bar{\Upsilon}\sqrt{\bar{\gamma}}}\right)_{\lambda_{2}}+\int_{\bar{\rho}(\lambda_{2})}^{\rho(\lambda_{2},x^{i})}\frac{d\rho}{\rho+p(\rho)}=\ln\left(\frac{\Upsilon\sqrt{\gamma}}{\bar{\Upsilon}\sqrt{\bar{\gamma}}}\right)_{\lambda_{1}}+\int_{\bar{\rho}(\lambda_{1})}^{\rho(\lambda_{1},x^{i})}\frac{d\rho}{\rho+p(\rho)}\,.$
(4.31)
This shows that there is a conserved quantity at first order in the gradient
expansion. This quantity corresponds to the non-linear curvature perturbation
$\zeta$
$\displaystyle\zeta=\frac{1}{3}\ln\left(\frac{\Upsilon\sqrt{{\gamma}}}{\bar{\Upsilon}\sqrt{\bar{\gamma}}}\right)+\frac{1}{3}\int_{\bar{\rho}(t)}^{\rho(t,x^{i})}\frac{d\rho}{\rho+p(\rho)}+\mathcal{O}\left(\epsilon^{2}\right)\,,$
(4.32)
which then generalizes the linear result in Eq. (4.20).
Thus, the curvature perturbation defined in Eq. (4.32) generalizes, at the
non-linear level but at first order in the gradient expansion, the expression
of the gauge invariant curvature perturbation in the LC gauge formalism given
in [42]. In fact, the first term corresponds to the curvature perturbations
whilst the second term corresponds to the density perturbations, in the same
spirit as Eq. (4.25). Finally, we remark that we have obtained Eq. (4.32)
without specifying the lapse function $\mathcal{M}$; therefore, we still have
the freedom to fix the time gauge mode, as we will see later.
### 4.3 $\delta\mathcal{N}$ formalism on the LC gauge
Let us begin by writing explicitly the exact expression for the expansion rate
defined by the normal vector $n^{\mu}$ given in Eq. (3.2). This will then be
applied to the evaluation of the number of e-folds using the SU picture
of the LC gauge. Such an approach will allow us to obtain a generalization of the
non-linear $\delta\mathcal{N}$ formalism in terms of LC gauge metric entries.
The expansion rate in the LC gauge is given by
$\Theta_{n}=\nabla_{\mu}n^{\mu}=\frac{1}{\sqrt{-g}}\partial_{\mu}\left(n^{\mu}\sqrt{-g}\right)=\frac{1}{\mathcal{M}\Upsilon\sqrt{\gamma}}\frac{d}{d\lambda}\left(\mathcal{M}\Upsilon\sqrt{\gamma}\right)+\partial_{\mu}n^{\mu}\,,$
(4.33)
where, from Eqs. (3.2) and (3.12), we have that
$\partial_{\mu}n^{\mu}=-\frac{\partial_{t}\mathcal{M}}{\mathcal{M}^{2}}-\partial_{i}\left(\frac{N^{i}_{LC}}{\mathcal{M}}\right)=-\frac{\partial_{t}\mathcal{M}}{\mathcal{M}^{2}}+\mathcal{O}\left(\epsilon^{2}\right)\,.$
(4.34)
Now, using Eqs. (4.33) and (4.34), we obtain
$\Theta_{n}=\frac{1}{\Upsilon\sqrt{\gamma}}\frac{d}{d\lambda}\left(\Upsilon\sqrt{\gamma}\right)+\mathcal{O}\left(\epsilon^{2}\right)\,.$
(4.35)
An interesting aspect of this result is that it is invariant in form on the
past light-cone, i.e. all the dependence on the lapse function is hidden in
$n^{\mu}\partial_{\mu}\equiv\frac{d}{d\lambda}$. Let us now integrate Eq.
(4.35) to compute the non-linear number of e-folds at first order in the
gradient expansion in terms of light-cone entries. We can then easily obtain
the following result
$\mathcal{N}\left(\lambda_{f},\lambda_{i}\right)\equiv\frac{1}{3}\int_{\lambda_{i}}^{\lambda_{f}}\Theta_{n}d\lambda^{\prime}=\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{f}}}{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{i}}}\right]+\mathcal{O}\left(\epsilon^{2}\right)\,.$
(4.36)
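As a quick check (it follows immediately from the background values in Eq.
(3.14)): on the background, $\bar{\Upsilon}\sqrt{\bar{\gamma}}=a^{3}r^{2}$,
with $r$ constant along the background flow, so that
$\bar{\mathcal{N}}\left(\lambda_{f},\lambda_{i}\right)=\frac{1}{3}\ln\left(\frac{a_{f}^{3}r^{2}}{a_{i}^{3}r^{2}}\right)=\ln\left(\frac{a_{f}}{a_{i}}\right)\,,$
i.e. Eq. (4.36) reduces to the standard number of e-folds.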
Let us note that $\mathcal{N}(\lambda_{f},\,\lambda_{i})$ from Eq. (4.36) is a
biscalar, i.e. it depends on the gauge fixing both at the initial and at the
final slicing. One possible fixing of the lapse function is given by the
uniform curvature light-cone (UCLC) gauge: in this case, the effective local
scale factor is given by its background value (we will use the subscript UC
to denote the UCLC gauge fixing and the subscript UD to denote the UDLC gauge
fixing)
$\displaystyle\left(\Upsilon\sqrt{\gamma}\right)_{UC}=\bar{\Upsilon}\sqrt{\bar{\gamma}}\,.$
(4.37)
Therefore, if we fix both initial and final slices on the UCLC gauge, the
number of e-folds will be given by its background value, as
$\mathcal{N}_{UC}\left(\lambda_{f},\,\lambda_{i}\right)=\bar{\mathcal{N}}\left(\lambda_{f},\,\lambda_{i}\right)\,.$
(4.38)
Let us also introduce the uniform density light-cone (UDLC) gauge defined by
$\displaystyle\rho_{UD}\left(\lambda,\,x^{i}\right)=\bar{\rho}\left(\lambda\right)\,.$
(4.39)
Then, within the UDLC gauge, we have
$\zeta=\,\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{f\,UD}}}{\left(\bar{\Upsilon}\sqrt{\bar{\gamma}}\right)_{\lambda_{i}}}\right]=\,\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{f\,UD}}}{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{i\,UC}}}\right]\,,$
(4.40)
where from the first to the second equality we have used Eq. (4.37). We can
now compare Eqs. (4.36) and (4.40), and obtain that $\zeta$ can be related to
the number of e-folds in the following way
$\displaystyle\mathcal{N}\left(\lambda_{f\,UD},\,\lambda_{i\,UC}\right)=$
$\displaystyle\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{f\,UD}}}{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{i\,UC}}}\right]$
$\displaystyle=$
$\displaystyle\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{f\,UD}}}{\left(\bar{\Upsilon}\sqrt{\bar{\gamma}}\right)_{\lambda_{f}}}\frac{\left(\bar{\Upsilon}\sqrt{\bar{\gamma}}\right)_{\lambda_{i}}}{\left(\Upsilon\sqrt{\gamma}\right)_{\lambda_{i\,UC}}}\right]+\bar{\mathcal{N}}\left(\lambda_{f},\lambda_{i}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{3}\ln\left(\frac{\Upsilon\sqrt{\gamma}}{\bar{\Upsilon}\sqrt{\bar{\gamma}}}\right)_{\lambda_{f\,UD}}+\bar{\mathcal{N}}\left(\lambda_{f},\lambda_{i}\right)$
$\displaystyle=$
$\displaystyle-\zeta(\lambda_{f})+\bar{\mathcal{N}}\left(\lambda_{f},\lambda_{i}\right)\,.$
(4.41)
Hence, if we define
$\delta\mathcal{N}\equiv\,\mathcal{N}\left(\lambda_{f\,UD},\,\lambda_{i\,UC}\right)-\mathcal{N}\left(\lambda_{f\,UC},\,\lambda_{i\,UC}\right)\,,$
(4.42)
we straightforwardly get, from (4.38) and (4.41), that
$\delta\mathcal{N}=-\zeta$, which is valid at any order in perturbation
theory, in agreement with [15]. As a remark, we underline that, while Eq.
(4.36) depends directly on the initial and final slices, Eq. (4.42) depends
only on the difference between the perturbations of the e-fold number in the
UDLC and UCLC gauges on the final slice. Now we will give an example of how this last
result can be used to evaluate the power spectrum of $\zeta$ in terms of the
LC metric entries. Following the procedure used in [15], we fix the UCLC gauge
and adopt the SU approximation. In this way, the density perturbations can be
written in terms of the background density with perturbed initial conditions
as follows
$\displaystyle\rho_{UC}(\mathcal{N}_{UC},\,\mathbf{x})=\bar{\rho}(\mathcal{\bar{N}},\,\varphi_{*}^{A}(x))\,,$
(4.43)
where we use Eq. (4.38). Also, $\varphi_{*}^{A}$ is the field content of the
underlying inflationary model evaluated just after the horizon exit. The index
$A=1,...,d$ refers to the possibility that inflation could happen with $d$
scalar fields. Instead, by considering the UDLC gauge, we would have
$\displaystyle\rho_{UD}(\mathcal{N}_{UD},\,\mathbf{x})=\bar{\rho}(\mathcal{N}_{UD})\,.$
(4.44)
Since $\rho$ is a scalar, we can then write
$\displaystyle\rho^{\prime}(\mathcal{N}^{\prime},\,x^{\prime})=\rho(\mathcal{N},\,x)\,,$
(4.45)
which holds between two generic sets of coordinates $\mathcal{N},\,x$ and
$\mathcal{N}^{\prime},\,x^{\prime}$, i.e. the value of a scalar function at a
given physical point does not depend on the choice of coordinates.
Using Eqs. (4.43) and (4.44) on Eq. (4.45), we get
$\displaystyle\bar{\rho}(\bar{\mathcal{N}},\,\varphi_{*}^{A}(x))=\bar{\rho}(\mathcal{N}_{UD})\,,$
(4.46)
where the choice $x^{\prime}=x$ in Eq. (4.45) corresponds to the choice of the
spatial threading that fixes the LC gauge. Also, we choose
$\mathcal{N}^{\prime}=\mathcal{N}_{UC}$ and $\mathcal{N}=\mathcal{N}_{UD}$.
Then, Eq. (4.46) can be inverted as
$\displaystyle\mathcal{N}_{UD}=\bar{\mathcal{N}}(\bar{\rho},\,\varphi_{*}^{A}(\mathbf{x}))\,.$
(4.47)
Hence, by expanding the fields in Eq. (4.47) at linear order as
$\varphi^{A}_{*}=\bar{\varphi}^{A}_{*}+\delta\varphi^{A}_{*}$, and using Eq.
(4.42), we obtain
$\begin{split}\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{UD}}{\left(\Upsilon\sqrt{\gamma}\right)_{UC}}\right]&=\mathcal{N}(\bar{\rho},\,\varphi_{*}^{A}(\mathbf{x}))|_{UD}-\bar{\mathcal{N}}(\bar{\rho})|_{UC}\\&=\delta\varphi_{*}^{A}\partial_{A}\bar{\mathcal{N}}+\frac{1}{2}\delta\varphi_{*}^{A}\delta\varphi_{*}^{B}\partial_{A}\partial_{B}\bar{\mathcal{N}}+\dots\,.\end{split}$
(4.48)
Therefore, given an inflationary model, one may link the value of
$\varphi_{*}^{A}$ to the left-hand side of Eq. (4.48).
Altogether, Eq. (4.48) can be a starting point for the evaluation of $f_{NL}$
in terms of light-cone perturbations (see, for example, [13] for an evaluation
of $f_{NL}$ using the $\delta\mathcal{N}$ formalism). Hence, the results
presented in Eqs. (4.42) and (4.48) constitute a further step toward obtaining
non-Gaussian predictions on the past light-cone, from the primordial universe
and directly in terms of the metric entries.
## 5 Conclusions
In this manuscript we have developed a separate universe (SU) description in
the non-linear LC gauge. We provide the non-linear conditions to fix the LC
and the GLC gauges in terms of standard coordinates in the ADM formalism. The
main difference from the previous works [5, 15, 26], where the SU was
considered, is that for the LC and GLC gauges we cannot neglect the shift
vector, since it contains information about inhomogeneities along the world-
line.
As an application of our results, and a consistency check, we repeated the
procedures of [4] and [5] to prove the super-horizon conservation of the
comoving curvature perturbation $\zeta$, when a light-like foliation of
spacetime is taken. This conservation has been achieved by neglecting the non-
adiabatic pressure within the SU scheme.
We then generalize the $\delta\mathcal{N}$ formalism [10, 8, 9, 5, 15, 11, 12,
13] in terms of the combination of the LC metric entries
$\Upsilon\sqrt{\gamma}$ within the uniform density light-cone gauge, which is
one of our most important results. Let us remark that the gradient expansion
employed here simplifies the expression of the expansion rate at the non-
linear level (see Eq. (4.33)). This could also help in simplifying the
perturbative expressions (see, for instance, the one presented in Eq. (6.11)
of [43]).
The $\delta\mathcal{N}$ formalism provides a procedure to investigate the
evolution of the perturbations for different inflationary models. The
extension of the $\delta\mathcal{N}$ formalism to the past light-cone, as
developed in this manuscript, allows such dynamical evolution to be evaluated
directly on the past light-cone. This moves us one step closer to the
evaluation of non-linear effects (such as backreaction effects and non-
Gaussianities) from the primordial universe to the late-time one along such
past light-cone. Indeed, as a future step, we aim to investigate primordial
backreaction effects on different expansion rates using the above-mentioned
formalism and well-posed averaging procedures on the past light-cone [29, 35].
Finally, as regards the possible non-Gaussianities associated with a given
inflationary model, the $\delta\mathcal{N}$ formalism is a very useful tool.
In fact, as shown in [13], this formalism provides very simple expressions for
$f_{NL}$ in terms of $\mathcal{N}$. In other words, the overall goal of the
research program in which this work is nested is to obtain a formalism to
compute the curvature perturbations at horizon re-entry expressed in terms of
light-cone entries. We remark on this point since the subsequent evolution of
these metric entries could then be compared to late-time expressions of
cosmological observables such as the ones presented in Eq. (3.3). This would
provide a self-consistent framework, entirely given on the light-cone, to
disentangle the primordial non-Gaussianities from the ones naturally emerging
during the non-linear late-time dynamics [47, 48, 49].
## Acknowledgement
GM and MM are supported in part by INFN under the program TAsP (Theoretical
Astroparticle Physics). GF acknowledges support by the FCT under the program
“Stimulus” with the grant no. CEECIND/04399/2017/CP1387/CT0026 and through the
research project with ref. number PTDC/FIS-AST/0054/2021. GF is also member of
the Gruppo Nazionale per la Fisica Matematica (GNFM) of the Istituto Nazionale
di Alta Matematica (INdAM).
## Appendix A Linear $\delta\mathcal{N}$ formalism on the light-cone
In this appendix we derive the $\delta\mathcal{N}$ formalism at linear order
in perturbation theory using the framework developed in [36, 42] and reviewed
in Sect. 4. In this way we explicitly show the consistency of our results.
To begin, we linearize Eq. (4.42) and obtain
$\begin{split}\delta\mathcal{N}&=\frac{1}{3}\ln\left[\frac{\left(\Upsilon\sqrt{\gamma}\right)_{UD}}{\left(\Upsilon\sqrt{\gamma}\right)_{UC}}\right]+\mathcal{O}(\epsilon^{2})\\&=\frac{1}{3}\ln\left[\frac{\bar{\Upsilon}\sqrt{\bar{\gamma}}\left[(1+\delta\Upsilon)(1+2\nu)\right]_{UD}}{\left(\Upsilon\sqrt{\gamma}\right)_{UC}}\right]+\mathcal{O}(\delta^{2},\epsilon^{2})\\&=\frac{1}{3}\ln\left[1+\left(\delta\Upsilon+2\nu\right)_{UD}\right]+\mathcal{O}(\delta^{2},\epsilon^{2})\\&=\frac{1}{3}\left(\delta\Upsilon+2\nu\right)_{UD}+\mathcal{O}(\delta^{2},\epsilon^{2})\,,\end{split}$
(A.1)
where we have defined
$\Upsilon=\bar{\Upsilon}(1+\delta\Upsilon)$ (A.2)
and we recall that $(\Upsilon\sqrt{\gamma})_{UC}$ is equal to the background
value, since $\psi=0$ within the uniform curvature gauge. Also, we have used
the metric in Eq. (4.1) and the scalar/pseudoscalar decomposition of Eq.
(4.3). Since $\delta\Upsilon=N/2$, we then have that
$\delta\mathcal{N}(\lambda_{1},\,\lambda_{2},\,x^{i})=\frac{1}{6}\left(N+4\nu\right)_{UD}+\mathcal{O}(\delta^{2},\epsilon^{2})\,.$
(A.3)
From the relation between the light-cone perturbations and the standard ones in
Eq. (4.24) (see also [42]), one gets that
$\displaystyle\psi=-\frac{1}{6}(N+4\nu)\,.$ (A.4)
Therefore, Eq. (A.3) together with Eq. (A.4), and the fact that
$\psi_{UD}=\zeta$, gives the well-known relation $\delta\mathcal{N}=-\zeta$
[15].
The result obtained in Eq. (A.3) proves that the $\delta\mathcal{N}$ formalism
on the past light-cone is consistent with the light-cone perturbation theory
developed in [36, 42]. Thereby, the $\delta\mathcal{N}$ formalism within the
past light-cone framework, at linear order in perturbation theory, could also
be obtained by starting from the results presented in Sect. 4. To this aim,
one should integrate Eqs. (4.7) and (4.8), evaluating them between the uniform
curvature and the uniform density slices.
## References
* [1] LSST Dark Energy Science Collaboration, A. Abate et al., Large Synoptic Survey Telescope: Dark Energy Science Collaboration, arXiv:1211.0310.
* [2] L. Amendola et al., Cosmology and fundamental physics with the Euclid satellite, Living Rev. Rel. 21 (2018), no. 1 2, [arXiv:1606.00180].
* [3] DESI Collaboration, A. Aghamousa et al., The DESI Experiment Part I: Science,Targeting, and Survey Design, arXiv:1611.00036.
* [4] D. Wands, K. A. Malik, D. H. Lyth, and A. R. Liddle, A New approach to the evolution of cosmological perturbations on large scales, Phys. Rev. D 62 (2000) 043527, [astro-ph/0003278].
* [5] D. H. Lyth, K. A. Malik, and M. Sasaki, A General proof of the conservation of the curvature perturbation, JCAP 05 (2005) 004, [astro-ph/0411220].
* [6] R.-G. Cai, B. Hu, and H.-B. Zhang, Acoustic signatures in the Cosmic Microwave Background bispectrum from primordial magnetic fields, JCAP 08 (2010) 025, [arXiv:1006.2985].
* [7] E. Komatsu and D. N. Spergel, The Cosmic Microwave Background bispectrum as a test of the physics of inflation and probe of the astrophysics of the low-redshift Universe, in 9th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Gravitation and Relativistic Field Theories (MG 9), 12, 2000. astro-ph/0012197.
* [8] M. Sasaki and E. D. Stewart, A General analytic formula for the spectral index of the density perturbations produced during inflation, Prog. Theor. Phys. 95 (1996) 71–78, [astro-ph/9507001].
* [9] M. Sasaki and T. Tanaka, Superhorizon scale dynamics of multiscalar inflation, Prog. Theor. Phys. 99 (1998) 763–782, [gr-qc/9801017].
* [10] D. S. Salopek and J. R. Bond, Nonlinear evolution of long wavelength metric fluctuations in inflationary models, Phys. Rev. D 42 (1990) 3936–3962.
* [11] A. A. Starobinsky, Dynamics of Phase Transition in the New Inflationary Universe Scenario and Generation of Perturbations, Phys. Lett. B 117 (1982) 175–178.
* [12] A. A. Starobinsky, Multicomponent de Sitter (Inflationary) Stages and the Generation of Perturbations, JETP Lett. 42 (1985) 152–155.
* [13] D. H. Lyth and Y. Rodriguez, The Inflationary prediction for primordial non-Gaussianity, Phys. Rev. Lett. 95 (2005) 121302, [astro-ph/0504045].
* [14] R. L. Arnowitt, S. Deser, and C. W. Misner, The Dynamics of general relativity, Gen. Rel. Grav. 40 (2008) 1997–2027, [gr-qc/0405109].
* [15] N. S. Sugiyama, E. Komatsu, and T. Futamase, $\delta$N formalism, Phys. Rev. D 87 (2013), no. 2 023530, [arXiv:1208.1073].
* [16] V. Vennin and A. A. Starobinsky, Correlation Functions in Stochastic Inflation, Eur. Phys. J. C 75 (2015) 413, [arXiv:1506.04732].
* [17] A. A. Starobinsky, Stochastic de Sitter (inflationary) stage in the early Universe, Lect. Notes Phys. 246 (1986) 107–126.
* [18] F. Finelli, G. Marozzi, A. A. Starobinsky, G. P. Vacca, and G. Venturi, Generation of fluctuations during inflation: Comparison of stochastic and field-theoretic approaches, Phys. Rev. D 79 (2009) 044007, [arXiv:0808.1786].
* [19] F. Finelli, G. Marozzi, A. A. Starobinsky, G. P. Vacca, and G. Venturi, Stochastic growth of quantum fluctuations during slow-roll inflation, Phys. Rev. D 82 (2010) 064020, [arXiv:1003.1327].
* [20] T. Prokopec and G. Rigopoulos, $\Delta$N and the stochastic conveyor belt of ultra slow-roll inflation, Phys. Rev. D 104 (2021), no. 8 083505, [arXiv:1910.08487].
* [21] H. Firouzjahi, A. Nassiri-Rad, and M. Noorbala, Stochastic Ultra Slow Roll Inflation, JCAP 01 (2019) 040, [arXiv:1811.02175].
* [22] G. Ballesteros, J. Rey, M. Taoso, and A. Urbano, Stochastic inflationary dynamics beyond slow-roll and consequences for primordial black hole formation, JCAP 08 (2020) 043, [arXiv:2006.14597].
* [23] C. Pattison, V. Vennin, D. Wands, and H. Assadullahi, Ultra-slow-roll inflation with quantum diffusion, arXiv:2101.05741.
* [24] M. Biagetti, G. Franciolini, A. Kehagias, and A. Riotto, Primordial Black Holes from Inflation and Quantum Diffusion, JCAP 07 (2018) 032, [arXiv:1804.07124].
* [25] J. M. Ezquiaga and J. García-Bellido, Quantum diffusion beyond slow-roll: implications for primordial black-hole production, JCAP 08 (2018) 018, [arXiv:1805.06731].
* [26] A. Talebian-Ashkezari, N. Ahmadi, and A. A. Abolhasani, $\delta$ M formalism: a new approach to cosmological perturbation theory in anisotropic inflation, JCAP 03 (2018) 001, [arXiv:1609.05893].
* [27] T. Tanaka and Y. Urakawa, Anisotropic separate universe and Weinberg’s adiabatic mode, JCAP 07 (2021) 051, [arXiv:2101.05707].
* [28] T. Tanaka and Y. Urakawa, Statistical anisotropy of primordial gravitational waves from generalized $\delta N$ formalism, arXiv:2309.08497.
* [29] M. Gasperini, G. Marozzi, F. Nugier, and G. Veneziano, Light-cone averaging in cosmology: Formalism and applications, JCAP 07 (2011) 008, [arXiv:1104.1167].
* [30] I. Ben-Dayan, M. Gasperini, G. Marozzi, F. Nugier, and G. Veneziano, Backreaction on the luminosity-redshift relation from gauge invariant light-cone averaging, JCAP 04 (2012) 036, [arXiv:1202.1247].
* [31] I. Ben-Dayan, M. Gasperini, G. Marozzi, F. Nugier, and G. Veneziano, Do stochastic inhomogeneities affect dark-energy precision measurements?, Phys. Rev. Lett. 110 (2013), no. 2 021301, [arXiv:1207.1286].
* [32] I. Ben-Dayan, G. Marozzi, F. Nugier, and G. Veneziano, The second-order luminosity-redshift relation in a generic inhomogeneous cosmology, JCAP 11 (2012) 045, [arXiv:1209.4326].
* [33] G. Fanizza, M. Gasperini, G. Marozzi, and G. Veneziano, An exact Jacobi map in the geodesic light-cone gauge, JCAP 11 (2013) 019, [arXiv:1308.4935].
* [34] G. Marozzi, The luminosity distance–redshift relation up to second order in the Poisson gauge with anisotropic stress, Class. Quant. Grav. 32 (2015), no. 4 045004, [arXiv:1406.1135]. [Erratum: Class.Quant.Grav. 32, 179501 (2015)].
* [35] G. Fanizza, M. Gasperini, G. Marozzi, and G. Veneziano, Generalized covariant prescriptions for averaging cosmological observables, JCAP 02 (2020) 017, [arXiv:1911.09469].
* [36] G. Fanizza, G. Marozzi, M. Medeiros, and G. Schiaffino, The Cosmological Perturbation Theory on the Geodesic Light-Cone background, JCAP 02 (2021) 014, [arXiv:2009.14134].
* [37] E. Di Dio, R. Durrer, G. Marozzi, and F. Montanari, Galaxy number counts to second order and their bispectrum, JCAP 12 (2014) 017, [arXiv:1407.0376]. [Erratum: JCAP 06, E01 (2015)].
* [38] E. Di Dio, R. Durrer, G. Marozzi, and F. Montanari, The bispectrum of relativistic galaxy number counts, JCAP 01 (2016) 016, [arXiv:1510.04202].
* [39] G. Marozzi, G. Fanizza, E. Di Dio, and R. Durrer, CMB-lensing beyond the Born approximation, JCAP 09 (2016) 028, [arXiv:1605.08761].
* [40] G. Marozzi, G. Fanizza, E. Di Dio, and R. Durrer, CMB-lensing beyond the leading order: temperature and polarization anisotropies, Phys. Rev. D 98 (2018), no. 2 023535, [arXiv:1612.07263].
* [41] G. Fanizza, M. Gasperini, G. Marozzi, and G. Veneziano, Time of flight of ultra-relativistic particles in a realistic Universe: a viable tool for fundamental physics?, Phys. Lett. B 757 (2016) 505–509, [arXiv:1512.08489].
* [42] G. Fanizza, G. Marozzi, and M. Medeiros, Gauge invariance on the light-cone: curvature perturbations and radiative degrees of freedom, JCAP 06 (2023) 015, [arXiv:2303.11743].
* [43] M. B. Fröb and W. C. C. Lima, Cosmological perturbations and invariant observables in geodesic lightcone coordinates, JCAP 01 (2022), no. 01 034, [arXiv:2108.11960].
* [44] E. Mitsou, G. Fanizza, N. Grimm, and J. Yoo, Cutting out the cosmological middle man: General Relativity in the light-cone coordinates, Class. Quant. Grav. 38 (2021), no. 5 055011, [arXiv:2009.14687].
* [45] C. Tian, M. F. Carney, J. B. Mertens, and G. Starkman, Accurate relativistic observables from post-processing light cone catalogues, arXiv:2110.00893.
* [46] P. Fleury, F. Nugier, and G. Fanizza, Geodesic-light-cone coordinates and the Bianchi I spacetime, JCAP 06 (2016) 008, [arXiv:1602.04461].
* [47] A. Kehagias, A. Moradinezhad Dizgah, J. Noreña, H. Perrier, and A. Riotto, A Consistency Relation for the Observed Galaxy Bispectrum and the Local non-Gaussianity from Relativistic Corrections, JCAP 08 (2015) 018, [arXiv:1503.04467].
* [48] E. Di Dio, H. Perrier, R. Durrer, G. Marozzi, A. Moradinezhad Dizgah, J. Noreña, and A. Riotto, Non-Gaussianities due to Relativistic Corrections to the Observed Galaxy Bispectrum, JCAP 03 (2017) 006, [arXiv:1611.03720].
* [49] T. Schiavone, E. Di Dio, and G. Fanizza, The skewness of the distance-redshift relation in $\Lambda$CDM, arXiv:2307.13455.
# Feature Selection Tutorial
with Python Examples
Pádraig Cunningham
School of Computer Science
University College Dublin
<EMAIL_ADDRESS>
Bahavathy Kathirgamanathan
School of Computer Science
University College Dublin
<EMAIL_ADDRESS>
Sarah Jane Delany
School of Computer Science
Technological University Dublin
<EMAIL_ADDRESS>
###### Abstract
In Machine Learning, feature selection entails selecting a subset of the
available features in a dataset to use for model development. There are many
motivations for feature selection: it may result in better models, it may
provide insight into the data, and it may deliver economies in data gathering
or data processing. For these reasons feature selection has received a lot of
attention in data analytics research. In this paper we provide an overview of
the main methods and present practical examples with Python implementations.
While the main focus is on supervised feature selection techniques, we also
cover some feature transformation methods.
## 1 Introduction
In data analysis, objects described using multiple features may sometimes be
described using a subset of these features without loss of information.
Identifying these feature subsets is termed feature selection, variable
selection or feature subset selection and is a key process in data analysis.
This paper provides a tutorial introduction to the main feature selection
methods used in Machine Learning (ML). While excellent reviews and evaluations
of feature selection methods already exist [11, 23], our main contribution is
to provide examples of these methods in operation along with links to Python
notebooks that implement these methods.
Feature selection is important because it can deliver a number of benefits:
* •
Better classifiers: The obvious benefit of feature selection is that it will
improve accuracy because redundant or noisy features can damage accuracy.
Perhaps surprisingly, improvements in accuracy can be quite limited because
powerful ML techniques are designed to be robust against noise.
* •
Knowledge discovery: Perhaps the most enduring benefit of feature selection is
the insight it provides. Identifying influential features and features that
are not useful teaches us a lot about the data.
* •
Data Gathering: In domains where data comes at a cost (e.g. Medical Diagnosis,
Manufacturing), identifying a minimal set of features for a classification
task can save money.
* •
Computational Cost: Identifying minimal feature subsets will allow for simpler
models that will cost less to set up and run.
* •
The Curse of Dimensionality: In theory, according to the Curse of
Dimensionality, the amount of data required to build a classifier increases
exponentially with the number of features.
Feature selection is most effective in the context of supervised machine
learning (classification/regression) where the availability of labelled
examples can drive the selection process. The methods we cover are summarised
in Figure 1. Other surveys of feature selection [23, 11] divide feature
selection methods into three categories and we follow the same structure:
* •
Wrappers are feature selection methods where the classifier is _wrapped_ in
the feature selection process. This wrapping allows classification performance
to drive the feature selection process. This has the advantage of tying the
feature selection to classifier performance but this comes with a significant
computational cost as very many classifier variants will be evaluated during
the selection.
* •
Filters cover methods that use criteria other than classifier performance to
guide feature selection. Typically a filter provides a feature ranking and
then a selection policy uses this ranking to select a feature subset.
* •
Embedded methods refer to any method where the feature selection _emerges_ as
a by-product of the classifier training process. For instance, training a
decision tree will almost always select a subset of the available features to
build a tree.
Figure 1: An overview of the feature selection methods covered in this paper.
For completeness, Principal Component Analysis and Linear Discriminant
Analysis (which are not really feature selection methods) are also covered.
For completeness we also cover some dimensionality reduction methods that
transform the data into a reduced dimension space rather than select a subset
of features. In Section 7 we cover Principal Component Analysis (PCA) and
Linear Discriminant Analysis (LDA) for projecting data into lower dimension
spaces. These are not feature selection methods in the sense that the original
feature representation is left behind.
Before working through the details of the feature selection methods, the paper
begins with a short discussion of the implications of The Curse of
Dimensionality. Then the evaluation methodology used in the paper is presented
in Section 3. The main material on feature selection methods is covered under
the categories of Wrapper methods (Section 4), Filter strategies (Section 5)
and Embedded methods (Section 6). Section 7 provides a brief description of PCA and
LDA as already mentioned.
## 2 The Curse of Dimensionality
Before proceeding it is worth briefly discussing the implications of the Curse
of Dimensionality for ML. The term was coined by Richard Bellman in 1961 and
refers to a number of phenomena associated with data described by many
features [1]. For our purposes there are two theoretical issues. The first is
that the number of samples required to build a model may increase
exponentially with the number of features (dimensions). The second is that the
variation in distance between arbitrary points _decreases_ as more dimensions
are added.
The first of these issues is illustrated in Figure 2. This figure shows plots
of 20 random data points in 1D, 2D and 3D. As the dimension increases the data
gets more and more sparse. A consequence of this is that the number of samples
required to cover a phenomenon increases exponentially with dimension. This is
less of a problem in practice because real data is unlikely to occupy all of
the space, instead it will occupy a lower dimension manifold in the space. So
the _intrinsic_ dimension of the data is likely to be considerably less than
the full dimension [5]. The PCA example in Figure 18 also illustrates how the
intrinsic dimension can be less than the number of features.
Figure 2: 20 random data points in 1D, 2D and 3D space. These plots show how
data becomes increasingly sparse as the dimension is increased.
The second issue is that, somewhat paradoxically, the more features used to
describe the data the more similar everything appears. The box plots in Figure
3 illustrate this. The plots show the distribution of Cosine similarities
between a probe point and 1,000 random points in 5D, 10D and 20D spaces. As
the dimension increases the variation in similarity/distance decreases.
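This concentration effect is easy to reproduce numerically. The following minimal sketch (an illustration of ours, not one of the notebooks linked in the Appendix) draws 1,000 random points around a probe point and reports the spread of their Cosine similarities:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (5, 10, 20):
    probe = rng.normal(size=d)
    points = rng.normal(size=(1000, d))
    # Cosine similarity between the probe and each of the 1,000 random points.
    sims = points @ probe / (np.linalg.norm(points, axis=1) * np.linalg.norm(probe))
    print(f"{d:2d}D: std of similarities = {sims.std():.3f}")
```

The standard deviation shrinks as the dimension grows, mirroring the narrowing box plots in Figure 3.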
In addition to these theoretical problems with having more features than is
absolutely necessary, there are also practical problems to be considered. For
most ML algorithms, the computational cost will increase with the number of
features. As will the cost of gathering the data in the first place. So if the
number of features can be reduced then there are strong theoretical and
practical reasons for doing so.
Figure 3: A demonstration of how the variation in Cosine similarity reduces as
the dimension of the data increases.
## 3 Methodology
The datasets used in the examples in this paper are summarised in Table 1. The
Segmentation data set is from the UCI repository
(https://archive.ics.uci.edu/), the Penguins dataset was constructed from a
larger dataset available on GitHub
(https://github.com/allisonhorst/palmerpenguins) and the Harry Potter dataset
was constructed by the authors, based on the popular Top Trumps game
(https://toptrumps.us/collections/harry-potter), to illustrate PCA in
operation. The Segmentation data comes from an image segmentation task on
outdoor images. The ground truth segments the images into seven classes and
the instances are represented by 19 features. This is a good dataset for
feature selection as there is some redundancy in the data. The Penguins
dataset is a three class dataset described by four features [10]. The features
are physical parameters (e.g. bill and flipper dimensions) and are strongly
correlated so there is some redundancy.
Table 1: Data Sets: Summary details.

Name | Samples | Features | Classes
---|---|---|---
Segmentation | 2,310 | 19 | 7
Penguins | 333 | 4 | 3
Harry Potter | 22 | 5 | -
If we wish to get an assessment of generalisation performance for an ML system
that might be deployed then we need to hold back some data for testing (option
(b) in Figure 4). If we wish to assess a few different feature selection
alternatives as part of the model development then these should be tested
within the confines of training data and cross validation is the most
effective way to do this (option (c)). Most of the testing reported in this
paper follows this pattern. Indeed, the evaluation may involve two levels of
cross validation, although this strategy is not used here. It should be
remembered that if the objective is to perform feature selection as part of
the deployment of an ML system then all the available data can be used for
feature selection (option (a) in Figure 4).
Figure 4: Evaluation methodology. (a) If an estimation of generalisation
accuracy is not required then all data can be used for all aspects of model
development. (b) Test data can be held back from training to get an estimate
of generalisation accuracy. (c) Cross validation can be used within the
training data for Feature Selection.
Unless otherwise stated, the classifier used in the evaluations is $k$-Nearest
Neighbour ($k$-NN) [5]. $k$-NN is used because it is probably the classifier
in popular use that is most susceptible to noisy or redundant features, so the
impact of feature selection will be most evident when it is used in evaluations.
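As a concrete illustration of options (b) and (c) in Figure 4, the minimal sketch below uses scikit-learn with a stand-in dataset (load_iris, not one of the datasets in Table 1):

```python
from sklearn.datasets import load_iris  # stand-in for the datasets in Table 1
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Option (b): hold back test data to estimate generalisation accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)

# Option (c): cross-validation within the training data, e.g. to compare
# feature selection alternatives without touching the test set.
cv_acc = cross_val_score(knn, X_train, y_train, cv=5).mean()
print(f"CV accuracy (train): {cv_acc:.3f}")

# Final hold-out estimate on the untouched test set.
print(f"Hold-out accuracy: {knn.fit(X_train, y_train).score(X_test, y_test):.3f}")
```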
Before proceeding we need to introduce the notation that will be used
throughout the paper. Assume we have a dataset $D$ made up of $n$ data
samples. $D=\left<\mathbf{X},\mathbf{y}\right>$ where $\mathbf{y}$ are the
class labels. The examples are described by a set of features $F$ where
$p=|F|$, so there are $n$ objects described by $p$ features. So the dimensions
are $\mathbf{X}_{n\times p}$ and $\mathbf{y}_{n}$. The objective is to
identify a subset $S\subset F$ that captures the important information in the
dataset. In supervised ML the classifiers would work with data represented by
a reduced set of features $\left<\mathbf{X}^{\prime}_{n\times
k},\mathbf{y}\right>$ where $k=|S|$.
## 4 Feature Selection using Wrappers
So the objective with feature selection is to identify a feature subset
$S\subset F$ to represent the data. If $|F|$ is small we could in theory try
out all possible subsets of features and select the best subset. In this case
_‘try out’_ would mean training and testing a classifier using the feature
subset. This would follow the protocol presented in Figure 4 (c) where cross-
validation on the training data would identify a good feature subset and then
this could be tested on the test data. However, the number of possibilities is
$2^{p}$, so exhaustive search quickly becomes infeasible.
Figure 5: Wrappers versus Filters: (a) With Wrappers the classifier is
_wrapped_ in the search process. (b) A Filter strategy uses a separate
evaluation to score features.
Nevertheless this is how a Wrapper feature selection strategy works with the
important modification that the search can be greedy or stochastic rather than
exhaustive. The general idea is shown in Figure 5(a), the classifier is
_wrapped_ in the feature selection process, i.e. classifiers trained using the
feature subsets are used in the search process. The feature subsets will be
evaluated using hold-out testing or cross-validation testing on classifiers
built using the data. The main search strategies used with Wrappers are:
* •
Exhaustive Search evaluates every possible feature subset. If the number of
features to be considered is small it will be possible to consider all feature
combinations. However, if $p>20$ there will be millions of feature subsets to
be considered and an exhaustive search will not be practical.
* •
Sequential Forward Selection (SFS) starts with no features selected and all
classifiers incorporating a single feature are considered (see Figure 6 (a)).
The best of these is selected and then two feature combinations including this
feature are evaluated. This process proceeds, adding the winning feature at
each step, until no further improvements can be made.
* •
Backward Elimination (BE) proceeds in the opposite direction to SFS: it starts
with all features selected, considers the options with one feature deleted,
selects the best of these and continues to eliminate features. Again, the
process is terminated when no improvements can be made.
* •
Stochastic Search methods such as genetic algorithms or simulated annealing
can readily be applied to Wrapper feature selection. Each state can be defined
by a feature mask on which crossover and mutation can be performed [22]. Given
this convenient representation, the use of a stochastic search for feature
selection is quite straightforward although the evaluation of the fitness
function (classifier accuracy as measured by cross-validation) is expensive.
Our exploration of Wrappers will focus on SFS and BE. These are greedy
strategies that explore the search space of possible feature subsets as shown
in Figure 6. SFS starts with an empty set and proceeds forward considering
classifiers built on single features. The best of these is selected and then
pairs of features incorporating this feature are considered. The process could
terminate when the addition of a new feature doesn’t result in any
improvement. As the name suggests, Backward Elimination works in the opposite
direction. It starts with a full set of features (Figure 6 (b)) and eliminates
the least useful feature at each step. For both SFS and BE, the feature
subsets are evaluated using cross-validation on the training data. As stated
in Section 3, the classifier used is $k$-NN.
Figure 6: Examples of feature subset selection using wrappers: (a) Sequential
Forward Selection (b) Backward Elimination.
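A minimal sketch of both searches using scikit-learn's SequentialFeatureSelector is shown below; the data here is synthetic (only features 0 and 3 carry signal), and the linked notebooks in the Appendix remain the reference implementation:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data: 200 samples, 10 features, only features 0 and 3 informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
knn = KNeighborsClassifier(n_neighbors=3)

# Sequential Forward Selection: grow the subset, scoring by cross-validation.
sfs = SequentialFeatureSelector(knn, n_features_to_select=2,
                                direction="forward", cv=5)
sfs.fit(X_train, y_train)
print("SFS selected:", np.flatnonzero(sfs.get_support()))

# Backward Elimination: start from all features, drop the least useful.
be = SequentialFeatureSelector(knn, n_features_to_select=2,
                               direction="backward", cv=5)
be.fit(X_train, y_train)
print("BE selected:", np.flatnonzero(be.get_support()))
```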
Both methods have their own advantages and disadvantages. SFS is inclined to
require less computation as the models being evaluated are smaller; typically
a classifier with a small number of features will take less time to train and
test. SFS is also inclined to select fewer features; this parsimony is typically an
advantage. On the other hand, because BE starts with larger feature sets, it
can do a better job of assessing how features work in combination.
In the Appendix a link is provided to code to run SFS and BE in Python. The
evaluation is on the Segmentation dataset and the results are shown in Figure
7. On the left we see a plot of accuracy on the training set as the SFS
proceeds. In this case the search is allowed to run to the end but it is
evident that accuracy stops improving after seven features.
The overall results for SFS and BE are shown in Figure 7 (b). SFS selects
seven features and 11 are selected by BE. Both feature subsets result in
improved accuracy on the training data but only the SFS subset results in
better accuracy on the test data. Indeed the gap between train and test
accuracy for BE is evidence of overfitting – the selection process has fitted
too closely to the characteristics of the training data at the cost of
generalisation accuracy. Indeed overfitting is recognised to be a problem with
Wrapper-based feature selection [22].
Figure 7: Feature selection example using wrappers. (a) Accuracy on the
training data as Sequential Forward Selection proceeds measured using cross-
validation. (b) Accuracy estimates for feature subsets selected by SFS and BE.
SFS selects 7 features and BE selects 11.
## 5 Filter Strategies
Figure 5 (a) shows how Wrapper strategies use the classification algorithm in
the feature selection process. Figure 5 (b) shows that Filter strategies do
not use the classifier for feature selection, instead a separate evaluation
function is used. The fact that Filters are independent of the classifier is a
mixed blessing. It means that Filters can be much faster than Wrappers but the
selected features may not be in tune with the inductive bias of the
classifier.
In the next subsection we provide some detail on the operation of a basic
Filter strategy. Then we cover the Relief Algorithm and Correlation-Based
Feature Selection, two Filter strategies that have received a lot of attention
in recent years.
### 5.1 Basic Filters
A basic Filter will entail a feature scoring mechanism and then a selection
strategy based on these scores. The scoring mechanism needs to quantify how
much information the feature has about the outcome. The selection strategy
might be:
* •
Select the top ranked $k$ features,
* •
Select top 50%,
* •
Select features with scores $>50\%$ of the maximum score,
* •
Select features with non-zero scores.
In this analysis we consider the Chi-square statistic and information gain for
scoring. The Chi-square statistic is a measure of independence between a
feature and the class label. If samples are organised into a contingency table
as shown in Figure 8, how different are the cell counts from what would be
observed by chance? The data in Figure 8 (a) suggests that handedness is
independent of gender because the proportions are the same. The data in (b)
suggests that gender is predictive of handedness. The Chi-square statistic
allows us to quantify this:
$\chi^{2}=\sum_{i=1}^{m}\frac{(O_{i}-E_{i})^{2}}{E_{i}}$ (1)
The statistic is a sum over the $m$ cells. For each cell we consider the
difference between the observed count $O_{i}$ and the expected count $E_{i}$
if the feature and the class were independent. In Figure 8 (a) this difference
would be zero because the feature and the class are independent. In (b) there
would be a difference so the statistic would be positive. In general, the
greater the dependence the larger the statistic. If the feature values are
numeric rather than categorical then the feature values can be binned to
enable the construction of the contingency table [15].
Figure 8: Two contingency tables showing relationships between handedness and
gender. If handedness is the class then in (a) it is independent of the gender
feature, in (b) there is a dependence.
Information gain is an alternative information-theoretic measure quantifying
the information a feature contains about a class [16]. In Figure 8 (b) by
knowing the gender we _gain_ information about handedness. In a binary
classification scenario, let’s assume the probabilities of positive and
negative outcomes are respectively $p$ and $q$. Then the entropy of a dataset
based on these proportions is:
$H(D)=-p\log_{2}(p)-q\log_{2}(q)$ (2)
then the information gain for any feature $f$ in the dataset in terms of the
class label is:
$IG(D,f)=H(D)-\sum_{v\in values(f)}\frac{|D_{v}|}{|D|}H(D_{v})$ (3)
where $D_{v}$ is the subset of samples in $D$ for which feature $f$ takes value $v$.
As with the Chi-square statistic, information gain (I-Gain) allows us to rank
features for the purpose of feature selection. This is illustrated in Figure 9
(a). This shows the Segmentation features ranked by both measures. A link for
this code is provided in the Appendix. The plot shows the scores sorted by
I-Gain score. It is clear that the scores are well correlated (Pearson
correlation score of 0.86) so feature subsets selected based on these scores
should be reasonably similar.
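A minimal sketch of this kind of filter scoring with scikit-learn is given below; it uses synthetic non-negative data (the Chi-square test expects count-like, e.g. binned, feature values) and mutual_info_classif as an information-gain-style score:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

# Synthetic non-negative data: 300 samples, 8 features, feature 0 informative.
rng = np.random.default_rng(0)
X = rng.random((300, 8))
y = (X[:, 0] > 0.5).astype(int)

chi2_scores, _ = chi2(X, y)                 # Chi-square statistic per feature
igain_scores = mutual_info_classif(X, y)    # information-gain-style score

print("Features ranked by I-Gain:", np.argsort(igain_scores)[::-1])

# Top-k selection policy: keep the k best-scoring features.
X_top = SelectKBest(mutual_info_classif, k=3).fit_transform(X, y)
print("Reduced shape:", X_top.shape)
```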
Figure 9: Feature selection using filters. (a) I-Gain and Chi-square scores
for the 19 features in the segmentation dataset. (b) Accuracy estimates for
top-$n$ features.
This does prove to be the case when we look at the performance of classifiers
built with feature subsets based on these rankings. In Figure 9 (b) we see the
results of a range of top $k$ selection policies ($k=3,6,10,15$). At $k=10$
both scores select a feature subset that produces accuracy on the test set
equivalent to that obtained with the full feature set. The evaluation strategy
here conforms to pattern (b) in Figure 4: the feature scoring is done using
the training data and the selected subsets are then tested on the test set.
#### 5.1.1 Combining Filters and Wrappers
The results in Figure 9 show that initially performance improves as features
are added based on the ranking generated by the filters. However, this
improvement tails off and eventually no improvement results from the addition
of ‘poorer’ features. Indeed these features may damage performance. This
suggests a hybrid Filter/Wrapper strategy whereby a Filter is used to rank the
features and then a Wrapper is used to identify the optimum feature subset.
This hybrid strategy is shown in operation in Figure 10. The dataset has been
split into train and test sets of equal size. The training set is used to
estimate I-Gain scores for all features, these are shown in blue. Then
classifiers are trained with feature sets of increasing size. The performance
of these feature subsets are scored on the training set using cross validation
and on the test set using hold-out testing. After the addition of nine
features the performance on the training set stops improving (indicated with a
green X). This hybrid strategy would select this as the optimum feature
subset.
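The hybrid strategy can be sketched in a few lines; the following illustration uses synthetic data and mutual_info_classif as the filter score, with cross-validated $k$-NN accuracy as the wrapper criterion:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data: 12 features, only features 0 and 5 carry signal.
rng = np.random.default_rng(0)
X = rng.random((300, 12))
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)

# Filter step: rank all features by an I-Gain-style score.
ranking = np.argsort(mutual_info_classif(X, y))[::-1]

# Wrapper step: grow subsets along the ranking, scoring by cross-validation.
knn = KNeighborsClassifier(n_neighbors=3)
scores = [cross_val_score(knn, X[:, ranking[:k + 1]], y, cv=5).mean()
          for k in range(len(ranking))]
best_k = int(np.argmax(scores)) + 1
print(f"Best subset: top {best_k} features, CV accuracy {max(scores):.3f}")
```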
Figure 10: A hybrid Wrapper-Filter strategy. Features are ranked using I-Gain
and then the performance of subsets based on this ranking is evaluated. The
training set accuracy of the best performing feature subset is marked with an
X.
### 5.2 Relief Algorithm and Variants
The Relief family of algorithms deserve mention because, as Filter methods,
they have the advantage of speed while scoring features in the context of
other features [17, 18]. Relief algorithms belong to the $k$-NN paradigm in
ML. A $k$-NN analysis is used to score or weight features. The idea is to take
each sample in the dataset (or a subset) and find its nearest neighbour of the
same class and nearest unlike neighbour - these are termed the nearHit and the
nearMiss. Then the general principle is:
* •
For the nearMiss ($nM$) increment the feature weights; weights for unmatching
features will be incremented more.
* •
For the nearHit ($nH$) decrement the feature weights; weights for unmatching
features will be decremented more.
The idea is that this will pull matching instances closer together and push
unmatching instances apart. This is illustrated in the 2D example shown in
Figure 11. The weights should be adjusted to bring $x$ and $nH$ closer
together while pushing $nM$ away. Comparing $x$ and $nM$, they match more on
f2 than on f1 so the weight for f1 will be incremented more. Considering $x$
and $nH$ the opposite happens, f2 is decremented more than f1, so the overall
effect is that f1 scores better than f2. This is achieved with the following
update function where $x$ is the query, $nM$ is the nearMiss, $nH$ is the
nearHit and $w_{f}$ is the weight for feature $f$.
$w_{f}\leftarrow w_{f}-(x_{f}-nH_{f})^{2}+(x_{f}-nM_{f})^{2}$ (4)
This achieves what we want because the feature value differences for matching
features will be smaller than those for unmatching features.
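A minimal sketch of the basic Relief update in Eq. (4) is given below; it assumes numeric features on comparable scales and binary classes, and omits the refinements of the ReliefF variants described in [25]:

```python
import numpy as np

def relief_weights(X, y, n_iters=100, seed=0):
    """Basic Relief sketch: repeatedly pick a sample, find its nearHit and
    nearMiss, and apply the update of Eq. (4). Assumes numeric features."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iters):
        i = rng.integers(n)
        x, label = X[i], y[i]
        same = np.flatnonzero((y == label) & (np.arange(n) != i))
        diff = np.flatnonzero(y != label)
        nH = X[same[np.argmin(np.linalg.norm(X[same] - x, axis=1))]]
        nM = X[diff[np.argmin(np.linalg.norm(X[diff] - x, axis=1))]]
        w += -(x - nH) ** 2 + (x - nM) ** 2  # Eq. (4), elementwise
    return w  # 'good' features end up with high weights

# Usage: features 0 carries signal, feature 1 is noise.
rng = np.random.default_rng(1)
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)
print(relief_weights(X, y).round(2))
```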
Figure 11: An illustration of the principles underlying Relief in a 2D space.
$x$ is the query, $nM$ is the nearMiss, $nH$ is the nearHit. f1 should
be preferred over f2.
This process is repeated multiple times to produce a set of feature weights
with ‘good’ features having high weights. Implementation details of the Relief
family of algorithms are provided by Urbanowicz _et al._ [25]. A link to
sample code for running Relief is provided in the Appendix. The results are
summarised in Figure 12. On the left we see Relief scores for the segmentation
dataset compared with I-Gain scores. The Spearman correlation is 0.91 so both
scores are well correlated as is evident from the plot. The plot suggests that
there is a clear partition between the first 11 features and the remainder so
we test the accuracy of a classifier built using just these 11 features. The
results on the right show that this does result in a small improvement in
accuracy compared with using all features.
Figure 12: Feature selection using ReliefF. (a) I-Gain and ReliefF scores for
the 19 features in the segmentation dataset. (b) Accuracy estimates using All
features and the top 11 as selected using ReliefF.
### 5.3 Correlation-Based Feature Selection
Correlation-Based Feature Selection (CFS) is a filter strategy that relies on
the principle that "A good feature subset is one that contains features highly
correlated with (predictive of) the class, yet uncorrelated with (not
predictive of) each other" [12]. The feature-class correlation indicates how
representative of the class that feature is while the feature-feature
correlation indicates any redundancies between the features. CFS works by
assigning a merit value based on feature-class and feature-feature
correlations to each feature subset which becomes the measure by which subsets
are evaluated.
Figure 13: Feature Selection using CFS. The graphs on top show how the merit
scores change as the selected features are added to the selected set. The bar
charts show accuracy scores for the different feature subsets. In both cases
the subset with the highest merit score does not have the highest accuracy.
The merit score for a feature subset $S$ is:
$M_{S}=\frac{k\overline{r_{cf}}}{\sqrt{k+k(k-1)\overline{r_{ff}}}}$ (5)
where $k=|S|$, $\overline{r_{ff}}$ is the average correlation between the
features values in the subset ($f\in S)$ and $\overline{r_{cf}}$ is the
average correlation between the values for the selected features and the class
label.
The correlations can be measured using techniques such as symmetrical
uncertainty based on information gain, feature weighting based on the Gini-
index, or by using the minimum description length (MDL) principle [12].
Information gain based methods work well and hence the symmetrical uncertainty
score is commonly used in CFS implementations.
CFS can work with any search strategy in a similar way to wrapper strategies,
but rather than evaluating based on accuracy as in Section 4, the evaluation
is based on the merit score. For example, when using sequential forward
selection, the merit of each single feature is evaluated and the best is
selected. All two-feature combinations which include this first feature are
then evaluated using the merit score, and so on until there is no further
improvement in the merit score.
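The merit score itself is a one-liner; the sketch below implements Eq. (5) directly, with illustrative correlation values rather than ones estimated from data:

```python
import numpy as np

def merit(r_cf, r_ff, k):
    """Merit score of a k-feature subset (Eq. 5), given the average
    feature-class correlation r_cf and the average feature-feature
    correlation r_ff of the subset."""
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

# e.g. a 3-feature subset with strong class correlation and low redundancy
print(round(merit(r_cf=0.6, r_ff=0.2, k=3), 2))  # ~0.88
```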
CFS shows a tendency to favour small subsets with moderate accuracy. The CFS
implementation provided by [20] utilises a Best First search which continues
the search until five consecutive non-improving feature subsets are found.
Therefore, even after the merit starts decreasing, the search continues.
Figure 13 shows the benefit of this as it can be seen that for both the
segmentation and the penguin datasets, the hold-out accuracy from using the
extra features improves in comparison to evaluating the performance at the
point where the merit score is highest. In both datasets, CFS has led to a drop in
the hold-out accuracy compared to using the full feature set; however, it has
selected a smaller subset than other techniques such as Relief-F. Hence CFS
may be a good technique for feature reduction. A link to sample code for
running CFS is provided in the Appendix.
## 6 Embedded Methods
In this section we cover feature selection methods that _emerge_ naturally
from the classification algorithm or arise as a side effect of the algorithm.
We will see that with Decision Trees and Logistic Regression feature selection
can be an integrated part of the model building process. Then with Random
Forest, we will see how feature importance scores can easily be generated from
the model.
### 6.1 Decision Trees
The construction of a Decision Tree from a data set will very often entail
feature selection as some of the features will not appear in the tree.
Features not included in the tree are effectively selected out. We show an
example of this on the Penguins dataset in Figure 14.
Figure 14: A decision tree for the Penguins dataset. While the data is
described by four features only three are selected.
In this example the dataset has been divided 50:50 into train and test sets.
This tree has been trained on the training data and has 93% accuracy on the
test data (see links in Appendix). This dataset has four features,
flipper_length, bill_length, bill_depth and body_mass. It is clear from the
tree in Figure 14 that three of the four features are selected, body_mass is
not selected.
This tree has been constructed with the default scikit-learn parameters so
there is no pruning. It is normal in Decision Tree learning to constrain (i.e.
prune) the size of the tree to prevent overfitting. The use of pruning to
prevent overfitting will push the feature selection further, as even fewer
features will be selected in smaller trees.
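The features retained by a tree can be read directly from the fitted model; a minimal sketch with synthetic data (features 1 and 3 are pure noise) is given below:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic data: 4 features, only features 0 and 2 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Features actually used at split nodes (negative entries mark leaves).
used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
print("Features appearing in the tree:", used)
print("Importances:", tree.feature_importances_.round(3))
```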
### 6.2 Logistic Regression: Lasso
In multivariate linear models such as linear regression or logistic
regression, feature selection can be achieved as a side effect of
regularization. In ML regularization refers to mechanisms designed to simplify
models in order to prevent overfitting. Thus regularization can cause features
to be deselected. Elastic net and Lasso are popular regularization methods for
linear models. Here we will provide an overview of how Lasso works [24] and
present examples of Lasso in operation. Starting with the basics, a
multivariate regression model works as follows:
$\begin{split}y=f(\mathbf{x})&=\beta_{0}+\sum_{i=1}^{p}\beta_{i}x_{i}\\&=\beta_{0}+\mathbf{\beta x}\end{split}$ (6)
The dependent variable $y$ is a linear function of the input features; for
each feature $x_{i}$ the weight of that feature is determined by the
corresponding $\beta_{i}$ parameter. For binary classification problems ([0,1]
labels) we can use logistic regression where the dependent variable is the log
odds that an outcome variable is 1. If $pr$ is the probability that the label
is 1 then $\frac{pr}{1-pr}$ is the odds.
$ln\left(\frac{pr}{1-pr}\right)=\beta_{0}+\mathbf{\beta x}$ (7)
So logistic regression provides a class probability:
$pr=\frac{1}{1+e^{-(\beta_{0}+\mathbf{\beta x})}}$ (8)
Regularization prevents overfitting by limiting model capacity; this is done
by limiting the size of weights. The two options are L1 or L2 regularization:
$\text{L}_{1}:\quad\sum_{i=1}^{p}|\beta_{i}|<t$ (9)
$\text{L}_{2}:\quad\sum_{i=1}^{p}\beta_{i}^{2}<t$ (10)
So the $\beta$ parameters in (8) are fitted to the training data subject to
the constraints in (9) or in (10). It transpires that when an L1
regularization is used the weaker weights will go to zero, i.e. those features
will be deselected. There is an excellent explanation of _why_ this happens in
the original Lasso paper by Tibshirani [24].
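The deselection effect is easy to observe with scikit-learn's L1-penalised logistic regression; the sketch below uses synthetic data in which 8 of the 10 features are noise:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary problem: only features 0 and 1 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# L1-penalised logistic regression; smaller C means stronger regularization.
for C in (1, 10):
    lr = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    kept = np.flatnonzero(lr.coef_[0] != 0)  # non-zero weights survive
    print(f"C={C}: {len(kept)} features retained ->", kept)
```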
(a) Penguins. (b) Segmentation.
Figure 15: The impact of Lasso on the Segmentation and Penguins datasets.
Lasso reduces the magnitude of the $\beta$ parameters; some parameters get
reduced to zero. Results for Lasso with the default regularization (C=1) and
milder regularization (C=10) are shown.
To demonstrate this on our sample datasets we reduce them to binary
classification problems to make the overall process more transparent. However,
feature selection using Lasso also works with multiclass problems. The results
are shown in Figures 15 and 16. Because the datasets have been reduced to just
two classes (Cement and Window for Segmentation and Adelie and Chinstrap for
Penguins) the accuracies are higher than for the multi-class scenario. The
extent of the feature reduction with Lasso is controlled by the regularization
parameter C. Results are included for two levels of regularization, C=10 and
C=1. C=10 results in less regularization so more features are retained. In
both cases the default regularization results in too much feature reduction
and generalization accuracy is reduced. For the Penguins dataset just two
features are retained while three are retained in the Segmentation dataset
(see Figure 15). The milder regularization retains more features resulting in
no loss of generalization accuracy.
(a) Penguins. (b) Segmentation.
Figure 16: The impact of Lasso on train and test accuracy. Results for Lasso
with the default regularization (C=1) and milder regularization (C=10) are
shown.
### 6.3 Random Forest Feature Importance
Ensembles, the idea that a committee of classifiers will be more accurate than
a single classifier, are central to modern ML. It has been known for hundreds
of years that a committee of decision makers will be better than an individual
[6]. In ML an ensemble of classifiers voting on an outcome will be more
accurate than a single classifier provided some conditions are met. Two
ensemble methods, gradient boosting [8] and random forests [3] represent the
state of the art in supervised ML. Breiman [3] has shown that, as a
supplementary benefit of the effort required to build the random forest, the
ensemble offers estimates of generalisation accuracy and feature importance.
In order to explain how these estimates of feature importance work we need to
go into some detail on how random forests work.
In order for an ensemble to be effective, there needs to be some diversity
among the ensemble members. If not the ensemble will be no better than the
individual members (classifiers). If we have a dataset $\mathbf{X}_{n\times
p}$ of $n$ samples described by $p$ features with class labels
$\mathbf{y}_{n}$, then we have two basic strategies for training diverse
classifiers:
* •
Bagging: We can train each classifier with different training sets drawn at
random with replacement from the available data. If these ‘bagged’ training
sets are of size $n$ then some samples will be selected multiple times and
some not at all. For each training set, roughly 37% of samples will not be
selected, i.e. out-of-bag (OOB).
* •
Random Subspacing: The other strategy for ensuring diversity is to work with
subsets of features rather than subsets of samples. With random forest, only a
subset of features are available for consideration at each split point in the
construction of a tree. This subset might be quite small, for instance
$\sqrt{p}$.
Random forest (RF) uses both of these strategies together to ensure diversity.
A typical RF could contain 1,000 trees. The samples that are OOB in these
trees can be used to get an estimate of generalisation accuracy for the RF
without the need to hold back test data from the training process. These OOB
samples can be used to generate feature importance scores. For each of the $p$
features in the OOB samples for a given tree, the values for that feature can
be randomly permuted and the revised classifications for those samples can be
saved. Then, for the whole ensemble, the impact of this feature value
shuffling can be assessed.
If a feature is not important, then the impact of this shuffling will be
minimal. For predictive features this shuffling will have a significant
impact. Furthermore, and this is important for feature selection, this feature
importance is being assessed in the context of other features. This contrasts
with the basic filter methods described in section 5.1 where features are
scored in isolation.
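A minimal sketch of these ideas with scikit-learn is given below; note that permutation_importance here shuffles features on a held-out set rather than on the OOB samples, so it is a close stand-in for, not a reproduction of, Breiman's OOB procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: features 0 and 3 carry signal, the rest are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X_tr, y_tr)
print("OOB accuracy estimate:", round(rf.oob_score_, 3))

# Permutation importance: shuffle each feature and measure the accuracy drop.
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print("Importances:", imp.importances_mean.round(3))
```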
Figure 17: Random forest feature importance. (a) RF feature importance scores
for the Penguins dataset compared with I-Gain. (b) RF and I-Gain scores for
the segmentation dataset.
RF feature importance scores for the Penguin and Segmentation datasets are
shown in Figure 17. I-Gain scores are also shown and these correlate well with
the RF scores (0.8 for Penguins and 0.92 for Segmentation). For the Penguins
dataset, the RF score ranks flipper_length highest. Because RF feature
importance considers features in context, this may indicate that this feature
contains information not available in the other features. The same may be true
for the HUE_MEAN and EXGREEN_MEAN features in the Segmentation data which are
also given higher rankings.
## 7 Feature Transformations
So far we have concentrated on feature ranking and feature subset selection
methods, as these are the main focus of the paper. However, it is also worth
mentioning feature transformation methods such as Principal Component Analysis
(PCA), as these are also used for dimension reduction.
The dominant feature transformation technique is PCA, which transforms the
data into a reduced space that captures most of the variance in the data. PCA
is an unsupervised technique in that it does not take class labels into
account. By contrast, Linear Discriminant Analysis (LDA) seeks a
transformation that maximises between-class separation (Section 7.2).
As with feature selection we are concerned with datasets of $n$ objects
described by $p$ features. Unfortunately, with feature transformation methods,
it is not unusual to represent the data as a feature-object matrix
$\mathbf{X}_{p\times n}$ where each column represents an object. This
contrasts with the object-feature format $\mathbf{X}_{n\times p}$ popular in
supervised ML where each row is an object. For consistency with the rest of
this paper we will stick with the object-feature format $\mathbf{X}_{n\times
p}$. The objective with Feature Transformation is to transform the data into
another set of features $F^{\prime}$ where $k=|F^{\prime}|$ and $k<p$, i.e.
$\mathbf{X}_{n\times p}$ is transformed to $\mathbf{X^{\prime}}_{n\times k}$.
Typically this is a linear transformation $\mathbf{W}_{p\times k}$ that will
transform each $\mathbf{x}_{i}$ to $\mathbf{x}^{\prime}_{i}$ in $k$
dimensions.
$\mathbf{x}^{\prime}_{i}=\mathbf{x}_{i}\mathbf{W}$ (11)
### 7.1 Principal Component Analysis
In PCA the transformation described in Equation 11 is achieved so that feature
$f_{1}^{\prime}$ is in the dimension in which the variance on the data is
maximum, $f_{2}^{\prime}$ is in an orthogonal dimension where the remaining
variance is maximum and so on. Central to the whole PCA idea is the covariance
matrix of the data [13]. The diagonal terms in $\mathbf{C}$ capture the
variance in the individual features and the off-diagonal terms quantify the
covariance between the corresponding pairs of features. The objective with PCA
is to transform the data so that the covariance terms are zero, i.e. the new
dimensions are independent. The overall process is as follows:
1. Calculate the means of the columns of X.
2. Subtract the column means from each row of X to create the _centred matrix_ Z.
3. Calculate the covariance matrix $\mathbf{C}=\frac{1}{n-1}\mathbf{Z}^{\mathsf{T}}\mathbf{Z}$.
4. Calculate the eigenvectors and eigenvalues of the covariance matrix $\mathbf{C}$.
5. Examine the eigenvalues in descending order to determine the number of dimensions $k$ to be retained - this is the number of principal components.
6. The top $k$ eigenvectors make up the columns of the transformation matrix $\mathbf{P}$, which has dimension $(p\times k)$.
7. The data is transformed by $\mathbf{X}^{\prime}=\mathbf{Z}\mathbf{P}$, where $\mathbf{X}^{\prime}$ has dimension $(n\times k)$.
The $i^{th}$ diagonal entry in $\mathbf{C}$ quantifies the variance of the
data in the direction of the corresponding principal component. Dimension
reduction is achieved by discarding the lesser principal components, i.e.
$\mathbf{P}$ has dimension $(p\times k)$ where $k$ is the number of principal
components retained.
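The seven steps translate almost line for line into NumPy; the sketch below uses random stand-in data with the same shape as the Harry Potter example ($n=22$, $p=5$) rather than the values in Table 2:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 5))                # n=22 objects, p=5 features

Z = X - X.mean(axis=0)                      # steps 1-2: centre the data
C = (Z.T @ Z) / (len(X) - 1)                # step 3: covariance matrix
evals, evecs = np.linalg.eigh(C)            # step 4: eigen-decomposition
order = np.argsort(evals)[::-1]             # step 5: sort by variance explained
k = 2
P = evecs[:, order[:k]]                     # step 6: top-k eigenvectors, (p x k)
X_prime = Z @ P                             # step 7: transformed data, (n x k)
print("Variance explained by top 2 PCs:",
      (evals[order][:k].sum() / evals.sum()).round(2))
```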
We illustrate PCA in operation with the example shown in Table 2 and Figure
18. The code for this is linked in the Appendix. This example is based on the
Top Trumps children’s game. Each object is a card representing a Harry Potter
character described by five features. In this dataset there are 22 cards so
$n=22$ and $p=5$. Figure 18(a) shows the variance in the data captured by the
first four principal components (PCs). The first two PCs together account for
81% of the variance in the data so when in Figure 18(b) we plot the data in
terms of these two PCs most of the variance in the data is retained.
Table 2: Sample Harry Potter data for use in PCA.

Name | Magic | Cunning | Courage | Wisdom | Temper
---|---|---|---|---|---
Harry Potter | 62 | 21 | 42 | 26 | 7
Hermione Granger | 60 | 16 | 40 | 73 | 2
Ron Weasley | 45 | 14 | 40 | 22 | 4
Prof. Dumbledore | 105 | 24 | 39 | 82 | 0
Prof. Snape | 85 | 24 | 19 | 71 | 7
… | … | … | … | … | …
Figure 18: PCA on the Harry Potter dataset shown in Table 2. (a) The variance
explained by the first four principal components (original data has five
features). (b) A 2D plot of the data in the first two principal components.
While PCA is the established method for unsupervised dimension reduction,
there are other methods also in use. Singular Value Decomposition (SVD), a
closely related matrix factorization method, is popular in text analytics [4],
and Latent Dirichlet Allocation [2] is popular for topic modelling.
### 7.2 Linear Discriminant Analysis
PCA on the Penguins dataset is shown in Figure 19(a). Given that PCA is
designed to project the data into dimensions that capture the variance in the
data, it should not be surprising that it does not do a great job of separating
the classes. Figure 19(b) shows Linear Discriminant Analysis (LDA) on the same
dataset. LDA takes the class labels into account and seeks a projection that
maximises the separation between the classes, i.e. a transformation that
maximises between-class separation and minimises within-class separation. To
do this we define two scatter matrices, $\mathbf{S}_{B}$ for between-class
separation and $\mathbf{S}_{W}$ for within-class separation:
$\mathbf{S}_{B}=\sum_{c\in C}n_{c}(\mu_{c}-\mu)(\mu_{c}-\mu)^{\mathsf{T}}$ (12)

$\mathbf{S}_{W}=\sum_{c\in C}\sum_{j:y_{j}=c}(x_{j}-\mu_{c})(x_{j}-\mu_{c})^{\mathsf{T}}$ (13)

Figure 19: PCA and LDA on the Penguins dataset. PCA seeks to maximise the
variance captured by the two components whereas LDA seeks to maximise the
separation between the classes.
where $n_{c}$ is the number of objects in class $c$, $\mu$ is the mean of all
examples and $\mu_{c}$ is the mean of all examples in class $c$:
$\mu=\frac{1}{n}\sum_{i=1}^{n}x_{i}\qquad\mu_{c}=\frac{1}{n_{c}}\sum_{j:y_{j}=c}x_{j}$
(14)
The components within these summations $\mu,\mu_{c},x_{j}$ are vectors of
dimension $p$ so $\mathbf{S}_{B}$ and $\mathbf{S}_{W}$ are matrices of
dimension $p\times p$. The objectives of maximising between-class separation
and minimising within-class separation can be combined into a single
maximisation called the Fisher criterion [7, 9]:
$\mathbf{W}_{LDA}=\arg\max_{\mathbf{W}}\frac{|\mathbf{W}^{\mathsf{T}}\mathbf{S}_{B}\mathbf{W}|}{|\mathbf{W}^{\mathsf{T}}\mathbf{S}_{W}\mathbf{W}|}$
(15)
i.e. find $\mathbf{W}\in\mathbb{R}^{p\times k}$ so that this fraction is
maximised ($|A|$ denotes the determinant of matrix $A$). This matrix
$\mathbf{W}_{LDA}$ provides the transformation described in Equation 11. While
the choice of $k$ is again open to question, it is sometimes selected to be
$k=|C|-1$, i.e. one less than the number of classes in the data. Solving the
optimization problem presented in Equation 15 is a research topic in its own
right [21, 14] so we won't explore it in more detail here; a standard approach
takes the top $k$ eigenvectors of the generalized eigenproblem
$\mathbf{S}_{B}\mathbf{w}=\lambda\mathbf{S}_{W}\mathbf{w}$.
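Below is a minimal sketch of that route using NumPy and SciPy, assuming an object-feature matrix X and a label vector y; all names are illustrative and $\mathbf{S}_{W}$ is assumed to be invertible (which can fail for small samples, see [14]).

```python
import numpy as np
from scipy.linalg import eigh

def lda_transform_matrix(X, y, k):
    """Builds S_B and S_W (Equations 12-14) and returns the p x k
    matrix W_LDA maximising the Fisher criterion (Equation 15)."""
    p = X.shape[1]
    mu = X.mean(axis=0)
    S_B = np.zeros((p, p))
    S_W = np.zeros((p, p))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        d = (mu_c - mu).reshape(-1, 1)
        S_B += len(Xc) * (d @ d.T)            # Equation 12
        S_W += (Xc - mu_c).T @ (Xc - mu_c)    # Equation 13
    # generalized eigenproblem S_B w = lambda S_W w; eigh returns
    # eigenvalues in ascending order, so keep the top k eigenvectors
    eigvals, eigvecs = eigh(S_B, S_W)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]
```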
In addition to projecting the data into a reduced dimension space, LDA can also
be used for classification. In the LDA space each class $c\in C$ is modelled
by a multivariate Gaussian distribution $P(\mathbf{X}|\mathbf{y}=c)$, so a
query sample $x_{i}$ can be assigned to the class $c_{LDA}$ for which the
probability $P(y_{i}=c|x_{i})$ is largest. This is effectively a Bayes
classifier over these class-conditional Gaussians:

$c_{LDA}=\arg\max_{c\in C}P(y_{i}=c|x_{i})=\arg\max_{c\in C}P(x_{i}|y_{i}=c)P(y_{i}=c)$ (16)
In the code linked in the Appendix, LDA has 97% accuracy on the hold-out set.
Half the Penguins data is used to build the LDA model shown in Figure 19 and
the other half is used for testing.
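A sketch of this usage with the scikit-learn LDA implementation (as in the FS-LDA notebook) might look as follows; the random stand-in data replaces the Penguins dataset and the 50/50 split mirrors the evaluation described above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# stand-in for the Penguins data: 3 classes, 4 numeric features
X = np.random.rand(300, 4)
y = np.random.randint(0, 3, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2)    # k = |C| - 1 = 2
X_tr_2d = lda.fit_transform(X_tr, y_tr)             # projection as in Fig. 19(b)
print("hold-out accuracy:", lda.score(X_te, y_te))  # classification via Eq. 16
```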
## 8 Discussion & Recommended Strategies
If the objective is to identify an effective feature selection strategy then
there are many methods to choose from. Even though these methods have
different inductive biases we generally see good correlations between the
feature rankings on the two datasets we consider. This shows that all these
methods have some merit in identifying good features. Our objective is not to
identify a single best strategy as this will depend to some extent on the
data. Given this, we propose the following methodology for feature selection
on a new dataset.
1. Preliminary Analysis: Use RF feature importance to rank all features (e.g. as shown in Figure 17). This will provide some insight into what features are important and it may also indicate features that can be dropped from further consideration. We recommend the RF method as it considers features in context. If the objective of the exercise is to gain an insight into the data then the analysis may stop at this point.
2. Subset Selection: If the objective is to identify an effective subset for classification then a subset selection strategy is required. The main argument _against_ a Wrapper strategy is the computational cost. With advances in computing resources this is now less of an issue, so we recommend a Wrapper strategy as described in Section 4. If the number of features still in consideration is not high then BE should be considered. If the set of possible features is large then SFS may be the pragmatic choice. The hybrid strategy described in Section 5.1.1 could also be considered.
So our overall recommendation for feature subset selection is to use a Wrapper
strategy. This would be in tune with the current model selection protocols in
ML [19]. However, the recommendation for Preliminary Analysis is also
important. Early in the analysis of new data it will be helpful to use a
Filter strategy to gain some insight into the data. This Filter analysis will
be independent of any specific classifier models that might be used
subsequently.
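A minimal sketch of this two-step methodology, assuming scikit-learn for the RF ranking and mlxtend for the Wrapper (as in the FS-Random-Forest and FS-Wrappers notebooks); the stand-in data, the estimator choices, and the k_features value are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector

X = np.random.rand(200, 10)                 # stand-in dataset
y = np.random.randint(0, 2, 200)

# Step 1 - Preliminary Analysis: rank all features by RF importance
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("RF ranking:", np.argsort(rf.feature_importances_)[::-1])

# Step 2 - Subset Selection: SFS Wrapper (from mlxtend, as in FS-Wrappers)
sfs = SequentialFeatureSelector(KNeighborsClassifier(), k_features=5,
                                forward=True, scoring='accuracy', cv=5)
sfs.fit(X, y)
print("selected subset:", sfs.k_feature_idx_)
```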
## 9 Conclusions
The objective for this tutorial paper was to provide an overview of popular
feature selection methods, describing how they work and providing links to
Python implementations. It was not our intention to offer a comparative
evaluation to identify the best methods; that is done very well elsewhere [11,
23]. Instead, our objective was to provide a stepping stone to help
researchers get started with feature selection on their own datasets.
In the Introduction we listed objectives for feature selection other than to
improve classifier performance. Indeed, the examples in this paper suggest
that feature subset selection may not have a big impact on classifier
performance; this is borne out by other studies [11]. Instead, feature
selection can deliver other benefits through savings in data gathering and
model execution. Perhaps the biggest benefit of feature selection is the
insight it offers into what is important in the data under analysis.
## Acknowledgements
This publication has emanated from research conducted with the financial
support of Science Foundation Ireland under Grant numbers 16/RC/3872 and
18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY
public copyright licence to any Author Accepted Manuscript version arising
from this submission.
## References
* [1] Richard E Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, 1961.
* [2] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
* [3] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
* [4] Pádraig Cunningham. Dimension reduction. In Machine learning techniques for multimedia, pages 91–112. Springer, 2008.
* [5] Padraig Cunningham and Sarah Jane Delany. k-Nearest Neighbour Classifiers: 2nd Edition (with Python examples), 2020.
* [6] Marie Jean Antoine Nicolas de Caritat and Marquis De Condorcet. Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix. L’imprimerie royale, 1785.
* [7] Ronald A Fisher. The use of multiple measurements in taxonomic problems. Annals of eugenics, 7(2):179–188, 1936.
* [8] Jerome H Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378, 2002.
* [9] Keinosuke Fukunaga. Introduction to statistical pattern recognition. Elsevier, 2013.
* [10] Kristen B Gorman, Tony D Williams, and William R Fraser. Ecological sexual dimorphism and environmental variability within a community of antarctic penguins (genus pygoscelis). PloS one, 9(3):e90081, 2014.
* [11] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. Journal of machine learning research, 3(Mar):1157–1182, 2003.
* [12] Mark A Hall. Correlation-based Feature Selection for Machine Learning. PhD thesis, The University of Waikato, 1999.
* [13] Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal of educational psychology, 24(6):417, 1933.
* [14] Rui Huang, Qingshan Liu, Hanqing Lu, and Songde Ma. Solving the small sample size problem of lda. In Object recognition supported by user interaction for service robots, volume 3, pages 29–32. IEEE, 2002.
* [15] Xin Jin, Anbang Xu, Rongfang Bie, and Ping Guo. Machine learning techniques and chi-square feature selection for cancer classification using sage gene expression profiles. In International Workshop on Data Mining for Biomedical Applications, pages 106–115. Springer, 2006.
* [16] John D Kelleher, Brian Mac Namee, and Aoife D’arcy. Fundamentals of machine learning for predictive data analytics: algorithms, worked examples, and case studies. MIT Press, 2020.
* [17] Kenji Kira, Larry A Rendell, et al. The feature selection problem: Traditional methods and a new algorithm. In AAAI, volume 2, pages 129–134, 1992.
* [18] Igor Kononenko. Estimating attributes: Analysis and extensions of relief. In European conference on machine learning, pages 171–182. Springer, 1994.
* [19] Arun Kumar, Robert McCann, Jeffrey Naughton, and Jignesh M Patel. Model selection management systems: The next frontier of advanced analytics. ACM SIGMOD Record, 44(4):17–22, 2016.
* [20] Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P Trevino, Jiliang Tang, and Huan Liu. Feature selection: A data perspective. ACM Computing Surveys (CSUR), 50(6):94, 2018.
* [21] Wei Liu, Yunhong Wang, Stan Z Li, and Tieniu Tan. Null space approach of fisher discriminant analysis for face recognition. In International Workshop on Biometric Authentication, pages 32–44. Springer, 2004.
* [22] John Loughrey and Pádraig Cunningham. Overfitting in wrapper-based feature subset selection: The harder you try the worse it gets. In International Conference on Innovative Techniques and Applications of Artificial Intelligence, pages 33–43. Springer, 2004.
* [23] Luis Carlos Molina Félix, Luis Antonio Belanche Muñoz, and M Àngela Nebot Castells. Feature selection algorithms: a survey and experimental evaluation. In 2002 IEEE International Conference on Data Mining, ICDM 2002: 9-12 December 2002, Maebashi City, Japan: proceedings, pages 306–313. Institute of Electrical and Electronics Engineers (IEEE), 2002.
* [24] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
* [25] Ryan J Urbanowicz, Randal S Olson, Peter Schmitt, Melissa Meeker, and Jason H Moore. Benchmarking relief-based feature selection methods for bioinformatics data mining. Journal of Biomedical Informatics, 85:168–188, 2018.
## Appendix I: Python Code
The GitHub repository associated with this paper
(https://github.com/PadraigC/FeatSelTutorial) contains the following Python
Notebooks:
* FS-Wrappers: Code for SFS and BE Wrappers from mlxtend.
* FS-Filters: Code for using I-Gain and Chi-square Filters from scikit-learn.
* FS-ReliefF: Code for using ReliefF Filters from skrebate.
* FS-D-Tree: Building D-Trees with embedded feature selection using scikit-learn.
* FS-Lasso: Feature selection for Logistic Regression using scikit-learn.
* FS-CFS: Correlation-Based feature subset selection.
* FS-Random-Forest: Feature importance from Random Forest using scikit-learn.
* FS-PCA: Principal Component Analysis using the PCA implementation in scikit-learn.
* FS-LDA: Linear Discriminant Analysis using the LDA implementation in scikit-learn.
# A novel Fourier neural operator framework for classification of multi-sized
images: Application to 3D digital porous media
Ali Kashefi and Tapan Mukerji
###### Abstract
Fourier neural operators (FNOs) are invariant with respect to the size of
input images, and thus images with any size can be fed into FNO-based
frameworks without any modification of network architectures, in contrast to
traditional convolutional neural networks (CNNs). Leveraging the advantage of
FNOs, we propose a novel deep-learning framework for classifying images with
varying sizes. Particularly, we simultaneously train the proposed network on
multi-sized images. As a practical application, we consider the problem of
predicting the label (e.g., permeability) of three-dimensional digital porous
media. To construct the framework, an intuitive approach is to connect FNO
layers to a classifier using adaptive max pooling. First, we show that this
approach is only effective for porous media with fixed sizes, whereas it fails
for porous media of varying sizes. To overcome this limitation, we introduce
our approach: instead of using adaptive max pooling, we use static max pooling
with the size of the channel width of the FNO layers. Since the channel width of the
FNO layers is independent of input image size, the introduced framework can
handle multi-sized images during training. We show the effectiveness of the
introduced framework and compare its performance with the intuitive approach
through the example of the classification of three-dimensional digital porous
media of varying sizes.
Department of Civil and Environmental Engineering, Stanford University,
Stanford, CA 94305, USA
Department of Energy Science and Engineering, Stanford University,
Stanford, CA 94305, USA
## 1 Introduction and motivation
Since 2020, neural operators have gained extensive popularity, specifically
with two versions of graph neural operators (Li et al., 2020b) and Fourier
neural operators (FNOs) (Li et al., 2020a). In this article, our attention is
on FNOs. From a computer science perspective, regular FNOs fall into the
category of supervised deep learning frameworks, necessitating a large volume
of labeled data for training. FNOs have demonstrated their proficiency in
input-output mapping across various industrial and scientific applications
such as incompressible flows (Li et al., 2022b; Bonev et al., 2023; Peng et
al., 2024; Choubineh et al., 2023; Lyu et al., 2023; Gupta and Brandstetter,
2022; Peng et al., 2023a), wave equations (Zhu et al., 2023a; Zou et al.,
2023; Yang et al., 2023), thermal fields (Zhao et al., 2024; Hao et al.,
2023), carbon storages and sequestration (Wen et al., 2022; Jiang et al.,
2023b), and other areas (Peng et al., 2023b; You et al., 2022; Kontolati et
al., 2023; Zhu et al., 2023b; Hua and Lu, 2023; White et al., 2023a; Li et
al., 2021; Pathak et al., 2022; Rahman et al., 2022b, a; Yang et al., 2022; Li
et al., 2022a; Maust et al., 2022; Zhao et al., 2022; Renn et al., 2023; Xiong
et al., 2023; Chen et al., 2023; Huang et al., 2023; Poels et al., 2023; White
et al., 2023b; Thodi et al., 2023; Zhao et al., 2023; Tran et al., 2023; Lee,
2022; Brandstetter et al., 2023; Li et al., 2023; Majumdar et al., 2023; Jiang
et al., 2023a; Lehmann et al., 2024). From a computer vision perspective,
these are framed as segmentation problems, where an input image, such as the
geometry of an airfoil, is mapped to another image, for instance, the velocity
field around that airfoil. An analogous area in computer vision is
classification, where an input image is mapped, for example, to a name or
number. While FNOs have potential in classification tasks, to the best of our
knowledge only a limited body of research has addressed this application
(Johnny et al., 2022; Xi et al., 2022; Kabri et al., 2023).
Johnny et al. (2022) used the FNO architecture for classifying images in the
CIFAR-10 dataset, containing ten different classes; however, they trained the
network only on images with a fixed size of 32 by 32 pixels. Additionally,
Kabri et al. (2023) examined the FNO architecture for image classification.
Although they tested images of various sizes (e.g., 28 by 28 pixels, 112 by
112 pixels, etc.), they trained and then tested the network separately for
each size, assessing its performance on the corresponding size. Xi et al.
(2022) utilized the FNO architecture for the hyperspectral remote sensing
image classification. Their dataset comprised images of various sizes,
including 512 by 614 pixels, 610 by 340 pixels, and 512 by 217 pixels.
However, they adjusted all images to a fixed size by adding patches.
Consequently, although they employed the FNO architecture, in practice, they
limited their analysis to images of a uniform size. In the current study, we
narrow our focus to classification problems. More specifically, we consider
the problem of predicting the permeability of three-dimensional digital porous
media, which vary in size, as a benchmark test case.
FNOs are invariant with respect to the size of input images, and this
characteristic ensures that images of varying sizes can be processed by FNO-
based deep learning frameworks without requiring any architectural
alterations. Note that regular convolutional neural networks (CNNs) lack this
feature (Goodfellow et al., 2016). Building on this strength of FNOs, we
introduce a deep-learning framework for training the network simultaneously on
images with varying sizes for a classification problem. To achieve this deep
learning framework, FNO layers must be connected to a classifier, which is
commonly a multilayer perceptron (MLP). An intuitive approach to set this
would be to link FNO layers to a classifier via adaptive max pooling.
Considering the application of permeability prediction of three-dimensional
porous media, our machine-learning experiments show that this intuitive
approach only works well for porous media with fixed sizes. Pivoting from
this, we propose our novel approach. Rather than using adaptive max pooling,
we implement static max pooling with the size of the channel width of the FNO
layers. Given that the size of the channel width of FNO layers is independent
of the size of input images, our proposed framework can be efficiently trained
on various image sizes at once (see Fig. 1 and Fig. 2).
To explain, at a high level, the difference between using adaptive max pooling
(see Fig. 2) and static max pooling (see Fig. 1), let us consider for example
a three-dimensional image being fed as an input of the deep learning
framework. For both pooling methods, at the framework’s outset, FNO layers
lift the input image from its three-dimensional space to a higher dimensional
space, determined by the size of the channel width of the FNO layers. In the
case of adaptive max pooling, after FNO layer operations, the outcome
eventually is dropped into the initial three-dimensional space with the same
size as the input image. This array then serves as the input of adaptive max
pooling. The output of the adaptive pooling is then the input of the
classifier. In the case of static max pooling, before FNO layers drop the
output, we implement static max pooling, which functions within the high
dimensional space and pools with the size of the channel width of FNO layers.
The resulting output from this pooling then becomes the classifier’s input. A
more detailed exploration of these concepts is provided in Sect. 2.
The study of physical and geometric features of porous media is important in
diverse scientific and industrial areas such as digital rock physics (Andra et
al., 2013a, b), membrane systems (Liang and Fletcher, 2023), geological carbon
storages (Blunt et al., 2013), and medicine (Kumeria, 2022; Das et al., 2018).
Deep learning frameworks have been widely used for predicting the permeability
of porous media (Meng et al., 2023; Xie et al., 2023; Kashefi and Mukerji,
2023, 2021; Liu et al., 2023; Hong and Liu, 2020; Wu et al., 2018; Tembely et
al., 2020; Masroor et al., 2023; Sun et al., 2023), but, to the best of our
knowledge, all these frameworks were trained on a fixed-size porous media.
Note that training the proposed network to predict the permeability of porous
media of varying sizes comes with a distinctive challenge when compared to
training the network on conventional images for the purpose of classifying
them by their names (like those of cats and dogs). For conventional images,
one possible solution to handle images with different sizes is to equalize
them by adding mini patches to the smaller images. Nevertheless, this solution
is inapplicable to the porous media problem. Adding mini patches to porous
media can alter their physical properties such as permeability. For instance,
adding mini patches around a porous medium simulates sealing it with wall
boundaries, which prohibits flow within its pore spaces, resulting in a
permeability of zero. Additionally, the inherently three-dimensional nature of
porous media introduces another layer of complexity compared to the two-
dimensional conventional images.
The remainder of this article is organized as follows. We introduce and
discuss the concept of Fourier neural operators for image classification in
Sect. 2, starting with the traditional strategy of adaptive max pooling,
followed by our novel approach of static max pooling in the high dimension of
the Fourier space channel. A brief review of theoretical aspects of FNOs is
given in Sect. 2.3. Data generation and the training methodologies are
respectively presented in Sect. 3 and Sect. 4. In Sect. 5, we provide results
and discussion, including a comparison between traditional strategy and our
novel approach. Moreover, we present a sensitivity analysis, covering the
number of Fourier modes, the channel width of discrete Fourier space, the
number of FNO units, and the effect of activation functions and average
pooling. The deep learning model generalizability is discussed in this section
as well. Finally, we summarize the work and present insight into future
directions in Sect. 6.
Figure 1: Schematic of the proposed FNO-based framework for multi-size image
classification.

Figure 2: Schematic of the intuitive FNO-based framework for multi-size image
classification.
## 2 Fourier neural operators for image classification
### 2.1 Our novel approach: Static max pooling in channel width of FNO layers
In this subsection, we introduce the architecture of our proposed deep
learning framework. Our explanation heavily uses matrix notation to ensure
clarity and provide a deeper understanding. As illustrated in Fig. 1, the
input of the deep learning framework is a cubic binary porous medium
represented as the matrix $\mathbf{A}_{n\times n\times n}$. As a first step,
the matrix $\mathbf{A}_{n\times n\times n}$ is lifted to a higher dimensional
space using a fully connected network. The dimension of this space is termed
the channel width of an FNO layer, shown by “width” in our matrix notation.
This lifting results in a four-dimensional matrix, denoted as
$\mathbf{B}_{\text{width}\times n\times n\times n}^{0}$. The matrix
$\mathbf{B}_{\text{width}\times n\times n\times n}^{0}$ becomes subsequently
the input of an FNO layer. Within the FNO layer, two operations are applied to
$\mathbf{B}_{\text{width}\times n\times n\times n}^{0}$: the kernel
integration operator, denoted by
$\mathbf{K}_{\text{width}\times\text{width}}^{0}$, and the linear
transformation operator, denoted by
$\mathbf{W}_{\text{width}\times\text{width}}^{0}$. The network computes the
matrix-matrix multiplication of
$\mathbf{K}_{\text{width}\times\text{width}}^{0}\mathbf{B}_{\text{width}\times
n\times n\times n}^{0}$ and
$\mathbf{W}_{\text{width}\times\text{width}}^{0}\mathbf{B}_{\text{width}\times
n\times n\times n}^{0}$ and then sums up the resulting matrices, as depicted
in Fig. 1. The output undergoes the element-wise Rectified Linear Unit (ReLU)
activation function (Goodfellow et al., 2016) defined as

$\sigma(\gamma)=\max(0,\gamma),$ (1)

resulting in a four-dimensional matrix $\mathbf{B}_{\text{width}\times n\times n\times n}^{1}$. Mathematically, this procedure can be summarized as
$\mathbf{B}_{\text{width}\times n\times n\times n}^{1}=\sigma\left(\mathbf{K}_{\text{width}\times\text{width}}^{0}\mathbf{B}_{\text{width}\times n\times n\times n}^{0}+\mathbf{W}_{\text{width}\times\text{width}}^{0}\mathbf{B}_{\text{width}\times n\times n\times n}^{0}\right).$ (2)
In scenarios where multiple FNO layers exist in the framework, the matrix
$\mathbf{B}_{\text{width}\times n\times n\times n}^{1}$ serves as the input
for the succeeding FNO layers, and the same sequence of operations is applied.
If we assume that there are $l$ number of FNO layers, the output from the
final FNO layer is the matrix $\mathbf{B}_{\text{width}\times n\times n\times
n}^{l}$. In the next step, we implement static max pooling on the first
dimension of matrix $\mathbf{B}_{\text{width}\times n\times n\times n}^{l}$.
Because “width” is independent of the input image dimension (i.e., $n$), the
static pooling works for input images with any desired size (e.g., $n=40$,
$n=48$, and $n=56$). Note that "width" is a hyperparameter of the FNO layers and
independent of $n$, as all the matrix-matrix multiplications operate on the
dimension of size "width", not "$n$". The static max pooling
produces a vector of length "width", representing the global features of the
input images.
a Multilayer Perceptron (MLP) composed of three layers of sizes 128, 128, and
1. The ReLU activation function is used in the initial two layers along with a
dropout with a rate of 0.3. Following the third layer, a sigmoid activation
function defined as
$\sigma(\gamma)=\frac{1}{1+e^{-\gamma}},$ (3)
is used to ensure output values are bounded between 0 and 1.
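To make the pooling step concrete, the following is a minimal PyTorch sketch of the static max pooling and classifier stage, assuming the output of the last FNO layer is a tensor of shape (batch, width, n, n, n); the tensor names are illustrative and the FNO layers themselves are omitted (see Sect. 2.3 for their structure).

```python
import torch
import torch.nn as nn

width = 64  # channel width of the FNO layers (a hyperparameter)

# classifier described above: three layers of sizes 128, 128, and 1
classifier = nn.Sequential(
    nn.Linear(width, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, 1), nn.Sigmoid(),
)

def head(B):  # B: output of the last FNO layer, shape (batch, width, n, n, n)
    # static max pooling over the spatial dimensions: the result has
    # length "width" regardless of n, so any input size is accepted
    g = torch.amax(B, dim=(2, 3, 4))   # (batch, width) global feature vector
    return classifier(g)               # scaled permeability in [0, 1]

# the same head works for n = 40, 48, 56 without any modification
for n in (40, 48, 56):
    print(head(torch.randn(2, width, n, n, n)).shape)  # torch.Size([2, 1])
```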
### 2.2 Intuitive approach: Adaptive max pooling in 3D spatial space
In this subsection, we explain the intuitive approach (see Fig. 2). Drawing
parallels to our approach elaborated in the previous subsection, we begin by
considering the input porous medium, which is a three-dimensional matrix
represented by $\mathbf{A}_{n\times n\times n}$. All operations outlined in
Sect. 2.1 are applied to $\mathbf{A}_{n\times n\times n}$ until the network
obtains the matrix $\mathbf{B}_{\text{width}\times n\times n\times n}^{l}$ at
an intermediate step, as depicted in Fig. 2. As the next step, we drop (as an
inverse of the lifting operator explained in Sect. 2.1) the matrix
$\mathbf{B}_{\text{width}\times n\times n\times n}^{l}$ from the high
dimensional space to the default space by means of a fully connected network.
This transformation results in the matrix $\mathbf{Z}_{n\times n\times n}$. At
this juncture, we use adaptive three-dimensional max pooling, a functionality
that is available in deep learning platforms such as PyTorch (Paszke et al.,
2019) or TensorFlow (Abadi et al., 2015). To ensure a fair comparison between
the traditional approach and our novel approach, we keep the size of the
vector of the global feature consistent across both approaches. To this end,
the output of the adaptive max pooling is tailored to yield a vector of size
“width”. The resulting vector represents the global features of the input
images.
Note that because the size of matrix $\mathbf{Z}_{n\times n\times n}$ depends
on the size of the input image (i.e., $n$), the pooling must be adaptive as we
plan to train the network simultaneously on input images with varying sizes
(e.g., $\mathbf{A}_{40\times 40\times 40}$, $\mathbf{A}_{48\times 48\times
48}$, and $\mathbf{A}_{56\times 56\times 56}$). Subsequent to the adaptive max
pooling, the global feature vector is connected to a classifier. This
classifier's features and architecture are precisely the same as those
elucidated in Sect. 2.1.
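A corresponding sketch of the adaptive pooling stage in PyTorch is given below; the choice of a $4\times 4\times 4$ output (so that the flattened vector has $64=$ "width" entries) is our illustrative assumption.

```python
import torch
import torch.nn as nn

# Z: (batch, 1, n, n, n), the matrix Z_{n x n x n} after dropping back to
# the default space. Adaptive pooling forces a fixed output size whatever
# n is; here 4 x 4 x 4 = 64 entries to match the width-64 global vector.
pool = nn.AdaptiveMaxPool3d(output_size=(4, 4, 4))

def global_feature(Z):
    return pool(Z).flatten(start_dim=1)   # (batch, 64), fed to the classifier

for n in (40, 48, 56):
    print(global_feature(torch.randn(2, 1, n, n, n)).shape)  # torch.Size([2, 64])
```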
### 2.3 A brief review of theoretical aspects of Fourier neural operators
We focused on the technical aspects and computer implementation of FNO layers
in Sect. 2.1 and Sect. 2.2. Theoretical aspects of FNO layers have already
been vastly explained and discussed in the literature (Li et al., 2020a). In
this subsection, we briefly review the theory behind FNO layers and highlight
some important features.
As discussed in Sect. 2.1, an FNO layer comprises two main operators: the
integral kernel operator and the linear transformation. We overview the
integral kernel operator. We consider the bounded domain $D$ such that
$D\subset\mathbb{R}^{d}$, where $d$ indicates the physical dimensionality of
the problem and is equal to 3 (i.e., $d=3$) for the current problem since we
deal with three-dimensional porous media. We further show the input of the FNO
layer by $b(x)$ with $x\in D$, where $b$ is a function representing all the
operators applied to $x$ when it arrives at the gate of the FNO layer.
Moreover, we define the periodic kernel function
$\tau:\mathbb{R}^{2(d+d_{a})}\rightarrow\mathbb{R}^{\text{width}\times\text{width}}$,
where $d_{a}$ is the number of input features and is equal to 1 (i.e.,
$d_{a}=1$) in this study, because the input of the deep learning framework is
only a cubic binary array (representing a porous medium), and this array only
provides one feature, which is the geometry of the porous medium.
Additionally, recall that “width” is the channel width of the FNO layer, as
illustrated in Sect. 2.1 and Sect. 2.2. Following the formulation proposed by
Li et al. (2020a), the operation of the integral kernel $\mathcal{K}$ on the
function $b(x)$ in the continuous space is defined as
$\mathcal{K}b(x)=\int_{D}\tau(x,y)b(y)\,dy,\quad\forall x\in D.$ (4)
Following the original design of FNO layers by Li et al. (2020a), we introduce
the condition $\tau(x,y)=\tau(x-y)$. By applying the convolution theorem as
detailed in the literature (Li et al., 2020a), the following expression for
the integral kernel operator is obtained:
$\mathcal{K}b(x)=\mathcal{F}^{-1}\left(\mathcal{F}(\tau)\cdot\mathcal{F}(b(x))\right),\quad\forall
x\in D,$ (5)
where the Fourier transform and its inverse are shown by $\mathcal{F}$ and
$\mathcal{F}^{-1}$, respectively. We introduce $\mathcal{R}$ as the learnable
Fourier transform of $\tau$ such that
$\mathcal{R}=\mathcal{F}(\tau).$ (6)
Beyond the theory, we must implement these mathematical concepts in a deep
learning framework. In this way, we work with discrete spaces and
consequently, discrete modes of the Fourier transform. Hence, $\mathcal{R}$ is
implemented as a neural network. Additionally, each porous medium is
represented by $n^{3}$ discrete points such that
$\{x_{1},x_{2},\cdots,x_{n^{3}}\}\subset D$. Moreover, Fourier series
expansions are truncated at a maximum number of modes $m_{\text{max}}$
computed as

$m_{\text{max}}=\left|\left\{m\in\mathbb{Z}^{d}:|m_{j}|\leq m_{\text{max},j},\text{ for }j=1,\cdots,d\right\}\right|,$ (7)
where $m_{\text{max},j}$ is the maximum number of modes taken in the dimension
$j$, and is a hyper-parameter of the FNO layer. Note that since we work on
three-dimensional problems in the current study, $d=3$, and thus, there are
only $m_{\text{max},1}$, $m_{\text{max},2}$, and $m_{\text{max},3}$. As a
result, the components of the $\mathcal{R}\cdot\mathcal{F}(b(x))$ operation
can be computed by the following formulation
$[\mathcal{R}\cdot\mathcal{F}(b(x))]_{m,i}=\sum_{j=1}^{\text{width}}[\mathcal{R}]_{m,i,j}[\mathcal{F}(b(x))]_{m,j},\quad m=1,\cdots,m_{\text{max}},\quad i=1,\cdots,\text{width},$ (8)
where
$[\mathcal{R}]\in\mathbb{C}^{m_{\text{max}}\times\text{width}\times\text{width}}$
is the matrix representation of $\mathcal{R}$ in the discrete space.
$[\mathcal{R}\cdot\mathcal{F}(b(x))]\in\mathbb{C}^{m_{\text{max}}\times\text{width}}$
and $[\mathcal{F}(b(x))]\in\mathbb{C}^{m_{\text{max}}\times\text{width}}$ are
similarly defined. To increase computational efficiency and enable parallel
computing, the operator $[\mathcal{R}]$ is, for the current three-dimensional
problem, best implemented as a five-dimensional matrix expressed as
$\mathbf{R}_{m_{\text{max},1}\times m_{\text{max},2}\times
m_{\text{max},3}\times\text{width}\times\text{width}}.$ (9)
As can be seen from Eq. 9, the size of matrix $\mathbf{R}$, and thus the count
of trainable parameters in the FNO layer, is a function of the number of
maximum Fourier modes at each dimension and the channel width of the FNO
layer. Recall that these parameters (i.e., $m_{\text{max},1}$,
$m_{\text{max},2}$, $m_{\text{max},3}$, and width) are hyperparameters of
FNO layers and need to be tuned by potential users.
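A minimal PyTorch sketch of Eqs. 2 and 5-9 follows; it keeps only the lowest $m_{\text{max},j}$ modes in each dimension, whereas the reference implementation of Li et al. (2020a) also retains the conjugate-symmetric corner modes, which we omit here for brevity.

```python
import torch
import torch.nn as nn

class SpectralConv3d(nn.Module):
    """Integral kernel operator K (Eqs. 5-9): multiply the retained
    Fourier modes of the input by learnable complex weights R."""
    def __init__(self, width, m1, m2, m3):
        super().__init__()
        self.m1, self.m2, self.m3 = m1, m2, m3
        scale = 1.0 / (width * width)
        self.R = nn.Parameter(scale * torch.randn(
            width, width, m1, m2, m3, dtype=torch.cfloat))  # Eq. 9

    def forward(self, b):                            # b: (batch, width, n, n, n)
        b_ft = torch.fft.rfftn(b, dim=(-3, -2, -1))  # F(b)
        out_ft = torch.zeros_like(b_ft)
        out_ft[:, :, :self.m1, :self.m2, :self.m3] = torch.einsum(
            "bixyz,ioxyz->boxyz",
            b_ft[:, :, :self.m1, :self.m2, :self.m3], self.R)  # Eq. 8
        return torch.fft.irfftn(out_ft, s=b.shape[-3:], dim=(-3, -2, -1))

class FNOLayer(nn.Module):
    """One FNO unit: K b + W b followed by ReLU (Eq. 2)."""
    def __init__(self, width, m1=2, m2=2, m3=2):
        super().__init__()
        self.K = SpectralConv3d(width, m1, m2, m3)
        self.W = nn.Conv3d(width, width, kernel_size=1)  # linear transform W

    def forward(self, b):
        return torch.relu(self.K(b) + self.W(b))
```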
Figure 3: A few examples of synthetically generated three-dimensional digital
porous media for training the proposed neural network; (a) an image of size
$40^{3}$, (b) an image of size $48^{3}$, and (c) an image of size $56^{3}$.
## 3 Data generation
To generate synthetic data to examine the deep learning framework under
investigation in this study, we consider cubic porous medium domains with
length $L$ along each side, spatial correlation length of $l_{c}$, and porosity
of $\phi$. We use the truncated Gaussian algorithm (Lantuejoul, 2002; Le
Ravalec-Dupin et al., 2004) to generate synthetic porous media. In practice,
we create three-dimensional cubic arrays of dimension $n\times n\times n$,
populated with random numbers conforming to a normal distribution with the
characteristics of a mean value of 0.0 and a standard deviation of 1.0.
Subsequently, we filter the arrays by a three-dimensional Gaussian smoothing
kernel with a standard deviation of 5.0 and a filter size commensurate with a
spatial correlation length ($l_{c}$) of 17. We then subject the arrays to a
binarization process via a thresholding number such that the porosity ($\phi$)
of the resulting arrays lies within the range of [0.125, 0.200]. We use the
MATLAB software to handle the above-described steps. We set $L$ as
$n\times\delta x$, where $\delta x$ represents the length of each side of a
pixel in porous media. We set $\delta x$ to 0.003 m. We generate porous media
with three different sizes by considering three different values for $n$, such
that $n_{1}=40$, $n_{2}=48$, and $n_{3}=56$. In this way, each cubic porous
medium can be characterized by its size as $n^{3}$ (e.g., $40^{3}$, $48^{3}$,
and $56^{3}$). For each $n$, we generate 1250 data. We randomly split the
generated data corresponding to each size into three categories of training
(80%, i.e., 1000 data), validation (10%, i.e., 125 data), and test (10%, i.e.,
125 data). Hence, there are 3750 data in total, 3000 data for the training
set, 375 data for the validation set, and 375 data for the test set. Figure 3
exhibits a few examples of the generated synthetic data.
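A Python rendering of the generation pipeline described above (the original steps were run in MATLAB) might look as follows; the binarization convention (1 = solid, 0 = pore) and the per-realization porosity target are our illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_porous_medium(n, phi, sigma=5.0, seed=0):
    """Truncated-Gaussian sketch of the steps described above."""
    rng = np.random.default_rng(seed)
    field = rng.normal(0.0, 1.0, size=(n, n, n))   # Gaussian random field
    field = gaussian_filter(field, sigma=sigma)    # 3D Gaussian smoothing
    # threshold at the phi-quantile so a fraction phi of voxels is pore (0)
    threshold = np.quantile(field, phi)
    return (field > threshold).astype(np.uint8)    # 1 = solid, 0 = pore

# one realization per size, porosity within [0.125, 0.200]
media = [make_porous_medium(n, phi=0.15, seed=i)
         for i, n in enumerate((40, 48, 56))]
```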
To simulate the incompressible viscous Newtonian flow within the generated
porous media, we apply a constant pressure gradient in the $x$ direction
($\Delta p/L$). A zero-velocity boundary condition is applied at the top and
bottom of the porous medium on the $y$-$z$ planes. Given the geometry and
boundary conditions illustrated above, we use a Lattice Boltzmann solver
(Keehm et al., 2004) to solve the continuity and steady-state Stokes
equations, which are written as follows:
$\nabla\cdot\bm{\mathit{u}}=0,\quad\text{in }V,$ (10)
$-\nabla p+\mu\Delta\bm{\mathit{u}}=\textbf{0},\quad\text{in }V,$ (11)
where $\mu$ is the dynamic viscosity, $\bm{\mathit{u}}$ and $p$ indicate,
respectively, the velocity vector and pressure fields in the pore space of the
porous medium, $V$. In the next step, we compute the permeability in the
$x-$direction ($k$) using Darcy’s law (Darcy, 1856),
$k=-\frac{\mu\bar{U}}{\Delta p/L},$ (12)
where $\bar{U}$ shows the average velocity in the entire porous medium (i.e.,
including solid matrices). The computed permeabilities of our data set fall in
the range [20 mD, 200 mD].
## 4 Training
To accelerate the convergence of the training procedure, the output training
data (i.e., permeability) are scaled in the range of [0, 1] using the maximum
and minimum values of the training set. Note that although we train a single
neural network simultaneously on porous media with three different sizes
(corresponding to $n_{1}$, $n_{2}$, and $n_{3}$), we normalize the
permeability of porous media of each size using the maximum and minimum values
of the specific size. Mathematically, it can be written as
$\{\hat{k}_{\text{truth}}\}_{n_{j}}=\frac{\{k\}_{n_{j}}-\min\{k\}_{n_{j}}}{\max\{k\}_{n_{j}}-\min\{k\}_{n_{j}}},\quad j=1,2,\text{ and }3,$ (13)
where $\hat{k}_{\text{truth}}$ denotes the ground-truth scaled permeability.
Moreover, for instance, $\{k\}_{n_{1}}$ indicates the training data
containing porous media of size $40^{3}$ (because $n_{1}=40$). Note
that we eventually rescale the predicted permeability
($\hat{k}_{\text{prediction}}$) to the physical domain
($k_{\text{prediction}}$) for analyzing the neural network performance.
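In code, the per-size scaling of Eq. 13 reduces to a min-max transform computed from the training extrema of each size group; a minimal sketch:

```python
import numpy as np

def scale_per_size(k, k_train):
    """Eq. 13: min-max scale the permeabilities of one size group using
    the training-set extrema of that same size group."""
    lo, hi = np.min(k_train), np.max(k_train)
    return (np.asarray(k) - lo) / (hi - lo)

def rescale(k_hat, k_train):
    """Inverse transform back to the physical domain (here, mD)."""
    lo, hi = np.min(k_train), np.max(k_train)
    return np.asarray(k_hat) * (hi - lo) + lo
```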
Concerning the loss function, we use the mean squared error function defined
as
$\text{Loss}=\frac{1}{N}\sum_{i=1}^{N}(\hat{k}_{\text{prediction}}-\hat{k}_{\text{truth}})^{2},$
(14)
where $N$ is the number of data in the training set (i.e., $N=3000$). Note
that using relative mean squared error as the loss function does not lead to a
significant difference in the results, based on our experiments. We set the
number of modes in each dimension to 2 (i.e., set $m_{\text{max},1}=2$,
$m_{\text{max},2}=2$, and $m_{\text{max},3}=2$). The channel width of the
discrete Fourier space is set to 64 (i.e., $\text{width}=64$). It is worth
noting that both the number of modes and the channel width play pivotal roles
in the network performance. Detailed discussions on their significance and
implications are provided in Sect. 5.2 and 5.3, respectively. Additionally, we
implement three units of FNOs in the network. The Adam optimizer (Kingma and
Ba, 2014) is used. A constant learning rate of 0.001 is selected. We use
stochastic gradient descent (Goodfellow et al., 2016) with a mini-batch size
of 50. As discussed in Sect. 2, the architecture of FNOs is designed to be
independent of the spatial resolution of input images. During the training
process, however, all the input images within a mini-batch must be the same
size. In practice, each epoch of training is characterized by an inner loop
that iterates through mini-batches of differing porous medium sizes (i.e.,
$40^{3}$, $48^{3}$, and $56^{3}$). Within this loop, the training process
starts with a mini-batch of data of size $40^{3}$, followed by one of size
$48^{3}$, and then continues to $56^{3}$, in sequence until all the data in
the training set are covered within the epoch. Note that the trainable
parameters of the network are updated only at the end of each epoch. Our deep
learning experiments show that the order in which these differently sized
porous media are fed within an epoch has no significant influence on the
result accuracy and convergence speed, whether starting with the porous media
of size $40^{3}$, followed by $48^{3}$ and $56^{3}$, or any other permutation.
From a hardware perspective, we employ an NVIDIA A100 (SXM4) graphics card
with 80 gigabytes of memory for training the networks.
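The epoch structure described above can be sketched as follows in PyTorch; the loader and model names are placeholders, and we render the statement that parameters are updated only at the end of each epoch as gradient accumulation with a single optimizer step (per-mini-batch updates would instead move `zero_grad`/`step` inside the loop).

```python
import torch

# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # as in the text
def train_one_epoch(model, loaders_by_size, optimizer, loss_fn):
    """One epoch over size-homogeneous mini-batches (40^3 -> 48^3 -> 56^3),
    with a single parameter update at the end of the epoch (see text)."""
    model.train()
    optimizer.zero_grad()
    iters = [iter(dl) for dl in loaders_by_size]   # one DataLoader per size
    done = [False] * len(iters)
    while not all(done):
        for i, it in enumerate(iters):             # cycle through the sizes
            if done[i]:
                continue
            try:
                x, k_hat_truth = next(it)          # one single-size mini-batch
            except StopIteration:
                done[i] = True
                continue
            loss = loss_fn(model(x), k_hat_truth)  # MSE on scaled k, Eq. 14
            loss.backward()                        # accumulate gradients
    optimizer.step()                               # update once per epoch
```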
In the last paragraph of this section, we address the metric used for
assessing the effectiveness of permeability prediction. We use the coefficient
of determination, also known as the $R^{2}$ score, which can be calculated
using the following formula
$R^{2}=1-\frac{\sum_{i=1}^{Q}({k_{\text{truth}}}_{i}-{k_{\text{prediction}}}_{i})^{2}}{\sum_{i=1}^{Q}({k_{\text{truth}}}_{i}-\bar{k})^{2}},$
(15)
where $Q$ represents the number of data in a set (e.g., training, test,
etc.) and $\bar{k}$ is the average value of the set
$\{{k_{\text{truth}}}_{i}\}_{i=1}^{Q}$.
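Eq. 15 translates directly into a few lines of NumPy (equivalent to scikit-learn's `r2_score`):

```python
import numpy as np

def r2_score(k_truth, k_pred):
    """Coefficient of determination, Eq. 15."""
    k_truth, k_pred = np.asarray(k_truth), np.asarray(k_pred)
    ss_res = np.sum((k_truth - k_pred) ** 2)
    ss_tot = np.sum((k_truth - np.mean(k_truth)) ** 2)
    return 1.0 - ss_res / ss_tot
```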
Figure 4: $R^{2}$ plots for the test set (375 data) using the proposed
approach for classification of multi-sized images
## 5 Results and discussion
### 5.1 General analysis
As illustrated in Fig. 4, the success of our approach is evident in the
$R^{2}$ score, 0.96809, obtained for the test set (i.e., 375 data).
Additionally, Fig. 5 specifically showcases the $R^{2}$ scores for the test
set but individualized for each cubic size (i.e., $40^{3}$, $48^{3}$, and
$56^{3}$). As can be seen in Fig. 5, the $R^{2}$ scores obtained are equal to
0.96830, 0.96978, and 0.96607, respectively for the cubic digital porous media
of sizes $40^{3}$, $48^{3}$, and $56^{3}$. The range of $R^{2}$ scores for the
three different sizes remains at an excellent level, demonstrating that our
FNO-based framework is robust and not overfitted to any specific size.
Table 1: $R^{2}$ score of the test set for different mode numbers of the proposed FNO-based framework.

Number of modes in each dimension | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
$R^{2}$ score | 0.96809 | 0.15416 | 0.26757 | 0.26361 | 0.23325 | 0.38789 | 0.40433 | 0.31773 | 0.28839
Figure 5: $R^{2}$ plots for the test set (375 data) using the proposed
approach for the classification of multi-sized images. The results are
individually shown for (a) images of size $40^{3}$ (125 data), (b) images of
size $48^{3}$ (125 data), and (c) images of size $56^{3}$ (125 data).
Figure 6: Evolution of the loss function for the validation and training sets
for the choice of (a) $m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=2$,
(b) $m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=7$, and (c)
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=10$.
Figure 7: $R^{2}$ plots for the test set (375 data) using the proposed
approach for the classification of multi-sized images for the choice of (a)
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=4$, (b)
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=7$, and (c)
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=10$.
Figure 8: $R^{2}$ plots for the test set (375 data) using the proposed
approach for the classification of multi-sized images for the choice of (a)
$m_{\text{max},1}=m_{\text{max},2}=2$ and $m_{\text{max},3}=10$, and (b)
$m_{\text{max},1}=2$ and $m_{\text{max},2}=m_{\text{max},3}=10$.
### 5.2 Number of Fourier modes in each dimension
Our deep learning experiments demonstrate that there is a critical interplay
between the number of modes (i.e., $m_{\text{max},1}$, $m_{\text{max},2}$, and
$m_{\text{max},3}$) set in the proposed FNO framework and the tendency for
overfitting during the training procedure. Accordingly, setting the number of
modes beyond 2 leads to a severe divergence between the training and
validation loss. This fact can be observed in Fig. 6 when we set
$m_{\text{max},1}=7$, $m_{\text{max},2}=7$, and $m_{\text{max},3}=7$ or
$m_{\text{max},1}=10$, $m_{\text{max},2}=10$, and $m_{\text{max},3}=10$. The
reported results indicate that the number of modes plays a critical role in
the FNO model generalization. A further survey of the influence of the number
of modes in the FNO configuration is performed by varying the number of modes
in all three principal directions, from 2 to 10, and the obtained $R^{2}$
scores are tabulated in Table 1. Accordingly, the optimal mode configuration
for avoiding overfitting is 2, as the divergence between the validation and
training loss is minimized. Consequently, a careful selection of the number of
modes in the FNO units is necessary to make the deep learning framework robust
and reliable for the image classification application. The consequence of this
scenario is observable in Fig. 7, where we plot the $R^{2}$ score for the test
sets, for example, for the choice of
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=4$,
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=7$, and
$m_{\text{max},1}=m_{\text{max},2}=m_{\text{max},3}=10$. In all of these
cases, the $R^{2}$ scores obtained for the prediction of the permeability of
the porous media in the test set are less than 0.4.
We perform two other experiments. In the first one, we set only one mode
(e.g., $m_{\text{max},3}$) to 10 ($m_{\text{max},3}=10$) and the other two
modes to 2 (i.e., $m_{\text{max},1}=2$ and $m_{\text{max},2}=2$). In the
second one, we set only two modes (i.e., $m_{\text{max},2}$ and
$m_{\text{max},3}$) to 10 and the remaining mode to 2 (i.e.,
$m_{\text{max},1}=2$). The outputs of these two experiments are illustrated in
Fig. 8. As can be seen in Fig. 8, the resulting $R^{2}$ scores of the test set
are equal to 0.22298 and 0.34728, respectively, for the first and second
experiments. Accordingly, we conclude that even increasing one mode beyond 2
drastically negatively affects the performance of the proposed FNO framework
for the current application.
### 5.3 Channel width of FNOs
We further analyze the impact of different channel widths on the performance
of the introduced deep learning framework. Based on our machine learning
experiments, $R^{2}$ scores obtained for the channel width of 8, 27, 64, and
125 are 0.49904, 0.81618, 0.96815, and 0.94457, respectively. When the channel
width decreases from 64 to 27 or to 8, a significant drop in the $R^{2}$ score
is observed. Notably, increasing the channel width beyond 64 to 125 also leads
to a slight decrease in the precision of permeability predictions.
As discussed in Sect. 2.3, the choice of channel width is directly related to
the number of trainable parameters, which are 30897, 163783, 828673, and
3096531 for each respective channel width. Moreover, the channel width also
determines the size of the max pooling, representing the size of the global
feature vector. Hence, optimizing the channel width is critical. A small
channel width leads to poor performance, whereas a large channel width imposes
high computational costs and memory allocation without necessarily yielding a
significant performance improvement.
### 5.4 Number of FNO units
We investigate the effect of varying the number of FNO units (see Fig. 1).
Deep learning experiments are conducted using one, two, three, four, and five
units to assess the impact on the performance of the introduced FNO framework.
The $R^{2}$ scores on the test set for the configurations with one, two,
three, four, and five units are, respectively, 0.82767, 0.91703, 0.96813,
0.96759, and 0.97818. Hence, adding more units
(beyond 3) and making the network deeper does not have a remarkable effect on
the prediction accuracy. However, the number of trainable parameters and
consequently, the computational cost and required GPU memory (e.g., RAM)
escalated by adding FNO units. For example, 820353, 824513, 828673, 832833,
and 836993 are, respectively, the number of trainable parameters of the model
with one, two, three, four, and five layers of FNOs.
### 5.5 Activation functions
We give particular attention to the effect of choosing an activation function
on the prediction ability of our FNO model. In the primary setup, we configure
all layers to employ the ReLU activation function, except the last layer of
the classifier, where we utilize a sigmoid function. We implement two
alternative setups. In the first one, we alter the activation function of the
last layer to ReLU; this configuration results in a drastic reduction in the
$R^{2}$ score of the test set, regardless of whether the output permeability is
normalized between 0 and 1. In the second setup, we replace the activation
function in all layers with sigmoid. This setup causes a slight decrease in
performance, as an $R^{2}$ score of 0.91478 is obtained for the test set. Note
that the training procedure becomes slower in this setup, as the derivative of
the sigmoid function results in a more complicated computation graph than that
of the ReLU function.
### 5.6 Static max pooling versus static average pooling
Within the context of capturing global features in the proposed FNO-based
framework, we explore the efficacy of implementing static average pooling as
an alternative to static max pooling. Our machine learning experiment yields an
$R^{2}$ score of 0.94478 in this case, indicating a marginal reduction in
network performance compared to static max pooling. As
supported by the literature (Qi et al., 2017a, b; Kashefi et al., 2021;
Kashefi and Mukerji, 2022; Kashefi et al., 2023), max pooling is a preferred
technique for classification tasks compared to average pooling. Our finding
shows a similar pattern for the introduced FNO-based framework.
Figure 9: A few examples of synthetically generated three-dimensional digital
porous media for examining the generalizability of the proposed neural
network; (a) an image of size $36^{3}$, (b) an image of size $44^{3}$, (c) an
image of size $52^{3}$, and (d) an image of size $60^{3}$.
Figure 10: $R^{2}$ plots demonstrating the generalizability of the proposed
approach in classifying multi-sized images. The network, trained on images of
sizes $40^{3}$, $48^{3}$, and $56^{3}$, is used to predict images of sizes (a)
$36^{3}$ (375 data), (b) $44^{3}$ (375 data), (c) $52^{3}$ (375 data), and (d)
$60^{3}$ (375 data).
### 5.7 Generalizability
In this subsection, we assess the generalization ability of the proposed
FNO-based framework. Note that the concept of generalizability in the context
of the present work refers to the network's ability to predict the
permeability of cubic porous media with unseen sizes. As discussed in Sect. 4,
sizes $40^{3}$, $48^{3}$, and $56^{3}$. To examine the network capacity to
generalize, we predict the permeability of porous media with sizes $36^{3}$,
$44^{3}$, $52^{3}$, and $60^{3}$ with 375 cubes for each of these sizes using
our pretrained FNO-based framework. Figure 9 shows a few examples of these
synthetic data, generated for the purpose of examining the network
generalizability. As shown in Fig. 10, a slight decline is observed in the
accuracy of permeability predictions for porous media with unseen sizes.
However, the obtained $R^{2}$ scores remain in an excellent range. These
scores are 0.93185, 0.91124, 0.91500, and 0.90844 for the porous media sizes
of $36^{3}$, $44^{3}$, $52^{3}$, and $60^{3}$, respectively. As another
observation, the performance of our approach is marginally higher in
predicting the permeability of unseen porous media with smaller cubic sizes.
As highlighted in Fig. 10, $R^{2}$ scores of porous media with sizes of
$36^{3}$ are greater than ones with a size of $44^{3}$. A similar scenario
occurs when we compare porous media of sizes $52^{3}$ and $60^{3}$. This can
be attributed to the fact that, for smaller sizes, the fixed-size vector of
the global feature encodes the features of smaller cubes more effectively.
Moreover, note that the vector size is the same as the channel width. As a
last comment in this subsection, to enhance the network's generalizability, a
potential strategy could involve expanding the training dataset to include
more than the initial three sets of geometry sizes.
Figure 11: Evolution of the loss function for the validation and training sets
using (a) the proposed approach (see Fig. 1) and (b) the intuitive approach
(see Fig. 2).
### 5.8 Comparison with intuitive approach
#### 5.8.1 Classification of fixed-sized images
For the comparison between the proposed approach (see Fig. 1) and the
intuitive approach (see Fig. 2), we consider the problem of predicting the
permeability of porous media with fixed cubic sizes. Specifically, we consider
a size of $48^{3}$. Similar outputs are observed for other sizes. To ensure a
fair comparison, both methodologies are investigated under similar conditions.
Specifically, both methods are set to have an approximately equal number of
trainable parameters (i.e., 828738 for the intuitive strategy and 828673 for
our approach). Accordingly, the size of the vector representing the global
feature is 64 in both methods. All other parameters such as the number of
modes in each direction, the number of FNO units, the classifier architecture,
and size, are the same in both methods and are set as those listed in Sect. 4
(i.e., the training section).
Our results demonstrate that both methods perform proficiently, with $R^{2}$
scores of 0.99348 and 0.97360 over the test set for the intuitive
approach (see Fig. 2) and the proposed approach (see Fig. 1), respectively.
The evolution of the loss function for the training and validation sets
indicates a convergence after approximately 3000 epochs. This deep learning
experiment confirms an approximately equivalent computational cost between the
two approaches. Hence, when the image size of training data is fixed, both
strategies are effective for the defined image classification task and there
is no significant advantage for one method over the other, according to our
analysis. As a last point in this subsection, we note that one may also use
static max pooling in the architecture of the traditional approach since the
size of porous media is fixed in this experiment. Based on our results, the
performance does not change.
#### 5.8.2 Classification of multi-sized images
In this subsection, we compare the performance of the proposed approach (see
Fig. 1) with the intuitive approach (see Fig. 2) in predicting the
permeability of porous media with varying sizes. The evolution of both
training and validation losses is depicted in Fig. 11. Figure 11 indicates a
divergence between the training and validation losses for the network used in
the intuitive approach, which suffers from overfitting, whereas this is not
the case for the proposed approach. The superiority of the proposed approach
is also evident by the $R^{2}$ score obtained for the test set. Accordingly,
the $R^{2}$ scores of the proposed approach and the intuitive approach are
respectively 0.96809 and $-0.42632$. The negative value of the $R^{2}$ score
for the intuitive approach demonstrates that its model makes worse predictions
than a model that simply predicts all outputs as the mean value of the
dataset. Note that changing hyper-parameters, such as the number of modes,
channel width, and number of FNO layers, does not improve the model of the
intuitive approach.
This flaw stems from two reasons. First, using the intuitive approach, the
network captures the global feature only after dropping the cubes back into
the original space, while the trainable parameters of the network are mainly
defined in the Fourier space. Second, the adaptive max pooling's size is
altered depending on
the size of the input cubic porous medium. These two together lead to a
misrepresentation of the global feature of cubes with different sizes, when
the network tends to predict the permeability of the validation and test sets.
Note that in Sect. 5.8.1, we showed that the intuitive approach worked well
when it was trained over porous media with fixed sizes. However, the result of
our machine learning experiments illustrates that the global features of cubes
with different sizes are amalgamated. In contrast, our approach uses static
max pooling consistent with the channel width of Fourier neural operators
before transitioning back to the original space. This approach enables the
capture of global features prior to changing spaces.
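To make this architectural difference concrete, the following is a minimal PyTorch sketch contrasting the two pooling strategies; the tensor shapes and the adaptive output size of $4^{3}$ are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

width = 64  # channel width of the FNO latent space (64 in this paper)

def static_global_max_pool(x: torch.Tensor) -> torch.Tensor:
    """Proposed approach: max over the spatial axes while the features still
    live in the Fourier channel space, yielding a (batch, width) global
    feature whose size never depends on the input cube size."""
    # x: (batch, width, D, H, W) -> (batch, width)
    return torch.amax(x, dim=(2, 3, 4))

# Intuitive approach: pool after returning towards the original space, using
# an adaptive max pool whose pooling window changes with the input size.
adaptive_pool = nn.AdaptiveMaxPool3d(output_size=4)

for size in (40, 48, 56):                       # multi-sized porous media
    feats = torch.randn(2, width, size, size, size)
    g_static = static_global_max_pool(feats)    # shape (2, 64) for every size
    g_adapt = adaptive_pool(feats).flatten(1)   # fixed length, but the pooling
                                                # window varies with input size
    print(size, tuple(g_static.shape), tuple(g_adapt.shape))
```

The key point is that the static pool is the same operation regardless of input size, whereas the adaptive pool must change its window to accommodate each size, which is one source of the amalgamation described above.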
## 6 Summary and future outlooks
In this research study, we introduced a novel deep learning framework based on
Fourier neural operators for classifying images with different sizes (see Fig.
1). Because Fourier neural operators are resolution invariant, they have the
potential to be used for the task of multi-sized image classification. To
reach this goal, Fourier neural operators must be connected to a classifier,
ideally using a pooling operator. To this end, we proposed the novel idea of implementing a static max pooling operator, which operates in a high-dimensional space whose size equals the Fourier channel width. We showed the
efficiency and robustness of this framework by predicting the permeability of
three-dimensional digital porous media with three different sizes of $40^{3}$,
$48^{3}$, and $56^{3}$. We explored the effect of key parameters such as the
number of Fourier modes in each dimension, the channel width of the discrete
Fourier space, activation functions in different layers, and the number of
Fourier units. Additionally, we showed that while the network was only trained
on the porous media with the sizes of $40^{3}$, $48^{3}$, and $56^{3}$, it
could successfully predict the permeability of the porous media with the sizes
of $36^{3}$, $44^{3}$, $52^{3}$, and $60^{3}$, indicating its
generalizability. Moreover, we demonstrated that implementing an adaptive max pooling (see Fig. 2), as an intuitive approach for connecting the FNO layers to the classifier, performed poorly when predicting the permeability of porous media of varying sizes. Note that the adaptive max pooling operated in the spatial space, and the pooling had to be adaptive in order to handle input images of varying sizes.
As a future research direction, we aim to adapt the current architecture and extend its capabilities to two-dimensional image classification benchmarks. In contrast to the problem of permeability prediction, this reduces the problem's dimensionality from three to two. Additionally, given that standard datasets for image classification are usually large, we anticipate improved generalizability of the proposed framework.
## Data availability
The Python code for the three-dimensional problems is available in the following GitHub repository: https://github.com/Ali-Stanford/FNOMultiSizedImages
## Acknowledgements
Financial support by the Shell-Stanford collaborative project on digital rock
physics is acknowledged. Additionally, the first author would like to thank
Prof. Gege Wen at Imperial College London for her helpful guidance and
discussion on the software engineering aspects of this study.
# Invisible, Unreadable, and Inaudible Cookie Notices: An Evaluation of Cookie
Notices for Users with Visual Impairments
James M Clarke, University of Surrey; Maryam Mehrnezhad, Royal Holloway University of London; Ehsan Toreini, University of Surrey
###### Abstract
This paper investigates the accessibility of cookie notices on websites for
users with visual impairments (VI) via a set of system studies on top UK
websites (n=46) and a user study (n=100). We use a set of methods and tools, including accessibility testing tools, text-only browsers, and screen readers, to perform our system studies. Our results demonstrate that the majority of cookie notices on these websites have some form of accessibility issue, including contrast issues, missing headings, and not being read aloud immediately when the page is loaded. We discuss how such practices impact user experience and privacy, and provide a set of recommendations for multiple stakeholders towards more accessible websites and better privacy practices for users with VIs. To complement our technical contribution, we conduct a user study and find that people with VIs generally have a negative view of cookie notices and believe our recommendations could help their online experience. We also find a disparity between how users wish to respond to cookie notices and how they respond in reality.
## 1 Introduction
Visual impairment (VI) is a term used to describe any type of vision loss,
ranging from partial vision loss to someone who cannot see at all [1, 2].
People with VI have various types of assistive technologies (AT) available to
help them browse the internet [56], e.g., text-only browsers and screen
readers. Screen readers are installed on users’ computers or phones to read
information by outputting it as sound [49, 56, 65]. They work with the browser
and interpret the code that is used to build web pages [6]. Screen readers are
not capable of conveying visual and spatial information, such as layout and
images, to the user unless relevant meta-information is provided in the web
page code through _markups_. In addition, the text is only presented line by
line, making it harder to get an overview of the page [49]. This also makes it
more difficult to understand the relations of a website’s different parts and
identify navigation links. To ensure that AT can correctly interpret websites,
there are various accessibility standards, such as the Web Content
Accessibility Guidelines (WCAG) provided by the World Wide Web Consortium
(W3C) [63]. WCAG aims to provide a shared standard for web content
accessibility. The WCAG documents explain how to make web content more
accessible to disabled people. To be included in the WCAG, an issue must affect disabled people to a greater extent than people without disabilities [64].
The majority of websites employ some type of tracking, using various
techniques such as cookies and fingerprinting [12]. There are two types of
cookies, functional and non-functional [30], with the most common use of non-
functional cookies being for personalised advertising [70, 55, 59]. A simple
method to counteract this type of tracking is to allow users to manage which
cookies are stored on their device [53]. With the implementation of the
General Data Protection Regulation (GDPR) in 2018, companies operating in the
EU and the UK and/or handling EU/UK citizens' data need to choose a legal basis to collect and process user data [48]. One of the best-known mechanisms for this is the cookie notice, used to gain consent from users [33]. Alongside the GDPR, the
ePrivacy Directive and the Information Commissioner Office (ICO) give specific
guidance on obtaining consent through cookie notices [42, 34].
Previous research has shown that individuals want to protect themselves from
online tracking [8, 47], though they are not always confident [8, 36].
Multiple studies have looked at how the function and presentation of cookie
notices differ [36, 34, 24, 61, 9]. Similarly, there are studies showing that
the designs of cookie notices can affect users’ interactions [61], including
through dark patterns [40]. Previous research has examined the effect of GDPR
and cookie notices on the number of cookies [30, 9, 24]. It has been shown
that there is a disparity between the requirements of data protection laws, the practices of websites, and users' behaviour regarding online tracking protection [36].
Limited research has been conducted on privacy and VIs. Users with VIs have
previously been found to have concerns about being tracked online [25],
similar to others [8, 47]. There has also been research looking at VI and
online information credibility [6]. In the context of cookie notices and VI,
research is extremely sparse [52]. There are some reports on usability issues
with cookie notices while looking at the wider accessibility of websites [66].
To the best of our knowledge, there is no research on cookie notices and AT
where a comprehensive range of methods is utilised. Our research questions
include:
* •
RQ1: How do websites and cookie notices comply with the web content
accessibility guidelines and the general data protection regulations? RQ1-a:
How do popular websites comply with the current accessibility guidelines
(e.g., WCAG) and the GDPR? RQ1-b: Does compliance necessarily mean good
privacy practices for VI users?
* •
RQ2: Can the existing automated accessibility tools evaluate cookie notices?
RQ2-a: How do the current cookie notices score with the automated
accessibility tools (e.g., WAVE and Google Lighthouse)? RQ2-b: Does a high
score necessarily mean good practice for VI users?
* •
RQ3: How do cookie notices interface with AT? RQ3-a: How does the mainstream
AT (e.g., text-only browsers and screen readers) interact with cookie notices?
RQ3-b: How do the current practises impact VI users’ privacy?
* •
RQ4: What are the general perception and practice of VI users regarding cookie
notices? RQ4-a: What issues have VI users encountered with cookie notices?
RQ4-b: Who do participants believe is responsible for online accessibility?
This paper contributes to the body of knowledge via its system studies, user
studies, and the discussions and recommendations that we provide for improving
the online privacy of users with VIs. First, we provide a set of evaluation
methods based on off-the-shelf tools for AT and for users with VI. This
enables us and other researchers to conduct system experiments and assess
websites and cookie notices for their accessibility. Second, using these
methods and tools, we run experiments on 46 popular UK websites (according to
Alexa) and report a wide range of accessibility issues with their cookie
notices. Table 1 presents an overview of our system studies. Third, we conduct
user studies with 100 UK participants who use AT and extract their perception,
practices, and preferences regarding cookie notices on websites. The results of our system studies as well as the user studies confirm that current practices are far from ideal in protecting the privacy of users with VIs. Finally, we discuss the impact of these practices on user privacy and provide
recommendations for web developers, AT designers, policymakers, and end users
to improve the privacy of real-world practises.
## 2 Background and Related Work
The concept of Differential Vulnerabilities recognises how different populations face
different types and degrees of security and privacy risks [44]. This
challenges the universalising tendencies that frame cybersecurity around an
abstract or generic user who either does not exist or is only a subset of
actual end users [11, 35]. This ties into social sciences research looking at
models of disability such as the Critical Realist Model [14]. Both of these
threads consider the real-world lived experiences of disabled people, as well as their thoughts, with differential vulnerabilities considering how different threats can arise for different user groups. Studying and evaluating the
privacy of users with VIs is challenging. The common range of privacy
assessment methods would not be directly useful here. Instead, such approaches
should be combined with accessibility assessment methods, as defined in the
accessible writing guidelines of the Association for Computing Machinery [21].
According to the Office of National Statistics in 2020, almost 11 million
adults with disabilities recently used the Internet in the UK [41]. A 2016
GOV.UK survey of 712 AT users found that 29% used a screen reader to browse
the Internet, while others used screen magnifiers, speech recognition or
readability software [18]. They also found several different screen readers
being used, the most popular being JAWS. WebAIM also found that JAWS was the most popular screen reader, used as the primary screen reader by 53.7% of respondents, followed by NVDA at 30.7% [68].
### 2.1 Users with VI and Privacy
Evaluating the accessibility of websites is possible through a number of
automatic and manual ways and through the use of a range of tools such as
screen readers and text-only browsers. For example, Southwell and Slater have
previously used the WCAG to evaluate university library finding aids [56].
They used an automated web-accessibility checker, WAVE 5.0 by WebAim, to
perform an initial assessment of each finding aid and then manually tested
each website using the WebbIE 3 Web browser, which is a non-graphical text-
only browser. They also used screen readers directed by keyboard navigation
including _System Access to GO_ and _NVDA_. When using the automated checker,
they found that most of the websites tested (58 of 65) had at least one
accessibility error. The most common errors were missing document language,
missing alternative text, missing form labels, and linked images missing
alternative text. They then used the non-graphical browser, finding only 68%
had headings that enabled navigation to another important section of the
document. Of those which had enough headings, they did not always have the
headings in proper sequential order, or were missing first-level headings.
Fewer sites offered links for navigation: 57% did, 43% did not, and 25% of the sites lacked both headings and links for any kind of navigation. Using the screen readers, they found that the main content of all 65 finding aids was readable; this contrasts with the 89% error rate noted by the automatic checker.
Table 1: Overall view of our system studies

Method | (I) Cookie Notice & Tracking Behaviour Evaluation | (II) General Automated Accessibility Tools | (III) Manual Testing via Text-only Browser | (IV) Manual Testing via Screen Readers
---|---|---|---|---
Tools | Google Chrome, Brave | WAVE, Google Lighthouse | WebbIE | JAWS, NVDA
Website Accessibility Assessment | NA | Yes | Yes | Yes
Cookie Notice Assessment | Yes (General, Baseline) | Partial (Accessibility) | Yes (Accessibility) | Yes (Accessibility)
There is scarce research on the security and privacy of users with VI. Brulé
et al. analysed 178 papers on technologies designed for people with VI, with
the aim of facilitating and stimulating future research [5]. Inan et al.
surveyed 20 individuals who are visually impaired to investigate their
internet use and explore their cybersecurity challenges and concerns while
browsing the internet [25]. They found a number of problems, such as automatic
web page refreshing and missing or improper headings. In this study, the
possibility of someone tracking their internet activities was the highest-
rated concern. The authors suggest that it is important to guide the user to
enable security and privacy settings and to provide accessible software
solutions to protect and warn this marginalised group. Hayes et al. shadowed and interviewed people with VI and their allies [23], finding that self-perceptions can influence someone's behaviour, which could have privacy and security implications, such as hiding or concealing a disability due to perceived stigma. Akter et al. studied the privacy concerns of 155 people with VIs relating to camera-based AT [3], finding that users of these systems were more concerned about the privacy of others, who may inadvertently be captured in their images, than about themselves. However, camera-based AT can create a lack of personal security in the lives of the people it is trying to help. Previous research reports that users with VIs often find it difficult to complete their security tasks and that they had moderately high levels of concern about
cybersecurity [39]. Similarly, there are reports on the complications of
authentication methods such as passwords and two-factor authentication for
users with VIs [51]. An exploratory user study, conducted using semi-
structured in-person interviews with 14 people with VIs found that
participants were aware and concerned about privacy and security and faced a
variety of risks [2].
More relevant to this paper is the work of Schnell and Roy [52], who evaluated the cookie notices of a select group of 40 educational and financial websites using WCAG. They found that even for users without disabilities, there were challenges to accessing, understanding, and processing privacy information. They also found that educational websites were more accessible than financial websites, although not all websites complied with the WCAG criteria chosen for their testing. In contrast to this work,
we offer a comprehensive evaluation method to review website cookie notices
and apply our methods to a range of websites rather than only educational and
financial.
Although there have been a number of user studies looking generally at users with VI and security, to the best of our knowledge, there have been none
looking specifically at cookie notices. In this paper, we aim to address this
gap via a series of system studies and a dedicated user study with users who
have VIs, both focusing on cookie notices.
### 2.2 AT Regulations, Standards, and Tools
According to GDPR, cookie notices should be presented on all websites that use
cookies and should include opt-out as well as opt-in options, without highlighting the latter or including any privacy or cookie walls. Cookie notices should be separated from other matters such as the privacy policy and terms and conditions, and the user should be able to opt out of previously accepted cookie settings with the same ease as they gave the consent. Enabling
non-essential cookies before the user’s consent is a non-compliant practice
too. Based on Article 12 et seq. GDPR [16]: “The controller shall take
appropriate measures to provide any information referred to in Articles 13 and
14 and any communication under Articles 15 to 22 and 34 relating to processing
to the data subject in a concise, transparent, intelligible, and easily
accessible form, using clear and plain language, in particular for any
information specifically addressed to a child. The information shall be
provided in writing, or by other means, including, where appropriate, by
electronic means. When requested by the data subject, the information may be
provided orally, provided that the identity of the data subject is proven by
other means.” This article is interpreted as meaning that the data controller (i.e., the web tracker in the context of this paper) must inform every user about the nature of the data to be collected and the purposes of such collection. Hence, websites need to be fully compliant with the regulations and also offer usable practices that comply with further requirements. The ambiguity of how such practices should include marginalised users has not been discussed widely, and only limited examples are available. One example is the verdict of an Italian case in which the data controller was mandated to provide the information acoustically for video surveillance [15].
There are many aspects to the real-world implementation of accessible web
technologies [43]. For instance, an accessible web design approach should
support enhancing the visual characteristics of the front–end design and
utilise a range of colours, while ensuring the contrast of the colours is
accessible to users who are visually impaired or colour–blind. Also, they need
to build an audio commentary for the page and the images. The interconnected
nature of web pages (as various resources fetched from different origins in
the page) could potentially increase the complexity of fully accessible web
design. To harmonise such practices, the W3C has provided a comprehensive list of 167 tools for evaluating accessibility compliance (w3.org/WAI/ER/tools/). They are implemented on a number of
platforms and technologies, some supporting cross-platform products. These
products include 20 support APIs, 14 authoring tool plugins, 45 browser
plugins, 19 command line tools, 25 desktop applications, 4 mobile
applications, and 90 online tools.
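One concrete check behind several of these guidelines is the WCAG colour-contrast requirement. As a minimal sketch (assuming 8-bit sRGB colours; the helper names are ours), the WCAG 2.x contrast ratio can be computed as follows.

```python
def _linearize(c: float) -> float:
    # sRGB channel value (0-1) to linear light, per the WCAG 2.x definition
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(v / 255.0) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.1 level AA requires at least 4.5:1 for normal body text
print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, black on white
print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.5, mid-grey text
```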
There are a number of standards and regulations worldwide that provide accessibility requirements for technologies to be considered _publicly presentable_. These are organised by region or country (e.g., European, Italian, Irish, Israeli, Japanese, Korean, and US federal law), by platform (web accessibility frameworks, e.g., WCAG versions 2.1, 2.0, and 1.0), or by file format (e.g., EPUB standards).
Looking at the standards based on their support for VI, we can conclude that all of them recognise such disabilities and provide a standardised set of guidelines for implementing such support. In general, they offer similar suggestions to mitigate such disabilities. For example, these standards support VI by advising providers to offer a form of AT and non-visual access for visually impaired users, including a proprietary agent (in this case, dedicated hardware or special browsers), audio description to explain important visual detail, high-contrast visualisation, adoption of flash thresholds, magnification, reduction of the required field of vision, and control of contrast, brightness, and intensity. Following these guidelines can contribute towards meeting the minimum requirements for complying with such regulations.
In our observations, most accessibility tools are based on the W3C standard family
(WCAG 2.1 (85 tools out of 167), WCAG 2.0 (139 tools), WCAG 1.0 (46 tools)).
Moreover, some of them comply with country-specific regulations such as German
standards (21 tools), French standards (12 tools), Japanese standards (18
tools), EU standards (9 tools), US federal procurement (67 tools), Irish
National IT Accessibility Guidelines (16 tools), Israeli web accessibility
guidelines (7 tools), Italian accessibility legislation (11 tools), Korean
standards (1 tool). Finally, format-specific standards such as EPUB
accessibility 1.0 are only supported in 3 tools.
### 2.3 Online Tracking
Adopted in April 2016 and implemented in May 2018, the GDPR changed the rules
on online tracking and consent (including consent to cookies) [48, 33]. In
order to process personal data, companies must choose a legal basis for
processing. One of the most well-known is consent. Valid consent must be
freely given, specific, informed, unambiguous, explicit, revocable, given
prior to any data collection, and requested in a readable and accessible
manner [48]. The ePrivacy Directive (“ePD”, aka “cookie law”) [42], provides
supplementary rules to the GDPR. According to the ePD website, publishers must
rely on user consent when collecting and processing personal data using non-
mandatory (not strictly necessary for the services requested by the user)
cookies or other technologies [42]. This is in accordance with the guidance
given by the European Data Protection Board and the ICO [13, 27]. Various
studies (e.g., [61, 9, 34, 59, 36, 24]) exist on the implementation and
effectiveness of cookie notices, privacy banners, and tracking practices.
Examples of dark patterns include providing invalid consent, nudging the user,
making opting out difficult, not providing the user with options to opt out of previously accepted cookie settings, pre-enabling non-essential cookies,
and including trackers in the cookie notice itself. For example, the top 5
consent management platforms have been reported to use dark patterns and
implied consent [40].
There is a body of knowledge on the user dimensions of tracking, including
concerns and negative feelings of users about tracking [47], differences
between demographics such as gender and country [8], the disparity between
regulations, website practices and users’ limited knowledge for protecting
against tracking and their demand for more transparency and control [36, 54,
46, 37, 60]. What is lacking in the previous work is the measurement of the
current in-the-wild practices of web tracking notices for users with visual impairments. In this paper, we aim to run experiments in order to fill this
gap.
## 3 Accessibility Evaluation Methodology
In this section, we present our methodology for the evaluation of the
websites.
Our assessment includes a number of different methods and tools including
automated accessibility testing tools, a non-graphical browser and screen
readers, as explained at length in this section. The overall design of our
experiments and the tools used in each part is presented in Table 1. We have
included a website analysis template in Appendix A.
All experiments took place between April and October 2022 on a laptop PC
running Windows with a screen size of 13.3 inches and a resolution of 3840 x
2160. Windows is the most commonly used desktop OS among screen reader users
according to the WebAIM 2021 survey [68]. As a case study, we use Alexa’s top
50 UK websites in April 2022. We selected this sample since GDPR is a regional
regulation (EU/UK). We limited our analysis to English-language websites, the researchers' fluent language. Based on Alexa, the popular UK websites are comparable to others in
Europe, e.g., Germany and France. From this list, four websites are excluded
because they redirect to another website already on the list or are down.
_t.co_ is an example of a website that was excluded due to redirecting to
_twitter.com_ , however, both _amazon.com_ and _amazon.co.uk_ are retained.
The US version of the site (.com) does not contain a cookie notice, whereas
the UK version (.co.uk) does; therefore, it was important to keep both sites
on the list for comparison. These are just examples, and the full list is
presented in Table 8 (Appendix). The cookie notice experiments were conducted
by two researchers to ensure consistency. A researcher performed accessibility testing twice with one specialist software/tool, recording the results in tables (Appendix). Since the rounds of experiments took place over the course of six months, we believe this demonstrates the stability of our results.
### 3.1 Cookie Notices
All 46 websites were visited using Google Chrome (Version 103.0.5060.134
(Official Build) (64-bit)) and Brave (brave.com; Version 1.41.99 Chromium:
103.0.5060.134 (Official Build) (64-bit)). Using Google Chrome without a
screen reader acts as a baseline and gave an example of how sighted users
would see the site and the cookie notice. Chrome is one of the most popular
browsers with the highest market share in 2022 [57]. Brave is a secure browser
that was created in 2016 by two former executives of Mozilla Corporation, the
company that makes the Firefox browser [32]. Brave comes with a feature called
Brave Shields built in, which includes several privacy-preserving features.
Brave adopts various privacy-enhancing techniques which are not possible at
the browser extension level (due to access restrictions and performance
limitations), making it a powerful tool to observe the tracking behaviours of
websites. It is commonly used for assessing the tracking behaviour of websites
on PC and mobile platforms [34, 36]. We completed these experiments before the
introduction of cookie notice blocking by Brave [58].
For each of the 46 websites, we open them in these two browsers and record the
location and control options given to users. When recording the details, we do
not interact with the website in any way, including not interacting with
notifications (e.g. requesting location permission, update notifications). To
ensure that no cookies had previously been cached, each website is viewed in a
new private or incognito window.
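A recording step like this could, in principle, be automated; the sketch below uses Selenium under our own assumptions (the CSS selectors are hypothetical, since each site marks up its notice differently, and a single incognito session is reused here for brevity) and is not the tooling used in the study.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Hypothetical selectors: real cookie notices use site-specific markup.
NOTICE_SELECTORS = ["#cookie-banner", "[aria-label*='cookie' i]", ".cc-window"]

options = Options()
options.add_argument("--incognito")  # fresh profile so no cookies are cached

driver = webdriver.Chrome(options=options)
results = {}
for url in ["https://www.amazon.co.uk", "https://www.amazon.com"]:
    driver.get(url)
    matched = None
    for selector in NOTICE_SELECTORS:
        if driver.find_elements(By.CSS_SELECTOR, selector):
            matched = selector
            break
    results[url] = matched  # matching selector, or None for "No Notice"
driver.quit()
print(results)
```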
We also record which options are given to the user in the cookie notice
according to the categories suggested in similar work, e.g. [34, 36]. These
categories include: (i) _Agree or Reject_ : where two options are presented,
Agree (Agree, Accept, OK, Understand, etc.) or Reject (Reject, Decline, No,
etc.), with the same level of control (e.g., two buttons). These are further
categorised by which option is emphasised. (ii) _Agree or Settings_ : where
two options are presented, Agree or Settings (Options, Settings, Policy,
Manage, Learn more, etc.), again with the same level of control. Which are
further categorised by which option is emphasised. (iii) _Agree, Reject, or
Settings_ : where three options are presented; Agree, Reject, and Settings.
These are further categorised on the basis of which item is highlighted in the
notice. (iv) _No Notice_ : The website does not display a cookie notice.
### 3.2 Accessibility Evaluation
Our accessibility evaluation consists of two parts. First, we use automated
testing tools, which are designed to give developers an overview of how
accessible their website is [63]. This allows us to get an impression of the
overall accessibility of a website, and in some cases includes information
about the accessibility of the cookie notice. Second, we use software designed
for individuals with VI in the real world to assess the results of the
automated testing tools and to allow us to more specifically focus on cookie
notices. In this section, we explain these approaches.
#### 3.2.1 Automated Accessibility Testing Tools
Websites are evaluated using two different automated accessibility testing
tools: the WebAIM WAVE 5.0 Web Accessibility Evaluation Tool (wave.webaim.org) and Google Lighthouse (developer.chrome.com/docs/lighthouse/overview/).
WAVE is an automated accessibility tool that we use to perform an initial
assessment of the conformance of each website to WCAG. WAVE generates a report
containing Errors, Alerts, Features, Structural elements, and ARIA landmarks.
Errors indicate issues that will impact certain users with disabilities, as
well as showing failures to meet the requirements of the WCAG. Alerts, in contrast, are elements which may cause accessibility issues but need further manual testing to confirm this. Features are elements that can improve
accessibility if implemented correctly. Structural elements are HTML regions
and headings, and ARIA can be used to present important accessibility
information to people with disabilities. WAVE has been used in previous work,
e.g. Southwell and Slater, when evaluating university library finding aids
[56]. During their testing, they used WAVE to perform an initial evaluation of
the conformity of each finding aid to Section 508 and WCAG 2.0 guidelines. We
tested the web version of WAVE 5.0 in our preliminary testing and it did not
detect any cookie notices. Therefore, we use the browser extension
version (wave.webaim.org/extension/) for our experiments. The WAVE extension
evaluates the rendered version of the web page allowing dynamically generated
content to be evaluated [69], while the WAVE Web version may not be able to
apply all the scripting on the page. This is a possible reason for the cookie
notices not being displayed during our preliminary tests.
We use Google Lighthouse to give an overall accessibility score, as well as to
record specific problems with each website. Lighthouse is an open-source
automated testing tool, which can audit performance, accessibility, and more
[17]. We only test accessibility using the default (navigation) mode and while
representing a desktop device. We record the score out of 100 and the
individual issues with each website. This score is a weighted average of all
accessibility audits it performs, with weighting based on axe user impact
assessments [10]. The axe user impact assessments are composed of WCAG rules
with some supplementary rules added [26].
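To illustrate the aggregation, a toy sketch of such a weighted average is given below; the audit names and weights are illustrative assumptions, not Lighthouse's actual weighting table.

```python
# Each audit scores 0 (fail) or 1 (pass); weights reflect user impact.
# Both the audits and the weights below are illustrative assumptions.
audits = {
    "color-contrast": (0, 7),  # (score, weight)
    "document-title": (1, 7),
    "heading-order":  (1, 3),
    "link-name":      (0, 7),
    "html-has-lang":  (1, 7),
}

weighted = sum(score * weight for score, weight in audits.values())
total = sum(weight for _, weight in audits.values())
accessibility_score = round(100 * weighted / total)
print(accessibility_score)  # 55 out of 100 for this hypothetical page
```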
Both WAVE and Google Lighthouse give an overview of accessibility for a whole website; however, WAVE also allows us to view where specific problems occur.
Manual Testing via Text-only Browser: To complement and verify the results of
the testing tools, we apply a range of methods to manually assess the privacy
practices of these websites via their cookie notices. We visit all these
websites using WebbIE (webbie.org.uk), a text-only browser for people with
VI. The WebbIE Ctrl-H and Ctrl-L commands are used to examine the heading and
links on a page. This approach has been used in similar work, e.g., [56].
WebbIE was uninstalled and reinstalled for each round of testing, as it does
not have a private browsing mode or cookie manager. Through this method, we
examine how users navigate the page and if and how cookie notices are
displayed. We assign each website to one of the following categories: (i) _No
Headings_ : The website in general has no headings which can be used for
navigation. (ii) _Basic Headings_ : The website has some headings, but too few to be useful for navigation. (iii) _Full Headings_ :
The website has a number of headings that are useful for navigation.
Headings allow screen readers and other accessibility software to navigate
around a webpage. For example, WebbIE can move easily to different headings on
a website allowing for quicker navigation and locating key information, e.g. a
cookie notice. The categories above are derived from previous work [56], where
similar categories were used to evaluate the accessibility of library finding
aids. Similarly, we observe the website’s behaviour in presenting the cookie
notice and each website’s cookie notice was also put into the following
categories: _(i) Headings throughout_ : Headings are available throughout the
cookie notice. _(ii) Heading at the start_ : A heading is present at the start
of the notice, however, there were no other headings in the body of the
notice. _(iii) No headings_ : There are no headings present in the cookie
notice at all. _(iv) Notice missing_ : The cookie notice is not shown when
using WebbIE, however, one is present when using the graphical browsers. _(v)
No notice_ : The website does not have a cookie notice when viewed with the
graphical browser.
The _Headings throughout_ category for cookie notices is based on the _Full
Headings_ category for the website as a whole. Meaning that a user would be
able to navigate the cookie notice using heading-based navigation; this is
particularly useful for longer cookie notices as seen on some websites. The
_Heading at the start_ category is used to classify notices that only have a
heading at the start. This would allow for navigation to the notice itself but
means that a user would have to rely on a different type of navigation within
the notice, e.g. line-by-line or link-based navigation. Whereas _No headings_
would mean a user would not be able to use heading navigation at all within
the cookie notice and would have to rely on another form of navigation. In
some instances when viewed graphically, a website did display a cookie notice,
however, when using WebbIE one was not present. For this reason, we include a
_Notice missing_ category to signify this. With _No notice_ , in contrast, a notice was not present on the website even when viewed graphically. We included
different categories for headings (Basic, Full, and No headings) since we
found lengthy cookie notices on some websites (e.g., google.com,
facebook.com), however, headings are not always needed due to a number of
cookie notices being shorter.
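In our study this categorisation was made manually while navigating with WebbIE; purely as an illustration, a programmatic approximation over a page's HTML could look like the following sketch (the numeric threshold is our assumption).

```python
from bs4 import BeautifulSoup

def classify_headings(html: str) -> str:
    """Assign one of the three website heading categories described above.

    The threshold of three headings is an illustrative assumption; in the
    study the judgement was made manually while navigating with WebbIE.
    """
    soup = BeautifulSoup(html, "html.parser")
    headings = soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])
    if not headings:
        return "No Headings"
    if len(headings) < 3:  # too few to be useful for navigation
        return "Basic Headings"
    return "Full Headings"

html = "<h1>News</h1><h2>Sport</h2><h2>Weather</h2><p>We use cookies...</p>"
print(classify_headings(html))  # Full Headings
```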
#### 3.2.2 Manual Testing via Screen Readers
In order to have more comprehensive and conclusive results, we also carry out
our experiments using screen readers to manually test each website. JAWS and
NVDA were chosen as the most popular according to WebAIM [68], 53.7% and
30.7%, respectively. We use these screen readers in conjunction with Google
Chrome as these are the most common combinations of screen reader and browser
[68], 32.5% and 16.0%, respectively. NVDA is a free OS-level screen reader
with support for popular applications such as web browsers, email clients,
music players, and office programs. JAWS is another OS-level screen reader
that users need to purchase. For our experiments, we purchased a home licence
(£865 with the Authorisation USB Dongle). Both screen readers should have
similar reliability when parsing websites [67], however, they often parse
website code slightly differently [45]. It is for this reason that we use the
two most popular screen readers during our testing.
We categorise cookie notices based on the way these screen readers are able to
read them [62, 52]. Accordingly, each website’s cookie notice was given a pass
or fail for the following categories: _(1) Readable_ : The screen reader
software is able to read the cookie notice. _(2) Immediately Read_ : The
cookie notice is the first thing to be read from the page, excluding the page
title. _(3) Keyboard navigable_ : The cookie notice of a website is navigable
using a keyboard while using a screen reader. _(4) Link or button purpose_ :
The purpose of a link or button can be solely determined by the link or
button. _(5) Abbreviations are explained_ : All abbreviations are explained.
This was either in the cookie notice or the website offered some mechanism for
identifying the expanded form. _(6) Page titled_ : The page has a title that
describes its topic or purpose. _(7) Cookie notice titled_ : The cookie notice
has a title or heading which is readable by the screen reader software. _(8)
Headings useful for navigation_ : There are useful headings for navigation
present throughout the cookie notice. _(9) No timing_ : There is no timing for
reading the cookie notice.
The _Readable_ category is based upon WCAG 3.1, Readable, defined as “Make
text content readable and understandable” by WCAG [62], with the guideline
being used in previous work [52]. We created the _Immediately Read_ category
to show that a cookie notice is read close to the start of a web page. This is
important as a number of websites start tracking a user before they respond to
the notice, and therefore users must be able to respond to the notice at the
first given opportunity. Also, meaning that users do not have to actively
search the website for the notice to respond. The category _Link or button
purpose_ is based on WCAG 2.4.9, Link purpose (link only), which is defined as
“A mechanism is available to allow the purpose of each link to be identified
from the link text alone, except where the purpose of the link would be
ambiguous to users in general” [62]. _Abbreviations are explained_ is based
upon WCAG 3.1, Abbreviations, which W3C define as “A mechanism for identifying
the expanded form or meaning of abbreviations is available”. _Page titled_ is
also based on a WCAG, namely 2.4.2 Page Titled which is defined as “Web pages
have titles that describe topic or purpose”. We create another category based
on this called _Cookie notice titled_ ; this is to judge whether a cookie notice can easily be navigated to. It also aligns with previous testing for headings,
as titles often consist of headings. Alongside this, we test for _Heading
useful for navigation_ , which is based on the previous heading testing with
WebbIE in cookie notices. It also aligns with WCAG 2.4.10 Section headings,
defined as “Section headings are used to organize the content”. We also define
the category _No timing_ which is based on WCAG 2.2.3 No timing. This is
defined as “Timing is not an essential part of the event or activity presented
by the content, except for non-interactive synchronized media and real-time
events”.
### 3.3 Limitations
To the best of our knowledge, this is the first work on the assessment of
cookie notices on a range of websites for users with VI. We simply chose to
test the 46 top websites in the UK (out of 50). In practice, the top Alexa
websites may not be the most popular websites for users with VI. However, we
could not find a formal report on popular websites for this group of users. We
acknowledge that this is a limited sample set and more research is required to
evaluate a larger number of websites. When testing websites, we only tested
the first page in our experiments. Although this is a common practice for the
privacy assessment of websites in general, it is not clear whether all pages
would present the same information and produce the same output for AT. Further
detailed work would be needed to explain how different web pages interact with
AT.
Previous research has demonstrated the usefulness of mobile technology for
people with VI, e.g. [19, 20]. However, due to the lack of research in this
area, we only generally focus on desktop web browsers, for which the majority
of the accessibility and AT tools and standards are also designed. Cross-
platform studies are left as future work.
## 4 Accessibility Evaluation Results
Our results include (1) a general assessment of the cookie notices of the
websites and their tracking behaviour, and (2) an accessibility evaluation of
these websites and their cookie notices.
### 4.1 Cookie Notices and Tracking Behaviour
Cookie Notice Position: We observed that the majority of websites displayed a
cookie notice (n = 35 or 76.1%) when using Google Chrome. Of the positions, a
bottom overlay was the most common (n = 15 or 32.6%), followed by a middle
overlay (n = 7 or 15.2%). When using Brave, a higher number of web pages
displayed no notice (n = 15 or 32.6%). Other than this, the popularity of
categories is in the same order as that of Google Chrome. While there are some
papers (e.g. [61, 4]) looking at cookie notice positions and user engagement,
we could not find any for users with VI.
Cookie Notice Control Options: Of the options given when using Google Chrome,
Agree or Settings was the most common (17 or 37.0%). The most commonly
emphasised option along with Agree or Settings was Agree (13 or 28.3%). Table
2 describes the options presented to users in Chrome and Brave. These results
from Brave resemble those of Google Chrome; however, when using Brave, there
was a higher percentage of websites which displayed no notice. These results
are consistent with previous work (e.g., [34]) when cookie notices were
evaluated across platforms. Some websites’ cookie notices did not appear
because the notice itself is a tracker and is therefore blocked by Brave.
Previous research studying GDPR compliance has focused on the following
requirements: consent must be explicit, accepting all is as easy as rejecting
all, and there are no pre-ticked boxes [40]. It has been shown that the pre-
selection of options can impact users’ choices when giving consent [61]. For
this reason and to respond to RQ1-a, in Table 2, we highlight categories that
are in violation of the above requirements and therefore in violation of GDPR.
As shown, three categories (14 websites) comply with the above
requirements. However, we did not test them for additional GDPR compliance
items, such as opting out from previously accepted cookie notices with the
same ease of opting in.
Table 2: Cookie notices’ user control options in Chrome and Brave, as well as GDPR violations.

Options | Emphasised option | Chrome | Brave | GDPR violation
---|---|---|---|---
(i) Agree or Reject | None | 4 | 4 | No
(i) Agree or Reject | Agree | 4 | 4 | Yes
(ii) Agree or Settings | None | 4 | 4 | Yes
(ii) Agree or Settings | Agree | 13 | 9 | Yes
(iii) Agree, Reject or Settings | None | 5 | 5 | No
(iii) Agree, Reject or Settings | Agree & Reject | 5 | 5 | No
(iv) No Notice | – | 11 | 15 | Yes
Tracking Behaviour: We also observed these websites regarding their tracking
behaviour through Brave. Only 3 of the 46 websites (6.5%) had no items blocked
by Brave Shields before any interaction with the cookie notice. The average
number of items blocked was 9 and the maximum was 81; 11 of the websites had
more than 10 items blocked, and 6 had more than 20. The majority of items
blocked were in the _trackers & ads_ category. Our results support similar
work (e.g., [34, 36, 33]) reporting that the majority of websites start
tracking the user before any interaction with the cookie notice, regardless of
its presence.
### 4.2 Automated Accessibility Testing Tools on Websites
WAVE ran on all but one website; when using it on _ebay.co.uk_ , the overlay
containing the results did not appear. Of the remaining sites, 42 (93.3%)
contained at least one accessibility error, with the average number of errors
being 18.98. Of the websites tested, 35 (77.8%) contained at least one contrast
error. All websites tested contained at least one structural element, with the
average being 84.02. Table 3 shows a summary of the results. We further break
these down into categories which could cause issues, e.g. errors, contrast
errors, and alerts, and those which could improve user experience, e.g.
features, structural elements, and ARIA.
Table 3: Summary of WAVE 5.0 test results for 46 websites.

Criteria | No. of websites with at least one item per criterion | Average no. of items across websites
---|---|---
Cookie Notice | 33 | -
Errors | 42 | 18.98
Contrast Errors | 35 | 22.98
Alerts | 45 | 124
Features | 46 | 77.16
Structural Elements | 46 | 84.02
ARIA | 43 | 235.42
Errors are general issues that cause problems, such as missing alternative
text, missing labels in HTML code, or a form control that does not have a
corresponding label. A contrast error causes issues for someone with vision
loss, e.g., light text on a light background or dark text on a dark
background. Alerts are criteria that need further testing to establish whether
they hinder or help accessibility; for example, an image with long alternative
text may genuinely need a long description to be fully described, or the
length may be unjustified. Features are elements which work to improve a
user’s experience, for example, a form label that is present and associated
with a form control. Structural elements, such as headings and lists,
similarly help the user’s experience. ARIA is a set of roles and attributes
that define ways to make websites more accessible to people with VI; it is
only useful if implemented correctly, such as when an ‘aria-label’ or
‘aria-labelledby’ attribute is present and can be interpreted by AT.
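As a minimal illustrative sketch (the controls and ids are hypothetical, not
taken from the tested sites), the following markup contrasts a form control
that would be flagged as an error with one whose label and ARIA attributes can
be interpreted by AT:

```html
<!-- An error: a form control without a corresponding label, so a screen
     reader cannot announce its purpose. -->
<input type="checkbox" id="analytics-unlabelled">

<!-- Correctly implemented: an associated <label> plus an ARIA attribute,
     both of which can be interpreted by AT. -->
<label for="analytics">Allow analytics cookies</label>
<input type="checkbox" id="analytics" aria-describedby="analytics-hint">
<p id="analytics-hint">Used to measure how the site is used.</p>
```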
To complement this, we also note whether a cookie notice was present when
testing using WAVE. In some instances, we observed specific issues with the
cookie notice. The most common problems were with the low contrast between the
background of the cookie notice and the text, links, or buttons. Table 8
(Appendix) shows detailed results. Overall, the website with the lowest number
of issues was bbc.co.uk, with 0 errors, 0 contrast errors, 134 alerts, 23
features, 119 structural elements and 371 ARIA. Several websites had a similar
number of issues to one another, namely xvideos.com, spankbang.com and
xnxx.com, all of which had between 165 and 176 items that would cause issues.
In addition, we used Google Lighthouse for the overall accessibility score of
the website. The average score was 89% (highest: 100%, lowest: 63%). Since
Google Lighthouse uses the axe user impact assessments, the overall score is
affected largely in the same way as individual WAVE tests. For example,
including a button that has an accessible name will improve the overall score
given to a web page. In general, these popular websites had a range of good
and poor accessibility practices when tested with these automated
accessibility tools. There were several websites we tested that achieved the
best possible Lighthouse score of 100%: bbc.co.uk, wikipedia.org, gov.uk,
paypal.com, microsoft.com, linkedin.com, and doubleclick.net. The lowest
score, 63%, was achieved by tiktok.com, which, according to Lighthouse, had a
number of labels and names missing, as well as navigation and contrast issues.
These scores for each website are shown in Table 8 (Appendix).
### 4.3 Manual Cookie Notice Accessibility Testing via AT Tools
Manual Testing via Text-only Browser: By using a text-only browser, we
performed an analysis on the overall accessibility of the websites and their
cookie notices. When using WebbIE, 27 of the 46 (58.7%) websites contained
_full headings_ which would be useful for navigation. With 7 (15.2%) of them
only having _basic headings_ and 12 (26.0%) containing _no headings_. The
inclusion of headings throughout the website does not directly impact privacy,
and was included in this analysis to give context to the accessibility of
cookie notices.
Figure 1: WebbIE accessibility testing; inner circle: the whole site, outer circle: the cookie notice.
When observing the cookie notices, 17 (48.6%) of the 35 websites which
previously displayed a cookie notice did not display a cookie notice when
using WebbIE (_notice missing_). Furthermore, only 1 (2.9%) of the 35 websites
which had previously displayed a cookie notice had _headings throughout_, and
6 (17.1%) had a _heading at the start_ of the notice. 11
(31.4%) of the websites’ cookie notices contained _no headings_, although
the majority of the websites which did not contain a notice did include links
to privacy and cookies. Regardless of the number of headings throughout the
website, we often found that cookie notices were missing. However, when a
website had full headings the cookie notice was more likely to have a heading
at the start. The results of these tests are shown in Figure 1, with the inner
circle representing the headings in the website as a whole and the outer
circle specifically looking at the cookie notice. The use of a heading at the
start of the cookie notice can make it easier to locate; given this, the lack
of headings seen in our testing could lead to problems. Headings inside the
notice can also make it easier to navigate within a cookie notice, especially
if it is lengthy, and therefore easier to make a decision.
Manual Testing via Screen Readers: When testing with NVDA, 29 (82.9%) of the
35 websites, which graphically included a cookie notice, contained a cookie
notice which could be read aloud. This result is higher than expected given
the WebbIE results. However, it still means that 6 of the cookie notices
could not be read at all with NVDA. Of the cookie notices that could be read,
20 of the 35 (57.1%) were read aloud immediately when the website loaded.
Others were read aloud after other elements of the page had been read or had
to be specifically located to be read. 27 (77.1%) of the 35 cookie notices
were keyboard navigable; these were not always the same websites as those read
immediately. Therefore, this leaves 8 websites which users with VI may not be
able to navigate. In some cases, these cookie notices created keyboard traps
that the user would not be able to leave. Only 5 (14.3%) of the 35 cookie
notices contained a link or button whose purpose could be solely determined by the link
or button. Hence, without allowing time for the screen reader to output the
notice, the user may not understand what they are agreeing to.
Although all 46 (100%) websites contained a title, only 19 of the 35 (54.3%)
cookie notices contained a title. This means it would not be possible to
navigate to them using the heading, and it could also make it more difficult
to search for the cookie notice. 35 of the 35 (100%) cookie notices did not have
any type of time limit on replying to the cookie notice. This is an excellent
result, meaning that users will have time to take in the information and make a
decision. None of the 7 websites which contained abbreviations explained them,
meaning that if users are unfamiliar with these terms they may not understand
what they are consenting to. Also, none of the 35 websites’ cookie notices
contained headings which were useful for navigation; however, some did contain
different links. Due to this, it may be difficult to navigate the cookie
notices, which is particularly important for some of the longer cookie notices
we observed. We summarise these results in Table 4 (detailed results in Table
9 (Appendix)).
In comparison, JAWS enabled 34 (97.1%) of the 35 websites with a cookie notice
to be read out loud. This means all but one of the cookie notices could be read
aloud, which is a significantly better result than when using NVDA. Of these,
22 (62.9%) of the 35 were read aloud immediately when the website loaded,
which again is higher than when using NVDA. 29 of the 35 (82.9%) were keyboard
navigable; this is an improvement of 2 cookie notices over NVDA. The number of
cookie notices with a link or button whose purpose could be solely determined by the
link or button was also higher at 11 (31.4%) of the 35. All of the other
results were the same for JAWS as NVDA. These results are summarised in Table
4 and detailed results are available in Table 9 (Appendix). This disparity
arises because the screen readers parse webpages differently, resulting in
differing numbers of readable notices. It underscores both the lack and the
importance of standardisation efforts.
Table 4: Number of websites which passed and failed each criterion of the manual testing via NVDA and JAWS.

Criteria | NVDA Pass | NVDA Fail | JAWS Pass | JAWS Fail
---|---|---|---|---
Readable | 29 | 6 | 34 | 1
Immediately read | 20 | 15 | 22 | 13
Keyboard navigable | 27 | 8 | 29 | 6
Link or button purpose | 5 | 30 | 11 | 24
Abbreviations are explained | 0 | 7 | 0 | 7
Page titled | 46 | 0 | 46 | 0
Cookie notice titled | 19 | 16 | 19 | 16
Headings useful for navigation | 0 | 35 | 2 | 33
No timing | 35 | 0 | 35 | 0
We identified poor practices on some of these websites. For instance, a news
website (dailymail.co.uk) read out adverts immediately before reading anything
else such as the navigation bar or the cookie notice. This is highlighted in
Figure 6 (Appendix). This is despite the fact that this website’s cookie
notice is displayed across a large portion of the page. Another example was
an online payment site (paypal.com), which read the body of the website aloud
before reading the cookie notice. This aligns with the cookie notice being
visually at the bottom of the page; however, this means that a user with VI
using a screen reader could easily miss the cookie notice. An example of the
visual representation of this notice and a scripted output of the website
while using JAWS is available in Figure 8 (Appendix). We highlight this
example; however, a similar output was common across multiple websites. One
social news website (reddit.com) was the only website with a cookie notice
which could not be read with either screen reader, even with intervention with
mouse input. Visually the cookie notice was located at the bottom of the
window, however, it could not be selected with the screen readers. A visual
example of the cookie notice is included in Figure 7 (Appendix). In contrast,
some of the websites presented the user with reasonable options when using a
screen reader. For instance, bbc.co.uk clearly presented the users with opt-in
and settings options. A scripted version of such output via the NVDA screen reader
is provided in Figure 9 (Appendix).
## 5 User Study Methodology
In this section, we explain the design of our online survey, data collection,
and analysis.
### 5.1 Questionnaire Design
When designing this survey, we followed the design principles for
questionnaires for people with VI [28]. Specifically, we informed participants
about the topic of the survey before beginning the questionnaire, indicated
the type of answer after each question, and numbered the questions
consecutively. Before running the survey, we conducted an accessibility
evaluation of the questionnaire using the tools mentioned above, during which
we did not find any issues.
Our questionnaire is made up of five sections—Internet and AT, Privacy-
enhancing technology usage, Cookie notices, Suggestions, and Demographics—with
the complete questionnaire included in Appendix B.
Internet and AT: After verifying the screening questions, we ask our
participants a few background questions about technology usage, such as what
devices and AT they use. We list different AT technologies including screen
readers, braille displays, text only browsers, magnification software, and
assistive browser extensions based on our research as well as allowing
participants to add additional items.
PETs usage: Next, after a brief explanation of PETs, we ask participants which
PETs they use, listing them according to the categorisation suggested in [8],
where the authors measure the correlation of people’s feelings about tracking
and their protective actions. These categories include: browsers’
do-not-track, virtual private networks, private browsing, password manager,
privacy-focused web browsers, encrypted messaging apps, ad blockers, and file
encryption tools; we additionally allowed participants to name other tools.
Cookie notices: We also ask our participants what they think cookie notices
are and what they are supposed to do, as well as how they feel about cookie
notices and how they interact with them. For the design of these questions we
followed [36, 8]. We end this section by asking whether they have encountered
issues with cookie notices, what they were, and why they think they happen.
Suggestions: Finally, informed by the results of our website experiments, we
ask participants who they believe should be responsible for ensuring the
privacy and accessibility of websites. We also ask which of our suggestions
would help improve their experience online.
Demographics: We conclude by asking demographic questions.
### 5.2 Data Collection and Analysis
We conducted our user studies via Prolific Academic (prolific.co) among UK
participants. We conducted one initial testing round of the survey with 10
participants, asking for feedback upon completion. We fixed minor typos and
made a few structural changes accordingly. At this stage and throughout
further participation we received no complaints relating to accessibility of
the questionnaire. We then distributed the questionnaire to a further 100
participants. We chose to use Prolific Academic to distribute the survey as
this user group is notoriously difficult to recruit; using a paid platform
therefore allowed us to reach this sample size. We rewarded participants at a
rate of £12 per hour, which is categorised as ‘Great’ by Prolific Academic.
This research received full approval from the University of Surrey’s Ethical
Committee before the research commenced.
Our method for processing the collected data is a mix of quantitative and
qualitative analysis. For our free-text questions, we run thematic analysis
[7], taking an inductive approach and allowing the data to determine our
themes; we are confident that adopting a deductive approach would have yielded
comparable themes. Two researchers conducted the thematic analysis
independently, and due to the small sample size all authors discussed and
agreed on these themes. Our analysis focused on exploring potential
differences between users with VI and the general populations studied in
previous work. Multiple themes were assigned to lengthy and complex responses,
and we selected participant responses which represent the themes for inclusion
in the paper.
### 5.3 Limitations
The interaction and intersection of online services and AT is a complex
research topic to investigate via user studies. To complement the technical
findings of this paper, we ran our studies on an online platform and through a
survey that provided us with self-reported responses, which has its own
limitations. We plan to extend our user studies to one-to-one interviews as
well as focus groups to gain a deeper understanding of the experiences of this
user group.
## 6 User Study Results
In this section, we present the findings of our user study. The study was
completed by 100 participants who self-certify as using AT and live in the UK.
Our participants hold a range of occupations, from students, educators, and
healthcare and social assistance to business and hospitality, with some not
working. 59 participants identify as male, 38 as female, 2 as non-binary, and
1 chose not to say.
### 6.1 AT and PETs Usage
Half of the users surveyed use magnification software, 42 use a screen reader,
and 22 use an assistive browser extension; 9 do not use any AT while browsing
the web, and other forms of AT each have fewer than 15 users.
Participants use various online services, with the most popular being email
(98), shopping and e-commerce (93), and social media (90). Table 10 (Appendix)
gives an overview of the demographic questions.
In response to Q2.1, all but 3 participants use at least one of the PET
categories suggested; Figure 2 shows detailed results. The most popular
technology used was a password manager (67%), and the least popular was file
encryption tools (11%).
Figure 2: Q2.1: Which of the following privacy enhancing technologies do you use? (multiple choice).
We also asked about the ways these users learn about PETs. Participants
reported different ways, including recommendations from friends/social contacts,
being informed at work / school, and news. Only 19% of participants said that
they learn about these PETs via the privacy/cookie policy of a website.
### 6.2 Cookie Notices
When we asked our participants about their understanding of a cookie notice
(Q3.1), they described it via different terms and we observed a few themes,
where one third of our participants mentioned ‘tracking’ with a negative tone.
For instance, P94 said: “It is a pop up that appears on virtually every
website I visit these days. Can be quite annoying since it collects data, but
I tend to reject the tracking cookies if possible”.
Q3.2 asked about the feelings of the participants towards cookie notices
(Table 5). Around half of our participants expressed negative feelings, one
third had neutral feelings, and around a quarter expressed positive feelings
regarding cookie notices. For instance, P9 said: “I don’t have any specific
feeling about them just something that’s there.” and P33 said “I don’t like
them, they are made difficult to understand on purpose, in order to make the
user click ‘Accept’. They need to be made more simple.”
Table 5: Q3.2: How do you feel about cookie notices?

Category | Examples | N
---|---|---
Strongly negative | Don’t trust, Intrusive, Very bad, Frustrating | 24
Negative | Dislike, Don’t understand, Confusing | 19
Neutral | Okay, Not bothered, Don’t care | 31
Positive | Important, Essential, Useful | 26
In Q3.3 and Q3.4, we asked the participants how they interact with cookie
notices (Table 6). The responses varied across categories including agree,
decline, ignore, edit cookie settings, get rid of it, and use other PETs.
Except for those who said they would agree to the cookie notice (47%), all the
other categories included the word “try” in some of the responses e.g., “try
to decline” and “try to edit the settings and say no.” Interestingly, 7% of
participants spoke about trying to get rid of the cookie notice in any
possible way by e.g., responding quickly. P46 said: “generally tick as little
as possible to view the page and also reject where I can if not[,] I have to
accept if the page [is needed]”. Whereas P13 said “I try to reject them but
this can be very difficult- I find they are often deliberately set up to make
it impossible to read.”
Table 6: Q3.3: How do you interact with cookie notices?

Category | Examples | N
---|---|---
Agree | Accept, Say yes, Agree | 47
Decline | Reject, Cancel, Disagree | 34
Ignore | Ignore, Skip it | 8
Edit cookie settings | Edit cookie setting/notice | 7
Get rid of it | Make it go away, Respond quickly | 7
Use PET | Clear cookies later/regularly | 6
In response to more questions in this category (Q3.7 and Q3.8), we found a gap
between how participants actually handle cookie notices and how they would prefer to. For
instance, 20% of participants said they agreed to cookie notices in reality,
when they wanted to act differently. Figure 3 shows the differences for each
category.
Figure 3: Q3.7: How would you like to handle cookie notices? and Q3.8: How do you actually respond to cookie notices?
### 6.3 Issues with Cookie Notices
For Q3.5, the majority (59%) of participants said they had not encountered
issues with cookie notices (Table 7). The rest said they had experienced
issues regarding cookie notice display or settings, or described negative feelings
such as frustration regarding them. P98 said that they had experienced “cookie
notices blocking content on the page that, if not blocked, I could read and
close the page without having to interact with the notice.” P50 said “some
websites make it a bit difficult to reject all cookies, it’ll open up another
page where you’ll have to individually select each tick box to reject.”
Table 7: Q3.5: Please describe in your own words what type of issues have you experienced with cookie notices?

Category | Examples | N
---|---|---
None | Nothing, None, No problem | 59
Display problems | Too big, Loading, Can’t find, Can’t read | 14
Cookie settings problems | Difficult to reject, Forced to accept | 13
Negative feelings | Too many, Tired of disabling, Annoying | 9
Other | Cookies full, Tried to disable | 2
However, when presented with a list of possible problems in Q3.6, only 20%
said none. 79% of the participants said that they had experienced at least
one; the most common were unclear response options in a cookie notice and
being unable to leave a cookie notice. Detailed results for this question are
in Figure 4.
Figure 4: Q3.6: Which of the following issues have you experienced?
In a follow-up question (Q3.9), we asked what the potential reason is when
participants cannot respond to a cookie notice. The responses of the
participants fell into two main categories: technical issues (37%) or
malicious behaviour (16%). Four participants explicitly mentioned issues with
AT; e.g., P27 said that “Assistive technology may not be picking up a notice
that has been given.” P15, for example, believed it is “because they’re trying
to force you to accept by pretending it’s broken?”
### 6.4 Suggestions
We asked our participants about the responsible stakeholders for accessibility
and security/privacy of web services. In this multiple-choice question,
several entities came up including: website developers (77%), policymakers
(48%), end-users (24%), accessibility evaluation designers (18%), and AT
designers (15%).
In addition, in response to Q4.2 in which we listed a set of recommendations
(based on our system studies), all participants thought that at least one of
our suggestions would help to improve user experience. For example, 79% of
participants believe accessibility-by-design in websites would help their
experience. Figure 5 shows the popularity of the other recommendations. We discuss
these in Section 7 at length.
Figure 5: Q4.2: Which of these recommendations do you think would help improve your experience online? (multiple choice)
## 7 Discussion
In this section, we discuss our results across our system studies and user
study.
### 7.1 Website Accessibility and User Privacy
In response to RQ2-a, we found that 93.3% of websites contained at least one
accessibility error and 77.8% contained at least one contrast error. This
means that most websites tested are not compliant with the WCAG success
criteria and, therefore, could be inaccessible, difficult for people with VI
to access, or cause access issues.
The most common error during our testing of cookie notices was low-contrast
buttons or links. The WCAG criteria 1.4.3 and 1.4.6 give guidance for
contrast, the minimum guidance is a contrast ratio of at least 4.5 to 1 with
enhanced guidance of a contrast ratio of at least 7 to 1 [62]. For the
websites that contained a contrast error, this means that they did not meet
the minimum guidance and, therefore, could make text difficult to read for
people with VI. Alongside this, we found a number of websites that had no
errors in their cookie notices but contained errors elsewhere on the page.
This suggests that the overall accessibility landscape is inadequate, which
aligns with previous research, e.g. [22]. Our results also align with previous
work reporting that the majority of university library finding aids had at
least one accessibility error [56].
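For illustration, WCAG computes the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colours, respectively. In a hypothetical example of our own (not one of the tested sites), white text (L1 = 1.0) on a light grey #CCCCCC background (L2 ≈ 0.60) gives roughly 1.05 / 0.65 ≈ 1.6:1, far below the 4.5:1 minimum, whereas white text on a black background gives 1.05 / 0.05 = 21:1, the maximum possible ratio.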
These errors with contrast could affect users who have vision loss but are not
fully blind. Since this group is larger than those who are fully blind, this
result is concerning. Low contrast could cause users to miss
important links or become confused about where to give or reject consent. For
example, previous research has shown that higher contrasts between text and
background colour led to faster searching [31], as well as affecting reading
speed [50]. It has also been shown in a requirement survey that links can
cause usability issues for users with VI [71].
### 7.2 Cookie Notice Accessibility Issues
In response to RQ3-a, we have categorised cookie notice accessibility issues
including text-only browser issues, keyboard traps, and visual presentation of
cookie notices vs. screen reader output. We explain each category here.
Text-only browser issues: WebbIE was used to manually examine the headings
contained within a website and its cookie notice. We found that 58.7% of the
websites contained headings that could be useful for navigation, with 15.2%
containing basic headings and 26.1% containing no headings at all. Only 2.9%
of websites contained headings throughout their cookie notice, with 17.1%
having a heading at the start. A number of cookie notices did not appear when
using WebbIE; this is most likely because WebbIE is built using the Microsoft
Web Browser object, which gives a program its own internal IE [29].
In June 2022, Microsoft officially ended support for IE for some OSs [38]. It
is therefore likely that web pages stopped supporting IE due to it being a
legacy browser, and this caused these websites not to work with WebbIE.
Keyboard traps: It was found that 77.1% of websites that contained cookie
notices were keyboard navigable when using NVDA. The most common problem found
was having to intervene and use a mouse, an option that is not feasible for
many people with VI. There were two main situations in which a mouse was
needed. The first was to get NVDA to read the cookie notice, as some websites
required the user to click on the cookie notice to interact with it. The other
was escaping the cookie notice, as some websites trapped the user in the
cookie notice. This directly contradicts the WCAG success criterion 2.1.2 (No
Keyboard Trap), which is rated at level A. In contrast, when using JAWS, 82.9%
of websites that contained cookie notices were keyboard navigable. Due to how
JAWS operates, a higher number of cookie notices could be read, with fewer of
them creating a keyboard trap. This is most likely because different screen
readers handle CSS code differently [67].
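One way to avoid such traps, sketched below as a minimal example assuming reasonably recent browser support for the native <dialog> element, is to rely on the platform’s built-in modal behaviour rather than hand-rolled focus management; when opened with showModal(), supporting browsers move focus into the dialog but keep it dismissible with the Escape key:

```html
<!-- A minimal sketch of a cookie notice built on the native <dialog>
     element. Opened with showModal(), supporting browsers move focus
     into the dialog but keep it dismissible with the Escape key,
     avoiding a WCAG 2.1.2 keyboard trap. -->
<dialog id="cookie-notice" aria-labelledby="cookie-heading">
  <h2 id="cookie-heading">Cookie notice</h2>
  <p>We use cookies to improve your experience.</p>
  <form method="dialog">
    <!-- method="dialog" closes the dialog; the chosen button's value
         becomes the dialog's returnValue. -->
    <button value="accept">Accept all cookies</button>
    <button value="reject">Reject all cookies</button>
  </form>
</dialog>
<script>
  // Open the notice on load so AT users reach it at the first opportunity.
  document.getElementById("cookie-notice").showModal();
</script>
```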
Visual presentation vs screen reader audio output: Only 14.3% of the cookie
notices contained buttons or links whose use could be determined solely by the
button or link when using NVDA, whereas 31.4% did when using JAWS.
This difference was due to JAWS reading out alternative text associated with
buttons on some websites. An example of this is where a button might visually
only say _Accept all_ whereas when read aloud using JAWS it says “Accept the
use of cookies and other data for the purposes described”. This change gives
the user significantly more context on what the button does for them and
allows them to skip the reading of the cookie notice. However, it could be
argued that this could be done without the additional alternative text and,
therefore, benefit users both with and without VI. For example, the accept
button on one website simply read _Accept all cookies_; its function was easy
to ascertain from this text alone, without the need for additional markup.
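Both options can be sketched as follows (illustrative markup, not the actual code of any tested site):

```html
<!-- Option 1: a terse visible label with a fuller accessible name via
     aria-label; a screen reader announces the aria-label text. -->
<button aria-label="Accept the use of cookies and other data for the purposes described">
  Accept all
</button>

<!-- Option 2: a self-explanatory visible label, which benefits users
     both with and without VI and needs no additional markup. -->
<button>Accept all cookies</button>
```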
### 7.3 Reading Aloud the Cookie Notice
When using a screen reader, the content of the web page is spoken out loud in
a linear order, which may differ from the visual order on the screen [56].
When using WebbIE to view the web pages non-graphically, the cookie notices
were often not at the start of the web page. To combat this, screen readers
can also navigate using headings to jump to different sections. However, the
lack of headings at the start of cookie notices makes it difficult to locate
them when using this method. Screen readers can also search for content within
a web page [49], but without a clear starting heading this becomes difficult.
There were multiple websites where the cookie notice was not read aloud
immediately, and the cookie notice also did not include a heading. In these
examples, it would be difficult to navigate to the cookie notice, without
either knowing what you are searching for or visually identifying it.
When using NVDA with Chrome, 82.9% of cookie notices were read aloud, with
57.1% immediately after the website title was read. There were also websites
that read the cookie notice quickly after opening but not immediately; for
example, elements such as navigation bars were often read aloud before cookie
notices. For instance, one website (ebay.co.uk) read the title of the page,
then the navigation bar, and then the cookie notice. These were normally
websites that did not display the notice at the top when using a browser
graphically. In another example, graphically the cookie notice was at the
bottom, but was read after the heading of the website, and before the main
body. A possible reason for this is the hierarchy of the underlying HTML and
CSS code.
When using JAWS with Google Chrome, 97.1% of cookie notices could be read
aloud, with 62.9% being read aloud immediately. The main issue when a cookie
notice was not read immediately was that the user had to go through the whole
page to read the cookie notice. As we showed in the results section, once
loaded, these websites start collecting data at a large scale, even before
user interaction with the cookie notice. When the cookie notice is the last
item read to a user with VI, the user can easily be distracted from engaging
with it, leading them to miss the cookie notice altogether.
The results of our user study also confirm that cookie notice accessibility
issues are indeed associated with negative feelings (RQ4-a). They also
highlight that there is a range of display issues with these cookie notices,
such as “they can’t be read”. This contributed to the gap we identified
between how these users actually handle cookie notices and how they would
like to handle them (Figure 3).
### 7.4 Website vs Cookie Notice Accessibility
100% of websites contained a title, while only 51.4% of cookie notices
contained a title explaining what the notice was. This result was the same for
both screen readers. This lack of titles makes it more difficult to use
headings to quickly navigate to the cookie notice. It is more of a problem
when the notice is not immediately read aloud and then the user has to
navigate to it. The lack of a title also means the user might miss the cookie
notice. None of the 7 websites which contained abbreviations explained them
with either screen reader. This lack of explanation affects the understanding
of all users and directly contradicts WCAG success criterion 3.1.4. Although
this is a higher-level success criterion, it is important in the context of cookie
notices. Adding some type of mechanism for understanding abbreviations when
they are used would help all users understand what they are agreeing to.
In response to RQ3-b, we summarise the impact of the issues we encountered on
users with VI. The fact that some cookie notices were missing when using the
text-only browser means that users would not be able to respond to them. This
also applies to the cookie notices that were not readable using screen
readers. Similarly, users may be unable to consent when cookie notices are not
read immediately, do not include headings, and are generally difficult to
navigate. Such a practice might require users to apply additional effort to
specifically navigate to the notice. The lack of headings, structural
elements, and explanatory buttons within the cookie notice means that it could
take users with VI a longer time to respond to a cookie notice than other
users. All these issues mean that these users are less protected against
online tracking, and cookies can be placed on their devices without the users
being able to know or give consent.
## 8 Recommendations
In this section, we provide a set of recommendations and best practices for
different stakeholders.
Website developers: There are a number of ways for websites to maximise
compatibility with the tools and software used by people with VI. When
including a cookie notice, it should be close to the top of the document’s
code. This will allow screen readers and other accessibility tools to quickly
output this to the user. Developers could then use CSS code to change the
visual location, meaning that a screen reader would always be able to read it
aloud quickly. For example, developers may want to visually move it to the
lower left corner (on desktop) or the lower part of the screen (on mobile) to
increase the number of consent decisions for users without VI [61]. In
addition, developers should always include a heading at the start of important
content, whether this is a cookie notice or other important information. This
allows for ATs to easily and quickly navigate to this information. It also
allows users to quickly understand the content of the section they are about
to interact with and therefore whether this information is useful to them.
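A minimal sketch of this recommendation, using illustrative class names, places the notice first in the DOM, gives it a heading, and repositions it purely with CSS:

```html
<head>
  <style>
    /* CSS alone moves the notice to the bottom of the viewport; the
       visual position is independent of the DOM (reading) order. */
    .cookie-notice { position: fixed; bottom: 0; left: 0; width: 100%; }
  </style>
</head>
<body>
  <!-- The notice is the first element in <body>, so screen readers and
       text-only browsers reach it immediately in reading order. -->
  <section class="cookie-notice" aria-labelledby="cn-heading">
    <h2 id="cn-heading">Cookie notice</h2>
    <p>We use cookies. Choose your preferences below.</p>
    <button>Accept all cookies</button>
    <button>Reject all cookies</button>
  </section>
  <!-- ... rest of the page ... -->
</body>
```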
To assess usability, developers should aim to use a multitude of tools. Tools
such as WAVE and Lighthouse are aimed at allowing developers to easily
evaluate a website. However, as we showed in response to RQ2-b, they do not
always highlight the problems users may face, and high scores do not
necessarily mean that a website is accessible. This is specifically the case
when it comes to cookie notices and potentially other PETs. Therefore, more
manual tests should be undertaken to find more nuanced issues with a website.
Such testing should be conducted in a comprehensive manner and with multiple
AT tools, since using a combination of such tools is common practice for users
with VI. The tools we suggest are WAVE for automated testing, followed by
manual testing with a screen reader such as NVDA, which is free and relatively
easy to use.
Designers of accessibility evaluation tools: This research shows that the
available accessibility tools and software do not automatically assess
websites for their privacy practices. The ability to test sub-sections of a
website for accessibility issues would make testing elements of a website,
such as a cookie notice, a simpler process, allowing testing to focus on such
elements. In addition, it would enable subsets of development teams to test
the accessibility of their work. In our testing with automated tools, it was
often not clear where the errors and alerts were without further manual
evaluation. However, this practice should not replace the overall
accessibility testing of the website; rather, it would allow more focus to be
given to some areas of the website.
Furthermore, the creation of specific accessibility tools and tests for cookie
notices and other PETs would greatly improve real-world standard practices.
Such tools could not only test the accessibility of cookie notices, privacy
policies, and other PETs, but could also evaluate legal compliance across
platforms, e.g., software, websites, and mobile apps.
Policymakers: To respond to RQ1-b, we have performed both accessibility and
GDPR compliance analysis. Overall, if all websites complied with WCAG, it
would benefit all users, especially users with disabilities. GDPR compliance
is a more complicated question in relation to users with VI. GDPR brings many
benefits for the privacy of users; however, in some cases, the implementation
of cookie notices has affected the overall accessibility of websites. For this
reason, we make the following recommendations to policymakers.
The inclusion of specific guidelines on the accessibility of privacy
mechanisms, aligned with those included in GDPR and the ePD, would generally
improve the landscape. For example, guidelines could cover specific matters
such as how soon after loading a cookie notice should be read aloud, what
should be included in the content of the notices, and how the options should
be presented to the users.
Standardisation bodies could create comprehensive specifications for website
developers, with dedicated privacy sections. For example, a W3C specification
could include all the information that developers need to comply with legal
frameworks, such as GDPR or the California Consumer Privacy Act (CCPA), as
well as guidelines, such as WCAG. Such specifications could also be offered by
Google and Apple for app developers in order to improve the privacy of users
with VI.
End users: Generally, we believe that the onus around this issue should not be
pushed onto end users, who are already a marginalised group. However, there
are still additional steps users with VI could take. End-users who are
concerned regarding cookie notices can manually search for them. All of the
browsers tested have a feature to search within websites. However, such a
practice might not be needed in the near future due to the ineffective nature
of cookie notices on websites. Several papers have reported that cookie
notices are not effective and that, even when the user opts out, websites
still track them. Some of these cookie notices are trackers themselves [33].
In response, Brave has recently announced that it would automatically block
cookie notices altogether [58]. This option could work to improve the privacy
of users, along with the privacy-preserving nature of the Brave browser. Since
the browser is based on Chromium, it would likely be just as accessible as
Google Chrome. However, this remains an open research problem to be tackled
in the future.
In response to RQ4-b, we concluded that our participants believed that our set
of recommendations can improve their online experience and privacy. Figure 5
displays the popularity of each item: accessibility-by-design in websites
is rated top, followed by accessibility testing by web designers, inclusion of
more headings, improvement of related laws/specifications, development of more
specific testing tools, end user engagement with cookie notices, accessibility
testing of sections of websites (including cookie notices), and designing AT-
friendly PETs.
## 9 Conclusion
This paper investigated the interaction between ATs and cookie notices via a
set of system studies of 46 top UK websites and a user study of 100 users with
VI via Prolific Academic. We find that 22 of these websites had at least one
issue with the accessibility of their cookie notice when manually tested using
a screen reader. We also observed websites which did not have issues with
their cookie notices when using AT but did include issues such as low contrast
when viewing them graphically. These practices often created accessibility
issues when trying to read and respond to cookie notices. The results of our
user study revealed that users with VI overall have a negative view of cookie
notices. We also find that all participants believe that at least one of our
recommendations would help improve their experience online.
In future work, we would like to conduct cross-platform studies looking at
mobile web browsers, mobile apps, and desktop web browsers and their
interaction with AT. We would also like to automate our methodology and run
large-scale system studies. Finally, we would like to focus on the creation and
adaptation of dedicated accessibility testing tools for privacy matters and
compliance with the law.
## Acknowledgements
This research project has been granted ethical approval by the University of
Surrey’s ethics committee.
## References
* [1] Adeyemi, I., Sanders, C., Ong, B. N., Howells, K., Quinlivan, L., Gorman, L., Giles, S., Amp, M., Monaghan, E., Naseem, S., Pearson, A., and Cheraghi-Sohi, S. Challenges and adaptations to public involvement with marginalised groups during the COVID-19 pandemic: Commentary with illustrative case studies in the context of patient safety research. Research Involvement and Engagement 8, 1 (Apr. 2022), 13.
* [2] Ahmed, T., Hoyle, R., Connelly, K., Crandall, D., and Kapadia, A. Privacy Concerns and Behaviors of People with Visual Impairments. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul Republic of Korea, Apr. 2015), ACM, pp. 3523–3532.
* [3] Akter, T., Dosono, B., Ahmed, T., Kapadia, A., and Semaan, B. ”i am uncomfortable sharing what i can’t see”: Privacy concerns of the visually impaired with camera based assistive applications. In 29th USENIX Security Symposium (USENIX Security 20) (online, Aug. 2020), USENIX Association, pp. 1929–1948.
* [4] Bermejo Fernandez, C., Chatzopoulos, D., Papadopoulos, D., and Hui, P. This website uses nudging: Mturk workers’ behaviour on cookie consent notices. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–22.
* [5] Brulé, E., Tomlinson, B. J., Metatla, O., Jouffrais, C., and Serrano, M. Review of quantitative empirical evaluations of technology for people with visual impairments. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2020), CHI ’20, Association for Computing Machinery, pp. 1–14.
* [6] Chandrashekar, S. Is hearing believing? Perception of online information credibility by screen reader users who are blind or visually impaired. PhD thesis, University of Toronto, Toronto, ON, 2010.
* [7] Coopamootoo, K. P., and Groß, T. Why privacy is all but forgotten. Proceedings on Privacy Enhancing Technologies 2017, 4 (2017), 97–118.
* [8] Coopamootoo, K. P., Mehrnezhad, M., and Toreini, E. ”i feel invaded, annoyed, anxious and i may protect myself”: Individuals’ feelings about online tracking and their protective behaviour across gender and country. In 31st USENIX Security Symposium (USENIX Security 22) (Boston, MA, Aug. 2022), USENIX Association, pp. 287–304.
* [9] Degeling, M., Utz, C., Lentzsch, C., Hosseini, H., Schaub, F., and Holz, T. We value your privacy… now take some cookies. Informatik Spektrum 42, 5 (2019), 345–346.
* [10] Chrome Developers. Lighthouse accessibility scoring. https://developer.chrome.com/docs/lighthouse/accessibility/scoring/, 2019.
* [11] Egelman, S., and Peer, E. The Myth of the Average User: Improving Privacy and Security Systems through Individualization. In Proceedings of the 2015 New Security Paradigms Workshop (Twente Netherlands, Sept. 2015), ACM, pp. 16–28.
* [12] Englehardt, S., and Narayanan, A. Online Tracking: A 1-million-site Measurement and Analysis. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (Vienna Austria, Oct. 2016), ACM, pp. 1388–1401.
* [13] European Data Protection Board. Guidelines on consent under regulation 2016/679. https://ec.europa.eu/newsroom/article29/items/623051. Adopted on 10 April 2018.
* [14] Frauenberger, C. Disability and Technology: A Critical Realist Perspective. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility - ASSETS ’15 (Lisbon, Portugal, 2015), ACM Press, pp. 89–96.
* [15] Garante per la protezione dei dati personali. Video Surveillance ’ Decision. https://garanteprivacy.it:443/web/guest/home/docweb/-/docweb-display/docweb/1734653, Apr. 2010.
* [16] Gerl, A., and Meier, B. The layered privacy language art. 12–14 gdpr extension–privacy enhancing user interfaces. Datenschutz und Datensicherheit-DuD 43, 12 (2019), 747–752.
* [17] Google Developers. Lighthouse Overview. https://developer.chrome.com/docs/lighthouse/overview/, May 2022.
* [18] GOV.UK. Results of the 2016 GOV.UK assistive technology survey - Accessibility in government. https://accessibility.blog.gov.uk/2016/11/01/results-of-the-2016-gov-uk-assistive-technology-survey/, Nov. 2016.
* [19] Griffin-Shirley, N., Banda, D. R., Ajuwon, P. M., Cheon, J., Lee, J., Park, H. R., and Lyngdoh, S. N. A survey on the use of mobile applications for people who are visually impaired. Journal of Visual Impairment & Blindness 111, 4 (2017), 307–323.
* [20] Hakobyan, L., Lumsden, J., O’Sullivan, D., and Bartlett, H. Mobile assistive technologies for the visually impaired. Survey of ophthalmology 58, 6 (2013), 513–528.
* [21] Hanson, V. L., Cavender, A., and Trewin, S. Writing about accessibility. Interactions 22, 6 (Oct. 2015), 62–65.
* [22] Hanson, V. L., and Richards, J. T. Progress on website accessibility? ACM Transactions on the Web (TWEB) 7, 1 (2013), 1–30.
* [23] Hayes, J., Kaushik, S., Price, C. E., and Wang, Y. Cooperative privacy and security: Learning from people with visual impairments and their allies. In Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019) (Santa Clara, CA, Aug. 2019), USENIX Association, pp. 1–20.
* [24] Hu, X., and Sastry, N. Characterising Third Party Cookie Usage in the EU after GDPR. In Proceedings of the 10th ACM Conference on Web Science (Boston Massachusetts USA, June 2019), ACM, pp. 137–141.
* [25] Inan, F. A., Namin, A. S., Pogrund, R. L., and Jones, K. S. Internet use and cybersecurity concerns of individuals with visual impairments. Journal of Educational Technology & Society 19, 1 (2016), 28–40.
* [26] Deque Systems, Inc. Axe-core. https://github.com/dequelabs/axe-core, Nov. 2022.
* [27] Information Commissioner’s Office. Guidance on the use of cookies and similar technologies. https://ico.org.uk/for-organisations/guide-to-pecr/guidance-on-the-use-of-cookies-and-similar-technologies/, Feb. 2022.
* [28] Kaczmirek, L., and Wolff, K. G. Survey Design for Visually Impaired and Blind People. In Universal Acess in Human Computer Interaction. Coping with Diversity, C. Stephanidis, Ed., vol. 4554. Springer Berlin Heidelberg, Berlin, Heidelberg, 2007, pp. 374–381.
* [29] King, A., Evans, G., and Blenkhorn, P. Blind people and the world wide web. https://www.webbie.org.uk/webbie.html, 2004.
* [30] Kretschmer, M., Pennekamp, J., and Wehrle, K. Cookie Banners and Privacy Policies: Measuring the Impact of the GDPR on the Web. ACM Transactions on the Web 15, 4 (July 2021), 1–42.
* [31] Ling, J., and Van Schaik, P. The effect of text and background colour on visual search of web pages. Displays 23, 5 (2002), 223–230.
* [32] Lund, B. The Brave browser: A monetary opportunity for libraries in the cryptoverse. Library Hi Tech News 38, 6 (Jan. 2021), 15–16.
* [33] Matte, C., Bielova, N., and Santos, C. Do cookie banners respect my choice?: Measuring legal compliance of banners from IAB Europe’s Transparency and Consent Framework. In 2020 IEEE Symposium on Security and Privacy (SP) (San Francisco, CA, USA, 2020), pp. 791–809.
* [34] Mehrnezhad, M. A Cross-Platform Evaluation of Privacy Notices and Tracking Practices. In 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) (Genoa, Italy, Sept. 2020), IEEE, pp. 97–106.
* [35] Mehrnezhad, M., and Almeida, T. Caring for Intimate Data in Fertility Technologies. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama Japan, May 2021), ACM, pp. 1–11.
* [36] Mehrnezhad, M., Coopamootoo, K., and Toreini, E. How Can and Would People Protect From Online Tracking? Proceedings on Privacy Enhancing Technologies 2022, 1 (Jan. 2022), 105–125.
* [37] Melicher, W., Sharif, M., Tan, J., Bauer, L., Christodorescu, M., and Leon, P. G. Preferences for Web Tracking. Proceedings on Privacy Enhancing Technologies 2016, 2 (2016), 1–20.
* [38] Microsoft. Lifecycle FAQ - Internet Explorer and Microsoft Edge. https://docs.microsoft.com/en-us/lifecycle/faq/internet-explorer-microsoft-edge, n.d.
* [39] Napoli, D., Baig, K., Maqsood, S., and Chiasson, S. “I’m literally just hoping this will work:” Obstacles blocking the online security and privacy of users with visual disabilities. In Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021) (2021), pp. 263–280.
* [40] Nouwens, M., Liccardi, I., Veale, M., Karger, D., and Kagal, L. Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu HI USA, Apr. 2020), ACM, pp. 1–13.
* [41] Office for National Statistics. Internet users, UK. https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/bulletins/internetusers/2020, 2020.
* [42] Parliament, T. E., and the Council of the European Union. Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications), July 2002.
* [43] Pernice, K., Nielsen, J., Farrell, S., Mizobuchi, S., Ishida, N., Stover, U. A., Yohay, M., Franko, E., and Richardson, A. Usability guidelines for accessible web design. Evidence–Based User Experience Research, Training, Consulting (2001).
* [44] Pierce, J., Fox, S., Merrill, N., and Wong, R. Differential vulnerabilities and a diversity of tactics: What toolkits teach us about cybersecurity. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 1–24.
* [45] PowerMapper Software. Screen reader reliability. https://www.powermapper.com/tests/screen-readers/, Sept. 2022.
* [46] Pugliese, G., Riess, C., Gassmann, F., and Benenson, Z. Long-Term Observation on Browser Fingerprinting: Users’ Trackability and Perspective. Proceedings on Privacy Enhancing Technologies 2020, 2 (Apr. 2020), 558–577.
* [47] Rao, A., Schaub, F., and Sadeh, N. What do they know about me? Contents and Concerns of Online Behavioral Profiles, June 2015.
* [48] General Data Protection Regulation (GDPR) – Official Legal Text. https://gdpr-info.eu/, 2016.
* [49] Rotard, M., Taras, C., and Ertl, T. Tactile web browsing for blind people. Multimedia Tools and Applications 37, 1 (Mar. 2008), 53–69.
* [50] Rubin, G. S., and Legge, G. E. Psychophysics of reading. VI—The role of contrast in low vision. Vision Research 29, 1 (Jan. 1989), 79–91.
* [51] Schmeelk, S., and Petrie, H. Digital authentication for visually disabled people: Initial results of an online survey. In Computers Helping People with Special Needs: 18th International Conference, ICCHP-AAATE 2022, Lecco, Italy, July 11–15, 2022, Proceedings, Part II (2022), Springer, pp. 41–50.
* [52] Schnell, K., and Roy, K. Website privacy notification for the visually impaired. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI) (Orlando, FL, USA, 2021), pp. 1–6.
* [53] Seničar, V., Jerman-Blažič, B., and Klobučar, T. Privacy-Enhancing Technologies—approaches and development. Computer Standards & Interfaces 25, 2 (May 2003), 147–158.
* [54] Shirazi, F., and Volkamer, M. What deters jane from preventing identification and tracking on the web? In Proceedings of the 13th Workshop on Privacy in the Electronic Society (New York, NY, USA, 2014), WPES ’14, Association for Computing Machinery, pp. 107–116.
* [55] Sørensen, J., and Kosta, S. Before and After GDPR: The Changes in Third Party Presence at Public and Private European Websites. In The World Wide Web Conference (New York, NY, USA, May 2019), WWW ’19, Association for Computing Machinery, pp. 1590–1600.
* [56] Southwell, K. L., and Slater, J. An evaluation of finding aid accessibility for screen readers. Information Technology and Libraries 32, 3 (2013), 34–46.
* [57] Statcounter. Browser Market Share Worldwide. https://gs.statcounter.com/browser-market-share, n.d.
* [58] The Brave Privacy Team. Blocking annoying and privacy-harming cookie consent banners. https://brave.com/privacy-updates/21-blocking-cookie-notices/, Sept. 2022.
* [59] Trevisan, M., Traverso, S., Bassi, E., and Mellia, M. 4 Years of EU Cookie Law: Results and Lessons Learned. Proceedings on Privacy Enhancing Technologies 2019, 2 (Apr. 2019), 126–145.
* [60] Ur, B., Leon, P. G., Cranor, L. F., Shay, R., and Wang, Y. Smart, useful, scary, creepy: Perceptions of online behavioral advertising. In Proceedings of the Eighth Symposium on Usable Privacy and Security - SOUPS ’12 (Washington, D.C., 2012), ACM Press, p. 1.
* [61] Utz, C., Degeling, M., Fahl, S., Schaub, F., and Holz, T. (Un)informed Consent: Studying GDPR Consent Notices in the Field. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (London United Kingdom, Nov. 2019), ACM, pp. 973–990.
* [62] W3. How to Meet WCAG (Quickref Reference). https://www.w3.org/WAI/WCAG21/quickref/, Oct. 2019.
* [63] W3. WCAG 2 Overview. https://www.w3.org/WAI/standards-guidelines/wcag/, Mar. 2022.
* [64] W3. Understanding Conformance. https://www.w3.org/WAI/WCAG21/Understanding/conformance#levels, n.d.
* [65] Wang, Y., and Price, C. E. Accessible privacy. In Modern Socio-Technical Perspectives on Privacy. Springer, Cham, 2022, pp. 293–313.
* [66] Washington, B. S., Feng, J. H., Ahram, T., and Falcão, C. Proper Implementation of Website Features Affecting the Use of Screen Readers, vol. 972 of Advances in Usability and User Experience. Springer International Publishing, 2020.
* [67] WebAIM. Screen Readers and CSS: Are We Going Out of Style (and into Content)? https://webaim.org/blog/screen-readers-and-css/, Aug. 2017.
* [68] WebAIM. WebAIM: Screen Reader User Survey #9 Results. https://webaim.org/projects/screenreadersurvey9/, June 2021.
* [69] WebAIM. WAVE Chrome, Firefox, and Edge extensions. https://wave.webaim.org/extension/, n.d.
* [70] whotracksme. GDPR - what happened? https://whotracks.me/blog/gdpr-what-happened.html, 2018.
* [71] Yu, W., Kuber, R., Murphy, E., Strain, P., and McAllister, G. A novel multimodal interface for improving visually impaired people’s web accessibility. Virtual Reality 9, 2 (2006), 133–148.
## Appendix A Website Analysis Template
For each website in our list, we followed these steps for our analysis:
* •
Part I: Cookie Notice (baseline)
* -
Step 1: Open Google Chrome incognito window and visit the homepage of the
website.
* -
Step 2: Observe if there is a notice (cookie consent, privacy settings,
banner, etc.).
* -
No: Write it in the file.
* -
Yes: Observe the location and user control options e.g. OK, Accept, Yes,
Reject, No, More Options, Settings, Links to privacy-related pages, etc. Write
your observations in the file.
* -
Step 3: Close the Google Chrome incognito window.
* -
Step 4: Open Brave private window and visit the homepage of the website.
* -
Step 5: Repeat Step 2 (cookie notice).
* -
Step 6: Record the number of items blocked by the Brave Shields in the file.
* -
Step 7: Close the Brave private window.
* •
Part II: Automated Accessibility Testing Tools
* -
Step 8: Open a new Google Chrome incognito window and visit the website’s
homepage.
* -
Step 9: Click on WAVE extension to run the test.
* -
Step 10: Record the number of each of the categories in the file.
* -
Step 11: Close the Google Chrome incognito window.
* -
Step 12: Open a new Google Chrome incognito window and visit the website’s
homepage.
* -
Step 13: Open developer tools and navigate to the Lighthouse tab.
* -
Step 14: Select Navigation Mode, A desktop device and the Accessibility
Categories.
* -
Step 15: Analyse the page.
* -
Step 16: Record the overall accessibility score and the number of each item
shown.
* -
Step 17: Close the Google Chrome incognito window.
* •
Part III: Manual Testing Via Text-only Browser
* -
Step 18: Open WebbIE Browser and visit the homepage of the website.
* -
Step 19: Record the number of headings for the website overall.
* -
Step 20: Record the presence of a notice, and if so the presence of headings.
* -
Step 21: Close WebbIE.
* •
Part IV: Manual Testing via Screen Readers
* -
Step 22 : Open NVDA screen reader.
* -
Step 23: Open a new Google Chrome incognito window and visit the website’s
homepage.
* -
Step 24: Allow screen reader to read website.
* -
Step 25: Interact with the website using keyboard.
* -
Step 26: Record pass/fail for categories in 3.2.3.
* -
Step 27: Close the Google Chrome incognito window and screen reader.
* -
Step 28: Open JAWS screen reader.
* -
Step 29: Repeat Steps 23–27.
Note that we performed two rounds of testing (with identical results). We
uninstalled and reinstalled WebbIE for each round since it does not support
cookie management.
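For concreteness, the sketch below shows one way the per-website record implied by this template could be captured programmatically. It is our own illustration: the field names and output file are hypothetical and are not taken from the study's materials.

```python
import csv
from dataclasses import dataclass, field, asdict

# Hypothetical record for one website, mirroring Parts I-IV of the template.
# Field names are illustrative; the study's actual data file is not shown here.
@dataclass
class WebsiteRecord:
    url: str
    chrome_notice_present: bool = False     # Part I, Step 2
    chrome_notice_controls: str = ""        # e.g. "Accept, Reject, Settings"
    brave_notice_present: bool = False      # Part I, Step 5
    brave_items_blocked: int = 0            # Part I, Step 6
    wave_errors: int = 0                    # Part II, Step 10
    lighthouse_score: int = 0               # Part II, Step 16
    webbie_heading_count: int = 0           # Part III, Step 19
    nvda_results: dict = field(default_factory=dict)  # Part IV, Step 26
    jaws_results: dict = field(default_factory=dict)  # Part IV, Step 29

# Append one record per analysed website, then export to CSV.
records = [WebsiteRecord(url="example.com", chrome_notice_present=True)]
with open("analysis.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```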
## Appendix B User Study Template
### B.1 Screening validation
* •
Do you live in the United Kingdom? (One answer is possible)
* $\square$
Yes
* $\square$
No
* •
Do you use assistive technology? (One answer is possible)
* $\square$
Yes
* $\square$
No
### B.2 Section 1: Internet and assistive technology
This section is about your internet usage and any assistive technology you may
utilise while browsing the web.
* •
1.1: How do you describe your visual impairment? (Text input is possible)
* •
1.2: Which devices do you use to access the internet? Please specify if other
(Several answers are possible)
* $\square$
Personal Computer (Desktop or Laptop)
* $\square$
Mobile Phone
* $\square$
Tablet Computer
* $\square$
Smart home devices
* $\square$
Wearable devices
* $\square$
Internet-enabled TVs
* $\square$
Gaming consoles
* $\square$
Public computers
* $\square$
Other:
* •
1.3: Which device are you using to complete this questionnaire? Please specify
if other (One answer possible)
* $\square$
Personal Computer (Desktop or Laptop)
* $\square$
Mobile Phone
* $\square$
Tablet Computer
* $\square$
Smart home devices
* $\square$
Wearable devices
* $\square$
Internet-enabled TVs
* $\square$
Gaming consoles
* $\square$
Public computers
* $\square$
Other:
* •
1.4: In an average week, how many hours do you use the internet? (One answer
is possible)
* $\square$
Less than 1 hour
* $\square$
1-5 hours
* $\square$
6-10 hours
* $\square$
11-15 hours
* $\square$
16-20 hours
* $\square$
21-25 hours
* $\square$
26-30 hours
* $\square$
More than 30 hours
* •
1.5: What assistive technology do you use when browsing the web? Please
specify if other. (Several answers are possible)
* $\square$
Screen reader (Jaws, NVDA, Voice over or other)
* $\square$
Braille Display
* $\square$
Text Only Browser (WebbIE or other)
* $\square$
Magnification software
* $\square$
Assistive browser extension
* $\square$
Alternative input devices
* $\square$
None
* •
1.6: Which of the following screen readers do you use? Please specify if
other. (Several answers are possible)
* $\square$
JAWS
* $\square$
NVDA
* $\square$
VoiceOver
* $\square$
Natural Reader
* $\square$
Read&Write
* $\square$
Narrator
* $\square$
Talkback
* $\square$
ChromeVox
* $\square$
Orca
* $\square$
I don’t use a screen reader
* $\square$
Other:
* •
1.7: How would you describe your level of expertise with a screen reader?
(Text input is possible)
* •
1.8: Do you use plug-ins with a screen reader? (One answer is possible)
A plugin or add-on adds a specific feature or functionality to a screen
reader, allowing users to customise their software and add the features they
need.
* $\square$
Yes
* $\square$
No
* •
1.9: Which plugins do you use? (Text input is possible)
### B.3 Section 2: Privacy enhancing technology usage
This section is about the measures you take to protect your privacy and
security while browsing the internet and Privacy Enhancing Technologies
(PETs). PETs are tools that can help protect your privacy online by limiting
the collection, use, and dissemination of your personal information.
* •
2.1: Which of the following privacy enhancing technologies do you use? Please
specify if other. (Several answers are possible)
* $\square$
Browser "Do Not Track" setting
* $\square$
Virtual Private Network (VPN)
* $\square$
Private browsing
* $\square$
Password manager
* $\square$
Manual cookie opt-out
* $\square$
Privacy-focused web browsers
* $\square$
Encrypted messaging apps
* $\square$
Ad blockers
* $\square$
File encryption tools
* $\square$
None
* $\square$
Other:
* •
2.2: How did you learn about privacy enhancing technology? Please specify if
other. (Several answers are possible)
* $\square$
Friend / social contact recommendation
* $\square$
Work / school recommendation
* $\square$
Privacy/cookie policy of a website
* $\square$
News
* $\square$
I don’t know
* $\square$
I don’t use privacy enhancing technologies
* $\square$
Other:
### B.4 Section 3: Cookie notice
Cookie notices appear when a user lands on a website and inform them that the
website is using cookies (data files) and other trackers that process
personal data, and that the user must make a choice about whether they want
their personal data processed.
* •
3.1: In your own words what is a cookie notice and what are they supposed to
do? (Text input is possible)
* •
3.2: How do you feel about cookie notices? (Text input is possible)
* •
3.3: How do you interact with cookie notices? (Text input is possible)
* •
3.4: Have you encountered any issues with cookie notices? (One answer is
possible)
* $\square$
Yes
* $\square$
No
* •
3.5: Please describe in your own words what type of issues you have
experienced with cookie notices. (Text input is possible)
* •
3.6: Which of the following issues have you experienced? Please specify if
other (Several answers are possible)
* $\square$
Cookie notice not being present via my assistive technology
* $\square$
Unable to find cookie notice
* $\square$
Unable to answer cookie notice
* $\square$
Low contrast cookie notice
* $\square$
Unclear response options in cookie notice
* $\square$
Lack of headings for cookie notice
* $\square$
Unable to leave cookie notice
* $\square$
Unable to enter cookie notice
* $\square$
Other:
* •
3.7: How would you wish to handle cookie notices? Please specify if other.
(One answer is possible)
* $\square$
Agree
* $\square$
Disagree
* $\square$
Editing the settings
* $\square$
Ignore cookie notice
* $\square$
Other:
* •
3.8: How do you actually respond to cookie notices? Please specify if other.
(One answer is possible)
* $\square$
Agree
* $\square$
Disagree
* $\square$
Editing the settings
* $\square$
Ignore cookie notice
* $\square$
Unable to respond to cookie notice
* $\square$
Other:
* •
3.9: If you are unable to respond to cookie notices, what do you think is the
reason? (Text input is possible)
### B.5 Section 4: Suggestions
* •
4.1: Who do you think should be responsible for ensuring the secure
accessibility of the Internet? Please specify if other. (Several answers are
possible)
* $\square$
Website Developers
* $\square$
Designers of Accessibility Evaluation Tools
* $\square$
Policymakers
* $\square$
Users of the internet
* $\square$
Designers of Assistive Technologies
* $\square$
Other:
* •
4.2: Which of these recommendations do you think would help improve your
experience online? Please specify if other (Several answers are possible)
* $\square$
Website designers should design websites with accessibility in mind
* $\square$
Website designers should include more headings for useful information
* $\square$
Website designers should complete more accessibility testing
* $\square$
Evaluation tools should allow for testing of sections of a web page
* $\square$
Specific testing tools for parts of a webpage (i.e. cookie notices or other
elements)
* $\square$
The inclusion of laws specifying privacy and accessibility
* $\square$
Specifications of achieving privacy and accessibility
* $\square$
End users can search for cookie notices
* $\square$
End users can use specific privacy-protecting tools such as the Brave internet
browser
* $\square$
Other:
### B.6 Section 5: Demographic and background questions
* •
5.1: What is your age? (One answer is possible)
* $\square$
18 to 24
* $\square$
25 to 34
* $\square$
35 to 44
* $\square$
45 to 54
* $\square$
55 to 64
* $\square$
65 or over
* $\square$
Prefer not to say
* •
5.2: What is your gender? (One answer is possible)
* $\square$
Female
* $\square$
Male
* $\square$
Non-binary
* $\square$
Prefer not to say
* $\square$
Other
* •
5.3: What is your highest level of education? (One answer is possible)
* $\square$
Secondary education
* $\square$
Post-secondary education
* $\square$
Undergraduate education
* $\square$
Graduate education
* $\square$
Prefer not to say
* •
5.4: What is your occupation? Please specify if other. (One answer is
possible)
* $\square$
Healthcare and social assistance
* $\square$
Education and training
* $\square$
Sales and retail
* $\square$
Administrative and support
* $\square$
Manufacturing and production
* $\square$
Information technology
* $\square$
Business and finance
* $\square$
Transportation and logistics
* $\square$
Construction and trades
* $\square$
Arts, entertainment, and media
* $\square$
Prefer not to say
* $\square$
Other
* •
5.5: What services do you use online? Please specify if other. (Multiple
answers are possible)
* $\square$
Email
* $\square$
Social media
* $\square$
Online shopping and e-commerce
* $\square$
Video Streaming
* $\square$
Music/audio streaming
* $\square$
Online payment/banking
* $\square$
File sharing and cloud storage
* $\square$
Online travel booking
* $\square$
Online education and e-learning
* $\square$
Online communication and collaboration services
* $\square$
Other
## Appendix C Detailed results of automated accessibility tools
Table 8: Detailed results of WAVE 5.0 and Google Lighthouse
Website | Privacy Policy | Errors | Contrast Errors | Alerts | Features | Structural Elements | ARIA | Lighthouse Score
---|---|---|---|---|---|---|---|---
google.com | ✓ | 1 | 10 | 4 | 5 | 7 | 350 | 97
youtube.com | ✓ | 26 | 1 | 70 | 92 | 66 | 676 | 89
yahoo.com | ✓ | 2 | 8 | 6 | 2 | 4 | 4 | 86
facebook.com | ✓ | 4 | 46 | 12 | 1 | 9 | 9 | 93
bbc.co.uk | ✓ | 0 | 0 | 134 | 23 | 119 | 371 | 100
amazon.co.uk | ✓ | 6 | 2 | 139 | 194 | 55 | 381 | 95
reddit.com | ✓ | 9 | 139 | 657 | 104 | 36 | 433 | 77
wikipedia.org | ✗ | 3 | 0 | 97 | 123 | 70 | 24 | 100
live.com | ✗ | 0 | 5 | 8 | 11 | 38 | 77 | 98
instagram.com | ✓ | 0 | 25 | 3 | 10 | 10 | 37 | 95
twitter.com | ✓ | 1 | 19 | 3 | 1 | 4 | 55 | 88
ebay.co.uk | ✗ | 0 | 0 | 0 | 0 | 0 | 0 | 93
dailymail.co.uk | ✓ | 75 | 31 | 890 | 628 | 497 | 61 | 74
bing.com | ✓ | 12 | 23 | 38 | 26 | 40 | 244 | 94
gov.uk | ✓ | 0 | 1 | 5 | 16 | 68 | 38 | 100
netflix.com | ✓ | 2 | 22 | 9 | 22 | 36 | 68 | 88
theguardian.com | ✗ | 17 | 59 | 131 | 102 | 290 | 624 | 79
pornhub.com | ✗ | 65 | 1 | 197 | 110 | 67 | 119 | 96
office.com | ✓ | 4 | 0 | 3 | 25 | 59 | 199 | 94
fandom.com | ✓ | 12 | 33 | 14 | 28 | 28 | 2 | 86
xvideos.com | ✓ | 104 | 64 | 158 | 2 | 22 | 1 | 88
paypal.com | ✓ | 12 | 33 | 14 | 28 | 28 | 2 | 100
microsoft.com | ✓ | 2 | 0 | 3 | 24 | 47 | 183 | 100
linkedin.com | ✓ | 27 | 0 | 0 | 9 | 32 | 174 | 100
xhamster.com | ✗ | 8 | 118 | 57 | 58 | 25 | 2 | 81
imdb.com | ✗ | 8 | 0 | 42 | 98 | 37 | 1552 | 88
duckduckgo.com | ✗ | 23 | 5 | 13 | 6 | 38 | 35 | 96
amazon.com | ✗ | 4 | 2 | 178 | 216 | 45 | 317 | 98
zoom.us | ✓ | 7 | 3 | 58 | 64 | 93 | 889 | 80
twitch.tv | ✓ | 71 | 0 | 150 | 130 | 48 | 300 | 86
amazonaws.com | ✓ | 4 | 14 | 140 | 228 | 377 | 87 | 86
tiktok.com | ✓ | 52 | 14 | 54 | 9 | 59 | 0 | 63
whatsapp.com | ✓ | 5 | 14 | 10 | 4 | 142 | 5 | 85
doubleclick.net | ✓ | 2 | 4 | 51 | 11 | 78 | 141 | 100
spankbang.com | ✗ | 16 | 160 | 500 | 190 | 36 | 0 | 72
sky.com | ✓ | 3 | 16 | 28 | 22 | 41 | 105 | 90
apple.com | ✗ | 11 | 0 | 15 | 47 | 62 | 217 | 92
rightmove.co.uk | ✓ | 3 | 7 | 68 | 12 | 48 | 55 | 87
booking.com | ✓ | 30 | 3 | 37 | 40 | 85 | 574 | 92
etsy.com | ✓ | 7 | 1 | 31 | 42 | 175 | 1682 | 73
indeed.com | ✓ | 2 | 0 | 8 | 20 | 25 | 134 | 90
msn.com | ✓ | 26 | 0 | 213 | 359 | 311 | 44 | 79
github.com | ✗ | 2 | 35 | 18 | 114 | 74 | 179 | 86
adobe.com | ✓ | 2 | 1 | 12 | 111 | 120 | 139 | 96
chaturbate.com | ✗ | 19 | 113 | 494 | 102 | 210 | 5 | 84
xnxx.com | ✓ | 165 | 0 | 808 | 3 | 20 | 0 | 97
## Appendix D Detailed results of screen reader tests
Table 9: NVDA and JAWS results
| NVDA | JAWS
---|---|---
Website | Readable | Immediately | Keyboard Navigable | Link or button Purpose | Abbreviations are explained | Page Titled | Cookie Notice Titled | Headings useful for navigation | No Timing | Readable | Immediately | Keyboard Navigable | Link or button Purpose | Abbreviations are explained | Page Titled | Cookie Notice Titled | Headings useful for navigation | No Timing
google.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
youtube.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✗ | ✗ | ✓
yahoo.com | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓
facebook.com | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓
bbc.co.uk | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
amazon.co.uk | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓
reddit.com | ✗ | ✗ | ✗ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | - | ✓ | ✗ | ✗ | ✓
wikipedia.org | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
live.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
instagram.com | ✗ | ✗ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | - | ✓ | ✓ | ✓ | ✓
twitter.com | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓
ebay.co.uk | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✓
dailymail.co.uk | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓
bing.com | ✓ | ✗ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
gov.uk | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓
netflix.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
theguardian.com | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓
pornhub.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
office.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
fandom.com | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓
xvideos.com | ✗ | ✗ | ✗ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
paypal.com | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓
microsoft.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
linkedin.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
xhamster.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
imdb.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
duckduckgo.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
amazon.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
zoom.us | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✗ | ✗ | ✓
twitch.tv | ✓ | ✗ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓
amazonaws.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓
tiktok.com | ✓ | ✗ | ✗ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | - | ✓ | ✓ | ✗ | ✓
whatsapp.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
doubleclick.net | ✓ | ✗ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | - | ✓ | ✗ | ✗ | ✓
spankbang.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
sky.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
apple.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
rightmove.co.uk | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
booking.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
etsy.com | ✓ | ✓ | ✓ | - | - | ✓ | - | - | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | - | ✗ | ✓
indeed.com | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | - | ✓ | ✗ | ✗ | ✓
msn.com | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✓ | ✗ | ✓
github.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
adobe.com | ✗ | ✗ | ✗ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
chaturbate.com | - | - | - | - | - | ✓ | - | - | - | - | - | - | - | - | ✓ | - | - | -
xnxx.com | ✗ | ✗ | ✗ | ✗ | - | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | - | ✓ | ✗ | ✗ | ✓
### D.1 Answers to demographic questions
Table 10: Answers to demographic questions
1.2 Devices used | N | 1.3 To Answer | N | 1.4 Hours a Week | N | 1.5 AT | N
---|---|---|---|---|---|---|---
Personal Computer | 94 | Personal Computer | 71 | More than 30 hours | 28 | Magnification software | 50
Mobile Phone | 94 | Mobile Phone | 22 | 26-30 hours | 23 | Screen reader | 42
Tablet Computer | 47 | Tablet Computer | 5 | 6-10 hours | 14 | Assistive browser extension | 22
Gaming consoles | 36 | Other | 2 | 16-20 hours | 12 | Other | 14
Internet-enabled TV | 34 | | | 21-25 hours | 10 | None | 9
Smart home devices | 33 | | | 11-15 hours | 8 | Alternative input devices | 7
Wearable devices | 16 | | | 1-5 hours | 5 | Braille Display | 2
Public computers | 9 | | | | | Text Only Browser | 1
1.6 Screen reader | N | 5.1 Age | N | 5.2 Gender | N | 5.3 Education | N
No screen reader | 40 | 25 to 34 | 33 | Male | 59 | Undergraduate | 37
VoiceOver | 18 | 45 to 54 | 20 | Female | 38 | Graduate | 26
Narrator | 12 | 18 to 24 | 18 | Non-binary | 2 | Post-secondary | 19
ChromeVox | 11 | 35 to 44 | 14 | Prefer not to say | 1 | Secondary | 15
JAWS | 10 | 55 to 64 | 12 | | | Prefer not to say | 3
Natural Reader | 10 | 65 or over | 2 | | | |
Read&Write | 9 | Prefer not to say | 1 | | | |
Talkback | 6 | | | | | |
NVDA | 5 | | | | | |
Other | 5 | | | | | |
Orca | 1 | | | | | |
5.5 Online Services | N | 1.7 Experience | N | | | |
Email | 98 | Basic | 33 | | | |
Shopping and e-commerce | 93 | Moderate | 29 | | | |
Social media | 90 | None | 20 | | | |
Payment/banking | 89 | Expert | 9 | | | |
Video Streaming | 80 | Previous experience | 5 | | | |
Music/audio streaming | 77 | Other | 4 | | | |
Travel booking | 68 | | | | | |
File sharing and cloud storage | 64 | | | | | |
Education and e-learning | 62 | | | | | |
Communication and collaboration | 35 | | | | | |
## Appendix E Examples of websites, cookie notices and their outcome via screen readers
---
(Enter webpage URL and press Enter Key)
NVDA: | dailymail.co.uk selected
NVDA: | UK home daily mail online
NVDA: | Link, graphic online news, sport, celebrity, science and health stories
NVDA: | List with 14 items
NVDA: | Link home
NVDA: | Link, news
NVDA: | Link u.s
NVDA: | [Advertisement read], [Same advertisement read]
NVDA: | [Advertisement read], [Same advertisement read]
NVDA: | [Advertisement read], [Same advertisement read]
NVDA: | Discover the best black Friday deals, discover the best black Friday deals
(Down arrow key pressed)
NVDA: | Link home
(Proceeds to read the navigation bar)
(Starts to read news items on homepage)
Figure 6: Top: Graphical representation of dailymail.co.uk with the
highlighted cookie notice at the bottom of the page. Bottom: Example
transcript via a screen reader.
Figure 7: Visual representation of the reddit.com cookie notice. Neither
screen reader could audibly output the cookie notice; they only read the body
of the web page.
---
(Enter webpage URL and press Enter Key)
JAWS: | A simple and safer way to pay and get paid, vertical bar, PayPal UK
JAWS: | Page has three regions, 8 headings and 33 links
JAWS: | A simple and safer way to pay and get paid, vertical bar, PayPal UK
JAWS: | Link PayPal
JAWS: | Navigation region, list of four items
(Reads aloud navigation bar)
(Navigate through body of site)
JAWS: | If you accept cookies, we’ll use them to improve and customise your experience and enable our partners to show you personalised PayPal ads when you visit other sites.
JAWS: | Link, manage cookies and learn more
JAWS: | Accept button
JAWS: | Decline button
Figure 8: Top: Graphical representation of paypal.com with highlighted cookie
notice at the bottom of the page. Bottom: Example transcript via a screen
reader.
---
(Enter webpage URL and press Enter Key)
NVDA: | BBC.co.uk selected
NVDA: | BBC Home
NVDA: | Banner landmark, Let us know you agree to cookies, Heading level two
NVDA: | Clickable banner landmark, We use, link, cookies to give you the best online experience. Please let us know if you agree to all of these cookies
NVDA: | Button, Yes, I agree
NVDA: | Link, No, take me to settings
NVDA: | BBC Navigation landmark, BBC Homepage
(Proceeds to read the rest of the home page)
Figure 9: Top: Graphical representation of bbc.co.uk with highlighted cookie
notice at the top of the page. Bottom: Example transcript via a screen reader.
# A Geometric Flow towards Hamiltonian Stationary Submanifolds
Jingyi Chen and Micah Warren
Department of Mathematics, The University of British Columbia, Vancouver, BC V6T 1Z2 <EMAIL_ADDRESS>
Department of Mathematics, University of Oregon, Eugene, OR 97403 <EMAIL_ADDRESS>
###### Abstract.
In this paper, we introduce a geometric flow for Lagrangian submanifolds in a
Kähler manifold that stays in its initial Hamiltonian isotopy class and is a
gradient flow for volume. The stationary solutions are the Hamiltonian
stationary Lagrangian submanifolds. The flow is not strictly parabolic but it
corresponds to a fourth order strictly parabolic scalar equation in the
cotangent bundle of the submanifold via Weinstein’s Lagrangian neighborhood
theorem. For any compact initial Lagrangian immersion, we establish short-time
existence, uniqueness, and higher order estimates when the second fundamental
forms are uniformly bounded up to time $T$.
Chen is partially supported by NSERC Discovery Grant GR010074
## 1. Introduction
The objective of this paper is to introduce a fourth order flow of Lagrangian
submanifolds in a Kähler manifold as a gradient flow of volume within a
Hamiltonian isotopy class and establish basic properties such as short-time
existence, uniqueness, and extendibility with bounded second fundamental form.
Our setting includes a Kähler manifold $(M^{2n},h,\omega,J)$ with symplectic
form $\omega$ and compatible Kähler metric $h$ satisfying
$h(JV,W)=\omega(V,W)$ where $J$ is a complex structure, and a given compact
Lagrangian immersion $\iota:L^{n}\rightarrow M^{2n}.$ We propose to find
$F:L\times[0,T)\rightarrow M^{2n}$ satisfying
(1.1) $\displaystyle\frac{dF}{dt}=J\nabla\operatorname{div}\left(JH\right)$
(1.2) $\displaystyle F\left(\cdot,0\right)=\iota\left(\cdot\right)$
where $H$ is the mean curvature vector of $L_{t}=F(\cdot,t)$ in $M$ and
$\nabla,\operatorname{div}$ are along $L_{t}$ in the induced metric from $h$.
The stationary solutions of the above evolution equation are the so-called
Hamiltonian stationary submanifolds, which are fourth order generalizations of
special Lagrangians, and exist in more abundance. Within a Hamiltonian isotopy
class it is possible for a compact Lagrangian submanifold of $\mathbb{C}^{n}$
to minimize volume (for example, a Clifford torus). Meanwhile, compact special
Lagrangian submanifolds of $\mathbb{C}^{n}$ do not exist. Recall that two
manifolds are Hamiltonian isotopic if they can be joined by the flow generated
by a vector field of the form $J\nabla f$ for a function $f$.
We will show that the initial value problem (1.1) - (1.2):
1. stays within the Hamiltonian isotopy class - that is, the flow is generated
by a vector field of the form $J\nabla f$,
2. is the gradient flow of volume with respect to an appropriate metric, so
decreases volume along the flow,
3. enjoys short time existence given smooth initial conditions, and
4. continues to exist as long as a second fundamental form bound is satisfied.
The flow (1.1) is degenerate parabolic but not strictly parabolic. Our proof
of existence and uniqueness involves constructing global solutions (locally in
time) using Weinstein’s Lagrangian neighbourhood theorem, which results in a
nice fourth order parabolic scalar equation. This equation has a good
structure, satisfying conditions required in [MM12] so we can conclude
uniqueness and existence within a given Lagrangian neighbourhood. We then
argue that the flows described by solutions to the scalar fourth order
equations are in correspondence with the normal flows of the form (1.1)
leading to uniqueness and extendability provided the flow remains smooth.
After proving existence and uniqueness using Weinstein neighborhoods, we turn
to local Darboux charts to prove higher regularity from second fundamental
form bounds. It is not immediately clear how to extract regularity from
arbitrary Weinstein neighborhoods as the submanifolds move, but using a
description of Darboux charts in [JLS11] we can fix a finite set of charts and
perform the regularity theory in these charts. Unlike mean curvature flow,
there is no maximum principle for fourth order equations. We deliver a
regularity theory using Sobolev spaces.
The newly introduced flow already exhibits nice properties in special cases:
(1) For Calabi-Yau manifolds, Harvey and Lawson [HL82] showed that, for a
Lagrangian submanifold, the Lagrangian angle $\theta$ generates the mean
curvature via
$H=J\nabla\theta.$
In this case (1.1) becomes
$\frac{dF}{dt}=-J\nabla\Delta\theta,$
while $\theta$ satisfies the pleasant fourth order parabolic equation
$\frac{d\theta}{dt}=-\Delta_{g}^{2}\theta$
where $g$ is the induced metric on $L_{t}$.
(2) The case $n=1$ involves curves in $\mathbb{C}$; curves in a common
Hamiltonian isotopy class bound the same signed area. Higher order curvature flows have been
studied and are called polyharmonic heat flow or curve diffusion flow (cf.
[PW16], [EI05], [May01]), dealing with evolution in the form
$\gamma^{\prime}=(-1)^{p-1}\kappa_{s^{2p}}$ where $\kappa_{s^{2p}}$ is the
$2p$-th order derivative of the curvature $\kappa$ of a plane curve $\gamma$
with respect to the arclength $s$. For $p=0$ it is the standard curve
shortening flow. The flow discussed here corresponds to $p=1$.
The case $n=1$ arises in material science [Mul99]. For more work on the pure
side, see [Whe13] and the many references therein. In [Whe13] it is shown that
curves near a circle have flows which exist forever and converge to a round
circle, as well as a short-time existence theorem that requires only $L^{2}$
initial curvature. It is not difficult to argue that, for immersed figure-8
type curves, singularities must develop in finite time. It is curious to know
whether there exist embedded curves which develop singularities in finite time
(cf. [EI05], [PW16], [May01]).
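To make the $p=1$ case concrete, the following is a minimal numerical sketch (ours, not from the cited works) of curve diffusion flow for a closed plane curve identified with $\mathbb{C}$, using centered differences and explicit Euler. Sign conventions for $\kappa$ vary in the literature, and the stiffness of the fourth-order flow forces the very small time step.

```python
import numpy as np

# Curve diffusion flow sketch: dgamma/dt = -(d^2 kappa / ds^2) nu for a
# closed plane curve, represented as N complex points. Illustrative only.
N = 256
u = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
gamma = (1.0 + 0.1 * np.cos(3 * u)) * np.exp(1j * u)  # perturbed circle

def d_ds(f, curve):
    """Approximate d/ds on a closed curve via centered differences."""
    df = (np.roll(f, -1) - np.roll(f, 1)) / 2.0
    ds = np.abs(np.roll(curve, -1) - np.roll(curve, 1)) / 2.0
    return df / ds

dt = 1e-9  # explicit Euler needs dt on the order of (grid spacing)^4
for _ in range(10_000):
    T = d_ds(gamma, gamma)
    T = T / np.abs(T)                     # unit tangent
    nu = 1j * T                           # unit normal (90-degree rotation)
    kappa = np.real(d_ds(T, gamma) * np.conj(nu))  # signed curvature
    kappa_ss = d_ds(d_ds(kappa, gamma), gamma)
    gamma = gamma - dt * kappa_ss * nu    # one curve diffusion step
```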
## 2. Gradient Flow
We begin by setting up the equation as a formal gradient flow over an $L^{2}$
metric space. Given an embedded Lagrangian submanifold $L^{n}\subset M^{2n}$,
we can consider Hamiltonian deformations of $L$; these will be flows of vector
fields $J\nabla f$ for scalar functions $f$ on $M$. At $x\in L$, the normal
component of $J\nabla f$ is given by $J\nabla_{L}f$ where $\nabla_{L}f$ is the
gradient of $f$ as a function restricted to $L$; conversely, given any smooth
function $f$ on an embedded $L,$ we can extend $f$ to a function on $M$ so
that $\nabla_{L}f$ is not changed and is independent of the extension of $f$
(see section 2.1 below).
In other words, given a family of $C^{1}$ functions $f\left(\cdot,t\right)$
along $L,$ one can construct a family of embeddings
(2.1) $F:L\times\left(-\varepsilon,\varepsilon\right)\rightarrow M$
satisfying
(2.2) $\frac{d}{dt}F(x,t)=J\nabla f\left(x,t\right)$
and conversely, given any path (2.1) within a Hamiltonian isotopy class, there
will be a function $f$ so that (possibly after a diffeomorphism to ensure the
deformation vector is normal) the condition (2.2) is satisfied.
Let $\mathcal{I}_{L_{0}}$ be the set of smooth manifolds that are Hamiltonian
isotopic to $L_{0}$. The (smooth) tangent space at any
$L\in\mathcal{I}_{L_{0}}$ is parameterized via
(2.3) $T_{L}\mathcal{I}_{L_{0}}=\left\{f\in
C^{\infty}(L):\int_{L}f\,dV_{g}=0\right\}.$
We use the $L^{2}$ metric on $T_{L}\mathcal{I}_{L_{0}}$: for $X_{i}=J\nabla
f_{i}$ with $f_{i}\in T_{L}\mathcal{I}_{L_{0}}$, $i=1,2$,
(2.4) $\langle X_{1},X_{2}\rangle=\int_{L}f_{1}f_{2}\,dV_{g}.$
With volume function given by
$\operatorname{Vol}(L)=\int_{L}dV_{g}$
the classical first variation formula gives
$d\operatorname{Vol}(L)(W)=-\int_{L}\langle W,H\rangle dV_{g}$
where $W$ is the deformation vector field and $H$ is the mean curvature
vector. In the situation where allowable deformation vectors are of the form
$W=J\nabla f,$ we get
$\displaystyle d\operatorname{Vol}(L)(f)$ $\displaystyle=-\int_{L}\langle
J\nabla f,H\rangle dV_{g}$ $\displaystyle=\int_{L}\langle\nabla f,JH\rangle
dV_{g}$ $\displaystyle=-\int_{L}f\operatorname{div}\left(JH\right)dV_{g}$
using the fact that $J$ is orthogonal, then integrating by parts. Note that
$-\operatorname{div}\left(JH\right)$ belongs to $T_{L}\mathcal{I}_{L_{0}}$
since it integrates to 0 on $L$. Therefore, it is the gradient of the volume
function with respect to the metric. Thus a (volume decreasing) gradient flow
for volume would be a path satisfying
(2.5) $\frac{dF}{dt}=J\nabla\operatorname{div}\left(JH\right).$
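As a quick sanity check (our own illustrative computation, not from the references), the round circle of radius $r$ in $\mathbb{C}$ is a stationary point of (2.5): parameterized by arclength, its Lagrangian angle is linear in $s$, so the driving term vanishes.

```latex
% Circle gamma(s) = r e^{i s/r} in C, parameterized by arclength s. Its
% (multivalued) Lagrangian angle is theta(s) = s/r + pi/2, and H = J grad(theta), so
\[
\operatorname{div}(JH)
  = \operatorname{div}\bigl(J\,J\nabla\theta\bigr)
  = -\Delta_{g}\theta
  = -\partial_{s}^{2}\Bigl(\frac{s}{r}+\frac{\pi}{2}\Bigr)
  = 0 ,
\]
% so the circle is Hamiltonian stationary, consistent with the remark that
% volume minimizers (e.g. Clifford tori) exist within Hamiltonian isotopy classes.
```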
###### Remark 2.1.
The metric (2.4) is not the usual $L^{2}$ metric for deformations of a
submanifold, which would measure the length of the tangent vector by
$\int_{L}\left|J\nabla f\right|^{2}dV_{g}$; it is better suited to our purposes
than the standard metric on vector fields. Suppose instead we take the “standard” $L^{2}$ metric
on deformation fields:
$d\operatorname{Vol}(L)(J\nabla f)=-\int_{L}\langle J\nabla f,H\rangle dV_{g}$
The gradient with respect to this metric would be $J\nabla\eta$ for some
$\eta\in C^{\infty}(L)$ such that
$-\int_{L}\langle J\nabla f,H\rangle=\int_{L}\langle J\nabla
f,J\nabla\eta\rangle.$
While the Lagrangian angle $\theta$ (in the Calabi-Yau case) does produce this
gradient locally, typically $\theta$ is not globally defined on $L$. So
instead, we must find a function $\eta$ solving the equation
$\Delta\eta=-\operatorname{div}(JH),$
which one can solve uniquely up to additive constants since $L$ is compact;
then $\nabla\eta=-JH+X$ for some divergence-free vector field $X$ on $L$. A
gradient flow would be $dF/dt=H-JX$, but there is no canonical way to
determine $X$.
It is also worth noting that gradient flow with respect to the $L^{2}$ metric
(sometimes called $\mathcal{H}^{-1}$) is not new: It has been used for example
in mechanics to describe the flow of curves [Fif00].
### 2.1. Related definitions of Hamiltonian deformations
Traditionally, Hamiltonian isotopies are defined as flows of the entire
manifold along the direction of a time-dependent vector field $J\nabla f$ for
some $f$ a smooth function on $M$. Two submanifolds are Hamiltonian isotopic
if one submanifold is transported to the other via the isotopy. In order
to use the description (2.3), we note the following standard result:
###### Lemma 2.2.
For a smooth flow of embedded Lagrangian submanifolds satisfying
(2.6) $\frac{dF}{dt}=J\nabla f(\cdot,t)$
for some function $f$ defined on $L$ for each $t$, there exists a function
$\tilde{f}$ on $M\times[0,1]$ that defines a Hamiltonian isotopy on $M$ and
determines the same Hamiltonian isotopy of the submanifolds.
Conversely, given a global Hamiltonian isotopy determined by $\tilde{f}$, the
function $\tilde{f}$ restricted to $L_{t}$ determines a flow of the form
(2.6), possibly up to reparameterization by diffeomorphisms of $L.$
###### Proof.
The function $f$ is defined on a smooth compact submanifold of $M\times[0,1]$.
We can use the Whitney extension theorem to extend $f$ to a smooth function
$\tilde{f}$ off this set whose normal derivatives vanish along it. Thus along
the Lagrangian submanifolds, $J\nabla f=J\nabla\tilde{f}$.
Conversely, given any function $\tilde{f}$ its gradient decomposes into the
normal and tangential parts on the Lagrangian submanifold. By the Lagrangian
condition, $J\nabla^{T}\tilde{f}$ is normal and $J\nabla^{\perp}\tilde{f}$ is
tangential with the latter component describing merely a reparameterization of
$L$. So the flow is completely determined by the component
$J\nabla^{T}\tilde{f}$ which is determined by the restriction to $L$. ∎
### 2.2. Immersed Lagrangian submanifolds and their Hamiltonian deformations
Along the evolution equation (1.1), it is possible that a submanifold which is
initially embedded will become merely immersed. Thus we would like the
equation to behave well even when the submanifold is immersed.
Weinstein’s Lagrangian neighborhood Theorem for immersed Lagrangian
submanifolds [EM02, Theorem 9.3.2] states that any Lagrangian immersion
$F_{0}:L\to M$ extends to an immersion $\Psi$ from a neighborhood of the
0-section in $T^{*}L$ to $M$ with $\Psi^{*}\omega_{M}=\omega_{\text{can}}$.
Sections of the cotangent bundle $T^{*}L$ are clearly embedded as graphs over
the 0-section of $T^{*}L$ which is identified with $L$, so by factoring the
immersion through $T^{*}L$, we get immersed submanifolds in $M$, in
particular, immersed Lagrangian submanifolds in $M$ for sections defined by
closed 1-forms on $L$.
Even though the deformation of an immersed manifold is not properly
Hamiltonian (that is, with velocity vector $J\nabla f$ determined by a global
function $f$ on $M$), one can define deformations by using a function $f$
defined on the submanifold, and $J\nabla f$ makes sense within $T^{*}L$ as
pullback by the immersion. For example, the figure 8 is not problematic
because the two components of a neighborhood of the crossing point can have
different velocity vectors; these are separated within the cotangent bundle.
### 2.3. The evolution equation in terms of $\theta$
By [HL82], for a Lagrangian submanifold $L$ in a Calabi-Yau manifold
$(M^{n},\omega,J,\Omega)$ with a covariant constant holomorphic $n$-form
$\Omega$, the mean curvature of $L$ satisfies $H=J\nabla\theta$ where
$\Omega|_{L}=e^{i\theta}d\operatorname{Vol}_{L}.$
Now (2.5) leads to
$\frac{dF}{dt}=J\nabla\operatorname{div}\left(JH\right)=J\nabla\operatorname{div}\left(JJ\nabla\theta\right)=-J\nabla\Delta\theta.$
Differentiating the left-hand side (cf. [Woo20, Prop 3.2.1]):
$\displaystyle\frac{d}{dt}\Omega|_{L}$
$\displaystyle=\frac{d}{dt}\left(F_{t}^{\ast}\Omega\right)=F_{0}^{\ast}\mathcal{L}_{-J\nabla\Delta\theta}\Omega$
$\displaystyle=F_{0}^{\ast}d\left(\iota_{-J\nabla\Delta\theta}\Omega\right)=d\left(F_{0}^{\ast}\left(\iota_{-J\nabla\Delta\theta}\Omega\right)\right)$
$\displaystyle=d\left(F_{0}^{\ast}i\left(\iota_{-\nabla\Delta\theta}\Omega\right)\right)$
$\displaystyle=d\left(ie^{i\theta}dVol_{L}(-\nabla\Delta\theta,\cdot,...,\cdot)\right)$
$\displaystyle=-d\left(ie^{i\theta}\ast d\Delta\theta\right)$
$\displaystyle=e^{i\theta}\,d\theta\wedge\ast d\Delta\theta-ie^{i\theta}\,d\left(\ast d\Delta\theta\right).$
Then differentiating the right hand side:
$\frac{d}{dt}e^{i\theta}d\operatorname{Vol}_{L}=e^{i\theta}\frac{d}{dt}d\operatorname{Vol}_{L}+ie^{i\theta}\frac{d\theta}{dt}d\operatorname{Vol}_{L}.$
Comparing the imaginary parts (after multiplying by $e^{-i\theta}$) of the
above two gives
(2.7) $\frac{d\theta}{dt}=-\Delta_{g(t)}^{2}\theta.$
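To see the smoothing character of (2.7), consider the linearized model in which $g(t)$ is frozen to the flat metric on a torus (our illustration; the actual flow couples $g(t)$ to $\theta$). Each Fourier mode then decays at a rate quartic in the frequency:

```latex
% Frozen-coefficient model on a flat torus: take g(t) = flat metric, so that
\[
\theta(x,t)=e^{-|k|^{4}t}\sin(k\cdot x)
\quad\Longrightarrow\quad
\frac{d\theta}{dt}=-|k|^{4}\theta=-\Delta^{2}\theta ,
\]
% since Delta sin(k.x) = -|k|^2 sin(k.x). High frequencies are damped like
% e^{-|k|^4 t}, the behaviour expected of a fourth-order parabolic flow.
```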
## 3. Existence and Uniqueness via a Scalar Equation on a Lagrangian Neighborhood
The system of equations (1.1) is not strictly parabolic as given. Our approach
is to make good use of the Lagrangian property, in particular, by setting up
the equation as a scalar, uniformly parabolic equation via Weinstein’s
Lagrangian neighborhood theorem. For convenience of terminology, we present
the discussion for embeddings, but the conclusions hold
for immersions in view of subsection 2.2.
### 3.1. Accompanying flow of scalar functions
Let $L$ be an embedded compact Lagrangian submanifold in a symplectic manifold
$(M,\omega)$. By Weinstein’s Lagrangian neighborhood theorem [Wei71, Corollary
6.2], there is a diffeomorphism $\Psi$ from a neighborhood $U\subset
T^{*}L$ of the 0-section (identified with $L$) to a neighborhood $V\subset M$
of $L$ such that $\Psi^{\ast}\omega=d\lambda_{can}$ and $\Psi$ restricts to
the identity map on $L$.
Let $\varphi(x,t)$ be a smooth function on $L\times[0,\delta)$ with
$\varphi(\cdot,0)=0$. Then $d\varphi$ is a $t$-family of exact (hence closed)
1-forms on $L$, hence a family of sections of $T^{\ast}L$, each a graph
over the 0-section. The symplectomorphism $\Psi$ yields a $t$-family of
Lagrangian submanifolds $L_{t}$ in $M$ near $L$:
(3.1) $F=\Psi(x,d\varphi(x,t))=\Psi\left(x,\frac{\partial\varphi}{\partial
x^{k}}dx^{k}\right).$
###### Proposition 3.1.
Suppose that $d\varphi(x,t)$ is an exact section describing an evolution of
Lagrangian submanifolds which satisfy the equation
(3.2)
$\left(\frac{dF}{dt}\right)^{\perp}=J\nabla\operatorname{div}\left(JH\right).$
Then there is a function $G$ (depending on $\Psi$) such that $\varphi$
satisfies
$\frac{\partial\varphi}{\partial
t}=-g^{ap}g^{ij}\frac{\partial^{4}\varphi}{\partial x^{a}\partial
x^{j}\partial x^{i}\partial x^{p}}+G(x,D\varphi,D^{2}\varphi,D^{3}\varphi).$
The coordinate free expression is
$\frac{\partial\varphi}{\partial
t}=\operatorname{div}J\Delta_{g}\Psi(x,d\varphi).$
###### Proof.
Taking $(x,v)$ for coordinates of $T^{\ast}L,$ let $y^{\alpha}$ be coordinates
in $M$, $\alpha=1,...,2n$. This gives a frame
(3.3) $e_{i}:=\frac{\partial F}{\partial
x^{i}}=\frac{\partial\Psi^{\alpha}}{\partial x^{i}}\frac{\partial}{\partial
y^{\alpha}}+\frac{\partial\Psi^{\alpha}}{\partial
v^{k}}\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{j}}\delta^{jk}\frac{\partial}{\partial
y^{\alpha}}=\Psi_{i}+\varphi_{ij}\delta^{jk}\Psi_{k+n}$
where
$\displaystyle\Psi_{i}$
$\displaystyle:=D_{x^{i}}\Psi=\frac{\partial\Psi^{\alpha}}{\partial
x^{i}}\frac{\partial}{\partial y^{\alpha}}$ $\displaystyle\Psi_{j+n}$
$\displaystyle:=D_{v^{j}}\Psi=\frac{\partial\Psi^{\alpha}}{\partial
v^{j}}\frac{\partial}{\partial y^{\alpha}}.$
Letting also
$F_{i}^{\alpha}:=\frac{\partial\Psi^{\alpha}}{\partial
x^{i}}+\frac{\partial\Psi^{\alpha}}{\partial
v^{k}}\frac{\partial^{2}\varphi}{\partial x^{i}\partial x^{j}}\delta^{jk}$
we have
$e_{i}=F_{i}^{\alpha}\frac{\partial}{\partial y^{\alpha}}.$
Now suppose
$h=h_{\alpha\beta}dy^{a}dy^{\beta}$
is the Riemannian metric on $M$. We are also assuming $\omega(V,W)=h(JV,W)$.
We compute the induced metric $g$ from the immersion:
(3.4) $\displaystyle g_{ij}$ $\displaystyle=h(\partial_{i}F,\partial_{j}F)$
$\displaystyle=\left(\frac{\partial\Psi^{\alpha}}{\partial
x^{i}}\frac{\partial\Psi^{\beta}}{\partial
x^{j}}+\sum_{k}\left(\frac{\partial\Psi^{\alpha}}{\partial
x^{i}}\frac{\partial\Psi^{\beta}}{\partial
v^{k}}\frac{\partial^{2}\varphi}{\partial x^{j}\partial
x^{k}}+\frac{\partial\Psi^{\alpha}}{\partial
x^{j}}\frac{\partial\Psi^{\beta}}{\partial
v^{k}}\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\right)+\sum_{k,l}\frac{\partial\Psi^{\alpha}}{\partial
v^{k}}\partial_{ik}^{2}\varphi\frac{\partial\Psi^{\beta}}{\partial
v^{l}}\partial_{jl}^{2}\varphi\right)h_{\alpha\beta}.$
$\displaystyle=h(\Psi_{i}+\varphi_{ik}\Psi_{k+n},\Psi_{j}+\varphi_{jl}\Psi_{l+n})$
$\displaystyle=h(\Psi_{i},\Psi_{j})+\sum_{k}\left(\varphi_{ik}h(\Psi_{k+n},\Psi_{j})+\varphi_{jk}h(\Psi_{i},\Psi_{k+n})\right)+\sum_{k,l}\varphi_{ik}\varphi_{jl}h(\Psi_{k+n},\Psi_{l+n}).$
Since $\Psi:T^{\ast}L\rightarrow M$ is a symplectomorphism with
$\Psi^{\ast}\omega=dx\wedge dv$, we have
(3.5) $\displaystyle\delta_{ij}$ $\displaystyle=dx\wedge dv(\partial/\partial
x^{i},\partial/\partial v^{j})=\Psi^{\ast}\omega\,(\partial/\partial
x^{i},\partial/\partial v^{j})$
$\displaystyle=h(J\Psi_{\ast}(\partial/\partial
x^{i}),\Psi_{\ast}(\partial/\partial v^{j}))=h(J\Psi_{i},\Psi_{j+n}).$
Similarly
(3.6) $h(J\Psi_{i+n},\Psi_{j+n})=\omega(\Psi_{\ast}(\partial/\partial
v^{i}),\Psi_{\ast}(\partial/\partial v^{j}))=0.$
Now as $F$ describes a Lagrangian manifold, (summing repeated indices below)
(3.7) $\displaystyle 0$
$\displaystyle=\omega(e_{i},e_{j})=h(J\Psi_{i}+\partial_{k}\varphi_{i}J\Psi_{k+n},\Psi_{j}+\partial_{l}\varphi_{j}\Psi_{l+n})$
$\displaystyle=h(J\Psi_{i},\Psi_{j})+\partial_{l}\varphi_{j}h(J\Psi_{i},\Psi_{l+n})+\partial_{k}\varphi_{i}h(J\Psi_{k+n},\Psi_{j})+\partial_{k}\varphi_{i}\partial_{l}\varphi_{j}h(J\Psi_{k+n},\Psi_{l+n})$
$\displaystyle=h(J\Psi_{i},\Psi_{j})-\varphi_{jk}h(\Psi_{i},J\Psi_{k+n})+\varphi_{ik}h(J\Psi_{k+n},\Psi_{j})+\varphi_{ik}\varphi_{jl}h(J\Psi_{k+n},\Psi_{l+n})$
$\displaystyle=h(J\Psi_{i},\Psi_{j})+\varphi_{ik}\varphi_{jl}h(J\Psi_{k+n},\Psi_{l+n})\qquad\text{by (3.5)}$ $\displaystyle=h(J\Psi_{i},\Psi_{j})\qquad\text{by (3.6)}.$
Now, $\{\Psi_{i},J\Psi_{j}:1\leq i,j\leq n\}$ is a basis for the ambient
tangent space at a point in the image of $F$. So is
$\{\Psi_{i},\Psi_{j+n}:1\leq i,j\leq n\}$ (as $\Psi$ is a local
diffeomorphism). We represent the latter vectors by
(3.8) $\Psi_{i+n}=a^{ij}\Psi_{j}+b^{ij}J\Psi_{j}.$
Computing the pairing $h\left(J\Psi_{j},\Psi_{i+n}\right)$ using (3.5) on the
left and (3.8) on the right yields $b^{ij}=h^{ij}$ as the inverse of the
positive definite matrix $h_{ij}:=h(\Psi_{i},\Psi_{j})$. Now recalling (3.1)
$\frac{\partial F}{\partial t}=\left(\frac{\partial}{\partial
t}\frac{\partial\varphi}{\partial x^{k}}\right)\frac{\partial\Psi}{\partial
v^{k}},$
project onto the normal space:
$\displaystyle\left(\frac{\partial F}{\partial t}\right)^{\perp}$
$\displaystyle=\frac{\partial\varphi_{t}}{\partial
x^{k}}h(\Psi_{k+n},Je_{p})Je_{q}g^{pq}$
$\displaystyle=\frac{\partial\varphi_{t}}{\partial
x^{k}}h(\Psi_{k+n},J\Psi_{p}+\varphi_{pj}J\Psi_{j+n})Je_{q}g^{pq}$
$\displaystyle=\frac{\partial\varphi_{t}}{\partial
x^{k}}h(\Psi_{k+n},J\Psi_{p})Je_{q}g^{pq}$
$\displaystyle=\frac{\partial\varphi_{t}}{\partial
x^{k}}\delta_{kp}Je_{q}g^{pq}\qquad\text{by (3.5)}$
$\displaystyle=\frac{\partial\varphi_{t}}{\partial
x^{k}}\left(Je_{q}g^{kq}\right)$ $\displaystyle=J\nabla\varphi_{t}.$
Let $H=H^{m}Je_{m}$ for the Lagrangian $L$. As $JH$ is tangential its
divergence on $L$ is
(3.9) $\displaystyle\operatorname{div}(JH)$
$\displaystyle=-\operatorname{div}\left(H^{m}e_{m}\right)$
$\displaystyle=-g^{ab}h(\nabla_{e_{a}}\left(H^{m}e_{m}\right),e_{b})$
$\displaystyle=-g^{ab}h\left(\frac{\partial}{\partial
x^{a}}H^{m}e_{m}+H^{m}\Gamma_{am}^{p}e_{p},e_{b}\right)$
$\displaystyle=-g^{ab}\frac{\partial}{\partial
x^{a}}H^{m}g_{mb}-g^{ab}H^{m}\Gamma_{am}^{p}g_{pb}$
$\displaystyle=-\frac{\partial}{\partial x^{a}}H^{a}-H^{m}\Gamma_{am}^{a}$
where the Christoffel symbols are for the induced metric $g$. The components
of $H$ are given by
(3.10)
$H^{a}=h\left(H,Je_{p}\right)g^{ap}=h\left(g^{ij}\left(\frac{\partial^{2}F^{\beta}}{\partial
x^{i}\partial
x^{j}}+F_{i}^{\alpha}F_{j}^{\gamma}\tilde{\Gamma}_{\alpha\gamma}^{\beta}\right)\frac{\partial}{\partial
y^{\beta}},Je_{p}\right)g^{ap}.$
Now differentiate components of (3.3)
$\frac{\partial^{2}F^{\beta}}{\partial x^{i}\partial
x^{j}}=\frac{\partial\Psi^{\beta}}{\partial x^{j}\partial
x^{i}}+\frac{\partial^{3}\varphi}{\partial x^{j}\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial\Psi^{\beta}}{\partial
v^{l}}+\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial x^{j}\partial
v^{l}}.$
Plug in (3.3) to get
$\displaystyle h$ $\displaystyle\left(\frac{\partial^{2}F^{\beta}}{\partial
x^{i}\partial x^{j}}\frac{\partial}{\partial
y^{\beta}},Je_{p}\right)=\omega\left(e_{p},\frac{\partial^{2}F^{\beta}}{\partial
x^{i}\partial x^{j}}\frac{\partial}{\partial y^{\beta}}\right)$
$\displaystyle=$
$\displaystyle\omega\left(\frac{\partial\Psi^{\delta}}{\partial
x^{p}}\frac{\partial}{\partial y^{\delta}}+\frac{\partial^{2}\varphi}{\partial
x^{p}\partial x^{q}}\delta^{qm}\frac{\partial\Psi^{\delta}}{\partial
v^{m}}\frac{\partial}{\partial
y^{\delta}},\frac{\partial\Psi^{\beta}}{\partial x^{j}\partial
x^{i}}\frac{\partial}{\partial y^{\beta}}+\frac{\partial^{3}\varphi}{\partial
x^{j}\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial\Psi^{\beta}}{\partial
v^{l}}\frac{\partial}{\partial y^{\beta}}+\frac{\partial^{2}\varphi}{\partial
x^{i}\partial x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial
x^{j}\partial v^{l}}\frac{\partial}{\partial y^{\beta}}\right)$
$\displaystyle=$ $\displaystyle\frac{\partial^{3}\varphi}{\partial
x^{j}\partial x^{i}\partial
x^{k}}\delta^{kl}\omega\left(\frac{\partial\Psi^{\delta}}{\partial
x^{p}}\frac{\partial}{\partial
y^{\delta}},\frac{\partial\Psi^{\beta}}{\partial
v^{l}}\frac{\partial}{\partial
y^{\beta}}\right)+\frac{\partial^{2}\varphi}{\partial x^{p}\partial
x^{q}}\delta^{qm}\frac{\partial^{3}\varphi}{\partial x^{j}\partial
x^{i}\partial
x^{k}}\delta^{kl}\omega\left(\frac{\partial\Psi^{\delta}}{\partial
v^{m}}\frac{\partial}{\partial
y^{\delta}},\frac{\partial\Psi^{\beta}}{\partial
v^{l}}\frac{\partial}{\partial y^{\beta}}\right)$
$\displaystyle+F_{p}^{\delta}\left(\frac{\partial\Psi^{\beta}}{\partial
x^{j}\partial x^{i}}+\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial x^{j}\partial
v^{l}}\right)\omega_{\delta\beta}\circ F$ $\displaystyle=$
$\displaystyle\frac{\partial^{3}\varphi}{\partial x^{j}\partial x^{i}\partial
x^{k}}\delta^{kl}\Psi^{\ast}\omega\left(\frac{\partial}{\partial
x^{p}},\frac{\partial}{\partial
v^{l}}\right)+\frac{\partial^{2}\varphi}{\partial x^{p}\partial
x^{q}}\delta^{qm}\frac{\partial^{3}\varphi}{\partial x^{j}\partial
x^{i}\partial x^{k}}\delta^{kl}\Psi^{\ast}\omega\left(\frac{\partial}{\partial
v^{m}},\frac{\partial}{\partial v^{l}}\right)$
$\displaystyle+F_{p}^{\delta}\left(\frac{\partial\Psi^{\beta}}{\partial
x^{j}\partial x^{i}}+\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial x^{j}\partial
v^{l}}\right)\omega_{\delta\beta}\circ F$ $\displaystyle=$
$\displaystyle\frac{\partial^{3}\varphi}{\partial x^{j}\partial x^{i}\partial
x^{k}}\delta^{kl}\delta_{pl}+F_{p}^{\delta}\left(\frac{\partial\Psi^{\beta}}{\partial
x^{j}\partial x^{i}}+\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial x^{j}\partial
v^{l}}\right)\omega_{\delta\beta}\circ F.$
Now also
$h\left(\frac{\partial}{\partial
y^{\beta}},Je_{p}\right)=\omega\left(F_{p}^{\delta}\frac{\partial}{\partial
y^{\delta}},\frac{\partial}{\partial
y^{\beta}}\right)=F_{p}^{\delta}\omega_{\delta\beta}\circ F.$
Combining (3.10) and the above
$\displaystyle H^{a}$
$\displaystyle=g^{ij}g^{ap}\left(\frac{\partial^{3}\varphi}{\partial
x^{j}\partial x^{i}\partial
x^{k}}\delta^{kl}\delta_{pl}+F_{p}^{\delta}\left(\frac{\partial\Psi^{\beta}}{\partial
x^{j}\partial x^{i}}+\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial x^{j}\partial
v^{l}}\right)\omega_{\delta\beta}\circ F\right)$
$\displaystyle+g^{ij}g^{ap}F_{i}^{\alpha}F_{j}^{\gamma}\tilde{\Gamma}_{\alpha\gamma}^{\beta}F_{p}^{\delta}\omega_{\delta\beta}\circ
F.$
Thus using the expression we derived in (3.9)
(3.11) $\displaystyle\operatorname*{div}$
$\displaystyle(JH)=-g^{ap}g^{ij}\frac{\partial^{4}\varphi}{\partial
x^{a}\partial x^{j}\partial x^{i}\partial
x^{p}}-\left(\frac{\partial}{\partial
x^{a}}\left(g^{ij}g^{ap}\right)\right)\frac{\partial^{3}\varphi}{\partial
x^{j}\partial x^{i}\partial x^{p}}$ $\displaystyle-\frac{\partial}{\partial
x^{a}}\left(g^{ij}g^{ap}\right)F_{p}^{\delta}\left[\left(\frac{\partial\Psi^{\beta}}{\partial
x^{j}\partial x^{i}}+\frac{\partial^{2}\varphi}{\partial x^{i}\partial
x^{k}}\delta^{kl}\frac{\partial^{2}\Psi^{\beta}}{\partial x^{j}\partial
v^{l}}\right)+F_{i}^{\alpha}F_{j}^{\gamma}\tilde{\Gamma}_{\alpha\gamma}^{\beta}\right]\omega_{\delta\beta}\circ
F-H^{m}\Gamma_{am}^{a}.$
Now recalling (3.4), we see the metric components $g_{ab}$ involve second
order derivatives in terms of $\varphi,$ thus $\Gamma_{ij}^{k}$ are third
order. So each term above after the first term is at most third order. ∎
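As a quick check (our own computation), the principal symbol of the leading term confirms that the scalar equation is uniformly parabolic whenever the induced metric stays uniformly equivalent to a fixed metric:

```latex
% Substituting the Fourier symbol d/dx^a -> i xi_a into the leading term:
\[
-g^{ap}g^{ij}(i\xi_{a})(i\xi_{j})(i\xi_{i})(i\xi_{p})
=-\bigl(g^{ij}\xi_{i}\xi_{j}\bigr)^{2}
=-|\xi|_{g}^{4}<0
\qquad(\xi\neq 0),
\]
% a negative-definite fourth-order symbol, i.e. the model equation
% phi_t = -Delta_g^2 phi + (lower order), which is uniformly parabolic.
```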
### 3.2. Short time existence
###### Proposition 3.2.
Given an initial smooth immersion of a compact $L$ into $M$, there exists
a solution to (1.1)-(1.2) for some short time.
###### Proof.
Choose a Weinstein neighborhood containing $L.$ Now suppose we have $\varphi$
which satisfies the fourth order equation
(3.12)
$\displaystyle\varphi_{t}=-g^{ap}g^{ij}\frac{\partial^{4}\varphi}{\partial
x^{a}\partial x^{j}\partial x^{i}\partial
x^{p}}+G(x,D\varphi,D^{2}\varphi,D^{3}\varphi)=\operatorname{div}(JH)$
$\displaystyle\varphi(\cdot,0)=0.$
Then the immersions $F$ generated from $\varphi(x,t)$ satisfy
$\displaystyle\left(\frac{\partial F}{\partial t}\right)^{\perp}$
$\displaystyle=J\nabla\varphi_{t}=J\nabla\operatorname{div}(JH).$
As the normal component satisfies the appropriate equation, we may compose
with diffeomorphisms to get a flow (see Claim 3.3 below) such that
(3.13) $\frac{\partial F}{\partial t}=J\nabla\operatorname{div}(JH).$
Now the equation (3.12) is precisely of the form of the $2p$-th order
quasilinear parabolic equations studied in [MM12]. By [MM12, Theorem 1.1] we have short
time existence for the solution to (3.12), thus we have short-time existence
for the flow (3.13). ∎
### 3.3. Uniqueness
We start with a standard observation.
###### Claim 3.3.
Suppose that $F:L\times[0,T)\rightarrow M$ is a family of immersions
satisfying
$\left(\frac{\partial F}{\partial t}\right)^{\perp}=N(x,t)$
for some vector field $N(x,t)$ which is normal to the immersed submanifold
$F(\cdot,t)(L)$. There exists a unique family of diffeomorphisms
$\chi_{t}:L\rightarrow L$ such that
$\frac{\partial}{\partial t}F(\chi_{t}(x),t)=N(\chi_{t}(x),t)\quad\text{and}\quad\chi_{0}=\mathrm{Id}_{L}.$
###### Proof.
Given the flow exists, the given velocity field will decompose orthogonally
into normal and tangential components:
$\frac{\partial F}{\partial t}=N(x,t)+T\left(x,t\right).$
Consider the time-dependent vector field on $L$
$V(x,t)=-D_{L}F(x,t)^{-1}T\left(x,t\right).$
By the Fundamental Theorem on Flows, (cf. [Lee13, Theorem 9.48]) there is a
unique flow on $L$ starting at the identity and satisfying
$\frac{\partial}{\partial t}\chi_{t}(x)=V(\chi_{t}(x),t).$
Composing this flow with the original flow $F$ yields the result. ∎
###### Theorem 3.4.
The solution to the initial value problem (1.1) - (1.2) is unique. More
precisely, if $F_{1}$ and $F_{2}$ are two solutions of (1.1) such that
$F_{1}(x,t_{0})=F_{2}(x,t_{0}^{\prime})$ for some $t_{0},t_{0}^{\prime}$ and
all $x\in L$, then
$F_{1}(x,t_{0}+\tau)=F_{2}(x,t_{0}^{\prime}+\tau)$
for all $\tau$ in an open neighborhood of $0$ where both sides above are
defined.
###### Proof.
Without loss of generality, we take $t_{0}=t_{0}^{\prime}=0$. Let
$L=F_{1}(\cdot,0)=F_{2}(\cdot,0)$ and $\Psi:U\subset T^{\ast}L\to V\subset M$
be a Lagrangian neighborhood mapping.
First, we show that the normal flow of $F_{i}(\cdot,t)$ is given in the
neighborhood $\Psi$ by the graph of an exact section $d\varphi_{i}(\cdot,t)$
where $\varphi_{i}$ solves a problem of the form (3.12). To this end, note
that for $\tau$ in the domain, the path
$\left\{F_{i}(\cdot,t):t\in[0,\tau]\right\}$ is a Hamiltonian
isotopy between $F_{i}(\cdot,0)$ and $F_{i}(\cdot,\tau)$. Being a Hamiltonian
isotopy is invariant under the symplectomorphism $\Psi$, so the sections
$\Psi^{-1}\left(F_{i}(\cdot,0)\right)$ and
$\Psi^{-1}\left(F_{i}(\cdot,\tau)\right)$ are Hamiltonian isotopic. By [Wei71,
Corollary 6.2], Lagrangian submanifolds that are near to the $0$-section are
given as graphs of closed sections of the cotangent bundle. As the flow is
smooth, for small times the Lagrangian submanifolds are near enough to be
described by closed sections. According to [MS17, Proposition 9.4.2], these
sections are exact, that is
$\Psi^{-1}\circ
F_{i}(\cdot,\tau)\left(L\right)=\left\\{d\varphi_{i}\left(x,\tau\right):x\in
L\right\\}.$
In other words,
$\Psi(\left\\{d\varphi_{i}\left(x,\tau\right):x\in
L\right\\})=F_{i}(\cdot,\tau)\left(L\right),$
meaning that for each $\tau$
$\Psi\circ d\varphi_{i}:L\rightarrow T^{\ast}L\rightarrow M$
is a Lagrangian immersion, possibly with a reparameterized base. In
particular, the flow $F_{i}$ determines a flow of scalar functions
$\varphi_{i}$ such that $\Psi\circ d\varphi_{i}\left(\cdot,t\right)$ recovers
the same family of submanifolds as $F_{i}$ (up to reparameterization). By
Proposition 3.1, the scalar equation (3.12) holds on $\varphi_{i}$ as does the
initial condition $d\varphi_{i}\equiv 0$. It follows that $\varphi_{1}$ and
$\varphi_{2}$ both satisfy the same equation (3.12) and have the same initial
condition, so $\varphi_{1}=\varphi_{2}+C$ for some constant $C$. Thus the
flows $F_{1}$ and $F_{2}$ are the same. ∎
Theorem 3.4 allows for seamless extension of the flow: while a given Weinstein
Lagrangian neighborhood may only exist around $L_{0}$, if another Lagrangian
neighborhood of $L_{0}$ extends the flow, the two flows patch together
smoothly.
## 4\. Higher order estimates based on curvature bounds
The goal of this section is to show that a solution with uniformly bounded
second fundamental form over $[0,T)$ enjoys estimates of all orders and can be
extended.
###### Theorem 4.1.
Suppose that the flow (1.1) exists on $[0,T)$ and the second fundamental form
has a uniform bound on $[0,T)$. Then the flow converges smoothly as
$t\rightarrow T$, and so can be extended to $[0,T+\varepsilon)$ for some
$\varepsilon>0$.
To prove this theorem, our approach establishes a priori estimates via
integral estimates derived from the following differential inequality:
###### Proposition 4.2.
Suppose that $F$ is a solution to (1.1) on $[0,T)$ for a compact Lagrangian
submanifold $L$ inside a compact $M$. Suppose the second fundamental form has
a uniform bound $K$. There exists $C$ depending on $K$, the ambient geometry
of $M$ and $\operatorname{Vol}(L_{0})$ such that for all $k\geq 2$
(4.1) $\frac{d}{dt}\int_{L}\left|\nabla^{k-1}A\right|^{2}dV_{g}(t)\leq
C\int_{L}\left|\nabla^{k-1}A\right|^{2}dV_{g}(t)+C\sum_{l=0}^{k-2}\int_{L}\left|\nabla^{l}A\right|^{2}dV_{g}(t).$
A Weinstein neighborhood map determines the equation the flow must satisfy,
and we could derive estimates of all orders based on this particular equation.
However, the flow is expected to leave a given neighborhood after some time,
and we will need to take a new neighborhood. We would need to know the speed
of the flow to patch estimates from one neighborhood to another, but this
requires knowing the size of the Weinstein neighborhoods around the Lagrangian
submanifolds at different times.
We require charts with uniform geometric estimates. To obtain these we appeal
to uniform local Darboux coordinates given in [JLS11]. These charts are local
but are given with uniform geometric bounds. The short-time existence of the
flow is already determined by the global Weinstein neighborhoods; we write the
flow in these Darboux charts as a scalar equation from which we derive
integral estimates for derivatives of any order.
### 4.1. Uniform Darboux charts.
We record [JLS11, Prop.3.2 and Prop.3.4] on existence of Darboux coordinates
with estimates on a symplectic manifold. Let $\pi:\mathcal{U}\rightarrow M$ be
the $U(n)$ frame bundle of $M$. A point in $\mathcal{U}$ is a pair $(p,v)$
with $\pi(p,v)=p\in M$ and $v:\mathbb{R}^{2n}\rightarrow T_{p}M$ an
isomorphism satisfying $v^{\ast}(\omega_{p})=\omega_{0}$ and
$v^{\ast}(h|_{p})=h_{0}$ (the standard metric on $\mathbb{C}^{n}$). The right
action of $U(n)$ on $\mathcal{U}$ is free: $\gamma(p,v)=(p,v\circ\gamma)$ for
any $\gamma\in U(n)$.
###### Proposition 4.3 (Joyce-Lee-Schoen).
Let $(M,\omega)$ be a real $2n$-dimensional symplectic manifold without
boundary, and a Riemannian metric $h$ compatible with $\omega$ and an almost
complex structure $J$. Let $\mathcal{U}$ be the $U(n)$ frame bundle of $M$.
Then for small $\varepsilon>0$ we can choose a family of embeddings
$\Upsilon_{p,v}:B^{2n}_{\varepsilon}\rightarrow M$ depending smoothly on
$\left(p,v\right)\in\mathcal{U}$, where $B^{2n}_{\varepsilon}$ is the ball of
radius $\varepsilon$ about $0$ in $\mathbb{C}^{n},$ such that for all
$\left(p,v\right)\in\mathcal{U}$ we have
1. (1)
$\Upsilon_{p,v}(0)=p$ and $d\Upsilon_{p,v}|_{0}=v:\mathbb{C}^{n}\rightarrow
T_{p}M;$
2. (2)
$\Upsilon_{p,v\circ\gamma}\equiv\Upsilon_{p,v}\circ\gamma$ for all
$\gamma\in U(n);$
3. (3)
$\Upsilon_{p,v}^{\ast}(\omega)\equiv\omega_{0}=\frac{\sqrt{-1}}{2}\sum_{j=1}^{n}dz_{j}\wedge
d\bar{z}_{j};$ and
4. (4)
$\Upsilon_{p,v}^{\ast}(h)=h_{0}+O(|z|)$.
Moreover, for a dilation map $\mathbf{t}:B^{2n}_{R}\to B^{2n}_{\varepsilon}$
given by $\mathbf{t}(z)=tz$ where $t\leq\varepsilon/R$, set
$h^{t}_{p,v}=t^{-2}(\Upsilon_{p,v}\circ\mathbf{t})^{*}h$. Then the following holds:
1. (5)
$\|h^{t}_{p,v}-h_{0}\|_{C^{0}}\leq C_{0}t\ \ \ \ \mbox{and}\ \ \ \|\partial
h^{t}_{p,v}\|_{C^{0}}\leq C_{1}t$,
where norms are taken w.r.t. $h_{0}$ and $\partial$ is the Levi-Civita
connection of $h_{0}$.
###### Proposition 4.4.
Suppose that $M$ is a compact symplectic manifold with a compatible Riemannian
metric $h$. Suppose that $L$ is a compact Lagrangian submanifold of $M$ with
second fundamental form bounded above by $K$ and volume bounded above by
$V_{0}$. Given $c_{n}>0,$ there exists an $r_{0}=r_{0}(K,c_{n})>0$ and a
finite cover of $L$ by Darboux charts
$\Upsilon_{p_{j},v_{j}}:B_{r_{0}}^{2n}\rightarrow M$ centered at points
$p_{j}$ on $L$ such that
1. (1)
The connected component of $L\cap B_{r_{0}}^{2n}$ containing $p_{j}$ is
represented by a graph $\left(x,d\varphi^{(j)}\right)$ over
$B_{r_{0}}^{2n}\cap\mathbb{R}^{n}\times\left\\{0\right\\}$ for some potential
$\varphi^{(j)}$.
2. (2)
The tangent plane at each point of this connected component satisfies a
closeness condition with respect to the planes
$v_{j}(\mathbb{R}^{n}\times\\{0\\}):$
(4.2) $\max_{\begin{subarray}{c}\left|e\right|_{g}=1,\text{ }e\in T_{p}L\\\
\left|\nu\right|_{\delta_{0}}=1,\nu\in\left\\{0\right\\}\times\mathbb{R}^{n}\end{subarray}}e\cdot\nu<c_{n}$
where the dot product is in the euclidean metric $\delta_{0}$, and $c_{n}$ is
a small universal constant (say $c_{n}=\frac{1}{10\sqrt{n}}$) chosen so that
quantities such as the volume element and coordinate expression for $h$ are
bounded by universal constants.
3. (3)
The ambient metric $h$ is very close to the euclidean metric, that is
$\|h-\delta_{0}\|<c_{n}$ for some $c_{n}$ (can be the same $c_{n}$ as in (2)
above).
4. (4)
The submanifold $L$ is covered by the charts obtained by restricting these
charts to $B_{r_{0}}^{2n}(p_{j})$.
5. (5)
The number of such points $\left\\{p_{j}\right\\}$ satisfies
(4.3) ${N}(K,V_{0})\leq\frac{C(n)V_{0}}{r_{0}^{n}(K,c_{n})}.$
###### Proof.
At each point $p\in L$ we take a Darboux chart $\Upsilon_{p,v}$ as described
above such that $T_{p}L=\mathbb{R}^{n}\times\left\\{0\right\\}$ in the given
chart. Note that after some fixed re-scalings, we can assert via Proposition
4.3 that $\Upsilon_{p,v}$ exists on $B_{\varepsilon_{0}}^{2n}$ and satisfies
any near euclidean metric conditions we choose to prescribe, including the
closeness condition: $\left|h-\delta_{0}\right|<c$. Now we may apply
Proposition 5.1, which asserts the existence of a ball
$B_{r_{0}}^{n}(p)\subset\mathbb{R}^{n}\times\\{0\\}$ over which $L$ is
representable as a graph, with (4.2) holding. The quantity $r_{0}$ will depend
on $K$.
Consider the compact immersed submanifold $L$ as a metric space $(L,d)$.
Taking a finite cover of metric balls $B_{r_{0}/4}(p)$ for $p\in L$ and
applying Vitali's covering lemma, we conclude that there is a subset of these
points $\\{p_{j}\\}$ such that $L=\cup_{i}B_{3r_{0}/4}(p_{i})$ and the
$B_{r_{0}/4}(p_{i})$ are mutually disjoint. By (4.2), $L\cap
B_{3r_{0}/4}(p_{i})$ is in the image of a graph given by Proposition 5.1. In
particular, each of the disjoint $B_{r_{0}/4}(p_{i})$’s has volume at least
$\omega_{n}c^{n}r_{0}^{n}$. The bound (4.3) on the number of balls follows. As
$\\{B_{3r_{0}/4}(p_{i})\\}$ covers $L$ and each of these balls is contained in
a graph over $B_{r_{0}}^{n},$ we take the set of the graphs as the cover. ∎
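Explicitly, the counting in the last step is: disjointness and the per-ball volume lower bound give
$N\,\omega_{n}c^{n}r_{0}^{n}\leq\sum_{i}\operatorname{Vol}\left(B_{r_{0}/4}(p_{i})\right)\leq\operatorname{Vol}(L)\leq V_{0},\qquad\text{hence}\qquad N\leq\frac{C(n)V_{0}}{r_{0}^{n}},$
which is (4.3).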
The scalar functions from the exact sections of $T^{*}L$ are globally
defined on $L$ via the abstract Weinstein map $\Psi$. We have utilized them to
establish short-time existence and uniqueness for our geometric flow of $F$.
However, for higher order a priori estimates, we need to set up the flow
equation in a Darboux chart with estimates on the metric as described above.
Fortunately, each $\Upsilon_{p,v}$ is a symplectomorphism, which takes
gradient graphs $(x,d\varphi)$ to Lagrangian submanifolds, so the computations
in section 3.1 can be repeated verbatim, with $\Upsilon_{p,v}$ in place of
$\Psi$. In particular, in each chart, the flow is determined by an equation
(4.4) $\varphi_{t}=-g^{ap}g^{ij}\frac{\partial^{4}\varphi}{\partial
x^{a}\partial x^{j}\partial x^{i}\partial
x^{p}}+G(x,D\varphi,D^{2}\varphi,D^{3}\varphi).$
###### Remark 4.5.
A precise computation in Darboux coordinates of the expression (3.10) gives
$\displaystyle h\left(H,Je_{p}\right)$
$\displaystyle=g^{ij}\varphi_{pij}+g^{ij}\tilde{\Gamma}_{ij}^{p+n}+g^{ij}\varphi_{kj}\delta^{km}\tilde{\Gamma}_{i,m+n}^{p+n}+g^{ij}\varphi_{ki}\delta^{km}\tilde{\Gamma}_{m+n,j}^{p+n}$
$\displaystyle+g^{ij}\varphi_{ki}\varphi_{lj}\delta^{km}\delta^{lr}\tilde{\Gamma}_{m+n,r+n}^{p+n}-g^{ij}\tilde{\Gamma}_{ij}^{q}\varphi_{pq}-g^{ij}\varphi_{kj}\delta^{km}\tilde{\Gamma}_{i,m+n}^{q}\varphi_{pq}$
$\displaystyle-g^{ij}\varphi_{ki}\delta^{km}\tilde{\Gamma}_{m+n,j}^{q}\varphi_{pq}-g^{ij}\varphi_{ki}\varphi_{lj}\delta^{km}\delta^{lr}\tilde{\Gamma}_{m+n,r+n}^{q}\varphi_{pq}$
where $\tilde{\Gamma}_{ij}^{q}$ are Christoffel symbols in the ambient metric
$\left(M,h\right)$. Considering that each expression of the form $g^{ab}$ is a
smooth function in terms of $D^{2}\varphi$ with dependence on zero order of
$h$ and each $\tilde{\Gamma}_{ij}^{\beta}$ expression depends on $Dh$ and $h$,
one may conclude (after computing $\operatorname{div}(JH)$ as in (3.11)) that
$G$ can be written as a sum of expressions that are
1. (1)
quadratic in $D^{3}\varphi$ and smooth in $D^{2}\varphi,h$ in a predetermined
way
2. (2)
linear in $D^{3}\varphi$, smooth in $D^{2}\varphi,h,Dh$ in a predetermined way
3. (3)
smooth in $D^{2}\varphi,h$ and linear in $D^{2}h$ in a predetermined way
4. (4)
smooth in $D^{2}\varphi,h,Dh$ in a predetermined way.
This allows us to make a claim that there is uniform control on the important
quantities involved in the equation we are solving.
###### Proposition 4.6.
Suppose that $L$ is a compact Lagrangian manifold with volume $V_{0}$ and
evolves by (1.1) on $[0,T)$. If the norm of the second fundamental form $A$ of $L_{t}$
satisfies $|A|_{g(t)}\leq K$ for $t\in[0,T)$, then after a fixed rescaling on
$M$ there is a finite set of Darboux charts such that
1. (1)
The submanifold is covered by graphs over
$B_{1}^{2n}\cap\mathbb{R}^{n}\times\left\\{0\right\\}$.
2. (2)
The submanifold is graphical over
$B_{5}^{2n}\cap\mathbb{R}^{n}\times\left\\{0\right\\}$ in each chart.
3. (3)
The slope bound (4.2) holds over $B_{5}^{n}(0).$
4. (4)
The flow (1.1) is governed by (4.4) locally in these charts.
5. (5)
For each chart, the $G$ from (4.4) satisfies a uniform bound on derivatives of
any fixed order (with respect to all four arguments, not with respect to the
$x$ coordinate before embedding).
6. (6)
The number of charts is controlled
(4.5) ${N}(K,V_{0})\leq C(K)\frac{V_{0}}{r_{0}^{n}(K)}.$
###### Proof.
Rescale $M$ so that $r_{0}=5.$ Then the expression $G$ becomes predictably
controlled by Remark 4.5. Choosing a cover with interior balls, as in the
proof of Proposition 4.4, determines the necessary number of balls. ∎
### 4.2. Localization
Let $L_{t}$ evolve by (1.1) with time $t\in[0,T),$ and assume
$\left|A\right|_{g(t)}\leq K$ for all $L_{t}.$ Our goal is to establish
integral bounds for $\left|\nabla^{l}A\right|_{g(t)}^{2},$ which only depend
on $k,K,M$ and the initial volume $V_{0}$ of $L_{0}.$ To derive the
differential inequality (4.1) at any time $t_{0}$, we use Proposition 4.6 and
express geometric quantities $g,A,\nabla^{l}A$, etc., in the (no more than
$N)$ Darboux charts in terms of $\varphi(x,t)$ for $x\in B_{5}^{n}.$ By
compactness of $L$ and smoothness of the flow, the flow will continue to be
described by graphs of $d\varphi(x,t)$ in this open union of $N$ charts for
$t\in[t_{0},t_{1})$ for some $t_{1}>t_{0}.$
To be precise, each of the Darboux charts in Proposition 4.6 has a product
structure; we may assume that each chart contains coordinates
$B_{4}^{n}(0)\times B_{2}^{n}(0)$ so that $L$ is graphical over $B_{5}^{n}(0)$
and further that the collection of $B_{1}^{n}(0)\times B_{1}^{n}(0)$ covers
$L$. Now we may fix once and for all a function $\eta$ which is equal to $1$
on $B_{1}^{n}(0)\times B_{1}^{n}(0)$ and vanishes within $B_{2}^{n}(0)\times
B_{2}^{n}(0)$. For a given chart $\Upsilon^{\alpha}$ (here
$\alpha\in\left\\{1,...,N\right\\}$ indexes our choice of charts) we call the
function $\eta_{\alpha}$. This function will have uniformly bounded dependence
on the variables $x$ and $y$ in the chart.
Now once these $\eta_{\alpha}$ are chosen, we may then define a partition of
unity for the union of charts which form a tubular neighborhood of $L$, which
will restrict to a partition of unity for small variations of $L$:
(4.6) $\rho_{\alpha}^{2}:=\frac{\eta_{\alpha}^{2}}{\sum\eta_{\alpha}^{2}}.$
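Note that on the union of the blocks $B_{1}^{n}(0)\times B_{1}^{n}(0)$, which covers $L$ (and indeed wherever some $\eta_{\beta}\neq 0$), this is a genuine partition of unity:
$\sum_{\alpha}\rho_{\alpha}^{2}=\sum_{\alpha}\frac{\eta_{\alpha}^{2}}{\sum_{\beta}\eta_{\beta}^{2}}=1.$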
By compactness of the unit frame bundle and the smoothness of the family of
charts defined in Proposition 4.3, the transition functions between charts
will have bounded derivatives to any order. Thus, in a fixed chart, where a
piece of $L$ is represented as $\left\\{\left(x,d\varphi(x)\right):x\in
B_{1}(0)\right\\}$, the dependence of $\rho_{\alpha}^{2}$ will be uniformly
controlled in terms of these variables, so there is a uniform pointwise bound
(4.7) $\left|D_{x}^{2}\rho_{\alpha}^{2}\right|\leq
C(D^{3}\varphi,D^{2}\varphi,D\varphi,x)$
where this dependence is at most linear in $D^{3}\varphi$. We will be using the
$x$ coordinates as charts for $L.$
Note also that, if we have a uniform bound on $\frac{d}{dt}D\varphi$ and
$\frac{d}{dt}D^{2}\varphi$ we can conclude a positive lower bound on
$t_{1}-t_{0};$ the flow will be described by graphs of $\varphi(x,t)$ in these
$N$ charts, and the condition (4.2) will be satisfied for a slightly larger
$c_{n}^{\prime}$ (say $c_{n}^{\prime}=\frac{1}{5\sqrt{n}}$ instead of
$c_{n}=\frac{1}{10\sqrt{n}})$.
#### 4.2.1. Expression for metric and second fundamental form
In the Darboux charts for $M$, the manifold $L$ is expressed graphically over
the $x$ coordinate via
$x\mapsto F(x)=\left(x,d\varphi(x)\right).$
Thus we have a tangential frame:
(4.8) $e_{i}=\partial_{x^{i}}F=E_{i}+\varphi_{ik}\delta^{km}E_{m+n}$
with
$g_{ij}=h_{ij}+\varphi_{ik}\delta^{km}\varphi_{jl}\delta^{lr}h_{\left(m+n\right)\left(r+n\right)}+\varphi_{jl}\delta^{lr}h_{i\left(r+n\right)}+\varphi_{ik}\delta^{km}h_{\left(m+n\right)j}.$
Recalling (2) and (3) in Proposition 4.4 we may assume that the expression of
$h$ in these coordinates is very close to $\delta_{ij}$ and that
$D^{2}\varphi$ is not large.
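As a sanity check, when $h$ is exactly the euclidean metric $\delta_{0}$, the cross terms vanish and this reduces to the familiar induced metric of a gradient graph,
$g_{ij}=\delta_{ij}+\varphi_{ik}\delta^{km}\varphi_{jm},\qquad\text{that is, }g=I+\left(D^{2}\varphi\right)^{2},$
which is positive definite and close to $\delta_{0}$ whenever $D^{2}\varphi$ is small.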
Differentiating the components of the induced metric gives
(4.9) $\partial_{x^{p}}g_{ij}=\text{function of }\left(x,D\varphi\right)\\\
+\text{Terms involving up to three factors of }D^{2}\varphi\text{ but no
higher}\\\ +\text{Terms involving up to two factors of }D^{2}\varphi\text{ and
one factor of }D^{3}\varphi.$
###### Lemma 4.7.
In a Darboux chart, using the coordinate basis (4.8) for the tangent space and
$\left\\{Je_{l}\right\\}$ for the normal space, the covariant derivatives of
the second fundamental form and of the potential $\varphi$ are related by
(4.10) $\nabla^{k-1}A=D^{k+2}\varphi+S_{k}$
where $S_{1}$ is a smooth controlled function depending on the chart, $h$ and
$D^{2}\varphi$, $S_{2}$ depends also on $D^{3}\varphi$ and for $k\geq 3:$
1. (1)
Each $S_{k}$ is a sum of multilinear forms of
$D^{4}\varphi,...,D^{k+1}\varphi$
2. (2)
The coefficients of these forms are functions of
$\left(x,D\varphi,D^{2}\varphi,D^{3}\varphi\right)$
3. (3)
The total sum of the derivatives of $D^{3}\varphi$ that occur in a given term
is no more than $k-2.$
(Note that (4.10) is interpreted as literal equality of the symbols in the
choice of basis, not simply “up to a smooth function”)
###### Proof.
Starting with $k=1$, differentiate in the ambient space
$\tilde{\nabla}_{e_{i}}e_{j}=\tilde{\Gamma}_{ji}^{\beta}E_{\beta}+\varphi_{jmi}\delta^{mk}E_{n+k}+\varphi_{jm}\delta^{mk}\tilde{\Gamma}_{n+k,i}^{\beta}E_{\beta}+\varphi_{jm}\varphi_{ri}\delta^{mk}\delta^{rl}\tilde{\Gamma}_{n+k,n+l}^{\beta}E_{\beta}.$
Using $e_{j}$ and $Je_{k}$ as frame and normal frame,
$\displaystyle A_{ijl}$
$\displaystyle=\langle\tilde{\nabla}_{e_{i}}e_{j},Je_{l}\rangle$
$\displaystyle=\omega(e_{l},\varphi_{jmi}\delta^{mk}E_{n+k})+\omega(e_{l},\tilde{\Gamma}_{ji}^{\beta}E_{\beta}+\varphi_{jm}\delta^{mk}\tilde{\Gamma}_{n+k,i}^{\beta}E_{\beta}+\varphi_{jm}\varphi_{ri}\delta^{mk}\delta^{rl}\tilde{\Gamma}_{n+k,n+l}^{\beta}E_{\beta})$
$\displaystyle=\varphi_{jli}+S_{1}$
where $S_{1}$ is a smooth function involving $D^{2}\varphi$ and the ambient
Christoffel symbols at $\left(x,D\varphi\right)$ and recalling
$\omega(e_{l},E_{\beta})=\omega(E_{l}+\varphi_{lk}\delta^{jk}E_{j+n},E_{\beta})=\left\\{\begin{array}[c]{cl}\delta_{sl},&\
\ \ \text{ if }\beta=s+n\text{ for }s\in\left\\{1,...,n\right\\}\\\
-\varphi_{sl},&\,\,\,\,\text{ if
}\beta=s\in\left\\{1,...,n\right\\}.\end{array}\right.$
Now for $k=2$ ($\nabla$ denotes covariant derivatives on the submanifold):
$\left(\nabla
A\right)_{pijl}=\partial_{p}A_{ijl}-A\left(\nabla_{e_{p}}e_{i},e_{j},e_{l}\right)-A\left(e_{i},\nabla_{e_{p}}e_{j},e_{l}\right)-A\left(e_{i},e_{j},\nabla_{e_{p}}e_{l}\right).$
Now we can compute the Christoffel symbols with respect to the induced metric
$g$:
$\nabla_{e_{p}}e_{i}=1\ast D^{3}\varphi\text{ + lower order }$
thus
$\displaystyle\left(\nabla A\right)_{pijl}$
$\displaystyle=\varphi_{jlip}+\partial_{x^{p}}S_{1}-A\ast\left(D^{3}\varphi\right)+\text{lower
order}$ $\displaystyle=\varphi_{jlip}+D^{3}\varphi\ast D^{3}\varphi+1\ast
D^{3}\varphi+\text{ smooth in other arguments.}$
Here and in the sequel, we use $A\ast B$ to denote a predictable linear
combination of terms from the tensors $A$ and $B$, and $1\ast T$ to denote a
predictable linear combination of $T.$ Now
$\displaystyle\nabla^{2}A$ $\displaystyle=D^{5}\varphi+D^{4}\varphi\ast
D^{3}\varphi+\text{lower order}$ $\displaystyle\nabla^{3}A$
$\displaystyle=D^{6}\varphi+D^{5}\varphi\ast D^{3}\varphi+D^{4}\varphi\ast
D^{4}\varphi\text{ + lower order}$
and so forth. The result follows by inductively applying the product rule and
noting
$\nabla^{k-1}A=D(\nabla^{k-2}A)+D^{3}\varphi\ast\nabla^{k-2}A\text{ + lower
order}$
by the formula for covariant derivative. ∎
### 4.3. Integral inequalities
We will use $\|\cdot\|_{\infty}$ for the supremum norm in the euclidean metric
$\delta_{0}$ and
$\left|D^{m}\varphi\right|_{g}^{2}=g^{i_{1}j_{1}}g^{i_{2}j_{2}}...g^{i_{m}j_{m}}\varphi_{i_{1}...i_{m}}\varphi_{j_{1}...j_{m}}$
to denote the norm squared with respect to $g$ for the locally defined
$m$-tensor $D^{m}\varphi$ instead of the higher covariant derivative tensor
$\nabla^{m}\varphi$. We find that this makes computations on the chosen
Darboux chart more transparent. Note that since $g$ is close to $\delta_{0}$
on the chart (with estimates on errors)
$\frac{\left|D^{m}\varphi\right|_{g}^{2}}{\left|D^{m}\varphi\right|_{\delta_{0}}^{2}}\in(1-c_{n},1+c_{n})$
and
$(1-c_{n})dx\leq dV_{g}\leq\left(1+c_{n}\right)dx$
for some small $c_{n}$. Thus we may regard as equivalent estimates on
integrals against $dx$ and integrals against $dV_{g}$, provided the quantities
we are integrating are nonnegative. However, if the quantity being integrated
is not known to be non-negative, we have to be precise in performing
estimates.
All estimates below implicitly depend on $c_{n},$ but $c_{n}$ need not be
tracked closely: it need not be close to zero.
#### 4.3.1. Interpolation inequalities
We use Gagliardo-Nirenberg interpolation to derive integral inequalities that
allow us to integrate multilinear combinations of higher derivatives of
$\varphi$. For simplicity of notation, we will use $C$ for uniform constants
with dependence indicated in its arguments. For our application, we give
interpolations for different ranges of indices.
###### Lemma 4.8.
Let $\xi$ be a smooth compactly supported vector-valued function on
$\mathbb{R}^{n}$.
1. (1)
If $j_{1}+j_{2}+j_{3}+...+j_{q}=m,$ then
$\int\left|D^{j_{1}}\xi\ast D^{j_{2}}\xi...\ast D^{j_{q}}\xi\right|^{2}\leq
C\left\|\xi\right\|_{\infty}^{2q-2}\int\left|D^{m}\xi\right|^{2}.$
2. (2)
If $j_{1}+j_{2}+j_{3}+...+j_{q}+j^{\ast}=2\tilde{m}$ where $j_{q}<\tilde{m}$
and $j^{\ast}\geq 0$, then
$\int\left|D^{j_{1}}\xi\ast D^{j_{2}}\xi...\ast D^{j_{q}}\xi\right|\leq
C\left\|\xi\right\|_{\infty}^{q-\left(2-j^{\ast}/2\tilde{m}\right)}\left(\frac{2\tilde{m}-j^{\ast}}{2\tilde{m}}\int\left|D^{\tilde{m}}\xi\right|^{2}+\frac{j^{\ast}}{2\tilde{m}}\left\|\chi_{\text{supp}\left(\xi\right)}\right\|_{p^{\ast}}^{2\tilde{m}/j^{\ast}}\right).$
3. (3)
If $j_{1}+...+j_{r}=2\bar{m}+1$ and all $j_{i}\leq\bar{m},$ then for
$\varepsilon>0$
$\int\left|D^{j_{1}}\xi\ast D^{j_{2}}\xi...\ast
D^{j_{r}}\xi\right|\leq\varepsilon\int\left|D^{\bar{m}+1}\xi\right|^{2}+C(\varepsilon,\bar{m},\|\xi\|_{\infty})\,\left(\int\left|D^{\bar{m}}\xi\right|^{2}+\int\chi_{\text{supp}\left(\xi\right)}\right).$
###### Proof.
For (1), use $p_{i}=\frac{m}{j_{i}}$ and apply the generalized Hölder’s
inequality
$\int\left|D^{j_{1}}\xi\ast D^{j_{2}}\xi...\ast
D^{j_{q}}\xi\right|^{2}\leq\left\|D^{j_{1}}\xi\right\|_{2p_{1}}^{2}...\left\|D^{j_{q}}\xi\right\|_{2p_{q}}^{2}$
and then use the Gagliardo-Nirenberg interpolation inequality (cf. [FFRS21,
Theorem 1.1]) with $\theta_{i}=\frac{j_{i}}{m}$.
For (2), taking $p_{i}=\frac{2\tilde{m}}{j_{i}}$ and
$p^{\ast}=\frac{2\tilde{m}}{j^{\ast}}$ if $j^{\ast}>0$, then
$\int\left|D^{j_{1}}\xi\ast D^{j_{2}}\xi...\ast
D^{j_{q}}\xi\right|\leq\left\|D^{j_{1}}\xi\right\|_{p_{1}}...\left\|D^{j_{q}}\xi\right\|_{p_{q}}\left\|\chi_{\text{supp}\left(\xi\right)}\right\|_{p^{\ast}}.$
Now apply the Gagliardo-Nirenberg interpolation inequality with
$\theta_{i}=\frac{j_{i}}{\tilde{m}}$, applying Young’s inequality if
$j^{\ast}>0$.
For (3), we may split, with $a\leq\bar{m}\leq\bar{m}+1\leq b$
$\displaystyle j_{1}+...+j_{s}$ $\displaystyle=a$ $\displaystyle
j_{s+1}+...+j_{r}$ $\displaystyle=b.$
Now for some $p,q$ conjugates to be determined, let
$p_{i}=\left\\{\begin{array}[c]{cl}\frac{ap}{j_{i}},&\ \ \ \text{ if
}i\in\left\\{1,...,s\right\\}\\\ \frac{bq}{j_{i}},&\,\,\,\,\text{ if
}i\in\left\\{s+1,...,r\right\\}.\end{array}\right.$
Apply the generalized Hölder’s inequality
(4.11) $\int\left|D^{j_{1}}\xi\ast D^{j_{2}}\xi...\ast
D^{j_{r}}\xi\right|\leq\left\|D^{j_{1}}\xi\right\|_{p_{1}}...\left\|D^{j_{q}}\xi\right\|_{p_{r}}.$
We have from the Gagliardo-Nirenberg interpolation inequality
$\left\|D^{j_{i}}\xi\right\|_{p_{i}}\leq\left\\{\begin{array}[c]{cl}\left\|D^{\bar{m}}\xi\right\|_{\frac{ap}{\bar{m}}}^{\frac{j_{i}}{\bar{m}}}\left\|\xi\right\|_{\infty}^{1-\frac{j_{i}}{\bar{m}}}&\
\ \ \text{ if }i\in\left\\{1,...,s\right\\}\text{ with
}\theta_{i}=\frac{j_{i}}{\bar{m}}\\\
\left\|D^{\bar{m}+1}\xi\right\|_{\frac{bq}{\bar{m}+1}}^{\frac{j_{i}}{\bar{m}+1}}\left\|\xi\right\|_{\infty}^{1-\frac{j_{i}}{\bar{m}+1}},&\,\,\,\,\text{
if }i\in\left\\{s+1,...,r\right\\}\text{ with
}\theta_{i}=\frac{j_{i}}{\bar{m}+1}.\end{array}\right.$
Taking the product and then applying Young’s inequality (for the same $p,q$)
$\displaystyle\left\|D^{j_{1}}\xi\right\|_{p_{1}}...\left\|D^{j_{q}}\xi\right\|_{p_{r}}$
$\displaystyle\leq\left\|D^{\bar{m}}\xi\right\|_{\frac{ap}{\bar{m}}}^{\frac{a}{\bar{m}}}\left\|D^{\bar{m}+1}\xi\right\|_{\frac{bq}{\bar{m}+1}}^{\frac{b}{\bar{m}+1}}\left\|\xi\right\|_{\infty}^{r-\frac{a}{\bar{m}}-\frac{b}{\bar{m}+1}}$
$\displaystyle\leq
C(\varepsilon,p,q,r,\left\|\xi\right\|_{\infty})\left\|D^{\bar{m}}\xi\right\|_{\frac{ap}{\bar{m}}}^{\frac{ap}{\bar{m}}}+\varepsilon\left\|D^{\bar{m}+1}\xi\right\|_{\frac{bq}{\bar{m}+1}}^{\frac{bq}{\bar{m}+1}}$
(4.12)
$\displaystyle=C(\varepsilon,p,q,r,\left\|\xi\right\|_{\infty})\left\|D^{\bar{m}}\xi\right\|_{2\frac{a(\bar{m}+1)}{\bar{m}(a+1)}}^{2\frac{a(\bar{m}+1)}{\bar{m}(a+1)}}+\varepsilon\left\|D^{\bar{m}+1}\xi\right\|_{2}^{2}$
where in the last line we have made the choices
$q=\frac{2(\bar{m}+1)}{b},\text{ \ }p=\frac{2(\bar{m}+1)}{a+1}.$
Since $1\leq a\leq\bar{m}$ we have
$\frac{a(\bar{m}+1)}{\bar{m}(a+1)}\leq 1$
and can use Hölder’s and Young’s inequalities to get
(4.13)
$C(\varepsilon,p,q)\left\|D^{\bar{m}}\xi\right\|_{2\frac{a(\bar{m}+1)}{\bar{m}(a+1)}}^{2\frac{a(\bar{m}+1)}{\bar{m}(a+1)}}\leq
C(a,\bar{m})\left(\int\left|D^{\bar{m}}\xi\right|^{2}+\int\chi_{\text{supp}\left(\xi\right)}\right)$
omitting the last term in the case $a=\bar{m}$. Chaining together (4.11, 4.12,
4.13) gives the result.
∎
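A representative instance of (1), taking $q=2$, $j_{1}=1$ and $j_{2}=m-1$, is
$\int\left|D\xi\ast D^{m-1}\xi\right|^{2}\leq C\left\|\xi\right\|_{\infty}^{2}\int\left|D^{m}\xi\right|^{2},$
obtained from Hölder with $p_{1}=m$, $p_{2}=\frac{m}{m-1}$ and Gagliardo-Nirenberg with $\theta_{1}=\frac{1}{m}$, $\theta_{2}=\frac{m-1}{m}$.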
###### Lemma 4.9.
Let $f\in C^{\infty}(B_{4})$ and $r_{1}<r_{2}\leq 4$.
1. (1)
If $j_{1}+...+j_{s}=m,$ then
$\int_{B_{r_{1}}}\left|D^{j_{1}}f\cdots D^{j_{s}}f\right|^{2}\leq
C(m,r_{1},r_{2})\,\|f\|_{\infty}^{2s-2}\sum_{j=0}^{m}\int_{B_{r_{2}}}\left|D^{j}f\right|^{2}.$
2. (2)
If $j_{1}+j_{2}+j_{3}+...+j_{s}+j^{\ast}=2\tilde{m}$ where $j_{s}<\tilde{m}$
and $j^{\ast}\geq 0,$ then
$\int_{B_{r_{1}}}\left|D^{j_{1}}f\cdots D^{j_{s}}f\right|\leq
C(\tilde{m},r_{1},r_{2})\,\left\|f\right\|_{\infty}^{s-\left(2-j^{\ast}/2\tilde{m}\right)}\left(\frac{2\tilde{m}-j^{\ast}}{2\tilde{m}}\sum_{j=0}^{\tilde{m}}\int_{B_{r_{2}}}\left|D^{j}f\right|^{2}+\frac{j^{\ast}}{2\tilde{m}}\left\|\chi_{\text{supp}\left(f\right)}\right\|_{p^{\ast}}^{2\tilde{m}/j^{\ast}}\right).$
3. (3)
If $j_{1}+...+j_{r}=2\bar{m}+1$ and all $j_{i}\leq\bar{m},$ then for
$\varepsilon>0$
$\int_{B_{r_{1}}}\left|D^{j_{1}}f\ast D^{j_{2}}f...\ast
D^{j_{r}}f\right|\leq\varepsilon\int_{B_{r_{2}}}\left|D^{\bar{m}+1}f\right|^{2}+C(\varepsilon,r_{1},r_{2},\bar{m},\|f\|_{\infty})\,\left(\sum_{j=0}^{\bar{m}}\int_{B_{r_{2}}}\left|D^{j}f\right|^{2}+1\right).$
###### Proof.
Set $\tilde{\eta}\in C_{0}^{\infty}(B_{3})$ that is 1 on $B_{r_{1}}$, 0 on
$B_{3}(0)\backslash B_{r_{2}}(0)$, $0\leq\tilde{\eta}\leq 1$ and
$\|\tilde{\eta}\|_{C^{m}}\leq C(m)$. By Lemma 4.8 (used in the second line below),
$\displaystyle\int_{B_{r_{1}}}\left|D^{i_{1}}f\cdots D^{i_{s}}f\right|^{2}$
$\displaystyle\leq\int_{B_{r_{2}}}\left|D^{i_{1}}(\tilde{\eta}f)\cdots
D^{i_{s}}(\tilde{\eta}f)\right|^{2}$
$\displaystyle\leq\|\tilde{\eta}f\|_{\infty}^{2s-2}\int_{B_{r_{2}}}\left|D^{m}(\tilde{\eta}f)\right|^{2}$
$\displaystyle\leq
C(m,r_{1},r_{2},\|\tilde{\eta}\|_{C^{m}})\,\|f\|_{\infty}^{2s-2}\sum_{j=0}^{m}\int_{B_{r_{2}}}\left|D^{j}f\right|^{2}.$
The second and third inequalities in the statement of the Lemma follow by
applying the previous Lemma in a similar way. ∎
The following is simple but will be used repeatedly, so we explicitly note it.
###### Lemma 4.10.
Suppose that $r_{1}<r_{2}.$ Then for $\tilde{\varepsilon}>0$
$\int_{B_{r_{1}}}\left|D^{k+3}\varphi\right|^{2}\leq\tilde{\varepsilon}\int_{B_{r_{2}}}\left|D^{k+4}\varphi\right|^{2}+C(\tilde{\varepsilon},r_{1},r_{2})\int_{B_{r_{2}}}\left|D^{k+2}\varphi\right|^{2}.$
###### Proof.
For some $\tilde{\eta}=1$ on $B_{r_{1}}$ supported inside $B_{r_{2}}$
$\displaystyle\int_{B_{r_{2}}}\left|D^{k+3}\varphi\right|^{2}\tilde{\eta}^{2}$
$\displaystyle=-\int_{B_{r_{2}}}\
D^{k+2}\varphi\ast\left(\tilde{\eta}^{2}D^{k+4}\varphi+2\tilde{\eta}D\tilde{\eta}D^{k+3}\varphi\right)$
$\displaystyle\leq\tilde{\varepsilon}\int_{B_{r_{2}}}\tilde{\eta}^{2}\left|D^{k+4}\varphi\right|^{2}+\frac{1}{\tilde{\varepsilon}}C\int_{B_{r_{2}}}\tilde{\eta}^{2}\left|D^{k+2}\varphi\right|^{2}$
$\displaystyle+\frac{1}{2}\int_{B_{r_{2}}}\tilde{\eta}^{2}\left|D^{k+3}\varphi\right|^{2}+C\int_{B_{r_{2}}}\left|D\tilde{\eta}\right|^{2}\left|D^{k+2}\varphi\right|^{2}.$
Thus
$\int_{B_{r_{1}}}\left|D^{k+3}\varphi\right|^{2}\leq
2\tilde{\varepsilon}\int_{B_{r_{2}}}\tilde{\eta}^{2}\left|D^{k+4}\varphi\right|^{2}+\frac{2}{\tilde{\varepsilon}}C\int_{B_{r_{2}}}\left(\tilde{\eta}^{2}+\left|D\tilde{\eta}\right|^{2}\right)\left|D^{k+2}\varphi\right|^{2}.$
∎
#### 4.3.2. Evolution inequalities
###### Proposition 4.11.
Let $\rho_{\alpha}^{2}\in C_{0}^{\infty}(B_{2}(0))$ be defined by (4.6). Working
in Darboux charts, for $\varepsilon>0$ we have
(4.14)
$\displaystyle\int_{B_{2}}\frac{d}{dt}\left(\left|D^{k+2}\varphi\right|_{g}^{2}dV_{g}\right)\rho_{\alpha}^{2}$
$\displaystyle\leq-2\int_{B_{2}}\left|D^{k+4}\varphi\right|_{g}^{2}\rho_{\alpha}^{2}dV_{g}+\varepsilon\int_{B_{3}}|D^{k+4}\varphi|^{2}dV_{g}$
$\displaystyle+C(k,\varepsilon,\left\|\varphi\right\|_{C^{3}})\left(\sum_{m=3}^{k+2}\int_{B_{3}}\left|D^{m}\varphi\right|_{g}^{2}dV_{g}+1\right).$
###### Proof.
In a Darboux chart, express $dV_{g}=V_{g}dx$. We have
$\displaystyle\frac{d}{dt}\left(\left|D^{k+2}\varphi\right|_{g}^{2}V_{g}\right)=\,$
$\displaystyle
2\left(\partial_{t}\varphi_{i_{1}...i_{k+2}}\right)\left(\varphi_{j_{1}...j_{k+2}}g^{i_{1}j_{1}}g^{i_{2}j_{2}}...g^{i_{k+2}j_{k+2}}V_{g}\right)$
$\displaystyle+\varphi_{i_{1}...i_{k+2}}\varphi_{j_{1}...j_{k+2}}\partial_{t}\left(g^{i_{1}j_{1}}g^{i_{2}j_{2}}...g^{i_{k+2}j_{k+2}}V_{g}\right)$
(4.15) $\displaystyle=\,$
$\displaystyle-2(g^{kl}g^{pq}\varphi_{klpq}+G)_{i_{1}...i_{k+2}}\left(\varphi_{j_{1}...j_{k+2}}g^{i_{1}j_{1}}...g^{i_{k+2}j_{k+2}}V_{g}\right)$
$\displaystyle+\varphi_{i_{1}...i_{k+2}}\varphi_{j_{1}...j_{k+2}}\partial_{t}\left(g^{i_{1}j_{1}}g^{i_{2}j_{2}}...g^{i_{k+2}j_{k+2}}V_{g}\right).$
We count the highest order of derivatives of $\varphi$ in $x^{1},...,x^{n}$
for each term below:
1. (1)
$g,g^{-1},V_{g}$ are of 2nd order
2. (2)
$\partial_{t}g,\partial_{t}g^{-1}$ and
$\partial_{t}V_{g}=V_{g}g^{ij}\partial_{t}g_{ij}$ are of 6th order
3. (3)
$(g^{kl}g^{pq}\varphi_{klpq})_{i_{1}...i_{k+2}}$ is of $(k+6)$th order and
$G_{i_{1}...i_{k+2}}$ is of $(k+5)$th order.
For the sake of notation, we will use
1. (1)
$P=P(x,D\varphi,D^{2}\varphi,D^{3}\varphi)$,
2. (2)
$Q=Q(x,D\varphi,...,D^{k+3}\varphi)$, to be described in (4.19) and below.
3. (3)
Bounded second order quantities are absorbed and not explicitly stated unless
necessary (in particular, $dV_{g}$ will be dropped when not being
differentiated.)
Multiplying by $\rho_{\alpha}^{2}$ to localize in a chart and integrating over
$L$, we may then perform the following.
(A) Integrating the first term in (4.15) by parts twice leads to
(4.16)
$\int_{B_{2}}(-2g^{kl}g^{pq}\varphi_{klpq})_{i_{1}...i_{k+2}}\left(\varphi_{j_{1}...j_{k+2}}g^{i_{1}j_{1}}...g^{i_{k+2}j_{k+2}}V_{g}\right)\rho_{\alpha}^{2}=-2\int_{B_{2}}\left|D^{k+4}\varphi\right|_{g}^{2}\rho_{\alpha}^{2}\,dV_{g}+I$
where
$I=\int_{B_{2}}\Big(D^{k+4}\varphi\ast\left(D^{4}\varphi+P^{2}\right)\ast
D^{k+2}\varphi\,\rho_{\alpha}^{2}+D^{k+4}\varphi\ast D^{k+3}\varphi\ast\left(P\ast\rho_{\alpha}^{2}+D\rho_{\alpha}^{2}\right)+D^{k+4}\varphi\ast
D^{k+2}\varphi\ast\left(P\ast D\rho_{\alpha}^{2}+D^{2}\rho_{\alpha}^{2}\right)\Big).$
To deal with the first term in $I$:
(4.17) $\int_{B_{2}}D^{k+4}\varphi\ast\left(D^{4}\varphi+P^{2}\right)\ast
D^{k+2}\varphi\leq\varepsilon\int_{B_{2}}\rho_{\alpha}^{2}|D^{k+4}\varphi|^{2}+C(\varepsilon)\int_{B_{2}}\rho_{\alpha}^{2}\left(\left|D^{k+2}\varphi\ast\left(D^{4}\varphi+P^{2}\right)\right|^{2}\right).$
Lemma 4.9 with $m=k$, $f=D^{3}\varphi$ and $r_{2}=5/2$ yields
$\int_{B_{2}}\left|D^{k+2}\varphi\ast D^{4}\varphi\right|^{2}\leq
C(D^{3}\varphi)\sum_{j=0}^{m}\int_{B_{5/2}}\left|D^{j+3}\varphi\right|^{2}.$
Applying Lemma 4.10 to the highest order term provides a bound of (4.17) by
the positive terms in (4.14) noting also that
$\int_{B_{2}}\left|D^{k+2}\varphi\ast P^{2}\right|^{2}\leq
C(D^{3}\varphi)\int_{B_{2}}\left|D^{k+2}\varphi\right|^{2}.$
Next
(4.18)
$\int_{B_{2}}\left|(P\ast\rho_{\alpha}^{2}+D\rho_{\alpha}^{2})D^{k+4}\varphi\ast
D^{k+3}\varphi\right|\leq\varepsilon\int_{B_{2}}\rho_{\alpha}^{2}|D^{k+4}\varphi|^{2}+\frac{1}{\varepsilon}C(P)\int_{B_{2}}\left|D^{k+3}\varphi\right|^{2}$
recalling that $D\rho_{\alpha}$ is bounded by uniform constants and
$D^{2}\varphi.$ By Lemma 4.10 (choosing $\tilde{\varepsilon}\approx
c\varepsilon^{2}$), (4.18) is bounded by
$\varepsilon\int_{B_{2}}\rho_{\alpha}^{2}|D^{k+4}\varphi|^{2}+\varepsilon\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}+\frac{1}{\varepsilon^{3}}C\int_{B_{3}}\left|D^{k+2}\varphi\right|^{2}$
which is of the correct form. Finally for $I$, using (4.7),
$\int_{B_{2}}\left|D^{k+4}\varphi\ast D^{k+2}\varphi\ast\left(P\ast
D\rho_{\alpha}^{2}+D^{2}\rho_{\alpha}^{2}\right)\right|\leq\varepsilon\int_{B_{2}}\rho_{\alpha}^{2}|D^{k+4}\varphi|^{2}+C(\varepsilon,P)\int_{B_{2}}\left|D^{k+2}\varphi\right|^{2}.$
(B) Note that when applying the product rule successively to $G$, we will get
1. (1)
A single highest order term which is linear in the highest order with
coefficients involving at most order $D^{3}\varphi.$
2. (2)
Second-to-highest order terms that are linear in the second highest order and
may have a factor of $D^{4}\varphi$, with all other dependence of lower order.
3. (3)
Terms of lower order, which could be multilinearly dependent on various lower
orders.
Thus
(4.19) $G_{i_{1}...i_{k+1}}=1\ast D^{k+4}\varphi+Q,$
with $Q$ having highest order $D^{k+3}\varphi.$ This is observed by iterating
the following expansion: Using $DG$ to denote a derivative in $x$ of the
composition
$x\mapsto G(x,D\varphi(x),D^{2}\varphi(x),D^{3}\varphi(x))$
and $\bar{D}G$ to denote derivatives in all 4 arguments of $G,$ we have
$DG=\bar{D}G\ast\left(D^{4}\varphi+D^{3}\varphi+D^{2}\varphi+\phi\right)$
where $\phi$ is the term generated by the explicit $x$-derivative of $G$. Continuing
$\displaystyle D^{2}G$
$\displaystyle=\bar{D}G\ast\left(D^{5}\varphi+D^{4}\varphi+D^{3}\varphi+D\phi\ast\left(D^{4}\varphi+D^{3}\varphi+D^{2}\varphi\right)\right)$
$\displaystyle+\bar{D}^{2}G\ast\left(D^{4}\varphi+D^{3}\varphi+D^{2}\varphi+\phi\right)\ast\left(D^{4}\varphi+D^{3}\varphi+D^{2}\varphi+\phi\right)$
$\displaystyle...$ (4.20) $\displaystyle D^{k+1}G$
$\displaystyle=\bar{D}G\ast\left(D^{k+4}\varphi+D^{k+3}\varphi+D^{k+2}\varphi+...\right)$
$\displaystyle+\bar{D}^{2}G\ast\left(D^{k+3}\varphi+D^{k+2}\varphi+D^{k+1}\varphi+...\right)\ast\left(D^{4}\varphi+D^{3}\varphi+D^{2}\varphi+\phi\right)$
$\displaystyle+\bar{D}^{3}G\ast\left(D^{k+2}\varphi+...\right)\ast\left\\{\left(D^{5}\varphi+...\right)+\left(D^{4}\varphi+...\right)\ast\left(D^{4}\varphi+...\right)\right\\}$
$\displaystyle...$
Now integrate by parts:
$\displaystyle\int_{B_{2}}$ $\displaystyle
G_{i_{1}...i_{k+2}}\left(\varphi_{j_{1}...j_{k+2}}g^{i_{1}j_{1}}...g^{i_{k+2}j_{k+2}}V_{g}\right)\rho_{\alpha}^{2}=-\int_{B_{2}}G_{i_{1}...i_{k+1}}\partial_{{i_{k+2}}}\left[\rho_{\alpha}^{2}\left(\varphi_{j_{1}...j_{k+2}}g^{i_{1}j_{1}}...g^{i_{k+2}j_{k+2}}V_{g}\right)\right]$
$\displaystyle=$ $\displaystyle\int_{B_{2}}\left(D^{k+4}\varphi\ast
D^{k+3}\varphi\right)\rho_{\alpha}^{2}+\int_{B_{2}}(D\rho_{\alpha}^{2}+\rho_{\alpha}^{2}P)D^{k+4}\varphi\ast
D^{k+2}\varphi$ $\displaystyle+\int_{B_{2}}\left(Q\ast
D^{k+3}\varphi\right)\rho_{\alpha}^{2}+\int_{B_{2}}(D\rho_{\alpha}^{2}+\rho_{\alpha}^{2}P)Q\ast
D^{k+2}\varphi.$
Using Peter-Paul’s inequality, we split into two types of terms:
$\varepsilon\int_{B_{2}}\left|D^{k+4}\varphi\right|^{2}\,\rho_{\alpha}^{2}+\varepsilon\int_{B_{2}}Q^{2}\rho_{\alpha}^{2}$
and
$C(\varepsilon,D\rho_{\alpha})\int_{B_{2}}\left(\left|D^{k+3}\varphi\right|^{2}+\left|D^{k+2}\varphi\right|^{2}\right)(P^{2}+P+1)\rho_{\alpha}^{2}.$
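Here, as elsewhere, Peter-Paul refers to Young's inequality with a parameter: for $a,b\geq 0$ and $\varepsilon>0$,
$ab\leq\varepsilon a^{2}+\frac{1}{4\varepsilon}b^{2},$
applied above with $a=\left|D^{k+4}\varphi\right|\rho_{\alpha}$ or $a=\left|Q\right|\rho_{\alpha}$ and $b$ the complementary factor of each product.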
First,
$\int_{B_{2}}\left|D^{k+3}\varphi\right|^{2}\left(P^{2}+P+1\right)\rho_{\alpha}^{2}\leq
C\left(D^{3}\varphi\right)\int_{B_{2}}\left|D^{k+3}\varphi\right|^{2}$
is bounded by the argument in Lemma 4.10.
One can prove by induction, observing (4.19) and (4.20), that for each term in
$Q,$ the total number of derivatives of $D^{3}\varphi$ that arise will sum up
to no more than $k+1$ (e.g. $D^{k-1}\varphi\ast D^{6}\varphi\ast
D^{5}\varphi=D^{3+(k-4)}\varphi\ast D^{3+3}\varphi\ast D^{3+2}\varphi$, here
$k-4+3+2=k+1.$) Applying Lemma 4.9 for $f=D^{3}\varphi$ and $m=k+1$ to each of
the squared terms gives
$\int_{B_{2}}Q^{2}dV_{g}\leq
C\left(\|D^{3}\varphi\|\right)\sum_{i=0}^{k+1}\int_{B_{3}}|D^{i+3}\varphi|^{2}$
thus $\varepsilon\int_{B_{2}}Q^{2}dV_{g}$ has the correct bound, by Lemma
4.10.
Finally, we bound the last term in (4.15):
$\int\varphi_{i_{1}...i_{k+2}}\varphi_{j_{1}...j_{k+2}}\partial_{t}\left(g^{i_{1}j_{1}}g^{i_{2}j_{2}}...g^{i_{k+2}j_{k+2}}V_{g}\right)\rho_{\alpha}^{2}=\int\left(D^{k+2}\varphi\ast
D^{k+2}\varphi\ast D^{6}\varphi\right)\rho_{\alpha}^{2}.$
Apply Lemma 4.9 for $f=D^{3}\varphi$ and $\tilde{m}=k$:
$\int_{B_{2}}\left(D^{k+2}\varphi\ast D^{k+2}\varphi\ast
D^{6}\varphi\right)\rho_{\alpha}^{2}\leq\varepsilon\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}+C\left(\varepsilon,k,D^{3}\varphi\right)\sum_{j=3}^{k+3}\int_{B_{3}}\left|D^{j}\varphi\right|^{2}.$
We may then sweep away the $\int_{B_{3}}\left|D^{k+3}\varphi\right|^{2}$ term
by Lemma 4.10 (choosing $\tilde{\varepsilon}\approx c\varepsilon^{2}$) to
conclude the proof. ∎
###### Proposition 4.12.
Let $\rho_{\alpha}^{2}\in C_{0}^{\infty}(B_{2}(0))$. Considering the
decomposition in Lemma 4.7, for $\varepsilon>0$ we have
$\displaystyle\int_{B_{2}}\frac{d}{dt}\left(\left(\left|S_{k}\right|_{g}^{2}+2\langle
D^{k+2}\varphi,S_{k}\rangle_{g}\right)dV_{g}\right)\rho_{\alpha}^{2}$
$\displaystyle\leq
C(k,\varepsilon,D^{3}\varphi)\,\left(\sum_{j=3}^{k+2}\int_{B_{3}}\left|D^{j}\varphi\right|^{2}+1\right)$
$\displaystyle+\varepsilon\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}.$
###### Proof.
Recall that
(4.21) $S_{k}=\left(1+D^{3}\varphi\right)\ast(D^{k+1}\varphi+D^{k}\varphi\ast
D^{4}\varphi+D^{k-1}\varphi\ast D^{4}\varphi\ast
D^{4}\varphi+D^{k-1}\varphi\ast D^{5}\varphi+...)$
Differentiating with respect to $t$ generates product rule expansions with 4
orders of derivatives added to a factor in each term, that is (modulo lower
order geometrically controlled values like $V_{g})$
(4.22) $\displaystyle\frac{d}{dt}\left|S_{k}\right|_{g}^{2}dV_{g}$
$\displaystyle=\left(1+D^{3}\varphi\right)\ast\left(D^{k+5}\varphi+D^{k+4}\varphi\ast
D^{4}\varphi+D^{k}\varphi\ast D^{8}\varphi+...\right)\ast S_{k}$
$\displaystyle+\left(D^{6}\varphi+D^{7}\varphi\right)\ast\left(D^{k+1}\varphi+D^{k}\varphi\ast
D^{4}\varphi+D^{k-1}\varphi\ast D^{4}\varphi\ast D^{4}\varphi+...\right)\ast
S_{k}.$
Integrate by parts:
$\displaystyle\int_{B_{2}}$
$\displaystyle\left(D^{k+5}\varphi\ast\left(1+D^{3}\varphi\right)\ast
S_{k}\right)\rho_{\alpha}^{2}=\int_{B_{2}}D^{k+4}\varphi\ast\left(D^{4}\varphi\ast
S_{k}+\left(1+D^{3}\varphi\right)\ast DS_{k}\right)\rho_{\alpha}^{2}$
$\displaystyle+\int_{B_{2}}\left(D^{k+4}\varphi\ast\left(1+D^{3}\varphi\right)\ast
S_{k}\right)\ast D\rho_{\alpha}^{2}$
$\displaystyle\leq\varepsilon\int_{B_{2}}\left|D^{k+4}\varphi\right|^{2}\rho_{\alpha}^{2}+C(\varepsilon)\int_{B_{2}}\left(\left|D^{4}\varphi\ast
S_{k}\right|^{2}+\left|D^{3}\varphi\ast
DS_{k}\right|^{2}\right)\rho_{\alpha}^{2}$
$\displaystyle+C(\varepsilon)\int_{B_{2}}\left|\left(1+D^{3}\varphi\right)\ast
S_{k}\right|^{2}\left|D\rho_{\alpha}\right|^{2}.$
Now apply Lemma 4.9 with $f=D^{3}\varphi$ and $m=k-1$
(4.23) $\int_{B_{2}}\left(\left|D^{4}\varphi\ast
S_{k}\right|^{2}+\left|D^{3}\varphi\ast
DS_{k}\right|^{2}\right)\rho_{\alpha}^{2}\leq
C(k,D^{3}\varphi)\,\sum_{j=3}^{k+2}\int_{B_{3}}\left|D^{j}\varphi\right|^{2}.$
Similarly using $\tilde{\eta}$ as in the proof of Lemma 4.9, we have
$\int_{B_{2}}\left|\left(1+D^{3}\varphi\right)\ast
S_{k}\right|^{2}\left|D\rho_{\alpha}\right|^{2}\leq
C\left(D\rho_{\alpha},D^{3}\varphi\right)\sum_{j=3}^{k+1}\int_{B_{3}}\left|D^{j}\varphi\right|^{2}.$
Continuing with the terms in (4.22)
$\int_{B_{2}}\left(1+D^{3}\varphi\right)\ast\left(D^{k+4}\varphi\ast
D^{4}\varphi\ast
S_{k}\right)\rho_{\alpha}^{2}\leq\varepsilon\int_{B_{2}}\left|D^{k+4}\varphi\right|^{2}\rho_{\alpha}^{2}+C(\varepsilon)\int_{B_{2}}\left|D^{3}\varphi\ast
D^{4}\varphi\ast S_{k}\right|^{2}\rho_{\alpha}^{2}$
with the latter term enjoying the same bound as (4.23). The remaining terms
are of the form
$\int\left(D^{3+j_{1}}\varphi\ast D^{3+j_{2}}\varphi\ast...\ast
D^{3+j_{q}}\varphi\right)\rho_{\alpha}^{2}$
with $j_{1}+...+j_{q}\leq 2k$ so
$\int_{B_{2}}\left(D^{3+j_{1}}\varphi\ast D^{3+j_{2}}\varphi\ast...\ast
D^{3+j_{q}}\varphi\right)\rho_{\alpha}^{2}\leq
C(k,D^{3}\varphi)\,\left(\sum_{j=3}^{k+3}\int_{B_{3}}\left|D^{j}\varphi\right|^{2}+1\right)$
by Lemma 4.9 again. Applying Lemma 4.10 to
$\int_{B_{3}}\left|D^{k+3}\varphi\right|^{2}$ completes the desired bound for
the integral of the (4.22) terms.
Next
(4.24) $\int_{B_{2}}\frac{d}{dt}\left(2\langle
D^{k+2}\varphi,S_{k}\rangle_{g}dV_{g}\right)\rho_{\alpha}^{2}=\int_{B_{2}}\left(D^{k+6}\varphi\ast
S_{k}\right)\rho_{\alpha}^{2}+\int_{B_{2}}D^{k+2}\varphi\ast\frac{d}{dt}\left(S_{k}\ast
V\right)\rho_{\alpha}^{2}.$
Integrating the first term by parts twice yields
$\displaystyle\int_{B_{2}}D^{k+6}\varphi\ast
S_{k}\,\rho_{\alpha}^{2}=\int_{B_{2}}D^{k+4}\varphi\ast\left(D^{2}\left(S_{k}\ast
V\right)\rho_{\alpha}^{2}+D\left(S_{k}\ast V\right)\ast
D\rho_{\alpha}^{2}+\left(S_{k}\ast V\right)\ast D^{2}\rho_{\alpha}^{2}\right)$
$\displaystyle\leq\varepsilon\int_{B_{2}}\left|D^{k+4}\varphi\right|^{2}\rho_{\alpha}^{2}+C(\varepsilon)\int_{B_{2}}\left|D^{2}S_{k}\right|^{2}\rho_{\alpha}^{2}+C\left(\varepsilon,D^{2}\rho_{\alpha}^{2}\right)\int_{B_{2}}\left(\left|DS_{k}\right|^{2}+\left|S_{k}\right|^{2}\right).$
Again apply Lemma 4.9 with $f=D^{3}\varphi$, $\tilde{m}=k$ and $r_{2}=5/2$:
(4.25) $\displaystyle\int_{B_{2}}\left|D^{2}S_{k}\right|^{2}\rho_{\alpha}^{2}$
$\displaystyle\leq
C(k,D^{3}\varphi)\,\sum_{j=3}^{k+3}\int_{B_{5/2}}\left|D^{j}\varphi\right|^{2}.$
(4.26)
$\displaystyle\leq\varepsilon\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}+C\left(\varepsilon,k,D^{3}\varphi\right)\sum_{j=3}^{k+2}\int_{B_{3}}\left|D^{j}\varphi\right|^{2}$
using Lemma 4.10.
Now look at the second term in (4.24). Note that
$D^{k+2}\varphi\ast\frac{d}{dt}\left(S_{k}\ast V\right)=D^{k+2}\varphi\ast
D^{k+5}\varphi\ast D^{3}\varphi+D^{k+2}\varphi\ast\left(D^{k+4}\varphi\ast
D^{4}\varphi+...\right).$
The highest order term can be dealt with via integration by parts away from
$D^{k+5}\varphi$ and then an iterated Peter-Paul, carefully choosing smaller
$\varepsilon$ and using Lemma 4.9. For the remaining terms, we need the third
statement in Lemma 4.9 which gives
$\displaystyle\int_{B_{2}}\left(D^{3+j_{1}}\varphi\ast
D^{3+j_{2}}\varphi\ast...\ast D^{3+j_{q}}\varphi\right)\rho_{\alpha}^{2}$
$\displaystyle\leq\varepsilon\int_{B_{5/2}}\left|D^{k+4}\varphi\right|^{2}$
$\displaystyle+C(\varepsilon,k,\|D^{3}\varphi\|_{\infty})\,\left(\sum_{j=0}^{k}\int_{B_{5/2}}\left|D^{3+j}\varphi\right|^{2}+1\right)$
as $j_{1}+...+j_{q}=2k+1$. A final application of Lemma 4.10 to
$\int_{B_{5/2}}\left|D^{k+3}\varphi\right|^{2}$ completes the proof. ∎
In Proposition 4.11 we have isolated ‘good’ terms
$-2\int_{B_{2}}\left|D^{k+4}\varphi\right|^{2}\rho_{\alpha}^{2}.$ We would
like to use them to offset the ‘bad’ terms of the form
$\varepsilon\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}dV_{g}$ that occur in
Propositions 4.11 and 4.12. Because the expressions for $D^{k+4}\varphi$ are
different in each chart in the cover, the difficulty arises that we cannot
directly beat the terms occurring on a larger ball by terms on a smaller ball
of different charts, even when the smaller balls cover the larger ball. To
make an argument that the bad terms in a larger ball of one chart are offset
by the good terms in a smaller ball in a different chart requires bounding the
bad terms by a global, well-defined geometric quantity involving derivatives
of the second fundamental form, modulo a lower order difference. This is the
point of the following lemma.
###### Lemma 4.13.
Take a finite cover of charts $\Upsilon^{\alpha}$, each over $B_{4}(0)$ and
partition of unity $\rho_{\alpha}^{2}$ in $B_{2}(0)$ in each respective chart
as described by (4.6). Then
$\sum_{\alpha}\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}dV_{g}\leq
2N\sum_{m=0}^{k+1}\int_{L}\left|\nabla^{k+1-m}A\right|^{2}dV_{g}+C.$
###### Proof.
Note that from Lemma 4.7
$\left|D^{k+4}\varphi\right|_{g}^{2}\leq
2\left|\nabla^{k+1}A\right|_{g}^{2}+2\left|S_{k+2}\right|_{g}^{2}.$
Let $\iota=\frac{1}{k+2}.$ Then we have, taking $\tilde{\eta}=1$ on each
$B_{3}(0),$ with $\tilde{\eta}\in C_{c}^{\infty}(B_{3+\iota}(0))$
$\displaystyle\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}dV_{g}$
$\displaystyle\leq
2\int_{B_{3}}\left(\left|\nabla^{k+1}A\right|^{2}+\left|S_{k+2}\right|_{g}^{2}\right)dV_{g}$
$\displaystyle\leq
2\int_{B_{3}}\left|\nabla^{k+1}A\right|^{2}dV_{g}+2\int_{B_{3+\iota}}\left|S_{k+2}\right|_{g}^{2}\tilde{\eta}^{2}dV_{g}$
$\displaystyle\leq
2\int_{B_{3}}\left|\nabla^{k+1}A\right|^{2}dV_{g}+C\left(\sum_{m=3}^{k+3}\int_{B_{3+\iota}}\left|D^{m}\varphi\right|^{2}dV_{g}+1\right)$
by Lemma 4.9. Iterating this argument, using
$\int_{B_{3+\iota}}\left|D^{k+3}\varphi\right|^{2}dV_{g}\leq
2\int_{B_{3+\iota}}\left|\nabla^{k}A\right|^{2}dV_{g}+C\left(\sum_{m=3}^{k+2}\int_{B_{3+2\iota}}\left|D^{m}\varphi\right|^{2}dV_{g}+1\right)$
and so forth, for a total of $k+1$ steps, we have by using
$\left|D^{3}\varphi\right|\leq\left|A\right|+C$ that
$\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}dV_{g}\leq
2\sum_{m=0}^{k+1}\int_{B_{3+\frac{k+1}{k+2}}}\left|\nabla^{k+1-m}A\right|^{2}\tilde{\eta}^{2}dV_{g}+C.$
Now for any set of functions $\tilde{\eta}_{\alpha}$ that are $1$ on
$B_{r}\subset B_{4}$ on each chart $\Upsilon^{\alpha}$, we can bound
$\displaystyle\sum_{\alpha}\int_{B_{4}}\left|\nabla^{m}A\right|^{2}\tilde{\eta}_{\alpha}^{2}dV_{g}$
$\displaystyle\leq\max_{x\in
L}\left(\sum_{\alpha}\tilde{\eta}_{\alpha}^{2}(x)\right)\int_{L}\left|\nabla^{m}A\right|^{2}dV_{g}\leq
N\int_{L}\left|\nabla^{m}A\right|^{2}dV_{g}.$
It follows that
$\sum_{\alpha}\int_{B_{3}}\left|D^{k+4}\varphi\right|^{2}dV_{g}\leq
2N\sum_{m=0}^{k+1}\int_{L}\left|\nabla^{k+1-m}A\right|^{2}dV_{g}+C.$
∎
### 4.4. Proof of the main theorem
###### Proof of Proposition 4.2.
At a fixed time $t_{0}$ we may take the ambient charts
$\left\\{\Upsilon^{\alpha}\right\\}$ for a tubular neighborhood of $L$ and
subordinate partition of unity $\left\\{\rho_{\alpha}^{2}\right\\}$ which
restrict to charts (via the $x$ coordinate) for $L$ with the same partition of
unity.
Differentiate
$\displaystyle\frac{d}{dt}\int_{L}\left|\nabla^{k-1}A\right|_{g}^{2}dV_{g}$
$\displaystyle=\int_{L}\frac{d}{dt}\left(\left|\nabla^{k-1}A\right|_{g}^{2}dV_{g}\right)$
$\displaystyle=\int_{B_{2}}\left(\sum_{\alpha}\rho_{\alpha}^{2}\right)\frac{d}{dt}\left[\left(\left|D^{k+2}\varphi\right|_{g}^{2}+\left|S_{k}\right|_{g}^{2}+2\langle
D^{k+2}\varphi,S_{k}\rangle_{g}\right)dV_{g}\right]$
$\displaystyle=\sum_{\alpha}\int_{B_{2}}\frac{d}{dt}\left(\left|D^{k+2}\varphi\right|_{g}^{2}dV_{g}\right)\rho_{\alpha}^{2}$
$\displaystyle+\sum_{\alpha}\int_{B_{2}}\frac{d}{dt}\left[\left(\left|S_{k}\right|_{g}^{2}+2\langle
D^{k+2}\varphi,S_{k}\rangle_{g}\right)dV_{g}\right]\rho_{\alpha}^{2}.$
Thus
(4.27)
$\displaystyle\frac{d}{dt}\int_{L}\left|\nabla^{k-1}A\right|_{g}^{2}dV_{g}$
$\displaystyle\leq-2\sum_{\alpha}\int_{B_{2}}\left|D^{k+4}\varphi\right|_{g}^{2}\rho_{\alpha}^{2}dV_{g}+\varepsilon\sum_{\alpha}\int_{B_{3}}|D^{k+4}\varphi|^{2}dV_{g}$
$\displaystyle+\sum_{\alpha}C(k,\varepsilon,\left\|\varphi\right\|_{C^{3}})\left(\sum_{m=3}^{k+2}\int_{B_{3}}\left|D^{m}\varphi\right|_{g}^{2}dV_{g}+1\right)$
by Propositions 4.11 and 4.12. Now apply Lemma 4.13
$\displaystyle\sum_{\alpha}\int_{B_{3}}|D^{k+4}\varphi|^{2}dV_{g}$
$\displaystyle\leq NC\sum_{m=0}^{k+1}\int_{L}|\nabla^{m}A|^{2}dV_{g}+C$
$\displaystyle=NC\sum_{m=0}^{k+1}\int_{L}|\nabla^{m}A|^{2}\left(\sum_{\alpha}\rho_{\alpha}^{2}\right)dV_{g}+C$
$\displaystyle=NC\sum_{m=0}^{k+1}\sum_{\alpha}\int_{B_{2}}|\nabla^{m}A|^{2}\rho_{\alpha}^{2}dV_{g}$
$\displaystyle\leq
NC\sum_{m=0}^{k+1}\sum_{\alpha}\int_{B_{2}}2\left(\left|D^{m+3}\varphi\right|^{2}+\left|S_{m+1}\right|_{g}^{2}\right)\rho_{\alpha}^{2}dV_{g}$
$\displaystyle=2NC\sum_{\alpha}\int_{B_{2}}\left(\left|D^{k+4}\varphi\right|^{2}+\left|S_{k+2}\right|_{g}^{2}\right)\rho_{\alpha}^{2}dV_{g}$
$\displaystyle+2NC\sum_{m=0}^{k}\sum_{\alpha}\int_{B_{2}}2\left(\left|D^{m+3}\varphi\right|^{2}+\left|S_{m+1}\right|_{g}^{2}\right)\rho_{\alpha}^{2}dV_{g}.$
Note that from Lemma 4.10
$2\int_{B_{2}}\left|D^{k+3}\varphi\right|^{2}\rho_{\alpha}^{2}dV_{g}\leq\frac{1}{4NC}\int_{B_{3}}|D^{k+4}\varphi|^{2}dV_{g}+C\left(N,\left|D\rho_{\alpha}^{2}\right|\right)\int_{B_{3}}|D^{k+2}\varphi|^{2}dV_{g}.$
Applying Lemma 4.9 (recalling (4.21)), and then Lemma 4.10 to the highest
order resulting term, gives
$\displaystyle
2\int_{B_{2}}\left|S_{k+2}\right|_{g}^{2}\rho_{\alpha}^{2}dV_{g}$
$\displaystyle\leq
C\sum_{m=3}^{k+3}\int_{B_{5/2}}\left|D^{m}\varphi\right|_{g}^{2}+C$
$\displaystyle\leq\frac{1}{4NC}\int_{B_{3}}|D^{k+4}\varphi|^{2}dV_{g}+C\left(N,\left|D\rho_{\alpha}^{2}\right|\right)\sum_{m=3}^{k+2}\int_{B_{3}}\left|D^{m}\varphi\right|_{g}^{2}+C.$
Thus
$\displaystyle\sum_{\alpha}\int_{B_{3}}|D^{k+4}\varphi|^{2}dV_{g}$
$\displaystyle\leq
4NC\sum_{\alpha}\int_{B_{2}}\left|D^{k+4}\varphi\right|^{2}\rho_{\alpha}^{2}dV_{g}+8NC\sum_{m=3}^{k+2}\sum_{\alpha}\int_{B_{2}}\left|D^{m}\varphi\right|^{2}\rho_{\alpha}^{2}dV_{g}$
(4.28)
$\displaystyle+8NC\sum_{m=1}^{k+1}\sum_{\alpha}\int_{B_{2}}\left|S_{m}\right|_{g}^{2}\rho_{\alpha}^{2}dV_{g}.$
Choosing $\varepsilon<(2NC)^{-1}$ in (4.27), in light of (4.28) we have
$\frac{d}{dt}\int_{L}\left|\nabla^{k-1}A\right|_{g}^{2}dV_{g}\leq
C(N,k,\left\|\varphi\right\|_{C^{3}})\left(\sum_{m=3}^{k+2}\int_{B_{3}}\left|D^{m}\varphi\right|_{g}^{2}+\sum_{m=1}^{k+1}\int_{B_{3}}\left|S_{m}\right|_{g}^{2}+1\right).$
Applying Lemma 4.9 to the $\int\left|S_{m}\right|_{g}^{2}$ terms and then
Lemma 4.13 to the $\left|D^{m}\varphi\right|_{g}^{2}$ terms yields the result.
∎
###### Proof of Theorem 4.1.
Suppose now that $F$ is a solution to (1.1) with $\left|A\right|\leq K$ on
$[0,T)$. Starting with
$\int_{L}\left|A\right|^{2}dV_{g}(t)\leq K^{2}\operatorname{Vol}\left(L\right)\leq
C$
we may apply Proposition 4.2 and integrate the resulting differential inequalities: continuing
with
$\frac{d}{dt}\int_{L}\left|\nabla A\right|^{2}dV_{g}(t)\leq
C\int_{L}\left|\nabla
A\right|^{2}dV_{g}(t)+C\int_{L}\left|A\right|^{2}dV_{g}(t)$
and so forth, obtaining bounds of the form
(4.29) $\int_{L}\left|\nabla^{k-1}A\right|^{2}dV_{g}(t)\leq C(k,K,F_{0},T)$
for arbitrary $k$.
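The differential inequality step is the standard Grönwall argument; for instance, if $y(t)=\int_{L}\left|\nabla A\right|^{2}dV_{g}(t)$ satisfies $y^{\prime}\leq Cy+C$ with $y(0)\leq C_{0}$, then
$y(t)\leq\left(C_{0}+1\right)e^{Ct}-1\leq C(K,F_{0},T)\quad\text{for }t\in[0,T),$
and the bounds (4.29) follow inductively in $k$.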
Now at any $t_{0}\in[0,T)$ we may take a cover $\Upsilon^{\alpha}$ as
described in Proposition 4.6. By Lemma 4.13 and (4.29) we have
(4.30) $\left\|D^{k}\varphi\right\|_{L^{2}(B_{3})}\leq C(k,K,F_{0},T)$
for all $k,$ in every chart. By Sobolev embedding theorems, we have Hölder
bounds on $D^{k}\varphi$ over $B_{2}$ for each chart. In particular, there
will be uniform bounds on $\frac{d}{dt}D\varphi$ and
$\frac{d}{dt}D^{2}\varphi$ which control the speed of the flow in the chart
and the rate of change of the slope the manifold $L_{t}$ makes with respect to
the tangent plane at the origin in the chart. We conclude then that the
manifolds $L_{t}$ will continue to be described by the set of charts taken at
$t_{0}$ for $t<\min\left\\{T,t_{0}+\tau\right\\}$ for some positive $\tau$ with
an a priori lower bound. (Perhaps we take $c_{n}$ slightly larger in (4.2).) By
choosing $t_{0}$ near $T$ we are assured that these fixed charts describe the
flow for all values $t\in[t_{0},T)$.
Now observe that with fixed speed bounds, the paths $x\mapsto F(x,t)$ of the
normal flow are Lipschitz and hence the normal flow extends to a well-defined
continuous map
(4.31) $F:L\times[0,T]\rightarrow M.$
We claim that $F\left(\cdot,T\right)$ is a smooth immersion. While within a
chart, the vertical maps
(4.32) $\bar{F}(x):=(x,d\varphi(x,t))$
converge in every Hölder norm to a smooth map at $T,$ we still must argue that
the charts given by the $x$ coordinates do not collapse as $t\rightarrow T$.
This can be argued locally, using coordinates on $L_{t_{0}}$. For any given
$x\in L_{t_{0}}$ we may choose a chart such that $x\in B_{1}(0)\subset
B_{3}(0).$ We are already assuming $F$ is an immersion at $t_{0}$ so this
coordinate chart gives us a coordinate chart for the abstract smooth manifold
$L.$ For $t>t_{0}$ the normal flow $F$ is given by
(4.33) $F(x,t)=(\chi_{t}(x),d\varphi(\chi_{t}(x),t))$
for some local diffeomorphism $\chi_{t}:B_{1}(0)\rightarrow B_{2}(0)$ from
Claim 3.3, provided that $t_{0}$ is chosen close enough to $T$ such that
${\chi}_{t}(x)\in B_{2}(0)\text{ for all }x\in B_{1}(0)\text{ and
}t\in[t_{0},T).\text{ }$
This choice of $t_{0}$ is possible given that $\chi_{t}(x)$ is controlled by
the tangential projection of $\frac{d\bar{F}}{dt}$ and the inverse
$(d\bar{F})^{-1}$, for $\bar{F}$ defined by (4.32), both of which are
universally controlled given (4.2) and (4.30).
Now because (4.32) is uniformly smooth, it can be extended smoothly to
$[t_{0},T+\delta),$ as can the normal flow associated to this extension.
Applying Claim 3.3 (note that we may extend the flow outside $B_{3}$ in a nice
way which doesn’t affect the behavior in $B_{2}(0))$ we get a smooth
diffeomorphism $\chi_{T}.$ For $x\in B_{1}(0)$ we can compute the normal flow
$F$,
$F(x,T)=(\chi_{T}(x),d\varphi(\chi_{T}(x),T))$
which is a smooth extension of (4.33) to $T$, by the uniform estimates on
$\varphi$. Now $F(x,T)$ is a smooth immersion from $B_{1}(0)$ because
$\chi_{T}$ is a diffeomorphism. As $x$ was chosen arbitrarily, we conclude the
continuous extension of $F$ defined in (4.31) must be a smooth immersion from
$L$ at $T$.
We may now restart the flow by Proposition 3.2 with initial immersion
$F(x,T)$. The time derivatives of the new flow and $F$ agree to any order at
$T$. Therefore the new flow is a smooth extension of $F$ to
$[0,T+\varepsilon)$ for some $\varepsilon>0$. Moreover, Theorem 3.4 asserts
that this is the only smooth extension. ∎
## 5\. Appendix
### 5.1. Submanifold with bounded second fundamental form $A$
It is a known and frequently used fact that when $|A|$ is bounded, the
submanifold can be written as a graph over a controlled region in its tangent
space. We provide a proof below for any dimension and codimension.
###### Proposition 5.1.
Let $L^{k}$ be a compact manifold embedded in a compact Riemannian manifold
$(M^{k+l},g)$. Suppose that the second fundamental form of $L$ satisfies
$|A|\leq K$ for some constant $K>0$. Then $L$ is locally a graph of a vector-
valued function over a ball $B_{r}(0)\subset T_{p}L$ in a normal neighbourhood
of $p\in L$ in $M$, with $r>C(M,g)(K+1)^{-1}$ for some constant $C(M,g)>0$.
###### Proof.
Step 1. Bound the injectivity radius of $L$ from below in terms of $K$. Assume
$M$ is isometrically embedded in some euclidean space. For the embedding
$F:L^{k}\overset{f}{\to}M^{k+l}\overset{\varphi}{\to}\mathbb{R}^{k+n}$, denote
its second fundamental form by $\tilde{A}$ and note that
$|\tilde{A}|\leq C(|A|+1)\leq C(K+1)$
where $C$ only depends on the isometric embedding $\varphi$. Let
$\gamma:\mathbb{S}^{1}\to L$ be a shortest geodesic loop based at a point
$p\in L$ which is parametrized by arc-length $s$. Suppose
$\gamma(0)=\gamma(a),\gamma^{\prime}(0)=\gamma^{\prime}(a)$. Take a hyperplane
$P$ in $\mathbb{R}^{k+n}$ such that $P$ intersects $\gamma$ at the point $p$
orthogonally. There is a point $q\in\gamma$ where $\gamma$ meets $P$ again for
the first time. The angle between the unit vectors $\gamma^{\prime}(p)$ and
$\gamma^{\prime}(q)$ in $\mathbb{R}^{k+n}$ is at least $\frac{\pi}{2}$.
Therefore
$\left|\gamma^{\prime}(p)-\gamma^{\prime}(q)\right|\geq\sqrt{2}.$
Since
$F\circ\gamma:\mathbb{S}^{1}\overset{\gamma}{\to}{L}\overset{F}{\to}\mathbb{R}^{k+n}$
factors through $L$ where $\gamma$ is a geodesic, we have (cf. [ES64], [EL78]
for the notation of the second fundamental form $\nabla d\phi$ of a mapping
$\phi$ between Riemannian manifolds),
$\nabla d(F\circ\gamma)=dF\circ\nabla(d\gamma)+\nabla
d(F)(d\gamma,d\gamma)=\nabla d(F)(d\gamma,d\gamma).$
Since the Christoffel symbols of $\mathbb{S}^{1}$ and of $\mathbb{R}^{n+k}$
are 0 we have
$\nabla d(F\circ\gamma)=(F\circ\gamma)^{\prime\prime}.$
Therefore
$(F\circ\gamma)^{\prime\prime}=\tilde{A}(F)(\gamma^{\prime},\gamma^{\prime}).$
Integrating along the portion of $\gamma$ from $p$ to $q$, we get
$\sqrt{2}\leq\left|\gamma^{\prime}(p)-\gamma^{\prime}(q)\right|\leq\int^{q}_{p}\left|(F\circ\gamma)^{\prime\prime}\right|ds\leq
C(K+1)\,a.$
We conclude that the length $a$ has a lower bound $C/(K+1)$.
From the Gauss equations and $|\tilde{A}|<C(K+1)$, the sectional curvatures of
$L$ are bounded above by $C^{2}(K+1)^{2}$. We conclude $\mbox{inj}(L)\geq
C(K+1)^{-1}$ [Pet06, p.178].
Step 2. Take a normal neighbourhood $U\subset L$ around a given point $p\in L$
and assume $U$ is contained in a normal neighbourhood $V$ of $M$ at $p$. We
will use $C(g)$ for constants only depending on the ambient geometry of
$(M,g)$. Now, on $V$ we will use
$\delta=\langle\cdot,\cdot\rangle_{\mathbb{R}^{k+l}}$ to measure the length of
various geometric quantities already defined in $(V,g)$. First,
$|A|_{\delta}\leq C(g)|A|_{g}\leq C(g)K.$
Identify $T_{p}L$ with $\mathbb{R}^{k}\times\\{0\\}\subset\mathbb{R}^{k+l}$.
Let $e_{1}(x),...,e_{k}(x)$ be the orthonormal frame on $U$ obtained by
parallel transporting an orthonormal frame $e_{1}(0),...,e_{k}(0)$ of $T_{p}L$
along the unique radial geodesic $\gamma_{x}(s)$ in $(U,f^{\ast}g)$ from $0$ to an
arbitrary point $x\in U$, and let $e_{k+1}(0),...,e_{k+l}(0)$ be an
orthonormal frame of $(T_{p}L)^{\perp}$. Integrating along $\gamma_{x}(s)$
leads to
$\displaystyle\left|\langle e_{i}(x),e_{j+k}(0)\rangle\right|$
$\displaystyle=\left|\langle e_{i}(x),e_{j+k}(0)\rangle-\langle
e_{i}(0),e_{j+k}(0)\rangle\right|$
$\displaystyle=\left|\int_{0}^{|x|}\frac{d}{ds}\langle
e_{i}(\gamma_{x}(s)),e_{j+k}(0)\rangle ds\right|$
$\displaystyle=\left|\int_{0}^{|x|}\langle e_{i}^{\prime}(s),e_{j+k}(0)\rangle
ds\right|$
$\displaystyle\leq\int_{0}^{|x|}\left|\langle\nabla_{\partial_{r}}^{g}e_{i},e_{j+k}(0)\rangle\right|ds$
$\displaystyle=\int_{0}^{|x|}\left|\langle
A(\partial_{r},e_{i})+\nabla_{\partial_{r}}^{L}e_{i},e_{j+k}(0)\rangle\right|ds$
$\displaystyle\leq C(g)K\,|x|$
as $\nabla_{\partial_{r}}^{L}e_{i}=0$ along the radial geodesics. Therefore, there exists
$r_{0}=C(g)K^{-1}$ (where $C(g)$ may differ from the one above) such that for
any $x\in B_{r_{0}}(0)$ the projection of each $e_{i}(x)$ onto each fixed normal
direction $e_{j+k}(0)$ is at most $c_{n}/\sqrt{l}$, and hence the norm of the
normal projection is no more than some universal constant $c_{n,l}$ that we are free to
choose. It is known that such a $T_{x}L$ projects bijectively onto $T_{p}L$.
Therefore, locally around any $x\in B_{r_{0}}(0)$, the implicit function theorem
asserts that $U$ can be written as a graph over a ball in $T_{x}L$, hence as a
graph over a ball in $T_{p}L$ via the projection. The graphing functions over
the fixed reference plane $T_{p}L$ must coincide on the overlap of any pair of
such balls. This yields a global graphing function $\mathcal{F}$ over
$B_{r_{0}}(0)\subset T_{p}L$. Moreover, $|D\mathcal{F}|\leq C(g,l)$
because $D\mathcal{F}$ is close to $T_{x}L$ which is close (measured in $l$)
to $T_{p}L$ via the projection. ∎
## References
* [EI05] Joachim Escher and Kazuo Ito, _Some dynamic properties of volume preserving curvature driven flows_ , Math. Ann. 333 (2005), no. 1, 213–230. MR 2169834
* [EL78] J. Eells and L. Lemaire, _A report on harmonic maps_ , Bull. London Math. Soc. 10 (1978), no. 1, 1–68. MR 495450
* [EM02] Y. Eliashberg and N. Mishachev, _Introduction to the $h$-principle_, Graduate Studies in Mathematics, vol. 48, American Mathematical Society, Providence, RI, 2002. MR 1909245
* [ES64] James Eells, Jr. and J. H. Sampson, _Harmonic mappings of Riemannian manifolds_ , Amer. J. Math. 86 (1964), 109–160. MR 164306
* [FFRS21] Alberto Fiorenza, Maria Rosaria Formica, Tomáš G. Roskovec, and Filip Soudský, _Detailed proof of classical Gagliardo-Nirenberg interpolation inequality with historical remarks_ , Z. Anal. Anwend. 40 (2021), no. 2, 217–236. MR 4237368
* [Fif00] Paul C. Fife, _Models for phase separation and their mathematics_ , Electron. J. Differential Equations (2000), No. 48, 26. MR 1772733
* [HL82] Reese Harvey and H. Blaine Lawson, Jr., _Calibrated geometries_ , Acta Math. 148 (1982), 47–157. MR 666108
* [JLS11] Dominic Joyce, Yng-Ing Lee, and Richard Schoen, _On the existence of Hamiltonian stationary Lagrangian submanifolds in symplectic manifolds_ , Amer. J. Math. 133 (2011), no. 4, 1067–1092. MR 2823871
* [Lee13] John M. Lee, _Introduction to smooth manifolds_ , second ed., Graduate Texts in Mathematics, vol. 218, Springer, New York, 2013. MR 2954043
* [May01] Uwe F. Mayer, _A singular example for the averaged mean curvature flow_ , Experiment. Math. 10 (2001), no. 1, 103–107. MR 1822855
* [MM12] Carlo Mantegazza and Luca Martinazzi, _A note on quasilinear parabolic equations on manifolds_ , Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 11 (2012), no. 4, 857–874. MR 3060703
* [MS17] Dusa McDuff and Dietmar Salamon, _Introduction to Symplectic Topology_ , Oxford University Press, 2017.
* [Mul99] W. W. Mullins, _Two-dimensional motion of idealized grain boundaries [MR0078836 (17,1252g)]_ , Fundamental contributions to the continuum theory of evolving phase interfaces in solids, Springer, Berlin, 1999, pp. 70–74. MR 1770893
* [Pet06] Peter Petersen, _Riemannian geometry_ , second ed., Graduate Texts in Mathematics, vol. 171, Springer, New York, 2006. MR 2243772
* [PW16] Scott Parkins and Glen Wheeler, _The polyharmonic heat flow of closed plane curves_ , J. Math. Anal. Appl. 439 (2016), no. 2, 608–633. MR 3475940
* [Wei71] Alan Weinstein, _Symplectic manifolds and their Lagrangian submanifolds_ , Advances in Math. 6 (1971), 329–346. MR 286137
* [Whe13] Glen Wheeler, _On the curve diffusion flow of closed plane curves_ , Ann. Mat. Pura Appl. (4) 192 (2013), no. 5, 931–950. MR 3105957
* [Woo20] Albert Wood, _Singularities of lagrangian mean curvature flow_ , Doctoral thesis, UCL (University College London) (2020).
# Removing Sequential Bottleneck of Dijkstra’s Algorithm for the Shortest Path Problem
(Supported by NSF CNS-1812349, CNS-1563544, and the Cullen Trust for Higher Education Endowed Professorship.)
Vijay K. Garg,
The University of Texas at Austin,
Department of Electrical and Computer Engineering,
Austin, TX 78712, USA
###### Abstract
All traditional methods of computing shortest paths depend upon edge-
relaxation where the cost of reaching a vertex from a source vertex is
possibly decreased if that edge is used. We introduce a method which maintains
lower bounds as well as upper bounds for reaching a vertex. This method
enables one to find the optimal cost for multiple vertices in one iteration
and thereby reduces the sequential bottleneck in Dijkstra’s algorithm.
We present four algorithms in this paper — $SP_{1}$, $SP_{2}$, $SP_{3}$ and
$SP_{4}$. $SP_{1}$ and $SP_{2}$ reduce the number of heap operations in
Dijkstra’s algorithm. For directed acyclic graphs, or directed unweighted
graphs they have the optimal complexity of $O(e)$ where $e$ is the number of
edges in the graph which is better than that of Dijkstra’s algorithm. For
general graphs, their worst case complexity matches that of Dijkstra’s
algorithm for a sequential implementation but allows for greater parallelism.
Algorithms $SP_{3}$ and $SP_{4}$ allow for even more parallelism but with
higher work complexity. Algorithm $SP_{3}$ requires $O(n+e(\max(\log
n,\Delta)))$ work where $n$ is the number of vertices and $\Delta$ is the
maximum in-degree of a node. Algorithm $SP_{4}$ has the most parallelism. It
requires $O(ne)$ work. These algorithms generalize the work by Crauser,
Mehlhorn, Meyer, and Sanders on parallelizing Dijkstra’s algorithm.
###### Index Terms:
Single Source Shortest Path Problem, Dijkstra’s Algorithm
## I Introduction
The single source shortest path (SSSP) problem has wide applications in
transportation, networking and many other fields. The problem takes as input a
weighted directed graph with $n$ vertices and $e$ edges. We are required to
find $cost[x]$, the minimum cost of a path from the source vertex $v_{0}$ to
all other vertices $x$ where the cost of a path is defined as the sum of edge
weights along that path. We assume that all edge weights are strictly positive
throughout this paper.
Most SSSP algorithms are inspired by Dijkstra’s algorithm [5] or Bellman-Ford
[2, 7]. We present four algorithms in this paper in increasing order of work
complexity. Algorithms $SP_{1}$, $SP_{2}$ and $SP_{3}$ are inspired by
Dijkstra’s algorithm and $SP_{4}$ is inspired by Bellman-Ford algorithm.
Algorithms $SP_{1}$ and $SP_{2}$ are suitable for sequential implementations.
They improve upon Dijkstra’s algorithm by reducing the total number of heap
operations. For acyclic graphs, $SP_{1}$ performs no heap operations (except
for the insertion of the initial source vertex) and has the time complexity of
$O(e)$. Hence, it unifies Dijkstra’s algorithm with the topological sorting
based algorithm for acyclic graphs. $SP_{2}$ has the optimal time complexity
of $O(e)$ whenever the input graph is acyclic or unweighted. For general
graphs, their worst case asymptotic complexity matches that of Dijkstra’s
algorithm for a sequential implementation; however, they always perform fewer
heap operations than Dijkstra’s algorithm. Additionally, they are more
suitable for a parallel implementation because they allow multiple vertices to
be explored in parallel unlike Dijkstra’s algorithm which explores vertices in
the order of their shortest cost. Algorithm $SP_{2}$ allows more parallelism
than $SP_{1}$ at the expense of an additional $O(e)$ processing.
Algorithm $SP_{3}$ allows for even more parallelism than $SP_{2}$. It uses the
technique of keeping lower bounds on $cost[x]$ for all vertices $x$. Almost
all algorithms for the shortest path problem are based on keeping upper
bounds. Dijkstra’s algorithm keeps $D[x]$, an upper bound on the cost of the
path for any vertex $x$. It maintains the invariant that $D[x]$ always
reflects the cost of a feasible path in the directed graph from the source
vertex to $x$. Our algorithm $SP_{3}$ extends Dijkstra’s algorithm by
maintaining the variable $C[x]$ for any vertex $x$ that gives a lower bound on
the cost to reach $x$. The invariant we maintain is that any path from the
source vertex to $x$ must have cost at least $C[x]$. When $C[x]$ is zero, the
invariant is trivially true in a directed graph with no negative weights. At
each iteration of the algorithm, we increase $C[x]$ for one or more vertices
till we reach a point where $C$ is also feasible and corresponds to the cost
of all shortest paths. The vertices that have matching upper bounds and lower
bounds are called fixed vertices and the minimum cost from the source vertex
to these vertices are known. By combining the upper bounds of Dijkstra’s
algorithm with the lower bounds, we present an algorithm, $SP_{3}$, for the
single source shortest path algorithm that is superior to Dijkstra’s algorithm
in two respects. First, Dijkstra’s algorithm suffers from the well-known
sequential bottleneck (e.g. [4, 12]): outgoing edges are explored (relaxed)
only for the vertex whose distance is the minimum among all vertices whose
adjacency lists have not been explored. In contrast, our algorithm explores all
those vertices $x$ whose upper bounds $D[x]$ and lower bounds $C[x]$ match and
have not been explored before. Although the idea of marking multiple vertices
fixed in a single iteration has been explored before (e.g. [4]), this is
the first paper, to the best of our knowledge, that marks vertices fixed based
on the idea of lower bounds. Second, when one is interested in a shortest path
to a single destination, our algorithm may determine that $D[x]$ is equal to
$C[x]$ much sooner than Dijkstra’s algorithm.
There are two assumptions in our algorithms. First, we assume that all weights
are strictly positive. This is a minor strengthening of the assumption in
Dijkstra’s algorithm where all weights are assumed to be nonnegative. The
second assumption is that we have access to incoming edges for any vertex
discovered during the execution of the algorithm. Dijkstra’s algorithm uses
only an adjacency list of outgoing edges. This assumption is also minor in the
context of static graphs. However, when the graph is used in a dynamic
setting, it may be difficult to find the list of incoming edges. We assume in
this paper that either the graph is static or that a vertex can be expanded in
the backward direction in a dynamic graph.
The single source shortest path problem has a rich history. One popular
research direction is to improve the worst case complexity of Dijkstra’s
algorithm by using different data structures. For example, by using Fibonacci
heaps for the min-priority queue, Fredman and Tarjan [8] gave an algorithm
that takes $O(e+n\log n)$. There are many algorithms that run faster when
weights are small integers bounded by some constant $\gamma$. For example,
Ahuja et al [1] use a van Emde Boas tree as the priority queue to obtain an
algorithm that takes $O(e\log\log\gamma)$ time. Thorup [14]
gave an implementation that takes $O(n+e\log\log n)$ under special constraints
on the weights. Raman [13] gave an algorithm with $O(e+n\sqrt{\log n\log\log
n})$ time. Our algorithms do not improve the worst case sequential complexity
of the problem, but reduce the sequential bottleneck. Our algorithms also
reduce the number of priority queue operations in the average case.
It is also interesting to compare our approach with algorithm $A^{*}$ [9]. The
algorithm $A^{*}$ is applicable when there is a single target vertex and there
is a heuristic function $h(x)$ for any vertex that provides the lower bound
from $x$ to the target vertex. The heuristic function assumes that there is
some background knowledge that provides the lower bound to the target. Our
algorithms are not based on a target vertex or availability of the background
knowledge. Even though $A^{*}$ also uses the notion of lower bounds, the usage
is different. We use the lower bound from the source vertex to $x$ in our
algorithms and not the lower bound from $x$ to the target vertex.
There are many related works for parallelizing Dijkstra’s algorithm. The most
closely related work is Crauser et al [4] which gives three methods to improve
parallelism. These methods, in-version, out-version and in-out-version, allow
multiple vertices to be marked as fixed instead of just the one with the
minimum $D$ value. The in-version marks as fixed any vertex $x$ such that
$D[x]\leq\min\\{D[y]~{}|~{}\neg fixed(y)\\}+\min\\{w[v,x]~{}|~{}(v,x)\in
E\\}$. This method is a special case of our algorithm $SP_{2}$. The
implementation of in-version in [4] requires an additional priority queue and
the total number of heap operations increases by a factor of $2$ compared to
Dijkstra’s algorithm even though it allows greater parallelism. Our algorithm
$SP_{2}$ uses fewer heap operations than Dijkstra’s algorithm. The out-version
in [4] works as follows. Let $L$ be defined as $\min\\{D[x]+w[x,y]~{}|~{}\neg
fixed(x)\\}$. Then, the out-version marks as fixed all vertices that have $D$
value less than or equal to $L$. Our method is independent of this observation
and we incorporate out-version in algorithms $SP_{3}$ and $SP_{4}$. The in-
out-version is just the use of in-version as well as out-version in
conjunction.
A popular practical parallel algorithm for SSSP is $\Delta$-stepping algorithm
due to Meyer and Sanders [12]. Meyer and Sanders also provide an excellent
review of prior parallel algorithms in [12]. They classify SSSP algorithms as
either label-setting, or label-correcting. Label-setting algorithms, such as
Dijkstra’s algorithm, relax edges only for fixed vertices. Label-correcting
algorithms may relax edges even for non-fixed vertices. $\Delta$-stepping
algorithm is a label-correcting algorithm in which eligible non-fixed vertices
are kept in an array of buckets such that each bucket represents a distance
range of $\Delta$. During each phase, the algorithm removes all vertices of
the first non-empty bucket and relaxes all the edges of weight at most
$\Delta$. Edges of higher weights are relaxed only when their starting
vertices are fixed. The parameter $\Delta$ provides a trade-off between the
number of iterations and the work complexity. For example, when $\Delta$ is
$\infty$, the algorithm reduces to Bellman-Ford algorithm where any vertex
that has its $D$ label changed is explored. When $\Delta$ equals $1$ for
integral weights, the algorithm is a variant of Dijkstra’s algorithm. They
show that by taking $\Delta=\Theta(1/d)$ where $d$ is the maximum degree of a
graph on $n$ vertices, and random edge weights that are uniformly distributed
in $[0,1]$, their algorithm takes $O(n+e+dM)$ where $M$ is the maximum
shortest path weight from the source vertex to any other vertex. There are
many practical large-scale implementations of the $\Delta$-stepping algorithm
(for instance, by Madduri et al [11]) in which authors have shown the
scalability of the algorithm. Chakravarthy et al [3] give another scalable
implementation of an algorithm that is a hybrid of the Bellman-Ford algorithm
and the $\Delta$-stepping algorithm. The $\Delta$-stepping technique is
orthogonal to our method which is based on keeping lower bounds with vertices.
It is possible to apply $\Delta$-stepping in conjunction with our method.
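To make the bucket mechanics concrete, the following is a minimal sequential Python sketch of a $\Delta$-stepping loop, written from the description above; the dictionary-of-buckets representation and the relax helper are our own illustrative choices, not the data structures of [12].

```python
def delta_stepping(n, adj, delta, src=0):
    """Sequential sketch of Delta-stepping: tentative distances live in
    buckets of width delta; light edges (w <= delta) are relaxed phase
    by phase within a bucket, heavy edges once the bucket is settled."""
    INF = float('inf')
    D = [INF] * n
    buckets = {}                            # bucket index -> set of vertices

    def relax(v, d):
        if d < D[v]:
            if D[v] < INF:                  # move v out of its old bucket
                buckets.get(int(D[v] // delta), set()).discard(v)
            D[v] = d
            buckets.setdefault(int(d // delta), set()).add(v)

    relax(src, 0)
    while buckets:
        i = min(buckets)                    # first non-empty bucket
        settled = set()
        while buckets.get(i):               # light-edge phases for bucket i
            frontier = buckets.pop(i)
            settled |= frontier
            for u in frontier:
                for v, w in adj[u]:
                    if w <= delta:
                        relax(v, D[u] + w)
        buckets.pop(i, None)                # clean up an emptied bucket
        for u in settled:                   # heavy edges are relaxed once
            for v, w in adj[u]:
                if w > delta:
                    relax(v, D[u] + w)
    return D
```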
In summary, we present four algorithms for SSSP in this paper in order of
increasing work complexity. We only compute the cost of the shortest paths and
not the actual paths because the standard method of keeping backward parent
pointers is applicable to all of our algorithms. Algorithm $SP_{1}$ counts the
number of incoming edges to a vertex that have been relaxed. When all incoming
edges have been relaxed, we show that it is safe to mark this vertex as fixed.
The algorithm $SP_{2}$ generalizes $SP_{1}$ to allow even those vertices to be
marked as fixed which have incoming edges from non-fixed vertices under
certain conditions. Both of these algorithms have fewer heap operations than
Dijkstra’s algorithm for the sequential case and allow more parallelism when
multiple cores are used. The algorithm $SP_{3}$ generalizes $SP_{2}$ further
by maintaining the lower bound $C$ for each vertex. All these algorithms are
label-setting. Algorithm $SP_{3}$ has the same asymptotic complexity as
Dijkstra’s algorithm when the maximum in-degree of a vertex is $O(\log n)$. It
allows even more parallelism than $SP_{2}$. The algorithm $SP_{4}$ is a label-
correcting algorithm. It has the most parallelism but the highest work
complexity. $SP_{4}$ combines ideas from Bellman-Ford, Dijkstra, [4] and
$SP_{3}$ for faster convergence of $D$ and $C$ values.
## II Background and Notation
Dijkstra’s algorithm (or one of its variants) is the most popular single
source shortest path algorithm used in practice. For concreteness, we use
the version shown in Fig. 1 for comparison with our algorithm. The algorithm
also helps in establishing the terminology and the notation used in our
algorithm. We consider a directed weighted graph $(V,E,w)$ where $V$ is the
set of vertices, $E$ is the set of directed edges and $w$ is a map from the
set of edges to positive reals (see Fig. 2 for a running example). To avoid
trivialities, we assume that the graph is loop-free and every vertex $x$,
except the source vertex $v_{0}$, has at least one incoming edge.
var $D$: array[$0\ldots n-1$] of integer initially $\forall i:D[i]=\infty$;
$fixed$: array[$0\ldots n-1$] of boolean initially $\forall i:fixed[i]=false$;
$H$: binary heap of $(j,d)$ initially empty;
$D[0]:=0$; $H$.insert($(0,D[0])$);
while $\neg H$.empty() do
$(j,d):=H$.removeMin(); $fixed[j]:=true$;
forall $k$: $\neg fixed(k)\wedge(j,k)\in E$:
if ($D[k]>D[j]+w[j,k]$) then $D[k]:=D[j]+w[j,k]$; $H$.insertOrAdjust$(k,D[k])$;
endwhile;
Figure 1: Dijkstra’s algorithm to find the shortest costs from $v_{0}$ .
Figure 2: A Weighted Directed Graph on vertices $v_{0},\ldots,v_{4}$
Dijkstra’s algorithm maintains $D[i]$, which is a tentative cost to reach
$v_{i}$ from $v_{0}$. Every vertex $x$ in the graph has initially $D[x]$ equal
to $\infty$. Whenever a vertex is discovered for the first time, its $D[x]$
becomes less than $\infty$. We use the predicate $discovered(x)\equiv
D[x]<\infty$. The variable $D$ decreases for a vertex whenever a shorter path
is found due to edge relaxation.
In addition to the variable $D$, a boolean array fixed is maintained. Thus,
every discovered vertex is either fixed or non-fixed. The invariant maintained
by the algorithm is that if a vertex $x$ is fixed then $D[x]$ gives the final
shortest cost from vertex $v_{0}$ to $x$. If $x$ is non-fixed, then $D[x]$ is
the cost of the shortest path to $x$ that goes only through fixed vertices.
A heap $H$ keeps all vertices that have been discovered but are non-fixed
along with their distance estimates $D$. We view the heap as consisting of
tuples of the form $(j,D[j])$ where the heap property is with respect to $D$
values. The algorithm has one main while loop that removes the vertex with the
minimum distance from the heap with the method $H$.removeMin(), say $v_{j}$,
and marks it as fixed. It then explores the vertex $v_{j}$ by relaxing all its
adjacent edges going to non-fixed vertices $v_{k}$. The value of $D[k]$ is
updated to the minimum of $D[k]$ and $D[j]+w[j,k]$. If $v_{k}$ is not in the
heap, then it is inserted, else if $D[k]$ has decreased then the label
associated with vertex $k$ is adjusted in the heap. We abstract this step as
the method $H$.insertOrAdjust$(k,D[k])$. The algorithm terminates when the
heap is empty. At this point there are no discovered non-fixed vertices and
$D$ reflects the cost of the shortest path to all discovered vertices. If a
vertex $j$ is not discovered then $D[j]$ is infinity reflecting that $v_{j}$
is unreachable from $v_{0}$.
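For reference, the version in Fig. 1 can be rendered as the following Python sketch; the insertOrAdjust operation is emulated here by pushing duplicate heap entries and skipping stale ones on removal, which is one common idiom rather than the only possible implementation.

```python
import heapq

def dijkstra(n, adj, src=0):
    """Shortest-path costs from src. adj[u] is a list of (v, w) pairs
    with w > 0. insertOrAdjust is emulated by pushing duplicate heap
    entries and skipping stale ones on removal (lazy deletion)."""
    INF = float('inf')
    D = [INF] * n
    fixed = [False] * n
    D[src] = 0
    H = [(0, src)]                          # heap of (D[j], j)
    while H:
        d, j = heapq.heappop(H)             # H.removeMin()
        if fixed[j]:
            continue                        # stale entry: j already fixed
        fixed[j] = True
        for k, w in adj[j]:                 # relax outgoing edges of j
            if not fixed[k] and D[k] > D[j] + w:
                D[k] = D[j] + w
                heapq.heappush(H, (D[k], k))  # insertOrAdjust
    return D
```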
Observe that every vertex goes through the following states. Every vertex $x$
is initially undiscovered (i.e., $D[x]=\infty$). If $x$ is reachable from the
source vertex, then it is eventually discovered (i.e., $D[x]<\infty)$. A
discovered vertex is initially non-fixed, and is therefore in the heap $H$.
Whenever a vertex is removed from the heap it is a fixed vertex. A fixed
vertex may either be unexplored or explored. Initially, a fixed vertex is
unexplored. It is considered explored when all its outgoing edges have been
relaxed.
The following lemma simply summarizes the well-known properties of Dijkstra’s
algorithm.
###### Lemma 1.
The outer loop in Dijkstra’s algorithm satisfies the following invariants.
(a) For all vertices $x$: $fixed[x]\Rightarrow(D[x]=cost[x])$.
(b) For all vertices $x$: $D[x]$ is equal to cost of the shortest path from
$v_{0}$ to $x$ such that all vertices in the path before $x$ are fixed.
(c) For all vertices $x$: $x\in H$ iff $discovered(x)\wedge\neg fixed[x]$.
## III Algorithm $SP_{1}$: Using Predecessors
Dijkstra’s algorithm finds the vertex with the minimum tentative distance and
marks it as a fixed vertex. This is the only mechanism by which a vertex is
marked as fixed in Dijkstra’s algorithm. Finding the non-fixed vertex with the
minimum $D$ value takes $O(\log n)$ time when a heap or its variant is used.
Our first observation is that for any non-fixed vertex $x$, if all the
incoming edges are from fixed vertices, then the current estimate $D[x]$ is
the shortest cost. To exploit this observation, we maintain with each vertex
$i$, a variable $pred[i]$ that keeps the number of incoming edges that have
not been relaxed. The variable $pred[i]$ is decremented whenever an incoming
edge to vertex $i$ is relaxed. When $pred[i]$ becomes zero, vertex $i$ becomes
fixed. Determining a vertex to be fixed by this additional method increases
the rate of marking vertices as fixed in any iteration of the while loop.
The second observation is that in Dijkstra’s algorithm vertices are explored
only in order of their cost. $SP_{1}$ explores vertices whenever it finds one
that is fixed. Hence, in addition to the heap $H$, we maintain a set $R$ of
vertices which have been fixed but not explored, i.e., their adjacency lists
have not been traversed. We also relax the invariant on the heap $H$. In
Dijkstra’s algorithm, the heap does not contain fixed vertices. In algorithm
$SP_{1}$, the heap $H$ may contain both fixed and non-fixed vertices. However,
only those fixed vertices which have been explored may exist in the heap.
var $D$: array[$0\ldots n-1$] of integer initially $\forall i:D[i]=\infty$;
$H$: binary heap of $(j,d)$ initially empty;
$fixed$: array[$0\ldots n-1$] of boolean initially $\forall i:fixed[i]=false$;
$Q,R$: set of vertices initially empty;
$pred$: array[$0\ldots n-1$] of integer initially $\forall i:pred[i]=|\\{x~{}|~{}(x,v_{i})\in E\\}|$;
$D[0]:=0$; $H$.insert$((0,D[0]))$;
while $\neg H$.empty() do
$(j,d):=H$.removeMin();
if ($\neg fixed[j]$) then $R$.insert($j$); $fixed[j]:=true$;
while $R\neq\\{\\}$ do
forall $z\in R$: $R$.remove($z$);
forall $k:\neg fixed(k)\wedge(z,k)\in E$: processEdge1($z,k$);
endwhile;
forall $z\in Q$: $Q$.remove($z$); if $\neg fixed[z]$ then $H$.insertOrAdjust$(z,D[z])$;
endwhile;
procedure processEdge1($z,k$);
var $changed$: boolean initially false;
$pred[k]:=pred[k]-1$;
if ($D[k]>D[z]+w[z,k]$) then $D[k]:=D[z]+w[z,k]$; $changed$ := true;
if $(pred[k]=0)$ then $fixed[k]:=true$; $R$.insert($k$);
else if ($changed\wedge(k\not\in Q)$) then $Q$.insert($k$);
The algorithm $SP_{1}$ is shown in Fig. 3. The algorithm starts with the
insertion of the source vertex with its $D$ value as $0$ in the heap. Instead
of removing the minimum vertex from the heap in each iteration and then
exploring it, the algorithm consists of two while loops. The outer while loop
removes one vertex from the heap. If this vertex is fixed, then it has already
been explored and therefore it is skipped; otherwise, it is marked as fixed
and inserted in $R$ to start the inner while loop. The inner loop keeps
processing the set $R$ till it becomes empty.
We do not require that vertices in $R$ be explored in the order of their cost.
If $R$ consists of multiple vertices then all of them can be explored in
parallel. During this exploration other non-fixed vertices may become fixed.
These are then added to $R$. Some vertices may start out non-fixed but become
fixed while $R$ is being processed. To avoid the expense of inserting these
vertices in the heap, we collect all vertices that may need to be inserted or
adjusted in the heap in a separate set called $Q$. Only when we are done
processing $R$ do we call $H$.insertOrAdjust on the vertices in $Q$.
The vertices $z\in R$ are explored as follows. We process all out-going
adjacent edges $(z,k)$ of the vertex $z$ to non-fixed vertices $k$. This step
is called processEdge1 in Fig. 3. First, we decrement the count $pred[k]$ to
account for its predecessor $z$ being fixed. Then, we do the standard edge-
relaxation procedure by checking whether $D[k]$ can be decreased by taking
this edge. If $pred[k]$ is zero, $k$ is marked as fixed. Setting $fixed[k]$ to
true also removes it effectively from the heap because whenever a fixed vertex
is extracted in the outer while loop it is skipped.
Finally, if $D[k]$ has decreased and $pred[k]$ is greater than $0$, we insert
it in $Q$ so that once $R$ becomes empty we can call the $H$.insertOrAdjust()
method on the vertices in $Q$.
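A possible rendering of $SP_{1}$ in executable form is sketched below, using the same lazy-deletion heap idiom as the Dijkstra sketch above; the representation of $R$ as a deque and $Q$ as a set is our choice for illustration, and pred_count is assumed to hold the in-degree of each vertex.

```python
import heapq
from collections import deque

def sp1(n, adj, pred_count, src=0):
    """SP_1 sketch: adj[u] = list of (v, w); pred_count[v] = in-degree
    of v. A vertex becomes fixed when removed from the heap or when all
    of its incoming edges have been relaxed (pred hits zero)."""
    INF = float('inf')
    D = [INF] * n
    fixed = [False] * n
    pred = list(pred_count)
    D[src] = 0
    H = [(0, src)]
    while H:
        d, j = heapq.heappop(H)
        if fixed[j]:
            continue                        # stale heap entry
        fixed[j] = True
        R, Q = deque([j]), set()
        while R:                            # explore fixed, unexplored vertices
            z = R.popleft()
            for k, w in adj[z]:
                if fixed[k]:
                    continue
                pred[k] -= 1                # edge (z,k) has been relaxed
                changed = False
                if D[k] > D[z] + w:
                    D[k] = D[z] + w
                    changed = True
                if pred[k] == 0:            # all predecessors fixed
                    fixed[k] = True
                    R.append(k)
                elif changed:
                    Q.add(k)
        for z in Q:                         # deferred heap insertions
            if not fixed[z]:
                heapq.heappush(H, (D[z], z))
    return D
```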
Consider the graph in Fig. 2. Initially $(0,D[0])$ is in the heap $H$.
Since there is only one vertex in the heap $H$, it is also the minimum. This
vertex is removed and inserted in $R$ marking $v_{0}$ as fixed. Now, outgoing
edges of $v_{0}$ are relaxed. Since $pred[1]$ becomes $0$, $v_{1}$ is marked
as fixed and added to $R$. The vertex $v_{2}$ has $pred$ as $1$ and $D[2]$ as
$2$ after the relaxation of edge $(v_{0},v_{2})$. The vertex $v_{2}$ is
inserted in the $Q$ for later insertion in the heap. Since $R$ is not empty,
outgoing edges of $v_{1}$ are relaxed. The vertex $v_{3}$ is inserted in $Q$
and its $D$ value is set to $12$. The vertex $v_{4}$ is also inserted in $Q$
and its $D$ value is set to $11$. At this point $R$ is empty and we insert
vertices in $Q$ in $H$ and get back to the outer while loop. The minimum
vertex $v_{2}$ is removed from the heap, marked as fixed, and inserted in $R$
for exploration. When $v_{2}$ is explored, the $D$ label of $v_{3}$ is
adjusted to $8$. When edge $(v_{2},v_{4})$ is relaxed, $D[4]$ is reduced to
$7$. Moreover, $pred[4]$ becomes zero and $v_{4}$ is inserted in $R$ for
exploration. When $v_{4}$ is explored, $pred[3]$ also becomes zero and is also
inserted in $R$. Once $v_{3}$ is explored, $R$ becomes empty. We then go to
the outer while loop. All vertices in the heap are fixed and therefore the
algorithm terminates with $D$ array as $[0,9,2,8,7]$. Observe that it is easy
to maintain a count of the non-fixed vertices in the heap and the method
$H.empty()$ can be overloaded to return true whenever this count is zero.
We now show the following.
###### Lemma 2.
Let $v$ be any non-fixed vertex. Suppose all incoming edges of $v$ have been
relaxed, then $D[v]$ equals $cost[v]$.
###### Proof.
We show that whenever $pred[v]$ is zero, $D[v]$ equals $cost[v]$. We prove
this lemma by contradiction. If not, let $x$ be the vertex with the smallest
$D$ value such that all its incoming edges have been relaxed but $D[x]$ is
greater than $cost[x]$. Let $\alpha$ be a path from $v_{0}$ to $x$ with the
smallest cost (which is therefore less than $D[x]$). The path $\alpha$ must go
through a non-fixed vertex because $D[x]$ is the minimum cost of all paths
that go through fixed vertices. Let $y$ be the last non-fixed vertex along
this path. The successor of $y$ in that path cannot be $x$ because all
predecessors of $x$ are fixed. Therefore, its successor is a fixed vertex $z$
because $y$ is the last non-fixed vertex along the path. The path $\alpha$ can
be broken into two parts — the path from the source vertex to $z$ and then the
path from $z$ to $x$. The path from $z$ to $x$ consists only of fixed vertices
by the definition of $y$. It is sufficient to show that there exists a path
from the source vertex to $z$ that consists only of fixed vertex with the same
cost as in $\alpha$. The vertex $z$ can be fixed either because it has the
minimum value of $D$ in the heap at some iteration, or because all the
incoming edges to $z$ have been relaxed. In the former case, $D[z]=cost[z]$
and therefore there exists a path from the source vertex to $z$ with only
fixed vertices and the minimum cost. In the latter case, when $z$ is fixed
because all its incoming edges have been relaxed, then by our choice of $x$,
$D[z]$ is equal to $cost[z]$ which again shows existence of a path with only
fixed vertices with the minimum cost. ∎
To show the correctness of Algorithm $SP_{1}$, we make the following claims.
We use the predicate $explored(x)$ that holds true iff the adjacency list of
$x$ has been explored.
###### Lemma 3.
The following invariants hold at the outer and the inner while loop of
$SP_{1}$.
(a) For all vertices $x$: $fixed[x]\Rightarrow D[x]=cost[x]$.
(b) For all vertices $x$: $D[x]=$ cost of the shortest path from $v_{0}$ to
$x$ such that all vertices in the path before $x$ are fixed.
(c) For all vertices $x$: $x\in H\Rightarrow discovered(x)\wedge(\neg
fixed[x]\vee explored(x))$.
Furthermore, $\forall x:discovered(x)\wedge\neg fixed[x]\Rightarrow(x\in H)$.
###### Proof.
(a,b) The only difference from Dijkstra’s algorithm is that in one iteration
of the outer while loop, not only vertices with the minimum value of $D$ are
fixed, but also vertices with $pred[x]$ equal to $0$. Due to Lemma 2, the
invariant on $fixed$ and $D$ continues to hold. In the inner loop, whenever a
vertex is discovered and is not fixed, it is inserted in the heap maintaining
the invariant on $H$.
(c) Whenever a vertex $x$ is discovered and it is not fixed, it is inserted in
the heap. Whenever a vertex is removed from the heap it is marked as fixed. A
vertex in the heap can also become fixed in the inner while loop. However,
whenever a vertex becomes fixed it is inserted in $R$ for exploration and $R$
is empty at the outer while loop. Hence, any vertex that is fixed is also
explored. ∎
###### Lemma 4.
The following invariant holds at the inner while loop of $SP_{1}$. For all
vertices $x$: $x\in R$ iff $fixed[x]\wedge\neg explored(x)$.
###### Proof.
Whenever a vertex first becomes fixed, it is inserted in $R$. Whenever it is
explored, it is removed from $R$. ∎
We now have the following Theorem.
###### Theorem 1.
Algorithm $SP_{1}$ returns the weight of a shortest path from source vertices
to all other vertices.
###### Proof.
Consider any vertex $x$ reachable from the source vertex. We show that $x$ is
eventually discovered. We use induction on $k$, the number of vertices with
cost less than or equal to that of $x$. The base case is trivial.
For the inductive case, $x$ has at least one predecessor. Since all weights
are positive, all predecessors of $x$ have cost less than that of $x$. If all
vertices are sorted based on their cost, the outer while loop marks as fixed
at least one vertex with cost less than or equal to $x$. Hence, in at most $k$
iterations of the outer while loop, one of the predecessors of $x$ is marked
as fixed. The algorithm terminates only when every fixed vertex is explored,
and therefore $x$ is discovered.
Any vertex $x$ that is discovered is either in $H$ when it is not fixed or
fixed but explored, or in $R$ when it is fixed and not explored. If the vertex
$x$ is $fixed$, from the invariant on $D$ and $fixed$, we have that $D[x]$
equal to $cost[x]$. If the vertex $x$ is not $fixed$, it is eventually removed
from the heap $H$ and becomes fixed. Hence, any reachable vertex $x$ has its
$D[x]$ set to $cost[x]$.
If any vertex $x$ is not reachable, then it can never be discovered and $D[x]$
returns $\infty$ due to initialization. ∎
We now show that $SP_{1}$ cuts down the complexity of Dijkstra’s algorithm
significantly for acyclic graphs whenever the source vertex is the only vertex
with no incoming edges. To ensure this, whenever we read the graph we create a
list $L$ of all vertices other than the source vertex that have no incoming
edges. All these vertices are clearly not reachable from the source vertex. We
then repeatedly remove vertices from the list $L$ and their outgoing edges
from the graph. If in this process, another vertex has all its incoming edges
removed, it is added to the list $L$. The procedure is continued until $L$
becomes empty and we are guaranteed that the source vertex is the only vertex
with no incoming edges. This procedure takes at most $O(e)$ time because any
edge is processed at most once.
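This trimming step can be sketched as follows; the function name and the returned removed marker are illustrative choices of ours.

```python
from collections import deque

def trim_zero_indegree(n, adj, pred_count, src=0):
    """Repeatedly delete vertices (other than src) with no incoming
    edges, together with their outgoing edges, so that afterwards src
    is the only vertex of in-degree zero. Each edge is touched once,
    so the whole procedure is O(e)."""
    pred = list(pred_count)
    L = deque(v for v in range(n) if v != src and pred[v] == 0)
    removed = [False] * n
    while L:
        u = L.popleft()
        removed[u] = True
        for v, _ in adj[u]:                 # delete outgoing edges of u
            pred[v] -= 1
            if pred[v] == 0 and v != src and not removed[v]:
                L.append(v)
    return pred, removed
```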
We now have the following result.
###### Theorem 2.
$SP_{1}$ takes $O(e+n\log n)$ time with Fibonacci heaps for any directed graph
and takes $O(e)$ time for directed acyclic graphs in which the source node is
the only one with zero incoming edges.
###### Proof.
For a general directed graph, any step taken in $SP_{1}$ is also taken in
Dijkstra’s algorithm except for the constant time operations such as
decrementing $pred$, and inserting or deleting a vertex from $Q$ and $R$. Both
$Q$ and $R$ can be implemented as linked lists with $O(1)$ insertions at the
tail and $O(1)$ deletions at the head of the list. The membership in $Q$ can
also be implemented in $O(1)$ time using a bit vector. Hence, using Fibonacci
heaps, we get the time complexity of Dijkstra’s algorithm.
For directed acyclic graphs, initially the source vertex is removed from the
heap and inserted in $R$. Now as we explore $R$, the predecessor count for all
vertices adjacent to the source vertex will decrease by $1$. Since the graph
is acyclic, at least one new vertex will become fixed. As we continue
processing $R$, all the nodes of the graph will become fixed (just as in the
topological sort of an acyclic graph). Thus, all reachable vertices of an
acyclic graph will be processed in the first iteration of the outer while
loop. In this iteration, every edge is processed exactly once, giving us
$O(e)$ time complexity. ∎
The worst case for $SP_{1}$ is when the vertex discovered last has outgoing
edges to all other vertices. In such a worst-case scenario, no vertex in
$SP_{1}$ becomes fixed through the processing of $R$ and the algorithm
degenerates into Dijkstra’s algorithm.
## IV Algorithm $SP_{2}$: Using Weights of Incoming edges
We now strengthen our mechanism to mark vertices as fixed. $SP_{2}$ requires
access to incoming edges for any vertex. Let a vertex $k$ be discovered from a
predecessor vertex $z$. Then, we compute $inWeight[k]$ as the minimum weight
of incoming edges from all predecessors other than $z$. We exploit $inWeight$
as follows.
###### Lemma 5.
Let $k$ be any non-fixed vertex discovered from the vertex $z$ in an iteration
of the outer while loop in which the vertex removed from the heap has key $d$.
If $D[k]\leq d+inWeight[k]$, then $D[k]$ equals $cost[k]$.
###### Proof.
Since $d$ is the key of the vertex just removed from the heap $H$, any
non-fixed predecessor $v$ of $k$ satisfies $D[v]\geq d$, and $w[v,k]\geq
inWeight[k]$ by the definition of $inWeight$. Hence, $D[v]+w[v,k]\geq
d+inWeight[k]\geq D[k]$ for any incoming edge $(v,k)$ that has not yet been
relaxed, so no future relaxation can decrease $D[k]$. ∎
This mechanism comes at the space overhead of maintaining an additional array
$inWeight[]$ indexed by vertices.
$inWeight$: array [$0\ldots n-1$] of int initially $\forall i:inWeight[i]=\infty$;
procedure processEdge2($z,k$);
var $changed$: boolean initially false;
$pred[k]:=pred[k]-1$;
// Step 1: vertex $k$ has been discovered; compute $inWeight$
if $(D[k]=\infty)\wedge(pred[k]>0)$ then $inWeight[k]:=\min\\{w[v,k]~{}|~{}(v,k)\in E,v\neq z\\}$;
// Step 2: relax edge $(z,k)$
if ($D[k]>D[z]+w[z,k]$) then $D[k]:=D[z]+w[z,k]$; $changed$ := true;
// Step 3: check if vertex $k$ can be fixed
if $((pred[k]=0)\vee(D[k]\leq d+inWeight[k]))$ then $fixed[k]:=true$; $R$.insert$(k)$;
else if ($changed\wedge(k\not\in Q)$) then $Q$.insert$(k)$;
Figure 4: Algorithm $SP_{2}$: Algorithm $SP_{1}$ with processEdge2
After incorporating Lemma 5, we get the algorithm $SP_{2}$ shown in Fig. 4. It
is the same as $SP_{1}$ except that we use the procedure $processEdge2$ instead of
$processEdge1$. In step 1, we compute $inWeight[k]$ when it is discovered for
the first time, i.e., when $D[k]$ is $\infty$. If there are additional
incoming edges, i.e., $(pred[k]>0)$, we determine the minimum of all the
incoming weights except from the vertex $z$ that discovered $k$. In step 2, we
perform the standard edge-relaxation. In step 3, we check if the vertex $k$
can be fixed either because it has no more predecessors, or for any non-fixed
predecessor $v$, the relaxation of the edge $(v,k)$ will not change $D[k]$.
Observe that for sequential implementations, if $R$ is maintained as a queue
and all edge weights are uniform, then any vertex discovered for the first
time will always be marked as fixed and will never be inserted in the heap.
For such inputs, $SP_{2}$ will behave as a simple breadth-first search.
Since any vertex is discovered at most once, computing $inWeight$ requires
processing of all incoming edges of a vertex at most once. Hence, the
cumulative time overhead is linear in the number of edges. If the graph is
unweighted, then $SP_{2}$ is much faster than Dijkstra’s algorithm when $R$ is
implemented as a queue.
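As an illustration, processEdge2 might be rendered as below; the in_adj incoming-adjacency structure and the parameter passing are our assumptions, and $d$ stands for the key of the vertex most recently removed from $H$ in the enclosing $SP_{1}$-style loop.

```python
def process_edge2(z, k, d, w_zk, D, pred, fixed, inWeight, in_adj, R, Q):
    """Sketch of processEdge2 for a simple graph. in_adj[k] lists the
    incoming edges (v, w) of k; inWeight is initialized to infinity."""
    INF = float('inf')
    pred[k] -= 1
    # Step 1: on first discovery, record the lightest incoming edge not
    # coming from z (in a simple graph such edges exist iff pred[k] > 0).
    if D[k] == INF and pred[k] > 0:
        inWeight[k] = min(w for v, w in in_adj[k] if v != z)
    # Step 2: standard edge relaxation.
    changed = False
    if D[k] > D[z] + w_zk:
        D[k] = D[z] + w_zk
        changed = True
    # Step 3: fix k if it has no pending predecessors, or if no pending
    # predecessor can improve D[k] (Lemma 5).
    if pred[k] == 0 or D[k] <= d + inWeight[k]:
        fixed[k] = True
        R.append(k)
    elif changed:
        Q.add(k)
```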
###### Theorem 3.
Suppose that $R$ is implemented as a simple queue. $SP_{2}$ takes
* •
$O(e+n\log n)$ time with Fibonacci heaps for any directed graph,
* •
$O(e)$ time for directed acyclic graphs in which only the source node has zero
incoming edges,
* •
$O(e)$ time for any unweighted directed graph.
###### Proof.
Since $SP_{2}$ retains all properties of $SP_{1}$, we only need to prove the
claim on unweighted directed graphs. In unweighted directed graphs, once the
source vertex is explored any vertex $k$ adjacent to the source vertex become
fixed because it satisfies the condition that $D[k]\leq d+inWeight[k]$ and is
inserted in $R$. Continuing in this manner, the algorithm reduces to breadth-
first search by simply inserting nodes in $R$ in breadth-first manner and
removing from $R$ till all reachable vertices are explored. ∎
Hence, $SP_{2}$ unifies Dijkstra’s algorithm with the topological sort for
acyclic graphs as well as the breadth-first search for unweighted graphs.
Consequently, it is faster than Dijkstra’s algorithm when the input graph is
close to an acyclic graph (i.e., has few cycles) or close to an unweighted
graph (most weights are the same).
Lemma 5 is similar to the in-version method of [4]. The in-version fixes any
vertex $k$ such that $D[k]\leq d+\min\\{w[j,k]~{}|~{}\neg fixed(j),(j,k)\in
E\\}$. There are two differences. First, we do not include the weight of the
edge that discovered $k$ in our calculation of $inWeight$. Second, in [4] the
implementation is based on maintaining an additional priority queue which adds
the overhead of $O(e\log n)$ to the algorithm with ordinary heap
implementation. $SP_{2}$ adds a cumulative overhead of $O(e)$. In sequential
implementations, the in-version increases the number of heap operations,
whereas $SP_{2}$ decreases this number.
Consider the graph in Fig. 2. Initially $(0,0)$ is in the heap $H$. It is
removed and inserted in $R$ marking $v_{0}$ as fixed. Now outgoing edges of
$v_{0}$ are relaxed. Since $pred[1]$ becomes $0$, $v_{1}$ is marked as fixed
and added to $R$. The vertex $v_{2}$ has $pred[2]$ as $1$ after the
relaxation. It is inserted in the heap with $D$ value as $2$ and $inWeight[2]$
is computed as $1$. Since $R$ is not empty, outgoing edges of $v_{1}$ are
relaxed. The vertex $v_{3}$ is inserted in $Q$ with $D$ value $12$ and the
vertex $v_{4}$ is inserted with $D$ value as $11$. We also compute
$inWeight[3]$ as $\min\\{6,8\\}$ equal to $6$ and $inWeight[4]$ as $5$. At
this point $R$ is empty and the minimum vertex $v_{2}$ is removed from the
heap and marked as fixed. When $v_{2}$ is explored and the edge $(v_{2},v_{3})$
is relaxed, the label of $v_{3}$ is adjusted to $8$. Since $8$ is less than or
equal to $d+inWeight[3]=2+6$, it is marked as fixed and inserted in $R$. When
edge $(v_{2},v_{4})$ is relaxed, $D[4]$ is reduced to $7$. Moreover, $pred[4]$
becomes zero and $v_{4}$ is inserted in $R$ for exploration. At this point,
all vertices are fixed. When $R$ is processed, there are no additional changes
and the algorithm terminates with the $D$ array as $[0,9,2,8,7]$.
## V Algorithm $SP_{3}$: Using Lower Bounds with Upper Bounds
We now generalize the mechanism of $SP_{2}$ further to determine fixed
vertices based on the idea of using lower bounds. The idea of starting with
the infinite cost as an estimate of the actual cost and decreasing the
estimate via edge-relaxation has been the underlying principle for not only
Dijkstra’s algorithm but almost all other shortest path algorithms such as
Bellman-Ford, Floyd-Warshall [6] and their derivatives. In this section, we
present the idea of using lower bounds $C$ associated with every vertex in
addition to the upper bounds given by $D$.
We keep a global array $C$ such that $C[x]$ is the lower bound associated with
each vertex $x$. We maintain the invariant that there is no path of cost
strictly lower than $C[x]$ from the source vertex to $x$. Just as $D[i]$ is
initialized to $\infty$, $C[i]$ is initialized to $0$ for all $i$ so that the
invariant is true initially. Clearly, any vertex $x$ such that $C[x]$ and
$D[x]$ are equal has both of them equal to $cost[x]$. Hence, any vertex with
$C[x]$ equal to $D[x]$ can be marked as fixed. Conversely, if any vertex $x$
is known to be fixed (for example, by removal from the min-heap), then $C[x]$
can be set to $D[x]$.
How do we determine nontrivial $C[x]$ for non-fixed vertices? Just as the
exploration of a vertex $x$ in Dijkstra’s algorithm updates $D[y]$ for all
out-going edges $(x,y)$, we define a dual step that can update $C[x]$ based
upon all in-coming edges. The value of $C[x]$ for the source vertices is
always zero. For other vertices, we have
###### Lemma 6.
Let $C[x]$ be a lower bound on the cost of the shortest path to $x$. Then, for
any vertex $x$ that is not a source vertex,
$C[x]\geq\min\\{C[v]+w[v,x]~{}|~{}(v,x)\in E\\}$ (1)
###### Proof.
Since $x$ is not the source vertex, it must have a predecessor $v$ in a
shortest path from the source vertex to $x$. The equation follows by noting
that an additional cost of $w[v,x]$ would be incurred as the last edge on that
path. ∎
The lemma gives an alternate short proof of Lemma 2. Consider any $x$ such
that all its predecessors are fixed. Since $C[v]$ is equal to $D[v]$ for all
fixed vertices, from Eqn 1, we get that
$C[x]\geq\min\\{D[v]+w[v,x]~{}|~{}(v,x)\in E\\}.$ We also get that
$D[x]\leq\min\\{D[v]+w[v,x]~{}|~{}(v,x)\in E\\}$ using the edge-relaxation
rule. Combining these two inequalities with $C[x]\leq D[x]$, we get that
$C[x]$ is equal to $D[x]$ and therefore $x$ can be marked as fixed.
An additional lower bound on the cost of a vertex is determined using the
global information on the graph. At any point in the execution of the
algorithm, there are two sets of vertices: fixed and non-fixed. Any path from
the source
vertex to a non-fixed vertex must include at least one edge from the set of
edges that go from a fixed vertex to a non-fixed vertex.
###### Lemma 7.
For any $x$ such that $\neg fixed[x]$,
$C[x]\geq\min\\{C[u]+w[u,v]~{}|~{}(u,v)\in E\wedge fixed[u]\wedge\neg
fixed[v]\\}$.
###### Proof.
Consider the shortest path from $v_{0}$ to $x$. Since $fixed[v_{0}]$ and $\neg
fixed[x]$ there is an edge in the path from a fixed vertex $u^{\prime}$ to a
non-fixed vertex $v^{\prime}$. We get that $C[x]\geq C[v^{\prime}]$ and
$C[v^{\prime}]\geq\min\\{C[u]+w[u,v]~{}|~{}(u,v)\in E\wedge fixed[u]\wedge\neg
fixed[v]\\}$. ∎
Since for a fixed vertex $u$, $C[u]$ equals $D[u]$, we get that for any non-
fixed vertex $x$, $C[x]\geq\min\\{D[u]+w[u,v]~{}|~{}(u,v)\in E\wedge
fixed[u]\wedge\neg fixed[v]\\}$. The right hand side is simply the minimum key
in the min-heap $H$.
Finally, we also exploit the method of [4].
###### Lemma 8.
[4] Let $threshold=\min\\{D[u]+w[u,v]~{}|~{}(u,v)\in E\wedge\neg fixed[u]\\}$.
Consider any non-fixed vertex $x$ with $D[x]\leq threshold$. Then, $x$ can be
marked as a fixed vertex.
###### Proof.
Since $D[x]\leq threshold$, we know that $x$ is a discovered vertex and there
is a path from $v_{0}$ to $x$. We show that this path has the shortest cost.
Suppose that there is another path with cost less than $D[x]$. This path must
go through at least some non-fixed vertex because $D[x]$ already accounts for
all paths that go through only fixed vertices. Let $u^{\prime}$ be the first
non-fixed vertex on that path. Then, the cost of that path is at least
$threshold$ by the definition of $threshold$ giving us the contradiction. ∎
This lemma also allows us to mark multiple vertices as fixed and therefore
update $C$ for them.
To exploit Lemma 8, we use two additional variables in $SP_{3}$. The variable
$outWeight[x]$ keeps the weight of the minimum outgoing edge from $x$. This
array is computed exactly once with the cumulative overhead of $O(e)$. We also
keep an additional binary heap $G$ as proposed in [4]. This heap keeps
$D[u]+outWeight[u]$ for all non-fixed vertices. Clearly, the minimum value of
this heap is the required threshold.
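A sketch of how $outWeight$ and the threshold can be maintained follows; the lazy-deletion handling of $G$ assumes that a fresh entry is pushed whenever a $D$ value decreases, so the heap minimum over non-fixed vertices always equals the current threshold.

```python
import heapq

def make_out_weight(n, adj):
    """outWeight[x] = weight of the lightest outgoing edge of x."""
    INF = float('inf')
    return [min((w for _, w in adj[x]), default=INF) for x in range(n)]

def get_threshold(G, fixed):
    """Peek the minimum D[u] + outWeight[u] over non-fixed u, where G is
    a heap of (D[u] + outWeight[u], u) with possibly stale entries; a
    fresh entry must be pushed whenever D[u] decreases."""
    while G and fixed[G[0][1]]:
        heapq.heappop(G)                    # discard entries of fixed vertices
    return G[0][0] if G else float('inf')
```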
var $C,D$: array[$0\ldots n-1$] of integer initially $\forall i:(C[i]=0)\wedge(D[i]=\infty)$;
$G,H$: binary heap of $(j,d)$ initially empty;
$fixed$: array[$0\ldots n-1$] of boolean initially $\forall i:fixed[i]=false$;
$Q,R$: set of vertices initially empty;
$outWeight$: array[$0\ldots n-1$] of integer initially $\forall i:outWeight[i]=\min\\{w[i,j]~{}|~{}(i,j)\in E\\}$;
$D[0]:=0$; $H$.insert$(0,D[0])$; $G$.insert$(0,D[0]+outWeight[0])$;
while $\neg H$.empty() do
int $threshold:=G$.getMin();
while ($H$.getMin() $\leq threshold$) do
$(j,d):=H$.removeMin(); $G$.remove$(j)$; $fixed[j]:=true$; $C[j]:=D[j]$; $R$.insert($j$);
if ($H$.empty()) break;
endwhile;
while $R\neq\\{\\}$ do
forall $z\in R$: $R:=R-\\{z\\}$;
forall $k$: $\neg fixed(k)\wedge(z,k)\in E$: processEdge3($z,k$);
endwhile;
forall $z\in Q$: $Q$.remove($z$);
if $\neg fixed[z]$ then { $H$.insertOrAdjust$(z,D[z])$; $G$.insertOrAdjust$(z,D[z]+outWeight[z])$; }
endwhile;
procedure processEdge3($z,k$);
var $changed$: boolean initially false;
// step 1: edge relaxation
if ($D[k]>D[z]+w[z,k]$) then $D[k]:=D[z]+w[z,k]$; $changed$ := true;
// step 2: update $C[v]$ for all non-fixed predecessors $v$ of $k$ (Lemma 7)
forall $v:\neg fixed[v]\wedge((v,k)\in E)$: $C[v]:=\max(C[v],H$.getMin());
// step 3: update $C[k]$ via Eqn. 1
$C[k]:=\max(C[k],\min\\{C[v]+w[v,k]~{}|~{}(v,k)\in E\\})$;
// step 4: check if vertex $k$ is fixed
if $(C[k]=D[k])$ then $fixed[k]:=true$; $R:=R\cup\\{k\\}$; $G$.remove($k$); $H$.remove($k$);
else if ($changed\wedge(k\not\in Q)$) then $Q$.insert$(k)$;
Figure 5: Algorithm $SP_{3}$: Using upper bounds as well as lower bounds
Our third algorithm $SP_{3}$ is shown in Fig. 5. We first remove from the heap
$H$ all those non-fixed vertices $j$ such that $D[j]\leq threshold$. All these
vertices are marked as fixed. Also, whenever any vertex is added or removed
from the heap $H$, we also apply the same operation on the heap $G$. In
$SP_{3}$, it is more convenient to keep only the non-fixed vertices in $G$ and
$H$. All the vertices that are marked as fixed are removed from both $G$ and
$H$. Note that the deletion from the heap is only a virtual operation. It
simply corresponds to marking that vertex as fixed. Whenever a vertex is
removed from any of the heaps in the removeMin operation, and it is a fixed
vertex, the algorithm simply discards that vertex and continues. Hence,
vertices are physically removed only via removeMin operation. The getMin
operation removes any fixed vertex via removeMin, so that getMin applies only
to the non-fixed vertices. Whenever vertices are removed from $H$ via
removeMin operation, they are inserted in $R$ which explores them using
processEdge3.
Whenever we process an edge $(z,k)$, we update $D[k]$ as well as $C[k]$. If
$C[k]$ and $D[k]$ become equal then $v_{k}$ is marked as a fixed vertex;
otherwise, if $D[k]$ has changed then it is inserted in $Q$ for later
processing.
To update $C[k]$, we first apply Lemma 7 to all the non-fixed predecessors of
$k$, and then use Eqn. 1 to update $C[k]$. To apply Lemma 7, we set $C[v]$ for
any non-fixed predecessor vertex $v$ as the maximum of its previous value and
$H.getMin()$. The method processEdge3 takes time $O(\max(\log n,\Delta))$ where
$\Delta$ is the maximum in-degree of any vertex.
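One way to render processEdge3 in executable form is sketched below; h_min stands for $H$.getMin(), in_adj is an assumed incoming-adjacency structure, and the surrounding loop is assumed to set $C[j]:=D[j]$ whenever it fixes a vertex $j$, as in Fig. 5.

```python
def process_edge3(z, k, w_zk, D, C, fixed, in_adj, h_min, R, Q):
    """Relax (z,k), tighten C via Lemmas 6 and 7, and fix k when the
    bounds meet. h_min is the current minimum D value of a non-fixed
    vertex (H.getMin()); in_adj[k] lists the incoming edges (v, w) of k."""
    # Step 1: edge relaxation.
    changed = False
    if D[k] > D[z] + w_zk:
        D[k] = D[z] + w_zk
        changed = True
    # Step 2: every non-fixed predecessor v has D[v] >= h_min, hence
    # C[v] can be raised to h_min (Lemma 7).
    for v, _ in in_adj[k]:
        if not fixed[v]:
            C[v] = max(C[v], h_min)
    # Step 3: Eqn. 1 over all incoming edges (fixed v have C[v] = D[v]).
    C[k] = max(C[k], min(C[v] + w for v, w in in_adj[k]))
    # Step 4: fix k when lower and upper bound agree; heap removal is
    # virtual, since fixed entries are skipped on extraction.
    if C[k] == D[k]:
        fixed[k] = True
        R.append(k)
    elif changed:
        Q.add(k)
```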
We now show that $SP_{3}$ generalizes $SP_{2}$ (which, in turn, generalizes
$SP_{1}$).
###### Theorem 4.
Any vertex marked fixed by $SP_{2}$ in any iteration is also fixed by $SP_{3}$
in that iteration or earlier.
###### Proof.
$SP_{2}$ fixes a vertex when $pred[k]$ equals zero, or when $D[k]\leq
d+inWeight[k]$. When $pred[k]$ equals zero, all the predecessors of $v_{k}$
are fixed and their $C$ value matches their $D$ value. Therefore,
$C[k]:=\max(C[k],\min\\{C[v]+w[v,k]~{}|~{}(v,k)\in E\\})$ guarantees that
$C[k]\geq\min\\{D[v]+w[v,k]~{}|~{}(v,k)\in E\\}=D[k]$. Therefore, vertex $k$ is
marked as fixed.
Now suppose that $D[k]\leq d+inWeight[k]$ in $SP_{2}$. Let $z$ be the vertex
that discovered $k$ in $SP_{2}$. Since $D[k]\leq D[z]+w[z,k]$ as well, we get
$D[k]\leq\min(D[z]+w[z,k],\,d+inWeight[k])$. From the definition of
$inWeight[k]$, we get
$D[k]\leq\min(D[z]+w[z,k],\,d+\min\\{w[v,k]~{}|~{}(v,k)\in E\wedge(v\neq
z)\\})$. Since $z$ is a fixed vertex, we get
$D[k]\leq\min(C[z]+w[z,k],\,\min\\{d+w[v,k]~{}|~{}(v,k)\in E\wedge(v\neq
z)\\})$. Since $C[v]$ for every predecessor of $k$ is set to at least $d$ in
step 2 of $SP_{3}$, we get
$D[k]\leq\min(C[z]+w[z,k],\,\min\\{C[v]+w[v,k]~{}|~{}(v,k)\in E\wedge(v\neq
z)\\})$. Combining the two arguments of the $\min$, we get
$D[k]\leq\min\\{C[v]+w[v,k]~{}|~{}(v,k)\in E\\}$. After the assignment in step
3, $C[k]\geq\min\\{C[v]+w[v,k]~{}|~{}(v,k)\in E\\}\geq D[k]$. Since always
$C[k]\leq D[k]$, we get $C[k]=D[k]$ and vertex $k$ is marked as fixed. ∎
We now show that any vertex marked fixed by out-version or in-version of [4]
is also marked fixed by $SP_{3}$.
###### Lemma 9.
(a) $SP_{3}$ fixes any vertex $k$ such that
$D[k]\leq\min\\{D[x]+w[x,y]~{}|~{}\neg fixed(x)\\}$.
(b) $SP_{3}$ fixes any vertex $k$ such that $D[k]\leq\min\\{D[y]~{}|~{}\neg
fixed(y)\\}+\min\\{w[v,k]~{}|~{}(v,k)\in E\\}$.
###### Proof.
(a) follows from the computation of $threshold$ and the marking of vertices as
fixed based on it.
(b) Suppose $D[k]\leq\min\\{D[y]~{}|~{}\neg
fixed(y)\\}+\min\\{w[v,k]~{}|~{}(v,k)\in E\\}$. The first part of the sum is
equal to $H.getMin()$ due to the property of $H$. Therefore, this expression
is equal to $\min\\{H.getMin()+w[v,k]~{}|~{}(v,k)\in E\\}$. From Step 2 in
$SP_{3}$, this expression is at most $\min\\{C[v]+w[v,k]~{}|~{}(v,k)\in E\\}$.
From step 3, we get this expression to be at most $C[k]$. Therefore, $D[k]\leq
C[k]$ and $k$ is fixed.
∎
The following Theorem summarizes properties of $SP_{3}$.
###### Theorem 5.
Algorithm $SP_{3}$ computes the cost of the shortest path from the source
vertex $v_{0}$ to all other vertices in $O(n+e(\max(\log n,\Delta)))$ time,
where $\Delta$ is the maximum in-degree of any vertex.
## VI Algorithm $SP_{4}$: A Parallel Label-Correcting Algorithm
In this section we present an algorithm for the setting where a large number
of cores is available. The goal of the algorithm is to decrease the value of $D$ and
increase the value of $C$ in as few iterations of the while loop as possible.
All our earlier algorithms explore only fixed vertices with the motivation of
avoiding multiple relaxations of the same edge (in the spirit of
Dijkstra’s algorithm). In contrast, $SP_{4}$ is a label-correcting algorithm
that relaxes as many edges as possible in each iteration (in the spirit of
Bellman-Ford algorithm). Similarly, it recomputes $C$ for as many vertices as
possible and terminates faster than $SP_{3}$.
var $D$: array[$0\ldots n-1$] of integer initially $\forall i:D[i]=\infty$;
$fixed$: array[$0\ldots n-1$] of boolean initially $\forall i:fixed[i]=false$;
$C$: array[$0\ldots n-1$] of integer initially $\forall i:C[i]=0$;
$outWeight$: array[$0\ldots n-1$] of integer initially $\forall i:outWeight[i]=\min\\{w[i,j]~{}|~{}(i,j)\in E\\}$;
$Dout$: array[$0\ldots n-1$] of integer initially $\forall i:Dout[i]=\infty$;
int $threshold$; int $minD$;
$D[0]:=0$; $Dout[0]:=D[0]+outWeight[0]$; boolean $changed:=true$;
while ($changed\wedge(\exists i:\neg fixed[i]\wedge(D[i]<\infty))$) do
$changed:=false$;
// Step 1: find the minimum values of $Dout$ and $D$
$threshold:=$ min $Dout[x]$ over all non-fixed vertices;
$minD:=$ min $D[x]$ over all non-fixed vertices;
// Step 2: fix all vertices with $D[x]\leq threshold$
forall $x$ such that $(D[x]\leq threshold)$ in parallel: $fixed[x]:=true$; $C[x]:=D[x]$;
// Step 3: update $D$ values
forall $x,y$ such that $(D[x]<\infty)\wedge\neg fixed[y]\wedge((x,y)\in E)$ in parallel:
if $(D[y]>D[x]+w[x,y])$ then $D[y]:=D[x]+w[x,y]$; $Dout[y]:=D[y]+outWeight[y]$; $changed:=true$;
// Step 4: update $C$ values
forall $y$ such that $\neg fixed[y]$ in parallel: $C[y]:=\max(C[y],minD)$;
forall $y$ such that $\neg fixed[y]$ in parallel: $C[y]:=\max(C[y],\min\\{C[x]+w[x,y]~{}|~{}(x,y)\in E\\})$;
// Step 5: update $fixed$ values
forall $y:\neg fixed[y]\wedge(D[y]<\infty)$ in parallel: if ($C[y]=D[y]$) then $fixed[y]:=true$;
endwhile;
Figure 6: Algorithm $SP_{4}$: A Bellman-Ford Style Algorithm with both upper
and lower bounds
The algorithm is shown in Fig. 6. We use an outer while loop that is executed
so long as $changed\wedge(\exists i:\neg fixed[i]\wedge(D[i]<\infty))$ holds.
The variable $changed$ records whether any vertex changed its $D$ value; this
is a well-known optimization of the Bellman-Ford algorithm for early termination.
If $D$ did not change in the last iteration of the while loop, we have reached
the fixed point for $D$ and it is equal to $cost$. Even if $D$ changed for
some vertices but all vertices are fixed, then their $D$ values cannot change
and we can terminate the algorithm. The conjunct $(D[i]<\infty)$ allows us to
restrict the algorithm to examine only discovered vertices.
In step 1, we find $threshold$ equal to the minimum of all $Dout$ values of
non-fixed vertices just as in $SP_{3}$. We also find $minD$ equal to the
minimum of all $D$ values for non-fixed vertices. With $n$ processors this
step can be done in $O(\log\log n)$ time and $O(n)$ work on a common-CRCW PRAM
with the standard technique of using a doubly logarithmic tree and
cascading[10]. In step 2, we fix all the vertices that have $D$ values less
than or equal to the threshold. This step can be done in $O(1)$ time and
$O(n)$ work. In step 3, we first explore all the discovered vertices. All
vertices adjacent to these vertices become discovered if they have not been
discovered earlier. In addition, we also relax all the incoming edges to
vertices that are not fixed. Clearly, this is equivalent to relaxing all edges
as in the Bellman-Ford algorithm because for fixed vertices their $D$ value
cannot decrease. This step can be performed in $O(1)$ time and $O(e)$ work
with $e$ cores. In step 4, we compute lower bounds for all non-fixed vertices.
We first update $C$ for all non-fixed vertices to be at least as large as
$minD$. In the second parallel step, we simply apply Eqn. 1 to update all
$C$’s for all non-fixed vertices. This step can also be performed in $O(1)$
time and $O(e)$ work with $e$ cores. In step 5, we recompute the array $fixed$
based on $C$ and $D$. This step can be performed in $O(1)$ time and $O(n)$
work. The total number of iterations is at most $n$ giving us the parallel
time complexity of $O(n\log\log n)$ and work complexity of $O(ne)$. The number
of iterations of the $SP_{4}$ algorithm is less than or equal to the number of
iterations required by $SP_{3}$. We now show the following property of
$SP_{4}$.
###### Theorem 6.
Algorithm $SP_{4}$ computes the cost of the shortest path from the source
vertex $v_{0}$ to all other vertices in time $O(n\log\log n)$ and work $O(ne)$
with $e$ processors.
###### Proof.
We first show the correctness of $SP_{4}$. It is sufficient to show that the while loop maintains the invariant that $D[x]$ is an upper bound and $C[x]$ is a lower bound on the cost to the vertex $x$. That Steps 1 and 2 correctly maintain $D$ follows from [4]. Step 3 is the standard Bellman-Ford rule and correctly maintains $D$. Step 4 correctly maintains $C$ due to Lemma 6. Step 5 simply maintains the invariant that $fixed(x)\equiv(C[x]=D[x])$.
The time and work complexities follow from the earlier discussion. ∎
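For concreteness, the following is a minimal sequential Python simulation of $SP_{4}$ under an assumed graph encoding (an adjacency dict plus a weight dict); the parallel forall loops become ordinary loops, and the $Dout$ array is folded into the threshold computation. Note that the sequential sweep in Step 3 may propagate updates within a single pass, which only accelerates convergence relative to the parallel semantics.

```python
import math

def sp4(n, adj, w, source=0):
    """Sequential sketch of SP4. adj: {u: [v, ...]}, w: {(u, v): weight >= 0}."""
    INF = math.inf
    D = [INF] * n
    C = [0] * n
    fixed = [False] * n
    out_weight = [min((w[(i, j)] for j in adj.get(i, [])), default=INF) for i in range(n)]
    D[source] = 0
    changed = True
    while changed and any(not fixed[i] and D[i] < INF for i in range(n)):
        changed = False
        active = [i for i in range(n) if not fixed[i]]
        # Step 1: minimum Dout and minimum D over non-fixed vertices.
        threshold = min((D[i] + out_weight[i] for i in active), default=INF)
        minD = min((D[i] for i in active), default=INF)
        # Step 2: fix all vertices with D[x] <= threshold.
        for x in range(n):
            if D[x] <= threshold:
                fixed[x], C[x] = True, D[x]
        # Step 3: relax every edge into a non-fixed vertex (Bellman-Ford style).
        for x in range(n):
            if D[x] < INF:
                for y in adj.get(x, []):
                    if not fixed[y] and D[y] > D[x] + w[(x, y)]:
                        D[y] = D[x] + w[(x, y)]
                        changed = True
        # Step 4: raise the lower bounds C for non-fixed vertices.
        for y in range(n):
            if not fixed[y]:
                C[y] = max(C[y], minD)
        for y in range(n):
            if not fixed[y]:
                preds = [C[x] + w[(x, y)] for x in range(n) if (x, y) in w]
                if preds:
                    C[y] = max(C[y], min(preds))
        # Step 5: fix every discovered vertex whose bounds meet.
        for y in range(n):
            if not fixed[y] and D[y] < INF and C[y] == D[y]:
                fixed[y] = True
    return D
```

For example, `sp4(3, {0: [1], 1: [2]}, {(0, 1): 2, (1, 2): 3})` returns `[0, 2, 5]`.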
## VII Conclusions and Future Work
In this paper, we have presented four algorithms for the shortest path problem. Algorithms $SP_{1}$ and $SP_{2}$ reduce the number of heap operations required by Dijkstra’s algorithm and allow exploration of multiple vertices in parallel, thereby reducing its sequential bottleneck. Algorithms $SP_{3}$ and $SP_{4}$ require more work than Dijkstra’s algorithm but reduce the sequential bottleneck even further. These algorithms are the first to exploit both upper and lower bounds on the cost of the shortest path to increase the number of vertices that can be explored in parallel. Extending these algorithms to distributed shared memory is a future research direction.
## References
* [1] Ravindra K Ahuja, Kurt Mehlhorn, James Orlin, and Robert E Tarjan. Faster algorithms for the shortest path problem. Journal of the ACM (JACM), 37(2):213–223, 1990.
* [2] Richard Bellman. On a routing problem. Quarterly of applied mathematics, 16(1):87–90, 1958.
* [3] Venkatesan T Chakaravarthy, Fabio Checconi, Prakash Murali, Fabrizio Petrini, and Yogish Sabharwal. Scalable single source shortest path algorithms for massively parallel systems. IEEE Transactions on Parallel and Distributed Systems, 28(7):2031–2045, 2017.
* [4] Andreas Crauser, Kurt Mehlhorn, Ulrich Meyer, and Peter Sanders. A parallelization of dijkstra’s shortest path algorithm. In International Symposium on Mathematical Foundations of Computer Science, pages 722–731. Springer, 1998.
* [5] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269–271, Dec 1959. doi:10.1007/BF01386390.
* [6] Robert W Floyd. Algorithm 97: shortest path. Communications of the ACM, 5(6):345, 1962.
* [7] L. R. Ford. Network flow theory. Technical report, RAND Corporation, 1956.
* [8] Michael L. Fredman and Robert Endre Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. J. ACM, 34(3):596–615, July 1987. doi:10.1145/28869.28874.
* [9] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, July 1968. doi:10.1109/TSSC.1968.300136.
* [10] Joseph JáJá. An introduction to parallel algorithms, volume 17. Addison-Wesley Reading, 1992.
* [11] Kamesh Madduri, David A Bader, Jonathan W Berry, and Joseph R Crobak. An experimental study of a parallel shortest path algorithm for solving large-scale graph instances. In 2007 Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments (ALENEX), pages 23–35. SIAM, 2007.
* [12] Ulrich Meyer and Peter Sanders. $\delta$-stepping: a parallelizable shortest path algorithm. Journal of Algorithms, 49(1):114–152, 2003.
* [13] Rajeev Raman. Recent results on the single-source shortest paths problem. SIGACT News, 28(2):81–87, June 1997. doi:10.1145/261342.261352.
* [14] Mikkel Thorup. On ram priority queues. SIAM Journal on Computing, 30(1):86–109, 2000.
$^{1}$Sentic Lab, Iasi, Romania
$^{2}$Faculty of Computer Science, ”Alexandru Ioan Cuza” University of Iasi, Romania
# Analyzing domain shift when using additional data for the MICCAI KiTS23
Challenge
George Stoica$^{1,2}$ ✉, Mihaela Breaban$^{2}$, Vlad Barbu$^{1}$
###### Abstract
Using additional training data is known to improve results, especially for 3D segmentation of medical images, where training material is scarce and the model must generalize well from few available data. However, the additional data may have been acquired with different instruments and preprocessed such that its distribution differs significantly from the original training data. Therefore, we study techniques that ameliorate domain shift during training so that the additional data can be preprocessed and trained on together with the original data. Our results show that transforming the additional data using histogram matching yields better results than simple normalization.
###### Keywords:
3D Segmentation · Domain Shift · Domain Adaptation.
## 1 Introduction
The segmentation of renal structures (kidney, tumor, cyst) has gained interest in recent years, starting from the KiTS19 Challenge [4] and continuing with KiTS21, KiPA22 (https://kipa22.grand-challenge.org/home/), and currently KiTS23. The accurate segmentation of renal tumors and renal cysts is of great clinical significance and can benefit clinicians in preoperative surgery planning.
Deep learning leverages huge amounts of training data to learn domain-specific knowledge which can be used for predicting on previously unseen data. In medical image segmentation, less training data is available than in other deep learning domains. Therefore, using additional training data has a greater impact on the end results.
In domain adaptation, the usual scenario entails learning from a source distribution and predicting on a different target distribution. The change in distribution between the training dataset and the test dataset is called domain shift. In supervised domain adaptation, labeled data from the target domain is available [9].
The medical image acquisition process is not uniform across institutions, and CT images may have different HU values and varying amounts of noise depending on the acquisition device, the acquisition time, and other external factors. As a consequence, distribution shifts are easily encountered, which affects models that perform well on validation sets but encounter different data in practice. Creating a model that is robust to different distributions requires training on enough data from all the target domains.
When training under domain shift with two datasets of different distributions, the data should be preprocessed to mitigate the data mismatch error caused by the distribution shift. Characteristics of the target dataset have to be incorporated into the training dataset, either by collecting more data from the target distribution or by artificial data synthesis. As the amount of training data from the target distribution (the KiTS23 challenge) is limited, our solution transforms the additional data (taken from the KiPA22 challenge) to the target distribution. We compare two transformations: dataset normalization, which preserves the original (but different) distribution, and histogram matching, which aligns the source distribution with the target.
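To make the two transformations concrete, the following is a minimal NumPy sketch of one-dimensional histogram matching (mapping source intensities through the quantiles of a reference) next to the simple mean/standard-deviation shift; in practice a library routine such as skimage.exposure.match_histograms could be used for the former.

```python
import numpy as np

def match_histograms_1d(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source intensities so their empirical CDF matches the reference CDF."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference intensity at that quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return mapped[src_idx].reshape(source.shape)

def shift_mean_std(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Simple normalization: rescale source to the reference mean and std."""
    return (source - source.mean()) / source.std() * reference.std() + reference.mean()
```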
## 2 Methods
Our approach applies initial preprocessing to an additional dataset used for training. The aim of this preprocessing is to reduce the distribution shift between the additional data and the target domain; it is detailed in Section 2.2. After bridging the distributions of the original and additional data, we preprocess and normalize all the data together and train a 3D U-Net [1] using multiple data augmentation techniques. Finally, we evaluate our model on the validation set.
### 2.1 Training and Validation Data
Our submission uses the official KiTS23 training set, built upon the training
and testing data from the KiTS19 [5] and KiTS21 competitions. In addition to
the official KiTS23 data, our submission made use of the public KiPA22
competition training set [2, 3, 7, 8].
The KiTS23 training dataset contains 489 CTs, each including at least one kidney and tumor region, usually both kidneys, and optionally one or more cyst regions. In contrast, the KiPA22 training data contains only 70 CTs in which only the diseased kidney is selected. KiPA22 images have four segmentation targets: kidney, tumor, artery, and vein. Unlike KiTS23, benign renal cysts are segmented as part of the kidney class in KiPA22. The initial preprocessing of the KiPA22 images removes the artery and vein segmentation masks, keeping only the kidney and tumor classes.
We have used 342 random images from KiTS23 and 70 images from KiPA22 for
training and 147 random images from KiTS23 for validation.
### 2.2 Preprocessing
Initial exploratory data analysis shows that images from KiPA22 have a totally different distribution on the HU scale than images from the KiTS23 training set.
Figure 1: Histograms for both datasets before and after initial preprocessing: (a) both datasets before preprocessing; (b) after shifting the mean and standard deviation; (c) after histogram matching.
While the values of the KiTS23 CT images are mostly centered around −1000 and 0 on the HU scale, KiPA22 images lie between 800 and 1500 and also have a visibly different distribution (Figure 1(a)). Training under domain shift using the original distribution of the second dataset is challenging; therefore we have taken steps to ameliorate the effects of the distribution shift.
To mitigate the impact of the huge distance between the voxel values, the simplest solution is to shift the mean and standard deviation of the KiPA images to match those of KiTS (Figure 1(b)). Nevertheless, the distributions remain visibly different; therefore we also applied histogram matching to transform the KiPA images to the KiTS domain (Figure 1(c)). We choose to transform the KiPA images because the test data will consist of images expected to match the original KiTS distribution; when training under domain shift, only data from the target distribution should be used for validation. For the KiPA dataset, shifting the mean maintains the original shape of the curve, scaled by the factor that changed the standard deviation, spreading the values evenly. Histogram matching, on the other hand, is destructive: the HU values of voxels are spread unevenly to match the KiTS distribution. To select the better transformation, we created the following two datasets and evaluated them separately.
1. Dataset 1: 342 images from KiTS and 70 images from KiPA whose values are shifted by changing the mean and standard deviation.
2. Dataset 2: the same 342 images from KiTS and 70 images from KiPA whose values are transformed by histogram matching.
For both datasets, the same preprocessing steps are applied using the nnUNet framework [6]. Values are clipped at the 0.5th and 99.5th percentiles to remove outliers. Then, images are normalized to zero mean and unit standard deviation, and third-order interpolation is used to resample all images into a spacing of $0.7636\times 0.7636\times 0.7636$ $mm^{3}$.
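A minimal sketch of the clipping and normalization steps just described (the resampling to $0.7636$ mm isotropic spacing, e.g. via third-order spline interpolation, is omitted):

```python
import numpy as np

def clip_and_normalize(volume: np.ndarray) -> np.ndarray:
    """Clip outliers at the 0.5th/99.5th percentiles, then z-score normalize."""
    lo, hi = np.percentile(volume, [0.5, 99.5])
    clipped = np.clip(volume, lo, hi)
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)
```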
### 2.3 Proposed Method
After preprocessing each dataset, we trained the model using the default nnUNet training configuration, which uses a classic 3D U-Net. Instead of an ensemble of five models trained on five folds of the data, we opted to train a single model on all the available training data.
We used region-based training, defining three learning targets: Kidney & Tumor & Cyst, Tumor & Cyst, and Tumor. While this approach directly optimizes the official evaluation metrics, it does not yield good results for predicting the cyst class. We experimented with Dice & Focal loss and Dice & Cross-Entropy loss, the latter achieving the best results.
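As a sketch of how the three nested region targets can be derived from a label map, assuming the KiTS23 label convention of 1 = kidney, 2 = tumor, 3 = cyst (this encoding is an assumption, not stated above):

```python
import numpy as np

def region_targets(seg: np.ndarray) -> np.ndarray:
    """Build the three nested region masks used as learning targets.
    Assumes labels: 1 = kidney, 2 = tumor, 3 = cyst."""
    kidney_and_masses = np.isin(seg, (1, 2, 3))   # Kidney & Tumor & Cyst
    masses = np.isin(seg, (2, 3))                 # Tumor & Cyst
    tumor = seg == 2                              # Tumor
    return np.stack([kidney_and_masses, masses, tumor]).astype(np.float32)
```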
We trained the model with a patch size of $128\times 128\times 128$ and a batch size of two for 1000 epochs. Training started with SGD and Nesterov momentum at a learning rate of 0.01, using a polynomial learning-rate scheduler to decrease the learning rate evenly until it reaches 0.001. To prevent overfitting, we applied multiple data augmentation techniques integrated in nnUNet: rotation, scaling, Gaussian noise, Gaussian blur, random brightness, gamma correction, and mirroring. We did not apply any post-processing to the predictions.
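The learning-rate schedule can be sketched as a polynomial decay from 0.01 to 0.001 over the 1000 epochs; the exponent below is an assumption, not a value stated above:

```python
def poly_lr(epoch: int, num_epochs: int = 1000, initial_lr: float = 0.01,
            final_lr: float = 0.001, exponent: float = 0.9) -> float:
    """Polynomial decay from initial_lr toward final_lr over num_epochs."""
    decay = (1.0 - epoch / num_epochs) ** exponent
    return final_lr + (initial_lr - final_lr) * decay
```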
## 3 Results
We trained on both Dataset 1 and Dataset 2 and used 147 images from KiTS23 for validation. The results are displayed in Table 1. The official competition metrics correspond to the three region targets (kidney & masses, masses, tumor); we also report the Dice scores for kidney and cyst segmentation.
Table 1: Validation results for Dataset 1 and Dataset 2.
Dataset | Dice kidney&masses | Dice masses | Dice kidney | Dice tumor | Dice cyst
---|---|---|---|---|---
Dataset 1 | 95.310904 | 79.072143 | 94.101086 | 76.891783 | 17.012944
Dataset 2 | 95.453839 | 80.760511 | 94.024749 | 78.960431 | 20.766421
Our experiments show that applying histogram matching (Dataset 2) to the additional dataset improves the results for all the target metrics. Simple normalization (Dataset 1) gives better results only for the Dice score of the kidney area, while the Dice scores for tumors and cysts are lower by roughly 2 and 4 percentage points, respectively. Using the original distribution of the KiPA dataset also results in a lower Dice score for cysts because, in KiPA, possible cysts are labeled as the kidney class. Moreover, since the two distributions remain very different even after normalization and preprocessing, the scores are heavily impacted.
In both configurations, the model does not distinguish the cyst class well and many cysts are classified as tumors. There is a class imbalance between cysts and tumors, as cysts generally cover a smaller area. In our case, the low Dice score for cysts results from many false positives. We presume that our learning target is the culprit: we directly minimize the Dice and Cross-Entropy loss for the whole segmentation area (kidney and masses), the lesion area (masses, both tumor and cyst), and ultimately the tumor. As a consequence, the cyst area is learnt only indirectly, and its accuracy is therefore lower. To make the model discriminate between the two classes and reduce the false positives, we suggest changing the learning target to directly minimize the Dice and Cross-Entropy loss for the cyst class.
For training and inference we used a workstation with an RTX 3090 GPU, an AMD Ryzen Threadripper 2970WX 24-core CPU, an SSD, and 31 GB of RAM. Training the model took around 3.4 days. For prediction, the average inference time was 10 minutes per case.
## 4 Discussion and Conclusion
We have explored techniques for mitigating distribution shift when using additional data for the kidney tumor 3D segmentation task. We identify histogram matching as an initial preprocessing step, a form of artificial data synthesis, that completely transforms the additional data to the target domain. Compared to simple normalization, this approach has the advantage of training only on the target distribution, which improves the results, especially for the least frequent classes, cyst and tumor.
We believe that more stable results can be achieved by training an ensemble, and that the discriminative power between cysts and tumors can be enhanced by changing the training target and using techniques that deal with class imbalance.
## References
* [1] Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3d u-net: learning dense volumetric segmentation from sparse annotation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19. pp. 424–432. Springer (2016)
* [2] He, Y., Yang, G., Yang, J., Chen, Y., Kong, Y., Wu, J., Tang, L., Zhu, X., Dillenseger, J.L., Shao, P., et al.: Dense biased networks with deep priori anatomy and hard region adaptation: Semi-supervised learning for fine renal artery segmentation. Medical image analysis 63, 101722 (2020)
* [3] He, Y., Yang, G., Yang, J., Ge, R., Kong, Y., Zhu, X., Zhang, S., Shao, P., Shu, H., Dillenseger, J.L., et al.: Meta grayscale adaptive network for 3d integrated renal structures segmentation. Medical image analysis 71, 102055 (2021)
* [4] Heller, N., Isensee, F., Maier-Hein, K.H., Hou, X., Xie, C., Li, F., Nan, Y., Mu, G., Lin, Z., Han, M., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced ct imaging: Results of the kits19 challenge. Medical Image Analysis 67, 101821 (2021)
* [5] Heller, N., Sathianathen, N., Kalapara, A., Walczak, E., Moore, K., Kaluzniak, H., Rosenberg, J., Blake, P., Rengel, Z., Oestreich, M., et al.: The kits19 challenge data: 300 kidney tumor cases with clinical context, ct semantic segmentations, and surgical outcomes. arXiv preprint arXiv:1904.00445 (2019)
* [6] Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods 18(2), 203–211 (2021)
* [7] Shao, P., Qin, C., Yin, C., Meng, X., Ju, X., Li, J., Lv, Q., Zhang, W., Xu, Z.: Laparoscopic partial nephrectomy with segmental renal artery clamping: technique and clinical outcomes. European urology 59(5), 849–855 (2011)
* [8] Shao, P., Tang, L., Li, P., Xu, Y., Qin, C., Cao, Q., Ju, X., Meng, X., Lv, Q., Li, J., et al.: Precise segmental renal artery clamping under the guidance of dual-source computed tomography angiography during laparoscopic partial nephrectomy. European urology 62(6), 1001–1008 (2012)
* [9] Wang, M., Deng, W.: Deep visual domain adaptation: A survey. Neurocomputing 312, 135–153 (2018)
$^{1}$School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
$^{2}$School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, University Town of Shenzhen, Nanshan, 510085 Shenzhen, China
# A Subword Guided Neural Word Segmentation Model for Sindhi
Wazir Ali$^{1}$, Jay Kumar$^{1}$, Zenglin Xu$^{1,2}$, Congjian Luo$^{1,2}$, Junyu Lu$^{1}$, Junming Shao$^{1}$, Rajesh Kumar$^{1}$, Yazhou Ren$^{1}$
###### Abstract
Deep neural networks employ multiple processing layers to learn text representations, alleviating the burden of manual feature engineering in Natural Language Processing (NLP). Such text representations are widely used to extract features from unlabeled data. Word segmentation is a fundamental and inevitable prerequisite for many languages. Sindhi is an under-resourced language whose segmentation is challenging: it exhibits space-omission and space-insertion issues and lacks a labeled corpus for segmentation. In this paper, we investigate supervised Sindhi Word Segmentation (SWS) using unlabeled data with a Subword Guided Neural Word Segmenter (SGNWS). To learn text representations, we incorporate subword representations into a recurrent neural architecture to capture word information at the morphemic level, taking advantage of Bidirectional Long Short-Term Memory (BiLSTM), a self-attention mechanism, and a Conditional Random Field (CRF). Our proposed SGNWS model achieves an F1 value of 98.51% without relying on feature engineering. The empirical results demonstrate the benefits of the proposed model over existing Sindhi word segmenters.
###### Keywords:
Recurrent neural networks · Sequence tagging · Sindhi word segmentation · Subword representation learning.
## 1 Introduction
Word segmentation is a fundamental and challenging task in text classification and other NLP applications [6]. A word segmenter determines the boundaries of words, i.e., their beginnings and endings [11]. It has been investigated extensively for space-delimited languages including English [7], Arabic [4], and Urdu [46], and for non-space-delimited languages including Chinese [45], Japanese [17], and Burmese [43]. However, word segmentation for the low-resource Sindhi language has not been studied well [16], mainly due to the lack of language resources.

Sindhi word segmentation exhibits space-omission and space-insertion problems [6, 24]. Although white spaces between words are a good indicator of word boundaries, space omission and space insertion introduce ambiguity into the segmentation process. The SWS task is therefore challenging because of resource scarcity, the lack of a standard segmentation benchmark corpus, and the rich morphological features of the Sindhi language [6, 25]. Previously, little work has addressed the SWS problem, employing dictionary-based [6] and rule-based [25, 24, 29, 12] approaches. These existing approaches have limited applicability to open-source implementation for the following reasons: (i) inability to deal with out-of-vocabulary words, (ii) limited robustness on large datasets, and (iii) low segmentation accuracy. Our proposed deep SGNWS model addresses these issues for SWS through a Subword Representation Learning (SRL) approach.
Recently, deep neural architectures have gained popularity in the NLP community [42] by greatly simplifying learning and decoding in a number of NLP applications [10, 23, 39, 40], including word segmentation [4, 35], with neural word embeddings [8, 21] and powerful recurrent neural architectures [14, 34]. More recently, self-attention [37] has also become a popular approach to boost the performance of neural models. Therefore, we tackle the SWS problem by taking advantage of BiLSTM, self-attention, SRL, and CRF without relying on external feature engineering.
In this paper, we propose a language-independent neural word segmentation model for Sindhi. The proposed model efficiently captures character-level information with subword representation learning. We convert segmentation into a sequence tagging problem using a B, I, E, S, X tagging scheme, where B denotes the beginning, I the inside, and E the ending of a word in the given corpus, S is used for tagging a single or special character, and X marks the hard space between words. We train task-oriented [21] Sindhi word representations with a character-level subword approach [18]. To the best of our knowledge, this is the first attempt to tackle SWS as a sequence labeling task. We provide an open-source implementation for further investigation (https://github.com/AliWazir/Neural-Sindhi-word-segmenter). Our novel contributions are listed as follows:
* •
To the best of our knowledge, we are the first to propose a sequence-modeling-based, language-independent neural model to tackle the SWS problem.
* •
The proposed model eliminates the need for external feature engineering by adopting subword representation learning.
* •
We treat SWS as a sequence tagging problem by assigning B, I, E, S, X tags to an unlabeled corpus for word boundary detection.
* •
Extensive experiments demonstrate the dominant performance of our proposed model compared with the baseline approaches.
## 2 Related Work
Recurrent Neural Network (RNN) variants have been widely adopted in a number of learning tasks [41, 30, 20, 38], including sequence tagging problems [15, 23, 42], since the inception of the well-known LSTM network [14]. However, an LSTM can only encode the given sequence in one direction. This limitation is handled by stacking two LSTM networks as a bidirectional encoder, known as BiLSTM [34], trained simultaneously in the forward and backward directions. This is an ideal solution for learning the sequences of a language because, unlike a unidirectional network, a bidirectional network can access both the left and right contexts. Bidirectional RNN variants have been widely employed for word segmentation [39, 40, 22] in Chinese [9, 36], Japanese [17], and Arabic [4], achieving excellent performance without relying on any external feature engineering.

On the one hand, state-of-the-art sequence tagging systems rely on large amounts of task-specific knowledge [23] in the form of hand-engineered features and data preprocessing. On the other hand, the performance of neural models can be enhanced by incorporating unsupervised neural embeddings, including classical [26], character-level [8], deep contextualized [2], and task-oriented [21] embeddings. Moreover, the success of deep neural architectures also relies on optimal hyper-parameter selection [33]. More recently, attention mechanisms [37] in neural models have yielded new state-of-the-art results in multiple NLP tasks. Furthermore, the last layer of a neural model has a significant impact on performance; the CRF [19] is broadly used for decoding in sequence classification tasks [17, 23, 15, 9]. Taking advantage of language-independent neural models for SWS, we propose a model that efficiently captures character-level information with subword representation learning by converting segmentation into a sequence tagging problem.
Presently, the Sindhi language is written in two writing systems, Persian-Arabic and Devanagari [27]. Persian-Arabic is the standard script [24] and is frequently used in online communication, literary work, and journalism. Sindhi has rich morphology due to the frequent use of prefixes and suffixes to express inflections and derivations, which makes it a morphologically complex language. The SWS problem was first addressed in [25] with a word segmentation model using several rule-based algorithms; the model was evaluated on a small dataset consisting of a 16,601-word lexicon, with a cumulative segmentation error rate (SER) of 9.54$\%$. Later, [24] proposed a rule-based word tokenizer with 91.76$\%$ segmentation accuracy. Its segmentation is performed in three steps: the first step handles input and segmentation by white space, the second segments simple and compound words, and the third deals with complex words. In this way, different word types are segmented separately in their model. Moreover, [6] proposed a word segmenter evaluated on a dataset of 157,509 words obtained from a news corpus and a dictionary lexicon; it performs well on the dictionary lexicon but poorly on the news and book corpora. Recently, [12] proposed two algorithms for stemming and lemmatization with an open-source implementation (https://sindhinlp.com/stemlema.php). SWS is a challenging task because it exhibits space-omission and space-insertion problems, partly because of the Arabic script which, although cursive in nature, consists of characters that have inherent joining and non-joining attributes regardless of word boundaries. Apart from these problems, there is no gold-standard benchmark corpus for evaluating the Sindhi segmentation task. In summary, the SWS task is difficult, important, and has not been studied as a sequence modeling problem. Previous approaches rely mainly on rule-based and dictionary-based methods, which have certain limitations, such as the inability to deal with out-of-vocabulary words, limited robustness for other languages, and inefficiency in dealing with large amounts of noisy or raw text.
## 3 Sindhi Morphology
Persian-Arabic is the standard writing script for Sindhi; it is cursive and written from right to left [28, 32]. Sindhi has rich morphology [24] due to the frequent use of prefixes and suffixes to express inflections and derivations, which makes it a morphologically complex language. The Sindhi Persian-Arabic alphabet consists of 52 basic letters: 29 derived from the Arabic language, 3 from the Persian language, and 20 modified letters [16]. It also uses 3 secondary letters, 7 honorific symbols, and diacritic marks [28, 32]. Interestingly, the shape of some letters changes according to their position in a word [6]; such letters are referred to as joiners. A joiner has at most four shapes: (i) initial, (ii) middle, (iii) final, and (iv) isolated; Table 1 shows examples for some letters. Position-independent letters, which have only final or isolated forms, are referred to as non-joiners. White spaces are used to detect word boundaries in Sindhi; however, writers often omit the hard space between two words, so a phrase or a sentence that ends with non-joiner letters becomes one token. In the first case, words are joined with their preceding and succeeding words in the absence of white space, which leads to misspellings. In the second case, the shape of characters remains identical even in the absence of white space. Due to position-independent and space-independent letters, SWS exhibits both space-insertion and space-omission challenges [25, 32].
Table 1: Various shapes of the Sindhi alphabet according to their position in words (ending, middle, initial, and isolated forms). Roman transliterations of the isolated letters shown: B$\bar{e}$, J$\bar{i}$m, S$\bar{i}$n, $\breve{g}$ain, G$\bar{a}$f, N$\breve{u}$n. (The Sindhi glyphs are not reproduced here.)
Table 2: Complete list of Sindhi joiner and non-joiner letters: (i) joiner letters, (ii) non-joiners, and (iii) non-joiner secondary letters. (The glyph table is not reproduced here.)
### 3.1 Space Omission
Space omission is a common phenomenon for Sindhi words that end with non-joiner letters, where the absence of white space still yields the correct visual shape of the words; Table 3 shows an example of a Sindhi sentence with and without white space. Computationally, however, the sentence consists of one token without white spaces between words, whereas it consists of eight tokens when white space is used. Therefore, the omission of white space between words ending with non-joiner letters raises a computational issue.
### 3.2 Space Insertion
Another challenge in SWS arises when two or more root words (morphemes) combine to form a new standalone word (see Table 4). In such cases, writers omit the white space if the first morpheme ends with a joiner letter; white space would prevent it from joining the next morpheme, so that the word retains a valid visual form. The missing space leads to the formation of compound words and often to misspellings. Hence, white space is essential in this case for readability and the correct spelling of Sindhi words.
Table 3: An example of a Sindhi sentence in which all words end with non-joiner letters: (i) the words with white space (tokens separated by ‘-’), (ii) without white space, (iii) the Roman transliteration, and (iv) the English translation. (The Sindhi text is not reproduced here.)
Table 4: Sindhi word types with examples of space insertion, along with English translations. Columns (i) and (ii) show the words with white space (‘-’ represents a space) and without; the Sindhi glyphs are not reproduced here. Roman transliterations are given for ease of reading.
Word Type | Roman | English Translation
---|---|---
Affix | Be-hisaab | Uncountable
Reduplicate | haidai hodai | Here and there
Compound | saahib-e-Qudrat | Powerful
Borrowed | Mobile phone | Mobile Phone
Abbreviation | Ain Cee Aich Dee | NCHD
## 4 Methodology
In this section, the proposed methodology is described in detail. First, we cast segmentation as a classification problem by introducing the proposed B, I, E, S, X tagging scheme; labels are assigned to each Sindhi character in the dataset, including punctuation marks and numbers. Afterwards, we describe the baseline models as well as the proposed SGNWS model. Finally, we present the experimental details and the variants of the neural models, including word representations and character-level SRL for predicting subword boundaries.
### 4.1 Tagging Scheme
We model word segmentation as character-level sequence labeling [9]. In theory, word boundaries can be predicted with binary classification, but in practice fine-grained tag sets [44] yield higher segmentation accuracy. Following [36], we employ four tags [B, I, E, S] to indicate the position of a letter at the beginning [B], inside [I], or end [E] of a word, or as a single character/symbol [S]. Additionally, [X] represents the white space that delimits word boundaries. An example sentence annotated with the proposed tags is shown in Table 5.
Table 5: An example of the character-level sequence tagging scheme employed for the SWS task. The [X] label represents white spaces. The Sindhi sentence reads from right to left, and the Roman transliteration of each Sindhi token reads from left to right.
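A minimal Python sketch of the tagging scheme applied to a space-delimited string (the function name is illustrative); note that a multi-character token such as the date 25-06-2020 naturally receives the continuous tags B, I, …, I, E:

```python
def bies_x_tags(sentence: str):
    """Tag each character with B/I/E/S and each inter-token space with X."""
    pairs = []
    tokens = [t for t in sentence.split(" ") if t]
    for i, token in enumerate(tokens):
        if i > 0:
            pairs.append((" ", "X"))      # hard space between tokens
        if len(token) == 1:
            pairs.append((token, "S"))    # single character/symbol
        else:
            pairs.append((token[0], "B"))
            pairs.extend((ch, "I") for ch in token[1:-1])
            pairs.append((token[-1], "E"))
    return pairs
```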
### 4.2 Recurrent Neural Architectures
#### 4.2.1 Long Short-Term Memory Unit:
The LSTM network [14] is an extension of the RNN proposed to solve the vanishing and exploding gradient problems. For a given input $x_{t}$ of a sentence $S=\left[x_{1},x_{2},x_{3},\dots,x_{n}\right]$, each word is represented as an $N$-dimensional vector (word representation). As mentioned earlier, Sindhi is written from right to left; thus, an LSTM network computes a representation $\overleftarrow{h}_{t}$ of the right context of the given input at each time step $t$. The memory cell $c$ allows the network to learn when to forget previous information and when to update the memory given new information. The core LSTM architecture contains forget $f$, input $i$, and output $o$ gates, which regulate the information flowing in and out at the current step $t$. The gates, cell update, and output of the LSTM are as follows:
$$i^{t}=\sigma\left(W_{i}h_{t-1}+U_{i}x_{t}+b_{i}\right)$$
$$f^{t}=\sigma\left(W_{f}h_{t-1}+U_{f}x_{t}+b_{f}\right)$$
$$\tilde{c}^{t}=\tanh\left(W_{c}h_{t-1}+U_{c}x_{t}+b_{c}\right)$$
$$c^{t}=f^{t}\odot c^{t-1}+i^{t}\odot\tilde{c}^{t}$$
$$o^{t}=\sigma\left(W_{o}h_{t-1}+U_{o}x_{t}+b_{o}\right)$$
$$h^{t}=o^{t}\odot\tanh\left(c^{t}\right)$$ (1)
where $\sigma$ and $\odot$ denote the element-wise sigmoid function and the element-wise product, and $U$, $W$, $b$ denote the input weight matrix, the hidden-state weight matrix, and the bias vector for each LSTM gate, respectively. The core of the model is the memory cell $c$, which encodes long-term temporal dependencies of the observed inputs at every time step $t$.
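A NumPy sketch of one LSTM step following Eq. (1); the layout of the parameter dictionary is an illustrative assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of Eq. (1); p maps gate names to W, U, b parameters."""
    i = sigmoid(p["Wi"] @ h_prev + p["Ui"] @ x_t + p["bi"])       # input gate
    f = sigmoid(p["Wf"] @ h_prev + p["Uf"] @ x_t + p["bf"])       # forget gate
    c_tilde = np.tanh(p["Wc"] @ h_prev + p["Uc"] @ x_t + p["bc"])  # candidate cell
    c = f * c_prev + i * c_tilde                                   # cell update
    o = sigmoid(p["Wo"] @ h_prev + p["Uo"] @ x_t + p["bo"])       # output gate
    h = o * np.tanh(c)                                             # hidden state
    return h, c
```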
#### 4.2.2 Bidirectional Long Short-Term Memory (BiLSTM):
The BiLSTM model encodes text sequences from both the left and right directions into separate forward $\overrightarrow{h}$ and backward $\overleftarrow{h}$ hidden states, capturing the left and right context information; the two hidden states are then concatenated for the final output. An LSTM hidden state $h_{t}$ alone can only encode the context of one direction, e.g., the left, knowing nothing about the right. The BiLSTM computes the forward $\overrightarrow{h}$ and backward $\overleftarrow{h}$ hidden states of a given input $x_{t}$ and combines them to generate the output $y_{t}$. This process can be expressed as follows:
$$\overrightarrow{h}_{t}=H\left(W_{x\overrightarrow{h}}x_{t}+W_{\overrightarrow{h}\overrightarrow{h}}\overrightarrow{h}_{t-1}+b_{\overrightarrow{h}}\right)$$
$$\overleftarrow{h}_{t}=H\left(W_{x\overleftarrow{h}}x_{t}+W_{\overleftarrow{h}\overleftarrow{h}}\overleftarrow{h}_{t+1}+b_{\overleftarrow{h}}\right)$$
$$y_{t}=W_{\overrightarrow{h}y}\overrightarrow{h}_{t}+W_{\overleftarrow{h}y}\overleftarrow{h}_{t}+b_{y}$$ (2)
where $H$ is the hidden-layer function implemented by the LSTM cell, and the output $y_{t}$ combines the corresponding hidden outputs of the forward and backward LSTM cells.
### 4.3 Tag Inference
The CRF is an effective approach for sequence tagging problems [19] because it learns a scoring function over tag pairs, such as B, I, E, S, at the training stage. It benefits sequence tagging by modeling the correlation between neighboring tags [23] and efficiently decodes the best chain of tags for a given input sequence. The probability of a tag sequence under the CRF is formulated as:
$$P(Y|X)=\frac{\prod_{i=2}^{n}\exp\left(s(X,i)_{y_{i}}+b_{y_{i-1}y_{i}}\right)}{\sum_{y^{\prime}}\prod_{i=2}^{n}\exp\left(s(X,i)_{y^{\prime}_{i}}+b_{y^{\prime}_{i-1}y^{\prime}_{i}}\right)}$$ (3)
where $y_{i}\in\{B,I,E,S\}$ are the tags, the scoring function $s\left(X,i\right)_{y_{i}}$ is the output of the hidden layer at the $i$-th position, and the $b_{y_{i-1}y_{i}}$ are trainable transition parameters. Decoding is the search for the tag sequence $Y$ with the highest conditional probability; by solving Eq. (4), we obtain the optimal tag sequence:
$$Y^{*}=\operatorname{argmax}_{Y}P(Y|X)$$ (4)
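Decoding in Eq. (4) is typically carried out with the Viterbi algorithm; a minimal sketch over per-position emission scores $s(X,i)$ and tag-transition scores $b$:

```python
import numpy as np

def viterbi_decode(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """Return the highest-scoring tag sequence (Eq. 4).
    emissions: (T, K) per-position tag scores s(X, i);
    transitions: (K, K) tag-bigram scores b[prev, cur]."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[prev, cur] = best score ending in `prev` plus transition and emission.
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):   # follow back-pointers
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```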
### 4.4 Subword Representation Learning
We use a BiLSTM network [14] for SRL, representing each word $w$ from a fixed vocabulary $V$ of unlabeled Sindhi text as a sequence of forward and backward character representations. Character representations $E^{c}=\left[c_{1},c_{2},c_{3},\dots,c_{i}\right]$, bigrams $E^{B}=\left[c_{i},c_{i+1}\right]$, and trigrams $E^{T}=\left[c_{i},c_{i+1},c_{i+2}\right]$ of a given word are learned to capture the structure of words at the morphemic level. We then utilize both forward and backward representations by concatenating them:
$$\overrightarrow{h}_{t}=\text{LSTM}\left(E^{C_{i}}:E^{B_{i}}:E^{T_{i}},\ \overrightarrow{h}_{t-1}\right)$$
$$\overleftarrow{h}_{t}=\text{LSTM}\left(E^{C_{i}}:E^{B_{i}}:E^{T_{i}},\ \overleftarrow{h}_{t+1}\right)$$
$$\text{BiLSTM}\left(Emb_{S}\right)=\overrightarrow{h}_{|w|}:\overleftarrow{h}_{1}$$ (5)
where $Emb_{S}$ is the concatenated output of the bidirectional $\overrightarrow{h}_{|w|}$ and $\overleftarrow{h}_{1}$ representations of the LSTM layers over the sequence of character n-grams.
Table 6: An example of Sindhi subword decomposition for subword representation
learning
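A sketch of the subword decomposition of Table 6, extracting the character n-grams used as input (the function is illustrative):

```python
def char_ngrams(word: str, n_values=(1, 2, 3, 4)) -> dict:
    """Decompose a word into character unigrams up to 4-grams."""
    return {n: [word[i:i + n] for i in range(len(word) - n + 1)]
            for n in n_values}

# e.g. char_ngrams("word")[2] -> ['wo', 'or', 'rd']
```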
### 4.5 Proposed Model
The proposed SGNWS architecture consists of the following layers, which we explain in their sequential order:
* •
Input layer: the model takes character-level input $x_{t}=c_{1},c_{2},c_{3},\dots,c_{i}$ consisting of character unigrams $c_{i}$, bigrams $(c_{i},c_{i+1})$, trigrams $(c_{i},c_{i+1},c_{i+2})$, and 4-grams of each word $w_{n}$ for SRL, as depicted in Table 6.
* •
Embedding layer: given the character-level input, we learn bidirectional unigram $E_{c}$, bigram $E_{c}^{B}$, trigram $E_{c}^{T}$, and 4-gram representations of the input words $w_{n}$ and other numerical features, and concatenate them into subword embeddings as formulated in Eq. (5). These embeddings are then fed to the model after passing through a non-linear bidirectional layer.
* •
LSTM layers: we utilize the forward $\overrightarrow{h}$ and backward $\overleftarrow{h}$ layers of the BiLSTM to obtain high-level features from the embedding layer; the n-gram-based subword representations are passed through both layers.
* •
Hidden layer: the bidirectional output of the forward and backward hidden layers is concatenated and passed through a hidden layer before the subsequent self-attention and CRF layers.
* •
Self-attention layer: we add a self-attention layer before the CRF classifier, which dynamically decides how much information to use from the token-level components.
* •
Output layer: finally, a CRF layer is placed on top of the last hidden layer to incorporate transition information between succeeding tags and obtain the optimal tag sequence over the entire sentence. In this way, the CRF decodes the best chain of tags $Y^{*}$ for the given input sequence, as in Eq. (4).
## 5 Experimental Setup
This section provides details of the experimental settings for the baseline models and the proposed SGNWS architecture. We use the TensorFlow [1] deep learning framework for the implementation of all neural models and conduct all experiments on a GTX 1080-TITAN GPU.
### 5.1 Dataset
We utilize the recently proposed unlabeled Sindhi text corpus [3] in our experimental setting. We convert segmentation into a sequence tagging problem using the B, I, E, S, X tagging scheme and split the dataset into 80% for training and 20% for the development and test sets. The complete statistics of the dataset are given in Table 7. For consistency, we split each sentence at the punctuation marks period, comma, question mark, colon, semicolon, exclamation mark, and dash, and discard sentences with fewer than 5 tokens. Moreover, we split long sentences at white space if their length exceeds 300 tokens. The regular hard space is tagged as [X] in the dataset. However, multi-word tokens such as numerical expressions (689.0967), dates (25-06-2020), and money (4736$) are assigned continuous tags; for example, the date 25-06-2020 is assigned the continuous tags BIIIIIIIIE.
Table 7: Statistics of the unlabeled datasets used in the experiments. We concatenate all the datasets and refer to the result as SDSEG in the general experiments.
Domain | Sentences | Tokens | Unique words | Average word length
---|---|---|---|---
Kawish newspaper | 24,212 | 601,910 | 10,721 | 3.687
Awami-Awaz newspaper | 19,736 | 521,257 | 14,690 | 3.660
Wikipedia dumps | 14,557 | 669,623 | 11,820 | 3.738
Twitter | 10,752 | 159,130 | 17,379 | 3.820
Books | 22,496 | 430,923 | 16,127 | 3.684
Total | 91,753 | 2,382,843 | 70,737 | 3.717
### 5.2 Baseline Models
To analyze and compare the performance of the proposed model, we conduct several baseline experiments by training LSTM, BiLSTM, and BiLSTM-CRF models. We train task-specific [21] character-level word representations in the baseline experiments. Each approach is briefly described as follows:
1. LSTM: our first baseline is the LSTM network exploited with character-level word representations using a task-oriented strategy [21]. We use a softmax classifier in the last layer of the network to decode the tag sequences.
2. BiLSTM: the BiLSTM has the advantage of encoding forward and backward sequences to efficiently capture word information at the morphemic level. As with the LSTM network, we use softmax in the last layer for decoding.
3. BiLSTM-CRF: the third baseline is a BiLSTM-CRF network with hyper-parameter settings similar to the LSTM and BiLSTM networks; CRF inference in the last layer is used for decoding.
All the hyper-parameters for the baseline models and SGNWS are kept identical (see Table 8) for a fair comparison of performance.
### 5.3 Evaluation metrics
We report word-boundary Precision, Recall, and F-Score as defined in Eqs. (6)–(8). Precision measures the percentage of correctly predicted tags with respect to the predicted boundaries, and Recall measures the percentage of correctly predicted tags with respect to the true boundaries. The F-Score is the harmonic mean of Precision and Recall, which can be interpreted as their weighted average.
$$\text{Precision}=\frac{\#(\text{correctly predicted tags})}{\#(\text{predicted tags})}$$ (6)
$$\text{Recall}=\frac{\#(\text{correctly predicted tags})}{\#(\text{true tags})}$$ (7)
$$F\text{-Score}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}$$ (8)
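As an illustration, word spans can be recovered from the tag sequences and scored at the boundary level; this is one interpretation of Eqs. (6)–(8), since the exact counting units are not specified above:

```python
def spans_from_tags(tags):
    """Recover word spans (start, end) from a B/I/E/S/X tag sequence."""
    spans, start = set(), None
    for i, t in enumerate(tags):
        if t == "S":
            spans.add((i, i))
        elif t == "B":
            start = i
        elif t == "E" and start is not None:
            spans.add((start, i))
            start = None
    return spans

def boundary_prf(true_tags, pred_tags):
    """Precision, recall, and F-Score over predicted vs. true word spans."""
    true_s, pred_s = spans_from_tags(true_tags), spans_from_tags(pred_tags)
    correct = len(true_s & pred_s)
    p = correct / len(pred_s) if pred_s else 0.0
    r = correct / len(true_s) if true_s else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```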
### 5.4 Parameter setting and training
The training procedure tunes all parameters of the network from the training data. We train the baselines and the proposed model using the log-likelihood objective; in our baseline experiments, optimizing the log-likelihood already gave strong performance compared to global learning [5], which maximizes the F-Score directly. We distribute the SDSEG dataset into training, development, and test sets. We apply variational dropout [13] to both the input and output recurrent units. Softmax is used for label classification in the baseline LSTM and BiLSTM models, while a CRF is added as the last layer of the BiLSTM-CRF and SGNWS models. Gradient normalization [31] is used to improve performance; it rescales the gradient when its norm exceeds a threshold. The optimal hyper-parameters for SRL, the baselines, and the proposed model are listed in Table 8.
Table 8: Optimal hyper-parameters for SRL, the baseline neural models, and the proposed SGNWS model.
Component | Hyper-parameter | Value
---|---|---
SRL | $E^{c}$ dimension | 64
SRL | $E^{B}$ dimension | 64
SRL | $E^{T}$ dimension | 64
SRL | $Emb_{S}$ dimension | 64
SRL | Epochs | 100
Neural models | Optimizer | Adamax
Neural models | Learning rate | 0.025
Neural models | Gradient normalization | 5.0
Neural models | Hidden layers | 200
Neural models | Dropout | 0.25
Neural models | Epochs | 40
Table 9: Results of the baselines and the proposed SGNWS model for Sindhi word segmentation on the SDSEG development and test sets.
RNN variant | Dev P | Dev R | Dev F | Test P | Test R | Test F
---|---|---|---|---|---|---
LSTM (baseline) | 96.38 | 95.68 | 95.29 | 94.81 | 94.57 | 94.32
BiLSTM | 96.86 | 96.21 | 96.19 | 96.52 | 94.28 | 95.87
BiLSTM-CRF | 97.25 | 96.38 | 96.74 | 96.11 | 95.87 | 96.18
BiLSTM-CRF+Char | 97.76 | 97.81 | 96.38 | 96.82 | 97.26 | 96.78
BiLSTM-CRF+Bigram | 96.34 | 97.89 | 96.58 | 96.13 | 97.23 | 96.74
BiLSTM-CRF+Trigram | 97.14 | 98.29 | 96.89 | 97.32 | 97.68 | 96.53
SGNWS | 99.77 | 98.83 | 98.94 | 99.08 | 98.72 | 98.51
## 6 Results and analysis
Table 9 compares the performance of all models on the SDSEG dataset. The LSTM yields a stable baseline F-Score of 95.29% on the development set and 94.32% on the test set. The BiLSTM model outperforms the LSTM on both sets thanks to its bidirectional learning states. The BiLSTM-CRF is in turn superior to both the LSTM and BiLSTM baselines, showing that the CRF dominates the softmax classifier. Moreover, adding character-level features to the BiLSTM-CRF model surpasses the three baselines, while BiLSTM-CRF with bigram- and trigram-based word embeddings yields results close to the BiLSTM-CRF+Char model, indicating that the character-level approach gives a modest performance gain. The proposed SGNWS model produces superior results over all baselines on the development and test data, as shown in Table 9; according to these results, SRL is beneficial for the SWS task. The proposed SGNWS model surpasses all baselines on the SDSEG dataset as well as on the five individual datasets (see Figure 1) of Kawish, Awami-Awaz, Wikipedia dumps, Twitter, and books.
Figure 1: Performance of the proposed SGNWS model on the various datasets. The F-Score is reported on the test set of each dataset.
Our proposed SGNWS model surpasses the baselines with a high F-Score of 98.94% on the development set and 98.51% on the test set of the SDSEG dataset. This indicates that SRL captures semantic information that benefits the word segmentation of Sindhi text.
## 7 Conclusion
Word segmentation is an essential and non-trivial task for the Sindhi language. White spaces between words are a good indicator of word boundaries, but space omission and space insertion introduce ambiguity into the segmentation process. We proposed the SGNWS model with these SWS challenges in mind. The proposed model learns and extracts subword features automatically, eliminating constraints such as hand-crafted features for segmentation or other prior domain-specific knowledge.
In this paper, we empirically demonstrated that our proposed model yields the best performance for SWS thanks to its efficiency and robustness in sequential modeling tasks and its ability to capture word information at the morphemic level for predicting word boundaries. The SGNWS model is an effective and elegant neural solution for SWS that can also be applied to other sequence tagging problems.
## Acknowledgement
This work was funded by the National Key R&D Program of China (No.
2018YFB1005100 & No. 2018YFB1005104).
## References
* [1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: Tensorflow: A system for large-scale machine learning. In: 12th $USENIX$ Symposium on Operating Systems Design and Implementation $(OSDI)$ 16). pp. 265–283 (2016)
* [2] Akbik, A., Blythe, D., Vollgraf, R.: Contextual string embeddings for sequence labeling. In: Proceedings of the 27th International Conference on Computational Linguistics. pp. 1638–1649 (2018)
* [3] Ali, W., Kumar, J., Lu, J., Xu, Z.: Word embedding based new corpus for low-resourced language: Sindhi. arXiv preprint arXiv:1911.12579 (2019)
* [4] Almuhareb, A., Alsanie, W., Al-Thubaity, A.: Arabic word segmentation with long short-term memory neural networks and word embedding. IEEE Access 7, 12879–12887 (2019)
* [5] Andor, D., Alberti, C., Weiss, D., Severyn, A., Presta, A., Ganchev, K., Petrov, S., Collins, M.: Globally normalized transition-based neural networks. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. pp. 2442–2452 (2016)
* [6] Bhatti, Z., Ismaili, I.A., Soomro, W.J., Hakro, D.N.: Word segmentation model for Sindhi text. American Journal of Computing Research Repository 2(1), 1–7 (2014)
* [7] Bird, S.: NLTK: the natural language toolkit. In: Proceedings of the COLING/ACL on Interactive presentation sessions. pp. 69–72 (2006)
* [8] Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, 135–146 (2017)
* [9] Chen, X., Qiu, X., Zhu, C., Liu, P., Huang, X.: Long short-term memory neural networks for Chinese word segmentation. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. pp. 1197–1206 (2015)
* [10] Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., Kuksa, P.: Natural language processing (Almost) from scratch. The Journal of Machine Learning Research 12, 2493–2537 (2011)
* [11] Ding, C., Thu, Y.K., Utiyama, M., Sumita, E.: Word segmentation for Burmese (Myanmar). ACM Transactions on Asian and Low-Resource Language Information Processing 15(4), 1–10 (2016)
* [12] Dootio, M.A., Wagan, A.I.: Automatic stemming and lemmatization process for Sindhi text. Journal of Social Sciences & Interdisciplinary Research 6(2), 19–28 (2017)
* [13] Gal, Y., Ghahramani, Z.: A theoretically grounded application of dropout in recurrent neural networks. In: Advances in Neural Information Processing Systems. pp. 1019–1027 (2016)
* [14] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735–1780 (1997)
* [15] Huang, Z., Xu, W., Yu, K.: Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 (2015)
* [16] Jamro, W.A.: Sindhi language processing: A survey. In: International Conference on Innovations in Electrical Engineering and Computational Technologies. pp. 1–8 (2017)
* [17] Kitagawa, Y., Komachi, M.: Long short-term memory for Japanese word segmentation. In: Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation (2018)
# Leveraging Modality-specific Representations for Audio-visual Speech
Recognition via Reinforcement Learning
Chen Chen1, Yuchen Hu1, Qiang Zhang2, 3, Heqing Zou1, Beier Zhu1, and Eng
Siong Chng1
###### Abstract
Audio-visual speech recognition (AVSR) has achieved remarkable success in
improving the noise-robustness of speech recognition. Mainstream methods
focus on fusing audio and visual inputs to obtain modality-invariant
representations. However, such representations are prone to over-reliance on
the audio modality, as it is much easier to recognize than the video modality
in clean conditions. As a result, the AVSR model underestimates the importance
of the visual stream in the face of noise corruption. To this end, we leverage
visual modality-specific representations to provide stable complementary
information for the AVSR task. Specifically, we propose a reinforcement
learning (RL) based framework called MSRL, where the agent dynamically
harmonizes modality-invariant and modality-specific representations in the
auto-regressive decoding process. We customize a reward function directly
related to task-specific metrics (_i.e._, word error rate), which encourages
MSRL to effectively explore the optimal integration strategy. Experimental
results on the LRS3 dataset show that the proposed method achieves
state-of-the-art performance in both clean and various noisy conditions.
Furthermore, we demonstrate that MSRL generalizes better than other baselines
when the test set contains unseen noises.
## 1 Introduction
Background noise is inevitable in the real world and can dramatically degrade
speech quality and intelligibility, thereby increasing the difficulty of the
speech recognition task (Hu et al. 2022a, b). In noisy scenarios, humans
unconsciously observe the mouth region of speakers, as such noise-invariant
visual cues provide useful information for understanding corrupted speech
(Ma, Petridis, and Pantic 2021).
Similarly, the audio-visual speech recognition (AVSR) technique couples the
audio and visual modalities and has attracted increasing research interest for
several years (Noda et al. 2015). Recent machine learning based AVSR methods
successfully demonstrate that deep neural networks (DNNs) can process and fuse
raw acoustic and visual inputs to improve noise-robustness through a
supervised learning paradigm (Petridis et al. 2018; Zhou et al. 2019).
Additionally, self-supervised representation learning has been explored to
capture the correlations between audio and visual lip movements for the AVSR
task, bringing remarkable gains in terms of the word error rate (WER) metric
(Shi et al. 2022).
Figure 1: Research problem. “SNR” denotes the signal-to-noise ratio, and
$\alpha$ is the threshold below which modality-invariant representations lose
their effectiveness compared with visual modality-specific representations.
Mainstream AVSR methods focus on learning modality-invariant representations
by fusing audio and visual modalities into a common subspace (Song, Sun, and
Li 2022). However, such a fusion manner is prone to over-reliance on the audio
modality, as it is much easier to recognize than the video stream in clean
conditions (Mittal et al. 2020). As noise levels rise, the importance of the
video stream is increasingly underestimated in AVSR systems, leading to
sub-optimal performance since the audio modality has already been corrupted by
noise. We further illustrate this research problem in Fig. 1. Though
modality-invariant representations (green line) outperform audio
modality-specific representations (yellow line) by a large margin, they are
still vulnerable to low signal-to-noise ratio (SNR) conditions. It is worth
noting that when the SNR is lower than $\alpha$, the modality-invariant
representations even perform worse than visual modality-specific
representations (blue line), which are completely unaffected by noise. We
argue that this problematic situation can be avoided by reasonable
coordination of modality-invariant and modality-specific representations,
shown as the oracle system (red dashed line).
Although the significance of modality-specific representations has been
emphasized in other multi-modal tasks, such as emotion recognition (Hazarika,
Zimmermann, and Poria 2020), it remains challenging to integrate them into an
AVSR system for several reasons. Firstly, real-world noises have dynamic and
non-stationary temporal distributions, which make it difficult for the
recognizer to estimate the importance of visual modality-specific
representations during auto-regressive decoding. Secondly, due to the natural
distinction between input modalities, a uniform training schedule probably
results in vanishing gradients when a further sub-network is added to extract
visual modality-specific representations (Yao and Mihalcea 2022). Finally,
with the parameter growth of neural networks, existing integration strategies
for new representations are prone to over-fitting to specific types of noise
distribution (Fu, Wu, and Boulet 2022), thereby failing to adapt to unseen
noises in the wild.
In this work, we aim to improve the AVSR system by leveraging visual
modality-specific representations that carry noise-invariant information from
the visual stream. To this end, we employ a pre-trained vision model that
takes lip movement information as input and generates an independent
probability distribution for sequence generation. This idea is inspired by
language model rescoring, which has been widely applied in popular ASR methods
(Song et al. 2022; Xu et al. 2022; Chen et al. 2022a). However, instead of
applying a typical integration approach, _e.g._, late fusion (Inaguma et al.
2019), we propose a reinforcement learning (RL) based method to dynamically
harmonize the integration process in terms of the task-specific metric. RL is
appropriate to play this integration role because: 1) the auto-regressive
decoding of AVSR can be modeled as an RL formulation (Bahdanau et al. 2016),
where the agent can consider multiple sources of information for reasonable
token prediction; 2) the customized reward function of RL bridges the training
criterion and WER, thus encouraging it to effectively improve the model
performance; 3) the beam search of the inference step can provide a set of
hypotheses for RL sampling, which allows the agent to explore the optimal
policy at the token level (Chen et al. 2022c).
The main contributions of this paper can be summarized as follows:
* •
We propose MSRL – a novel AVSR system that utilizes visual modality-specific
representations to dynamically remedy the noise-corrupted audio modality.
* •
MSRL adopts an RL-based integration method, where a new reward function is
designed to encourage the agent to efficiently explore the optimal strategy in
terms of the task-specific metric.
* •
Experimental results on the largest public LRS3 dataset show that MSRL is
effective and achieves state-of-the-art performance in both clean and various
noisy conditions. Furthermore, the comparative experiments on unseen noises
demonstrate that MSRL has better generalization and adaptability than a strong
baseline.
## 2 Related work
Audio-visual speech recognition. Recently, AVSR has attracted increasing
research interest as it provides a potential solution for noise-robust speech
recognition. To process and fuse audio-video modalities, TM-seq2seq (Afouras,
Chung, and Zisserman 2018a) applies a separate Transformer encoder to each
modality and fuses them before decoding. (Ma, Petridis, and Pantic 2021)
presents a hybrid CTC/Attention model based on ResNet-18 and Conformer (Gulati
et al. 2020), which can be trained in an end-to-end manner. (Tao and Busso
2018) demonstrates the importance of aligning the two modalities before fusing
them. Moreover, AV-HuBERT (Shi et al. 2022) learns the correspondence of audio
and video modalities in a self-supervised manner, which is further augmented
in (Shi, Hsu, and Mohamed 2022) to improve noise-robustness.
Modality-invariant and modality-specific representations. Despite the advanced
fusion techniques in multi-modal tasks (Chen et al. 2022b), prior works
suggest that models can benefit from modality-specific representations that
capture some desirable properties (Xiong et al. 2020). Nevertheless, how to
effectively utilize them is still an open question. MISA (Hazarika,
Zimmermann, and Poria 2020) maps the multi-modal inputs into two subspaces for
modality-invariant and modality-specific representations and then fuses them
for final classification. MuSE (Yao and Mihalcea 2022) employs separate
encoders for multiple modalities, then harmonizes them using different
learning rates and late fusion. Similarly, (Feng, Lai, and Xie 2019)
constructs an individual network for each modality, as well as designing a
modality-shared identity loss to facilitate the extraction of
modality-invariant representations. The integration of modality-specific
representations is particularly difficult in an AVSR system because it is hard
for a single decoder to dynamically weight the representations in a sequential
decision process.
Reinforcement learning in sequence generation. Extensive existing works have
demonstrated that RL is suitable for an optimizing role in sequence generation
tasks. In captioning tasks like image captioning (Rennie et al. 2017) and
audio captioning (Mei et al. 2021), a self-critical training approach based on
RL can optimize the trained model in terms of non-differentiable metrics. Such
an idea has also been extended to ASR tasks with a customized reward function
(Tjandra, Sakti, and Nakamura 2019; Chen et al. 2022c). Additionally,
actor-critic based RL optimization algorithms have been designed to improve
task-specific scores (_e.g._, BLEU) in the machine translation task (Williams
1992; Bahdanau et al. 2016). Compared with previous work, the proposed MSRL
commits to the stability of RL training, where 1) we utilize pre-trained
models to provide learned representations as the state space, and 2) we design
a reward function that encourages the policy network to explore within a trust
region (Schulman et al. 2015).
Figure 2: The block diagram of the proposed MSRL system. A solid box denotes
that the module is fixed during training, while a dashed box denotes that it
is trainable. The red dashed arrow denotes the back-propagation process. The
policy network considers multiple sources of information in auto-regressive
decoding and predicts the current token “fine”.
## 3 Methodology
In this part, we first introduce the main structure of the proposed MSRL
system in Section 3.1. Then, in Section 3.2, we model the auto-regressive
decoding of AVSR as an RL formulation and give the mathematical derivation for
the optimization. Finally, the training schedule of MSRL is described in
Section 3.3.
### 3.1 Main Structure
Given the acoustic utterance $A$ and its paired $l$-frame video stream
$V=(v_{1},v_{2},...,v_{l})$, the neural network of AVSR intends to predict a
hypothesis sequence $Y=(y_{1},y_{2},...,y_{T})$. As shown in Fig. 2, the audio
and video streams are fed into a pre-trained AV encoder to extract hidden
representations, where a ResNet block (He et al. 2016) and a linear layer
serve as front-ends to obtain the visual and audio features, respectively.
After concatenating them, a Transformer encoder (Vaswani et al. 2017) with a
self-attention mechanism is employed to extract hidden representations.
Subsequently, we utilize a learnable Transformer decoder with a
cross-attention mechanism to produce modality-invariant representations
$F_{i}$ corresponding to the probability distribution of predicted tokens.
Meanwhile, we further utilize a pre-trained vision model, which similarly
consists of a ResNet front-end, a Transformer encoder, and a Transformer
decoder. It consumes the video stream as input and independently produces
visual modality-specific representations $F_{v}$ with the same shape as
$F_{i}$. To harmonize the modality-specific and modality-invariant
representations, a linear layer-based policy network is designed into the
auto-regressive decoding process. Besides $F_{i}$ and $F_{v}$ themselves, we
argue that the policy network should also be aware of the audio quality, which
is useful for estimating the importance of the representations. To this end,
an MLP block with 2 linear layers is used for downsampling and provides
acoustic information $I_{a}$ for the policy network. Finally, a combined
distribution is generated to predict the current token (“fine” in Fig. 2).
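To make this combination step concrete, the following is a minimal PyTorch
sketch of a linear policy network that fuses $F_{i}^{t}$, $F_{v}^{t}$, and
$I_{a}$ into the combined distribution $P_{a}^{t}$; the dimensions and the
simple concatenate-and-project design are illustrative assumptions, not the
exact implementation.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Sketch: fuse the modality-invariant (F_i), modality-specific (F_v),
    and acoustic (I_a) information into one token distribution P_a."""
    def __init__(self, vocab_size: int, acoustic_dim: int):
        super().__init__()
        # A single linear layer, following the "linear layer-based" description.
        self.proj = nn.Linear(2 * vocab_size + acoustic_dim, vocab_size)

    def forward(self, f_i_t, f_v_t, i_a):
        # f_i_t, f_v_t: (batch, vocab_size) per-step token distributions;
        # i_a: (batch, acoustic_dim) downsampled acoustic information.
        x = torch.cat([f_i_t, f_v_t, i_a], dim=-1)
        return torch.softmax(self.proj(x), dim=-1)  # P_a^t
```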
It is noted that all pre-trained models are fixed during the whole training
process. The pre-trained AV encoder is initialized from AV-HuBERT (Shi, Hsu,
and Mohamed 2022), which captures cross-modal correlations between the audio
and video modalities via a self-supervised approach. The vision model (Shi et
al. 2022), including its encoder and decoder, is pre-trained via a lip-reading
task on the LRS3 dataset, where only the video stream is required to generate
the target sequence.
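In practice, fixing the pre-trained modules amounts to disabling their
gradient updates; a minimal PyTorch sketch (the module names in the usage
comment are placeholders):

```python
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    """Disable gradient updates for a pre-trained module."""
    for p in module.parameters():
        p.requires_grad_(False)

# Usage (module names are placeholders):
# freeze(av_encoder); freeze(vision_model)
```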
### 3.2 RL Policy
Basic reinforcement learning is typically formulated as a Markov Decision
Process (MDP) that includes a tuple
$\left\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T}\right\rangle$.
For each time step $t$, the agent considers the state $s_{t}\in\mathcal{S}$ to
generate an action $a_{t}\in\mathcal{A}$ that interacts with the environment.
The transition dynamics $\mathcal{T}(s_{t+1}|s_{t},a_{t})$ is defined as the
transition probability from the current state $s_{t}$ to the next state
$s_{t+1}$, and the agent gains an instant reward $r_{t}(s_{t},a_{t})$. The
objective of RL is to learn the optimal policy that maximizes the cumulative
reward $\mathcal{R}$:
$\mathcal{R}=\mathop{\max}_{a_{t}\in\mathcal{A}}\sum_{t=1}^{T}r_{t}$ (1)
In the AVSR task, we summarize the MDP tuple as follows (a code sketch of the
reward function is given after this list):
* •
State $\mathcal{S}$ should contain the comprehensive information or learned
patterns for decision-making. Therefore, we denote $\mathcal{S}$ as a
combination of $F_{i}$, $F_{v}$, and $I_{a}$ defined in Section 3.1, as they
are all related to predicting the current token.
* •
Action $\mathcal{A}$ aims to interact with the environment and update
$\mathcal{S}$. In this work, $\mathcal{A}$ is defined as a probability
distribution $P_{a}$ over the current predicted token.
* •
Reward $\mathcal{R}$ is an instant feedback signal to evaluate the performance
of $\mathcal{A}$. We define a token-level reward function for each hypothesis
$Y$ as follows:
$\mathcal{R}(Y,Y^{*})=-D_{\text{ED}}(Y||Y^{*})-\lambda_{1}\sum_{t=0}^{T}D_{\text{KL}}(P_{a}^{t}||F_{i}^{t})-\lambda_{2}\sum_{t=0}^{T}D_{\text{KL}}(P_{a}^{t}||F_{v}^{t})$ (2)
where $D_{\text{ED}}(\cdot||\cdot)$ denotes the edit distance between two
sequences, and $Y^{*}$ denotes the ground-truth sequence. It is noted that
this distance is directly related to WER. $D_{\text{KL}}(\cdot||\cdot)$
denotes the KL-divergence between two distributions, which is used to
constrain the policy network to explore within a trust region (Schulman et al.
2015). $\lambda_{1}$ and $\lambda_{2}$ are weights to balance the terms.
* •
Transition dynamics $\mathcal{T}(s_{t+1}|s_{t},a_{t})$ denotes that the
predicted token $a_{t}$ will influence the next state $s_{t+1}$, since the
decoding of AVSR is an auto-regressive generation process.
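As referenced above, the following is a minimal sketch of the token-level
reward in Eq. (2), assuming the per-step token distributions are available as
tensors; the `editdistance` package and the weight values are illustrative
assumptions.

```python
import editdistance  # third-party Levenshtein package (assumed dependency)
import torch.nn.functional as F

def msrl_reward(hyp, ref, P_a, F_i, F_v, lam1=0.1, lam2=0.1):
    """Token-level reward of Eq. (2).

    hyp, ref: token-id lists of the hypothesis Y^n and ground truth Y*.
    P_a, F_i, F_v: (T, vocab) per-step distributions from the policy
    network, the AVSR decoder, and the vision model, respectively.
    lam1/lam2 are illustrative weights, not the paper's tuned values."""
    reward = -float(editdistance.eval(hyp, ref))  # -D_ED(Y || Y*)
    # F.kl_div(log_q, p) computes KL(p || q), so these terms are
    # KL(P_a^t || F_i^t) and KL(P_a^t || F_v^t) summed over steps t.
    reward -= lam1 * F.kl_div(F_i.log(), P_a, reduction="sum").item()
    reward -= lam2 * F.kl_div(F_v.log(), P_a, reduction="sum").item()
    return reward
```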
In order to maximize the cumulative reward $\mathcal{R}$, the training
objective of the policy network is defined as:
$\mathcal{L}_{\theta}(\left\langle A,V\right\rangle,Y^{*})=-\mathbb{E}[\mathcal{R}(Y,Y^{*})]=-\sum_{Y}\mathcal{P}(Y|\left\langle A,V\right\rangle,\theta)\mathcal{R}(Y,Y^{*})$ (3)
where $\theta$ denotes the neural network and $\mathcal{P}(Y|\left\langle
A,V\right\rangle,\theta)$ is the probability of hypothesis $Y$ determined by
the input $\left\langle A,V\right\rangle$ and $\theta$. The reward function
$\mathcal{R}(Y,Y^{*})$ is defined in Eq. (2).
Since $\sum_{Y}\mathcal{P}(Y|\left\langle A,V\right\rangle,\theta)$ involves a
summation over all possible sequences, we employ the REINFORCE algorithm
(Williams 1992) to approximate the expected $\mathcal{R}$ and calculate the
gradient $\nabla_{\theta}\mathcal{L}_{\theta}$:
$\nabla_{\theta}\mathcal{L}_{\theta}=-\mathbb{E}_{Y^{n}\sim\mathcal{P}(Y^{n}|\left\langle A,V\right\rangle,\theta)}\left[\mathcal{R}(Y^{n},Y^{*})\nabla_{\theta}\log\mathcal{P}(Y^{n}|\left\langle A,V\right\rangle,\theta)\right]$ (4)
where $Y^{n}$ is a hypothesis sampled from the current model distribution.
Different from other sampling methods, we directly utilize the beam search
algorithm during decoding to select the $N$-best hypotheses, which means the
number of sampled hypotheses equals the beam size $N$.
Furthermore, we introduce a baseline to normalize the reward as follows:
$\nabla_{\theta}\mathcal{L}_{\theta}=-\frac{1}{N}\sum_{Y^{n}\in\text{Beam}}^{N}\nabla_{\theta}\log\mathcal{P}(Y^{n}|\left\langle A,V\right\rangle,\theta)\,\left[\mathcal{R}(Y^{n},Y^{*})-\bar{\mathcal{R}}\right]$ (5)
where $\bar{\mathcal{R}}$ is the baseline, defined as the average reward of
all hypotheses in the beam set. Subtracting $\bar{\mathcal{R}}$ does not
influence the gradient but, importantly, reduces the variance of the gradient
estimation, thus stabilizing the training process. To simplify the
calculation, we assume that the probability mass is concentrated on the
$N$-best list only. Consequently, the loss function can be approximated as:
$\mathcal{L}\approx-\sum_{Y^{n}\in\text{Beam}}^{N}\log\hat{\mathcal{P}}(Y^{n}|\left\langle A,V\right\rangle,\theta)\,\left[\mathcal{R}(Y^{n},Y^{*})-\bar{\mathcal{R}}\right]$ (6)
where $\hat{\mathcal{P}}(Y^{n}|\left\langle A,V\right\rangle,\theta)=\frac{\mathcal{P}(Y^{n}|\left\langle A,V\right\rangle,\theta)}{\sum_{Y^{m}\in\text{Beam}}\mathcal{P}(Y^{m}|\left\langle A,V\right\rangle,\theta)}$
represents the re-normalized distribution over the $N$-best hypotheses.
Accordingly, within one beam set, hypotheses with a higher reward than average
are encouraged by increasing their probabilities; conversely, hypotheses that
obtain a lower reward are suppressed. By minimizing the criterion of Eq. (6),
the current model pursues a higher reward through effective exploration within
the beam set.
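A minimal PyTorch sketch of the approximated loss in Eq. (6), assuming the
summed log-probability of each beam hypothesis is available as a tensor:

```python
import torch

def msrl_rl_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE-with-baseline loss over an N-best beam, as in Eq. (6).

    log_probs: (N,) summed log-probabilities log P(Y^n | <A,V>, theta)
               of the N beam hypotheses under the current policy.
    rewards:   (N,) sequence-level rewards R(Y^n, Y*); treated as constants.
    """
    # log_softmax over the beam re-normalizes the probability mass onto
    # the N-best list, yielding log \hat{P}(Y^n | <A,V>, theta).
    log_p_hat = torch.log_softmax(log_probs, dim=0)
    baseline = rewards.mean()            # \bar{R}: average beam reward
    advantage = rewards - baseline       # no gradient flows through rewards
    return -(log_p_hat * advantage).sum()
```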
### 3.3 Training Schedule of MSRL
The training process contains two stages, as shown in Algorithm 1. We first
use the typical cross-entropy criterion to train the randomly initialized
decoder (stage 1). The best model is selected on a validation set for
subsequent sampling. Then, RL optimization is applied to integrate the visual
modality-specific representations according to the reward function (stage 2).
Considering the continuity of utterances, we adopt an online training manner,
and the gradient is calculated after the completion of the beam search.
Consequently, to achieve a higher reward, the policy network is updated in the
direction that optimizes the posterior metric.
Algorithm 1 Pseudocode for MSRL Training
Input: the paired audio $A$, video $V$, and the corresponding ground-truth
sequence $Y^{*}=(y_{1}^{*},y_{2}^{*},...,y_{T}^{*})$.
Initialize the pre-trained parameters for the AV encoder $\theta_{av}$ and the
vision model $\theta_{v}$.
Initialize random parameters for the Transformer decoder $\theta_{d}$, the MLP
block $\theta_{m}$, and the RL policy network $\theta_{p}$.
Stage 1 (cross-entropy training): while TRUE do
  Freeze the parameters of the encoder $\theta_{av}$
  Obtain the hidden feature $h_{av}=\theta_{av}(A,V)$
  Train the decoder using the cross-entropy loss $\mathcal{L}_{ce}$:
  $\mathcal{L}_{ce}=\sum_{t=1}^{T}-\log\mathcal{P}_{\theta_{d}}(y_{t}^{*}|\ y_{t-1}^{*},...,y_{1}^{*},h_{av})$ (7)
end while
Stage 2 (RL training): while TRUE do
  Freeze the encoder $\theta_{av}$ and the vision model $\theta_{v}$
  for each hypothesis $Y^{n}$ in the $N$-best list do
    for $t$ in $1,2,\ldots,T$ do
      Obtain representations $F_{i}=\theta_{d}(h_{av})$ and $F_{v}=\theta_{v}(V)$
      Compute the current action probability: $P_{a}^{t}=\theta_{p}(F_{i}^{t},F_{v}^{t},\theta_{m}(A))$
    end for
    Compute the probability $\mathcal{P}(Y^{n})=\prod_{t=1}^{T}P_{a}^{t}$
    Determine the accumulative reward $\mathcal{R}(Y^{n},Y^{*})$
  end for
  Train the policy network using Eq. (6)
end while
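Tying the pieces together, a sketch of one stage-2 update, reusing the
`msrl_reward` and `msrl_rl_loss` functions from the earlier sketches;
`beam_search`, `policy_net`, the hypothesis fields, and the learning rate are
placeholders, not the paper's actual implementation:

```python
import torch

# One online RL update after beam search completes (Algorithm 1, stage 2).
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)  # placeholders

hyps, log_probs = beam_search(audio, video, beam_size=N)  # placeholder search
rewards = torch.tensor([
    msrl_reward(h.tokens, ref_tokens, h.P_a, h.F_i, h.F_v) for h in hyps
])
loss = msrl_rl_loss(log_probs, rewards)
optimizer.zero_grad()
loss.backward()      # only the trainable modules receive gradients
optimizer.step()
```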
## 4 Experiment Setting
### 4.1 Database
We conduct the experiments on LRS3 (Afouras, Chung, and Zisserman 2018b),
which is the largest publicly available dataset for audio-visual speech
recognition task. It includes face tracks from over 400 hours of TED and TEDx
videos from more than 5,000 speakers, along with the corresponding subtitles
and word alignment boundaries. The original training set is divided into 2
partitions: pretrain (403 hours) and trainval (30 hours), which are both from
the same sources with test set (1452 utterances, 1 hour). In this paper, we
randomly choose 1,200 utterances (1 hour) as a valid set for hyper-parameter
tuning and best model selection.
For the noisy test set, we follow the categories and mixing strategy from
prior work (Shi, Hsu, and Mohamed 2022). The seen noises contain the
categories “_babble_”, “_music_”, and “_natural_”, which are sampled from the
MUSAN dataset (Snyder, Chen, and Povey 2015), while the “_speech_” noise is
sampled from utterances in LRS3. These four categories of noises are seen by
both the pre-trained models and the training process. For the unseen noises,
we select the 4 categories “_Cafe_”, “_Meeting_”, “_River_”, and “_Resto_”
from the DEMAND noise set (Thiemann, Ito, and Vincent 2013) and mix them with
the test set. The detailed data pre-processing strategy is described in the
appendix.
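For illustration, a common recipe for mixing a noise clip into clean speech at
a target SNR is sketched below; the exact strategy of (Shi, Hsu, and Mohamed
2022) may differ in details such as noise segment selection.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db, then add it."""
    if len(noise) < len(speech):  # loop the noise clip if it is too short
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```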
### 4.2 MSRL Setup
We develop several MSRL frameworks with different settings, as shown in Table
1. The small Transformer block has an embedding dimension/feed-forward
dimension/number of attention heads of 768/3072/12, and the large Transformer
block increases these to 1024/4096/16, respectively. Considering the task
difficulty of lip reading, the encoder and decoder of the vision model adopt
large blocks.
The labeled data is first used for the pre-trained models (Shi et al. 2022)
and then reused for the training of the decoder and the RL module. According
to the amount of labeled data, we define two modes: normal-resource, with 433
hours of full training data (pretrain subset and trainval subset), and
low-resource, with only 30 hours of training data (trainval subset).
ID | AV Pre-trained Encoder (# Enc. blocks ) | Decoder (# Dec. blocks ) | Vision Pre-trained Model (# Enc./ Dec. blocks) | Labeled data (hours)
---|---|---|---|---
1 | Small (12) | Small (6) | Large (24/16) | 30
2 | Small (12) | Small (6) | Large (24/16) | 433
3 | Large (24) | Large (9) | Large (24/16) | 30
4 | Large (24) | Large (9) | Large (24/16) | 433
Table 1: Different settings of MSRL. “# Enc.” and “# Dec.” denote the numbers
of encoder and decoder blocks.
## 5 Results and Analysis
In this section, we conduct extensive experiments to answer the following
questions:
* •
What is the effect of modality-specific representations in the AVSR task? We
present experimental results showing that MSRL addresses the research problem
in Fig. 1 and that the problematic situation does not occur in low SNR
conditions.
* •
What is the effect of the RL integration? We conduct comparative experiments
with other integration strategies to show the superiority of the RL method.
* •
How does MSRL perform against other competitive methods? We carry out a series
of experiments in various conditions to compare our method with previously
published works.
* •
How well does MSRL generalize to unseen noises? We directly test MSRL in
various conditions with unseen noise to demonstrate its generalization
ability.
### 5.1 Effect of Modality-specific Representations
In this part, we first quantitatively analyze the effect of the visual
modality-specific representations utilized by MSRL. To this end, we construct
three baseline systems that leverage different representations. The
_Audio-only_ baseline consumes only the audio modality as input to generate
the target sequence. The _Visual-only_ baseline is trained as a lip-reading
task that consumes the visual modality as input. The _Modality-invariant_
baseline is trained as a vanilla AVSR task that consumes both audio and visual
modalities as input. Considering its intensity, the _babble_ noise is selected
to simulate noisy conditions at different SNR levels. The WER results of MSRL
and the three baselines with different Transformer blocks and resource modes
are shown in Table 2.
Method | Block | _Babble_ Noise, SNR= | Clean
---|---|---|---
| -15 | -10 | -5 | 0 | 5 | avg | $\infty$
Normal-resource (433 hours of labeled data)
_Audio-only_ | Small | 99.1 | 98.1 | 82.7 | 32.6 | 11.9 | 64.9 | 2.53
_Audio-only_ | Large | 98.6 | 97.4 | 75.8 | 24.6 | 9.01 | 61.1 | 1.95
_Visual-only_ | Large | 26.9 | 26.9
_Modality-invariant_ | Small | 55.6 | 38.0 | 19.1 | 7.24 | 4.02 | 24.8 | 1.84
_Modality-invariant_ | Large | 43.4 | 30.3 | 13.5 | 4.90 | 2.50 | 18.9 | 1.45
MSRL | Small | 26.1 | 24.7 | 14.8 | 5.92 | 3.19 | 14.9 | 1.44
MSRL | Large | 25.5 | 22.3 | 11.3 | 4.51 | 2.31 | 13.2 | 1.33
Low-resource (30 hours of labeled data)
_Audio-only_ | Small | * | * | 84.2 | 36.1 | 13.9 | 66.8 | 4.69
_Audio-only_ | Large | * | 98.0 | 77.0 | 25.9 | 14.2 | 63.0 | 3.51
_Visual-only_ | Large | 27.8 | 27.8
_Modality-invariant_ | Small | 53.0 | 39.5 | 21.4 | 10.2 | 5.92 | 26.0 | 4.10
_Modality-invariant_ | Large | 44.8 | 32.3 | 16.4 | 7.37 | 4.87 | 21.1 | 3.27
MSRL | Small | 27.4 | 25.8 | 16.7 | 7.24 | 5.20 | 16.5 | 3.38
MSRL | Large | 26.5 | 24.9 | 13.0 | 6.36 | 3.97 | 14.9 | 2.82
Table 2: The WER (%) results in _babble_ noise and clean conditions. “avg”
denotes the average performance across all SNRs. “*” denotes that the input
modality cannot be recognized.
Method | _Babble_ Noise, SNR= | Clean
---|---|---
-15 | -10 | -5 | 0 | 5 | avg | $\infty$
Normal-resource (433 hours of labeled data) & Large Transformer block
_Modality-invariant_ | 43.4 | 30.3 | 13.5 | 4.90 | 2.50 | 18.9 | 1.45
_Early fusion_ | 38.2 | 25.8 | 12.6 | 5.07 | 2.96 | 16.9 | 1.58
_Late fusion_ | 36.7 | 26.2 | 12.2 | 4.78 | 2.70 | 16.5 | 1.68
_Model ensemble_ | 31.6 | 23.4 | 11.8 | 5.36 | 3.15 | 15.1 | 2.26
MSRL | 25.5 | 22.3 | 11.3 | 4.51 | 2.31 | 13.2 | 1.33
Low-resource (30 hours of labeled data) & Large Transformer block
_Modality-invariant_ | 44.8 | 32.3 | 16.4 | 7.37 | 4.87 | 21.1 | 3.27
_Early fusion_ | 40.1 | 25.6 | 15.5 | 7.40 | 5.01 | 18.7 | 3.26
_Late fusion_ | 38.4 | 25.9 | 13.3 | 6.40 | 4.19 | 17.6 | 3.31
_Model ensemble_ | 33.7 | 25.4 | 13.6 | 6.77 | 4.18 | 16.7 | 3.35
MSRL | 26.5 | 24.9 | 13.0 | 6.36 | 3.97 | 14.9 | 2.82
Table 3: The WER (%) results of MSRL and other integration methods in _babble_
noise and clean conditions. Best results are in bold.
Figure 3: Visualization of the WER (%) results of the _Audio-only_ baseline,
the _Modality-invariant_ baseline, and the proposed MSRL with the large block
and 433 hours of labeled data.
We observe that, except for the _Visual-only_ baseline, the performance of the
other methods degrades markedly as the SNR decreases. When the SNR is lower
than -5, the performance of the _Modality-invariant_ baseline is even worse
than that of the _Visual-only_ baseline. However, such a problematic situation
does not occur with MSRL, since the visual modality-specific representations
become increasingly effective as the audio becomes harder to recognize.
Furthermore, we observe that the MSRL system achieves up to 17.6% relative WER
reduction over the _Modality-invariant_ baseline in clean conditions. This is
counter-intuitive, because visual modality-specific representations are
usually considered trivial when the audio quality is high. We reason that 1)
visual modality-specific representations add diversity of information, which
might be helpful when ambiguous acoustic pronunciations have similar
probabilities; and 2) the training objective in Eq. (6) is related to WER,
thus ameliorating the mismatch between training and testing modes. In general,
the proposed MSRL system not only guarantees the lower-bound performance in
noisy conditions but also improves the upper-bound performance in clean
conditions.
To visualize the effect of modality-specific representations, we draw the
histogram of the _Audio-only_ baseline, the _Modality-invariant_ baseline, and
the MSRL system in Fig. 3. The _Visual-only_ baseline is shown as the blue
dashed line, as its WER remains invariant (26.9%) in all conditions. It is
noted that Fig. 3 roughly reproduces the situation in Fig. 1. The
_Modality-invariant_ baseline loses its effectiveness in the low SNR setting,
while the MSRL system performs similarly to the oracle line in all conditions.
We also conduct a case study to observe how the visual modality-specific
representations help the MSRL system. To this end, we sample a divergent step
in decoding, where the _Modality-invariant_ baseline predicts a wrong token
but MSRL predicts the correct one. As shown in Fig. 4, three probability
distributions are drawn from the _Modality-invariant_ baseline, the
_Visual-only_ baseline, and the MSRL system. The x-axis is the vocabulary, and
each value denotes a BPE (Sennrich, Haddow, and Birch 2015) token. The y-axis
denotes the probability of the corresponding token. For better visualization,
improbable tokens (probability $<$ 0.05) are not included in the figure. It is
observed that the _Modality-invariant_ baseline predicts a wrong token (ID=48)
at this decoding step, but with the help of visual modality-specific
representations (_i.e._, the _Visual-only_ baseline), MSRL predicts the
correct token ‘_que_’ (ID=582).
Figure 4: Case study of a divergent decoding step, where the ground-truth
token is ‘_que_’ (ID=582). Only probabilities higher than 0.05 are displayed.
Method | Hr | _Babble_ , SNR= | _Natural_ , SNR= | _Music_ , SNR= | _Speech_ , SNR= | Clean
---|---|---|---|---|---|---
-10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | $\infty$
RNN-T | 34K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 4.5
TM-seq2seq | 595 | - | - | 42.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 7.2
AE-MSR | 1.4K | 38.6 | 31.1 | 25.5 | 24.3 | 29.9 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 6.8
AV-HuBERT | 30 | 35.1 | 18.4 | 8.3 | 4.9 | 16.7 | 11.6 | 6.5 | 4.6 | 4.0 | 6.7 | 12.4 | 7.4 | 4.7 | 4.1 | 7.2 | 11.5 | 6.8 | 5.0 | 4.2 | 6.9 | 3.3
AV-HuBERT | 433 | 34.9 | 16.6 | 5.8 | 2.6 | 15.0 | 9.4 | 4.3 | 2.4 | 1.9 | 4.5 | 10.9 | 4.6 | 2.6 | 1.8 | 5.0 | 11.4 | 4.6 | 2.9 | 2.2 | 5.3 | 1.4
MSRL (ours) | 30 | 24.9 | 13.0 | 6.4 | 4.1 | 12.1 | 9.8 | 5.6 | 3.7 | 3.4 | 5.6 | 10.8 | 6.5 | 4.0 | 3.3 | 6.2 | 8.6 | 5.5 | 4.0 | 3.5 | 5.4 | 2.8
MSRL (ours) | 433 | 22.4 | 11.3 | 4.5 | 2.3 | 10.1 | 8.0 | 4.1 | 2.3 | 1.6 | 4.0 | 8.9 | 4.4 | 2.4 | 1.7 | 4.4 | 7.2 | 3.4 | 2.3 | 1.8 | 3.7 | 1.3
Table 4: The WER (%) results of MSRL and prior works on the LRS3 dataset.
“_Hr_” denotes the amount of labeled audio-visual speech data used in each
system. “_Babble_”, “_Natural_”, and “_Music_” are different types of noise
from MUSAN. “_Speech_” is sampled from other utterances in LRS3.
Method | _Cafe_ , SNR= | _Meeting_ , SNR= | _River_ , SNR= | _Resto_ , SNR= | Clean
---|---|---|---|---|---
-10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | -10 | -5 | 0 | 5 | avg | $\infty$
Low-resource (30 hours of labeled data) & Large Transformer block
AV-HuBERT | 16.4 | 7.5 | 4.7 | 4.0 | 8.2 | 13.6 | 7.3 | 4.9 | 4.1 | 7.5 | 23.6 | 11.0 | 5.9 | 4.4 | 11.2 | 36.8 | 19.9 | 8.3 | 5.1 | 17.5 | 3.3
MSRL (ours) | 13.0 | 6.1 | 3.9 | 3.1 | 6.5 | 11.1 | 6.4 | 4.4 | 3.4 | 6.3 | 18.5 | 9.5 | 5.0 | 3.7 | 9.2 | 24.5 | 16.3 | 7.0 | 4.3 | 13.0 | 2.8
Normal-resource (433 hours of labeled data) & Large Transformer block
AV-HuBERT | 13.1 | 4.8 | 2.6 | 1.9 | 5.6 | 12.4 | 5.4 | 3.0 | 2.2 | 5.8 | 21.0 | 8.3 | 3.6 | 2.4 | 8.8 | 35.9 | 17.4 | 5.9 | 2.8 | 15.5 | 1.4
MSRL (ours) | 11.2 | 4.2 | 2.3 | 1.7 | 4.9 | 10.4 | 4.5 | 2.6 | 1.8 | 4.8 | 17.8 | 7.8 | 3.2 | 1.9 | 7.7 | 23.9 | 13.9 | 5.1 | 2.4 | 11.3 | 1.3
Table 5: The WER (%) results of MSRL on unseen noises. “ _Cafe_ ”, “ _Meeting_
”, “ _River_ ”, and “ _Resto_ ” are the different types of noise from DEMAND.
### 5.2 Effect of RL Module
In this part, we examine the effect of the RL module by replacing it with
other integration methods, namely _Early fusion_, _Late fusion_, and _Model
ensemble_. Since the pre-trained vision model also has a Transformer-based
encoder-decoder pipeline, _Early fusion_ adds the hidden features from the
final encoder layer of the pre-trained vision model to the corresponding layer
of the pre-trained AV encoder. _Late fusion_ (Inaguma et al. 2019) executes a
similar operation but adds the features at the final layer of the decoder.
Both early and late fusion strategies are applied in the cross-entropy
training (stage 1 in Algorithm 1), where the decoder is trainable to fit the
new features. The _Model ensemble_ method directly averages the probabilities
from the _Modality-invariant_ baseline and the _Visual-only_ baseline for
token prediction in auto-regressive decoding, without any tuning.
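For concreteness, a minimal sketch of the per-step combination rules of the
_Model ensemble_ and _Late fusion_ baselines described above; the 0.5 ensemble
weight and the plain additive fusion point are assumptions based on the
description, not confirmed implementation details.

```python
import torch

def model_ensemble_step(p_invariant: torch.Tensor, p_visual: torch.Tensor) -> torch.Tensor:
    """Average the token distributions of the modality-invariant and
    visual-only models; no trainable parameters are involved."""
    return 0.5 * (p_invariant + p_visual)

def late_fusion_step(h_dec: torch.Tensor, h_vision: torch.Tensor) -> torch.Tensor:
    """Add the vision model's features to the decoder's final-layer
    features; the decoder is then trained to fit the fused features."""
    return h_dec + h_vision
```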
From the WER results in Table 3, in noisy conditions, all integration methods
benefit from visual modality-specific representations compared with the
_Modality-invariant_ baseline, and MSRL achieves the best performance at all
SNR levels. Surprisingly, the untrained _Model ensemble_ baseline beats the
_Early fusion_ and _Late fusion_ baselines on average in both normal-resource
and low-resource modes. When the SNR is -15, the three other baselines fail to
avoid the problematic situation of performing worse than the _Visual-only_
baseline; only MSRL avoids it. In clean conditions, however, the visual
modality-specific representations might be redundant. We observe that the
_Model ensemble_ baseline overestimates the importance of visual
modality-specific representations, thus suffering a 55.9% performance
deterioration (1.45% $\xrightarrow[]{}$ 2.26%) in normal-resource mode. _Early
fusion_ and _Late fusion_ can dilute this effect through tunable parameters,
thereby obtaining WER results comparable to the _Modality-invariant_ baseline.
In general, MSRL can reasonably balance the importance of modality-specific
and modality-invariant representations, as the policy network always considers
acoustic information in auto-regressive decoding.
### 5.3 Benchmark against Other Methods
We then report the WER performance of MSRL in various conditions and compare
it with other competitive methods. Four recently published methods are
selected as strong baselines: RNN-T (Makino et al. 2019), TM-seq2seq (Afouras,
Chung, and Zisserman 2018a), AE-MSR (Xu et al. 2020), and AV-HuBERT (Shi et
al. 2022). Since the RNN-T and TM-seq2seq methods focus on clean conditions
and AE-MSR is only evaluated on _babble_ noise, we report only the results
available in their respective papers. For AV-HuBERT, the “_babble_”,
“_speech_”, and “clean” columns present the WER results from the original
paper, while the “_natural_” and “_music_” columns were reproduced using the
official code, as they are not available in the original paper. The comparison
of WER results is shown in Table 4.
In clean conditions, we observe that MSRL achieves a 5% (1.4%
$\xrightarrow[]{}$ 1.33%) relative WER reduction over the best AV-HuBERT
baseline in normal-resource mode. In low-resource mode, this superiority
increases to 14.5% (3.3% $\xrightarrow[]{}$ 2.82%), indicating that visual
information is particularly important when training data is limited.
Furthermore, MSRL using 30 hours of labeled data even performs better than
RNN-T, which employs 34k hours of labeled data, demonstrating better data
efficiency.
In noisy conditions, MSRL achieves the best performance for all kinds of
noises and SNR levels. For the “_babble_”, “_natural_”, “_music_”, and
“_speech_” noises, MSRL surpasses the AV-HuBERT baseline by 32.5%/27.5%,
11.1%/15%, 12.0%/19.5%, and 30.2%/21.7% relative in normal-resource/
low-resource mode, respectively. It is noted that the _speech_ noise consists
of utterances drawn from the same source as LRS3, which might confuse the
recognizer, yet MSRL handles it well without any separation module.
### 5.4 Generalization on Unseen Noise
Finally, we evaluate the generalization of the MSRL method, as AVSR models
usually encounter unseen noises in practical applications. We test the
AV-HuBERT and MSRL models on a customized test set that contains 4 types of
unseen noises; the WER results are shown in Table 5.
We observe that the MSRL system generalizes better on all 4 kinds of noises.
The visual modality-specific representations are still effective, as they are
unaffected by the domain shift of the audio modality. Consequently, MSRL
surpasses the AV-HuBERT baseline by 20.7%/12.5%, 16.4%/17.2%, 17.9%/12.5%, and
25.7%/27.1% relative in low-resource/normal-resource mode, respectively.
Furthermore, we notice that the models show distinct adaptability to different
unseen noises. Since the “_cafe_” and “_meeting_” noises mainly consist of
human voices, both AV-HuBERT and MSRL adapt to them well and achieve WER
results comparable to the seen “_speech_” noise. However, the WER performance
degrades markedly on the “_river_” and “_resto_” noises, as there is no
similar seen noise in the training process.
## 6 Conclusion
In this paper, we propose MSRL, a reinforcement learning-based method that
leverages modality-specific representations for the AVSR task. MSRL employs a
pre-trained vision model to provide visual modality-specific representations
and a policy network to explore the optimal integration strategy in the
auto-regressive decoding process. We design experiments to examine the effects
of both the visual modality-specific representations and the RL integration
module. WER results demonstrate that MSRL achieves state-of-the-art
performance on the LRS3 dataset in clean and noisy conditions, as well as
better generalization on unseen noises.
## 7 Acknowledgement
This research is supported by the National Research Foundation, Singapore
under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-01-002).
## References
* Afouras, Chung, and Zisserman (2018a) Afouras, T.; Chung, J. S.; and Zisserman, A. 2018a. The conversation: Deep audio-visual speech enhancement. _arXiv preprint arXiv:1804.04121_.
* Afouras, Chung, and Zisserman (2018b) Afouras, T.; Chung, J. S.; and Zisserman, A. 2018b. LRS3-TED: a large-scale dataset for visual speech recognition. _arXiv preprint arXiv:1809.00496_.
  * Bahdanau, D.; Brakel, P.; Xu, K.; Goyal, A.; Lowe, R.; Pineau, J.; Courville, A.; and Bengio, Y. 2016. An actor-critic algorithm for sequence prediction.
* Chen et al. (2022a) Chen, C.; Hou, N.; Hu, Y.; Shirol, S.; and Chng, E. S. 2022a. Noise-robust speech recognition with 10 minutes unparalleled in-domain data. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 4298–4302. IEEE.
* Chen et al. (2022b) Chen, C.; Hou, N.; Hu, Y.; Zou, H.; Qi, X.; and Chng, E. S. 2022b. Interactive Audio-text Representation for Automated Audio Captioning with Contrastive Learning. _arXiv preprint arXiv:2203.15526_.
* Chen et al. (2022c) Chen, C.; Hu, Y.; Hou, N.; Qi, X.; Zou, H.; and Chng, E. S. 2022c. Self-Critical Sequence Training for Automatic Speech Recognition. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 3688–3692. IEEE.
* Feng, Lai, and Xie (2019) Feng, Z.; Lai, J.; and Xie, X. 2019. Learning modality-specific representations for visible-infrared person re-identification. _IEEE Transactions on Image Processing_ , 29: 579–590.
* Fu, Wu, and Boulet (2022) Fu, Y.; Wu, D.; and Boulet, B. 2022. Reinforcement learning based dynamic model combination for time series forecasting. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, 6639–6647.
* Gulati et al. (2020) Gulati, A.; Qin, J.; Chiu, C.-C.; Parmar, N.; Zhang, Y.; Yu, J.; Han, W.; Wang, S.; Zhang, Z.; Wu, Y.; et al. 2020. Conformer: Convolution-augmented transformer for speech recognition. _arXiv preprint arXiv:2005.08100_.
* Hazarika, Zimmermann, and Poria (2020) Hazarika, D.; Zimmermann, R.; and Poria, S. 2020. Misa: Modality-invariant and-specific representations for multimodal sentiment analysis. In _Proceedings of the 28th ACM international conference on multimedia_ , 1122–1131.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 770–778.
* Hu et al. (2022a) Hu, Y.; Hou, N.; Chen, C.; and Chng, E. S. 2022a. Dual-Path Style Learning for End-to-End Noise-Robust Speech Recognition. _arXiv preprint arXiv:2203.14838_.
* Hu et al. (2022b) Hu, Y.; Hou, N.; Chen, C.; and Chng, E. S. 2022b. Interactive feature fusion for end-to-end noise-robust speech recognition. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 6292–6296. IEEE.
* Inaguma et al. (2019) Inaguma, H.; Cho, J.; Baskar, M. K.; Kawahara, T.; and Watanabe, S. 2019. Transfer learning of language-independent end-to-end ASR with language model fusion. In _ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 6096–6100. IEEE.
* Ma, Petridis, and Pantic (2021) Ma, P.; Petridis, S.; and Pantic, M. 2021. End-to-end audio-visual speech recognition with conformers. In _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 7613–7617. IEEE.
* Makino et al. (2019) Makino, T.; Liao, H.; Assael, Y.; Shillingford, B.; Garcia, B.; Braga, O.; and Siohan, O. 2019. Recurrent neural network transducer for audio-visual speech recognition. In _2019 IEEE automatic speech recognition and understanding workshop (ASRU)_ , 905–912. IEEE.
* Mei et al. (2021) Mei, X.; Huang, Q.; Liu, X.; Chen, G.; Wu, J.; Wu, Y.; Zhao, J.; Li, S.; Ko, T.; Tang, H. L.; et al. 2021. An encoder-decoder based audio captioning system with transfer and reinforcement learning. _arXiv preprint arXiv:2108.02752_.
* Mittal et al. (2020) Mittal, T.; Bhattacharya, U.; Chandra, R.; Bera, A.; and Manocha, D. 2020. M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, 1359–1367.
* Noda et al. (2015) Noda, K.; Yamaguchi, Y.; Nakadai, K.; Okuno, H. G.; and Ogata, T. 2015. Audio-visual speech recognition using deep learning. _Applied Intelligence_ , 42(4): 722–737.
* Petridis et al. (2018) Petridis, S.; Stafylakis, T.; Ma, P.; Tzimiropoulos, G.; and Pantic, M. 2018. Audio-visual speech recognition with a hybrid ctc/attention architecture. In _2018 IEEE Spoken Language Technology Workshop (SLT)_ , 513–520. IEEE.
* Rennie et al. (2017) Rennie, S. J.; Marcheret, E.; Mroueh, Y.; Ross, J.; and Goel, V. 2017. Self-critical sequence training for image captioning. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 7008–7024.
* Schulman et al. (2015) Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; and Moritz, P. 2015. Trust region policy optimization. In _International conference on machine learning_ , 1889–1897. PMLR.
* Sennrich, Haddow, and Birch (2015) Sennrich, R.; Haddow, B.; and Birch, A. 2015. Neural machine translation of rare words with subword units. _arXiv preprint arXiv:1508.07909_.
* Shi et al. (2022) Shi, B.; Hsu, W.-N.; Lakhotia, K.; and Mohamed, A. 2022. Learning audio-visual speech representation by masked multimodal cluster prediction. _arXiv preprint arXiv:2201.02184_.
* Shi, Hsu, and Mohamed (2022) Shi, B.; Hsu, W.-N.; and Mohamed, A. 2022. Robust Self-Supervised Audio-Visual Speech Recognition. _arXiv preprint arXiv:2201.01763_.
* Snyder, Chen, and Povey (2015) Snyder, D.; Chen, G.; and Povey, D. 2015. Musan: A music, speech, and noise corpus. _arXiv preprint arXiv:1510.08484_.
* Song, Sun, and Li (2022) Song, Q.; Sun, B.; and Li, S. 2022. Multimodal Sparse Transformer Network for Audio-Visual Speech Recognition. _IEEE Transactions on Neural Networks and Learning Systems_.
  * Song, T.; Xu, Q.; Ge, M.; Wang, L.; Shi, H.; Lv, Y.; Lin, Y.; and Dang, J. 2022. Language-specific Characteristic Assistance for Code-switching Speech Recognition. _arXiv preprint arXiv:2206.14580_.
* Tao and Busso (2018) Tao, F.; and Busso, C. 2018. Aligning audiovisual features for audiovisual speech recognition. In _2018 IEEE International Conference on Multimedia and Expo (ICME)_ , 1–6. IEEE.
  * Thiemann, J.; Ito, N.; and Vincent, E. 2013. The diverse environments multi-channel acoustic noise database (DEMAND): A database of multichannel environmental noise recordings. In _Proceedings of Meetings on Acoustics ICA2013_ , volume 19, 035081. Acoustical Society of America.
* Tjandra, Sakti, and Nakamura (2019) Tjandra, A.; Sakti, S.; and Nakamura, S. 2019. End-to-end speech recognition sequence training with reinforcement learning. _IEEE Access_ , 7: 79758–79769.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30.
* Williams (1992) Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine learning_ , 8(3): 229–256.
* Xiong et al. (2020) Xiong, H.; Ou, W.; Yan, Z.; Gou, J.; Zhou, Q.; and Wang, A. 2020. Modality-specific matrix factorization hashing for cross-modal retrieval. _Journal of Ambient Intelligence and Humanized Computing_ , 1–15.
* Xu et al. (2020) Xu, B.; Lu, C.; Guo, Y.; and Wang, J. 2020. Discriminative multi-modality speech recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 14433–14442.
* Xu et al. (2022) Xu, Q.; Song, T.; Wang, L.; Shi, H.; Lin, Y.; Lv, Y.; Ge, M.; Yu, Q.; and Dang, J. 2022. Self-Distillation Based on High-level Information Supervision for Compressing End-to-End ASR Model. In _Proc. Interspeech 2022_ , 1716–1720.
* Yao and Mihalcea (2022) Yao, Y.; and Mihalcea, R. 2022. Modality-specific Learning Rates for Effective Multimodal Additive Late-fusion. In _Findings of the Association for Computational Linguistics: ACL 2022_ , 1824–1834.
* Zhou et al. (2019) Zhou, P.; Yang, W.; Chen, W.; Wang, Y.; and Jia, J. 2019. Modality attention for end-to-end audio-visual speech recognition. In _ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 6565–6569. IEEE.
Center for Modeling Social Systems (CMSS),
NORCE Norwegian Research Center AS,
Universitetsveien 19, Kristiansand, Norway
email: <EMAIL_ADDRESS>
https://www.norceresearch.no/en/research-group/cmss
# A Guide to Re-Implementing Agent-based Models: Experiences from the HUMAT
Model
Önder Gürcan (ORCID: 0000-0001-6982-5658), Timo Szczepanska (ORCID:
0000-0003-2442-8223), and Patrycja Antosz (ORCID: 0000-0001-6330-1597)
###### Abstract
Replicating existing agent-based models poses significant challenges,
particularly for those new to the field. This article presents a comprehensive
guide to re-implementing agent-based models, covering vital concepts such as
comprehending the original model, utilizing agent-based modeling frameworks,
simulation design, model validation, and more. By following the proposed
guide, researchers and practitioners can gain a profound understanding of the
entire re-implementation process, resulting in heightened accuracy and
reliability of simulations of complex systems. Furthermore, this article
showcases the re-implementation of the HUMAT socio-cognitive architecture,
with a specific focus on designing a versatile, language-independent model.
The challenges and pitfalls encountered in the re-implementation process are
thoroughly discussed, providing readers with practical insights. The guide
aims to expedite model development while ensuring robust and precise
simulations.
###### Keywords:
Agent-based Models · Replication · Re-implementation · Simulation design ·
Model calibration · Model validation · Best practices
## 1 Introduction
Recognizing the need to build higher quality social simulation tools, the
scientific community has made numerous efforts to develop procedures that
improve description [16], reusability [28], rigor and transparency [3], and
increase confidence in agent-based model (ABM) outputs. One essential
procedure that deserves more attention as an external model validation method
is model replication: re-implementing an existing model based on a
representation provided by the model builders (e.g., in the form of a natural
language conceptual description or source code). Even though agent-based
modelers early on recognized replication as "one of the hallmarks of
cumulative science" [10], and it was proposed, alongside verification and
validation, as an independent test of a model's reliability [33], it is most
often brought to attention in negative instances of a failure to reproduce the
results of the original model (e.g., [31]).
Since ABMs provide explicit causal explanations of investigated phenomena [9],
replication is vital in validating a model's causal consistency. After all, a
causal mechanism represented in an ABM is expected to produce the same effects
regardless of the language/software of the implementation. However, if it
fails to do so, jumping to conclusions about a widespread replication crisis
in social simulation (similar to the one in psychology [22]) might be
premature, given how much we still have to learn about the specificity of
agent-based modeling as a scientific method. Re-implementing a conceptually
identical model on a novel platform can help validate the causal mechanisms
explaining the model outcomes and identify software-implicit assumptions that
are not an explicit part of the conceptual causal explanation but influence
the model outcomes (e.g., [14]).
Re-implementing existing ABMs in another programming language is a crucial
task for researchers and practitioners seeking to enhance the flexibility and
scalability of their simulations. Various studies have emphasized the
importance of re-implementing agent-based models in different programming
languages [26, 14, 4, 29, 20]. Railsback [26] emphasizes the need to
re-implement models in diverse programming languages to capture and represent
the complexity of real-world systems. Edmonds and Hales [14] state that
replication can reveal surprising flaws, even in the simplest of models.
Chattoe-Brown et al. [12] emphasize that ensuring such replication becomes
even more critical when the model outcomes have the potential to significantly
impact individuals' lives.
Unfortunately, replication of ABMs is underused in practice. Zhong and Kim
[33] elaborate on possible challenges that explain why re-implementation is
still rare. They emphasize that replication is a highly resource-demanding
activity with relatively low payoffs in the form of publishable articles,
sometimes seen as a trivial activity given to students taking their first
steps in coding.
This article attempts to aid in building procedures that support replication
[27, 30, 32], recognizing the importance of the original research process that
starts with the conceptual model. The aim is to report on a systematic process
of model replication, sharing good practices and lessons learned from
re-implementing the HUMAT socio-cognitive architecture in Python (following an
original implementation in NetLogo). The following section introduces a
systematic guide for re-implementing agent-based models: a step-by-step
process of model re-implementation. We developed this guide alongside the case
study of re-implementing HUMAT in Python. Effort was made to generalize the
re-implementation process; the guide proposed here serves as a starting point,
intended to be developed further. The article concludes with a short
discussion.
## 2 Guide for Re-implementing Agent-Based Models
Re-implementing an existing agent-based model in a different programming
language involves a series of steps to ensure the new implementation is
accurate, efficient, and maintainable. We propose the following systematic
approach to guide the re-implementation process (Figure 1), summarized in the
most important steps below.
Understand the original model: Before beginning any re-implementation, it is
essential to clearly understand the existing model's functionality and design
[24]. This allows the developer to identify potential issues or limitations
that should be addressed in the new implementation. Hence, we start by
studying the original model's documentation, code, and any related
publications to gain a thorough understanding of its objectives, assumptions,
agents, behaviors, interactions, environment, and other relevant aspects.
Figure 1: The process for re-implementing ABM models
Design a generic model: If the original model's documentation is tightly
coupled with the original programming language, we must outline a generic
model independent of any programming language and framework. The generic model
should describe the objective, assumptions, agents, behaviors, interactions,
and environment. In that sense, UML and object-oriented patterns [19] as well
as pattern-oriented agent modeling [15] are good candidates.
Choose a new programming language: Choosing the correct programming language
can significantly impact the project's success and depends on several factors
[25, 23]. (Note that the initiation of this step is independent from the
initiation of the other steps and can start at any time.) The criteria to be
considered are the target platform, the target users, (if any) partners'
experience/preference, and the strength of the language's community,
libraries, and support. Common choices include Python, Java, and NetLogo [1].
Identify appropriate libraries or frameworks: Research and choose libraries or
frameworks that are compatible with your chosen programming language and can
facilitate agent-based modeling. For example, Mesa for Python [21], Repast for
Java [13], or NetLogo’s built-in constructs/extensions.
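As a brief illustration of the kind of framework support involved, below is a
minimal agent/model skeleton using Mesa, assuming the pre-3.0 Mesa API; the
class names and the attitude attribute are placeholders, not part of the HUMAT
implementation.

```python
from mesa import Agent, Model
from mesa.time import RandomActivation

class ExampleAgent(Agent):
    """Placeholder agent with a binary attitude subject (A or B)."""
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.choice = "A"  # illustrative attribute

    def step(self):
        pass  # the agent's per-tick behavior would go here

class ExampleModel(Model):
    """Placeholder model that activates agents in random order each tick."""
    def __init__(self, n_agents: int):
        super().__init__()
        self.schedule = RandomActivation(self)
        for i in range(n_agents):
            self.schedule.add(ExampleAgent(i, self))

    def step(self):
        self.schedule.step()

# Usage: run the model for 10 ticks
model = ExampleModel(n_agents=50)
for _ in range(10):
    model.step()
```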
Design the new model: Based on the generic model, and considering the chosen
language and framework, design a new model representing agents, environments,
interactions, and behaviors. Consider whether any modifications, adaptations
of the data structures, or optimizations should be made to the generic model
based on the new programming language's capabilities.
Implement the new model: Translate the design model into the chosen
programming language, adapting the structure and syntax as needed. Use the
chosen libraries or frameworks to help streamline the process.
Validate the new model: Test the new model against the original to ensure it
produces the same or similar results [18, 17]. This may involve comparing
outputs, such as agent behaviors, interactions, aggregate patterns, and any
performance metrics. Address any discrepancies or issues that arise.
Document the new model: Create thorough documentation for the new model,
including explanations of its purpose, assumptions, agents, behaviors,
interactions, and environment. In that sense, the ODD protocol [16] or UML-
based specifications [19] can be used. Include information on any changes or
optimizations made during the re-implementation process.
Share and collaborate: Share the new model with the original model’s authors
and the broader research community through platforms like CoMSES (CoMSES Model
Library, https://www.comses.net/codebases/, last accessed 11/05/2023), GitLab,
and GitHub, and through scientific journals and conferences. Solicit
feedback, collaborate on improvements, and contribute to the growing body of
knowledge in agent-based modeling.
## 3 Case Study: Re-Implementing HUMAT
We have chosen a realistic case study to validate the effectiveness of the
proposed re-implementation process. In the following subsections, we present
how we followed the abovementioned guideline (Section 2).
### 3.1 Choosing the Programming Language and Identifying the
Libraries/Frameworks
In our case, the need for re-implementation was driven by the goal of the
URBANE project (https://www.urbane-horizoneurope.eu, last accessed
10/05/2023), which requires combining the elements of two different simulation
models, HUMAT [5] (implemented in NetLogo) and MASS-GT [11] (implemented in
Python), into a single simulation model. Since the resulting model will be
used by our partner, who works in Python, and integrating HUMAT would be
easier with a Python version available, we decided to re-implement HUMAT in
Python.
NetLogo is a well-documented ABM platform that uses a primary object-oriented
language with primitives (predefined keywords) to control agents. Python is a
general-purpose, high-level programming language. For the URBANE
implementation, we used the Mesa ABM framework [21]. Mesa extends Python with
several functionalities to make programming ABMs more manageable. While it is
less comprehensive and well-documented than NetLogo, it offers modelers the
benefit of accessing many Python libraries.
### 3.2 Understanding HUMAT and Designing its Generic Model
To understand HUMAT, we used the available documents and publications [5, 6,
7, 8], and its corresponding NetLogo version (Figure 2).
Figure 2: The NetLogo version of HUMAT.
As a result of the understanding process, we established that the purpose of
the HUMAT model is to represent agents’ socio-cognitive process of attitude
formation. The subjects of the attitude – the options an agent decides between
(alternative A and alternative B) – are chosen by the modeler to fit the
research problem that the agent-based model investigates.
The model is composed mainly of HUMAT agents connected through one or several
social networks (i.e., ego networks). Each HUMAT agent is characterized by a
set of needs/motives that are important for the subject of the attitude that
can belong to one of three groups: experiential needs, social needs, and
values. HUMAT agents vary regarding the importance of each motive and how the
choice alternatives satisfy each motive. When HUMAT agents form their attitude
toward a choice alternative, they reflect on how satisfying that alternative
is. If the alternative satisfies one motive and dissatisfies another motive
(i.e., has pros and cons), a HUMAT agent experiences an unpleasant state of
dissonance. Consequently, that agent faces a dilemma and employs one of two
dissonance resolution strategies to maintain cognitive consistency. If the
dilemma is non-social (i.e., the social need to be surrounded by enough
like-minded HUMATS is satisfied), the HUMAT inquires: it strives to change its
own beliefs by asking the most persuasive alter in the ego network for advice.
If the dilemma is social (i.e., the social need is dissatisfied), the HUMAT
signals to the most gullible alter, striving to persuade them to change their
mind.
Figure 3: The generic conceptual UML model for HUMAT.
To do this effectively, each HUMAT has a representation of all alters linked
to it in the ego network. An activated link between a HUMAT and the targeted
alter denotes a communication act - sharing information about the subject of
the attitude (either inquiring or signaling). The persuasiveness of the
communicating agent depends on similarity and aspirational characteristics
relevant to a given research context.
Based on the above understanding, we designed a programming language-
independent generic model for HUMAT (Figure 3 and Figure 4). Figure 3 depicts
the high-level representations of various concepts in the HUMAT domain and
their relationships. Figure 4 represents an overall behavioral model for a
HUMAT agent within a social network. The model initializes nodes (HUMATS) and
edges in the social network, creating agent instances, and initializing their
variables, motives, and choices. Then, it adds the agents to the network,
initializes their representations of other agents (alters), and updates their
social motives for choices. During each simulation step (tick), agents may
decide to signal, inquire, or do nothing based on their current dilemmas and
the dissonance strength of their chosen action. If an agent is not satisfied
with their choice, they will try to become more content by signaling or
inquiring. The basic version of the HUMAT architecture assumes perfect
information about alter choices, meaning that all choices are visible to other
agents in the ego network. Throughout the simulation, the agents continuously
update their alter representations, social motives of choices and make new
choices based on their evaluations of motives and dissonance strength.
Figure 4: The generic behavioral UML model for HUMAT.
### 3.3 Re-implementing HUMAT in Python
Re-implementing HUMAT in Python from the generic conceptual model is a
straightforward process. Each concept in the generic model is translated into
a Python class with related parameters and methods. The two main classes of
the model describe the agents (HumatAgent) and the model (HumatModel) (see
Figure 5).
Figure 5: Python code for the Signal or Inquire function.
The HumatModel class extends the Mesa Model class and controls methods
executed during a time step. The HumatAgent class extends the Mesa Agent class
and controls the methods executed by the agent. The generic model does not
specify which Python data types, syntax, and packages to use. These decisions
are up to the modeler and depend on their personal experiences.
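To make the class structure concrete, the following is a minimal, illustrative
sketch of how the two classes can be laid out on top of Mesa's Agent and Model
base classes (using the mesa.time.RandomActivation scheduler of the Mesa 1.x/2.x
API). Apart from the HumatAgent and HumatModel names and the signal/inquire
branching described above, all attributes, thresholds, and method bodies are
hypothetical placeholders, not the actual URBANE implementation.

```python
# Minimal skeleton of the two main classes on top of Mesa (1.x/2.x API).
# Attribute names, thresholds, and method bodies are illustrative placeholders.
import random

from mesa import Agent, Model
from mesa.time import RandomActivation


class HumatAgent(Agent):
    """A HUMAT with motives, a choice, and representations of its alters."""

    def __init__(self, unique_id, model, motives):
        super().__init__(unique_id, model)
        self.motives = motives      # hypothetical: motive name -> importance
        self.choice = random.choice(["A", "B"])
        self.alters = {}            # alter id -> believed choice of that alter

    def dissonance(self):
        # Placeholder for evaluating the pros and cons of self.choice.
        return random.random()

    def social_need_satisfied(self):
        # Satisfied if enough alters are believed to share the agent's choice.
        same = sum(1 for c in self.alters.values() if c == self.choice)
        return not self.alters or same >= len(self.alters) / 2

    def inquire(self):
        # Toy stand-in: adopt the choice of a random alter ("most persuasive").
        if self.alters:
            self.choice = random.choice(list(self.alters.values()))

    def signal(self):
        # Toy stand-in: push own choice onto a random alter ("most gullible").
        if self.alters:
            self.alters[random.choice(list(self.alters))] = self.choice

    def step(self):
        if self.dissonance() < 0.5:
            return                  # content enough: do nothing this tick
        if self.social_need_satisfied():
            self.inquire()          # non-social dilemma
        else:
            self.signal()           # social dilemma


class HumatModel(Model):
    """Creates the agents, wires them into the scheduler, and runs the ticks."""

    def __init__(self, n_agents=100):
        super().__init__()
        self.schedule = RandomActivation(self)
        for i in range(n_agents):
            self.schedule.add(HumatAgent(i, self, motives={}))

    def step(self):
        self.schedule.step()


if __name__ == "__main__":
    model = HumatModel(n_agents=20)
    for _ in range(10):
        model.step()
```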
### 3.4 Validating HUMAT in Python
We start each validation by configuring both models identically, importing all
model states of the NetLogo model into Python after initialization.
Subsequently, automatic unit tests of agent parameters are
executed at each time step. This process is repeated, considering increasing
agent populations and degrees of randomization (e.g., by controlling
scheduling). Throughout the testing, the methods’ functionality is reviewed
and optimized. The findings of this comprehensive case study will be
documented in a separate paper.
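As an illustration of this per-tick comparison, the following sketch loads a
hypothetical CSV export of NetLogo agent states and asserts that the Python
agents match at every time step. The file layout, column names, and compared
attribute are assumptions for illustration only; the actual test suite may
differ.

```python
# Per-tick validation sketch: both implementations start from the same
# exported NetLogo state; agent parameters are then compared at every step.
# The CSV layout ("tick", "who", "choice") and the compared attribute are
# hypothetical - substitute whatever both implementations expose.
import csv


def load_netlogo_states(path):
    """Read an exported NetLogo run: one row per agent per tick."""
    states = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            states[(int(row["tick"]), int(row["who"]))] = row
    return states


def test_agent_parameters_match(python_model, netlogo_states, n_ticks=100):
    for tick in range(n_ticks):
        python_model.step()
        for agent in python_model.schedule.agents:
            reference = netlogo_states[(tick, agent.unique_id)]
            assert agent.choice == reference["choice"], (
                f"tick {tick}, agent {agent.unique_id}: "
                f"{agent.choice!r} != {reference['choice']!r}"
            )
```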
## 4 Discussion
Replicating a NetLogo model directly in Python poses some specific challenges
in the implementation process. A brief description of the main challenges we
faced is given below. More complete comparisons between NetLogo and Python can
be found in [2].
* •
Object-oriented coding and methods: The NetLogo model is written as a
collection of procedures: (i) a setup defines agent parameters and the model
environment (e.g., patches and networks), (ii) the main go loop is then
executed to run all model procedures for a defined number of time steps
(ticks), (iii) the remaining procedures are non-restrictive and written
anywhere below the setup and go procedure. The Python model is organized into
classes with specific methods: (i) the main class contains methods defining
the model inputs and the number of time steps, (ii) the model class controls
methods executed during a time step, (iii) the agent class controls the
methods that agents execute. NetLogo has two types of procedures: to and
to-report. The to procedures usually contain a set of commands to be executed
(e.g., by agents), while to-report procedures return a value. Functions and
methods in Python can both execute commands and return values.
* •
Turtles and Breed vs. Agent classes: NetLogo has four predefined agent types:
turtles, patches, links, and the observer. Breeds are used to define specific
sub-groups of agents (e.g., HUMATS). Each agent and breed can have specific
parameters assigned to it and can be controlled using NetLogo keywords, the
primitives. Python, on the other hand, uses classes to define objects. One of
the features of Mesa is the Agent class. Each object created as a subclass of
Agent is automatically equipped with a unique id and a step() method, and
inherits the framework's other features.
* •
Agentset, lists, and dictionaries: In NetLogo, groups of agents are organized
in agentsets. These sets of agents can be created on the fly in random order.
Agentsets are a convenient way to control or select a subset of agents using a
set of primitives. While Python can create agent sets, storing agents in
dictionaries is often more practical (see the sketch below).
Due to these challenges, it is not practical to re-implement a model in Python
directly from its NetLogo code. The difference in abstractions between the two
languages makes the transition hard for the modeler. Consequently, for an
effective re-implementation, and for rapid re-implementation in further
programming languages, abstracting away programming-language concepts and
designing a generic model is essential. For instance, thanks to this generic
model, we plan to re-implement HUMAT in Java for another project, and we
expect that effort to be rapid.
## 5 Conclusions and Future Work
This paper contributes to the literature in three meaningful ways. One,
previous studies agree on the importance of replicating agent-based models;
however, they mostly present experiences with individual models (e.g., [4, 29,
20]). Here, we add to the existing general guidelines [32] by proposing a
programming language-independent systematic approach, from understanding the
existing model to sharing the new implementation.
Two, replications of ABMs focus on discussing the validation of the re-
implemented model: to what extent the outputs of the re-implemented model are
aligned with the outputs of the original model [10]. The case study of
replicating HUMAT described here focuses on the re-implementation process
rather than the model outcomes.
Three, the authors who do provide a glimpse of their re-implementation process
report having developed a general conceptual model underlying the original ABM
[30] or a platform-independent model [27], which is similar to the generic
model proposed here. An intermediate, generic model enables a focus on the
investigated phenomenon without anchoring in the concepts present in a given
programming language. Additionally, it makes further re-implementations in
different languages faster and less effortful.
So far, we have closely followed the guideline up to the Validate the new
model step. This remaining step involves sensitivity analysis and testing of
the new model and thus requires a more detailed discussion. In future work, we
will finish validating the new Python implementation and report the results of
our experience.
We hope that, in the future, the guidelines will be used and perfected by the
social simulation community. To make re-implementation of ABMs more common,
modelers should follow the Share and collaborate step of the proposed
guideline. The social simulation community can popularize such works by
initiating a dedicated label in CoMSES or launching a publication outlet
focusing on model evaluation, replication, and re-implementation.
#### 5.0.1 Acknowledgements
The work reported here is part of the URBANE project, which has received
funding from the European Union’s Horizon Europe Innovation Action under grant
agreement No. 101069782. We thank the reviewers for the thoughtful remarks,
especially related to the popularization ideas.
## References
* [1] Abar, S., Theodoropoulos, G.K., Lemarinier, P., O’Hare, G.M.P.: Agent Based Modelling and Simulation tools: A review of the state-of-art software. Computer Science Review 24, 13–33 (2017)
* [2] Abbott, R., Lim, J.: PyLogo: A Python Reimplementation of (Much of) NetLogo:. In: Proceedings of the 11th International Conference on Simulation and Modeling Methodologies, Technologies and Applications. pp. 199–206. SCITEPRESS - Science and Technology Publications, Online Streaming (2021)
* [3] Achter, S., Borit, M., Chattoe-Brown, E., Siebers, P.O.: RAT-RS: a reporting standard for improving the documentation of data use in agent-based modelling. International Journal of Social Research Methodology 25(4), 517–540 (Jul 2022)
* [4] An, G., Mi, Q., Dutta-Moscato, J., Vodovotz, Y.: Agent-based models in translational systems biology. Wiley interdisciplinary reviews. Systems biology and medicine 1(2), 159–171 (2009)
* [5] Antosz, P., Jager, W., Polhill, J.G., Salt, D., Alonso-Betanzos, A., Sánchez-Maroño, N., Guijarro-Berdiñas, B., Rodríguez, A.: Simulation model implementing different relevant layers of social innovation, human choice behaviour and habitual structures. Tech. Rep. D7.2 (2019)
* [6] Antosz, P., Jager, W., Polhill, J.G., Salt, D., Alonso-Betanzos, A., Sánchez-Maroño, N., Guijarro-Berdiñas, B., Rodríguez, A., Scalco, A.: SMARTEES simulation implementations. Tech. Rep. D7.3 (2021)
* [7] Antosz, P., Puga-Gonzalez, I., Shults, F.L., Lane, J.E., Normann, R.: Documenting Data Use in a Model of Pandemic “Emotional Contagion” Using the Rigour and Transparency Reporting Standard (RAT-RS). In: Czupryna, M., Kamiński, B. (eds.) Advances in Social Simulation, pp. 439–451. Springer, Cham (2022)
* [8] Antosz, P., Puga-Gonzalez, I., Shults, F.L., Szczepanska, T.: HUM-e: An emotive-socio-cognitive agent architecture for representing human decision-making in anxiogenic contexts. In: Squazzoni, F. (ed.) Advances in Social Simulation. Springer International Publishing, Cham
* [9] Antosz, P., Szczepanska, T., Bouman, L., Polhill, J.G., Jager, W.: Sensemaking of causality in agent-based models. International Journal of Social Research Methodology 25(4), 557–567 (Jul 2022). https://doi.org/10.1080/13645579.2022.2049510
* [10] Axelrod, R.: Advancing the Art of Simulation in the Social Sciences. In: Simulating Social Phenomena, Lecture Notes in Economics and Mathematical Systems, vol. 456, pp. 21–40. Springer Berlin Heidelberg, Berlin, Heidelberg (1997)
* [11] de Bok, M., Tavasszy, L.: An empirical agent-based simulation system for urban goods transport (MASS-GT). Procedia Computer Science 130, 126–133 (2018)
* [12] Chattoe-Brown, E., Gilbert, N., Robertson, D.A., Watts, C.: Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv (2021). https://doi.org/10.1101/2021.01.29.21250743
* [13] Collier, N.: RePast: An Extensible Framework for Agent Simulation. The University of Chicago’s Social Science Research (2003)
* [14] Edmonds, B., Hales, D.: Replication, Replication and Replication: Some Hard Lessons from Model Alignment (Oct 2003)
* [15] Grimm, V., Railsback, S.F.: Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology. Philosophical Transactions of the Royal Society B: Biological Sciences 367(1586), 298–310 (2012)
* [16] Grimm, V., Railsback, S.F., Vincenot, C.E., Berger, U., Gallagher, C., DeAngelis, D.L., Edmonds, B., Ge, J., Giske, J., Groeneveld, J., Johnston, A.S.A., Milles, A., Nabe-Nielsen, J., Polhill, J.G., Radchuk, V., Rohwäder, M.S., Stillman, R.A., Thiele, J.C., Ayllón, D.: The ODD Protocol for Describing Agent-Based and Other Simulation Models: A Second Update to Improve Clarity, Replication, and Structural Realism. Journal of Artificial Societies and Social Simulation 23(2), 7 (2020)
* [17] Gürcan, O., Dikenelli, O., Bernon, C.: Towards a Generic Testing Framework for Agent-Based Simulation Models. In: Ganzha, M., Maciaszek, L.A., Paprzycki, M. (eds.) FedCSIS 2011. pp. 635–642. Szczecin, Poland (Sep 2011)
* [18] Gürcan, O., Dikenelli, O., Bernon, C.: A generic testing framework for agent-based simulation models. In: Agent-Based Modeling and Simulation, pp. 231–270. Springer (2014)
* [19] Larman, C.: Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development (3rd Edition). Prentice Hall, USA (2004)
* [20] Liang, H., Fu, K.w.: Testing Propositions Derived from Twitter Studies: Generalization and Replication in Computational Social Science. PLOS ONE 10(8), 1–14 (Aug 2015). https://doi.org/10.1371/journal.pone.0134270
* [21] Masad, D., Kazil, J.: Mesa: An Agent-Based Modeling Framework. pp. 51–58. Austin, Texas (2015). https://doi.org/10.25080/Majora-7b98e3ed-009
* [22] Maxwell, S.E., Lau, M.Y., Howard, G.S.: Is psychology suffering from a replication crisis? Am Psychol. 70(6), 487–498 (2015)
* [23] North, M.J., Macal, C.M.: Agent Based Modeling and Computer Languages. In: Meyers, R.A. (ed.) Encyclopedia of Complexity and Systems Science, pp. 131–148. Springer New York (2009). https://doi.org/10.1007/978-0-387-30440-3_8
* [24] Pressman, R., Maxim, B.: Software Engineering: A Practitioner’s Approach, 8th Ed (Jan 2014)
* [25] Railsback, S., Grimm, V.: Agent-Based and Individual-Based Modeling: A Practical Introduction. Agent-based and Individual-based Modeling: A Practical Introduction, Princeton University Press (2019)
* [26] Railsback, S.F.: Concepts from complex adaptive systems as a framework for individual-based modelling. Ecological Modelling 139(1), 47–62 (2001)
* [27] Sansores, C., Pavón, J.: Agent-Based Simulation Replication: A Model Driven Architecture Approach. In: MICAI 2005: Advances in AI 4th Mexican Int. Conf. on AI, LNAI, vol. 3789, pp. 244–253. Springer (2005)
* [28] Tang, W., Grimm, V., Tesfatsion, L., Shook, E., Bennett, D., An, L., Gong, Z., Ye, X.: Code Reusability and Transparency of Agent-Based Modeling: A Review from a Cyberinfrastructure Perspective. In: Tang, W., Wang, S. (eds.) High Performance Computing for Geospatial Applications, pp. 115–134. Springer, Cham (2020)
* [29] Thiele, J.C., Kurth, W., Grimm, V.: Facilitating Parameter Estimation and Sensitivity Analysis of Agent-Based Models: A Cookbook Using NetLogo and ’R’. Journal of Artificial Societies and Social Simulation 17(3), 11 (2014)
* [30] Wilensky, U., Rand, W.: Making Models Match: Replicating an Agent-Based Model. Journal of Artificial Societies and Social Simulation 10(4) (2007)
* [31] Will, O., Hegselmann, R.: A Replication That Failed: On the Computational Model in ’Michael W. Macy and Yoshimichi Sato: Trust, Cooperation and Market Formation in the U.S. and Japan. JASSS 11(3) (2008)
* [32] Zhang, J., Robinson, D.T.: Replication of an agent-based model using the Replication Standard. Environmental Modelling & Software 139, 105016 (2021)
* [33] Zhong, W., Kim, Y.: Using Model Replication to Improve Reliability of Agent-Based Models. In: Chai, S.K., Salerno, J.J., Mabry, P.L., Hutchison, D., Kanade, T. (eds.) Advances in Social Computing: 3rd Int. Conf. on Social Computing, Behavioral Modeling, and Prediction, LNCS, vol. 6007, pp. 118–127. Springer (2010)
# Evolution of high-redshift quasar hosts and promotion of massive black hole
seed formation
Wenxiu Li (李文秀), Kohei Inayoshi, and Yu Qiu (邱宇)
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing
100871, China
###### Abstract
High-redshift luminous quasars powered by accreting supermassive black holes
(SMBHs) with mass $\gtrsim 10^{9}~{}M_{\odot}$ place tight constraints on
their formation pathways. We investigate the formation of heavy seeds of
SMBHs through gas
collapse in the quasar host progenitors, using merger trees to trace the halo
growth in highly-biased, overdense regions of the universe. The progenitor
halos are likely irradiated by intense H2-photodissociating radiation from
nearby star-forming galaxies and heated internally by successive mergers. The
kinetic energy of the gas originating from mergers, as well as the baryonic
streaming motion, prevents gas collapse and delays prior star formation. With a
streaming velocity higher than the root-mean-square value, gas clouds in
nearly all $10^{4}$ realizations of merger trees enter the atomic-cooling
stage and begin to collapse isothermally with $T\simeq 8000~{}{\rm K}$ via
Ly$\alpha$ cooling. The fraction of trees which host isothermal gas collapse
is $14\%$ and increases with streaming velocity, while the rest form H2-cooled
cores after short isothermal phases. If the collapsing gas is enriched to
$Z_{\rm crit}\sim 2\times 10^{-3}~{}Z_{\odot}$, requiring efficient metal
mixing, this fraction could be reduced by additional cooling via metal fine-
structure lines. In the massive collapsing gas, the accretion rate onto a
newly-born protostar ranges between $3\times 10^{-3}-5~{}M_{\odot}~{}{\rm
yr}^{-1}$, among which a large fraction exceeds the critical rate suppressing
stellar radiative feedback. As a result, we expect a distribution of stellar
mass (presumably BH mass) ranging from several hundred to above
$10^{5}~{}M_{\odot}$, potentially forming massive BH binaries whose mergers
yield gravitational-wave events.
Supermassive black holes (1663); Quasars (1319); High-redshift galaxies (734)
## 1 Introduction
Supermassive black holes (SMBHs) with masses of $10^{6-9}~{}M_{\odot}$ are one
of the most fundamental ingredients on the structure formation paradigm and
are believed to coevolve with their host galaxies over the cosmic timescale
through gas feeding and feedback processes (Kormendy & Ho, 2013). The
existence of high-redshift quasars at $z\gtrsim 6$ suggests that such monster
SMBHs form in the first billion years of the cosmic age (Fan et al., 2006;
Mortlock et al., 2011; Wu et al., 2015; Jiang et al., 2016; Matsuoka et al.,
2018; Onoue et al., 2019; Wang et al., 2021) via rapid assembly processes,
such as the formation of heavy BH seeds (initial mass), rapid mass growth via
gas accretion, or a combination of the two mechanisms (see a review by
Inayoshi et al. 2020).
For massive seed BH formation, a sufficiently high accretion rate of gas onto
stellar objects is required. In early protogalaxies where the halo virial
temperature is as high as $T_{\rm vir}\simeq 10^{4}~{}{\rm K}$ and the
temperature of a self-gravitating gas cloud is as warm as that value, the mass
accretion rate is expected to be $\dot{M}\simeq c_{\rm s}^{3}/G\simeq
0.1~{}M_{\odot}~{}{\rm yr}^{-1}(T/10^{4}~{}{\rm K})^{3/2}$, where $c_{\rm s}$
is the sound speed of the gas and $G$ is the gravitational constant. To keep
the gas warm against efficient cooling via H2 lines, several mechanisms
suppressing, delaying, and counteracting H2 formation/cooling have been
proposed by many previous studies in the literature: photo-dissociation of H2 by
Lyman-Werner (LW) radiation (Omukai, 2001; Oh & Haiman, 2002; Shang et al.,
2010; Latif et al., 2013; Inayoshi et al., 2014; Sugimura et al., 2014; Regan
et al., 2014; Visbal et al., 2014a; Chon et al., 2016), supersonic baryonic
streaming motion relative to dark matter (Tanaka & Li, 2014; Hirano et al.,
2018; Inayoshi et al., 2018; Schauer et al., 2019), and rapid halo mergers
which cause heating (Yoshida et al., 2003; Wise et al., 2019; Lupi et al.,
2021) as well as reduce H2 cooling through accretion shocks (Fernandez et al.,
2014). All three processes bring the gas cloud into a dense and hot region
of the gas phase diagram, where collisional dissociation from the excited
rovibrational levels of H2 reduces the H2 fraction (Inayoshi & Omukai, 2012).
In the subsequent stage, the gas collapses almost isothermally, keeping itself
as warm as $T\simeq 3000-8000~{}{\rm K}$ and avoiding vigorous gas
fragmentation into smaller clumps (Bromm & Loeb, 2003; Latif et al., 2013;
Inayoshi et al., 2014; Becerra et al., 2015; Chon et al., 2018). Due to global
and monolithic collapse of the warm cloud, the embryonic protostar is fed by
rapidly accreting gas at a rate of $\gtrsim 0.1~{}M_{\odot}~{}{\rm yr}^{-1}$
through a compact accretion disk where gas clumps could quickly migrate inward
and merge with the central protostar (Inayoshi & Haiman, 2014; Sakurai et al.,
2016). Moreover, since the protostar evolves with an expanding stellar
envelope due to rapid entropy injection from the accreting matter, the surface
temperature is limited to $T_{\rm eff}\simeq 5000~{}{\rm K}$, which is too low
for the protostar to emit ionizing radiation (Hosokawa et al., 2013; Haemmerlé
et al., 2018). As a result of inefficient radiative feedback, the protostar
would likely reach $\sim 10^{5-6}~{}M_{\odot}$ before the end of its lifetime
and collapse into a massive seed BH. However, those formation sites of massive
seed BHs are expected to be as rare as the number density of high-$z$ quasars
in a comoving volume ($n_{\rm SMBH}\sim 1-10~{}{\rm Gpc}^{-3}$ from Willott et
al., 2010).
Recent cosmological hydrodynamical simulations have suggested that the
conditions required to form massive seeds should be more modest than
previously considered (e.g., Wise et al., 2019). Even with a moderate level of
LW radiation, streaming motion and merger heating, a high mass accretion rate
is sustained at larger radii in a protogalaxy, although the isothermality of
gas is not maintained at high densities ($n\gtrsim 100~{}{\rm cm}^{-3}$).
Under such less stringent situations, the average mass accretion rate onto the
central protostar is reduced but the peak rate can exceed the critical rate
for bifurcating the protostellar evolution (Latif & Volonteri, 2015; Hirano et
al., 2017; Regan et al., 2020b). As a result, the central star grows to the
intermediate mass regime at $M_{\star}\simeq 100-10^{4}~{}M_{\odot}$, which is
lower than the mass originally expected for a supermassive star (SMS) but still massive
enough to form massive seeds that will end up as high-$z$ SMBHs (Sakurai et
al., 2020a, Toyouchi et al. in prep). Therefore, those environmental effects
are potentially important to initiate intermediate massive BHs (IMBHs) in the
high-$z$ universe by $z\sim~{}6-7$ (Inayoshi et al., 2020), and form
gravitational-wave sources for the space-based GW interferometers such as
LISA, Taiji, and Tianqin (Sesana et al., 2008; Amaro-Seoane et al., 2017;
Bonetti et al., 2019; Dayal et al., 2019; Luo et al., 2016). However, we
emphasize that the massive seed forming halos in those scenarios do not
necessarily merge into high-$z$ quasar host galaxies.
In this paper, we consider a new scenario of the massive seed formation in
biased, over-dense regions with $\gtrsim 5$ mass variance, where high-$z$
SMBHs are expected to form (Wyithe & Padmanabhan, 2006). In such intrinsically
rare patches of the universe, stronger halo clustering increases the frequency
of halo mergers and boosts the mean intensity of LW radiation background in
the regions. Therefore, the modest conditions required to form massive seeds
with $100-10^{4}~{}M_{\odot}$ will be naturally satisfied there. We generate
merger trees of the progenitor halos that end up as a high-$z$ quasar host, based
on the extended Press-Schechter formalism, and quantify the expected mean LW
intensity irradiating the main progenitors and the merger heating rate along
with the trees. By taking into account the environmental input, the thermal
and dynamical evolution of a massive gas cloud in the main progenitor halo is
calculated in a self-consistent way.
Among previous studies in the literature, Valiante et al. (2016) investigated the
origin of SMBHs using semi-analytical models and found massive BHs seeded in
the quasar progenitor halos, depending on their environmental effects.
Recently, Lupi et al. (2021) also proposed a similar idea that massive seed BH
formation would be much more efficient in a biased halo merger tree based on
dark matter (DM) only N-body simulation. They found that in an overdense
region, a large number of atomic-cooling halos experience successive merger
heating that counteracts radiative cooling via H2 lines and potentially
promotes massive seed formation. However, most of the halos in their samples do
not end up in the most massive DM halo that is supposed to be a high-$z$
quasar host. Instead, we study the statistical properties of the progenitor
halos of a high-$z$ quasar host by generating merger trees. Moreover, we
explicitly follow the evolution of gas clouds in the main progenitors, taking
into account merger heating, radiative cooling, and chemical reaction
networks. Thus, the two studies are complementary.
This paper is organized as follows. In §2, we summarize our construction of
merger histories of a quasar host, the calculation of environmental LW
intensity for individual halos, and subsequent gas evolution following the
underlying halo mass growth. In §3, we discuss the results of LW intensity,
the fraction of promising heavy seed formation sites, and the distribution of
accretion rate realized. In §4, we quantify the critical metallicity that
affects thermal evolution of gas and the efficiency of metal enrichment, and
discuss caveats of our model. In §5, we show the mass distribution of seed BHs
formed in the high-$z$ quasar progenitors. Finally, in §6, we summarize the
main conclusions of this paper.
## 2 Methodology
In order to investigate the evolution of luminous quasar progenitors that form
in rare, overdense regions in the universe at redshift $z\sim 6$, we construct
the merger history of DM halos up to $z=50$, and model the evolution of the
gas properties within the DM halos along each merger tree. The processes we
model consist of three parts: (1) We first construct the hierarchical merger
history of a quasar host halo using the Monte Carlo merger tree algorithm. For
a $10^{9}\,M_{\odot}$ SMBH powering the luminous quasar at $z\sim 6$, the halo
mass is estimated to be $M_{\rm h}\sim 10^{12}~{}M_{\odot}$ by comparing the
growth rate of quasar density indicated from observations with that predicted
by the Press-Schechter formalism (Wyithe & Padmanabhan, 2006). We therefore
focus our analysis on halos that grow to $M_{\rm h}=10^{12}~{}M_{\odot}$ at
$z=6$. (2) For a given merger tree, we calculate the LW radiation background
produced by the surrounding star-forming galaxies at each redshift, in order
to model the radiative impact on the gas within the halo. (3) The evolution of
the gas in the parent halo of each tree is studied by taking into account the
injection of thermal and kinetic energy due to violent merger events, as well
as LW irradiation calculated in step (2) that dissociates the gas coolants. In
the following subsections, we describe in detail the three key ingredients.
Throughout the paper, we adopt cosmological parameters estimated by Planck
assuming a $\Lambda$CDM universe (Planck Collaboration et al., 2016), i.e.,
$\Omega_{\mathrm{m}}=0.307,~{}\Omega_{\Lambda}=0.693,~{}\Omega_{\mathrm{b}}=0.0486,~{}H_{0}=67.7\mathrm{~{}km}\mathrm{~{}s}^{-1}\mathrm{Mpc}^{-1}$.
### 2.1 Merger histories of progenitors
We construct DM merger trees based on the extended Press-Schechter formalism
(Press & Schechter, 1974; Lacey & Cole, 1993; Cole et al., 2000) using the
GALFORM semi-analytic algorithm summarized in Parkinson et al. (2008). Our
sample consists of $10^{4}$ merger tree realizations for the DM halos that end
up as high-$z$ quasar hosts with $M_{\rm h}=10^{12}~{}M_{\odot}$ at $z=6$. For
each tree, we adopt a minimum DM halo mass of $M_{\rm
h,min}=10^{5}~{}M_{\odot}$. Halos smaller than this threshold do not
significantly impact the gas evolution, because the critical virial
temperature above which gas collapse can be induced by coolant
$\mathrm{H_{2}}$ is $\sim 10^{3}~{}{\rm K}$ (see Haiman et al., 1996; Tegmark
et al., 1997), corresponding to halo mass higher than $M_{\rm h,min}$ (see
also Fig. 1). Reflecting the rarity of quasar host galaxies, the progenitor
halos form in highly biased regions with $\gtrsim 5$ mass variance (Mo &
White, 2002). Note that the fraction of all matter in such rare halos is
$\lesssim 10^{-7}$.
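As a quick sanity check on this mass threshold, the following snippet inverts
the atomic-cooling mass scaling quoted in §2.2,
$m_{\mathrm{ac},z}\simeq 6\times 10^{6}M_{\odot}(T_{\rm vir}/10^{4}~{\rm K})^{3/2}[(1+z)/31]^{-3/2}$,
to map virial temperatures onto halo masses (an illustrative sketch, not part
of our pipeline):

```python
# Mapping virial temperature to halo mass by inverting the m_ac,z scaling
# quoted in Section 2.2 (illustrative only).

def halo_mass_for_tvir(t_vir, z):
    """Halo mass [Msun] whose virial temperature is t_vir [K] at redshift z."""
    return 6e6 * (t_vir / 1e4) ** 1.5 * ((1.0 + z) / 31.0) ** -1.5

# H2-cooling threshold (~1e3 K) at z = 30 corresponds to ~2e5 Msun,
# consistent with the adopted resolution limit M_h,min = 1e5 Msun:
print(f"{halo_mass_for_tvir(1e3, 30):.1e}")   # ~1.9e+05
# Atomic-cooling threshold (1e4 K) at the same epoch:
print(f"{halo_mass_for_tvir(1e4, 30):.1e}")   # 6.0e+06
```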
### 2.2 Lyman-Werner background intensity
Due to the photo-dissociation of H2 exposed to LW radiation, we also consider
the local LW intensity $J_{\rm LW}$ (at $h\nu=12.4\rm{eV}$, hereafter in units
of $\rm 10^{-21}erg~{}s^{-1}~{}cm^{-2}~{}Hz^{-1}~{}sr^{-1}$) in order to
follow the gas evolution in a given progenitor halo. Along each merger tree,
we calculate the cumulative $J_{\rm LW}$ from neighboring star-forming
galaxies (hereafter source halos). Based on the model developed by Dijkstra et
al. (2014), the basic equations and assumptions we adopt are summarized as
below.
We consider a DM halo with mass $M_{\rm h}$ (gas + DM) at redshift $z$, which
is supposed to be the main progenitor in a merger tree. The average number of
source halos (within mass range $m\pm dm/2$) that populate a surrounding
spherical shell (at a physical distance $r$ with thickness $dr$) is calculated
by
$\frac{d^{2}\mathcal{N}(m,r)}{dm\,dr}dm\,dr=4\pi r^{2}dr\,(1+z)^{3}\,\frac{\mathrm{d}n_{\mathrm{ST}}(m,z)}{\mathrm{d}m}\left[1+\xi(M_{\mathrm{h}},m,z,r)\right]\mathrm{d}m,$ (1)
where $\mathrm{d}n_{\mathrm{ST}}/\mathrm{d}m$ is the mass function of source
halos (Sheth et al., 2001), and $\rm\xi$ denotes the non-linear bias function
(Iliev et al., 2003), which gives the deviation (from random) probability of
finding a halo with mass $m$ at distance $r$ from the main progenitor. We set
the minimum source halo mass to be $m_{\mathrm{ac},z}\simeq 6\times
10^{6}M_{\odot}\left(T_{\rm vir}/10^{4}~{}{\rm
K}\right)^{3/2}\left[\left(1+z\right)/31\right]^{-3/2}$, where the halo virial
temperature is just above the hydrogen atomic-cooling threshold of $T_{\rm
vir}=10^{4}~{}{\rm K}$, where radiative cooling by Ly$\alpha$ emission leads
to star formation. In our model, we do not consider the production of LW
radiation background by star formation activity in less-massive DM halos. The
maximum mass of source halos is determined so that the LW intensity converges
towards the higher mass bins, namely in terms of averaged flux, contributions
from the $m_{\rm max}$ halos vanish due to their low abundance. The value of
$m_{\rm max}$ ranges from $\sim 10^{6}M_{\odot}$ to $\sim 10^{10}M_{\odot}$
and is larger at lower $z$.
Following Dijkstra et al. (2014), we compute the average LW radiation flux
that irradiates the target halo. The time-averaged production rate of LW
photons (per unit stellar mass) emitted from a surrounding source galaxy is
approximated by
$\left\langle
Q_{\mathrm{LW}}(t)\right\rangle=Q_{0}\left[1+\left(t_{6}/4\right)\right]^{-3/2}\mathrm{e}^{-t_{6}/300},$
(2)
where $Q_{0}=10^{47}\mathrm{~{}s}^{-1}M_{\odot}^{-1}$ and
$t~{}(=t_{6}~{}\mathrm{Myr})$ is the time after a single star burst in the
star-forming halo. Thus, the specific LW luminosity from the halo is
calculated by
$L_{\mathrm{LW}}(m_{\star},t)=\frac{h\langle\nu\rangle}{\Delta\nu}\left\langle
Q_{\mathrm{LW}}(t)\right\rangle
f_{\mathrm{esc},\mathrm{LW}}\left(\frac{m_{\star}}{M_{\odot}}\right),$ (3)
where the mean frequency and frequency width of the LW band ($11.2\leq
h\nu/{\rm eV}\leq 13.6$) are set to $\langle\nu\rangle=12.4~{}\mathrm{eV}/h$
and ${\Delta\nu}=2.4~{}\mathrm{eV}/h$. The total stellar mass is calculated by
$m_{\star}=f_{\star}(\Omega_{\mathrm{b}}/\Omega_{\mathrm{m}})m$, assuming the
star formation efficiency to be $f_{\star}=0.05$. The escape fraction of LW
photons from the halo is assumed to be unity
($f_{\mathrm{esc},\mathrm{LW}}=1$). This value tends to be lower for atomic-
cooling halos with $m\gtrsim 10^{7}~{}M_{\odot}$. As a reference, Schauer et
al. (2015) calculated the LW escape fraction for a single PopIII star in an
atomic-cooling halo with 1D simulations and found
$f_{\mathrm{esc},\mathrm{LW}}\simeq 0.7$. However, this is considered to be a
lower bound because the escape fraction would be higher in 3D calculations,
which allow escape through directions with lower optical depths; moreover, a
higher star formation rate is expected in our case (rather than a single
massive star). We estimate the LW luminosity
at one free-fall time after the burst of star formation:
$t_{\mathrm{sf}}=\sqrt{3\pi/(32G\Delta_{\rm vir}\bar{\rho})}\simeq 18~{}{\rm
Myr}~{}[(1+z)/31]^{-3/2}$, where $\Delta_{\rm vir}\simeq 18~{}\pi^{2}$. Using
Eqs. (1)-(3), we obtain the mean LW radiation intensity in the target halo as
$J_{\rm LW}(M_{\mathrm{h}},z)=\int_{m_{\mathrm{ac},z}}^{m_{\rm
max}}\int_{r_{\rm min}}^{r_{\rm
max}}\frac{d^{2}\mathcal{N}(m,r)}{dmdr}\cdot\frac{L_{\mathrm{LW}}}{16\pi^{2}r^{2}}~{}dmdr,$
(4)
where $r_{\rm min}$ and $r_{\rm max}$ are the minimum and maximum distance of
the source halo from the target halo. In the absence of metal pollution,
$r_{\rm min}$ can be safely set by adding the virial radii of the target and
source halos. However, metal enrichment of the main progenitor is a main
obstacle in the formation scenario of massive seed BHs, because efficient
metal-line cooling (and possibly dust thermal emission) will likely lead to
gas fragmentation during its gravitational collapse and thus suppress massive
star formation. Generically, there are two types of enrichment processes: (1)
genetic enrichment due to past star formation episodes in the progenitors, and
(2) environmental enrichment owing to metal bubbles created by supernova (SN)
explosions in nearby galaxies. In our model, we consider the environmental
enrichment process by adopting the minimum distance to source halos as $r_{\rm
min}=\max\{r_{\rm vir}(M_{\mathrm{h}})+r_{\rm vir}(m),r_{\rm s}(m)\}$, where
$r_{\rm s}$ is the size of the metal-polluted region surrounding the source
halo
$\displaystyle r_{\mathrm{s}}(m,t)=\left(\frac{E_{\rm SN}m_{\star}}{m_{\rm
0}\rho_{\rm s}}\right)^{1/5}t^{2/5},$ (5)
where $m_{0}=100~{}M_{\odot}$ is the stellar mass budget required to form a SN
progenitor and $E_{\rm SN}=10^{51}~{}{\rm erg}$ is the explosion energy of a
SN. The density $\rho_{\rm s}$ of gas surrounding the wind is considered to be
$\Delta\bar{\rho}_{\rm b}$, where $\bar{\rho}_{\rm b}$ is the IGM baryon
density, and $\Delta=60$ corresponds to the typical baryonic overdensity of
halos at their virial radius for an NFW profile (Dijkstra et al., 2014). Similar
to the production of LW radiation, we estimate the size of metal-enriched
bubbles at $t_{\rm sf}$. We note that metal-enrichment through in-situ star
formation should be subdominant because intense LW radiation suppresses star
formation in low-mass progenitors (see §4).
On the other hand, the maximum distance in the integration is given by $r_{\rm
max}=\left(\lambda_{\mathrm{LW},1}-\lambda_{\beta}\right)c/\left[\lambda_{\beta}H(z)\right]$,
where $\lambda_{\mathrm{LW},1}=1110~\AA$ and $\lambda_{\beta}$ are the
wavelengths of the lowest-energy LW transition and the Ly$\beta$ line, respectively (see
Haiman et al., 1997). We consider the redshift effect by cosmic expansion,
where
$H(z)=H_{0}\left[\Omega_{\mathrm{m}}(1+z)^{3}+\Omega_{\Lambda}\right]^{1/2}$
is the Hubble parameter at redshift $z$ and $c$ is the speed of light. LW photons
emitted at $r>r_{\rm max}$ are redshifted into one of the Lyman series
resonances and are converted into low-energy photons before reaching the
target halo. Thus, $r_{\rm max}$ acts as an absorbing screen, i.e., we
exclude the contribution of $J_{\rm LW}$ from halos located at $r>r_{\rm
max}$.
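The following sketch collects the source-halo quantities entering Eqs. (2)-(5)
in one place: the LW photon production rate, the specific LW luminosity, and
the metal-bubble radius, all evaluated at one free-fall time after the
starburst. Numerical constants are rounded cgs values; this is an illustrative
sketch, not the production code.

```python
# Source-halo quantities from Eqs. (2)-(5), evaluated at one free-fall time
# after the starburst (cgs units; constants rounded).
import numpy as np

G = 6.674e-8             # cm^3 g^-1 s^-2
MSUN = 1.989e33          # g
EV = 1.602e-12           # erg
H_PLANCK = 6.626e-27     # erg s
MYR = 3.156e13           # s
RHO_CRIT0 = 8.6e-30      # g cm^-3 for H0 = 67.7 km/s/Mpc
OMEGA_B, OMEGA_M = 0.0486, 0.307


def q_lw(t6):
    """Eq. (2): LW photons s^-1 Msun^-1 at t = t6 Myr after the burst."""
    return 1e47 * (1 + t6 / 4.0) ** -1.5 * np.exp(-t6 / 300.0)


def l_lw(m_halo, t6, f_star=0.05, f_esc=1.0):
    """Eq. (3): specific LW luminosity [erg s^-1 Hz^-1] of a source halo [Msun]."""
    m_star = f_star * (OMEGA_B / OMEGA_M) * m_halo
    h_nu_mean = 12.4 * EV                 # h<nu> of the LW band [erg]
    delta_nu = 2.4 * EV / H_PLANCK        # width of the LW band [Hz]
    return (h_nu_mean / delta_nu) * q_lw(t6) * f_esc * m_star


def r_bubble(m_halo, z, t, f_star=0.05):
    """Eq. (5): radius [cm] of the metal-polluted bubble around a source halo."""
    m_star = f_star * (OMEGA_B / OMEGA_M) * m_halo * MSUN
    rho_s = 60.0 * OMEGA_B * RHO_CRIT0 * (1 + z) ** 3     # Delta = 60
    e_sn, m0 = 1e51, 100.0 * MSUN
    return (e_sn * m_star / (m0 * rho_s)) ** 0.2 * t ** 0.4


z = 30
t_sf = 18.0 * ((1 + z) / 31.0) ** -1.5 * MYR   # ~18 Myr at z = 30
print(l_lw(1e7, t_sf / MYR))                   # ~2e25 erg s^-1 Hz^-1
print(r_bubble(1e7, z, t_sf) / 3.086e21)       # bubble radius ~1 kpc
```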
### 2.3 Energy injection through halo mergers
The main progenitor halo experiences vigorous halo mergers in the high-$z$
universe. Successive merger events, in particular major mergers, inject energy
into the gas in the parent halo. At early phase, energy loss through radiative
cooling is inefficient, i.e., the cooling timescale is comparable or longer
than the Hubble timescale. Gas is heated through shock formation at the halo
virial radius in an adiabatic manner. Subsequently, the energy is transported
into the halo interior, leading to gas virialization with a nearly constant
temperature profile ($T_{\rm gas}\sim T_{\rm vir}$) across all radii (Wise &
Abel, 2007). Assuming that the virial equilibrium state is reached after a
merger event,
the virial theorem applies to the gas in the post-merger halo, where the
internal and kinetic (turbulence) energy is balanced with the gravitational
energy as
$e_{\rm tot}=e_{\rm th}+e_{\rm k}+\Phi_{R_{\rm vir}}=\frac{1}{2}\Phi_{R_{\rm
vir}},$ (6)
where $e_{\rm tot}$, $e_{\rm th}$ and $e_{\rm k}$ are the total, thermal, and
kinetic energy per unit mass, and $\Phi_{R_{\rm vir}}$ is the gravitational
energy at the virial radius. In this work, we adopt the NFW potential for DM
halos given by
$\Phi_{R_{\rm vir}}=-\frac{2k_{\rm B}T_{\rm vir}}{\mu m_{\rm
p}}\cdot\frac{\ln(1+c_{\mathrm{vir}})}{\ln(1+c_{\mathrm{vir}})-c_{\mathrm{vir}}/(1+c_{\mathrm{vir}})},$
(7)
where $T_{\rm vir}$ is the halo virial temperature, the concentration
parameter of the DM density profile $c_{\mathrm{vir}}=1.9~{}(M_{\rm
h}/10^{7}\,M_{\odot})^{-0.13}[(1+z)/31]^{-1}$ (Bullock et al., 2001), $k_{\rm
B}$ is the Boltzmann constant, $\mu=1.22$ is the mean molecular weight, and
$m_{\rm p}$ is the proton mass. Therefore, the total energy change owing to
the halo evolution is given by
$\Gamma_{\rm mrg}=-\frac{1}{2}\Phi_{R_{\rm
vir}}\left(\frac{2}{3}\frac{\dot{M_{\mathrm{h}}}}{M_{\mathrm{h}}}-\frac{1}{1+z}\frac{dz}{dt}\right),$
(8)
where the first term of the right hand side denotes the energy change
associated with mass growth and the second term represents the cosmic
expansion effect. In the generally turbulent virialized gas, the kinetic-to-
thermal energy ratio is equal to 1 around the virial radius, and decreases to
$1/3$ at the center (see Wise & Abel, 2007). Adopting this branching ratio of
the total injected energy, the thermal and kinetic heating rate associated
with mergers are given by $\Gamma_{\rm mrg,th}=3\Gamma_{\rm mrg}/4$ and
$\Gamma_{\rm mrg,kin}=\Gamma_{\rm mrg}/4$, respectively. Combining Eqs.
(6)-(8), the gas temperature follows the halo virial temperature as
$\frac{\dot{T}_{\rm gas}}{\dot{T}_{\rm
vir}}=\frac{1}{2}\cdot\frac{\ln(1+c_{\mathrm{vir}})}{\ln(1+c_{\mathrm{vir}})-c_{\mathrm{vir}}/(1+c_{\mathrm{vir}})}.$
(9)
This ratio is close to unity for a wide range of ($M_{\rm h}$, $z$) halos of
interest, e.g., $\dot{T}_{\rm gas}/\dot{T}_{\rm vir}\simeq 1.3$ and $0.81$ for
$c_{\mathrm{vir}}=2$ and $10$. Note that our method is different from that
adopted in previous papers (e.g., Yoshida et al., 2003; Lupi et al., 2021),
where $T_{\rm gas}=T_{\rm vir}$ is imposed. Our treatment allows us to
precisely calculate the radiative cooling rates and chemical reaction
coefficients, which sensitively depend on the gas temperature.
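A short sketch of Eqs. (7) and (9), reproducing the quoted values of
$\dot{T}_{\rm gas}/\dot{T}_{\rm vir}$ (illustrative Python):

```python
# Eqs. (7) and (9): NFW potential factor and the rate at which the gas
# temperature tracks the virial temperature (illustrative).
import numpy as np


def c_vir(m_halo, z):
    """Concentration parameter (Bullock et al. 2001 scaling used in the text)."""
    return 1.9 * (m_halo / 1e7) ** -0.13 * ((1.0 + z) / 31.0) ** -1


def nfw_factor(c):
    """ln(1+c) / [ln(1+c) - c/(1+c)], the NFW term in Eqs. (7) and (9)."""
    return np.log(1 + c) / (np.log(1 + c) - c / (1 + c))


def dtgas_dtvir(m_halo, z):
    """Eq. (9): dT_gas/dT_vir = nfw_factor(c_vir) / 2."""
    return 0.5 * nfw_factor(c_vir(m_halo, z))


# Reproduces the values quoted in the text:
print(0.5 * nfw_factor(2.0))    # ~1.3
print(0.5 * nfw_factor(10.0))   # ~0.81
print(dtgas_dtvir(1e7, 30))     # c_vir = 1.9 at this mass and redshift
```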
### 2.4 Turbulence and baryonic streaming motion
The kinetic energy injected through mergers is stored in the halo as
turbulence. During the virialization process, turbulence plays an important
role in massive star formation (e.g., McKee & Tan, 2002). Namely, turbulence
acts as a source of pressure, which stabilizes the gas against its self-
gravity and delays the collapse until the cloud becomes massive enough to
overcome the turbulent pressure. In addition to turbulence, the baryonic
streaming motion relative to the DM produced in the epoch of cosmic
recombination at $z_{\rm rec}\simeq 1100$ also significantly delays gas
collapse and star formation in protogalaxies. The streaming velocity is found
to follow a Maxwell-Boltzmann distribution with the root-mean-square speed of
$\sigma=30~{}{\rm km~{}s}^{-1}$ at $z=z_{\rm rec}$ and decays as
$\tilde{v}_{\rm bsm}=v_{\rm bsm}(1+z)/(1+z_{\mathrm{rec}})$ (Tseliakhovich &
Hirata, 2010). We note that the volume fraction of the universe with streaming
velocities of $v_{\rm bsm}\geq A\sigma$ is estimated as $\simeq 0.4$, $8\times
10^{-3}$, and $5.9\times 10^{-6}$ for $A=1$, $2$, and $3$, respectively.
Considering both the three-dimensional turbulence and coherent baryonic
streaming velocity, we approximate the effective pressure by kinetic motion of
gas as
$\displaystyle P_{\rm tur}\approx\frac{1}{3}\rho v_{\rm
tur}^{2}+\rho\left[\alpha_{0}\tilde{v}_{\rm bsm}(z)\right]^{2},$ (10)
where $v_{\rm tur}^{2}=2\int\Gamma_{\rm mrg,kin}dt$ is the specific kinetic
energy accumulated through successive mergers and the coefficient of $1/3$ is
required to estimate the pressure due to isotropic turbulence (Chandrasekhar,
1951a, b). With pressure support from turbulence, gas collapse is delayed to
different extents, with varying strengths of the streaming motion. In this
work, we adopt $\alpha_{0}=4.7$ in our fiducial model, in order to match the
delay of collapse obtained from cosmological simulations (Hirano et al.,
2018). The total gas pressure is therefore defined by $P_{\rm tot}=P_{\rm
gas}+P_{\rm tur}$.
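A minimal sketch of Eq. (10) and the streaming-velocity decay (illustrative
Python; cgs units):

```python
# Eq. (10): effective kinetic pressure from merger-driven turbulence plus
# the redshifted baryonic streaming motion (cgs; illustrative).

def v_bsm(z, a_sigma=1.0, sigma_rec=30e5, z_rec=1100):
    """Streaming velocity [cm/s] at z for an initial a_sigma * sigma at z_rec."""
    return a_sigma * sigma_rec * (1.0 + z) / (1.0 + z_rec)


def p_tur(rho, v_tur, z, alpha0=4.7, a_sigma=1.0):
    """Eq. (10): P_tur = rho v_tur^2 / 3 + rho (alpha0 * v_bsm)^2."""
    return rho * v_tur ** 2 / 3.0 + rho * (alpha0 * v_bsm(z, a_sigma)) ** 2


# A 1-sigma streaming velocity has decayed to ~0.84 km/s by z = 30, but the
# alpha0 = 4.7 calibration keeps its pressure contribution significant:
print(v_bsm(30) / 1e5)   # km/s
```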
### 2.5 Density evolution
With the energy injection processes defined above, in this section we describe
our model for calculating the density evolution of a gas cloud concentrated in
a DM halo that grows through successive merger episodes. There are three
characteristic stages of the evolution: (1) initial adiabatic phase, (2)
transition to isothermal gas due to radiative cooling, and (3) gravitationally
collapsing phase in a runaway fashion. We model the gas dynamics in these
stages based on a one-zone model (e.g., Omukai, 2001), which is often used to
follow the physical quantities at the center of a gravitationally collapsing
cloud with a self-similar density profile $\rho_{\mathrm{gas}}\propto r^{-2}$.
However, this profile does not apply to gas in hydrostatic equilibrium before
the onset of gravitational collapse. Therefore, we construct a new method to
model the three characteristic stages in a physically motivated way.
#### 2.5.1 Adiabatic Stage
In the early stage, since the gas density is not high enough for radiative
cooling to operate through collisionally excited transitions, the gas is
adiabatically compressed in the DM halo as the underlying DM gravitational
potential evolves. In the DM assembly history through mass accretion, the
entropy profile $K(r)$ of the adiabatic gas is characterized by a power-law
outer profile of $K(r)=K_{\rm vir}(r/R_{\rm vir})^{1.1}$, and a constant core
with $K_{0}\simeq 0.1K_{\rm vir}$, where $K_{\rm vir}=k_{\rm B}T_{\rm
vir}/\left[(\mu m_{\rm p})\bar{\rho}_{\rm b}^{2/3}\right]$ is the gas entropy
at the virial radius (Voit et al., 2003, 2005). This self-similar entropy
profile is also found to be established inside high-$z$ protogalaxies formed
in DM halos more massive than $3\times 10^{6}M_{\odot}$ at $z=10$, while the
core entropy for less massive halos is maintained at the IGM entropy when gas
decouples from the cosmic microwave background (CMB; see more details in
Visbal et al., 2014b). Motivated by both numerical simulations and galaxy
cluster observations, we approximate the entropy profile as
$K(r)\simeq K_{\rm vir}\left(\dfrac{r}{R_{\rm vir}}\right)+K_{0},$ (11)
where $K_{0}={\rm max}(0.1K_{\rm vir},K_{\rm IGM})$. Using the entropy profile
and the equation of state given by $P_{\rm gas}=K(r)\rho_{\rm gas}^{\gamma}$,
where $\gamma=5/3$, we calculate the density profile by solving the
hydrostatic equation (the so-called Lane-Emden equation) for the cloud
embedded in the DM potential:
$\frac{1}{r^{2}}\frac{d}{dr}\left[\frac{r^{2}}{\rho_{\rm
gas}}\frac{d(K\rho_{\rm gas}^{\gamma}+P_{\rm tur})}{dr}\right]=-4\pi
G\left(\rho_{\rm gas}+\rho_{\rm DM}\right).$ (12)
Throughout this paper, we adopt the NFW density profile of dark matter halos
of all masses characterized by a simple analytical form of
$\rho_{\rm DM}(r)=\rho_{\rm
m}(z)\frac{\delta_{0}}{\left(c_{\mathrm{vir}}r/R_{\mathrm{vir}}\right)\left(1+c_{\mathrm{vir}}r/R_{\mathrm{vir}}\right)^{2}},$
(13)
where $\rho_{\rm{m}}(z)$ is the mean matter density and
$\delta_{0}=\frac{200}{3}\frac{c_{\mathrm{vir}}^{3}}{\ln(1+c_{\mathrm{vir}})-c_{\mathrm{vir}}/(1+c_{\mathrm{vir}})}$
(14)
is the characteristic overdensity within halo virial radius (Navarro et al.,
1997).
We integrate this hydrostatic equation with respect to $\rho_{\rm gas}(r)$
imposing the regularity conditions at the center, i.e., $\rho_{\rm
gas}=\rho_{0}$ and $d\rho_{\rm gas}/dr=0$ at $r=0$. Since the solution for
adiabatic gas generally has the radius $r_{0}$ where $\rho_{\rm
gas}(r_{0})=0$, we determine the central density $\rho_{0}$ so that the
enclosed gas mass at $r\leq r_{0}$ satisfies $M_{\rm gas}=f_{\rm b}M_{\rm h}$,
where $f_{\rm b}=\Omega_{\rm b}/\Omega_{\rm m}$ is the baryonic fraction.
#### 2.5.2 Isothermal Stage
As the gas temperature increases due to gravitational compression and merger
heating, radiative cooling processes begin to operate in the cloud and the
adiabatic assumption no longer applies. When the radiative cooling timescale
is shorter than the heating timescale, we solve the hydrostatic equation for
the density profile assuming an isothermal equation of state:
$\frac{1}{r^{2}}\frac{d}{dr}\left[r^{2}c_{\rm eff}^{2}\frac{d\ln\rho_{\rm
gas}}{dr}\right]=-4\pi G\left(\rho_{\rm gas}+\rho_{\rm DM}\right),$ (15)
where $c_{\rm eff}\equiv\sqrt{c_{\rm s}^{2}+v_{\rm
tur}^{2}/3+\left(\alpha_{0}\tilde{v}_{\rm bsm}\right)^{2}}$ is the effective
sound speed developed from the isothermal sound speed $c_{\rm
s}\equiv\sqrt{k_{\rm B}T_{\rm gas}/(\mu m_{\rm p})}$. The solution of the
isothermal Lane-Emden equation with the regularity condition does not have the
radius where the density becomes zero, but connects to the external medium
with a density of $\rho_{\rm ext}=f_{\rm b}\rho_{\rm DM}$. The central density
is determined so that $\rho_{\rm gas}=\rho_{\rm ext}$ at the virial radius.
From the analogy of the Bonnor-Ebert sphere, the isothermal gas cloud embedded
in a DM halo potential has a critical mass for the onset of its gravitational
collapse. Practically, for a given $T_{\rm gas}$ and $\rho_{\rm DM}(r)$, we
construct the density profile with different values of the gas central density
$\rho_{0}$ and thus obtain $\rho_{\rm gas}(R_{\rm vir})$ as a function of
$\rho_{0}$. Since this function has a local maximum value and the value
decreases with increasing halo mass, a hydrostatic equilibrium solution where
$\rho_{\rm gas}(R_{\rm vir})=\rho_{\rm ext}$ no longer exists for $M_{\rm
h}\geq M_{\rm h,crit}$ (see Appendix A). In this case, the gas evolution is
described by the free-fall stage below.
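The shooting procedure described above can be sketched as follows: integrate
the isothermal Lane-Emden-type equation outward from the center for a trial
central density and read off $\rho_{\rm gas}(R_{\rm vir})$; scanning over
$\rho_{0}$ then locates the local maximum that defines the critical,
Bonnor-Ebert-like state. This is an illustrative numerical sketch, not the
production solver.

```python
# Shooting sketch for the isothermal hydrostatic structure (Section 2.5.2):
# integrate Eq. (15) outward and read off rho_gas(R_vir) for a trial rho_0.
# Illustrative only; cgs units, NFW inputs from Eqs. (13)-(14).
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-8  # cm^3 g^-1 s^-2


def delta_0(c):
    """Eq. (14): characteristic NFW overdensity."""
    return (200.0 / 3.0) * c ** 3 / (np.log(1 + c) - c / (1 + c))


def rho_dm(r, rho_m, c, r_vir):
    """Eq. (13): NFW dark-matter density profile."""
    x = c * r / r_vir
    return rho_m * delta_0(c) / (x * (1 + x) ** 2)


def rho_gas_at_rvir(rho0, c_eff, rho_m, c, r_vir):
    """Integrate d/dr [r^2 c_eff^2 dln(rho)/dr] = -4 pi G (rho + rho_DM) r^2."""

    def rhs(r, y):
        lnrho, u = y                      # u = r^2 dln(rho)/dr
        rho_d = rho_dm(r, rho_m, c, r_vir)
        dlnrho = u / r ** 2
        du = -4 * np.pi * G * (np.exp(lnrho) + rho_d) * r ** 2 / c_eff ** 2
        return [dlnrho, du]

    r0 = 1e-4 * r_vir                     # start off-center (regularity at r=0)
    sol = solve_ivp(rhs, (r0, r_vir), [np.log(rho0), 0.0], rtol=1e-8)
    return np.exp(sol.y[0, -1])


# Scanning rho0 gives rho_gas(R_vir) as a function of the central density;
# once its local maximum drops below the external density f_b * rho_DM(R_vir),
# no equilibrium exists and the cloud enters the free-fall stage.
```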
#### 2.5.3 Free-fall Stage
Once the gas cloud becomes gravitationally unstable, the evolution of the gas
density profile is well described by the Penston-Larson self-similar solution
(Penston, 1969; Larson, 1969), which has a density profile with a flat core of
the Jeans scale and an envelope with a power-law density distribution
$\rho_{\mathrm{gas}}(r)\propto r^{-2}$. The central density increases over the
free-fall timescale as
$\frac{d\rho_{\rm gas}}{dt}=\frac{\rho_{\rm gas}}{t_{\rm ff}},$ (16)
where the free-fall timescale is calculated with
$t_{\mathrm{ff}}\equiv\sqrt{\frac{3\pi}{32G\left(\rho_{\rm
gas}+\langle\rho_{\rm DM}\rangle\right)}},$ (17)
where $\langle\rho_{\rm DM}\rangle=\rho_{\rm m}(z)\delta_{0}$ represents the
averaged DM density (note that the squared density of an NFW profile averaged
within the characteristic radius $R_{\rm vir}/c_{\mathrm{vir}}$ is given by
$\langle\rho^{2}\rangle=\frac{7}{8}\left[\rho_{\rm m}(z)\delta_{0}\right]^{2}$,
independent of the concentration factor $c_{\mathrm{vir}}$).
In the collapsing stage, compressional heating by the self-gravitating gas is
taken into account and the rate is given by
$\displaystyle\Gamma_{\rm comp}\equiv\frac{P_{\rm gas}+P_{\rm tur}}{\rho_{\rm
gas}^{2}}\cdot\frac{d\rho_{\rm gas}}{dt}=\frac{c_{\rm eff}^{2}}{t_{\rm ff}}.$
(18)
We note that the compressional heating rate is enhanced by turbulent pressure
through the effective sound speed.
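In code, the free-fall stage reduces to a few lines (a sketch of Eqs.
(16)-(18); illustrative):

```python
# Free-fall stage, Eqs. (16)-(18) (cgs; illustrative).
import numpy as np

G = 6.674e-8  # cm^3 g^-1 s^-2


def t_ff(rho_gas, rho_dm_mean):
    """Eq. (17): free-fall time [s], including the averaged DM density."""
    return np.sqrt(3 * np.pi / (32 * G * (rho_gas + rho_dm_mean)))


def advance_density(rho_gas, rho_dm_mean, dt):
    """Eq. (16): explicit step of d(rho_gas)/dt = rho_gas / t_ff."""
    return rho_gas * (1.0 + dt / t_ff(rho_gas, rho_dm_mean))


def gamma_comp(c_eff, rho_gas, rho_dm_mean):
    """Eq. (18): compressional heating rate [erg s^-1 g^-1] = c_eff^2 / t_ff."""
    return c_eff ** 2 / t_ff(rho_gas, rho_dm_mean)
```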
### 2.6 Temperature and chemical evolution
We consider the evolution of thermal and kinetic energy of the gas by solving
the two energy equations:
$\frac{de_{\rm th}}{dt}=\Gamma_{\rm mrg,th}+\Gamma_{\rm comp}-\mathcal{L}_{\rm chem}-\mathcal{L}_{\rm rad},$ (19)
where $\mathcal{L}_{\rm chem}$ is the cooling/heating rate associated with
chemical reactions, and $\mathcal{L}_{\rm rad}$ is the radiative cooling rate
(note that all the rates are in units of erg s-1 g-1). While the compressional
heating rate is included only in the collapse stage, the other effects are
taken into account to calculate the gas temperature over the three
evolutionary stages. The cooling term includes radiative cooling by H, He,
He+, and He++ (Glover & Jappsen, 2007), H2 (Glover & Abel, 2008; Glover,
2015a, b), and cooling/heating associated with chemical reactions.
Figure 1: Merger history of the main progenitors of a high-$z$ quasar host
galaxy with a DM halo mass of $M_{\rm h}=10^{12}~{}M_{\odot}$ at $z=6$. For
reference, the median halo mass among all the $10^{4}$ trees is shown with the
red curve. Three representative merger trees (in terms of growth speed) are
highlighted with the blue, orange, and green curves (tree id = 1, 2, and 3).
The dotted curves indicate constant virial temperatures, the values of which
are denoted by numbers in the figure.
We solve the chemical reaction network of primordial gas among the following
nine species: H, H2, e-, H+, H${}_{2}^{+}$, H-, He, He+, and He++. In Table 1,
we show the 35 reaction rate coefficients adopted in this work. In terms of
photodissociation of H2, H- and H${}_{2}^{+}$ by external radiation emitted
from nearby star-forming galaxies, the reaction rate is calculated by assuming
the radiation spectral energy distribution (SED) to be a blackbody spectra
with $T_{\rm rad}=2\times 10^{4}~{}{\rm K}$. The SED model approximates more
realistic spectra of observed metal-poor star-forming galaxies (Inoue, 2011).
The dissociation rates of H- and H${}_{2}^{+}$ are calculated by a convolution
with the cross section of the $i$-th chemical species ($i=$ H- and
H${}_{2}^{+}$) as
$k_{\mathrm{i},\mathrm{pd}}=\int_{0}^{\infty}\frac{4\pi
J(\nu)}{h\nu}\sigma_{\mathrm{i}}(\nu)d\nu.$ (20)
The cross sections we adopt are taken from the references listed in Table 1.
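As an illustration of Eq. (20), the sketch below convolves a blackbody-shaped
radiation field (normalized to $J_{21}$ at $h\nu=12.4$ eV) with a toy
power-law cross section. The cross section is purely illustrative; the actual
rates use the tabulated cross sections referenced in Table 1.

```python
# Eq. (20) with a blackbody-shaped field (T_rad = 2e4 K) pinned to J_21 at
# h*nu = 12.4 eV. The power-law cross section below is a TOY stand-in for
# the tabulated cross sections referenced in Table 1.
import numpy as np
from scipy.integrate import quad

H = 6.626e-27     # erg s
K_B = 1.381e-16   # erg K^-1
EV = 1.602e-12    # erg


def planck_shape(nu, t_rad=2e4):
    """Blackbody spectral shape (arbitrary normalization)."""
    return nu ** 3 / np.expm1(H * nu / (K_B * t_rad))


def j_nu(nu, j21, t_rad=2e4):
    """Intensity [erg s^-1 cm^-2 Hz^-1 sr^-1], normalized at 12.4 eV."""
    nu0 = 12.4 * EV / H
    return j21 * 1e-21 * planck_shape(nu, t_rad) / planck_shape(nu0, t_rad)


def k_pd(j21, sigma0=4e-17, nu_th=0.755 * EV / H):
    """Toy photodetachment rate [s^-1]: sigma = sigma0 (nu_th/nu)^2 above nu_th."""
    def integrand(nu):
        sigma = sigma0 * (nu_th / nu) ** 2
        return 4 * np.pi * j_nu(nu, j21) / (H * nu) * sigma

    rate, _ = quad(integrand, nu_th, 13.6 * EV / H, limit=200)
    return rate


print(k_pd(j21=100.0))
```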
## 3 Results
Figure 2: Time evolution of LW radiation intensity $J_{\rm LW}$ (in units of
$10^{-21}$ erg s-1 cm-2 Hz-1 sr-1) irradiating the quasar progenitors for the
four cases shown in Fig. 1. For the median tree, we show two cases where the
metal-bubble size $r_{\rm s}$ is calculated as described in §2.2 (solid) and
where twice the fiducial value is adopted (dashed).
Figure 3: Distributions of the LW background intensity $J_{\rm LW}$ (in units
of $10^{-21}$ erg s-1 cm-2 Hz-1 sr-1) irradiating the quasar progenitors at
different epochs ($10\leq z\leq 45$). The mean value of $J_{\rm LW}$ increases
from higher redshifts, peaks at $J_{\rm LW}\simeq 450$ at $z\simeq 25$, and
decreases toward lower redshifts. The LW intensity is distributed over a wide
range of $10^{-1}\lesssim J_{\rm LW}\lesssim 10^{4}$ at higher redshifts,
while the dispersion of the distribution becomes smaller toward lower
redshifts.
Figure 4: Gas density and temperature evolution along with the three
representative halo merger trees for the two values of baryonic streaming
velocity: $v_{\rm bsm}=1\sigma$ (upper panels) and $v_{\rm bsm}=2\sigma$
(lower panels). The elapsed epochs when the parent halo mass reaches $M_{\rm
h}=10^{6}$, $10^{7}$ and $3\times 10^{7}M_{\odot}$ are marked with dots in the
left panels, while those when the LW intensity crosses $J_{\rm LW}=1$, $10$,
$100$ and $1000$ are marked in the right panels. When the halo mass grows
faster and/or the streaming velocity is higher, gas collapse is significantly
delayed due to pressure (thermal + kinetic) support of the gas cloud. This
effect makes the gas enter the atomic-cooling stage at lower densities (H-H2
and H-H cases) owing to strong LW irradiation before the onset of
gravitational collapse.
### 3.1 Merger history & evolution of LW radiation background
In Fig. 1, we show the evolution of the main progenitors, i.e., the most
massive halos at each epoch, for all the $10^{4}$ merger trees that grow to
$M_{\rm h}=10^{12}~{}M_{\odot}$ at $z=6$. In such over-dense regions of the
universe, the DM halo mass increases via rapid mergers. The median halo mass
(dashed curve) reaches $M_{\rm h}\simeq 8\times 10^{10}$, $6\times 10^{8}$,
$2\times 10^{7}$, and $8\times 10^{5}~{}M_{\odot}$ at $z=10$, $20$, $30$, and
$40$, respectively, and the virial temperature exceeds the atomic-cooling
threshold of $T_{\rm vir}\simeq 10^{4}~{}{\rm K}$ at $z\simeq 34$. Therefore,
the gas cloud concentrated in the massive halo collapses at an epoch earlier
than when typical first galaxies would form in atomic-cooling halos ($M_{\rm
h}\simeq 10^{7}~{}M_{\odot}$ at $z\simeq 10$; see Bromm & Yoshida, 2011),
which are usually considered to be massive seed forming sites in most previous
studies (e.g., Dijkstra et al. 2014). For illustration purposes, we highlight
three merger trees: the blue (id 1, a less massive tree), orange (id 2, a tree
comparable to the median evolution), and green curve (id 3, a more massive
tree). In the following sections, we focus our analysis on these three
representative cases.
Following the method laid out in § 2, in Fig. 2 we present the redshift
evolution of $J_{\rm LW}$ for the three representative trees and the median
track. For all the cases, the LW background intensity gradually increases from
higher redshifts, peaks at the intermediate redshifts, and decreases toward
lower redshifts. This redshift dependence reflects the nature of the non-
linear bias function which boosts the abundance of halo pairs with comparable
masses (Scannapieco & Barkana, 2002). Namely, when the mass of the main
progenitor is close to the atomic-cooling halo mass ($m_{\mathrm{ac},z}\sim
10^{7}M_{\odot}$), a large number of source halos form nearby owing to the
halo clustering effect and thus the LW intensity is maximized. As the main
progenitor grows, its mass difference from $m_{\mathrm{ac},z}$ is larger and
thus the clustering effect of atomic-cooling sources becomes weaker so that
their spatial distribution can be approximated as uniform (i.e., $\xi\ll 1$).
As a result, the LW intensity is dominated by the contribution from a large
number of atomic-cooling source halos within the absorbing screen ($r\lesssim
r_{\rm max}$) and begins to decline due to the cosmic dilution effect at lower
redshifts. For rapidly growing progenitor halos exceeding $m_{\mathrm{ac},z}$
earlier, the LW intensity quickly rises at higher redshifts and the peak
values become higher owing to stronger clustering at earlier epochs. Namely,
the peak values of LW intensity in the overdense regions are $J_{\rm LW}\simeq
60$ (id 1), $J_{\rm LW}\simeq 400$ (id 2), $J_{\rm LW}\simeq 600$ (median),
and $J_{\rm LW}\simeq 6\times 10^{3}$ (id 3), which are significantly higher
than the level of LW intensity irradiating typical atomic-cooling halos that
are expected to form massive BH seeds (see Dijkstra et al., 2008; Agarwal et
al., 2012; Johnson et al., 2013).
In our semi-analytical approach, we model metal pollution of the progenitor
halos due to SN explosions that occur in source halos. Although we treat this
effect by replacing the minimum distance between the target and source halos
with $r_{\rm s}$, there is no information on the time-dependent spatial
distributions of DM halos in our framework. To examine the impact of the model
assumptions, in Fig. 2 we also show the case where the size of the metal-
polluted bubbles ($r_{\rm s}$) is doubled, which corresponds to $t_{\rm sf}$
being comparable to the Hubble time at that redshift, or equivalently to
setting $\Delta=1$ with the fiducial value of $t_{\rm sf}$. In this case, the LW
intensity is overall reduced at higher redshifts, indicating a significant
contribution from nearby source halos with $\gtrsim m_{\mathrm{ac},z}$ to the
LW radiation background. We note that our treatment simply removes the
contribution from source halos within distances of $r_{\rm s}$, but does not
address how likely the main progenitor is to be affected by environmental
metal-enrichment. Our argument nevertheless provides a conservative estimate of
$J_{\rm LW}$ if the efficiency of environmental metal-enrichment is low. As
discussed in §4, the efficiency should be negligibly low because metal-
polluted bubbles rarely penetrate the interior of the target halo (Chiaki et
al., 2018).
In Fig. 3, we present the histograms of the LW background intensity that
irradiates the main progenitor halos for the $10^{4}$ trees at different
redshifts. For the whole sample of the target halos in highly-biased regions,
the histogram resembles a probability distribution function (PDF) of $J_{\rm
LW}$, with the bar height in each bin ($\Delta\log J_{\rm LW}=0.3$) representing
the number fraction of halos irradiated within $\log J_{\rm LW}\to\log J_{\rm
LW}+\Delta\log J_{\rm LW}$. From higher redshifts down to $z\simeq 30$, the
mean value of $J_{\rm LW}$ in the PDF increases owing to a large number of
clustered source halos with $\gtrsim m_{\mathrm{ac},z}$ and the $J_{\rm LW}$
distribution peaks around $\simeq 270$ at $z\simeq 25$. Towards lower
redshifts, the target halo mass becomes higher than the typical mass of source
halos. Therefore the abundance of sources is hardly boosted by the clustering
effect (Iliev et al., 2003). Moreover, the LW intensity is diluted by the
cosmic expansion, lowering the mean value. While the dispersion of the PDF is
larger at higher redshifts, reflecting the diversity of the progenitor mass,
the PDF peaks at $J_{\rm LW}\simeq 60$ by $z=10$ when all the $10^{4}$ trees
converge to the high-$z$ quasar host. We note that our model does not consider
LW radiation produced from DM minihalos with $m<m_{\mathrm{ac},z}$, where
$\mathrm{H_{2}}$ is the only coolant able to induce star formation. However,
the strong LW background radiation in the over-dense region likely suppresses
star formation in such minihalos. Therefore, the histogram shown in Fig. 3
represents a lower bound on the LW background intensity.
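The histograms of Fig. 3 amount to binning the $10^{4}$ sampled $J_{\rm LW}$ values in $\log J_{\rm LW}$ and normalizing to a number fraction. A minimal sketch, using a mock lognormal sample in place of the actual merger-tree output:

```python
import numpy as np

# Mock sample standing in for the 10^4 J_LW values at one redshift;
# the lognormal parameters are purely illustrative.
rng = np.random.default_rng(0)
j_lw = rng.lognormal(mean=np.log(270.0), sigma=1.0, size=10_000)

dlog = 0.3                                   # bin width used in Fig. 3
edges = np.arange(-1.0, 4.0 + dlog, dlog)    # covers 10^-1 ... 10^4
counts, _ = np.histogram(np.log10(j_lw), bins=edges)
fraction = counts / counts.sum()             # bar height = number fraction

for lo, f in zip(edges[:-1], fraction):
    if f > 0:
        print(f"log J_LW in [{lo:4.1f}, {lo + dlog:4.1f}): {f:.3f}")
```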
Figure 5: Census of merger trees which host the three types of gas collapse
with different $v_{\rm bsm}$. The blue, orange and green bars correspond to
the representative evolutionary tracks of the same colors in the upper panels
of Fig. 4. With increasing $v_{\rm bsm}$, the cases where gas clouds enter the
atomic-cooling stage (H-H2 and H-H types) dominate primarily because of the
delay of gas collapse that also leads to higher values of LW intensity.
Figure 6: Distributions of the halo virial temperature $T_{\rm vir}$ (upper
panels) and LW intensity $J_{\rm LW}$ (middle panels) measured at the epochs
when gas clouds become gravitationally unstable for the cases with different
$v_{\rm bsm}$ values. The lower panels show the mass accretion rate of
$\dot{M}_{\star}\equiv c_{\rm eff}^{3}/G$ measured at the minimum temperature
point at $n_{\rm gas}>10^{3}{\rm cm}^{-3}$ in the collapsing stage. Overall,
with $v_{\rm bsm}\geq 1\sigma$, nearly all the cases enter the atomic-cooling
stage in massive halos with $T_{\rm vir}>10^{4}~{}{\rm K}$ irradiated by LW
radiation with intensity of $J_{\rm LW}>10$. Since the collapsing clouds are
massive, the accretion rates become high enough ($\dot{M}\gtrsim
0.1~{}M_{\odot}~{}{\rm yr}^{-1}$) to form massive seed BHs.
### 3.2 Thermal and dynamical evolution of gas clouds in the high-$z$ quasar
hosts
In this section we focus our analysis on the gas properties in the main
progenitors along the three representative merger trees. In Fig. 4, we show
the evolution of gas density (left panels) and temperature (right panels) at
the central core as a function of redshift. In order to examine the impact of
baryonic streaming motion, for each merger tree we assume two different
$v_{\rm bsm}$ values, i.e., $v_{\rm bsm}=1\sigma$ (upper panels), and
$2\sigma$ (lower panels). Each curve corresponds to the representative case
highlighted in Fig. 1. Along with the three evolutionary tracks, we denote the
epochs when the DM halo mass exceeds $M_{\rm h}=10^{6}~{}M_{\odot}$,
$10^{7}~{}M_{\odot}$, and $3\times 10^{7}~{}M_{\odot}$ in the left panels, and
when the LW background intensity first crosses $J_{\rm LW}=1$, $10$, $10^{2}$,
and $10^{3}$ in the right panels. In the following paragraphs, we first
describe the gas properties with $v_{\rm bsm}=1\sigma$, and then discuss the
impact of the baryonic streaming motion on gas evolution in cases with $v_{\rm
bsm}=2\sigma$.
For the lowest mass case (blue curve, tree id 1), the gas density gradually
increases with the halo mass in the early stage ($z>30$), where the gas cloud
is supported by thermal and turbulent pressure against its self-gravity and DM
gravitational force. After the halo mass reaches $\simeq 10^{6}~{}M_{\odot}$,
the cloud becomes gravitationally unstable owing to its low temperature, and
collapses over one free-fall timescale at $z\simeq 28$. The gas temperature
remains at $T\lesssim 10^{3}~{}{\rm K}$ due to H2 cooling, under a modest
level of LW intensity ($J_{\rm LW}\sim 1$) at $z>35$. In addition to LW
radiation, the gas is heated by four major merger events around $z\simeq
31-34$, but the dynamical heating rate does not overcome the H2 cooling rate
in this case.
For the intermediate mass case (orange curve, tree id 2), the evolution begins
from a redshift higher than in the previous case. In this case, the gas
temperature is substantially higher as a result of the combination of merger
heating and intense LW irradiation with $J_{\rm LW}\gtrsim 1$ in the early
stage. As several episodes of halo mergers increase the halo mass to $\sim
10^{7}~{}M_{\odot}$ by $z\simeq 30$ (the corresponding halo virial temperature
is $T_{\rm vir}\simeq 10^{4}~{}{\rm K}$), the gas temperature reaches $T\simeq
10^{4}~{}{\rm K}$, where the atomic cooling via Ly$\alpha$ emission begins to
operate. Although the LW intensity reaches $J_{\rm LW}\gtrsim 100$ before the
cloud gravitationally collapses, the level of LW intensity is not strong
enough to suppress H2 formation in the dense region ($\gtrsim 10^{2}~{}{\rm
cm}^{-3}$), where H2 reforms owing to its self-shielding effect. As a result
of efficient H2 cooling, the gas temperature drops down to $T\simeq
10^{3}~{}{\rm K}$ in the collapsing stage.
For the highest mass case (green curve, tree id 3), the gas temperature
quickly rises to $T\simeq 10^{4}~{}{\rm K}$ due to frequent mergers. Owing to
the clustering effect of the massive parent halo, the LW intensity reaches
$J_{\rm LW}\gtrsim 10^{3}$ at $z\simeq 47$, prominently higher than those seen
in the less massive cases. Although the H2 self-shielding becomes more
effective as the central density increases up to $\gtrsim 10^{4}~{}{\rm
cm}^{-3}$, the gas collapses while keeping a nearly constant temperature of $T\simeq
8000~{}{\rm K}$. Inside the dense and warm region, H2 is collisionally
dissociated and its radiative cooling does not alter the thermal evolution.
In cases where $v_{\rm bsm}=2\sigma$, the gas property evolution is shown in
the lower panels of Fig. 4. Overall, the collapse of gas clouds is delayed due
to kinetic energy injection to the gas concentrated at the halo center. When
the cloud begins to collapse, the corresponding halo masses reach $M_{\rm
h}\simeq(3.5,~{}4.2,~{}5.9)\times 10^{7}~{}M_{\odot}$. For comparison, the
collapse halo masses are $M_{\rm h}\simeq(0.24,~{}2.1,~{}2.2)\times
10^{7}~{}M_{\odot}$ for $v_{\rm bsm}=1\sigma$. The delay effect is more
pronounced for the lower-mass cases because the halo circular velocity is
lower than the effective sound speed boosted by injection of turbulence and
streaming motion. As the gas collapse proceeds, $\mathrm{H_{2}}$ forms
efficiently in the modest $J_{\rm LW}$ environment, and eventually its cooling
reduces the gas temperature in the low- and intermediate-mass cases.
### 3.3 The statistical properties of the high-$z$ quasar progenitors
As noted in §3.2 and Fig. 4, depending on the main cooling processes inducing
star formation, the evolutionary tracks of the gas clouds embedded in the main
progenitors of high-$z$ quasar hosts are classified into three cases: (i)
$\mathrm{H_{2}}$ cooling, (ii) initial H Ly$\alpha$ cooling followed by
$\mathrm{H_{2}}$ cooling after a short isothermal collapse, and (iii) H Ly$\alpha$
cooling in which the temperature is kept above $8000~{}{\rm K}$ by compression over a
wide density range. In Fig. 5, we present the number count of merger trees for
the three types with different baryonic streaming velocities, denoted as (i)
H2, (ii) H-H2, and (iii) H-H. Without the streaming velocity, 74% of the trees
experience gas collapse via H2 cooling, while the rest ($26\%$) form
atomically-cooling gas clouds (cases H-H2 and H-H). With non-zero streaming
motion ($v_{\rm bsm}\neq 0$), nearly all cases enter the atomic-cooling stage
because the halo mass reaches $m_{\rm ac,z}$ via mergers due to the
significant delay effect. As the streaming velocity increases, the gas mass
becomes higher at the onset of gravitational collapse, and thus the
compressional heating rate during the collapse stage is higher owing to the
accumulation of kinetic energy. Therefore, the number of trees where gas
isothermally collapses with $T\simeq 8000~{}{\rm K}$ (case H-H) increases
monotonically from $14\%$ to $27\%$ with increasing streaming velocity from
$v_{\rm bsm}=1\sigma$ to $3\sigma$.
In Fig. 6, we show the distributions of the halo virial temperature (upper
panels) and LW background intensity (middle panels) for the three types of gas
collapse. For each case, the values of $T_{\rm vir}$ and $J_{\rm LW}$ are
measured at the epoch when the gas cloud first enters its unstable stage. In
contrast to cases with $v_{\rm bsm}=0$, where gas collapse is led by H2
cooling in less massive halos with $T_{\rm vir}\sim 10^{3-4}~{}{\rm K}$, the
streaming velocity delays the cloud collapse until after the halo grows across
the atomic cooling threshold of $T_{\rm vir}\gtrsim 10^{4}~{}{\rm K}$. The
virial temperature for the H-H cases is generally higher than that for the
H-H2 cases and the mean value of $T_{\rm vir}$ for each case increases with
larger streaming velocity. This trend is more clearly shown in the
distributions of $J_{\rm LW}$, namely the mean LW background intensity for the
H-H cases is $\langle J_{\rm LW}\rangle\gtrsim 10^{3}$, which is $\simeq 10$
times higher than that for the H-H2 cases. The higher value of $J_{\rm LW}$ is
mainly caused by the delay of gas collapse until the halo becomes massive
enough to be exposed to a larger number of LW source halos. In addition,
compressional heating in collapsing clouds is stronger with larger $v_{\rm
bsm}$ and the minimum LW intensity required to keep isothermal collapse is
extended to lower values.
In the main progenitors of high-$z$ quasar hosts, massive gas clouds form
owing to the significant delay effect of cloud collapse by rapid halo mergers
and intense LW irradiation from nearby star-forming galaxies. The mass
accretion rate onto the central region of a gravitationally collapsing cloud
is approximated as $\dot{M}\simeq M_{\rm gas}/t_{\rm ff}$, where $M_{\rm gas}$
and $t_{\rm ff}$ are the gas mass and free-fall timescale at the onset of
gravitational collapse. Since the cloud is supported by thermal and kinetic
energy of the gas, the accretion rate can be written as $\simeq c_{\rm
eff}^{3}/G$ (Larson, 1969; Penston, 1969, etc.), which depends only on the gas
thermal and kinetic temperature (see below Eq. 15). In the lower panels of
Fig. 6, we show the distributions of $\dot{M}\equiv c_{\rm eff}^{3}/G$, for
which we adopt the minimum temperature value in the cloud collapse stage at
$n\gtrsim 10^{3}~{}{\rm cm}^{-3}$. The accretion rate is broadly distributed
over $\dot{M}\simeq 3\times 10^{-3}-5~{}M_{\odot}~{}{\rm yr}^{-1}$. The
vertical line in the bottom panels indicates a reference value of
$0.1~{}M_{\odot}~{}{\rm yr}^{-1}$, above which the outer envelope of an
accreting protostar is bloated due to rapid heat injection through mass
accretion and the emission of stellar ionizing photons is strongly suppressed.
For $v_{\rm bsm}=1\sigma$, the majority of the H-H2 cases yield
$\dot{M}\gtrsim 0.1~{}M_{\odot}~{}{\rm yr}^{-1}$. With $v_{\rm bsm}>1\sigma$,
all the cases have sufficiently high accretion rates exceeding the reference
value (see more discussion in § 5).
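The mapping from the minimum cloud temperature to the accretion rate $\dot{M}\simeq c_{\rm eff}^{3}/G$ takes only a few lines. In the sketch below, the quadrature sum of thermal and kinetic (turbulent/streaming) support in $c_{\rm eff}$ and the mean molecular weight $\mu=1.22$ of neutral primordial gas are simplifying assumptions; the exact definition of $c_{\rm eff}$ follows Eq. 15:

```python
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs
M_SUN, YR   = 1.989e33, 3.156e7

def mdot(T, v_kin_kms=0.0, mu=1.22):
    """Accretion rate Mdot ~ c_eff^3 / G in Msun/yr; c_eff^2 is taken here
    as the quadrature sum of the isothermal sound speed and a kinetic term."""
    c_eff = np.sqrt(K_B * T / (mu * M_H) + (v_kin_kms * 1e5)**2)
    return c_eff**3 / G / M_SUN * YR

print(f"Mdot(8000 K) ~ {mdot(8.0e3):.2f} Msun/yr")   # atomic-cooling cloud
print(f"Mdot(300 K)  ~ {mdot(3.0e2):.1e} Msun/yr")   # H2-cooled cloud
```

An atomically cooling cloud at $T\simeq 8000~{}{\rm K}$ gives $\dot{M}\sim 0.1~{}M_{\odot}~{}{\rm yr}^{-1}$, consistent with the reference value marked in Fig. 6, while an H2-cooled cloud at a few hundred K falls roughly two orders of magnitude below it.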
## 4 Effects of Metal enrichment
### 4.1 Critical Metallicity
Figure 7: Evolution of the heating rate (solid) and metal fine-structure line
cooling rate (dashed) with gas density for the two representative trees (id 2
and 3) with $v_{\rm bsm}=1\sigma$. The cooling rate consists of CII and OI
fine-structure line emission, and the heating rate includes the effect of
turbulence and halo mergers. To quantify the critical metallicity for which
metal-line cooling dominates heating during the gas collapse, we turn off the
H2 cooling rate. The critical metallicity is found to be $Z_{\rm crit}\simeq
1.9\times 10^{-3}~{}Z_{\odot}$ and $2.5\times 10^{-3}~{}Z_{\odot}$ for trees
2 and 3, respectively.
Metal enrichment is considered to be a major obstacle in forming massive BH
seeds through star formation because efficient radiative cooling via metal
fine-structure lines will induce gas fragmentation and suppress the formation
of massive stars. In order to quantify the critical metallicity, we calculate
the cooling rate by CII and OI, assuming that the number fractions of carbon
and oxygen nuclei in the gas phase with respect to hydrogen nuclei are $x_{\rm
C,gas}=0.927\times 10^{-4}(Z/Z_{\odot})$ and $x_{\rm O,gas}=3.568\times
10^{-4}(Z/Z_{\odot})$ (Pollack et al., 1994), and all the carbon and oxygen
are in the form of CII and OI, respectively. This treatment is justified for
warm gas with $T\simeq 8000~{}{\rm K}$ (Omukai et al., 2008).
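A short sketch of this abundance scaling; the linear dependence on $Z$ and the assignment of all gas-phase carbon and oxygen to CII and OI follow the text directly:

```python
def gas_phase_CO_fractions(Z_over_Zsun):
    """Number fractions of gas-phase C and O per H nucleus, scaled
    linearly with metallicity (Pollack et al. 1994 solar values)."""
    x_CII = 0.927e-4 * Z_over_Zsun   # all carbon assumed to be CII
    x_OI  = 3.568e-4 * Z_over_Zsun   # all oxygen assumed to be OI
    return x_CII, x_OI

# At the reference critical metallicity Z_crit = 2e-3 Zsun:
x_CII, x_OI = gas_phase_CO_fractions(2.0e-3)
print(f"x_CII ~ {x_CII:.2e}, x_OI ~ {x_OI:.2e} per H nucleus")
```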
In Fig. 7, we present the metal-line cooling rate (dashed) and heating rate
associated with mergers and gravitational compression (solid) as a function of
the density of gas embedded in the two representative progenitor halos (tree
id 2 and 3) with $v_{\rm bsm}=1\sigma$. In order to examine the cooling effect
by metal lines against heating, the H2 cooling is turned off, and metal-line
cooling is calculated but not included in the thermal evolution. The
metallicity for each case is set so that the cooling rate is marginally
balanced with the heating rate at least once during the collapse phase.
Namely, the critical metallicity is estimated as $Z_{\rm crit}\simeq 1.9\times
10^{-3}~{}Z_{\odot}$ (tree id 2) and $2.5\times 10^{-3}~{}Z_{\odot}$ (tree id
3), respectively. These values are higher than the critical metallicity of
$Z_{\rm crit}\sim 3\times 10^{-4}~{}Z_{\odot}$ (in the absence of dust)
obtained by Omukai et al. (2008), where the effect of turbulence and merger
heating is not included. Although the critical metallicity depends on the
relative abundance of metals produced in SN ejecta, we use $Z_{\rm
crit}=2\times 10^{-3}~{}Z_{\odot}$ as a reference value in the following
discussion.
### 4.2 Efficiency of Metal Enrichment
Throughout this paper, we do not consider the genetic pollution process
through mergers of metal-rich minihalos, given that the star-formation
efficiency is strongly suppressed by intense LW radiation in the overdense
region. However, we note that this treatment is justified only when the
“actual” LW intensity is as high as the average value shown in Fig. 3.
Otherwise, H2 cooling induces star formation in weak LW-radiation pockets. We
do not quantify this effect that reduces the number of the main progenitors
where gas is kept pristine. As a reference, Lupi et al. (2021) found $\sim
30\%$ of the atomic-cooling halos in the overdense region to be polluted
genetically. Since some of those polluted halos do not belong to the merger
history of the final massive quasar host halo, more than 70% of our main-
progenitor samples should remain pristine (or sufficiently metal poor). On the
other hand, together with the metal enrichment effect, we also exclude the
contribution of LW flux from such lower mass halos, making our treatment
conservative.
Next, we discuss the modeling of environmental pollution led by SN-driven
bubbles from nearby star-forming halos. One important caveat is that the
progenitor halo is assumed to be immediately enriched once the bubble front
reaches the halo virial radius. However, the instantaneous enrichment process
considered in many previous studies in the literature may not be realistic. In
fact, metals in SN ejecta cannot penetrate into the halo center but pollute
the halo superficially in the outer region with low densities of $\lesssim
10~{}{\rm cm}^{-3}$ (Chen et al., 2017; Chiaki et al., 2018), leaving the gas
in the halo interior un-polluted, even for low mass halos. If more energetic
pair-instability SNe occur in nearby source halos, the ejecta with stronger
ram pressure deeply penetrate into the target halo and induce metal mixing at
the shock front (Chen et al., 2017). To consider this uncertainty, we
introduce the metal mixing efficiency $f_{\rm mix}$, which is the fraction of
metals mixed with the interior gas in the target halo and is treated as a free
parameter below.
Another important quantity is the total amount of metals carried into the
target halo through multiple SN-driven bubbles. Let us consider a source halo
$m$ with a distance of $r_{\rm s}$ from the target halo with a size of $r_{\rm
vir}(M_{\rm h})$. The mass of metals produced by multiple SNe in the source
halo is given by $m_{\rm met}=N_{\rm sn}m_{\rm ej}$, where $N_{\rm sn}\simeq
m_{\star}/m_{0}$ is the number of SNe and $m_{\rm ej}$ is the average mass of
metals produced by one SN. We here adopt $m_{\rm ej}=0.746~{}M_{\odot}$, which
corresponds to the metal ejecta mass produced by a $13~{}M_{\odot}$ stellar
progenitor (Chiaki et al., 2018). Assuming that a fraction $f_{\rm esc,m}$ of
the metals is launched isotropically by the SN bubble, the mass of the metals
that reach the target halo is given by $f_{\rm esc,m}m_{\rm met}(r_{\rm
vir}/r_{\rm s})^{2}/4$. Therefore, due to SN bubbles produced from one source
halo, the gas metallicity in the target halo increases by
$\Delta Z\simeq\frac{m_{\star}m_{\rm ej}}{f_{b}M_{\rm h}m_{0}}\cdot\frac{f_{\rm esc,m}f_{\rm mix}}{4}\left(\frac{r_{\rm vir}}{r_{\rm s}}\right)^{2}\simeq 9.3\times 10^{-5}~{}Z_{\odot}~{}f_{\rm mix}\left(\frac{f_{\rm esc,m}}{0.5}\right)\left(\frac{m}{M_{\rm h}}\right)\left(\frac{5r_{\rm vir}}{r_{\rm s}}\right)^{2},$ (21)
where $f_{\rm esc,m}\simeq 0.5$ is motivated by 3D high-resolution
hydrodynamical simulations of SN-driven galactic outflows (Li et al., 2017).
As discussed in § 3.1, the LW intensity peaks when the target halo reaches the
atomic-cooling threshold because (1) source halos with $m_{\rm ac,z}$ are the
most abundant population in number and (2) two halos with comparable mass are
strongly clustered. This circumstance will also maximize the efficiency of
environmental enrichment. Assuming $M_{\rm h}=m_{\rm ac,z}$, we estimate the
number of source halos with mass of $m\geq m_{\rm ac,z}$ located within
$r_{\rm s}$ ($\simeq 5r_{\rm vir}$ typically) from the target halo for the
three representative trees as $N_{\rm s}\simeq$ 0.4 (tree id 1), 6 (tree id
2), and 86 (tree id 3), respectively. As a result, the gas metallicity in the
target halo is calculated as $Z=N_{\rm s}\Delta Z\simeq 9.3\times
10^{-5}~{}Z_{\odot}f_{\rm mix}N_{\rm s}$. Therefore, we obtain the conditions
where the environmental enrichment process affects the thermal evolution of
gas in the target halo as $Z>Z_{\rm crit}$, or equivalently
$\displaystyle N_{\rm s}>21.5f_{\rm mix}^{-1}.$ (22)
Since $f_{\rm mix}\leq 1$, the gas evolution in the main progenitor surrounded
by $\lesssim 21$ nearby source halos within $r_{\rm s}$ is unlikely to be
affected by metal-line cooling. On the other hand, if the mixing efficiency is
as high as $f_{\rm mix}\gtrsim 0.25$, metal enrichment will play an important
role in changing the gas evolution in rapidly growing halos (tree id 3),
reducing the number fraction of H-H collapse cases (see Fig. 5).
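The enrichment estimate of Eqs. (21) and (22) reduces to a few lines of arithmetic. The sketch below assumes comparable-mass source and target halos ($m\sim M_{\rm h}$) and the typical separation $r_{\rm s}\simeq 5r_{\rm vir}$, using the $N_{\rm s}$ values quoted above for the three representative trees:

```python
def delta_Z(m_over_Mh=1.0, f_mix=1.0, f_esc_m=0.5, rs_over_rvir=5.0):
    """Metallicity increment from one source halo, Eq. (21), in Zsun.
    The prefactor 9.3e-5 absorbs m_ej = 0.746 Msun, m_0, and f_b."""
    return 9.3e-5 * f_mix * (f_esc_m / 0.5) * m_over_Mh * (5.0 / rs_over_rvir)**2

Z_CRIT = 2.0e-3   # reference critical metallicity in Zsun (Sec. 4.1)

# N_s values for the three representative trees at M_h = m_ac,z:
for tree_id, N_s in [(1, 0.4), (2, 6.0), (3, 86.0)]:
    Z = N_s * delta_Z(f_mix=0.25)   # Eq. (22) threshold: N_s > 21.5 / f_mix
    print(f"tree {tree_id}: Z ~ {Z:.1e} Zsun  (Z_crit ~ {Z_CRIT:.0e} Zsun)")
```

Only the rapidly growing tree (id 3) reaches $Z\sim Z_{\rm crit}$ for $f_{\rm mix}=0.25$, matching the condition $N_{\rm s}>21.5f_{\rm mix}^{-1}$ of Eq. (22).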
Additionally, inhomogeneous density distributions inside the source halos and
a non-steady SFR that forms SNe at earlier stages would change the velocity and
shape of the expanding bubbles. Those effects could result in either an
overestimation or underestimation of the bubble size. To discuss the
efficiency of environmental enrichment precisely, we need to further study a
variety of situations with different physical parameters as well as the metal
mixing efficiency $f_{\rm mix}$. We leave this to future work.
### 4.3 Dynamical evolution of metal enriched gas
We have quantified the critical metallicity and discussed the impact of
metal-line cooling on the thermal evolution of gas clouds. However, the
dynamical evolution of a collapsing cloud with $Z\gtrsim Z_{\rm crit}$, which is
composed of a warm outer envelope ($T\simeq 8000~{}{\rm K}$) and a cool central
core, is not fully understood; in particular, the long-term behavior of the mass inflow rate onto
the central newly-formed protostar is still uncertain. Recent cosmological
simulations suggest that rapid mass inflows may occur even with metal
pollution above the critical metallicity in atomic-cooling halos (Regan et
al., 2020a), but widespread star formation limits the final mass of the
central star to $\lesssim 10^{4}~{}M_{\odot}$ (Regan et al., 2020b). On the
other hand, when the metallicity is lower than the critical value, the
collapsing gas cloud fragments only at the central region and forms a compact
disk, in which a vast majority of the clumps merge with the central protostar
via inward migration (Inayoshi & Haiman, 2014; Chon & Omukai, 2020). As a
result, the stellar growth is not quenched by metal pollution. Future work is
needed to investigate star formation in the overdense regions where
high-$z$ quasars form, to quantify the impact of metal pollution on the gas
dynamics.
Figure 8: Distributions of the mass of massive stars (equivalent to seed BHs)
formed in the quasar progenitors with a bin size of $\Delta\log M_{\star}=0.1$
for the two cases with $v_{\rm bsm}=0$ (left panels) and $v_{\rm bsm}=1\sigma$
(right panels). We set the accretion efficiency of $\eta=0.6$ (and $0.3$) in
the upper (lower) panels. Without streaming motion, the BH mass is widely
distributed from $500~{}M_{\odot}(250~{}M_{\odot})$ to $\gtrsim 2\times
10^{5}~{}M_{\odot}$ for $\eta=0.6$ ($0.3$, respectively), while for $v_{\rm
bsm}=1\sigma$, the lower bound shifts to $7000~{}M_{\odot}(3500~{}M_{\odot})$.
## 5 Discussion
### 5.1 Protostellar Mass and BH Mass Distribution
We apply the obtained mass accretion rate to estimate the final protostellar
mass distribution at the end of star formation episodes. Due to the existing
angular momentum at large scales, the rapidly accreted pristine gas settles
into a disk, which becomes gravitationally unstable and thus results in
fragmentation and clump formation (e.g., Oh & Haiman, 2002). Most clumps can
migrate inward and merge with the central protostar before forming stars
(Inayoshi & Haiman, 2014), yielding an accretion rate onto the stellar surface
through the disk of $\dot{M}_{\star}=\eta\dot{M}$, where $\eta(<1)$ denotes the
conversion efficiency from the global accretion rate to that through the
accretion disk. Hydrodynamical simulations find that mass accretion through
the unstable disk proceeds episodically and the time-averaged value of the
efficiency is $\eta\simeq 0.6$ (Sakurai et al. 2016; Toyouchi et al. in prep).
When the time-averaged accretion rate is higher than a critical rate,
$\dot{M}_{\star}\gtrsim\dot{M}_{\rm crit}$, the accreting star continues to
expand its envelope with a lower surface temperature of $T_{\rm eff}\simeq
5000~{}{\rm K}$, which emits little UV radiation. As a result, stellar
radiative feedback does not prevent the central star from growing via mass
accretion (Omukai & Palla, 2001; Hosokawa et al., 2012, 2013; Schleicher et
al., 2013; Haemmerlé et al., 2018; Sakurai et al., 2015, 2020b). Since the
value of $\dot{M}_{\rm crit}$ ranges from $0.01$ to $0.04~{}M_{\odot}~{}{\rm
yr}^{-1}$, depending on the treatment of the stellar evolution calculations
and their boundary conditions (Hosokawa et al., 2013; Haemmerlé et al., 2018),
we adopt the highest $\dot{M}_{\rm crit}=0.04~{}M_{\odot}~{}{\rm yr}^{-1}$ as
a reference value. This choice leads to a lower bound of the stellar/BH mass.
With $\dot{M}_{\star}\gtrsim\dot{M}_{\rm crit}$, the final stellar mass is
determined either by its finite lifetime or by the onset of stellar collapse
triggered by the general-relativistic (GR) instability (Shibata et al. 2016;
see a review by Woods et al. 2019 and references therein). The final mass is
also affected by fuel supply through mass accretion onto the star. Woods et
al. (2017) have investigated the final mass of stars accreting at a constant
rate over $\simeq 0.01-10~{}M_{\odot}~{}{\rm yr}^{-1}$ (radiative feedback is
neglected), and found that the final mass linearly increases with the
accretion rate below $\sim 0.03~{}M_{\odot}~{}{\rm yr}^{-1}$ but is saturated
around $\sim{\rm a~{}few}\times 10^{5}~{}M_{\odot}$ due to the GR instability.
The relation between the critical mass and accretion rate is fitted as
$M_{\rm\star,GR}\simeq\left[0.83\log\left(\frac{\dot{M}_{\star}}{~{}M_{\odot}~{}{\rm
yr}^{-1}}\right)+2.48\right]\times 10^{5}~{}M_{\odot},$ (23)
at $\dot{M}_{\star}\geq 0.1~{}M_{\odot}~{}{\rm yr}^{-1}$, which is used for
our analysis.
On the other hand, when the stellar accretion rate is lower than the critical
rate, $\dot{M}_{\star}\lesssim\dot{M}_{\rm crit}$, the star evolves to the
main-sequence stage and begins to emit strong ionizing radiation, quenching
the stellar growth. Here, we simply consider that ionizing radiation from the
star heats the disk surface and thus photoevaporation suppresses the accretion
rate (McKee & Tan, 2008; Hosokawa et al., 2011; Tanaka et al., 2013). This
process becomes important when the ionization front reaches the stellar
gravitational influence radius for ionized gas with a temperature of $2\times
10^{4}~{}{\rm K}$ defined by
$R_{\rm inf,\star}\equiv\frac{GM_{\star}}{c_{\rm s,ion}^{2}}\simeq 0.17~{}{\rm
pc}\left(\frac{M_{\star}}{10^{4}~{}M_{\odot}}\right),$ (24)
and the ionized gas breaks out through the neutral infalling gas. The
photoevaporation rate can be expressed as
$\dot{M}_{\rm pe}\simeq 2.1\times 10^{-2}~{}M_{\odot}~{}{\rm
yr}^{-1}\left(\frac{\Phi_{\rm ion}}{10^{52}~{}{\rm
s}^{-1}}\right)^{1/2}\left(\frac{R_{\rm disk}}{0.1~{}{\rm pc}}\right)^{1/2},$
(25)
where $\Phi_{\rm ion}$ is the ionizing photon number flux and $R_{\rm disk}$
is the size of the accretion disk. The photon flux is approximated as
$\Phi_{\rm ion}\simeq 1.6\times 10^{52}~{}{\rm
s}^{-1}(M_{\star}/10^{4}~{}M_{\odot})$ in the range of $10^{3}\lesssim
M_{\star}/M_{\odot}\lesssim 10^{5}$ for main-sequence stars (Johnson et al.,
2012). We evaluate the mass outflow rate owing to photoevaporation by setting
$R_{\rm disk}\simeq R_{\rm inf,\star}$ as
$\dot{M}_{\rm pe,min}\simeq 3.5\times 10^{-2}~{}M_{\odot}~{}{\rm
yr}^{-1}\left(\frac{M_{\star}}{10^{4}~{}M_{\odot}}\right),$ (26)
which gives a lower bound for the rate because the outflow of ionized gas is
mainly driven from larger radii (i.e., a larger surface area). Therefore,
equating $\dot{M}_{\star}=\dot{M}_{\rm pe,min}$, we obtain the feedback-
regulated stellar mass as $M_{\rm\star,fb}\simeq\dot{M}_{\star}t_{\rm pe}$ or
$M_{\rm\star,fb}\simeq 2.9\times
10^{3}~{}M_{\odot}\left(\frac{\dot{M}_{\star}}{0.01~{}M_{\odot}~{}{\rm
yr}^{-1}}\right),$ (27)
at $\dot{M}_{\star}\leq\dot{M}_{\rm crit}$, where $t_{\rm pe}(\simeq 2.9\times
10^{5}~{}{\rm yr})$ is the characteristic photoevaporation timescale (note
that this expression is valid when the stellar lifetime is longer than $t_{\rm
pe}$). The final mass at the intermediate accretion rate ($\dot{M}_{\rm
crit}\leq\dot{M}_{\star}\leq 0.1~{}M_{\odot}~{}{\rm yr}^{-1}$) is estimated by
performing logarithmic interpolation.
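Putting these pieces together, the final stellar (seed BH) mass as a function of the stellar accretion rate can be sketched as the piecewise model below. The log-log interpolation between the two regime boundaries is one plausible reading of the logarithmic interpolation described above, not necessarily the exact scheme used:

```python
import numpy as np

MDOT_CRIT = 0.04   # Msun/yr, the adopted reference value of the critical rate

def mass_GR(mdot):
    """GR-instability-limited mass, Eq. (23); valid for mdot >= 0.1 Msun/yr."""
    return (0.83 * np.log10(mdot) + 2.48) * 1e5        # Msun

def mass_feedback(mdot):
    """Feedback-regulated mass, Eq. (27); valid for mdot <= MDOT_CRIT."""
    return 2.9e3 * (mdot / 0.01)                       # Msun

def final_stellar_mass(mdot):
    """Final mass of the accreting star for mdot = eta * Mdot in Msun/yr,
    with a log-log interpolation in the intermediate regime (assumed here)."""
    if mdot <= MDOT_CRIT:
        return mass_feedback(mdot)
    if mdot >= 0.1:
        return mass_GR(mdot)
    lo, hi = np.log10(MDOT_CRIT), np.log10(0.1)
    m_lo, m_hi = np.log10(mass_feedback(MDOT_CRIT)), np.log10(mass_GR(0.1))
    t = (np.log10(mdot) - lo) / (hi - lo)
    return 10.0**(m_lo + t * (m_hi - m_lo))

for mdot in [1.8e-3, 0.04, 0.07, 0.1, 1.0]:
    print(f"mdot = {mdot:7.4f} Msun/yr -> M_final ~ {final_stellar_mass(mdot):8.3g} Msun")
```

For example, $\dot{M}_{\star}=\eta\dot{M}$ with $\eta=0.6$ and $\dot{M}=3\times 10^{-3}~{}M_{\odot}~{}{\rm yr}^{-1}$ gives $M_{\rm\star,fb}\simeq 5\times 10^{2}~{}M_{\odot}$, consistent with the lower bound of the mass distribution in Fig. 8.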
In Fig. 8, we show the mass distribution of massive BH seeds formed in the
high-$z$ quasar progenitor halos, calculated with the method described above
(see also the bottom panels in Fig. 6). Note that the number fraction from the
different types of gas evolution is stacked in the same mass bin. Without the
streaming motion ($v_{\rm bsm}=0$; left panels), the BH mass is widely
distributed from $500~{}M_{\odot}$ ($250~{}M_{\odot}$) to $\gtrsim 2\times
10^{5}~{}M_{\odot}$ for $\eta=0.6$ ($0.3$, respectively) with a few peaks
corresponding to the virial temperatures of halos when the BHs form by gas
collapse. Overall, the cases with high accretion rates $\dot{M}_{\rm in}$
(H-H2 and H-H cases) are responsible for high-mass BH formation beyond $\sim
10^{4}~{}M_{\odot}$, while the H2 case with lower accretion rates yields less
massive BHs with $<10^{4}~{}M_{\odot}$. The number of BH seeds above $2\times
10^{5}~{}M_{\odot}$ is limited because the GR instability induces direct
collapse of accreting supermassive stars. The shape of the mass distribution
at $10^{4}\lesssim M_{\bullet}/M_{\odot}\lesssim 10^{5}$ depends on the
accretion efficiency; namely, the smaller value of $\eta(=0.3)$ yields a
distribution skewed toward lower masses. With non-zero streaming motion
($v_{\rm bsm}=1\sigma$; right panels), the less massive population with
$<10^{4}~{}M_{\odot}$ decreases abruptly since nearly all the cases experience
the atomic-cooling stage and thus the central stars accrete at high rates
without strong radiative feedback. We note that the BH mass distribution for
higher streaming velocities are similar to that for $v_{\rm bsm}=1\sigma$, but
their contribution to the total BH mass distribution is less important because
regions with $v_{\rm bsm}\geq 2\sigma$ are rarer.
As discussed in §4, the number fraction of the cases with highest mass
accretion rates (H-H cases) would be reduced by the effect of line cooling via
atomic carbon and oxygen which are produced in nearby source halos through SNe
and carried into the quasar main progenitor halos of interest. The level of
reduction depends on the metal mixing efficiency in the main progenitor;
namely, the enrichment effect could be neglected if the mixing efficiency is
lower than $\sim 20\%$. Nevertheless, the overall shape of the BH mass
distribution still holds.
### 5.2 Subsequent BH growth and evolution
How do those massive seed BHs formed in overdense regions grow to be SMBHs
that are observed as high-$z$ quasars at $z\simeq 6-7$? In previous studies in
literature, the subsequent growth of their BHs via gas accretion and/or
mergers and the required conditions have been discussed (e.g., Tanaka &
Haiman, 2009; Valiante et al., 2016). Recently, large-scale cosmological
simulations have been exploring the evolution of SMBHs and the coevolution of
their host galaxies including various feedback processes due to SNe and AGN
activity with subgrid models. These simulations have generally found that
massive seed BHs formed in protogalaxies hardly grow via gas accretion because
dense, cold gas is expelled by energetic SN feedback associated with star
formation. However, it is worth noting that most simulations in which SN
feedback quenches BH growth have focused on “typical” atomic-cooling halos
that will grow to $\sim 10^{10-11}~{}M_{\odot}$ by $z\simeq 6$ (e.g., Habouzit
et al., 2017; Latif et al., 2018; Smith et al., 2018).
On the other hand, as pointed out by Inayoshi et al. (2020), the progenitor
halos of high-$z$ quasar hosts with $M_{\rm h}\simeq 10^{12}~{}M_{\odot}$ at
$z\simeq 6$ form in rarer regions and have reached $M_{\rm h}\sim
10^{8}~{}M_{\odot}$ with deeper gravitational potential by the time when star
formation takes place ($z\sim 20-35$). In such massive halos, a large amount
of cold gas is supplied to the nuclear region through filamentary structures
of the proto-cosmic web (Di Matteo et al., 2012), and the seed BHs can be fed
at high rates significantly exceeding the Eddington limit when the metallicity
of inflowing gas is as low as $\lesssim 0.01~{}Z_{\odot}$ (Toyouchi et al.
2021; see also Inayoshi et al. 2016). The critical halo mass required for the
onset of rapid mass accretion exceeding the Eddington rate is $M_{\rm h}\simeq
10^{9}~{}M_{\odot}$, almost independent of redshift. Most of the quasar
progenitor halos of interest can reach this mass threshold after birth of seed
BHs in $\simeq 20-50$ Myr, within which intense starbursts would take place
and form protogalaxies. Exploring the nature of BH growth embedded in such a
protogalaxy is left for future investigations.
This process is a possible way to form intermediate-mass BH (IMBH)
populations. Observations of IMBHs in the local universe have the potential to
constrain high-$z$ BH (seed) formation (see the review by Greene et al. 2020).
Furthermore, if those IMBHs form binaries through galaxy mergers and dynamical
processes during the cosmic history, the seed forming channel also provides a
significant number of gravitational wave events (e.g., Hartwig et al., 2018;
Chon et al., 2018; Regan et al., 2020b), which will be detectable by the
space-based gravitational wave detectors such as the Laser Interferometer
Space Antenna (LISA) (Amaro-Seoane et al., 2017) and Tianqin (Luo et al.,
2016), and third-generation terrestrial instruments.
## 6 Summary
In this paper, we investigate a new scenario of the formation of heavy BH
seeds through collapse of warm gas in massive halos that end up in quasar
hosts at $z\simeq 6-7$. In the highly biased, overdense regions of the
universe, stronger halo clustering increases the frequency of halo mergers and
boosts the mean intensity of LW radiation background produced from star-
forming galaxies. Those effects are expected to increase the probability of
massive seed formation because the conditions required for their formation
(intense LW irradiation and violent merger heating) become less stringent than
previously considered. Under such unique environments, we model the thermal and
dynamical evolution of massive gas clouds along with $10^{4}$ merger trees of
the main progenitors of high-$z$ quasar hosts using the Monte Carlo method.
With those samples, we study the statistical properties of the progenitor
halos of high-$z$ quasar hosts and massive seed BHs. Our major findings can be
summarized as follows.
1.
In the high-$z$ quasar forming regions, DM halos are likely irradiated by
strong LW radiation with intensity of $J_{\rm LW}\simeq 100-10^{3}$ (in units
of $10^{-21}~{}{\rm erg}~{}{\rm s}^{-1}~{}{\rm cm}^{-2}~{}{\rm Hz}^{-1}~{}{\rm sr}^{-1}$)
from nearby star-forming galaxies at $z\lesssim 30$, and gas clouds in the halo
interiors are heated by successive gaseous halo mergers. Suppression of H2
cooling via LW irradiation and merger heating, as well as injection of gas
kinetic energy through halo mergers, prevents gas collapse and delays the
onset of star formation.
2.
Without baryonic streaming motion, 74% of the trees experience gas collapse
led by H2 cooling, while the rest (26%) form atomically-cooling gas clouds
that begin to collapse isothermally with $T\simeq 8000~{}{\rm K}$ via
Ly$\alpha$ cooling. With a streaming velocity higher than the root-mean-square
value, gas clouds for nearly all $10^{4}$ realizations of the merger trees
enter the atomic-cooling stage.
3.
The fraction of trees that host isothermal gas collapse is $14\%$ for $v_{\rm
bsm}=1\sigma$ and increases with streaming velocity, while the rest form
H2-cooled cores after short isothermal phases. However, this fraction is
reduced by additional cooling via metal fine-structure lines if the collapsing
gas is enriched above $Z_{\rm crit}\sim 2\times 10^{-3}~{}Z_{\odot}$, which
requires efficient metal mixing with $f_{\rm mix}\gtrsim 0.25$. Such conditions
reflect the peculiar environments of high-redshift quasar forming regions,
which hardly occur in typical high-redshift star-forming regions.
4.
The mass accretion rate onto a newly-born protostar is distributed over
$3\times 10^{-3}-5~{}M_{\odot}~{}{\rm yr}^{-1}$, a large fraction of which
exceeds the critical rate above which stellar radiative feedback is suppressed. As a result,
we expect a distribution of stellar masses (presumably BH masses) ranging from
several hundred to above $10^{5}~{}M_{\odot}$.
We greatly thank Gen Chiaki, Zoltán Haiman, Tilman Hartwig, Alessandro Lupi,
and Daisuke Toyouchi for constructive discussions. This work is supported by
the National Natural Science Foundation of China (12073003, 12003003,
11721303, 11991052, 11950410493), the National Key R&D Program of China
(2016YFA0400702), and the High-Performance Computing Platform of Peking
University. Y.Q. acknowledges support from the China Postdoctoral Science
Foundation (2020T130019).
## Appendix A The critical conditions for collapse of an isothermal gas cloud
In the Appendix, we briefly describe the method of how to calculate the
critical gas density at the center by solving the hydrostatic equation for an
isothermal gas cloud (Eq. 15), where the gas pressure gradient force is
balanced with the gas self-gravity and DM gravitational force. For
demonstration purposes, in the left panel of Fig. 9, we show the radial
profiles of gas with an effective sound speed of $c_{\rm
eff}=8.3\mathrm{~{}km~{}s^{-1}}$ (corresponding to $T=10^{4}~{}{\rm K}$ gas in
the absence of turbulence) for different values of $\rho_{0}$ in a DM halo
with $M_{\mathrm{h}}=6\times 10^{6}~{}M_{\odot}$ at $z=30$. As the central
density increases, the density at the virial radius $\rho_{\rm gas}(R_{\rm
vir})$ does not increase monotonically but has a local maximum value around
$\rho_{0}\simeq 10^{-21}~{}\rm g~{}{cm}^{-3}$. In general, the maximum value
of $\rho_{\rm gas}(R_{\rm vir})$ can be found for a given combination of
$M_{\mathrm{h}}$, $z$, and $c_{\rm eff}$. In the right panel of Fig. 9, we
present the relation between $\rho_{\rm gas}(R_{\rm vir})$ and $\rho_{0}$ for
different halo masses ($z=30$ and $c_{\rm eff}=8.3\mathrm{~{}km~{}s^{-1}}$ are
fixed). As seen in the left panel, each curve has a local maximum and the
maximum value decreases with $M_{\rm h}$. The density value at the outer
boundary ($\rho_{\rm ext}=f_{\rm b}\rho_{\rm DM}$) weakly depends on $M_{\rm
h}$ and $z$ through the concentration factor $c_{\rm vir}$, i.e., the three
halos have $\rho_{\rm ext}\simeq 8\times 10^{-25}~{}{\rm g~{}cm}^{-3}$, varying
within $3\%$. For $M_{\rm h}=6\times 10^{6}~{}M_{\odot}$, there exist two
solutions where the boundary conditions are satisfied. Since the solution with
the higher value of $\rho_{0}$ is not stable, we adopt the solution with the
lower value of $\rho_{0}$ (see Ebert, 1955; Bonnor, 1958; Lynden-Bell & Wood,
1968). As the halo mass increases to $M_{\rm h}=8\times 10^{6}$ and
$10^{7}~{}M_{\odot}$, there is no hydrostatic solution for the gas cloud. In
our semi-analytical model, we calculate the hydrostatic density profile that
satisfies the boundary conditions at each time step and quantify the critical
halo mass $M_{\rm h,crit}$ above which the gas begins to collapse. We note
that this method can be applied to a wide range of $c_{\rm eff}$ and $z$ of
interest in our paper.
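The boundary-value scan described above can be reproduced by integrating the hydrostatic equation outward from the center. In the minimal sketch below, the DM gravity is modeled with an NFW profile, and $c_{\rm vir}=3$ and $R_{\rm vir}\simeq 150~{}{\rm pc}$ are assumed round values for the example halo, so the numbers are illustrative rather than a reproduction of Fig. 9:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, M_SUN, PC = 6.674e-8, 1.989e33, 3.086e18   # cgs

def nfw_enclosed_mass(r, M_h, R_vir, c_vir=3.0):
    """DM mass within r for an NFW halo; c_vir = 3 is an assumed value."""
    m = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    return M_h * m(r * c_vir / R_vir) / m(c_vir)

def rho_at_rvir(rho0, M_h, R_vir, c_eff):
    """Integrate hydrostatic balance outward from the center,
       d ln(rho)/dr = -G [M_gas(<r) + M_DM(<r)] / (c_eff^2 r^2),
       dM_gas/dr    = 4 pi r^2 rho,
    and return the gas density at the virial radius."""
    def rhs(r, y):
        ln_rho, m_gas = y
        m_tot = m_gas + nfw_enclosed_mass(r, M_h, R_vir)
        return [-G * m_tot / (c_eff**2 * r**2),
                4.0 * np.pi * r**2 * np.exp(ln_rho)]
    r0 = 1e-3 * R_vir   # small inner radius to avoid the r = 0 singularity
    sol = solve_ivp(rhs, (r0, R_vir), [np.log(rho0), 0.0], rtol=1e-8)
    return np.exp(sol.y[0, -1])

# Example halo: M_h = 6e6 Msun, R_vir ~ 150 pc (assumed), c_eff = 8.3 km/s.
M_h, R_vir, c_eff = 6e6 * M_SUN, 150.0 * PC, 8.3e5
for rho0 in [1e-23, 1e-22, 1e-21, 1e-20]:   # central densities in g cm^-3
    print(f"rho0 = {rho0:.0e} -> rho(R_vir) = "
          f"{rho_at_rvir(rho0, M_h, R_vir, c_eff):.2e} g cm^-3")
```

Scanning $\rho_{0}$ in this way traces out the non-monotonic $\rho_{\rm gas}(R_{\rm vir})$ curve, and the critical halo mass is the largest $M_{\rm h}$ for which $\rho_{\rm gas}(R_{\rm vir})=\rho_{\rm ext}$ still has a solution.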
Figure 9: Left panel: gas density profile in a halo with
$M_{\mathrm{h}}=6\times 10^{6}~{}M_{\odot}$ at $z=30$, $c_{\rm
eff}=8.3\mathrm{~{}km~{}s^{-1}}$, calculated from
$\rho_{0}=10^{-23,-22,-21,-20}~{}\rm g~{}{cm}^{-3}$. With increasing $\rho_{0}$,
the resulting $\rho_{\rm gas}(R_{\rm vir})$ first increases and then decreases. Right
panel: the resulting $\rho_{\rm gas}(R_{\rm vir})$ as a function of $\rho_{0}$ for
different halo masses. The solution of $\rho_{0}$ is determined from the left
intersection of $\rho_{\rm ext}$ and $\rho_{\rm gas}(R_{\rm vir})$ curves. The
local maximum of $\rho_{\rm gas}(R_{\rm vir})$ decreases with increasing halo
mass, thus a critical $M_{\rm h,crit}$ exists above which no solution of
$\rho_{0}$ can be found. In this case, $M_{\rm h,crit}$ lies between $6\times
10^{6}$ and $8\times 10^{6}~{}M_{\odot}$.
## References
* Abel et al. (1997) Abel, T., Anninos, P., Zhang, Y., & Norman, M. L. 1997, New A, 2, 181, doi: 10.1016/S1384-1076(97)00010-9
* Abel et al. (2002) Abel, T., Bryan, G. L., & Norman, M. L. 2002, Science, 295, 93, doi: 10.1126/science.295.5552.93
* Agarwal et al. (2012) Agarwal, B., Khochfar, S., Johnson, J. L., et al. 2012, MNRAS, 425, 2854, doi: 10.1111/j.1365-2966.2012.21651.x
* Amaro-Seoane et al. (2017) Amaro-Seoane, P., Audley, H., Babak, S., et al. 2017, arXiv e-prints, arXiv:1702.00786. https://arxiv.org/abs/1702.00786
* Barlow (1984) Barlow, S. E. 1984, PhD thesis, University of Colorado at Boulder.
* Becerra et al. (2015) Becerra, F., Greif, T. H., Springel, V., & Hernquist, L. E. 2015, MNRAS, 446, 2380, doi: 10.1093/mnras/stu2284
* Bonetti et al. (2019) Bonetti, M., Sesana, A., Haardt, F., Barausse, E., & Colpi, M. 2019, MNRAS, 486, 4044, doi: 10.1093/mnras/stz903
* Bonnor (1958) Bonnor, W. B. 1958, MNRAS, 118, 523, doi: 10.1093/mnras/118.5.523
* Bromm & Loeb (2003) Bromm, V., & Loeb, A. 2003, ApJ, 596, 34, doi: 10.1086/377529
* Bromm & Yoshida (2011) Bromm, V., & Yoshida, N. 2011, ARA&A, 49, 373, doi: 10.1146/annurev-astro-081710-102608
* Bullock et al. (2001) Bullock, J. S., Kolatt, T. S., Sigad, Y., et al. 2001, MNRAS, 321, 559, doi: 10.1046/j.1365-8711.2001.04068.x
* Chandrasekhar (1951a) Chandrasekhar, S. 1951a, Proceedings of the Royal Society of London Series A, 210, 18, doi: 10.1098/rspa.1951.0227
* Chandrasekhar (1951b) —. 1951b, Proceedings of the Royal Society of London Series A, 210, 26, doi: 10.1098/rspa.1951.0228
* Chen et al. (2017) Chen, K.-J., Whalen, D. J., Wollenberg, K. M. J., Glover, S. C. O., & Klessen, R. S. 2017, ApJ, 844, 111, doi: 10.3847/1538-4357/aa7b34
* Chiaki et al. (2018) Chiaki, G., Susa, H., & Hirano, S. 2018, MNRAS, 475, 4378, doi: 10.1093/mnras/sty040
* Chon et al. (2016) Chon, S., Hirano, S., Hosokawa, T., & Yoshida, N. 2016, ApJ, 832, 134, doi: 10.3847/0004-637X/832/2/134
* Chon et al. (2018) Chon, S., Hosokawa, T., & Yoshida, N. 2018, MNRAS, 475, 4104, doi: 10.1093/mnras/sty086
* Chon & Omukai (2020) Chon, S., & Omukai, K. 2020, MNRAS, 494, 2851, doi: 10.1093/mnras/staa863
* Cole et al. (2000) Cole, S., Lacey, C. G., Baugh, C. M., & Frenk, C. S. 2000, MNRAS, 319, 168, doi: 10.1046/j.1365-8711.2000.03879.x
* Coppola et al. (2011) Coppola, C. M., Longo, S., Capitelli, M., Palla, F., & Galli, D. 2011, ApJS, 193, 7, doi: 10.1088/0067-0049/193/1/7
* Croft et al. (1999) Croft, H., Dickinson, A. S., & Gadea, F. X. 1999, MNRAS, 304, 327, doi: 10.1046/j.1365-8711.1999.02346.x
* Dalgarno & Lepp (1987) Dalgarno, A., & Lepp, S. 1987, in Astrochemistry, ed. M. S. Vardya & S. P. Tarafdar, Vol. 120, 109–118
* Dayal et al. (2019) Dayal, P., Rossi, E. M., Shiralilou, B., et al. 2019, MNRAS, 486, 2336, doi: 10.1093/mnras/stz897
* Di Matteo et al. (2012) Di Matteo, T., Khandai, N., DeGraf, C., et al. 2012, ApJ, 745, L29, doi: 10.1088/2041-8205/745/2/L29
* Dijkstra et al. (2014) Dijkstra, M., Ferrara, A., & Mesinger, A. 2014, MNRAS, 442, 2036, doi: 10.1093/mnras/stu1007
* Dijkstra et al. (2008) Dijkstra, M., Haiman, Z., Mesinger, A., & Wyithe, J. S. B. 2008, MNRAS, 391, 1961, doi: 10.1111/j.1365-2966.2008.14031.x
* Dove et al. (1987) Dove, J. E., Rusk, A. C. M., Cribb, P. H., & Martin, P. G. 1987, ApJ, 318, 379, doi: 10.1086/165375
* Ebert (1955) Ebert, R. 1955, ZAp, 37, 217
* Fan et al. (2006) Fan, X., Strauss, M. A., Richards, G. T., et al. 2006, AJ, 131, 1203, doi: 10.1086/500296
* Ferland et al. (1992) Ferland, G. J., Peterson, B. M., Horne, K., Welsh, W. F., & Nahar, S. N. 1992, ApJ, 387, 95, doi: 10.1086/171063
* Fernandez et al. (2014) Fernandez, R., Bryan, G. L., Haiman, Z., & Li, M. 2014, MNRAS, 439, 3798, doi: 10.1093/mnras/stu230
* Glover (2015a) Glover, S. C. O. 2015a, MNRAS, 451, 2082, doi: 10.1093/mnras/stv1059
* Glover (2015b) —. 2015b, MNRAS, 453, 2901, doi: 10.1093/mnras/stv1781
* Glover & Abel (2008) Glover, S. C. O., & Abel, T. 2008, MNRAS, 388, 1627, doi: 10.1111/j.1365-2966.2008.13224.x
* Glover & Jappsen (2007) Glover, S. C. O., & Jappsen, A. K. 2007, ApJ, 666, 1, doi: 10.1086/519445
* Greene et al. (2020) Greene, J. E., Strader, J., & Ho, L. C. 2020, ARA&A, 58, 257, doi: 10.1146/annurev-astro-032620-021835
* Habouzit et al. (2017) Habouzit, M., Volonteri, M., & Dubois, Y. 2017, MNRAS, 468, 3935, doi: 10.1093/mnras/stx666
* Haemmerlé et al. (2018) Haemmerlé, L., Woods, T. E., Klessen, R. S., Heger, A., & Whalen, D. J. 2018, MNRAS, 474, 2757, doi: 10.1093/mnras/stx2919
* Haiman et al. (1997) Haiman, Z., Rees, M. J., & Loeb, A. 1997, ApJ, 476, 458, doi: 10.1086/303647
* Haiman et al. (1996) Haiman, Z., Thoul, A. A., & Loeb, A. 1996, ApJ, 464, 523, doi: 10.1086/177343
* Hartwig et al. (2018) Hartwig, T., Agarwal, B., & Regan, J. A. 2018, MNRAS, 479, L23, doi: 10.1093/mnrasl/sly091
* Hirano et al. (2017) Hirano, S., Hosokawa, T., Yoshida, N., & Kuiper, R. 2017, Science, 357, 1375, doi: 10.1126/science.aai9119
* Hirano et al. (2018) Hirano, S., Yoshida, N., Sakurai, Y., & Fujii, M. S. 2018, ApJ, 855, 17, doi: 10.3847/1538-4357/aaaaba
* Hosokawa et al. (2012) Hosokawa, T., Omukai, K., & Yorke, H. W. 2012, ApJ, 756, 93, doi: 10.1088/0004-637X/756/1/93
* Hosokawa et al. (2011) Hosokawa, T., Omukai, K., Yoshida, N., & Yorke, H. W. 2011, Science, 334, 1250, doi: 10.1126/science.1207433
* Hosokawa et al. (2013) Hosokawa, T., Yorke, H. W., Inayoshi, K., Omukai, K., & Yoshida, N. 2013, ApJ, 778, 178, doi: 10.1088/0004-637X/778/2/178
* Hummer & Storey (1998) Hummer, D. G., & Storey, P. J. 1998, MNRAS, 297, 1073, doi: 10.1046/j.1365-8711.1998.2970041073.x
* Huq et al. (1982) Huq, M. S., Doverspike, L. D., Champion, R. L., & Esaulov, V. A. 1982, Journal of Physics B Atomic Molecular Physics, 15, 951, doi: 10.1088/0022-3700/15/6/020
* Iliev et al. (2003) Iliev, I. T., Scannapieco, E., Martel, H., & Shapiro, P. R. 2003, MNRAS, 341, 81, doi: 10.1046/j.1365-8711.2003.06410.x
* Inayoshi & Haiman (2014) Inayoshi, K., & Haiman, Z. 2014, MNRAS, 445, 1549, doi: 10.1093/mnras/stu1870
* Inayoshi et al. (2016) Inayoshi, K., Haiman, Z., & Ostriker, J. P. 2016, MNRAS, 459, 3738, doi: 10.1093/mnras/stw836
* Inayoshi et al. (2018) Inayoshi, K., Li, M., & Haiman, Z. 2018, MNRAS, 479, 4017, doi: 10.1093/mnras/sty1720
* Inayoshi & Omukai (2012) Inayoshi, K., & Omukai, K. 2012, MNRAS, 422, 2539, doi: 10.1111/j.1365-2966.2012.20812.x
* Inayoshi et al. (2014) Inayoshi, K., Omukai, K., & Tasker, E. 2014, MNRAS, 445, L109, doi: 10.1093/mnrasl/slu151
* Inayoshi et al. (2020) Inayoshi, K., Visbal, E., & Haiman, Z. 2020, ARA&A, 58, 27, doi: 10.1146/annurev-astro-120419-014455
* Inoue (2011) Inoue, A. K. 2011, MNRAS, 415, 2920, doi: 10.1111/j.1365-2966.2011.18906.x
* Jacobs et al. (1967) Jacobs, T. A., Giedt, R. R., & Cohen, N. 1967, J. Chem. Phys., 47, 54, doi: 10.1063/1.1711890
* Janev et al. (1987) Janev, R. K., Langer, W. D., Post, D. E., & Evans, K. 1987, in Elementary Processes in Hydrogen-Helium Plasmas: Cross Sections and Reaction Rate Coefficients (Berlin, Heidelberg: Springer Berlin Heidelberg), 217–231, doi: 10.1007/978-3-642-71935-6_7
* Jiang et al. (2016) Jiang, L., McGreer, I. D., Fan, X., et al. 2016, ApJ, 833, 222, doi: 10.3847/1538-4357/833/2/222
* Johnson et al. (2013) Johnson, J. L., Dalla Vecchia, C., & Khochfar, S. 2013, MNRAS, 428, 1857, doi: 10.1093/mnras/sts011
* Johnson et al. (2012) Johnson, J. L., Whalen, D. J., Fryer, C. L., & Li, H. 2012, ApJ, 750, 66, doi: 10.1088/0004-637X/750/1/66
* Karpas et al. (1979) Karpas, Z., Anicich, V., & Huntress, W. T. 1979, J. Chem. Phys., 70, 2877, doi: 10.1063/1.437823
* Kimura et al. (1993) Kimura, M., Lane, N. F., Dalgarno, A., & Dixson, R. G. 1993, ApJ, 405, 801, doi: 10.1086/172410
* Kormendy & Ho (2013) Kormendy, J., & Ho, L. C. 2013, ARA&A, 51, 511, doi: 10.1146/annurev-astro-082708-101811
* Kreckel et al. (2010) Kreckel, H., Bruhns, H., Čížek, M., et al. 2010, Science, 329, 69, doi: 10.1126/science.1187191
* Lacey & Cole (1993) Lacey, C., & Cole, S. 1993, MNRAS, 262, 627, doi: 10.1093/mnras/262.3.627
* Larson (1969) Larson, R. B. 1969, MNRAS, 145, 271, doi: 10.1093/mnras/145.3.271
* Latif et al. (2013) Latif, M. A., Schleicher, D. R. G., Schmidt, W., & Niemeyer, J. 2013, MNRAS, 433, 1607, doi: 10.1093/mnras/stt834
* Latif & Volonteri (2015) Latif, M. A., & Volonteri, M. 2015, MNRAS, 452, 1026, doi: 10.1093/mnras/stv1337
* Latif et al. (2018) Latif, M. A., Volonteri, M., & Wise, J. H. 2018, MNRAS, 476, 5016, doi: 10.1093/mnras/sty622
* Lepp & Shull (1983) Lepp, S., & Shull, J. M. 1983, ApJ, 270, 578, doi: 10.1086/161149
* Li et al. (2017) Li, M., Bryan, G. L., & Ostriker, J. P. 2017, ApJ, 841, 101, doi: 10.3847/1538-4357/aa7263
* Luo et al. (2016) Luo, J., Chen, L.-S., Duan, H.-Z., et al. 2016, Classical and Quantum Gravity, 33, 035010, doi: 10.1088/0264-9381/33/3/035010
* Lupi et al. (2021) Lupi, A., Haiman, Z., & Volonteri, M. 2021, MNRAS, doi: 10.1093/mnras/stab692
* Lynden-Bell & Wood (1968) Lynden-Bell, D., & Wood, R. 1968, MNRAS, 138, 495, doi: 10.1093/mnras/138.4.495
* Mac Low & Shull (1986) Mac Low, M. M., & Shull, J. M. 1986, ApJ, 302, 585, doi: 10.1086/164017
* Martin et al. (1998) Martin, P. G., Keogh, W. J., & Mandy, M. E. 1998, ApJ, 499, 793, doi: 10.1086/305665
* Matsuoka et al. (2018) Matsuoka, Y., Iwasawa, K., Onoue, M., et al. 2018, ApJS, 237, 5, doi: 10.3847/1538-4365/aac724
* McKee & Tan (2002) McKee, C. F., & Tan, J. C. 2002, Nature, 416, 59, doi: 10.1038/416059a
* McKee & Tan (2008) —. 2008, ApJ, 681, 771, doi: 10.1086/587434
* McLaughlin et al. (2017) McLaughlin, B. M., Stancil, P. C., Sadeghpour, H. R., & Forrey, R. C. 2017, Journal of Physics B Atomic Molecular Physics, 50, 114001, doi: 10.1088/1361-6455/aa6c1f
* Mo & White (2002) Mo, H. J., & White, S. D. M. 2002, MNRAS, 336, 112, doi: 10.1046/j.1365-8711.2002.05723.x
* Mortlock et al. (2011) Mortlock, D. J., Warren, S. J., Venemans, B. P., et al. 2011, Nature, 474, 616, doi: 10.1038/nature10159
* Navarro et al. (1997) Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493, doi: 10.1086/304888
* Oh & Haiman (2002) Oh, S. P., & Haiman, Z. 2002, ApJ, 569, 558, doi: 10.1086/339393
* Omukai (2001) Omukai, K. 2001, ApJ, 546, 635, doi: 10.1086/318296
* Omukai & Palla (2001) Omukai, K., & Palla, F. 2001, ApJ, 561, L55, doi: 10.1086/324410
* Omukai et al. (2008) Omukai, K., Schneider, R., & Haiman, Z. 2008, ApJ, 686, 801, doi: 10.1086/591636
* Onoue et al. (2019) Onoue, M., Kashikawa, N., Matsuoka, Y., et al. 2019, ApJ, 880, 77, doi: 10.3847/1538-4357/ab29e9
* Orel (1987) Orel, A. E. 1987, J. Chem. Phys., 87, 314, doi: 10.1063/1.453628
* Parkinson et al. (2008) Parkinson, H., Cole, S., & Helly, J. 2008, MNRAS, 383, 557, doi: 10.1111/j.1365-2966.2007.12517.x
* Peart & Hayton (1994) Peart, B., & Hayton, D. A. 1994, Journal of Physics B Atomic Molecular Physics, 27, 2551, doi: 10.1088/0953-4075/27/12/013
* Penston (1969) Penston, M. V. 1969, MNRAS, 144, 425, doi: 10.1093/mnras/144.4.425
* Planck Collaboration et al. (2016) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13, doi: 10.1051/0004-6361/201525830
* Pollack et al. (1994) Pollack, J. B., Hollenbach, D., Beckwith, S., et al. 1994, ApJ, 421, 615, doi: 10.1086/173677
* Poulaert et al. (1978) Poulaert, G., Brouillard, F., Claeys, W., McGowan, J. W., & Van Wassenhove, G. 1978, Journal of Physics B Atomic Molecular Physics, 11, L671, doi: 10.1088/0022-3700/11/21/006
* Press & Schechter (1974) Press, W. H., & Schechter, P. 1974, ApJ, 187, 425, doi: 10.1086/152650
* Regan et al. (2020a) Regan, J. A., Haiman, Z., Wise, J. H., O’Shea, B. W., & Norman, M. L. 2020a, The Open Journal of Astrophysics, 3, E9, doi: 10.21105/astro.2006.14625
* Regan et al. (2014) Regan, J. A., Johansson, P. H., & Wise, J. H. 2014, ApJ, 795, 137, doi: 10.1088/0004-637X/795/2/137
* Regan et al. (2020b) Regan, J. A., Wise, J. H., Woods, T. E., et al. 2020b, The Open Journal of Astrophysics, 3, 15, doi: 10.21105/astro.2008.08090
* Sakurai et al. (2020a) Sakurai, Y., Haiman, Z., & Inayoshi, K. 2020a, MNRAS, 499, 5960, doi: 10.1093/mnras/staa3227
* Sakurai et al. (2020b) —. 2020b, MNRAS, 499, 5960, doi: 10.1093/mnras/staa3227
* Sakurai et al. (2015) Sakurai, Y., Hosokawa, T., Yoshida, N., & Yorke, H. W. 2015, MNRAS, 452, 755, doi: 10.1093/mnras/stv1346
* Sakurai et al. (2016) Sakurai, Y., Vorobyov, E. I., Hosokawa, T., et al. 2016, MNRAS, 459, 1137, doi: 10.1093/mnras/stw637
* Savin et al. (2004) Savin, D. W., Krstić, P. S., Haiman, Z., & Stancil, P. C. 2004, ApJ, 606, L167, doi: 10.1086/421108
* Scannapieco & Barkana (2002) Scannapieco, E., & Barkana, R. 2002, ApJ, 571, 585, doi: 10.1086/340063
* Schauer et al. (2019) Schauer, A. T. P., Glover, S. C. O., Klessen, R. S., & Ceverino, D. 2019, MNRAS, 484, 3510, doi: 10.1093/mnras/stz013
* Schauer et al. (2015) Schauer, A. T. P., Whalen, D. J., Glover, S. C. O., & Klessen, R. S. 2015, MNRAS, 454, 2441, doi: 10.1093/mnras/stv2117
* Schleicher et al. (2013) Schleicher, D. R. G., Palla, F., Ferrara, A., Galli, D., & Latif, M. 2013, A&A, 558, A59, doi: 10.1051/0004-6361/201321949
* Schneider et al. (1994) Schneider, I. F., Dulieu, O., Giusti-Suzor, A., & Roueff, E. 1994, ApJ, 424, 983, doi: 10.1086/173948
* Schulz & Asundi (1967) Schulz, G. J., & Asundi, R. K. 1967, Physical Review, 158, 25, doi: 10.1103/PhysRev.158.25
* Sesana et al. (2008) Sesana, A., Vecchio, A., & Colacino, C. N. 2008, MNRAS, 390, 192, doi: 10.1111/j.1365-2966.2008.13682.x
* Shang et al. (2010) Shang, C., Bryan, G. L., & Haiman, Z. 2010, MNRAS, 402, 1249, doi: 10.1111/j.1365-2966.2009.15960.x
* Shapiro & Kang (1987) Shapiro, P. R., & Kang, H. 1987, ApJ, 318, 32, doi: 10.1086/165350
* Sheth et al. (2001) Sheth, R. K., Mo, H. J., & Tormen, G. 2001, MNRAS, 323, 1, doi: 10.1046/j.1365-8711.2001.04006.x
* Shibata et al. (2016) Shibata, M., Sekiguchi, Y., Uchida, H., & Umeda, H. 2016, Phys. Rev. D, 94, 021501, doi: 10.1103/PhysRevD.94.021501
* Smith et al. (2018) Smith, B. D., Regan, J. A., Downes, T. P., et al. 2018, MNRAS, 480, 3762, doi: 10.1093/mnras/sty2103
* Stancil (1994) Stancil, P. C. 1994, ApJ, 430, 360, doi: 10.1086/174411
* Sugimura et al. (2014) Sugimura, K., Omukai, K., & Inoue, A. K. 2014, MNRAS, 445, 544, doi: 10.1093/mnras/stu1778
* Tanaka et al. (2013) Tanaka, K. E. I., Nakamoto, T., & Omukai, K. 2013, ApJ, 773, 155, doi: 10.1088/0004-637X/773/2/155
* Tanaka & Haiman (2009) Tanaka, T., & Haiman, Z. 2009, ApJ, 696, 1798, doi: 10.1088/0004-637X/696/2/1798
* Tanaka & Li (2014) Tanaka, T. L., & Li, M. 2014, MNRAS, 439, 1092, doi: 10.1093/mnras/stu042
* Tegmark et al. (1997) Tegmark, M., Silk, J., Rees, M. J., et al. 1997, ApJ, 474, 1, doi: 10.1086/303434
* Toyouchi et al. (2021) Toyouchi, D., Inayoshi, K., Hosokawa, T., & Kuiper, R. 2021, ApJ, 907, 74, doi: 10.3847/1538-4357/abcfc2
* Trevisan & Tennyson (2002) Trevisan, C. S., & Tennyson, J. 2002, Plasma Physics and Controlled Fusion, 44, 1263, doi: 10.1088/0741-3335/44/7/315
* Tseliakhovich & Hirata (2010) Tseliakhovich, D., & Hirata, C. 2010, Phys. Rev. D, 82, 083520, doi: 10.1103/PhysRevD.82.083520
* Valiante et al. (2016) Valiante, R., Schneider, R., Volonteri, M., & Omukai, K. 2016, MNRAS, 457, 3356, doi: 10.1093/mnras/stw225
* Visbal et al. (2014a) Visbal, E., Haiman, Z., & Bryan, G. L. 2014a, MNRAS, 445, 1056, doi: 10.1093/mnras/stu1794
* Visbal et al. (2014b) —. 2014b, MNRAS, 442, L100, doi: 10.1093/mnrasl/slu063
* Voit et al. (2003) Voit, G. M., Balogh, M. L., Bower, R. G., Lacey, C. G., & Bryan, G. L. 2003, ApJ, 593, 272, doi: 10.1086/376499
* Voit et al. (2005) Voit, G. M., Kay, S. T., & Bryan, G. L. 2005, MNRAS, 364, 909, doi: 10.1111/j.1365-2966.2005.09621.x
* Walkauskas & Kaufman (1975) Walkauskas, L., & Kaufman, F. 1975, Symposium (International) on Combustion, 15, 691, doi: 10.1016/S0082-0784(75)80339-0
* Wang et al. (2021) Wang, F., Yang, J., Fan, X., et al. 2021, ApJ, 907, L1, doi: 10.3847/2041-8213/abd8c6
* Willott et al. (2010) Willott, C. J., Delorme, P., Reylé, C., et al. 2010, AJ, 139, 906, doi: 10.1088/0004-6256/139/3/906
* Wise & Abel (2007) Wise, J. H., & Abel, T. 2007, ApJ, 665, 899, doi: 10.1086/520036
* Wise et al. (2019) Wise, J. H., Regan, J. A., O’Shea, B. W., et al. 2019, Nature, 566, 85, doi: 10.1038/s41586-019-0873-4
* Wolcott-Green & Haiman (2011) Wolcott-Green, J., & Haiman, Z. 2011, MNRAS, 412, 2603, doi: 10.1111/j.1365-2966.2010.18080.x
* Woods et al. (2017) Woods, T. E., Heger, A., Whalen, D. J., Haemmerlé, L., & Klessen, R. S. 2017, ApJ, 842, L6, doi: 10.3847/2041-8213/aa7412
* Woods et al. (2019) Woods, T. E., Agarwal, B., Bromm, V., et al. 2019, PASA, 36, e027, doi: 10.1017/pasa.2019.14
* Wu et al. (2015) Wu, X.-B., Wang, F., Fan, X., et al. 2015, Nature, 518, 512, doi: 10.1038/nature14241
* Wyithe & Padmanabhan (2006) Wyithe, J. S. B., & Padmanabhan, T. 2006, MNRAS, 366, 1029, doi: 10.1111/j.1365-2966.2005.09858.x
* Yoshida et al. (2003) Yoshida, N., Abel, T., Hernquist, L., & Sugiyama, N. 2003, ApJ, 592, 645, doi: 10.1086/375810
* Zygelman et al. (1989) Zygelman, B., Dalgarno, A., Kimura, M., & Lane, N. F. 1989, Phys. Rev. A, 40, 2340, doi: 10.1103/PhysRevA.40.2340
Table 1: Chemical Reactions
Number | Reaction | Reference
---|---|---
| H collisional reactions |
1 | H + e- $\rightarrow$ H+ \+ 2e- | 1
2 | H+ \+ e- $\rightarrow$ H + $\gamma$ | 2∗
3 | H + e- $\rightarrow$ H- \+ $\gamma$ | 3
4 | H- \+ H $\rightarrow$ H2 \+ e- | 4
5 | H + H+ $\rightarrow$ H${}_{2}^{+}$ \+ $\gamma$ | 5
6 | H${}_{2}^{+}$\+ H $\rightarrow$ H2 \+ H+ | 6
7 | H2 \+ H $\rightarrow$ 3H | 7
8 | H2 \+ H+ $\rightarrow$ H${}_{2}^{+}$ \+ H | 8
9 | H2 \+ e- $\rightarrow$ 2H + e- | 9
10 | H- \+ e- $\rightarrow$ H + 2e- | 10
11 | H- \+ H+ $\rightarrow$ 2H | 11
12 | H- \+ H+ $\rightarrow$ H${}_{2}^{+}$ \+ e- | 12
13 | H${}_{2}^{+}$ \+ e- $\rightarrow$ 2H | 13
14 | H${}_{2}^{+}$ \+ H- $\rightarrow$ H2 \+ H | 14
15 | 3H $\rightarrow$ H2 \+ H | 15
16 | 2H + H2 $\rightarrow$ 2H2 | 16
17 | 2H2 $\rightarrow$ 2H + H2 | 17
18 | H- \+ H $\rightarrow$ 2H + e- | 18
19 | H- \+ H${}_{2}^{+}$ $\rightarrow$ 3H | 19
20 | H2 \+ e- $\rightarrow$ H- \+ H | 20
| photo-dissociation and detachment reactions |
21 | H2 \+ $\gamma$ $\rightarrow$ 2H | 21
22 | H- \+ $\gamma$ $\rightarrow$ H + e- | 22
23 | H${}_{2}^{+}$ \+ $\gamma$ $\rightarrow$ H + H+ | 23
| He reactions |
24 | He + e- $\rightarrow$ He+ \+ 2e- | 24
25 | He+ \+ e- $\rightarrow$ He + $\gamma$ | 25
26 | He+ \+ e- $\rightarrow$ He++ \+ 2e- | 26
27 | He++ \+ e- $\rightarrow$ He+ \+ $\gamma$ | 27
28 | H2 \+ He $\rightarrow$ 2H + He | 28
29 | H2 \+ He+ $\rightarrow$ He + H + H+ | 29
30 | H2 \+ He+ $\rightarrow$ H${}_{2}^{+}$ \+ He | 30
31 | He+ \+ H $\rightarrow$ He + H+ | 31
32 | He + H+ $\rightarrow$ He+ \+ H | 32
33 | He+ \+ H- $\rightarrow$ He + H | 33
34 | He + H- $\rightarrow$ He + H + e- | 34
35 | 2H + He $\rightarrow$ H2 \+ He | 35
(1) Abel et al. (1997); (2) Ferland et al. (1992), Case B; (3) McLaughlin et
al. (2017); (4) Kreckel et al. (2010); (5) Coppola et al. (2011); (6) Karpas
et al. (1979); (7) Mac Low & Shull (1986); Lepp & Shull (1983); (8) Savin et
al. (2004); Coppola et al. (2011); (9) Trevisan & Tennyson (2002); (10) Janev
et al. (1987); (11) Croft et al. (1999); (12) Poulaert et al. (1978); (13)
Schneider et al. (1994); (14) Dalgarno & Lepp (1987); (15) Abel et al. (2002);
Orel (1987); (16) Jacobs et al. (1967); (17) Martin et al. (1998); Shapiro &
Kang (1987); (18) Janev et al. (1987); (19) Dalgarno & Lepp (1987); (20) Schulz
& Asundi (1967); (21) Wolcott-Green & Haiman (2011); (22) McLaughlin et al.
(2017); (23) Stancil (1994); (24) Janev et al. (1987); (25) Hummer & Storey
(1998); (26) Janev et al. (1987); (27) Ferland et al. (1992); (28) Dove et al.
(1987); (29) Barlow (1984); (30) Barlow (1984); (31) Zygelman et al. (1989);
(32) Kimura et al. (1993); (33) Peart & Hayton (1994); (34) Huq et al. (1982);
(35) Walkauskas & Kaufman (1975);
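The reactions above are typically assembled into a stiff rate-equation network for the species densities. The following minimal sketch illustrates this for the H- channel of H2 formation (reactions 3 and 4); the rate coefficients below are placeholders, not the fitted values from McLaughlin et al. (2017) or Kreckel et al. (2010).

```python
# Minimal sketch: turning two rows of Table 1 into a rate network for the
# H- channel of H2 formation (reactions 3 and 4).  The coefficients k3, k4
# (cm^3 s^-1) are placeholders, not the fitted values of the cited references.
from scipy.integrate import solve_ivp

k3 = 1.0e-16   # reaction 3:  H + e-  -> H- + gamma   (placeholder)
k4 = 1.3e-9    # reaction 4:  H- + H  -> H2 + e-      (placeholder)

def rhs(t, n):
    nH, ne, nHm, nH2 = n
    r3 = k3 * nH * ne     # H- formation; consumes an electron...
    r4 = k4 * nHm * nH    # ...which reaction 4 returns, so e- is catalytic
    return [-r3 - r4,     # dn(H)/dt
            -r3 + r4,     # dn(e-)/dt
            +r3 - r4,     # dn(H-)/dt
            +r4]          # dn(H2)/dt

n0 = [1.0e4, 1.0, 0.0, 0.0]                 # initial densities in cm^-3
sol = solve_ivp(rhs, (0.0, 1.0e13), n0, method="LSODA", rtol=1e-8)
print("H2 fraction:", sol.y[3, -1] / n0[0])
```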
|
final portion of the segments joining ${\mathbb{X}}_{t-1}^{\epsilon}$ and
${\mathbb{X}}_{t}^{\epsilon}$.
Thus, at each step, we arrive at a vertex of a hexagon. For this Exploration
Process we still maintain the following properties.
###### Proposition 4.5 (Proposition 4.3, [6]).
Let $\gamma_{\epsilon}([0,t])$ be the line segments formed by the process up
until time $t$, and $\Gamma_{\epsilon}([0,t])$ be the hexagons revealed by the
Exploration Process. Let
$\partial\Omega_{\epsilon}^{t}=\partial\Omega_{\epsilon}\cup\Gamma_{\epsilon}([0,t])$
and let
$\Omega_{\epsilon}^{t}=\Omega_{\epsilon}\backslash\Gamma_{\epsilon}([0,t]).$
Then, the quadruple
$(\Omega_{\epsilon}^{t},\partial\Omega_{\epsilon}^{t},{\mathbb{X}}_{\epsilon}^{t},c_{\epsilon})$
is admissible. Furthermore, the Exploration Process in $\Omega_{\epsilon}^{t}$
from ${\mathbb{X}}_{\epsilon}^{t}\text{ to }c_{\epsilon}$ has the same law as
the original Exploration Process from $a_{\epsilon}$ to $c_{\epsilon}$ in
$\Omega_{\epsilon}$ conditioned on $\Gamma_{\epsilon}([0,t]).$
Percolation satisfies the KS Condition.
It is well known that the Exploration Process produces, in any critical percolation configuration on $\Omega_{\epsilon}$, the unique interface connecting $a_{\epsilon}$ to $c_{\epsilon}$, denoted by $\gamma_{\epsilon}$, i.e. the
unique curve which separates the blue connected cluster of the boundary from
the yellow connected cluster of the boundary. Let
${\mathbb{P}}_{\Omega_{\epsilon}}$ be the law of this interface. Let
$\mu_{\epsilon}$ be the probability measure on random curves induced by the
Exploration Process on $\Omega_{\epsilon}$, and let us endow the space of
curves with the sup-norm metric
$\mathrm{dist}(\gamma_{1},\gamma_{2})=\inf\limits_{\varphi_{1},\varphi_{2}}\sup\limits_{t}|\gamma_{1}(\varphi_{1}(t))-\gamma_{2}(\varphi_{2}(t))|$
over all possible parameterizations $\varphi_{1},\varphi_{2}$.
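For polygonal curves, this infimum over reparameterizations is realized (up to discretization) by the discrete Fréchet distance. The sketch below, included only as an illustration and not as part of any proof, computes it by dynamic programming.

```python
# Sketch: the metric dist(gamma1, gamma2) = inf over reparameterizations of
# the sup distance, approximated for polygonal curves by the discrete
# Frechet distance.
import numpy as np

def discrete_frechet(P, Q):
    """P, Q: arrays of shape (m, 2), (n, 2) of curve sample points."""
    m, n = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise dists
    F = np.full((m, n), np.inf)
    F[0, 0] = D[0, 0]
    for i in range(m):
        for j in range(n):
            if i == j == 0:
                continue
            prev = min(F[i-1, j] if i > 0 else np.inf,
                       F[i, j-1] if j > 0 else np.inf,
                       F[i-1, j-1] if i > 0 and j > 0 else np.inf)
            F[i, j] = max(prev, D[i, j])
    return F[-1, -1]

# Example: two parameterizations of nearby curves.
t = np.linspace(0, 1, 200)
gamma1 = np.stack([t, t**2], axis=1)
gamma2 = np.stack([t, t**2 + 0.01*np.sin(8*np.pi*t)], axis=1)
print(discrete_frechet(gamma1, gamma2))  # ~0.01
```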
The proof of the fact that the collection
$({\mathbb{P}}_{\Omega}\;:\;\Omega\text{ admissible})$ satisfies the KS
Condition follows directly from [17, Proposition 4.13] since this generalized
percolation model still satisfies Russo-Seymour-Welsh (RSW) type correlation
inequalities.
###### Remark 4.6.
As long as $a^{2}\geq 2s^{2}$, then a restricted form of Harris-FKG property
holds for all paths and path type events, see [10, Lemma 6.2]. Since we have
this essential ingredient in the RSW type arguments, we are indeed free to use
RSW sort of correlation inequalities.
###### Proposition 4.7 (Proposition 4.13 in [17]).
The collection of the laws of the interface of the modified bond percolation model described above on the hexagonal lattice,
$\Sigma_{\mathrm{Percolation}}=\{(\Omega_{\epsilon},\phi(\Omega_{\epsilon}),{\mathbb{P}}_{\Omega_{\epsilon}})\;:\;\Omega_{\epsilon}\text{ an admissible domain}\}$ (4.1)
satisfies the KS Condition.
###### Proof.
First, notice that for percolation, we do not have to consider stopping times.
Indeed, by Proposition (4.5) if
$\gamma:[0,N]\rightarrow\Omega_{\epsilon}\cup\{a,c\}$ is the interface
parameterized so that $\gamma(k),\;k=0,1,\cdots,N$ are vertices along the
path, then $\Omega_{\epsilon}\backslash\gamma(0,k]$ is admissible for any
$k=0,1,\cdots,N$ and there is no information gained during $(k,k+1)$. Also,
the law of percolation satisfies the domain Markov property so the law
conditioned to the vertices explored up to time $n$ is the percolation measure
in the domain where $\gamma(k),\;k=0,1,\cdots,n$ is erased. Thus, the family
(4.1) is closed under stopping.
Since crossing an annulus is a translation-invariant event for percolation, for any $\Omega_{\epsilon}$ we can apply a translation and consider annuli around the origin. Let $B_{n}$ be the set of points on the triangular lattice
that are graph distance less than or equal to $n$ from $0$. Consider the
annulus $B_{9^{N}n}\backslash B_{n}$ for any $n,N\in{\mathbb{N}}$. We can
consider the concentric balls $B_{3n},B_{9n},\cdots$ inside the annulus $B_{9^{N}n}\backslash B_{n}$. For an open crossing of the annulus $B_{9^{N}n}\backslash B_{n}$, there needs to be an open path inside each annulus $A_{n}=B_{3n}\backslash B_{n},\,A_{3n}=B_{9n}\backslash B_{3n},\cdots$, etc. The events that $A_{n}$ contains an open path separating $0$ from $\infty$ and that $A_{3n}$ contains a closed path separating $0$ from $\infty$ are independent. Hence, by Russo-Seymour-Welsh (RSW) theory, we know that there exists $q>0$ such that for any $n$
$\mu_{\epsilon}\left(\text{open path inside }A_{n}\cap\text{ closed path in }A_{3n}\text{ both separating }0\text{ from }\infty\right)\geq q^{2}.$
Since a closed path in one of the concentric annuli prohibits an open crossing
of $B_{9^{N}n}\backslash B_{n}$, we conclude that
${\mathbb{P}}_{\epsilon}\left(\gamma\text{ makes an unforced crossing of
}B_{9^{N}n}\backslash B_{n}\right)\leq(1-q^{2})^{N}\leq\frac{1}{2}$
for large enough $N$. ∎
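The last step is quantitative: for any RSW constant $q\in(0,1)$, the required $N$ is explicit. A short check, with $q=0.1$ a placeholder rather than a constant derived from the lattice:

```python
# The proof ends by choosing N with (1 - q^2)^N <= 1/2; for any RSW constant
# q in (0, 1) this N is explicit.  q = 0.1 is a placeholder value.
import math

q = 0.1
N = math.ceil(math.log(0.5) / math.log(1 - q**2))
print(N, (1 - q**2)**N)   # N = 69, bound ~ 0.4998 for q = 0.1
```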
The observable.
Consider two additional marked points (or prime ends) $b,d$ so that $a,b,c,d$ are in cyclic order. Let $\Omega_{n}$ be the admissible discretization, described above, of the domain $\Omega$ at lattice scale $n^{-1}$. More details of the
construction can be found in [6, §3 and §4] and [7, §4.2]. Furthermore, the
boundary arcs can be appropriately coloured and the lattice points
$a_{n},b_{n},c_{n},d_{n}$ can be selected. The main object of study for percolation is the crossing probability of the conformal rectangle
$\Omega_{n}$ from $(a_{n},b_{n})$ to $(c_{n},d_{n})$, denoted by
${\mathcal{C}}_{n}$ and ${\mathcal{C}}_{\infty}$ its limit in the domain
$\Omega$, i.e., Cardy’s formula in the limiting domain. Geometrically, the Exploration Process produces, in any percolation configuration on $\Omega_{n}$, the unique interface connecting $a_{n}$ to $c_{n}$, i.e. the curve separating
the blue lattice connected cluster of the boundary from the yellow. Let us
temporarily forget the marked point $a_{n}$ and consider the conformal
triangle $(\Omega_{n};b_{n},c_{n},d_{n})$.
We will briefly recall the observable function introduced in [28] which we
will denote by $S_{b},S_{c},S_{d}$. For a lattice point $z\in\Omega_{n}$,
$S_{d}(z)$ is the probability of a yellow crossing from $(c_{n},d_{n})$ to
$(d_{n},b_{n})$ separating $z$ from $(b_{n},c_{n})$. Notice that $S_{d}$ has
boundary value $0$ on $(b_{n},c_{n})$ and $1$ at the point $d_{n}$. $S_{b}$
and $S_{c}$ are defined similarly. We define the complexified function
$S_{n}:=S_{b}+\tau S_{c}+\tau^{2}S_{d}$ with $\tau=e^{2\pi i/3}$, called the
Carleson-Cardy-Smirnov (CCS) function. The following lemma, due to Smirnov, shows that the CCS observable is a martingale.
###### Lemma 4.8.
The CCS observable is a martingale observable.
###### Proof.
Parameterize the interface $\gamma^{\epsilon}$ and draw the exploration
process up to time $t$, $\gamma^{\epsilon}[0,t]$. By convention/definition,
the faces on the left and right side of the exploration process are yellow and
blue, respectively. Then any open crossing (yellow crossing) from arc $bc$ to
the arc $db$ inside $\Omega$ is either disjoint from $\gamma^{\epsilon}[0,t]$
or hits its ”open” (yellow) arc of $\gamma^{\epsilon}[0,t]$. Either case
produces an open crossing from the arc $\gamma^{\epsilon}(t)c$ to the arc
$d\gamma^{\epsilon}(t)$ inside $\Omega\backslash\gamma^{\epsilon}[0,t]$. The
converse also holds. Thus, we have the following observation: crossing
probabilities conditioned on $\gamma^{\epsilon}[0,t]$ coincide with crossing
probabilities in the slit domain $\Omega\backslash\gamma^{\epsilon}[0,t].$
Let $Q$ denote the area above the lowest (i.e. closest to arc $bc$) open
crossing from arc $cd$ to arc $db$. Then $S_{d}(z)={\mathbb{P}}(z\in Q)$. We
can view the other crossing probabilities $S_{b}$ and $S_{c}$ in the same way.
By the above observation, we know that this probability conditioned on
$\gamma^{\epsilon}[0,t]$ will coincide with the probability in the slit domain
$\Omega\backslash\gamma^{\epsilon}[0,t].$ The same holds true for $S_{b}(z)$
and $S_{c}(z)$.
Figure 5: Observe that the lowest yellow crossing cannot cross the curve
$\gamma$ since it is blocked by the blue side of the curve.
Thus, one sees for every realization of $\gamma^{\epsilon}[0,t]$, the CCS
function conditioned on $\gamma^{\epsilon}[0,t]$ coincides with the CCS
function in the slit domain, an analogue of the Markov property. Stopping the
curve at times $0<t<s$, say with the least discrete time such that the path
has capacity $\geq t$, and using the law of total probability over every realization of $\gamma^{\epsilon}[0,t]$, we get the martingale property:
${\mathbb{E}}_{\mu_{\epsilon}}\left[S_{\epsilon}(\Omega_{\epsilon}\backslash\gamma^{\epsilon}[0,s],\gamma^{\epsilon}(s),b,c,d)|\gamma^{\epsilon}[0,t]\right]=S_{\epsilon}(\Omega_{\epsilon}\backslash\gamma^{\epsilon}[0,t],\gamma^{\epsilon}(t),b,c,d).$
∎
The CCS functions $S_{n}$ are not discrete analytic but are “almost” discrete
analytic in the following sense, see [8, §4]:
###### Definition 4.9 ($(\sigma,\rho)$-Holomorphic).
Let $\Lambda\subseteq\mathbb{C}$ be a simply connected domain and
$\Lambda_{\epsilon}$ be the (interior) discretized domain given as
$\Lambda_{\epsilon}:=\bigcup_{h_{\epsilon}\subseteq\Lambda}h_{\epsilon}$ and
let $(Q_{\epsilon}:\Lambda_{\epsilon}\to\mathbb{C})_{\epsilon\searrow 0}$ be a
sequence of functions defined on the vertices of $\Lambda_{\epsilon}$. We say
that the sequence $(Q_{\epsilon})$ is _$(\sigma,\rho)$-holomorphic_ if there
exist constants $0<\sigma,\rho\leq 1$ such that for all $\epsilon$
sufficiently small:
1. 1.
$Q_{\epsilon}$ is Hölder continuous up to $\partial\Lambda_{\epsilon}$:
There exists some small $\psi>0$ and constants $c,C\in(0,\infty)$ (independent
of domain and $\epsilon$) such that
1. (a)
if $z_{\epsilon},w_{\epsilon}\in\Lambda_{\epsilon}\setminus
N_{\psi}(\partial\Lambda_{\epsilon})$ such that
$|z_{\epsilon}-w_{\epsilon}|<\psi$, then
$|Q_{\epsilon}(z_{\epsilon})-Q_{\epsilon}(w_{\epsilon})|\leq
c\left(\frac{|z_{\epsilon}-w_{\epsilon}|}{\psi}\right)^{\sigma}$ and
2. (b)
if $z_{\epsilon}\in N_{\psi}(\partial\Lambda_{\epsilon})$, then there exists
some $w_{\epsilon}^{\star}\in\partial\Lambda_{\epsilon}$ such that
$|Q_{\epsilon}(z_{\epsilon})-Q_{\epsilon}(w_{\epsilon}^{\star})|\leq
C\left(\frac{|z_{\epsilon}-w_{\epsilon}^{\star}|}{\psi}\right)^{\sigma}$.
2. 2.
For any simply closed lattice contour $\Gamma_{\epsilon}$,
$\left|\oint_{\Gamma_{\epsilon}}Q\,dz\right|=\left|\sum_{h_{\epsilon}\subseteq\Lambda_{\epsilon}^{\prime}}\oint_{\partial h_{\epsilon}}Q\,dz\right|\leq c\cdot|\Gamma_{\epsilon}|\cdot\epsilon^{\rho},$
(4.2)
with $c\in(0,\infty)$ (independent of domain and $\epsilon$) and
$\Lambda_{\epsilon}^{\prime},|\Gamma_{\epsilon}|$ denoting the region enclosed
by $\Gamma_{\epsilon}$ and the Euclidean length of $\Gamma_{\epsilon}$,
respectively.
###### Proposition 4.10 (Proposition 4.3, [8]).
Let $\Lambda$ denote a conformal triangle with marked points (or prime ends)
$b$, $c$, $d$ and let $\Lambda_{\epsilon}$ denote an interior approximation
(see [7, Definition 3.1]) of $\Lambda$ with
$b_{\epsilon},c_{\epsilon},d_{\epsilon}$ the associated boundary points. Let
$S_{\epsilon}(z)$ denote the CCS function defined on $\Lambda_{\epsilon}$.
Then for all $\epsilon$ sufficiently small, the functions
$(S_{\epsilon}:\Lambda_{\epsilon}\rightarrow\mathbb{C})$ are
$(\sigma,\rho)$–holomorphic for some $\sigma,\rho>0$.
Polynomial convergence of the observable function to its continuous
counterpart.
Observe that ${\mathcal{C}}_{n}$ can be realized from $S_{d}(a_{n})$ as
${\mathcal{C}}_{n}=\frac{-2}{\sqrt{3}}\cdot\operatorname{Im}[S_{n}(a_{n})]$.
Since it is already known that $S_{n}$ converges to $H:D\to T$, a conformal
map to equilateral triangle $T$ which sends $(b,c,d)$ to $(1,\tau,\tau^{2})$,
we can see that
${\mathcal{C}}_{\infty}=\frac{-2}{\sqrt{3}}\operatorname{Im}[H(a)]$ (see,
[28], [2], and [7]). Thus, when establishing a rate of convergence of
${\mathcal{C}}_{n}$ to ${\mathcal{C}}_{\infty}$, it is sufficient to show that
there exists $\psi>0$ such that
$|S_{n}(a_{n})-H(a)|\leq C_{\psi}\cdot n^{-\psi}$
for some $C_{\psi}<\infty$ independent of the domain. A polynomial rate of convergence is shown in [8, Main Theorem]; the statement below is a slight reformulation in which the constant $\psi$ is independent of the domain $\Omega$, and a direct reconstruction of the proof in [8] gives this result.
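For concreteness, the limit ${\mathcal{C}}_{\infty}$ can also be evaluated directly from Cardy's formula in its classical cross-ratio form, which is equivalent to the Carleson form $-\frac{2}{\sqrt{3}}\operatorname{Im}[H(a)]$ above; the following numerical sketch is illustrative only.

```python
# Sketch: Cardy's crossing probability in cross-ratio form,
#   C_inf(eta) = [3 Gamma(2/3) / Gamma(1/3)^2] * eta^(1/3)
#                * 2F1(1/3, 2/3; 4/3; eta),   eta in (0, 1],
# equivalent to the Carleson form C_inf = -(2/sqrt(3)) Im H(a).
from scipy.special import hyp2f1, gamma

def cardy(eta):
    c = 3.0 * gamma(2.0/3.0) / gamma(1.0/3.0)**2
    return c * eta**(1.0/3.0) * hyp2f1(1.0/3.0, 2.0/3.0, 4.0/3.0, eta)

print(cardy(0.5))   # = 0.5 by the symmetry eta -> 1 - eta
print(cardy(1.0))   # = 1.0 (degenerate conformal rectangle)
```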
###### Theorem 4.11.
Let $\Omega$ be a domain with two marked boundary points (or prime ends) $a$
and $c$. Let $(\Omega_{n},a_{n},c_{n})$ be its admissible discretization.
Consider the site percolation model or the models introduced in [10] on the
domain $\Omega_{n}$. In the case of the latter we also impose the assumption
that the boundary Minkowski dimension is less than 2 (in the former, this is
not necessary). Let $\gamma$ be the interface between $a$ and $c$. Consider
the stopping time $T:=\inf\{t\geq 0\;:\;\gamma\text{ enters a }\Delta\text{-neighbourhood of }c\}$ for some $\Delta>0$. Then there exists
$n_{0}<\infty$ depending only on the domain $(\Omega;a,b,c,d)$ and $T$ such
that the following estimate holds: There exists some $\psi>0$ (which does not
depend on the domain $\Omega$) such that ${\mathcal{C}}_{n}$ converges to its limit with the estimate
$|{\mathcal{C}}_{n}-{\mathcal{C}}_{\infty}|\leq C_{\psi}\cdot n^{-\psi},$
for some $C_{\psi}<\infty$ provided $n\geq n_{0}(\Omega)$ is sufficiently
large.
Polynomial convergence of critical percolation on the triangular lattice.
By a straightforward computation, we can see that the martingale observable is a nondegenerate solution to the BPZ equation (1.4). Thus, by Proposition 4.7, Lemma 4.8, Proposition 4.10, and Theorem 4.11, we can now apply Theorem 1.23 to obtain:
###### Theorem 4.12.
Let $\gamma_{n}$ be the percolation Exploration Process defined above on the
admissible triangular lattice domain $\Omega_{n}$. Let $\tilde{\gamma}_{n}$ be
its image in $({\mathbb{H}};0,\infty)$ parameterized by capacity. There exist a stopping time $T<\infty$ and $n_{1}$ such that
$\displaystyle\sup_{n}\sup_{t\in[0,T]}n_{1}(\Omega_{t})<\infty$. Then if
$n\geq n_{1}$, there is a coupling of $\gamma_{n}$ with Brownian motion
$B(t),\;t\geq 0$ with the property that if $\tilde{\gamma}$ denotes the
chordal SLE6 path in ${\mathbb{H}}$,
${\mathbb{P}}\left\{\sup_{t\in[0,T]}|\tilde{\gamma}_{n}(t)-\tilde{\gamma}(t)|>n^{-u}\right\}<n^{-u}$
for some $u\in(0,1)$ and where both curves are parameterized by capacity.
Moreover, if $\Omega$ is an $\alpha$-Hölder domain, then under the same
coupling, the SLE curve in the image is polynomially close to the original
discrete curve:
$\mathbb{P}\left\{\sup_{t\in[0,T]}d_{*}\left(\gamma^{n}(t),\phi^{-1}(\tilde{\gamma}(t))\right)>n^{-v}\right\}<n^{-v}$
where $v$ depends only on $\alpha$ and $u$.
###### Remark 4.13.
The authors believe that modifications of the arguments in [8] could lead to a
full convergence statement.
###### Remark 4.14.
Notice that under this modified percolation model, we still maintain the
reversibility of the exploration path. Let $\omega$ be a simple polygonal path
from $a^{\delta}$ to $c^{\delta}$. Suppose that the corresponding path
designate is the sequence
$\left[H_{0,1},(\mathcal{F}_{1},h_{1}^{e},h_{1}^{x}),H_{1,2},(\mathcal{F}_{2},h_{2}^{e},h_{2}^{x}),H_{2,3},\cdots,(\mathcal{F}_{K},h_{K}^{e},h_{K}^{x}),H_{K,K+1}\right]$
where $\mathcal{F}_{1},\cdots\mathcal{F}_{K}$ are flowers in $\Omega^{\delta}$
with $h_{j}^{e}$ and $h_{j}^{x}$ are the entrance and exit petals in the
$j^{th}$ flower and for $1\leq j\leq K-1,\;H_{j,j+1}$ is a path in the
complement of flowers which connects $h_{j}^{x}$ to $h_{j+1}^{e}$. That is, we are not viewing the microscopic description, where we would have to specify how the path got between entry and exit petals. With a small loss of generality we are also assuming that the path visits each flower only once; otherwise we would have to specify the first entrance and exit petals, the second entrance and exit petals, etc.
Let $\gamma^{\delta}$ be a chordal exploration process from $a^{\delta}$ to
$c^{\delta}$ in $\Omega^{\delta}$ and $\hat{\gamma}^{\delta}$ be a chordal
exploration process from $c^{\delta}$ to $a^{\delta}$ in $\Omega^{\delta}$.
Recall that all petal arrangements are independent, all flowers are configured independently, and these in turn are independent of the background filler sites. Thus the exploration process generated by the colouring algorithm given previously, excluding the colouring of flowers, is independent, and flowers are independent of the background filler sites. Thus, by the colouring algorithm we have:
$\displaystyle{\mathbb{P}}(\gamma^{\delta}=\omega)=\left(\frac{1}{2}\right)^{l(H_{0,1})}p_{1}\left(\frac{1}{2}\right)^{l(H_{1,2})}\cdots
p_{K}\left(\frac{1}{2}\right)^{l(H_{K,K+1})}$
where $l(H_{j,j+1})$ is the number of coloured hexagons in $H_{j,j+1}$
produced by the colouring algorithm on the event $\gamma^{\delta}=\omega$, and $p_{j}$ is the appropriate conditional probability, on each flower, of the petal or iris colouring given by the colouring algorithm. Notice that on the event
$\gamma^{\delta}=\hat{\gamma}^{\delta}=\omega$ for any hexagon in
$\Omega^{\delta}$ either it is coloured by both the colouring algorithm for
$\gamma^{\delta}$ and the colouring algorithm for $\hat{\gamma}^{\delta}$ or
by neither. Therefore, we have the following lemma:
###### Lemma 4.15.
Suppose $\Omega^{\delta}$ is a simply connected domain in the
$\delta$-hexagonal lattice with a predetermined flower arrangement. For any
simple polygonal path $\omega$ from $a^{\delta}$ to $c^{\delta}$ we have
${\mathbb{P}}(\gamma^{\delta}=\omega)={\mathbb{P}}(\hat{\gamma}^{\delta}=\omega)$
This lemma directly implies the following lemma.
###### Lemma 4.16.
For any simply connected domain $\Omega^{\delta}$ with predetermined flower
arrangement, the percolation exploration path from $a^{\delta}$ to
$c^{\delta}$ in $\Omega^{\delta}$ has the same distribution as the time-
reversal of the percolation exploration path from $c^{\delta}$ to $a^{\delta}$
in $\Omega^{\delta}$.
###### Question 4.17.
Is it possible to use reversibility to extend the polynomial convergence to the whole percolation exploration curve?
## References
* [1] M. Aizenman and A. Burchard. Hölder regularity and dimension bounds for random curves. Duke Math. J., 99(3):419–453, 1999.
* [2] Vincent Beffara. Cardy’s formula on the triangular lattice, the easy way. 02 2007.
* [3] A. A. Belavin, A. M. Polyakov, and A. B. Zamolodchikov. Infinite conformal symmetry in two-dimensional quantum field theory. Nuclear Physics B, 241(2):333–380, July 1984.
* [4] D. Beliaev. Conformal maps and geometry. 2019.
* [5] Christian Beneš, Fredrik Johansson Viklund, and Michael J. Kozdron. On the rate of convergence of loop-erased random walk to ${\rm SLE}_{2}$. Comm. Math. Phys., 318(2):307–354, 2013.
* [6] I. Binder, L. Chayes, and H. K. Lei. On convergence to $\text{SLE}_{6}$ I: Conformal invariance for certain models of the bond-triangular type. J. Stat. Phys., 141(2):359–390, 2010.
* [7] I. Binder, L. Chayes, and H. K. Lei. On convergence to $\text{SLE}_{6}$ II: Discrete approximations and extraction of Cardy’s formula for general domains. J. Stat. Phys., 141(2):391–408, 2010.
* [8] I. Binder, L. Chayes, and H. K. Lei. On the rate of convergence for critical crossing probabilities. Ann. Inst. Henri Poincaré Probab. Stat., 51(2):672–715, 2015.
* [9] Ilia Binder, Dmitry Chelkak, and Larissa Richards. Polynomial convergence rate of harmonic explorer and Ising model. Preprint.
* [10] L. Chayes and H. Lei. Cardy’s formula for certain models of the bond-triangular type. Reviews in Mathematical Physics, 19, 01 2006.
* [11] Dmitry Chelkak. Ising model and s-embeddings of planar graphs, 2020.
* [12] Dmitry Chelkak, Hugo Duminil-Copin, Clément Hongler, Antti Kemppainen, and Stanislav Smirnov. Convergence of Ising interfaces to Schramm’s SLE curves. C. R. Math. Acad. Sci. Paris, 352(2):157–161, 2014.
* [13] M. Csörgő and P. Révész. Strong Approximations in Probability and Statistics. 1981.
* [14] Richard M. Dudley. Real analysis and probability. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1989.
* [15] Erich Haeusler. An exact rate of convergence in the functional central limit theorem for special martingale difference arrays. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 65:523 – 534, 1984.
* [16] Fredrik Johansson Viklund. Convergence rates for loop-erased random walk and other Loewner curves. Ann. Probab., 43(1):119–165, 2015.
* [17] Antti Kemppainen and Stanislav Smirnov. Random curves, scaling limits and Loewner evolutions. Ann. Probab., 45(2):698–779, 2017.
* [18] Gregory F. Lawler. Conformally invariant processes in the plane, volume 114 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2005.
* [19] Gregory F. Lawler, Oded Schramm, and Wendelin Werner. Values of Brownian intersection exponents, I: Half-plane exponents. Acta Mathematica, 187(2):237–273, 2001.
* [20] Gregory F. Lawler, Oded Schramm, and Wendelin Werner. Conformal invariance of planar loop-erased random walks and uniform spanning trees [mr2044671]. In Selected works of Oded Schramm. Volume 1, 2, Sel. Works Probab. Stat., pages 931–987. Springer, New York, 2011.
* [21] Gregory F. Lawler and Fredrik Viklund. Convergence of loop-erased random walk in the natural parametrization, 2016.
* [22] Ch. Pommerenke. Boundary behaviour of conformal maps, volume 299 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1992.
* [23] Ch. Pommerenke. Conformal maps at the boundary. In Handbook of complex analysis: geometric function theory, Vol. 1, pages 37–74. North-Holland, Amsterdam, 2002.
* [24] Steffen Rohde and Oded Schramm. Basic properties of SLE [mr2153402]. In Selected works of Oded Schramm. Volume 1, 2, Sel. Works Probab. Stat., pages 989–1030. Springer, New York, 2011.
* [25] Oded Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. Israel J. Math., 118:221–288, 2000.
* [26] Oded Schramm and Scott Sheffield. Harmonic explorer and its convergence to ${\rm SLE}_{4}$. Ann. Probab., 33(6):2127–2148, 2005.
* [27] Oded Schramm and Scott Sheffield. Contour lines of the two-dimensional discrete Gaussian free field. Acta Math., 202(1):21–137, 2009.
* [28] Stanislav Smirnov. Critical percolation in the plane: conformal invariance, cardy’s formula, scaling limits. Comptes Rendus de l’Académie des Sciences - Series I - Mathematics, 333(3):239 – 244, 2001.
* [29] Stanislav Smirnov. Critical percolation and conformal invariance. In XIVth International Congress on Mathematical Physics, pages 99–112. World Sci. Publ., Hackensack, NJ, 2005.
* [30] Stanislav Smirnov. Towards conformal invariance of 2D lattice models. In International Congress of Mathematicians. Vol. II, pages 1421–1451. Eur. Math. Soc., Zürich, 2006.
* [31] S. E. Warschawski. On the degree of variation in conformal mapping of variable regions. Trans. Amer. Math. Soc., 69:335–356, 1950.
|
# Ethics of Generating Synthetic MRI Vocal Tract Views from the Face
CVPR Responsible Generative AI Workshop
Muhammad Suhaib Shahid
University of Nottingham
NG8 1BB, UK
<EMAIL_ADDRESS>
Gleb E. Yakubov
University of Nottingham
LE12 5RD, UK
Andrew P. French
University of Nottingham
NG8 1BB, UK
###### Abstract
Forming oral models capable of understanding the complete dynamics of the oral
cavity is vital across research areas such as speech correction, designing
foods for the aging population, and dentistry. Magnetic resonance imaging
(MRI) technologies, capable of capturing oral data essential for creating such
detailed representations, offer a powerful tool for illustrating articulatory
dynamics. However, its real-time application is hindered by expense and
expertise requirements. Ever-advancing generative AI approaches present themselves as a way to address this barrier by leveraging multi-modal methods for generating pseudo-MRI views. Nonetheless, this immediately
sparks ethical concerns regarding the utilisation of a technology with the
capability to produce MRIs from facial observations.
This paper explores the ethical implications of external-to-internal
correlation modeling (E2ICM). E2ICM utilises facial movements to infer
internal configurations and provides a cost-effective supporting technology
for MRI. In this preliminary work, we employ Pix2PixGAN to generate pseudo-MRI
views from external articulatory data, demonstrating the feasibility of this
approach. Ethical considerations concerning privacy, consent, and potential
misuse, which are fundamental to our examination of this innovative
methodology, are discussed as a result of this experimentation.
## 1 Introduction
The ability to model the complete oral cavity holds significant utility across
various domains, notably in dentistry, where understanding how dental
prosthetics impact speech and mastication is crucial in forming personalised
dental devices. However, achieving comprehensive oral cavity modeling presents
challenges, including limitations in technique availability, articulator
capture capacity, and associated costs. Researchers often face the dilemma of
selecting a suitable technique tailored to their specific objectives. Among
the available methods, magnetic resonance imaging (MRI) stands out for its
capability to provide detailed representations of all articulators,
particularly when augmented with real-time functionality, offering insights
into dynamic movements. Nonetheless, the practicality of real-time MRI is
hindered by its costliness and the need for specialised expertise, rendering
it less feasible for routine use.
This raises the question, is it possible to use generative AI approaches to
achieve a complete model of the oral cavity, encompassing all articulators in
motion, without incurring excessive expenses? This is where the concept of
external-to-internal correlation modeling (E2ICM) emerges as a potential
solution. By observing the external facial movements, particularly those of
visible articulators such as the lips, and jaw, we investigate if we can in
any way reconstruct the internal configurations of the oral cavity. This
approach leverages the inherent relationship between external facial gestures
and internal vocal tract configurations. Such an approach aims to address the
cost and complexity concerns associated with MRI and other experimental
techniques. Clearly there are limitations to this approach, but here we
consider exploring the feasibility of such a technology, and bring to the fore
the ethical questions such an approach might raise.
As advancements in AI-based approaches continue to progress, questions
regarding ethical implications become increasingly pertinent. The ability to
record or photograph individuals during articulation and mastication, followed
by the generation of MRI-like images of the internal oral cavity, raises
several ethical considerations and potential concerns regarding privacy,
consent, and misuse.
This paper explores the application of generative deep learning models to
create pseudo-MRI views of the oral cavity. Specifically, it employs the
Pix2PixGAN network to transform external views of a participant during
articulation into predicted MRI representations, in a limited speech-
reconstruction scenario. For the purpose of demonstrating the feasibility of
the proposed approach we briefly evaluate the challenges associated with
determining the quality of the generated images. This is followed by a
discussion focused on possible ethical dilemmas associated with the use of
such generated data.
## 2 Background
Real time MRI (RtMRI) of the vocal tract is one of the very few techniques
capable of displaying, frame by frame, the movements of all articulators
during speech[6]. The method allows researchers to explore a wide range of
applications from articulatory studies, to oral health and food oral
processing. Despite the prospects RtMRI presents, there are some underpinning
issues that hinder its widespread use. These limitations are a result of the
cost and expertise requirements for collecting RtMRI data on an individual
subject basis [10, 3]. One possible generative solution to this is by forming
predictive models capable of using external observations of the face to
synthesise a representation of the vocal tract MRI view.
The feasibility of such an approach relies on investigating the
interrelationship between the internal vocal tract and external face views.
Such research has explored correlations between the two views by linking
facial movements, captured via video, with vocal tract dynamics, captured
through RtMRI [7]. The main focus is to identify whether there is sufficient
mutual information between the forward coronal view of the face and the
sagittal MRI to make reconstruction procedures possible. Employing Principal
Component Analysis (PCA), Scholes et al. (2020) simplified the data and
identified key patterns of change in both modalities. Through this process,
they uncovered connections between facial gestures and vocal tract
configurations, showcasing the potential for mutual reconstruction between the
two modalities. The findings concluded that facial information may hold
sufficient data to recover certain vocal tract shapes during speech
production.
While the PCA-based analysis-by-synthesis technique showcases an
interrelationship between the two modalities, it falls short of addressing key
barriers that prevent the widespread use of MRI. In order to reconstruct an
MRI representation of the vocal tract, a corresponding PCA matrix must
accompany each specific external view. However, the PCA representation is
derived from the MRI image, and consequently, the MRI data are still necessary
each time the representation is created.
Paving the way to addressing this problem are generative machine learning
models capable of performing cross-modality synthesis of unseen MRI
configurations when presented with a novel face view for a specific individual
[11]. This technique is commonly used in computer vision and machine learning
to create mappings between different visual styles, attributes, or
characteristics. It is the process of transforming an input image from one
domain into a corresponding output image in another domain, while preserving
meaningful content and maintaining consistency between the two domains; it
involves changing how an image looks while keeping its underlying meaning
intact. In the application of this task, it would involve shifting from a face
view to a MRI vocal tract view for any two paired frames; this pairing being
key to the approach.
The Pix2PixGAN framework [1] serves as the translation network chosen for this
task. The architecture comprises two key components: a generator and a
discriminator. The generator works to produce a realistic mapping from the
input domain to the desired output, while the discriminator’s role is to
determine whether an image is real or synthesised. The generator and
discriminator train in an adversarial fashion, each trying to optimise ahead
of the other. This approach drives the mapping of images from one domain to
another in a supervised manner. Once trained, the system has the potential to
operate in an autonomous manner and predict internal views based on the
outside image or video only. If successful, such approaches could enable the generation of synthesised views without specialised equipment and, potentially, without a person's consent, which raises important ethical questions that need to be addressed.
Existing research has explored the (bio)ethical considerations surrounding the
use of Generative Adversarial Networks (GANs) for generating medical images.
The integration of AI technologies in healthcare raises complex legal,
ethical, and technical challenges. In their work [5], the authors underscore
the necessity for a regulatory framework to ensure the safe integration of
generative technologies in medical contexts. A systematic review conducted by
[2] examined recent GAN architectures utilised in medical image analysis,
revealing imbalances in their capabilities, particularly with smaller
datasets. These findings align with the observations of [4], regarding the
imbalanced class distributions often observed in datasets, thereby raising
ethical concerns.
## 3 Framework and Implementation
In this study we used a previously published dual-modal dataset [8], comprising registered videos captured during speech. Data collection involved 13 participants articulating a predetermined set of 10 sentences. Participants underwent two recording sessions: first, speaking the sentences in front of a camera, and second, repeating the same sentences during MRI scans. These video sets were subsequently aligned. Although data from 13 participants were collected, only data from 11 participants were ultimately included in the published dataset, as the study focused on British English speakers.
dataset encompasses videos providing a frontal view of the face alongside
sagittal MRI views. For the purposes of this preliminary study, only data from
one subject was utilised, as they were the only participant for whom all 10
videos were available across all sentences. Across these 10 videos, a total of
461 frames were available, considering the videos were recorded at a frame
rate of 15 frames per second (fps). The shortest video contained 30 frames,
while the longest comprised 59 frames.
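As an illustration of the data preparation this implies, the sketch below pairs frames from the two aligned 15 fps recordings; the file names are hypothetical and not those of the published dataset [8].

```python
# Minimal sketch of building paired (face, MRI) training frames from the two
# aligned 15 fps recordings; file names are hypothetical, not those of the
# published dataset.
import cv2

def paired_frames(face_path, mri_path, size=(256, 256)):
    face, mri = cv2.VideoCapture(face_path), cv2.VideoCapture(mri_path)
    pairs = []
    while True:
        ok1, f = face.read()
        ok2, m = mri.read()
        if not (ok1 and ok2):          # stop at the shorter recording
            break
        pairs.append((cv2.resize(f, size), cv2.resize(m, size)))
    face.release(); mri.release()
    return pairs

pairs = paired_frames("subject01_face_s1.mp4", "subject01_mri_s1.mp4")
print(len(pairs), "aligned frame pairs")
```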
An implementation of the Pix2PixGAN framework was used as the image-to-image translation network. Based on the conditional generative adversarial network (cGAN), the architecture consists of a generator and a discriminator. The
generator aims to produce realistic images based on the input, while the
discriminator’s job is to distinguish between real and generated images. The
generator employs a U-Net-inspired encoder-decoder architecture with skip
connections. The encoder module is formed of only convolutional layers,
omitting dropout. This structure forms the following sequence of layers:
C64-C128-C256-C512-C512-C512-C512-C512. The decoder integrates dropout layers
with a dropout rate of 0.5 in the first, second, and third layers. The
decoder’s structure is as follows: CD512-CD512-CD512-C512-C256-C128-C64. This
combination establishes a proficient generator capable of producing coherent
translations for this dataset. The model was optimised using the Adam
optimiser with hyperparameters $\alpha$ = 0.0002, $\beta_{1}$ = 0.5,
$\beta_{2}$ = 0.999, and $\epsilon$ = 1e-08. The training was done with a
batch size of 16 for 200 epochs. The tanh activation function was used.
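A minimal PyTorch sketch of the generator just described is given below. Where the text is silent (kernel size 4, stride 2, BatchNorm, LeakyReLU(0.2) in the encoder, a single-channel MRI output), we assume the defaults of the original Pix2Pix recipe [1]; this is an illustration, not the exact implementation used in the study.

```python
# Sketch of the stated generator: encoder C64-...-C512, decoder
# CD512-CD512-CD512-C512-C256-C128-C64, U-Net skips, tanh output.
import torch
import torch.nn as nn

def down(cin, cout, norm=True):
    layers = [nn.Conv2d(cin, cout, 4, 2, 1, bias=not norm)]
    if norm:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def up(cin, cout, dropout=False):
    layers = [nn.ConvTranspose2d(cin, cout, 4, 2, 1, bias=False),
              nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
    if dropout:
        layers.append(nn.Dropout(0.5))   # dropout 0.5 in first three ups
    return nn.Sequential(*layers)

class UNetGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [64, 128, 256, 512, 512, 512, 512, 512]     # C64-...-C512
        self.downs = nn.ModuleList()
        cin = 3
        for i, c in enumerate(chans):
            self.downs.append(down(cin, c, norm=(i != 0)))
            cin = c
        # Decoder inputs are doubled by the skip concatenations.
        up_specs = [(512, 512, True), (1024, 512, True), (1024, 512, True),
                    (1024, 512, False), (1024, 256, False),
                    (512, 128, False), (256, 64, False)]
        self.ups = nn.ModuleList([up(ci, co, d) for ci, co, d in up_specs])
        self.final = nn.Sequential(
            nn.ConvTranspose2d(128, 1, 4, 2, 1), nn.Tanh())  # 1-channel MRI

    def forward(self, x):
        skips = []
        for d in self.downs:
            x = d(x)
            skips.append(x)
        skips = skips[:-1][::-1]             # drop the bottleneck, reverse
        for u, s in zip(self.ups, skips):
            x = torch.cat([u(x), s], dim=1)
        return self.final(x)

g = UNetGenerator().eval()
with torch.no_grad():
    print(g(torch.randn(1, 3, 256, 256)).shape)  # -> (1, 1, 256, 256)
```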
## 4 Results
The Fréchet Inception Distance (FID) metric and the Structural Similarity Index Measure (SSIM) were used to assess the quality of the generated images. FID
provides a quantitative measure of similarity between the distribution of
generated and authentic images, with lower scores indicating higher quality.
SSIM considers the structural information of images, accounting for spatial
relationships beyond pixel values. Additionally, a qualitative evaluation was
conducted by observing the movements of each articulator frame by frame,
drawing conclusions regarding which articulators are constructed most
effectively. The FID score for generated images compared to ground truth is
30.80, though establishing an understanding of what a “good” FID score is when
transitioning from RGB to MRI domains remains challenging. To gain some
insight, an FID was calculated for various ground truth frames to assess how
well the FID performs with real images but of different vocal tract views,
yielding a score of 19.75. While FID offers valuable insight into image
similarity, it doesn’t directly consider the spatial representation of vocal
tract structure. Therefore, SSIM might be more suitable for this task. The
average SSIM score for all 46 test images was 0.7961, with higher scores
indicating better image quality on a scale from -1 to 1. We recognise, and
highlight, that interpreting these scores in this application domain is
challenging.
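For reference, both reported metrics can be computed as sketched below; the FID routine assumes Inception-v3 pool features have already been extracted for the real and generated sets, and is a generic implementation of the standard formula rather than the exact pipeline used here.

```python
# Sketch of the two metrics reported above.  SSIM uses scikit-image; fid()
# assumes precomputed Inception-v3 pool features (extraction omitted).
import numpy as np
from scipy.linalg import sqrtm
from skimage.metrics import structural_similarity as ssim

def mean_ssim(real, fake):
    """real, fake: lists of 2-D grayscale arrays in [0, 1]."""
    return np.mean([ssim(r, f, data_range=1.0) for r, f in zip(real, fake)])

def fid(feats_real, feats_fake):
    """feats_*: (n_images, 2048) Inception feature matrices."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # numerical noise can give tiny
        covmean = covmean.real        # imaginary parts
    return float(np.sum((mu1 - mu2)**2) + np.trace(s1 + s2 - 2*covmean))
```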
As illustrated in Figure 1, the generative models demonstrate some proficiency
in generating images with realistic appearances, particularly showcasing
discernible movements in the jaw regions. However, upon closer examination,
specific articulatory details are challenging to determine. Despite reasonable
FID and SSIM scores indicating overall good image similarity, inconsistencies
between generated articulators and ground truth are apparent in certain
frames. These discrepancies could potentially lead to misleading
interpretations in clinical applications, where images resembling plausible
MRI scans but with incorrect articulator configurations may pose risks. Moving
forward, vocal tract segmentation could serve as a promising avenue for
enhancing the clinical relevance in assessing the quality of generated vocal
tract views [9].
Figure 1: Still frames sample the external view (left), ground truth MRI frame
(middle) and generated frame (right).
## 5 Discussion of Ethics and Responsible Use
The ethical dimensions surrounding the potential of such generative medical
approaches demand scrutiny. There are concerns associated with the generation
and utilisation of synthetic views, the accuracy and reliability of the data,
and potential misuse.
### 5.1 Synthetic dataset enlargement
The fundamental principle of informed consent and participant autonomy is
central. Ethical protocols governing the collection of MRI data are
universally stringent, dictating both the type of data collected and its
storage practices. Participants are provided with comprehensive information
about the study, including potential risks and the intended utilisation of
their data, to make informed decisions regarding consent. The integration of
generative AI raises logistical and ethical considerations perhaps not
originally conceived. In a practical application, MRI data might initially be
collected for a limited set of sentences, such as the 10 here. Subsequently,
using generative techniques, additional MRI data could be synthesised for
sentences that were not originally captured via MRI. When using a trained
model, it’s possible to employ readily available facial data, especially from
public spaces where recording may be allowed by law. However, the ethical
dilemma arises: Is it acceptable and responsible to generate new data
modalities without obtaining explicit consent? While enlarging datasets using
generative AI techniques is not novel and has been applied in various domains,
the unique aspect here lies in the translation from an external view to an
internal view. Accessing and utilising facial data for such a task, even if
publicly available, must be carefully assessed to uphold principles of
privacy.
### 5.2 Accuracy of generated images
Responsible use of generative AI necessitates addressing concerns surrounding
the accuracy and integrity of synthesised data. As demonstrated here, methods
such as FID are employed to help assess the ”quality” of images. However, it
is evident from the outset that these methods do not adequately evaluate
specific spatial information in the generated images. In other words, the
morphology of clinically-relevant structures is not captured well by these
metrics. Often, it is also hard to identify subtle features even directly in
the dataset (see Figure 1), so interpreting the quality of synthesised data in
this domain is a challenge.
Questions will arise regarding the potential misuse or misinterpretation of
inaccurate synthetic information. Poorly performing models could lead to
misdiagnosis or misinterpretation. Therefore, it is imperative to remain
vigilant and implement rigorous validation procedures to ensure the
reliability and accuracy of synthesised data. More work is needed to develop
approaches to assess the usefulness and trustworthiness of generated images.
### 5.3 Data storage of generated images
The stringent protocols governing the storage and anonymisation of MRI data
are imperative to safeguard individuals’ sensitive health information.
However, the creation of additional synthetic MRI data, which may still
contain identifiable features or morphology, may not always undergo comparable
protocol scrutiny as the original data. While large scale MRI datasets could
potentially advance medical research and clinical applications, relaxed
protocols for synthesised data may compromise privacy and data security. Thus,
careful ethical consideration is warranted here for future research.
Furthermore, there are broader societal implications to consider, particularly
regarding the potential impact of synthesised medical views on areas such as
identity verification. If such technology were to be deployed in contexts such
as security or law enforcement, there could be implications for individuals’
rights and freedoms, including the risk of discrimination or misuse of
biometric data.
### 5.4 Dataset and model biases
The publicly available dataset used in this study exhibited a bias towards
speakers of British English. Though it is understandable from the associated
papers that this is likely an attempt to standardise an already small and
challenging dataset, it nevertheless highlights concerns regarding potential
biases in future datasets and the models trained upon them. Certain
demographic groups may be favoured in inferences, while for others the model
could perform poorly. Likewise, the use of the model would likely be limited to the application studied in the dataset (e.g. speech versus chewing); applying it outside that domain would lead to misleading results.
The concern extends beyond data representation to the broader implications for
societal equity and fairness. Though this is a problem not only relevant to
this application, occurrence of such a scenario could also exacerbate existing
disparities in access to resources and opportunities, with models not being
tailored to regional use. Proactive measures must be implemented to mitigate
bias and promote inclusivity in dataset curation and model development.
Strategies may include diversifying dataset sources to encompass a broader
spectrum of linguistic and cultural backgrounds, and implementing robust
validation techniques to identify and mitigate bias in model predictions.
## 6 Conclusion
A demonstration of an exploratory method for generating MRI images of the
vocal tract has been presented. Leveraging the Pix2PixGAN architecture, this
study demonstrates the application of generative AI to synthesise previously unseen vocal tract configurations from external facial views. The quality of
the generated images has been evaluated using the Fréchet Inception Distance
(FID) metric, alongside the observation of distinct articulator movements.
Results are by no means conclusive at this stage, but certainly raise the
question of whether this is a valid line of research in generative AI for
future researchers.
Furthermore, an initial discussion regarding the responsible use of generative
AI in such applications has been provided. This discussion presents
considerations that must be taken into account when employing such techniques.
These encompass various aspects, including the enlargement of synthetic
datasets, the accuracy of generated images, the storage protocols employed,
and the potential biases inherent in both the dataset and the models utilised.
## References
* Isola et al. [2017] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-Image Translation with Conditional Adversarial Networks. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 5967–5976, Honolulu, HI, 2017. IEEE.
* Jeong et al. [2022] Jiwoong J. Jeong, Amara Tariq, Tobiloba Adejumo, Hari Trivedi, Judy W. Gichoya, and Imon Banerjee. Systematic Review of Generative Adversarial Networks (GANs) for Medical Image Classification and Segmentation. _Journal of Digital Imaging_ , 35(2):137–152, 2022.
* Kochetov [2020] Alexei Kochetov. Research methods in articulatory phonetics I: Introduction and studying oral gestures. _Language and Linguistics Compass_, 14(4):e12368, 2020.
* Makhlouf et al. [2023] Ahmed Makhlouf, Marina Maayah, Nada Abughanam, and Cagatay Catal. The use of generative adversarial networks in medical image augmentation. _Neural Computing and Applications_ , 35(34):24055–24068, 2023.
* Paladugu et al. [2023] Phani Srivatsav Paladugu, Joshua Ong, Nicolas Nelson, Sharif Amit Kamran, Ethan Waisberg, Nasif Zaman, Rahul Kumar, Roger Daglius Dias, Andrew Go Lee, and Alireza Tavakkoli. Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence. _Annals of Biomedical Engineering_ , 51(10):2130–2142, 2023.
* Ramanarayanan et al. [2018] Vikram Ramanarayanan, Sam Tilsen, Michael Proctor, Johannes Töger, Louis Goldstein, Krishna S. Nayak, and Shrikanth Narayanan. Analysis of speech production real-time MRI. _Computer Speech and Language_ , 52:1–22, 2018.
* Scholes and Skipper [2020] Chris Scholes and Jeremy I. Skipper. The inter-relationship between the face and vocal-tract configuration during audio-visual speech. OSF, 2020.
* Scholes et al. [2020] Chris Scholes, Jeremy I. Skipper, and Alan Johnston. The interrelationship between the face and vocal tract configuration during audiovisual speech. _Proceedings of the National Academy of Sciences_, 117(51):32791–32798, 2020.
* Shahid et al. [2024] Muhammad Suhaib Shahid, Andrew P French, Michel F. Valstar, and Gleb E. Yakubov. Research in Methodologies for Modelling the Oral Cavity. _Biomedical Physics & Engineering Express_, 2024.
* Tiede et al. [2000] M.K. Tiede, Shinobu Masaki, and Eric Vatikiotis-Bateson. Contrasts in speech articulation observed in sitting and supine conditions. _Proceedings of the 5th Seminar on Speech Production_ , pages 25–28, 2000.
* Xie et al. [2023] Guoyang Xie, Yawen Huang, Jinbao Wang, Jiayi Lyu, Feng Zheng, Yefeng Zheng, and Yaochu Jin. Cross-modality Neuroimage Synthesis: A Survey. _ACM Computing Surveys_ , 56(3):80:1–80:28, 2023.
|
Sachin Chauhan
Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai, 400076, India
<EMAIL_ADDRESS>
Pichai Ramadevi
Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai, 400076, India
<EMAIL_ADDRESS>
# $\hat{Z}$-invariant for $SO(3)$ and $OSp(1|2)$ Groups
Sachin Chauhan Pichai Ramadevi
(Received: date / Accepted: date)
###### Abstract
Three-manifold invariants $\hat{Z}$ (“$Z$-hat”), also known as homological
blocks, are $q$-series with integer coefficients. Explicit $q$-series form for
$\hat{Z}$ is known for the $SU(2)$ group, the supergroup $SU(2|1)$ and the orthosymplectic supergroup $OSp(2|2)$. We focus on $\hat{Z}$ for the $SO(3)$ group and
orthosymplectic supergroup $OSp(1|2)$ in this paper. Particularly, the change
of variable relating $SU(2)$ link invariants to the $SO(3)$ & $OSp(1|2)$ link
invariants plays a crucial role in explicitly writing the $q$-series.
###### Keywords:
Chern-Simons theory · topological field theories · topological strings · M-theory · 3-manifold · knot · quantum invariant · $q$-series · colored Jones polynomial
## 1 Introduction
Knot theory has attracted attention from both mathematicians and physicists
during the last 40 years. The seminal work of Witten Witten:1988hf giving a
three-dimensional definition for Jones polynomials of knots and links, using
$SU(2)$ Chern-Simons theory on $S^{3}$, triggered a tower of new colored link
invariants. Such new invariants are given by expectation value of Wilson loops
carrying higher dimensional representation $R\in\mathcal{G}$ in Chern-Simons
theory where $\mathcal{G}$ denotes gauge group. These link invariants are in
variable ${\mathbbm{q}}$ which depends on the rank of the gauge group
$\mathcal{G}$ and the Chern-Simons coupling constant $k\in\mathbb{Z}$ (For eg:
when $\mathcal{G}=SU(N)$ then ${\mathbbm{q}}=\text{exp}\left(\frac{2\pi
i}{k+N}\right)$). Witten’s approach also gives three-manifold invariant
$Z_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]$, called Chern-Simons partition function
for manifold $M$, obtained from surgery of framed links on $S^{3}$(Lickorish-
Wallace theorem10.2307/1970373 ; wallace_1960 ). Witten-Reshitikhin-Turaev
(WRT) invariants $\tau_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]$ known in the
mathematics literature are proportional to the Chern-Simons partition
function:
$Z_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]={\tau_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]\over\tau_{k}^{\mathcal{G}}[S^{2}\times
S^{1};{\mathbbm{q}}]}~{}.$ (1)
These WRT invariants can be written in terms of the colored invariants of
framed links 10.2307/1970373 ; wallace_1960 ; Kaul:2000xe ; Ramadevi:1999nd . It was a puzzling observation that the colored knot polynomials appear as
Laurent series with integer coefficients. There must be an underlying
topological interpretation of such integer coefficients. This question was
answered both from mathematics and physics perspective. Initial work of
Khovanovkhovanov2000categorification titled ‘cateforification’ followed by
other papers on bi-graded homology theory including Khovanov-Rozansky homology
led to new homological invariants. Thus the integer coefficients of the
colored knot polynomials are interpreted as the dimensions of vector space of
homological theory. From topological strings and intersecting
branesOoguri:1999bv ; Gopakumar:1998ii ; Gopakumar:1998jq , the integers of
HOMFLY-PT polynomials are interpretable as counting of BPS states. Further the
connections to knot homologies within topological string context was initiated
in Gukov:2004hz resulting in concrete predictions of homological invariants
for some knots (see review Nawata:2015wya and references therein). Such a
physics approach involving brane set up in $M$-theoryGukov:2017kmk ;
Gukov:2016gkn ; Mikhaylov:2014aoa ; Ferrari:2020avq suggests the plausibility
of categorification of WRT invariants
$\tau_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]$ for three-manifolds. However, the
WRT invariants for simple three-manifolds are not a Laurent series with
integer coefficients.
The detailed discussion of the $U(N)$ Chern-Simons partition function on the Lens
space $L(p,1)\equiv S^{3}/\mathbb{Z}_{p}$ (see section 6 of Gukov:2016gkn )
shows a basis transformation
$Z_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]\underrightarrow{~{}~{}\mathcal{S}~{}~{}}{\hat{Z}}^{\mathcal{G}}[M;q]$
so that the $\hat{Z}$ are $q$-series (where the variable $q$ is an arbitrary
complex number inside the unit disk) with integer coefficients (GPPV
conjecture Gukov:2017kmk ). These $\hat{Z}$ are called the homological blocks
of the WRT invariants of three-manifolds $M$. Physically, the new three-manifold
invariant $\hat{Z}^{\mathcal{G}}[M;q]$ is the partition function
$Z_{T^{\mathcal{G}}[M]}[D^{2}\times S^{1}]$ for simple Lie groups. Here
$T^{\mathcal{G}}[M]$ denotes the effective 3d $\mathcal{N}=2$ theory on
$D^{2}\times S^{1}$ obtained by reducing the 6d $(2,0)$ theory (describing the
dynamics of coincident $M5$ branes) on $M$.
For a class of negative definite plumbed three-manifolds as well as link
complements Gukov:2017kmk ; Gukov:2019mnk ; park2020higher ; Chung:2018rea ,
$\hat{Z}^{SU(N)}$ has been calculated. Further, $\hat{Z}$ invariants for the
super unitary supergroup $SU(n|m)$, with an explicit $q$-series for $SU(2|1)$,
are presented in Ferrari:2020avq . The generalisation to the orthosymplectic
supergroup $OSp(2|2n)$, with an explicit $q$-series for
$OSp(2|2)$ chae2021towards , motivates us to look at $\hat{Z}$ for other gauge
groups.
Our goal in this paper is to extract $\hat{Z}$ for the simplest orthogonal
group $SO(3)$ and the simplest odd orthosymplectic supergroup $OSp(1|2)$. We
take the route of relating $SU(2)$ colored link invariants to the link
invariants for these two groups to obtain $\hat{Z}$ invariants.
The paper is organised as follows. In section 2, we will review
the developments on the invariants of knots, links and three-manifolds. We
will first briefly present Chern-Simons theory and colored link invariants
with explicit results for $SU(2)$ gauge group and indicate how colored $SO(3)$
and $OSp(1|2)$ link invariants can be obtained from the colored $SU(2)$
polynomials. Then, we will summarise the developments of the homological
invariants. In section 3, we briefly review $\hat{Z}$-series invariant for
$SU(2)$ group for the negative definite plumbed three-manifolds. This will
serve as a warmup to extend to $SO(3)$ and $OSp(1|2)$ group which we will
present in section 4. We summarize the results and also indicate future
directions to pursue in the concluding section 5.
## 2 Knots, Links and Three-manifold Invariants
In this section, we will briefly summarise new invariants in knot theory from
the physics approach as well as from the mathematics approach.
### 2.1 Chern-Simons Field Theory Invariants
Chern-Simons theory based on gauge group $\mathcal{G}$ is a Schwarz type
topological field theory which provides a natural framework for study of
knots, links and three-manifolds $M$. Chern-Simons action
$S_{CS}^{\mathcal{G}}(A)$ is explicitly metric independent:
$S_{CS}^{\mathcal{G}}(A)=\frac{k}{4\pi}\int_{M}Tr\left(A\wedge
dA+\frac{2}{3}A\wedge A\wedge A\right)~{}.$ (2)
Here $A$ is the matrix valued gauge connection based on gauge group
$\mathcal{G}$ and $k\in\mathbb{Z}$ is the coupling constant. In fact, the
expectation values of Wilson loop operators associated with any $m$-component
link $\mathcal{L}_{m}$ are the link invariants:
$V_{R_{1},R_{2},\ldots
R_{m}}^{\mathcal{G}}[\mathcal{L}_{m};{\mathbbm{q}}]=\langle
W_{R_{1},R_{2},\ldots
R_{m}}[\mathcal{L}_{m}]\rangle={\int{\mathcal{D}}A\exp(iS_{CS})\overbrace{P\left(\prod_{i}\Tr_{R_{i}}exp\oint_{\mathcal{K}_{i}}A\right)}^{W_{R_{1},R_{2},\ldots
R_{m}}[\mathcal{L}_{m}]}\over{\underbrace{\int{\mathcal{D}}A\exp(iS_{CS})}_{Z^{\mathcal{G}}_{k}[M;{\mathbbm{q}}]}}}~{},$
(3)
where $\mathcal{K}_{i}$’s denote the component knots of link $\mathcal{L}_{m}$
carrying representations $R_{i}$’s of gauge group $\mathcal{G}$ and
$Z^{\mathcal{G}}_{k}[M;{\mathbbm{q}}]$ defines the Chern-Simons partition
function encoding the topology of the three-manifold $M$.
Exploiting the connection between Chern-Simons theory, based on group
$\mathcal{G}$, and the corresponding Wess-Zumino-Witten (WZW) conformal field
theory with the affine Lie algebra symmetry $\mathfrak{g}_{k}$, the invariants
of these links embedded in a three-sphere $M=S^{3}$ can be explicitly written
in variable ${\mathbbm{q}}$ :
${\mathbbm{q}}=\exp({2\pi i\over k+C_{v}})~{},$ (4)
which depends on the coupling constant $k$ and the dual Coxeter number $C_{v}$
of the group $\mathcal{G}$. These link invariants include the well-known
polynomials in the knot theory literature.
$\mathcal{G}$ | $R$ | Invariant
---|---|---
$SU(2)$ | $\yng(1)$ | Jones
$SU(N)$ | $\yng(1)$ | HOMFLY-PT
$SO(N)$ | defining | Kauffman
#### 2.1.1 Link Invariants
As indicated in the above table, Jones’ polynomial corresponds to the
fundamental representation $R=\tiny{\yng(1)}\equiv 1\in SU(2)$ placed on all
the component knots:
$V_{1,1,1,\ldots 1}^{SU(2)}[\mathcal{L}_{m};{\mathbbm{q}}]\equiv J_{2,2,\ldots
2}\left[\mathcal{L}_{m};{\mathbbm{q}}=\exp({2\pi i\over k+2})\right]~{},$ (5)
where the subscript ‘2’ in Jones polynomial
$J_{2,2,2,\ldots}[\mathcal{L}_{m};{\mathbbm{q}}]$ denotes the dimension of
$R={\tiny{\yng(1)}}$. Higher dimensional representations placed on the
component knots $R_{i}=\underbrace{\tiny{\yng(4)}}_{n_{i}-1}\equiv n_{i}-1\in
SU(2)$ are the colored Jones invariants:
$V_{n_{1}-1,n_{2}-1,n_{3}-1,\ldots
n_{m}-1}^{SU(2)}[\mathcal{L}_{m};{\mathbbm{q}}]\equiv J_{n_{1},n_{2},\ldots
n_{m}}\left[\mathcal{L}_{m};{\mathbbm{q}}=\exp({2\pi i\over k+2})\right]~{},$
(6)
and the invariants with these representations belonging to $SU(N)$ ($(SO(N)$)
are known as colored HOMFLY-PT (colored Kauffman) invariants. For clarity, we
will restrict to $SU(2)$ group to write the invariants explicitly in terms of
${\mathbbm{q}}$ variable.
We work with the following unknot ($\bigcirc$) normalisation:
$J_{n+1}[\bigcirc;{\mathbbm{q}}]={\rm
dim}_{\mathbbm{q}}\underbrace{\yng(4)}_{n}={{\mathbbm{q}}^{(n+1)\over
2}-{\mathbbm{q}}^{-{(n+1)\over 2}}\over{\mathbbm{q}}^{1\over
2}-{\mathbbm{q}}^{-{1\over 2}}}={\sin({\pi(n+1)\over k+2})\over\sin({\pi\over
k+2})}={S_{0n}\over S_{00}},$ (7)
where ${\rm dim}_{\mathbbm{q}}\underbrace{\yng(4)}_{n}$ denotes the quantum
dimension of the representation $\underbrace{\yng(4)}_{n}$ and
$S_{n_{1}n_{2}}$ are the modular transformation matrix elements of the
$\mathfrak{su}(2)_{k}$ WZW conformal field theory whose action on the
characters is
$\chi_{n_{1}}(\tau)~{}~{}\underrightarrow{~{}~{}~{}S~{}~{}~{}}~{}~{}\chi_{n_{2}}\left(-{1\over\tau}\right)~{},$
where $\tau$ denotes the modular parameter. These knot and link polynomials
with the above unknot normalisation are referred to as unnormalised colored
Jones invariants.
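For instance, setting $n=1$ in (7) gives
$J_{2}[\bigcirc;{\mathbbm{q}}]={{\mathbbm{q}}-{\mathbbm{q}}^{-1}\over{\mathbbm{q}}^{1\over 2}-{\mathbbm{q}}^{-{1\over 2}}}={\mathbbm{q}}^{1\over 2}+{\mathbbm{q}}^{-{1\over 2}}~{},$
which reduces to the classical dimension $2$ of the fundamental representation
in the limit ${\mathbbm{q}}\rightarrow 1$.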
For framed unknots with framing number $f$, the invariant will be
$J_{n+1}[\bigcirc_{f};{\mathbbm{q}}]={\mathbbm{q}}^{f[{(n+1)^{2}-1\over
4}]}{{\mathbbm{q}}^{(n+1)\over 2}-{\mathbbm{q}}^{-{(n+1)\over
2}}\over{\mathbbm{q}}^{1\over 2}-{\mathbbm{q}}^{-{1\over
2}}}\propto(T_{nn})^{f}{S_{0n}\over S_{00}},$ (8)
where the action of the modular transformation matrix $T$ on characters is
$\chi_{n}(\tau)~{}~{}\underrightarrow{~{}~{}~{}T~{}~{}~{}~{}}\chi_{n}(\tau+1)~{}.$
The colored Jones invariant for the Hopf link can also be written in terms of
$S$ matrix:
$J_{n_{1}+1,n_{2}+1}[H;{\mathbbm{q}}]=\left({{\mathbbm{q}}^{\frac{(n_{1}+1)(n_{2}+1)}{2}}-{\mathbbm{q}}^{-\frac{(n_{1}+1)(n_{2}+1)}{2}}\over{\mathbbm{q}}^{\frac{1}{2}}-{\mathbbm{q}}^{-\frac{1}{2}}}\right)={S_{n_{1}n_{2}}\over
S_{00}}.$ (9)
The invariant for a framed Hopf link $H(f_{1},f_{2})$, with framing numbers
$f_{1}$ and $f_{2}$ on the two component knots, in terms of $T$ and $S$
matrices is
$J_{n_{1}+1,n_{2}+1}[H(f_{1},f_{2});{\mathbbm{q}}]\propto(T_{n_{1}n_{1}})^{f_{1}}(T_{n_{2}n_{2}})^{f_{2}}{S_{n_{1}n_{2}}\over
S_{00}}~{}~{}.$ (10)
We will look at a class of links obtained as a connected sum of framed Hopf
links. For instance, the invariant for the connected sum of two framed Hopf
links $H(f_{1},f_{2})\\#H(f_{2},f_{3})$ will be
$\displaystyle
J_{n_{1}+1,n_{2}+1,n_{3}+1}[H(f_{1},f_{2})\\#H(f_{2},f_{3});{\mathbbm{q}}]$
$\displaystyle\propto$
$\displaystyle{\prod_{i=1}^{3}T_{n_{i}n_{i}}^{f_{i}}}{S_{n_{1}n_{2}}\over
S_{00}}{S_{n_{2}n_{3}}\over S_{n_{2}0}}$ $\displaystyle=$
$\displaystyle{\prod_{i=1}^{2}J_{n_{i}+1,n_{i+1}+1}[H(f_{i},f_{i+1});{\mathbbm{q}}]\over
J_{n_{2}+1}[\bigcirc;{\mathbbm{q}}]}~{}.~{}$
Such a connected sum of two framed Hopf links, which is a 3-component link,
can be denoted as a linear graph
$f_{1}$–$f_{2}$–$f_{3}$
with three vertices labeled by the framing numbers and the edges connecting
the adjacent vertices. These are known as ‘plumbing graphs’. Another plumbing
graph $\Gamma$ with 8 vertices denoting the link $\mathcal{L}(\Gamma)$ (the
connected sum of many framed Hopf links) is illustrated in Figure 1. The
colored invariant for these links $\mathcal{L}(\Gamma)$ can be written in
terms of $S$ and $T$ matrices.
Figure 1: An example of a plumbing graph $\Gamma$ (left) and the corresponding
link ${\mathcal{L}}(\Gamma)$ of framed unknots in $S^{3}$ (right).
For a general $m$ vertex plumbing graph with vertices $v_{1},v_{2},\ldots
v_{m}\in V$ labelled by framing numbers $f_{1},f_{2},\ldots f_{m}$, there can
be one or more edges connecting a vertex $v$ with the other vertices. The
degree of any vertex $v$ ($\text{deg}(v)$) is equal to the total number of
edges intersecting $v$. For the graph in Figure 1,
$\text{deg}(2)=\text{deg}(4)=\text{deg}(6)=3$. The colored Jones invariant
for any plumbing graph $\Gamma$ is
$J_{n_{1}+1,n_{2}+1,\ldots n_{m}+1}[\mathcal{L};{\mathbbm{q}}]\propto{1\over
S_{00}}\prod_{i=1}^{m}\\{(T_{n_{i}n_{i}})^{f_{i}}(S_{0n_{i}})^{1-\text{deg}(v_{i})}\\}\prod_{(v_{1},v_{2})\;\in\;\text{Edges}}(S_{n_{v_{1}}n_{v_{2}}})~{}.$
(12)
Even though we have presented the colored Jones invariants (10)–(12) for the
$SU(2)$ group, the formal expressions of these link invariants in terms of $S$
and $T$ matrices are applicable for any arbitrary gauge group $\mathcal{G}$.
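As a concrete illustration of (12), the sketch below (our own; the function name and conventions are assumptions, and the overall framing-dependent phase suppressed by the proportionality in (12) is fixed arbitrarily) evaluates the colored Jones invariant of a plumbing graph numerically from the $\mathfrak{su}(2)_{k}$ modular data.

```python
import numpy as np

def jones_plumbing(k, framings, edges, colors):
    """Evaluate formula (12) for a plumbing graph from the su(2)_k
    modular data, up to an overall framing-dependent phase.
    colors[v] = n_v in {0, ..., k}."""
    N = k + 2
    S = np.array([[np.sqrt(2.0 / N) * np.sin(np.pi * (a + 1) * (b + 1) / N)
                   for b in range(k + 1)] for a in range(k + 1)])
    T = np.array([np.exp(2j * np.pi * ((n + 1)**2 - 1) / (4 * N))
                  for n in range(k + 1)])
    deg = [0] * len(framings)
    for v1, v2 in edges:
        deg[v1] += 1
        deg[v2] += 1
    val = 1.0 / S[0, 0]
    for v, f in enumerate(framings):
        val = val * T[colors[v]]**f * S[0, colors[v]]**(1 - deg[v])
    for v1, v2 in edges:
        val = val * S[colors[v1], colors[v2]]
    return val

# Linear graph f1 - f2 - f3, i.e. the connected sum of two framed Hopf links:
print(jones_plumbing(k=4, framings=[-1, -2, -3],
                     edges=[(0, 1), (1, 2)], colors=[0, 0, 0]))
```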
$\bullet$ $SO(3)$ and $OSp(1|2)$ Link invariants
Using group theory arguments, it is possible to relate colored link invariants
between different groups. For instance, the representations of the $SO(3)$ can
be identified with a subset of $SU(2)$ representations. As a consequence, the
$SO(3)$ link invariants can be related to the colored Jones invariants as
follows:
$V_{n_{1},n_{2},n_{3},\ldots n_{m}}^{SO(3)}\left[\mathcal{L}_{m};Q=\exp({2\pi
i\over K+1})\right]=J_{2n_{1}+1,2n_{2}+1,\ldots
2n_{m}+1}[\mathcal{L}_{m};{\mathbbm{q}}]{\big{|}}_{{\mathbbm{q}}^{2}=Q}~{},$
(13)
where the level $K$ of the affine $\mathfrak{so}(3)_{K}$ Lie algebra must be
an even integer ($K\in 2\mathbb{Z}$).
Similarly, the representations of the orthosymplectic supergroup $OSp(1|2)$
can be related to the representations of the $SU(2)$ group from the study of
$\mathfrak{osp}(1|2)_{\hat{K}}$ WZW conformal field theory and the link
invariants
$V_{n_{1},n_{2},n_{3},\ldots
n_{m}}^{OSp(1|2)}\left[\mathcal{L}_{m};\hat{Q}=\exp({2\pi i\over
2{\hat{K}}+3})\right]$ Ennes:1997kx . Particularly, there is a precise
identification of the polynomial variable $\hat{Q}$ to $SU(2)$ variable
${\mathbbm{q}}$. Further, the fusion rules of the primary fields of
$\mathfrak{osp}(1|2)_{\hat{K}}$ WZW conformal field theory can be compared to
integer spin primary fields of the $\mathfrak{su}(2)_{k}$. Particularly, the
$\hat{S}$ and $\hat{T}$-matrices of $\mathfrak{osp}(1|2)_{\hat{K}}$ :
$\displaystyle\hat{S}_{n_{1}n_{2}}$ $\displaystyle=$
$\displaystyle\sqrt{4\over
2\hat{K}+3}(-1)^{n_{1}+n_{2}}\cos\left[{(2n_{1}+1)(2n_{2}+1)\over
2(2\hat{K}+3)}\pi\right]~{}~{},$ (14) $\displaystyle\hat{T}_{n_{1},n_{2}}$
$\displaystyle\propto$
$\displaystyle\delta_{n_{1},n_{2}}{\hat{Q}}^{\frac{(2n_{1}+1)^{2}-1}{4}}~{}~{}~{},$
(15)
are related to the $S$ and $T$ matrices of $\mathfrak{su}(2)_{k}$ in the
following way:
$\hat{S}_{n_{1},n_{2}}=S_{2n_{1},2n_{2}}{\big{|}}_{{\mathbbm{q}}=-\hat{Q}}~{};~{}\hat{T}_{n_{1},n_{1}}=T_{2n_{1},2n_{1}}{\big{|}}_{{\mathbbm{q}}=-\hat{Q}}$
(16)
Using these relations, we can show that the $OSp(1|2)$ colored invariants match
the colored Jones invariant for any arbitrary link $\mathcal{L}_{m}$ in the
following way:
$V_{n_{1},n_{2},n_{3},\ldots
n_{m}}^{OSp(1|2)}\left[\mathcal{L}_{m};\hat{Q}=\exp({2\pi i\over
2{\hat{K}}+3})\right]=\epsilon J_{2n_{1}+1,2n_{2}+1,\ldots
2n_{m}+1}[\mathcal{L}_{m};{\mathbbm{q}}]{\big{|}}_{{\mathbbm{q}}=-\hat{Q}}~{},$
(17)
where $\epsilon$ could be $\pm 1$ depending on the link $\mathcal{L}$ and the
representations $n_{i}$’s. For example, the colored $OSp(1|2)$ invariant for
framed Hopf link is
$\displaystyle V_{n_{1},n_{2}}^{OSp(1|2)}[H(f_{1},f_{2});\hat{Q}]$
$\displaystyle=$
$\displaystyle{\hat{Q}}^{\frac{f_{1}((2n_{1}+1)^{2}-1)}{4}}{\hat{Q}}^{\frac{f_{2}((2n_{2}+1)^{2}-1)}{4}}(-1)^{(n_{1}+n_{2})}\times$
$\displaystyle\left({{\hat{Q}}^{\frac{(2n_{1}+1)(2n_{2}+1)}{2}}+{\hat{Q}}^{-\frac{(2n_{1}+1)(2n_{2}+1)}{2}}\over{\hat{Q}}^{\frac{1}{2}}+{\hat{Q}}^{-\frac{1}{2}}}\right)$
$\displaystyle=$ $\displaystyle
J_{2n_{1}+1,2n_{2}+1}[H(f_{1},f_{2}),-\hat{Q}]~{}.$
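As a quick consistency check of (17), take the unknot colored by $n$. With the branch choice $(-\hat{Q})^{1/2}=i\hat{Q}^{1/2}$ (our convention for this check), substituting ${\mathbbm{q}}=-\hat{Q}$ in (7) gives
$J_{2n+1}[\bigcirc;{\mathbbm{q}}]{\big{|}}_{{\mathbbm{q}}=-\hat{Q}}=(-1)^{n}\,{{\hat{Q}}^{{2n+1\over 2}}+{\hat{Q}}^{-{2n+1\over 2}}\over{\hat{Q}}^{1\over 2}+{\hat{Q}}^{-{1\over 2}}}~{},$
which reproduces the $OSp(1|2)$ quantum dimension with $\epsilon=(-1)^{n}$.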
In fact, for any link $\mathcal{L}(\Gamma)$ denoted by the graph $\Gamma$, the
invariants will be
$\displaystyle V_{n_{1},n_{2},\ldots
n_{m}}^{OSp(1|2)}[\mathcal{L}(\Gamma);\hat{Q}]$ $\displaystyle=$
$\displaystyle{1\over{\hat{Q}}^{\frac{1}{2}}+{\hat{Q}}^{-\frac{1}{2}}}\prod_{i=1}^{m}(-1)^{n_{i}}{\hat{Q}}^{\frac{f_{i}((2n_{i}+1)^{2}-1)}{4}}$
$\displaystyle\left({\hat{Q}}^{\frac{2n_{i}+1}{2}}+{\hat{Q}}^{-{\frac{2n_{i}+1}{2}}}\right)^{\text{deg}(v_{i})-1}$
$\displaystyle\prod_{(v_{1},v_{2})\;\in\;{\rm
Edges}}{\left({\hat{Q}}^{\frac{(2n_{v_{1}}+1)(2n_{v_{2}}+1)}{2}}+{\hat{Q}}^{-\frac{(2n_{v_{1}}+1)(2n_{v_{2}}+1)}{2}}\right)}~{}.$
(19)
As three-manifolds can be constructed by a surgery procedure on any framed
link, the Chern-Simons partition function/WRT invariant (1) can be written in
terms of link invariants10.2307/1970373 ; wallace_1960 ; Kaul:2000xe ;
Ramadevi:1999nd . We will now present the salient features of such WRT
invariants.
#### 2.1.2 Three-Manifold Invariants
Let us confine to the three-manifold $M[\Gamma]$ obtained from surgery of the
framed link associated with an $L$-vertex graph (an example is illustrated in
Figure 1). Such manifolds are known in the literature as plumbed three-
manifolds. The linking matrix $B$ is defined as
$B_{v_{1},v_{2}}=\left\\{\begin{array}[]{ll}1,&v_{1},v_{2}\text{
connected},\\\ f_{v},&v_{1}=v_{2}=v,\\\
0,&\text{otherwise}.\end{array}\right.\qquad v_{i}\in\text{Vertices of
}\Gamma\;\cong\;\\{1,\ldots,L\\}.$ (20)
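For example, the linear graph with framings $f_{1},f_{2},f_{3}$ shown earlier has
$B=\begin{pmatrix}f_{1}&1&0\\\ 1&f_{2}&1\\\ 0&1&f_{3}\end{pmatrix}~{},$
with $\text{deg}(v_{1})=\text{deg}(v_{3})=1$ and $\text{deg}(v_{2})=2$.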
The algebraic expression for the WRT invariant
$\tau_{k}^{\mathcal{G}}[M(\Gamma);{\mathbbm{q}}]$ is
$\tau_{k}^{\mathcal{G}}[M(\Gamma);{\mathbbm{q}}]=\frac{F^{\mathcal{G}}[{\mathcal{L}}(\Gamma);{\mathbbm{q}}]}{F^{\mathcal{G}}[{\mathcal{L}}(+1\bullet);{\mathbbm{q}}]^{b_{+}}F^{\mathcal{G}}[{\mathcal{L}}(-1\bullet);{\mathbbm{q}}]^{b_{-}}}$
(21)
where $b_{\pm}$ are the number of positive and negative eigenvalues of a
linking matrix $B$ respectively and
$F^{\mathcal{G}}[\mathcal{L}(\Gamma);{\mathbbm{q}}]$ is defined as
$F^{\mathcal{G}}[{\mathcal{L}}(\Gamma);{\mathbbm{q}}]=\sum_{R_{1},R_{2},\ldots
R_{L}}\left(\prod_{i=1}^{L}V_{R_{i}}^{\mathcal{G}}[\bigcirc;{\mathbbm{q}}]\right)V_{R_{1},R_{2}\ldots
R_{L}}^{\mathcal{G}}[\mathcal{L}(\Gamma);{\mathbbm{q}}]~{},$ (22)
where the summation runs over all the allowed integrable representations of the
affine $\mathfrak{g}_{k}$ Lie algebra. By construction, any two homeomorphic
manifolds must share the same three-manifold invariant. There is a prescribed
set of moves on links, called Kirby moves, which give the same three-manifold.
For framed links depicted as plumbing graphs, these moves are known as Kirby-
Neumann moves, as shown in Figure 2. Hence, the three-manifold invariant must
obey
$\tau_{k}^{\mathcal{G}}[M(\Gamma);{\mathbbm{q}}]=\tau_{k}^{\mathcal{G}}[M(\Gamma^{\prime});{\mathbbm{q}}]~{},$
(23)
where the plumbing graphs $\Gamma,\Gamma^{\prime}$ are related by the Kirby-
Neumann moves.
Figure 2: Kirby-Neumann moves that relate plumbing graphs which result in
homeomorphic 3-manifolds.
Towards the end of the 20th century, attempts to give a topological
interpretation for the integer coefficients in the Laurent series expression
for the Jones polynomial (HOMFLY-PT), as well as the corresponding colored
invariants, for any knot $\mathcal{K}$
$J_{n}[\mathcal{K};{\mathbbm{q}}]=\sum_{s}a_{s}{\mathbbm{q}}^{s}~{},~{}\\{a_{s}\\}\in\mathbb{Z}$
(24)
have resulted in developments on homology theories as well as a physics
explanation. We will discuss these ‘homological invariants’ and their
appearance in string/M-theory in the following subsections.
### 2.2 Knot, Link and Three-manifold Homologies
We will first review the developments on homological invariants of knots
accounting for these integers $a_{s}$ (24) as dimension of the vector space
$\mathcal{H}_{\mathcal{K}}$ of a homological theory. Then, we will present the
topological string/M-theory approach where these integers count number of BPS
states.
#### 2.2.1 Homological Invariants of Knots
The pioneering work of Khovanov khovanov2000categorification on bi-graded
homology theory led to the categorification of the Jones polynomial. This was
extended to colored $\mathfrak{sl}_{2}$ knot homology
$\mathcal{H}^{\mathfrak{sl}_{2};n}_{i,j}$ webster2017knot ;
cooper2012categorification ; frenkel2012categorifying , leading to new
homological invariants $\mathcal{P}_{n}^{\mathfrak{sl}_{2}}[\mathcal{K};{\mathbbm{q}},t]$
which categorify the colored Jones polynomial:
$\mathcal{P}_{n}^{\mathfrak{sl}_{2}}[\mathcal{K},{\mathbbm{q}},t]=\sum_{i,j}t^{j}{\mathbbm{q}}^{i}{\rm
dim}\mathcal{H}^{\mathfrak{sl}_{2};n}_{i,j}~{}.$ (25)
The subscripts $i$ and $j$ on the colored $\mathfrak{sl}_{2}$ homology
$\mathcal{H}^{\mathfrak{sl}_{2};n}_{i,j}$ are called the polynomial grading
and the homological grading respectively. In fact, the ${\mathbbm{q}}$-graded
Euler characteristic of the colored $\mathfrak{sl}_{2}$ knot homology gives
the colored Jones invariant:
$J_{n}[\mathcal{K};{\mathbbm{q}}]=\sum_{i,j}(-1)^{j}{\mathbbm{q}}^{i}{\rm
dim}\mathcal{H}^{\mathfrak{sl}_{2};n}_{i,j}~{},$ (26)
explaining the reasons behind the integers $a_{s}$ (24). Khovanov and Rozansky
khovanov2004matrix constructed $\mathfrak{sl}_{N}$ homology using matrix
factorizations. This led to the categorification of colored HOMFLY-PT
polynomials of knots. There have been interesting insights on these homological
invariants within the topological strings context and $M$-theory. We will now
discuss the essential features of the physics approach.
#### 2.2.2 Topological Strings and M-theory
The parallel developments from topological strings and intersecting branes in
$M$-theory Ooguri:1999bv ; Gopakumar:1998jq interpreted the integers of
unnormalised HOMFLY-PT (24) as counting of BPS states. Invoking topological
string duality in the presence of any knot $\mathcal{K}$, Ooguri-Vafa
conjectured
$V_{\tiny\yng(1)}^{SU(N)}[\mathcal{K};{\mathbbm{q}},\lambda={\mathbbm{q}}^{N}]={1\over({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\sum_{Q,s}N_{\tiny\yng(1),Q,s}\lambda^{Q}{\mathbbm{q}}^{s}~{},$
(27)
where the integers $N_{\tiny\yng(1),Q,s}$ count $D4-D2$ bound states in string
theory. Further the relation between the BPS spectrum and $sl_{N}$/Khovanov-
Rozansky knot homology was conjectured within the topological string
contextGukov:2004hz :
$N_{\tiny\yng(1),Q,s}=\sum_{j}(-1)^{j}D_{Q,s,j}~{},$ (28)
where the integers $D_{Q,s,j}$ are referred to as refined BPS invariants. The
extra charge/homological grading $j$ is explained by the appearance of an
extra $U(1)$ symmetry in $M$-theory compactified on Calabi-Yau three-folds
$CY_{3}$. The topological string duality and the dualities of physical string
theories compactified on $CY_{3}$ imply that the vector space of knot
homologies is the Hilbert space of BPS states (see the review Nawata:2015wya
and references therein):
$\mathcal{H}_{\mathcal{K}}\equiv\mathcal{H}_{BPS}~{}.$
The impact of knot homology on the categorification of the WRT invariants has
been studied in the last six years. We now present a concise summary of the
recent developments in this direction.
#### 2.2.3 Three-Manifold Homology
As the WRT invariants (21) of three-manifolds involve invariants of links,
logically we would expect a homology of the three-manifold
$\mathcal{H}^{\mathcal{G};M}$ such that
$\tau_{k}^{\mathcal{G}}[M;{\mathbbm{q}}]\stackrel{{\scriptstyle\text{?}}}{{=}}\sum_{i,j}(-1)^{j}{\mathbbm{q}}^{i}{\rm
dim}\mathcal{H}^{\mathcal{G};M}_{i,j}~{}.$ (29)
However, the WRT invariants known for many three-manifolds are not seen as
${\mathbbm{q}}$-series (29). We will now review the necessary steps
Gukov:2017kmk of obtaining a new three-manifold invariant $\hat{Z}$, as a
${\mathbbm{q}}$-series, from the $U(N)$ Chern-Simons partition function for the
Lens space $M=L(p,1)\equiv S^{3}/\mathbb{Z}_{p}$. The space of flat connections
$\\{a\\}$ is characterised by
$\pi_{1}[{S^{3}\over\mathbb{Z}_{p}}]\equiv\mathbb{Z}_{p}$. Hence
$Z_{k}^{U(N)}[L[p,1];{\mathbbm{q}}]$ can be decomposed as a sum of
perturbative Chern-Simons partition functions
$Z_{a}^{U(N)}[L[p,1];{\mathbbm{q}}]$ around these abelian flat connections
$a$ Gukov:2017kmk :
$Z_{k}^{U(N)}[L[p,1];{\mathbbm{q}}]=\sum_{a}\exp[iS_{CS}^{(a)}]Z_{a}^{U(N)}[L[p,1];{\mathbbm{q}}]~{},$
(30)
where $S_{CS}^{(a)}$ is the corresponding classical Chern-Simons action. The
following change of basis by $\mathcal{S}$ matrix of $\mathfrak{u}(1)^{N}_{p}$
affine algebra:
$Z_{a}^{U(N)}[L(p,1);{\mathbbm{q}}]=\sum_{b}\mathcal{S}_{ab}\hat{Z}_{b}^{U(N)}[\mathcal{L}[p,1];q]\Big{|}_{q\rightarrow{\mathbbm{q}}}~{},$
(31)
is required so that
$\hat{Z}_{b}^{U(N)}[\mathcal{L}[p,1];q]\in
q^{\Delta_{b}}\mathbb{Z}[[q]]~{},~{}~{}\Delta_{b}\in\mathbb{Q}~{}.$ (32)
Physically, the $\hat{Z}_{b}[\mathcal{L}[p,1];q]$ is also the vortex partition
function $\hat{Z}_{T[L[p,1]]}[D^{2}\times_{q}S^{1}]$ obtained from reducing 6d
$(2,0)$ theory (describing dynamics of $N$-coincident $M5$ branes on
$L[p,1]\times D^{2}\times_{q}S^{1}$) on $L[p,1]$. The effective 3-d
$\mathcal{N}=2$ theory on $D^{2}\times_{q}S^{1}$ (cigar geometry) is denoted
as $T^{U(N)}[L[p,1]]$.
For other three-manifolds $M$, the $\mathcal{S}$ matrix depends only on
$H_{1}(M,\mathbb{Z})$. Further, there is a Hilbert space of BPS states
$\mathcal{H}^{i,j}_{BPS}$ of the M5 brane system in the ambient space-time
$T^{*}M\times TN\times S^{1}$, where the $i,j$ gradings keep track of the two
spins associated with the rotational symmetry $U(1)_{q}\times U(1)_{R}$ on
$D^{2}\subset TN$. The Hilbert space of states for the theory
$T^{\mathcal{G}}[M]$, with boundary condition at $\partial D^{2}=S^{1}$ labeled
by $a\in({\rm Tor}H_{1}(M,\mathbb{Z}))^{N}/S_{N}$, leads to bi-graded
homological invariants of $M$:
${\mathcal{H}}_{a}^{U(N)}[M]={\mathcal{H}}_{T_{L[p,1]}^{U(N)}}[D^{2};a]=\bigoplus_{\begin{subarray}{c}i\in\mathbb{Z}+\Delta_{a},\\\
j\in\mathbb{Z}\end{subarray}}{\mathcal{H}}_{a}^{i,j}~{}.$ (33)
Note that the grading $i$ counts the charge under $U(1)_{q}$ rotation of
$D^{2}$ and homological grading $j$ is the R-charge of the $U(1)_{R}$
R-symmetry. In the following section, we will review the necessary steps of
obtaining $\hat{Z}$ invariants for $SU(2)$ group. This will provide clarity of
notations to investigate $\hat{Z}$ for $SO(3)$ and $OSp(1|2)$ group.
## 3 Review of $SU(2)$ $\hat{Z}$ invariant
As discussed in subsection 2.2.3 Gukov:2016gkn , the expression for the Lens
space partition function using eqns. (30)–(32),
$\displaystyle Z_{k}^{U(N)}[L(p,1),{\mathbbm{q}}]$ $\displaystyle=$
$\displaystyle\sum_{a,b\in\mathbb{Z}_{p}}\mathcal{S}_{ab}\exp[iS_{CS}^{(a)}]\hat{Z}_{b}^{U(N)}[\mathcal{L}(p,1);q]\Big{|}_{q\rightarrow{\mathbbm{q}}}~{},$
(34) $\displaystyle{\rm where}~{}\hat{Z}_{b}[\mathcal{L}[p,1];q]$
$\displaystyle\in$ $\displaystyle
q^{\Delta_{b}}\mathbb{Z}[[q]]~{},~{}~{}\Delta_{b}\in\mathbb{Q}~{},$ (35)
led to the following conjecture Gukov:2017kmk ; Gukov:2019mnk for any closed
oriented three manifold $M$ known as GPPV conjecture:
$\displaystyle Z^{SU(2)}_{k}[M;{\mathbbm{q}}]$ $\displaystyle=$
$\displaystyle(i\sqrt{2(k+2)})^{b_{1}(M)-1}\sum_{a,b\;\in\;\atop\text{Spin}^{c}(M)/{\mathbb{Z}}_{2}}\exp[2\pi
i(k+2){\ell k}(a,a)]\,\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}|\mathcal{W}_{b}|^{-1}\mathcal{S}_{ab}\hat{Z}^{SU(2)}_{b}[M;q]|_{q\rightarrow{\mathbbm{q}}=\exp({\frac{2\pi
i}{k+2}})}$ (36)
where
$\hat{Z}^{SU(2)}_{b}[M;q]\in\,2^{-c}q^{\Delta_{b}}{\mathbb{Z}}[[q]]\qquad\Delta_{b}\in{\mathbb{Q}},\qquad
c\in{\mathbb{Z}}_{+}$ (37)
is convergent for $|q|<1$ and
$\mathcal{S}_{ab}=\frac{e^{2\pi i{\ell k}(a,b)}+e^{-2\pi i{\ell
k}(a,b)}}{|{\mathcal{W}}_{a}|\sqrt{|H_{1}(M,{\mathbb{Z}})|}}.$ (38)
Here $\mathcal{W}_{a}$ is the stabilizer subgroup defined as
${\mathcal{W}}_{a}\;\equiv\;\text{Stab}_{{\mathbb{Z}}_{2}}(a)\;=\;\left\\{\begin{array}[]{cl}{\mathbb{Z}}_{2},&a=-a\,,\\\
1,&\text{otherwise\,.}\end{array}\right.$ (39)
and $\ell k$ denotes the linking pairing on $H_{1}(M,\mathbb{Z})$:
$\begin{array}[]{cccc}{\ell k}:&H_{1}(M,\mathbb{Z})\otimes
H_{1}(M,\mathbb{Z})&\longrightarrow&{\mathbb{Q}}/{\mathbb{Z}}\\\
&[a]\otimes[b]&\longmapsto&{\\#(a\cap\hat{b})}/{n}\\\ \end{array}$ (40)
where $\hat{b}$ is a two-chain such that $\partial\hat{b}=nb$ with
$n\in{\mathbb{Z}}$. Such $\hat{b}$ and $n$ exist because $[b]$ is a torsion
class in $H_{1}(M,\mathbb{Z})$. The number $\\#(a\cap\hat{b})$ counts the intersection
points with signs determined by the orientation. The set of orbits is the set
of $\text{Spin}^{c}$ structures on $M$, with the action of ${\mathbb{Z}}_{2}$
by conjugation.
Although the relation (36) is true for any closed oriented three-manifold $M$,
an explicit $q$-series expression for $\hat{Z}$ is yet to be discovered
for a general three-manifold.
In the following subsection, we will review $\hat{Z}^{SU(2)}$ for the
plumbed manifolds. We begin with the WRT invariant for a plumbing graph of
the type shown in Figure 1, discussed in section 2.1.2, and then analytically
continue ${\mathbbm{q}}\rightarrow q$ to get the $\hat{Z}^{SU(2)}$-invariant.
We will see that the analytic continuation procedure is doable only for
negative definite plumbed manifolds, i.e., when the signature of the linking
matrix $B$ is $\sigma=b_{+}-b_{-}=-L$ (in principle, this procedure is also
doable when $B$ is negative on a certain subspace of $\mathbb{Z}^{L}$).
Moreover, as explained in Gukov:2019mnk , the $\text{Spin}^{c}$-structure in
the case of a plumbed 3-manifold with $b_{1}(M)=0$ is given by
$H_{1}(M,\mathbb{Z})\cong\text{Coker}B=\mathbb{Z}^{L}/B\mathbb{Z}^{L}$.
### 3.1 $\hat{Z}_{b}^{SU(2)}(q)$
The WRT invariant $\tau_{k}^{SU(2)}[M(\Gamma);{\mathbbm{q}}]$ (normalized
such that $\tau_{k}^{\mathcal{G}}[S^{3};{\mathbbm{q}}]=1$, with $k$ the bare
level for $SU(2)$ Chern-Simons theory) for the plumbed three-manifold
$M(\Gamma)$ (21), obtained from surgery of the framed link
${\mathcal{L}}(\Gamma)$ in $S^{3}$, is
$\displaystyle\tau_{k}^{SU(2)}[M(\Gamma);{\mathbbm{q}}]$ $\displaystyle=$
$\displaystyle\frac{F^{SU(2)}[{\mathcal{L}}(\Gamma);{\mathbbm{q}}]}{F^{SU(2)}[{\mathcal{L}}(+1\bullet);{\mathbbm{q}}]^{b_{+}}F^{SU(2)}[{\mathcal{L}}(-1\bullet);{\mathbbm{q}}]^{b_{-}}}$
$\displaystyle{\rm where}~{}F^{SU(2)}[{\mathcal{L}}(\Gamma);{\mathbbm{q}}]$
$\displaystyle=$
$\displaystyle\sum_{{n}\in\\{1,\ldots,k+1\\}^{L}}J[{\mathcal{L}}(\Gamma)]_{n_{1},\ldots,n_{L}}\prod_{v=1}^{L}\frac{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}{{\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2}}.$
(41)
Note $b_{\pm}$ are the number of positive and negative eigenvalues of a
linking matrix $B$ respectively and the colored Jones polynomial of link
$\mathcal{L}(\Gamma)$ (12) in variable ${\mathbbm{q}}=\exp({2i\pi/(k+2)})$ is
$\displaystyle J[{\mathcal{L}}(\Gamma)]_{n_{1},\ldots,n_{L}}$ $\displaystyle=$
$\displaystyle\frac{2i}{{\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2}}\prod_{v\;\in\;\text{Vertices}\;\cong\;\\{1,\ldots,L\\}}{\mathbbm{q}}^{\frac{f_{v}(n_{v}^{2}-1)}{4}}\,\times$
(42)
$\displaystyle\left(\frac{2i}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-1}\prod_{(v_{1},v_{2})\;\in\;\text{Edges}}\frac{{\mathbbm{q}}^{n_{v_{1}}n_{v_{2}}/2}-{\mathbbm{q}}^{-n_{v_{1}}n_{v_{2}}/2}}{2i}.$
Using the following Gauss sum reciprocity formula
$\sum_{n\;\in\;{\mathbb{Z}}^{L}/2k{\mathbb{Z}}^{L}}\exp\left(\frac{\pi
i}{2k}(n,Bn)+\frac{\pi i}{k}(\ell,n)\right)=\\\ \frac{e^{\frac{\pi
i\sigma}{4}}\,(2k)^{L/2}}{|\det
B|^{1/2}}\sum_{a\;\in\;{\mathbb{Z}}^{L}/B{\mathbb{Z}}^{L}}\exp\left(-2\pi
ik\left(a+\frac{\ell}{2k},B^{-1}\left(a+\frac{\ell}{2k}\right)\right)\right)$
(43)
where $\ell\in{\mathbb{Z}}^{L}$, $(\cdot,\cdot)$ is the standard pairing on
${\mathbb{Z}}^{L}$ and $\sigma=b_{+}-b_{-}$ is the signature of the linking
matrix $B$, we can sum
$F^{SU(2)}[{\mathcal{L}}(\pm
1\bullet);{\mathbbm{q}}]=\sum_{n=1}^{k+1}{\mathbbm{q}}^{\pm\frac{n^{2}-1}{4}}\,\left(\frac{{\mathbbm{q}}^{n/2}-{\mathbbm{q}}^{-n/2}}{{\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2}}\right)^{2}=\frac{[2(k+2)]^{1/2}\,e^{\pm\frac{\pi
i}{4}}\,{\mathbbm{q}}^{\mp\frac{3}{4}}}{{\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2}},$
(44)
for the unknot with framing $\pm 1$. Incorporating the above equation and the
fact that $L-|\text{Edges}|=1$ for the framed link ${\mathcal{L}}(\Gamma)$,
the WRT invariant simplifies to
$\tau_{k}^{SU(2)}[M(\Gamma);{\mathbbm{q}}]=\frac{e^{-\frac{\pi
i\sigma}{4}}\,{\mathbbm{q}}^{\frac{3\sigma}{4}}}{2\,(2(k+2))^{L/2}\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\times\\\
{\sum_{{n}\in{\mathbb{Z}}^{L}/2(k+2){\mathbb{Z}}^{L}}}^{\prime}\prod_{v\;\in\;\text{Vertices}}{\mathbbm{q}}^{\frac{f_{v}(n_{v}^{2}-1)}{4}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}\times\\\
\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}\frac{{\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}-{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2}}{2}$
(45)
where we used invariance of the summand under $n_{v}\rightarrow-n_{v}$. The
prime ′ in the sum means that the singular values $n_{v}=0,\,k+2$ are omitted.
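The reciprocity formula (43) is easy to test numerically. The following minimal sketch (our own check, not from the references) takes the simplest case of a $1\times 1$ linking matrix $B=(-2)$ with $k=3$ and $\ell=(1)$, for which the coset representatives of ${\mathbb{Z}}/B{\mathbb{Z}}$ are $a\in\\{0,1\\}$; both sides evaluate to $3-i\sqrt{3}$.

```python
import numpy as np

# Check of the Gauss sum reciprocity formula (43) in the simplest case:
# L = 1, B = (-2) (so sigma = -1), k = 3, ell = (1).
k, B, ell, sigma = 3, -2, 1, -1

lhs = sum(np.exp(1j*np.pi/(2*k)*B*n**2 + 1j*np.pi/k*ell*n)
          for n in range(2*k))

rhs = (np.exp(1j*np.pi*sigma/4) * (2*k)**0.5 / abs(B)**0.5
       * sum(np.exp(-2j*np.pi*k*(a + ell/(2*k))**2 / B)
             for a in range(abs(B))))

print(lhs, rhs)  # both print (3 - 1.732...j), i.e. 3 - i*sqrt(3)
```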
Let us focus on the following factor for general plumbed graph:
$\displaystyle\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}({\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}-{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2})$
$\displaystyle=$ $\displaystyle\sum_{{p}\in\\{\pm
1\\}^{\text{Edges}}}\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}p_{(v^{\prime},v^{\prime\prime})}$
$\displaystyle{\mathbbm{q}}^{p_{(v^{\prime},v^{\prime\prime})}n_{v^{\prime}}n_{v^{\prime\prime}}/2}.$
(46)
Note that, under $n_{v}\rightarrow-n_{v}$ on any vertex $v$ of degree ${\rm
deg}(v)$, the factor with a given configuration of signs associated to edges
(i.e., $p\in\\{\pm 1\\}^{\text{Edges}}$) will transform into a term with a
different configuration times $(-1)^{\text{deg}(v)}$. For the class of graphs
$\Gamma$ (like Figure 1), a sequence of such transformations finally brings any
configuration to the one with all signs $+1$. Hence, the WRT invariant
(45) for these plumbed three-manifolds can be reduced to the form:
$\tau_{k}^{SU(2)}[M(\Gamma)]=\frac{e^{-\frac{\pi
i\sigma}{4}}\,{\mathbbm{q}}^{\frac{3\sigma-\sum_{v}f_{v}}{4}}}{2\,(2(k+2))^{L/2}\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\times\\\
{\sum_{{n}\in{\mathbb{Z}}^{L}/2(k+2){\mathbb{Z}}^{L}}}^{\prime}\;\;{\mathbbm{q}}^{\frac{(n,Bn)}{4}}\prod_{v\;\in\;\text{Vertices}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}.$
(47)
In the above expression, the points $0$ and $k+2$ are excluded from the
summation, but in the reciprocity formula (43) no point is excluded. So, to
apply the reciprocity formula we first have to regularize the sum. This is
achieved by introducing the following regularising parameters:
$\displaystyle\Delta_{v}\in{\mathbb{Z}}_{+}:\;\Delta_{v}$ $\displaystyle=$
$\displaystyle\text{deg}(v)-1\mod 2,\qquad\forall v\;\in\;\text{Vertices},$
(48) $\displaystyle\omega\in{\mathbb{C}}:$ $\displaystyle 0<|\omega|<1.$
so that the sum in eqn. (47) can be rewritten as the $\omega\rightarrow 1$ limit:
${\sum_{{n}\in{\mathbb{Z}}^{L}/2(k+2){\mathbb{Z}}^{L}}}^{\prime}\;\;{\mathbbm{q}}^{\frac{(n,Bn)}{4}}\prod_{v\;\in\;\text{Vertices}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}=\\\
\lim_{\omega\rightarrow
1}\frac{1}{2^{L}}\sum_{{n}\in{\mathbb{Z}}^{L}/2(k+2){\mathbb{Z}}^{L}}{\mathbbm{q}}^{\frac{(n,Bn)}{4}}F_{\omega}(x_{1},\ldots,x_{L})|_{x_{v}={\mathbbm{q}}^{n_{v}/2}},$
(49)
where
$\displaystyle F_{\omega}(x_{1},\ldots,x_{L})$ $\displaystyle=$
$\displaystyle\prod_{v\;\in\;\text{Vertices}}\left({x_{v}-1/x_{v}}\right)^{\Delta_{v}}\times\,$
(50)
$\displaystyle\left\\{\left(\frac{1}{x_{v}-\omega/x_{v}}\right)^{\text{deg}(v)-2+\Delta_{v}}+\left(\frac{1}{\omega
x_{v}-1/x_{v}}\right)^{\text{deg}(v)-2+\Delta_{v}}\right\\}$
Note that, we can perform a binomial expansion taking $(\omega/x_{v}^{2})$
small in the first term and $(\omega x_{v}^{2})$ small in the second term to
rewrite $F_{\omega}(x_{1},\ldots,x_{L})$ as a formal power series:
$F_{\omega}(x_{1},\ldots,x_{L})=\sum_{\ell\in
2{\mathbb{Z}}^{L}+\delta}F_{\omega}^{\ell}\prod_{v}x_{v}^{\ell_{v}}\qquad\in{\mathbb{Z}}[\omega][[x_{1}^{\pm
1},\ldots,x_{L}^{\pm 1}]]~{},$ (51)
where
$\delta\in{\mathbb{Z}}^{L}/2{\mathbb{Z}}^{L},~{}\delta_{v}\equiv\text{deg}(v)\mod
2$ and
$F_{\omega}^{\ell}=\sum_{m:\,\ell\in{\mathcal{I}}_{m}}N_{m,\ell}\,\omega^{m}\;\in{\mathbb{Z}}[\omega]$
(52)
with ${\mathcal{I}}_{m}$ being a finite set of elements from
${\mathbb{Z}}^{L}$. By definition, ${\rm lim}_{\omega\rightarrow
1}F_{\omega}^{\ell}$ does not depend on $\Delta\in{\mathbb{Z}}^{L}$ (48).
However, this $\omega\rightarrow 1$ limit in eqn. (50) restricts the
binomial expansion range of the first term to $x\rightarrow\infty$ and that
of the second term to $x\rightarrow 0$:
$F_{\omega\rightarrow 1}(x_{1},\ldots,x_{L})=\sum_{\ell\in
2{\mathbb{Z}}^{L}+\delta}F_{\omega\rightarrow
1}^{\ell}\prod_{v}x_{v}^{\ell_{v}}=\lim_{\omega\rightarrow
1}\prod_{v\,\in\,\text{Vertices}}\left\\{{\scriptsize\begin{array}[]{c}\text{Expansion}\\\
\text{as }x\rightarrow\infty\end{array}}\frac{1}{(x_{v}-\omega/x_{v})^{\text{deg}\,v-2}}+{\scriptsize\begin{array}[]{c}\text{Expansion}\\\
\text{as }x\rightarrow 0\end{array}}\frac{1}{(\omega
x_{v}-1/x_{v})^{\text{deg}\,v-2}}\right\\}~{}.$ (58)
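The per-vertex expansions entering (58) are easy to generate. The helper below is our own sketch (the function name and the truncation parameter `N` are ours): it returns the coefficients of the sum of the two binomial expansions of $(z-1/z)^{2-\text{deg}(v)}$. For $\text{deg}(v)\leq 2$ both expansions coincide with the Laurent polynomial itself, so every coefficient is doubled; this doubling is compensated by the $2^{-L}$ prefactor in the definition (59) below.

```python
from math import comb

def sym_coeffs(deg, N):
    """Coefficients of the sum of the two expansions in (58) for one
    vertex of degree `deg`, i.e. of (z - 1/z)^(2 - deg) expanded both
    as z -> infinity and as z -> 0, truncated to N terms per series.
    Returns a dict {exponent l: integer coefficient}."""
    m = 2 - deg
    c = {}
    if m >= 0:
        # A genuine Laurent polynomial: both expansions agree, so the
        # binomial coefficients are simply doubled.
        for j in range(m + 1):
            c[m - 2*j] = 2 * (-1)**j * comb(m, j)
    else:
        m = -m
        for j in range(N):
            # expansion at infinity: z^(-m) (1 - z^(-2))^(-m)
            c[-m - 2*j] = c.get(-m - 2*j, 0) + comb(m + j - 1, j)
            # expansion at zero: (-1)^m z^m (1 - z^2)^(-m)
            c[m + 2*j] = c.get(m + 2*j, 0) + (-1)**m * comb(m + j - 1, j)
    return c

print(sym_coeffs(3, 4))  # {-1: 1, -3: 1, -5: 1, -7: 1, 1: -1, 3: -1, 5: -1, 7: -1}
```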
Now let us assume that the quadratic form
$B:{\mathbb{Z}}^{L}\times{\mathbb{Z}}^{L}\rightarrow{\mathbb{Z}}$ is negative
definite i.e., $\sigma=-L$. Then we can define the following series in $q$
which is convergent for $|q|<1$:
$\hat{Z}_{b}^{SU(2)}[M(\Gamma);q]\stackrel{{\scriptstyle\text{Def}}}{{=\joinrel=}}2^{-L}q^{-\frac{3L+\sum_{v}f_{v}}{4}}\sum_{\ell\in
2B{\mathbb{Z}}^{L}+b}F^{\ell}_{\omega\rightarrow
1}\,q^{-\frac{(\ell,B^{-1}\ell)}{4}}\;\in\;2^{-c}q^{\Delta_{b}}{\mathbb{Z}}[[q]]$
(59)
where $c\in{\mathbb{Z}}_{+},c\leq L$ and
$\displaystyle b$ $\displaystyle\in$
$\displaystyle(2{\mathbb{Z}}^{L}+\delta)/2B{\mathbb{Z}}^{L}\,/{\mathbb{Z}}_{2}\cong(2\text{Coker}\,B+\delta)\,/{\mathbb{Z}}_{2}\stackrel{{\scriptstyle\text{Set}}}{{\cong}}H_{1}(M_{3},{\mathbb{Z}})\,/{\mathbb{Z}}_{2},$
(60) $\displaystyle\Delta_{b}$ $\displaystyle=$
$\displaystyle-\frac{3L+\sum_{v}f_{v}}{4}-\max_{\ell\in
2B{\mathbb{Z}}^{L}+b}\frac{(\ell,B^{-1}\ell)}{4}\,\in{\mathbb{Q}}$ (61)
where the ${\mathbb{Z}}_{2}$ action takes $b\rightarrow-b$ and is a symmetry
of (59).
Using relation (49) and applying the Gauss reciprocity formula (43), we arrive
at the following expression for the WRT invariant:
$\tau_{k}^{SU(2)}[M(\Gamma);{\mathbbm{q}}]=\\\ \\\
~{}~{}~{}~{}~{}\frac{e^{-\frac{\pi
iL}{4}}\,{\mathbbm{q}}^{-\frac{3L+\sum_{v}f_{v}}{4}}}{2\,(2(k+2))^{L/2}\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\,\lim_{\omega\rightarrow
1}\sum_{{n}\in{\mathbb{Z}}^{L}/2(k+2){\mathbb{Z}}^{L}}{\mathbbm{q}}^{\frac{(n,Bn)}{4}}F_{\omega}(x_{1},\ldots,x_{L})|_{x_{v}={\mathbbm{q}}^{n_{v}/2}}=\\\
\\\
~{}~{}\frac{2^{-L}{\mathbbm{q}}^{-\frac{3L+\sum_{v}f_{v}}{4}}}{2\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})\,|\det
B|^{1/2}}\sum_{\scriptsize\begin{array}[]{c}a\in\mathrm{Coker}\,B\\\ b\in
2\mathrm{Coker}\,B+\delta\end{array}}e^{-2\pi i(a,B^{-1}b)}e^{-2\pi
i(k+2)(a,B^{-1}a)}\times\\\ \lim_{\omega\rightarrow 1}\sum_{\ell\in
2B{\mathbb{Z}}^{L}+b}F^{\ell}_{\omega}\,{\mathbbm{q}}^{-\frac{(\ell,B^{-1}\ell)}{4}}.$
(62)
Assuming that the limit
$\lim_{q\rightarrow{\mathbbm{q}}}\hat{Z}_{b}^{SU(2)}(q)$ exists, where $q$
approaches a primitive $(k+2)$-th root of unity from inside the unit disc
$|q|<1$, we expect
$\lim_{\omega\rightarrow 1}\sum_{\ell\in
2B{\mathbb{Z}}^{L}+b}F^{\ell}_{\omega}\,{\mathbbm{q}}^{-\frac{(\ell,B^{-1}\ell)}{4}}=\lim_{q\rightarrow{\mathbbm{q}}}\sum_{\ell\in
2B{\mathbb{Z}}^{L}+b}F^{\ell}_{\omega\rightarrow
1}\,q^{-\frac{(\ell,B^{-1}\ell)}{4}}.$ (63)
Thus we obtain the GPPV conjecture form:
$\tau_{k}^{SU(2)}[M(\Gamma),{\mathbbm{q}}]=\frac{1}{2\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})\,|\det
B|^{1/2}}\,\times\\\ \sum_{a\in\mathrm{Coker}\,B}e^{-2\pi
i(k+2)(a,B^{-1}a)}\sum_{b\in 2\mathrm{Coker}\,B+\delta}e^{-2\pi
i(a,B^{-1}b)}\lim_{q\rightarrow{\mathbbm{q}}}\hat{Z}_{b}^{SU(2)}[M(\Gamma);q].$
(64)
There is also an equivalent contour integral form for the homological
blocks (59):
$\hat{Z}_{b}^{SU(2)}[M(\Gamma);q]=q^{-\frac{3L+\sum_{v}f_{v}}{4}}\cdot\text{v.p.}\int\limits_{|z_{v}|=1}\prod_{v\;\in\;\text{Vertices}}\frac{dz_{v}}{2\pi
iz_{v}}\,\left({z_{v}-1/z_{v}}\right)^{2-\text{deg}(v)}\cdot\Theta^{-B}_{b}(z),$
(65)
where $\Theta^{-B}_{b}(x)$ is the theta function of the lattice corresponding
to minus the linking form $B$:
$\Theta^{-B}_{b}(x)=\sum_{\ell\in
2B{\mathbb{Z}}^{L}+b}q^{-\frac{(\ell,B^{-1}\ell)}{4}}\prod_{i=1}^{L}x_{i}^{\ell_{i}},$
(66)
and “v.p.” refers to the principal value integral (i.e., take the half-sum of
the contours $|z_{v}|=1\pm\epsilon$). This prescription corresponds to the
regularization by $\omega$ made in eqns. (49)–(50).
Thus we can obtain an explicit $SU(2)$ $q$-series for any negative definite
plumbed three-manifold. For completeness, we present the $q$-series for some
examples.
### 3.2 Examples
$\bullet$ The Poincaré homology sphere is a well-studied three-manifold,
corresponding to the negative definite $E_{8}$ plumbing graph with all framings
$-2$ (graph not reproduced here):
(67)
As $H_{1}(M,{\mathbb{Z}})=0$, we obtain only a single homological block
$\hat{Z}_{b_{1}}$. Solving eqns. (58), (59), we get
$\hat{Z}_{b_{1}}^{SU(2)}=q^{-3/2}(1-q-q^{3}-q^{7}+q^{8}+q^{14}+q^{20}+q^{29}-q^{31}-q^{42}+\cdots).$
(68)
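The definition (59) can be assembled directly, reusing `sym_coeffs` from the previous sketch. The following is our own illustrative computation for the Poincaré sphere; the vertex labeling of the $E_{8}$ graph and the truncation parameter are our choices, the integer inversion of $B$ relies on $|\det B|=1$ here, and the normalization follows (59) literally.

```python
import itertools
from fractions import Fraction
import numpy as np

# Negative definite E8 plumbing graph of the Poincaré sphere: a chain
# 0-1-2-3-4-5-6 with vertex 7 attached to vertex 4; all framings -2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]
L, f = 8, [-2] * 8
B = np.diag(f).astype(int)
for v1, v2 in edges:
    B[v1, v2] = B[v2, v1] = 1
deg = [sum(v in e for e in edges) for v in range(L)]
# |det B| = 1 for E8, so B^{-1} is an integer matrix.
Binv = np.rint(np.linalg.inv(B)).astype(int)

N = 16  # truncation of the expansion at the trivalent vertex
coeffs = [sym_coeffs(deg[v], N) for v in range(L)]

zhat = {}  # q-exponent -> coefficient, per eq. (59) with prefactor 2^-L
for ell in itertools.product(*coeffs):
    F = Fraction(1, 2**L)
    for v in range(L):
        F *= coeffs[v][ell[v]]
    lv = np.array(ell)
    expo = Fraction(-(3*L + sum(f)), 4) - Fraction(int(lv @ Binv @ lv), 4)
    zhat[expo] = zhat.get(expo, 0) + F

for e in sorted(zhat):
    if zhat[e] and e < 25:  # only powers well below the truncation are reliable
        print(e, zhat[e])
# expected to reproduce (68): q^(-3/2) (1 - q - q^3 - q^7 + q^8 + ...)
```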
$\bullet$ The next familiar example with $H_{1}(M,{\mathbb{Z}})=0$ is the
Brieskorn homology sphere. A particular example of this class is
$\Sigma(2,3,7)$, described by two Kirby-equivalent plumbing graphs (figures not
reproduced here).
(69)
The homological block turns out to be
$\hat{Z}_{b_{1}}^{SU(2)}=q^{1/2}(1-q-q^{5}+q^{10}-q^{11}+q^{18}+q^{30}-q^{41}+q^{43}-q^{56}-q^{76}\cdots).$
(70)
$\bullet$ For a three-manifold with non-trivial
$H_{1}(M,{\mathbb{Z}})={\mathbb{Z}}_{3}$, described by two Kirby-equivalent
plumbing graphs (figures not reproduced here),
(71)
the three homological blocks are
$\hat{Z}^{SU(2)}=\left(\begin{array}[]{c}1-q+q^{6}-q^{11}+q^{13}-q^{20}+q^{35}+O\left(q^{41}\right)\\\
q^{5/3}\left(-1+q^{3}-q^{21}+q^{30}+O\left(q^{41}\right)\right)\\\
q^{5/3}\left(-1+q^{3}-q^{21}+q^{30}+O\left(q^{41}\right)\right)\\\
\end{array}\right),$ (72)
where two of them are equal.
Our focus is to obtain explicit $q$-series for the $SO(3)$ and $OSp(1|2)$
groups. Using the relations (13) and (17) between the $SU(2)$ and $SO(3)$, and
the $SU(2)$ and $OSp(1|2)$ link invariants, we will investigate the necessary
steps starting from the WRT invariants for $SO(3)$ and $OSp(1|2)$ eventually
leading to the $\hat{Z}$-invariant. This will be the theme of the following
section.
## 4 $\hat{Z}$ for $SO(3)$ and $OSp(1|2)$
Our aim is to derive the $\hat{Z}$-invariant for $SO(3)$ and $OSp(1|2)$
groups. We will first look at the WRT invariants
$\tau_{K}^{SO(3)}[M(\Gamma);Q]$ for plumbed three-manifolds written in terms
of colored Jones invariants of framed links ${\mathcal{L}}[\Gamma]$ in the
following subsection and then discuss $OSp(1|2)$ $\hat{Z}$ in the subsequent
section.
### 4.1 $SO(3)$ WRT invariant and $\hat{Z}^{SO(3)}$ invariant
Recall that the framed link invariants are written in variable ${\mathbbm{q}}$
which is dependent on Chern-Simons coupling and the rank of the gauge group
$\mathcal{G}$. For $SO(3)$ Chern-Simons with coupling $K\in 2\mathbb{Z}$, the
variable $Q=e^{\frac{2\pi i}{K+1}}$. Hence $F^{SO(3)}[\mathcal{L}(\Gamma);Q]$
in WRT $\tau_{K}^{SO(3)}[M(\Gamma);Q]$ is
$F^{SO(3)}[\mathcal{L}(\Gamma);Q]=\\\
\sum_{n_{1},n_{2},\dots,n_{L}\in\\{0,1,\dots,K\\}}V_{n_{1},n_{2},\dots,n_{L}}^{SO(3)}(\mathcal{L}(\Gamma);Q)\prod_{v=1}^{L}V_{n_{v}}^{SO(3)}(\bigcirc;Q)~{}=\\\
\\\
\sum_{n_{1},n_{2},\ldots,n_{L}\in\\{1,3,\dots,2K+1\\}}J_{n_{1},n_{2},\dots,n_{L}}^{SU(2)}\left(\mathcal{L}(\Gamma);{\mathbbm{q}}=e^{\frac{2\pi
i}{2K+2}}\right)\prod_{v=1}^{L}\frac{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}{{\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2}}\bigg{|}_{{\mathbbm{q}}^{2}\rightarrow
Q}~{},$ (73)
where we have used the relation (13) to write the $SO(3)$ link invariants in
terms of the colored Jones invariants. Notice that the summation is over only
odd integers; hence the WRT invariant for $SO(3)$ is different from the WRT
invariant for the $SU(2)$ group. Further, the highest integrable representation
in the summation indicates that the Chern-Simons coupling for the $SU(2)$ group
is $2K+2$. After performing the summation, we can substitute
${\mathbbm{q}}=Q^{1/2}$ (13) to obtain the $SO(3)$ WRT invariant. We need to
modify the Gauss sum reciprocity formula to incorporate the summation over odd
integers in $F^{SO(3)}[\mathcal{L}(\Gamma);Q]$.
Using the following Gauss sum reciprocity formula
$\sum_{n\;\in\;{\mathbb{Z}}^{L}/k{\mathbb{Z}}^{L}}\exp\left(\frac{2\pi
i}{k}(n,Bn)+\frac{2\pi i}{k}(\ell,n)\right)=\\\ \frac{e^{\frac{\pi
i\sigma}{4}}\,(k/2)^{L/2}}{|\det
B|^{1/2}}\sum_{a\;\in\;{\mathbb{Z}}^{L}/2B{\mathbb{Z}}^{L}}\exp\left(\frac{-\pi
ik}{2}\left(a+\frac{\ell}{k},B^{-1}\left(a+\frac{\ell}{k}\right)\right)\right)~{},$
(74)
for $k=2K+2$, we can obtain the summation over odd integers by replacing
$n\longrightarrow\frac{n-1}{2}$:
$\sum_{n_{1},n_{2},\dots,n_{L}\;\in\;\\{1,3,\dots,4K+3\\}}{\mathbbm{q}}^{\frac{(n,Bn)}{4}+\frac{(n,d)}{2}}=\frac{e^{\frac{\pi
i\sigma}{4}}\,(K+1)^{L/2}}{|\det
B|^{1/2}}{\mathbbm{q}}^{-\frac{(d,B^{-1}d)}{4}}\times\\\ \\\
\sum_{a\;\in\;{\mathbb{Z}}^{L}/2B{\mathbb{Z}}^{L}}\exp\left[-\pi
i(K+1)(a,B^{-1}a)\right]\exp\left[-\pi i(a,B^{-1}(d+BI))\right],$ (75)
where $d=\ell-BI$ with $I$ denoting $L$ component vector with entry $1$ on all
the components. That is, the transpose of the vector $I$ is
$I^{T}=[1,1,\ldots,1]~{}.$ (76)
For the unknot with framing $\pm 1$, the
$F^{SO(3)}[{\mathcal{L}}(\pm 1\bullet);Q={\mathbbm{q}}^{2}]$ involving
summation over odd integers simplifies to
$F^{SO(3)}[{\mathcal{L}}(\pm 1\bullet);Q={\mathbbm{q}}^{2}]=\frac{\sqrt{K+1}\;e^{\pm\pi
i/4}\;{\mathbbm{q}}^{\mp
3/4}}{{\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2}}\underbrace{(1+e^{\pi
iK})}_{2}~{},$ (77)
as the coupling $K\in 2\mathbb{Z}$ for the $SO(3)$ Chern-Simons theory. Hence,
the WRT invariant takes the following form:
$\tau_{K}^{SO(3)}[M(\Gamma);Q={\mathbbm{q}}^{2}]=\frac{e^{-\frac{\pi
i\sigma}{4}}\,{\mathbbm{q}}^{\frac{3\sigma}{4}}}{2^{L}(K+1)^{L/2}\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\times\\\
{\sum_{{n}\in\\{1,3,\ldots,2K+1\\}^{L}}}\prod_{v\;\in\;\text{Vertices}}{\mathbbm{q}}^{\frac{f_{v}(n_{v}^{2}-1)}{4}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}\times\\\
\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}\Big{(}{\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}-{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2}\Big{)}.$
(78)
In the above equation, the terms involving the edges of the graph $\Gamma$
$\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}\Big{(}{\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}-{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2}\Big{)}=2^{L-1}\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}\frac{\Big{(}{\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}-{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2}\Big{)}}{2},$
can also be rewritten as
$\prod_{(v^{\prime},v^{\prime\prime})\in\text{Edges}}({\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}-{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2})=\sum_{p\in\\{\pm
1\\}^{\text{Edges}}}\prod_{(v^{\prime},v^{\prime\prime})\in\text{Edges}}p_{(v^{\prime},v^{\prime\prime})}{\mathbbm{q}}^{p_{(v^{\prime},v^{\prime\prime})}n_{v^{\prime}}n_{v^{\prime\prime}}/2}~{}.$
Here again, if we make a change of variable $n_{v}\longrightarrow-n_{v}$ at
any vertex, a term in the sum with a given configuration of signs associated
to edges (that is, $p\in\\{\pm 1\\}^{\text{Edges}}$) will transform into a term
with a different configuration times $(-1)^{\text{deg}(v)}$. However, for
these plumbing graphs $\Gamma$, any such configuration can be brought
to the configuration with all signs $+1$. Incorporating this fact, the WRT
invariant (78) simplifies to
$\tau_{K}^{SO(3)}[M(\Gamma);Q={\mathbbm{q}}^{2}]=\frac{e^{-\frac{\pi
i\sigma}{4}}\,{\mathbbm{q}}^{\frac{3\sigma-\sum_{v}f_{v}}{4}}}{2\,(K+1)^{L/2}\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\times\\\
{\sum_{{n}\in\\{1,3,\ldots,2K+1\\}^{L}}}\;\;{\mathbbm{q}}^{\frac{(n,Bn)}{4}}\prod_{v\;\in\;\text{Vertices}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}.$
(79)
Further, we double the range of summation so as to use the reciprocity
formula (75):
$\tau_{K}^{SO(3)}[M(\Gamma);Q={\mathbbm{q}}^{2}]=\frac{e^{-\frac{\pi
i\sigma}{4}}\,{\mathbbm{q}}^{\frac{3\sigma-\sum_{v}f_{v}}{4}}}{4\,(K+1)^{L/2}\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})}\times\\\
{\sum_{{n}\in\\{1,3,\ldots,4K+3\\}^{L}}}\;\;{\mathbbm{q}}^{\frac{(n,Bn)}{4}}\prod_{v\;\in\;\text{Vertices}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}-{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}.$
(80)
The steps discussed in the $SU(2)$ context to extract $\hat{Z}$ can be
similarly followed for $SO(3)$. This procedure leads to
$\tau_{K}^{SO(3)}[M(\Gamma);Q={\mathbbm{q}}^{2}]=\frac{1}{2\,({\mathbbm{q}}^{1/2}-{\mathbbm{q}}^{-1/2})\,|\det
B|^{1/2}}\,\sum_{a\in\mathrm{Coker}\,B}e^{-\pi i(K+1)(a,B^{-1}a)}\\\ \\\
\sum_{b\in 2\mathrm{Coker}\,B+\delta}e^{-\pi
i\big{(}a,B^{-1}(b{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}+BI})\big{)}}\lim_{q\rightarrow{\mathbbm{q}}}\hat{Z}^{SO(3)}_{b}[M(\Gamma);q]~{}.$
(81)
We observe that the $SO(3)$ WRT invariant is different from the $SU(2)$
invariant due to the factor highlighted in blue in the summand, whereas
$\hat{Z}^{SO(3)}_{b}[M(\Gamma);q]$ is exactly the same as the $SU(2)$
$q$-series. Even though $SO(3)\equiv SU(2)/\mathbb{Z}_{2}$, it is surprising
to see that the factor group $SO(3)$ shares the same $\hat{Z}$ as the
parent group $SU(2)$. The case of $\hat{Z}^{SO(3)}$ was also considered in
Costantino:2021yfd , which took a different route via the refined WRT
invariant; the result there is consistent with ours.
In the following subsection, we will extract $\hat{Z}$ from the WRT invariant
$\tau_{\hat{K}}^{OSp(1|2)}[M(\Gamma);\hat{Q}]$ for $OSp(1|2)$ supergroup. We
will see that the $OSp(1|2)$ $q$-series are related to
$\hat{Z}^{SU(2)}[M(\Gamma);q]$.
### 4.2 $OSp(1|2)$ WRT and $\hat{Z}^{OSp(1|2)}$ invariant
Using the relation between $OSp(1|2)$ and $SU(2)$ link invariants (17), the
WRT invariant can be written for plumbed manifolds $M(\Gamma)$ as
$\tau_{\hat{K}}^{OSp(1|2)}[M(\Gamma);\hat{Q}={\mathbbm{q}}]=\frac{e^{-\frac{\pi
i\sigma}{4}}\,{\mathbbm{q}}^{\frac{3\sigma}{4}}}{(2\hat{K}+3)^{L/2}\,({\mathbbm{q}}^{1/2}+{\mathbbm{q}}^{-1/2})}\times\\\
{\sum_{{n_{1},n_{2},\ldots,n_{L}}\in\\{1,3,\ldots,2\hat{K}+1\\}}}\;\;\prod_{v\;\in\;\text{Vertices}}{\mathbbm{q}}^{\frac{f_{v}(n_{v}^{2}-1)}{4}}\,\left(\frac{1}{{\mathbbm{q}}^{n_{v}/2}+{\mathbbm{q}}^{-n_{v}/2}}\right)^{\text{deg}(v)-2}\times\\\
\prod_{(v^{\prime},v^{\prime\prime})\;\in\;\text{Edges}}\Big{(}{\mathbbm{q}}^{n_{v^{\prime}}n_{v^{\prime\prime}}/2}+{\mathbbm{q}}^{-n_{v^{\prime}}n_{v^{\prime\prime}}/2}\Big{)}\Big{|}_{{\mathbbm{q}}=\hat{Q}}~{}.$
(82)
Here again we use the Gauss reciprocity formula (75), as the summation is over
odd integers, to work out the steps leading to
$\hat{Z}^{OSp(1|2)}[M(\Gamma);q]$. Note that the highest integrable
representation is $2\hat{K}+1$, which fixes ${\mathbbm{q}}$ as a
$(2\hat{K}+2)$-th root of unity. However, to compare the result with the
$OSp(1|2)$ WRT invariant, we have to replace $\hat{K}+1\rightarrow
2\hat{K}+3$, which is equivalent to setting ${\mathbbm{q}}=\hat{Q}$.
Following steps similar to those performed for $SU(2)$, we find the following
expression for the $OSp(1|2)$ WRT invariant:
$\frac{1}{2\,({\mathbbm{q}}^{1/2}+{\mathbbm{q}}^{-1/2})\,|\det
B|^{1/2}}\,\sum_{a\in\mathrm{Coker}\,B}e^{-\pi
i(2\hat{K}+3)(a,B^{-1}a)}\times\\\ \sum_{b\in
2\mathrm{Coker}\,B+\delta}e^{-\pi
i\big{(}a,B^{-1}(b{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}+BI})\big{)}}\lim_{q\rightarrow\hat{Q}}\hat{Z}_{b}^{OSp(1|2)}[M(\Gamma);q],$
(83)
where $I$ is again the column vector (76) and
$\hat{Z}_{b}^{OSp(1|2)}[M(\Gamma);q]$ is given by the following algebraic
expression:
$\hat{Z}_{b}^{OSp(1|2)}[M(\Gamma);q]\;\;=\;\;2^{-L}q^{-\frac{3L+\sum_{v}f_{v}}{4}}\sum_{d\;\in\;2B{\mathbb{Z}}^{L}+b}F^{d}_{1}\,q^{-\frac{(d,B^{-1}d)}{4}},$
(84)
where the coefficients $F_{1}^{d}$ are obtained from the following relation:
$\sum_{d\;\in\;2{\mathbb{Z}}^{L}+\delta}F_{1}^{d}\prod_{v}x_{v}^{d_{v}}=\\\
\prod_{v\,\in\,\text{Vertices}}\left\\{{\scriptsize\begin{array}[]{c}\text{Expansion}\\\
\text{at }x\rightarrow
0\end{array}}\frac{1}{(x_{v}+1/x_{v})^{\text{deg}\,v-2}}+{\scriptsize\begin{array}[]{c}\text{Expansion}\\\
\text{at
}x\rightarrow\infty\end{array}}\frac{1}{(x_{v}+1/x_{v})^{\text{deg}\,v-2}}\right\\}.$
(85)
Equivalently, $\hat{Z}^{OSp(1|2)}[M(\Gamma);q]$ (84) can also be represented as
the following contour integral:
$\hat{Z}_{b}^{OSp(1|2)}[M(\Gamma);q]=q^{-\frac{3L+\sum_{v}f_{v}}{4}}\cdot\text{v.p.}\int\limits_{|z_{v}|=1}\prod_{v\;\in\;\text{Vertices}}\frac{dz_{v}}{2\pi
iz_{v}}\,\left({z_{v}+1/z_{v}}\right)^{2-\text{deg}(v)}\cdot\Theta^{-B}_{b}(z)~{}.$
(86)
Here $\Theta^{-B}_{b}(x)$ is the theta function of the lattice corresponding
to minus the linking form $B$:
$\Theta^{-B}_{b}(x)=\sum_{d\;\in\;2B{\mathbb{Z}}^{L}+b}q^{-\frac{(d,B^{-1}d)}{4}}\prod_{i=1}^{L}x_{i}^{d_{i}}$
(87)
and “v.p.” again means that we take the principal value integral (i.e., the
half-sum of the contours $|z_{v}|=1\pm\epsilon$). Comparing eqns. (84), (85)
with the $SU(2)$ expressions (58), (59), we can see that the $\hat{Z}$ for
$OSp(1|2)$ is different from the $SU(2)$ $q$-series. We will now present
explicit $q$-series for some examples.
#### 4.2.1 Examples
$\bullet$ For the Poincaré homology sphere (67), we find the following
$OSp(1|2)$ $q$-series:
$\hat{Z}_{b_{1}}^{OSp(1|2)}=q^{-3/2}(1+q+q^{3}+q^{7}+q^{8}+q^{14}+q^{20}-q^{29}+q^{31}-q^{42}-q^{52}+\cdots).$
(88)
$\bullet$ In the case of the Brieskorn homology sphere (69), the $OSp(1|2)$
$q$-series is
$\hat{Z}_{b_{1}}^{OSp(1|2)}=q^{1/2}(1+q+q^{5}+q^{10}+q^{11}+q^{18}+q^{30}+q^{41}-q^{43}-q^{56}-q^{76}\cdots).$
(89)
$\bullet$ For the case of the plumbing graph (71), the three homological blocks are
$\hat{Z}^{OSp(1|2)}=\left(\begin{array}[]{c}1+q+q^{6}+q^{11}-q^{13}-q^{20}-q^{35}+O\left(q^{41}\right)\\\
q^{5/3}\left(1+q^{3}-q^{21}-q^{30}+O\left(q^{41}\right)\right)\\\
q^{5/3}\left(1+q^{3}-q^{21}-q^{30}+O\left(q^{41}\right)\right)\end{array}\right).$
(90)
Comparing the $q$-series for $SU(2)$ and $OSp(1|2)$, we notice that
these two $q$-series are related by the simple change of variable
$q\longrightarrow-q$. This change of variable applies only to the series, not
to the overall coefficient outside the series.
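For instance, comparing the series (68) and (88) for the Poincaré sphere term by term (a minimal sketch; the exponent-coefficient lists are read off from the series quoted above):

```python
# SU(2) series (68) for the Poincaré sphere: exponent -> coefficient.
su2 = {0: 1, 1: -1, 3: -1, 7: -1, 8: 1, 14: 1, 20: 1, 29: 1, 31: -1, 42: -1}

# q -> -q on the series part flips the sign of every odd-power term.
osp = {n: a * (-1)**n for n, a in su2.items()}
print(osp)  # reproduces the signs of the OSp(1|2) series (88)
```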
$\bullet$ The Lens space $L(p,q)$ is a well-studied three-manifold. For
$L(-5,11)\sim L(-13,29)$, described by two Kirby-equivalent plumbing graphs
(figures not reproduced here), we obtain the five homological blocks
$\hat{Z}^{OSp(1|2)}=\left(\begin{array}[]{c}q^{1/10}\\\ q^{-1/10}\\\ 0\\\
q^{-1/10}\\\
q^{1/10}\end{array}\right)~{}~{}~{}~{}\text{as}~{}~{}H_{1}(M,\mathbb{Z})={\mathbb{Z}}_{5}.$
(91)
$\bullet$ For a plumbing graph (not reproduced here) with
$H_{1}(M,{\mathbb{Z}})=\mathbb{Z}_{13}$, the thirteen homological blocks are
$\hat{Z}^{OSp(1|2)}=\frac{1}{4}\left(\tiny\begin{array}[]{c}q^{-1/2}(2+2q+2q^{2}-2q^{4}-4q^{5}+6q^{10}-8q^{11}+4q^{13}+2q^{14}-4q^{15}+O\left(q^{18}\right))\\\
q^{5/26}(3+2q-2q^{2}-4q^{3}-2q^{7}+q^{8}+2q^{9}+q^{10}-2q^{12}-4q^{13}-2q^{16}+O\left(q^{18}\right))\\\
q^{5/26}(3+2q-2q^{2}-4q^{3}-2q^{7}+q^{8}+2q^{9}+q^{10}-2q^{12}-4q^{13}-2q^{16}+O\left(q^{18}\right))\\\
q^{7/26}(4-q-2q^{3}-2q^{4}-2q^{6}+3q^{7}-2q^{8}-2q^{10}+q^{11}+2q^{13}-4q^{14}+2q^{15}-4q^{16}+O\left(q^{18}\right))\\\
q^{7/26}(4-q-2q^{3}-2q^{4}-2q^{6}+3q^{7}-2q^{8}-2q^{10}+q^{11}+2q^{13}-4q^{14}+2q^{15}-4q^{16}+O\left(q^{18}\right))\\\
q^{-7/26}(3+3q^{2}-2q^{4}-2q^{5}+4q^{7}-2q^{8}+2q^{9}-2q^{10}-4q^{12}+4q^{13}-4q^{14}+2q^{15}+O\left(q^{18}\right))\\\
q^{-7/26}(3+3q^{2}-2q^{4}-2q^{5}+4q^{7}-2q^{8}+2q^{9}-2q^{10}-4q^{12}+4q^{13}-4q^{14}+2q^{15}+O\left(q^{18}\right))\\\
q^{-11/26}(1+2q+2q^{2}+4q^{3}+3q^{6}-2q^{7}-4q^{8}-2q^{9}+2q^{11}+2q^{13}-q^{14}+2q^{16}-2q^{17}+O\left(q^{18}\right))\\\
q^{-11/26}(1+2q+2q^{2}+4q^{3}+3q^{6}-2q^{7}-4q^{8}-2q^{9}+2q^{11}+2q^{13}-q^{14}+2q^{16}-2q^{17}+O\left(q^{18}\right))\\\
q^{-5/26}(2+2q^{2}+q^{3}+3q^{5}-2q^{6}-2q^{7}-4q^{8}-2q^{10}+2q^{11}-2q^{12}+2q^{13}+5q^{15}-2q^{16}+2q^{17}+O\left(q^{18}\right))\\\
q^{-5/26}(2+2q^{2}+q^{3}+3q^{5}-2q^{6}-2q^{7}-4q^{8}-2q^{10}+2q^{11}-2q^{12}+2q^{13}+5q^{15}-2q^{16}+2q^{17}+O\left(q^{18}\right))\\\
q^{-15/26}(1-2q-2q^{2}+q^{4}-2q^{6}-2q^{7}-2q^{8}-4q^{10}-2q^{12}+2q^{13}+2q^{15}+4q^{17}+O\left(q^{18}\right))\\\
q^{-15/26}(1-2q-2q^{2}+q^{4}-2q^{6}-2q^{7}-2q^{8}-4q^{10}-2q^{12}+2q^{13}+2q^{15}+4q^{17}+O\left(q^{18}\right))\end{array}\right)$
(92)
We have checked for many examples that under $q\rightarrow-q$ in the
$OSp(1|2)$ $q$-series (not affecting the overall coefficient), we obtain the
$SU(2)$ $q$-series.
## 5 Conclusions and future directions
Our goal was to investigate $\hat{Z}$ for the $SO(3)$ and $OSp(1|2)$ groups for
negative definite plumbed three-manifolds. The change of variable and color
indeed relates the invariants of framed links ${\mathcal{L}}[\Gamma]$ (13),
(17) for $SO(3)$ and $OSp(1|2)$ to the colored Jones invariants. Such a
relation allowed us to go through the steps of the GPPV conjecture to extract
$\hat{Z}$ from the WRT invariants. Interestingly, we observe that
$\hat{Z}^{SO(3)}$ is the same as $\hat{Z}^{SU(2)}$ even though the WRT
invariants are different. We know that $SU(2)/\mathbb{Z}_{2}\equiv SO(3)$, but
it is not at all obvious that the homological blocks should be the same for
both groups. It would be important to explore other factor groups and the
corresponding $\hat{Z}$ invariants.
For the odd orthosymplectic supergroup $OSp(1|2)$, we observe from our
computations for many negative definite plumbing graphs $\Gamma$:
$\hat{Z}^{OSp(1|2)}_{b}(\Gamma;q)=2^{-c}q^{\Delta_{b}}\left(\sum_{n}a_{n}q^{n}\right)$
whereas their $SU(2)$ q-series is
$\hat{Z}^{SU(2)}_{b}(\Gamma;q)=2^{-c}q^{\Delta_{b}}\left(\sum_{n}a_{n}(-q)^{n}\right)$
where $c\in\mathbb{Z}_{+}$, $\Delta_{b}\in\mathbb{Q}$.
The brane set-up in string theory for the $U(N)$ gauge group gives a natural
interpretation of these $q$-series as the partition function of the theory
$T^{\mathcal{G}}[M]$. In principle, there should be a natural generalisation
to the orthogonal $SO(N)$ and symplectic $Sp(2n)$ groups involving
orientifolds. It will be worth investigating such a construction to obtain
$\hat{Z}$ for the $SO(N)$ group and compare with our $SO(N=3)$ results. The
extension of $\hat{Z}$ to the two-variable series for link
complements Gukov:2019mnk is another direction to pursue. We hope to report on
these aspects in the future.
###### Acknowledgements.
PR is grateful to the ICTP senior associateship funding for the visit during
which this work was initiated with Pavel Putrov and Francesca Ferrari in summer
2019. Unfortunately, due to COVID, we could not pursue the collaboration in
online mode. We would like to thank Pavel Putrov and Francesca Ferrari for
clarifying the notations during the initial stages. SC would like to thank
Sunghyuk Park for useful comments and discussions during the String-Math 2022
conference held at the University of Warsaw, Poland. SC is grateful to Dmitry
Noshchenko for his comments on the manuscript. We would also like to thank
Vivek Kumar Singh for his clarification on the Mathematica program. SC would
also like to thank the organisers of String-Math 2022, where a part of this
work was presented. SC is thankful for the MHRD fellowship from IIT Bombay,
which provided financial support for the visit. PR would like to thank the
SERB (MATRICS) grant MTR/2019/000956 for funding. We would like to thank the
reviewers for their valuable comments and suggestions.
## References
* (1) Chae, J.: Towards a $q$-series for $osp(2|2n)$. arXiv preprint arXiv:2106.09868 (2021)
* (2) Chung, H.J.: BPS Invariants for Seifert Manifolds. JHEP 03, 113 (2020). DOI 10.1007/JHEP03(2020)113
* (3) Cooper, B., Krushkal, V.: Categorification of the Jones–Wenzl projectors. Quantum Topology 3(2), 139–180 (2012)
* (4) Costantino, F., Gukov, S., Putrov, P.: Non-Semisimple TQFT’s and BPS $q$-Series. SIGMA 19, 010 (2023). DOI 10.3842/SIGMA.2023.010
* (5) Ennes, I.P., Ramadevi, P., Ramallo, A.V., Sanchez de Santos, J.M.: Duality in $osp(1|2)$ conformal field theory and link invariants. Int. J. Mod. Phys. A 13, 2931–2978 (1998). DOI 10.1142/S0217751X98001487
* (6) Ferrari, F., Putrov, P.: Supergroups, q-series and 3-manifolds (2020)
* (7) Frenkel, I.B., Stroppel, C., Sussan, J.: Categorifying fractional Euler characteristics, Jones–Wenzl projectors and 3j-symbols. Quantum Topology 3(2), 181–253 (2012)
* (8) Gopakumar, R., Vafa, C.: M theory and topological strings. 1. (1998)
* (9) Gopakumar, R., Vafa, C.: M theory and topological strings. 2. (1998)
* (10) Gukov, S., Manolescu, C.: A two-variable series for knot complements. Quantum Topol. 12(1), 1–109 (2021). DOI 10.4171/qt/145
* (11) Gukov, S., Pei, D., Putrov, P., Vafa, C.: BPS spectra and 3-manifold invariants. J. Knot Theor. Ramifications 29(02), 2040003 (2020). DOI 10.1142/S0218216520400039
* (12) Gukov, S., Putrov, P., Vafa, C.: Fivebranes and 3-manifold homology. Journal of High Energy Physics 2017(7) (2017). DOI 10.1007/jhep07(2017)071. URL http://dx.doi.org/10.1007/JHEP07(2017)071
* (13) Gukov, S., Schwarz, A.S., Vafa, C.: Khovanov-Rozansky homology and topological strings. Lett. Math. Phys. 74, 53–74 (2005). DOI 10.1007/s11005-005-0008-8
* (14) Kaul, R.K., Ramadevi, P.: Three-manifold invariants from Chern-Simons field theory with arbitrary semi-simple gauge groups. Commun. Math. Phys. 217, 295–314 (2001). DOI 10.1007/s002200000347
* (15) Khovanov, M.: A categorification of the Jones polynomial. Duke Mathematical Journal 101(3), 359–426 (2000)
* (16) Khovanov, M., Rozansky, L.: Matrix factorizations and link homology. arXiv preprint math/0401268 (2004)
* (17) Lickorish, W.B.R.: A representation of orientable combinatorial 3-manifolds. Annals of Mathematics 76(3), 531–540 (1962). URL http://www.jstor.org/stable/1970373
* (18) Mikhaylov, V., Witten, E.: Branes And Supergroups. Commun. Math. Phys. 340(2), 699–832 (2015). DOI 10.1007/s00220-015-2449-y
* (19) Nawata, S., Oblomkov, A.: Lectures on knot homology. Contemp. Math. 680, 137 (2016). DOI 10.1090/conm/680/13702
* (20) Ooguri, H., Vafa, C.: Knot invariants and topological strings. Nucl. Phys. B 577, 419–438 (2000). DOI 10.1016/S0550-3213(00)00118-8
* (21) Park, S., et al.: Higher rank $\hat{Z}$ and $F_K$. SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 16, 044 (2020)
* (22) Ramadevi, P., Naik, S.: Computation of Lickorish’s three manifold invariants using Chern-Simons theory. Commun. Math. Phys. 209, 29–49 (2000). DOI 10.1007/s002200050014
* (23) Wallace, A.H.: Modifications and cobounding manifolds. Canadian Journal of Mathematics 12, 503–528 (1960). DOI 10.4153/CJM-1960-045-7
* (24) Webster, B.: Knot invariants and higher representation theory, vol. 250. American Mathematical Society (2017)
* (25) Witten, E.: Quantum field theory and the Jones polynomial. Communications in Mathematical Physics 121(3), 351–399 (1989)
Global Existence of Bi-Hamiltonian Structures on Orientable Three-Dimensional Manifolds
Melike IŞİM EFE and Ender ABADOĞLU
Yeditepe University, Mathematics Department, İnönü Mah. Kayışdağı Cad. 326A, 26 Ağustos Yerleşimi, 34755 Ataşehir İstanbul, Turkey
<EMAIL_ADDRESS><EMAIL_ADDRESS>
Received December 21, 2016, in final form July 04, 2017; Published online July
14, 2017
In this work, we show that an autonomous dynamical system defined by a
nonvanishing vector field on an orientable three-dimensional manifold is
globally bi-Hamiltonian if and only if the first Chern class of the normal
bundle of the given vector field vanishes. Furthermore, the bi-Hamiltonian
structure is globally compatible if and only if the Bott class of the complex
codimension one foliation defined by the given vector field vanishes.
Keywords: bi-Hamiltonian systems; Chern class; Bott class
Mathematics Subject Classification: 53D17; 53D35
Dedicated to the memory of Ali Yavuz.
## 1 Introduction
An autonomous dynamical system on a manifold $M$
$\displaystyle\dot{x}(t)=v(x(t))$ (1.1)
is determined by a vector field $v(x)$ on a manifold up to time
reparametrization. Important geometric quantities related to a dynamical
system are functions $I$ which are invariant under the flow of the vector
field
$\displaystyle\mathcal{L}_{v}I=0.$
It is sometimes possible to relate the vector field to an invariant function
via a Poisson structure $\mathcal{J},$ which is a bivector field on $M$
$\displaystyle\mathcal{J}\colon\ \Lambda^{1}(M)\rightarrow\mathfrak{X}(M)$
satisfying the Jacobi identity condition
$\displaystyle[\mathcal{J},\mathcal{J}]_{\rm SN}=0,$
where $[\,,\,]_{\rm SN}$ is the Schouten–Nijenhuis bracket. The local
structure of such manifolds was first described in [13]. The invariants
satisfying the condition
$\displaystyle v=\mathcal{J}({\rm d}I)$ (1.2)
are called Hamiltonian functions of (1.1). Given a dynamical system on $M$
defined by the vector field $v$, the vector field $v$ is called a Hamiltonian
vector field if there exists a Poisson bivector $\mathcal{J}$ and a smooth
function $I$ such that equation (1.2) holds.
Given a vector field $v$ on $M$, finding a Poisson structure according to
which the vector field becomes Hamiltonian may not be an easy task in general.
However, if a given dynamical system can be put into Hamiltonian form then,
there may be more than one Poisson structure which makes it into a Hamiltonian
system. In [9], a bi-Hamiltonian system is introduced for the analysis of
certain infinite-dimensional soliton equations. In such a case, there arises
the question of the relation between these Poisson structures, which is called
compatibility. Although there are at least two different approaches to
compatibility [11], by following [10] we adopt the definitions below:
###### Definition 1.1.
A dynamical system is called bi-Hamiltonian if it can be written in
Hamiltonian form in two distinct ways:
$\displaystyle v=\mathcal{J}_{1}({\rm d}H_{2})=\mathcal{J}_{2}({\rm d}H_{1}),$
(1.3)
such that $\mathcal{J}_{1}$ and $\mathcal{J}_{2}$ are nowhere multiples of
each other. This bi-Hamiltonian structure is compatible if
$\mathcal{J}_{1}+\mathcal{J}_{2}$ is also a Poisson structure.
In this paper we confine ourselves to dynamical systems on three-dimensional
orientable manifolds. For three-dimensional manifolds, where there is no
symplectic structure for dimensional reasons, Poisson structures have a simple
form. Poisson structures of dynamical systems on three-manifolds were first
studied extensively in [4], and then also in [5] and [8]. Following the
definitions in [4], choosing any Riemannian metric $\boldsymbol{g}$ on $M$, a
Poisson bivector field, which is skew-symmetric, can be associated to a vector
field by using the Lie algebra isomorphism
$\mathfrak{so}(3)\simeq\mathbb{R}^{3}$
$\displaystyle\mathcal{J}=\mathcal{J}^{mn}e_{m}\wedge
e_{n}=\varepsilon_{k}^{mn}J^{k}e_{m}\wedge e_{n},$
and the vector field
$\displaystyle J=J^{k}e_{k}$
is called the Poisson vector field on $M$.
Then, the Jacobi identity has the form
$\displaystyle J\cdot(\nabla\times J)=0,$ (1.4)
and equation (1.3) becomes
$\displaystyle v=J_{1}\times\nabla H_{2}=J_{2}\times\nabla H_{1}.$ (1.5)
Since $J_{1}$ and $J_{2}$ are not multiples of each other by definition, we
have
$\displaystyle J_{1}\times J_{2}\neq 0$ (1.6)
and
$\displaystyle J_{i}\cdot v=0$ (1.7)
for $i=1,2$.
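For concreteness, equations (1.4)–(1.7) can be verified symbolically on a standard example. The sketch below (illustrative, not taken from this paper) uses the free rigid body with the Casimir $H_{1}$ and kinetic energy $H_{2}$ as Hamiltonians, taking $J_{1}=\nabla H_{1}$ and $J_{2}=-\nabla H_{2}$; the sign on $J_{2}$ is needed so that both equalities in (1.5) hold, since the cross product anticommutes:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
I1, I2, I3 = sp.symbols('I1 I2 I3', positive=True)

def grad(f):
    return sp.Matrix([sp.diff(f, s) for s in (x, y, z)])

def curl(J):
    return sp.Matrix([sp.diff(J[2], y) - sp.diff(J[1], z),
                      sp.diff(J[0], z) - sp.diff(J[2], x),
                      sp.diff(J[1], x) - sp.diff(J[0], y)])

H1 = (x**2 + y**2 + z**2) / 2            # Casimir (squared angular momentum)
H2 = (x**2/I1 + y**2/I2 + z**2/I3) / 2   # kinetic energy
J1, J2 = grad(H1), -grad(H2)             # Poisson vector fields (note the sign)
v = J1.cross(grad(H2))                   # the Euler top vector field

# Jacobi identity (1.4) for J1, J2 and their sum (compatibility, Def. 1.1);
# all hold trivially here, since the curl of a gradient vanishes:
assert all(sp.simplify(J.dot(curl(J))) == 0 for J in (J1, J2, J1 + J2))
# Bi-Hamiltonian form (1.5): v = J1 x grad(H2) = J2 x grad(H1)
assert (J1.cross(grad(H2)) - J2.cross(grad(H1))).expand() == sp.zeros(3, 1)
# Invariance (1.7): J_i . v = 0
assert sp.expand(J1.dot(v)) == 0 and sp.expand(J2.dot(v)) == 0
```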
This work is focused on the bi-Hamiltonian structure of dynamical systems
defined by nonvanishing vector fields on orientable three-dimensional
manifolds, or equivalently vector fields on three-dimensional manifolds whose
supports are orientable three-dimensional manifolds. Since all orientable
three-dimensional manifolds are parallelizable [12], there is no topological
obstruction to the global existence of a nonvanishing vector field. Then, by
the bi-Hamiltonian form (1.5)–(1.7), $\\{v,J_{1},J_{2}\\}$ forms a local frame
field. Therefore, whenever the system is globally bi-Hamiltonian,
$\\{v,J_{1},J_{2}\\}$ becomes a global frame field on $M$. For example, for
$M=\mathbb{R}^{3}$ and $v=\partial_{x^{0}}$ we have $J_{i}=\partial_{x^{i}}$
and $\\{\partial_{x^{0}},\partial_{x^{1}},\partial_{x^{2}}\\}$ forms such a
global frame field. However, the global existence of the frame field
$\\{v,J_{1},J_{2}\\}$ is by no means guaranteed. The simplest counterexample
is the radial gradient flow, transverse to the spheres $S^{2}$, in
$\mathbb{R}^{3}\setminus\\{0\\}$. Here, a frame field $\\{v,J_{1},J_{2}\\}$
cannot be defined globally, since $J_{1}$, $J_{2}$ are sections of the tangent
bundle of $S^{2}$, which is not trivial and does not admit two nonvanishing
linearly independent vector fields.
The goal of this paper is to give necessary and sufficient conditions for a
nonvanishing vector field on an orientable three-dimensional manifold to admit
a compatible bi-Hamiltonian structure. The paper is organized as follows: In
Section 2, the local existence of bi-Hamiltonian systems is investigated in a
neighbourhood of a point, possibly refined by the existence conditions of
solutions of certain ODE’s related with the problem, and it is shown in
Theorem 2.7 that it is always possible to find a pair of compatible Poisson
structures such that the system defined by the nonvanishing vector field
becomes bi-Hamiltonian. In Section 3, obstructions to the global existence of
a pair of Poisson structures are studied. In Section 3.2 the primary
obstruction for the existence of a global pair of Poisson structures is
investigated, and it is shown in Theorem 3.6 that such a pair, which is not
necessarily compatible, exists if and only if the first Chern class of the
normal bundle vanishes. Finally, the global compatibility of this pair is
investigated in Section 3.3 and it is shown in Theorem 3.8 that under the
assumption of global existence, the vanishing of the Bott class of the complex
codimension one foliation is the necessary and sufficient condition for the
global compatibility of the pair of Poisson structures.
Throughout the work, bivectors are denoted by calligraphic letters and forms
by bold letters.
## 2 Local existence of bi-Hamiltonian structure in 3D
To establish local existence, we first analyze the local solutions of equation
(1.4) defining Poisson vectors, which is also studied in [6]. Let $M$ be an
orientable three-dimensional manifold with an arbitrary Riemannian metric
$\boldsymbol{g}$, and $v$ be a nonvanishing vector field. Let
$\displaystyle\widehat{e}_{1}=\frac{v}{\|v\|}$
and extend this vector field to a local orthonormal frame field
$\\{\widehat{e}_{1},\widehat{e}_{2},\widehat{e}_{3}\\}$. Define the structure
functions $(C_{ij}^{k}(x))$ via the relation
$\displaystyle[\widehat{e}_{i},\widehat{e}_{j}]=C_{ij}^{k}(x)\widehat{e}_{k}.$
(2.1)
###### Proposition 2.1.
A nonvanishing vector field $v$ admits two independent local Poisson
structures on $M$.
###### Proof.
Adopting the frame defined above and using (1.7), we have the Poisson vector
field
$\displaystyle J=\alpha\widehat{e}_{2}+\beta\widehat{e}_{3},$ (2.2)
and its curl is given by
$\displaystyle\nabla\times
J=\nabla\alpha\times\widehat{e}_{2}+\alpha\nabla\times\widehat{e}_{2}+\nabla\beta\times\widehat{e}_{3}+\beta\nabla\times\widehat{e}_{3}.$
(2.3)
Now the Jacobi identity (1.4) is obtained by taking the dot product of (2.2)
with (2.3), and using triple vector product identities we get
$\displaystyle\beta\widehat{e}_{1}\cdot\nabla\alpha-\alpha\widehat{e}_{1}\cdot\nabla\beta-\alpha^{2}C_{31}^{2}-\alpha\beta\big{(}C_{31}^{3}+C_{12}^{2}\big{)}-\beta^{2}C_{12}^{3}=0.$
(2.4)
If $J=0$ then, by (1.5), $v=0$, which contradicts our assumption
that the vector field is nonvanishing. Therefore, we assume
$\displaystyle J\neq 0,$
which means that $\alpha\neq 0$ or $\beta\neq 0$. Now we assume $\alpha\neq
0$, while the case $\beta\neq 0$ is similar and amounts to a rotation of the
frame fields. Defining
$\displaystyle\mu=\frac{\beta}{\alpha}$
and dividing (2.4) by $\alpha^{2}$, we get
$\displaystyle\widehat{e}_{1}\cdot\nabla\mu=-C_{31}^{2}-\mu\big{(}C_{31}^{3}+C_{12}^{2}\big{)}-\mu^{2}C_{12}^{3},$
(2.5)
whose characteristic curve is the integral curve of (1.1) in arclength
parametrization and
$\displaystyle\frac{{\rm d}\mu}{{\rm
d}s}=-C_{31}^{2}-\mu\big{(}C_{31}^{3}+C_{12}^{2}\big{)}-\mu^{2}C_{12}^{3}$
(2.6)
in the arclength variable $s$. The Riccati equation (2.6) is equivalent to a
linear second order equation (when $C_{12}^{3}\neq 0$, via the substitution
$\mu=w^{\prime}/(C_{12}^{3}w)$, where the prime denotes ${\rm d}/{\rm d}s$;
otherwise (2.6) is already linear) and hence possesses two linearly independent
solutions, leading to two Poisson vector fields. Since the vector field $v$ is
assumed to be nonvanishing, for each $x_{0}\in M$ it
is possible to find a neighborhood foliated by the integral curves of $v$
which are nothing but characteristic curves of (2.5). Therefore, solutions of
(2.6) can be extended to a possibly smaller neighborhood on which the Riccati
equation has two independent solutions which we call $\mu_{i}$ for $i=1,2$.
Hence, we have two Poisson vector fields
$\displaystyle
J_{i}=\alpha_{i}\big{(}\widehat{e}_{2}+\mu_{i}\widehat{e}_{3}\big{)},$ (2.7)
where the coefficients $\alpha_{i}$ are arbitrary. ∎
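As a numerical illustration of this proof, the following sketch integrates the Riccati equation (2.6) along an integral curve, assuming for simplicity constant structure functions (the values are placeholders); two initial conditions yield the two independent solutions $\mu_{1}$, $\mu_{2}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder constant structure functions along the integral curve (assumed):
C31_2, C31_3_plus_C12_2, C12_3 = 0.5, 1.0, 0.2

def riccati(s, mu):
    # eq. (2.6): d(mu)/ds = -C31^2 - mu (C31^3 + C12^2) - mu^2 C12^3
    return -C31_2 - mu * C31_3_plus_C12_2 - mu**2 * C12_3

s = np.linspace(0.0, 5.0, 200)
mu1 = solve_ivp(riccati, (s[0], s[-1]), [0.0], t_eval=s).y[0]
mu2 = solve_ivp(riccati, (s[0], s[-1]), [-3.0], t_eval=s).y[0]

# Two independent solutions give two Poisson vector fields
# J_i = alpha_i (e2 + mu_i e3), as in (2.7):
assert not np.allclose(mu1, mu2)
```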
Note that (2.5) determines $\mu_{i}$ alone, but not $\alpha_{i}$. Taking
advantage of the freedom of choosing arbitrary scaling factors, we may restrict
these factors by imposing compatibility on our Poisson vector fields.
###### Proposition 2.2.
The two Poisson structures obtained in (2.7) are compatible iff
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\frac{\alpha_{i}}{\alpha_{j}}=C_{12}^{3}(\mu_{i}-\mu_{j}).$
(2.8)
###### Proof.
Let
$\displaystyle J=J_{1}+J_{2}.$
Using (1.4) for $J_{1}$, $J_{2}$ and $J$, we get
$\displaystyle(\nabla\times J)\cdot J=(\nabla\times J_{2})\cdot
J_{1}+(\nabla\times J_{1})\cdot J_{2}=0.$ (2.9)
For the Poisson vector fields defined in (2.7), taking the dot product of both
sides of (2.3) with $J_{j}$ leads to
$\displaystyle(\nabla\times J_{i})\cdot
J_{j}=\alpha_{i}\alpha_{j}(\mu_{i}-\mu_{j})\big{(}C_{12}^{2}+C_{12}^{3}\mu_{i}-\widehat{e}_{1}\cdot\nabla\ln\alpha_{i}\big{)}.$
(2.10)
Therefore, the compatibility condition (2.9) implies that
$\displaystyle
C_{12}^{2}+C_{12}^{3}\mu_{i}-\widehat{e}_{1}\cdot\nabla\ln\alpha_{i}=C_{12}^{2}+C_{12}^{3}\mu_{j}-\widehat{e}_{1}\cdot\nabla\ln\alpha_{j},$
and hence, we get
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\frac{\alpha_{i}}{\alpha_{j}}=C_{12}^{3}(\mu_{i}-\mu_{j}),$
(2.11)
whose characteristic curve is the solution curve of (1.1) in arclength
parametrization
$\displaystyle\frac{{\rm d}}{{\rm
d}s}\ln\frac{\alpha_{i}}{\alpha_{j}}=C_{12}^{3}(\mu_{i}-\mu_{j}).$ (2.12)
By a similar line of reasoning as above, the solutions of (2.12) can also be
extended to the whole neighborhood, and the proposition follows. ∎
However, having a pair of Poisson structures obtained in (2.7), and even a
compatible pair satisfying (2.11), does not guarantee the existence of
Hamiltonian functions even locally.
###### Proposition 2.3.
The dynamical system (1.1) is locally bi-Hamiltonian with the pair of Poisson
structures obtained in (2.7) if and only if
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\frac{\alpha_{i}}{\|v\|}=C_{31}^{3}+\mu_{i}C_{12}^{3}.$
(2.13)
###### Proof.
For this purpose we first need to write down the equations for the Hamiltonian
functions. The invariance condition of Hamiltonian functions under the flow
generated by $v$ implies
$\displaystyle\widehat{e}_{1}\cdot\nabla H_{i}=0,$ (2.14)
so the gradients of Hamiltonian functions are linear combinations of
$\widehat{e}_{2}$ and $\widehat{e}_{3}$. Then, inserting (2.7) into (1.5) we
get another condition
$\displaystyle\widehat{e}_{3}\cdot\nabla
H_{j}-\mu_{i}\widehat{e}_{2}\cdot\nabla H_{j}=\frac{\|v\|}{\alpha_{i}}$ (2.15)
or by defining
$\displaystyle u_{i}=-\mu_{i}\widehat{e}_{2}+\widehat{e}_{3}$
(2.15) can be written as
$\displaystyle u_{i}\cdot\nabla H_{j}=\frac{\|v\|}{\alpha_{i}}.$ (2.16)
Equations (2.14) and (2.16) for Hamiltonian functions are subject to the
integrability condition
$\displaystyle\widehat{e}_{1}(u_{i}(H_{j}))-u_{i}\big{(}\widehat{e}_{1}(H_{j})\big{)}=\big{[}\widehat{e}_{1},u_{i}\big{]}(H_{j}).$
Using the commutation relations given in (2.1) and (2.5), we obtain
$\displaystyle[\widehat{e}_{1},u_{i}]=-\big{(}C_{31}^{1}+\mu_{i}C_{12}^{1}\big{)}\widehat{e}_{1}-\big{(}C_{31}^{3}+\mu_{i}C_{12}^{3}\big{)}u_{i}.$
(2.17)
Applying $H_{j}$ to both sides of (2.17) and using two equations (2.14) and
(2.16) for Hamiltonian functions, we get
$\displaystyle\big{[}\widehat{e}_{1},u_{i}\big{]}(H_{j})=-\big{(}C_{31}^{3}+\mu_{i}C_{12}^{3}\big{)}\frac{\|v\|}{\alpha_{i}}.$
Therefore, our integrability condition for Hamiltonian functions becomes
$\displaystyle\widehat{e}_{1}\cdot\nabla\left(\frac{\|v\|}{\alpha_{i}}\right)=-\big{(}C_{31}^{3}+\mu_{i}C_{12}^{3}\big{)}\frac{\|v\|}{\alpha_{i}},$
hence,
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\left(\frac{\alpha_{i}}{\|v\|}\right)=\mu_{i}C_{12}^{3}+C_{31}^{3}$
(2.18)
and the proposition follows. ∎
###### Corollary 2.4.
The pair of Poisson structures
$J_{i}=\alpha_{i}\big{(}\widehat{e}_{2}+\mu_{i}\widehat{e}_{3}\big{)}$ where
$\alpha_{i}$’s are defined by (2.18) and $\mu_{i}$’s are defined by (2.5) are
compatible.
###### Proof.
What we need is to show that (2.8) is satisfied. Indeed, writing (2.18) for
$\alpha_{i}$ and $\alpha_{j}$ and subtracting the second from the first, the
corollary follows. ∎
Note that for a pair of compatible Poisson structures $J_{1}$ and $J_{2}$,
the dilatation symmetry $J\rightarrow fJ$ and the additive symmetry
$J_{1}+J_{2}$ do not by themselves imply that $J_{1}+fJ_{2}$ is a Poisson
structure. Indeed, applying the Jacobi identity condition and using the triple
vector identity,
$\displaystyle(J_{1}+fJ_{2})\cdot\nabla\times(J_{1}+fJ_{2})=-\nabla
f\cdot(J_{1}\times J_{2})=0,$
which holds if and only if
$\displaystyle\widehat{e}_{1}\cdot\nabla f=0.$
Now we try to describe the relation between the pair of compatible Poisson
structures and Hamiltonian functions. But first, we need the following lemma
to describe this relation.
###### Lemma 2.5.
For the bi-Hamiltonian system with a pair of compatible Poisson structures
defined above,
$\displaystyle\nabla\cdot\widehat{e}_{1}=\widehat{e}_{1}\cdot\nabla\ln\frac{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}{\|v\|^{2}}.$
###### Proof.
Adding the equations for integrability conditions of Hamiltonian functions
(2.18) for $i=1,2$, we get
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln(\alpha_{1}\alpha_{2})=\widehat{e}_{1}\cdot\nabla\ln\big{(}\|v\|^{2}\big{)}+2C_{31}^{3}+(\mu_{1}+\mu_{2})C_{12}^{3}.$
(2.19)
On the other hand, subtracting the equations (2.5) satisfied by $\mu_{1}$ and
$\mu_{2}$, and dividing by $(\mu_{2}-\mu_{1})$,
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln(\mu_{2}-\mu_{1})=-\big{(}C_{31}^{3}+C_{12}^{2}\big{)}-(\mu_{1}+\mu_{2})C_{12}^{3}.$
(2.20)
Adding (2.19) to (2.20) and using
$\displaystyle\nabla\cdot\widehat{e}_{1}=C_{i1}^{i},$
we get
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln(\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1}))=\widehat{e}_{1}\cdot\nabla\ln\big{(}\|v\|^{2}\big{)}+\nabla\cdot\widehat{e}_{1},$
and the lemma follows. ∎
###### Proposition 2.6.
Given a bi-Hamiltonian system with a pair of compatible Poisson structures,
there exists a canonical pair of compatible Poisson structures $K_{1}$,
$K_{2}$ with the same Hamiltonian functions $H_{1}$, $H_{2}$ such that
$\displaystyle K_{i}=(-1)^{i+1}\phi\nabla H_{i},$
where
$\displaystyle\phi=\frac{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}{\|v\|}.$
###### Proof.
Since the Poisson vector fields are linearly independent, one can write the
gradients of the Hamiltonians in terms of the Poisson vector fields as
$\displaystyle\nabla H_{i}=\sigma_{i}^{j}J_{j}.$
By using (1.5), we get
$\displaystyle\sigma_{2}^{2}=-\sigma_{1}^{1}=\frac{\|v\|}{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}.$
On the other hand, we have
$\displaystyle\nabla\times\nabla H_{i}=\nabla\sigma_{i}^{j}\times
J_{j}+\sigma_{i}^{j}\nabla\times J_{j}=0.$
Taking the dot product of both sides with $J_{1}$ and $J_{2}$, and using the
compatibility condition, we obtain
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\sigma_{j}^{i}=\frac{J_{1}\cdot(\nabla\times
J_{2})}{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}.$ (2.21)
Inserting (2.13) into (2.10) and using (2.21),
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\sigma_{j}^{i}=-\widehat{e}_{1}\cdot\nabla\ln\phi,$
which leads to
$\displaystyle\sigma_{j}^{i}=\frac{\Psi_{j}^{i}}{\phi},$
where
$\displaystyle\widehat{e}_{1}\cdot\nabla\Psi_{j}^{i}=0.$
Therefore, we have
$\displaystyle\nabla
H_{1}=\frac{1}{\phi}\big{(}\Psi_{1}^{1}J_{1}+\Psi_{1}^{2}J_{2}\big{)},\qquad\nabla
H_{2}=\frac{1}{\phi}\big{(}\Psi_{2}^{1}J_{1}-\Psi_{1}^{1}J_{2}\big{)}.$ (2.22)
Inserting (2.22) into (1.5), we get
$\displaystyle\Psi_{1}^{1}=-1,$
and finally,
$\displaystyle\nabla
H_{1}=-\frac{\|v\|}{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}\big{(}J_{1}-\Psi_{1}^{2}J_{2}\big{)},\qquad\nabla
H_{2}=\frac{\|v\|}{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}\big{(}\Psi_{2}^{1}J_{1}+J_{2}\big{)}.$
Note that,
$\displaystyle\nabla H_{1}\times\nabla
H_{2}=-\big{(}1+\Psi_{2}^{1}\Psi_{1}^{2}\big{)}\frac{\|v\|^{2}}{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}\widehat{e}_{1}.$
(2.23)
For the Hamiltonians to be functionally independent, r.h.s. of (2.23) must not
vanish, i.e.,
$\displaystyle 1+\Psi_{2}^{1}\Psi_{1}^{2}\neq 0.$
Now let us define
$\displaystyle
K_{1}=\frac{J_{1}-\Psi_{1}^{2}J_{2}}{1+\Psi_{2}^{1}\Psi_{1}^{2}}=-\frac{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}{\big{(}1+\Psi_{2}^{1}\Psi_{1}^{2}\big{)}\|v\|}\nabla
H_{1},\qquad
K_{2}=\frac{J_{2}+\Psi_{2}^{1}J_{1}}{1+\Psi_{2}^{1}\Psi_{1}^{2}}=\frac{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}{\big{(}1+\Psi_{2}^{1}\Psi_{1}^{2}\big{)}\|v\|}\nabla
H_{2}.$
By (1.5), we get
$\displaystyle K_{1}\times\nabla H_{1}=K_{2}\times\nabla H_{2}=0,\qquad
K_{2}\times\nabla H_{1}=K_{1}\times\nabla H_{2}=v.$
Choosing $K_{i}$’s to be our new Poisson vector fields, the proposition
follows. ∎
Consequently, we can state the local existence theorem for bi-Hamiltonian
systems in three dimensions.
###### Theorem 2.7.
Any three-dimensional dynamical system
$\displaystyle\dot{x}(t)=v(x(t))$ (2.24)
has a pair of compatible Poisson structures
$\displaystyle
J_{i}=\alpha_{i}\big{(}\widehat{e}_{2}+\mu_{i}\widehat{e}_{3}\big{)},$
in which $\mu_{i}$’s are determined by the equation
$\displaystyle\widehat{e}_{1}\cdot\nabla\mu_{i}=-C_{31}^{2}-\mu_{i}\big{(}C_{31}^{3}+C_{12}^{2}\big{)}-\mu_{i}^{2}C_{12}^{3},$
and $\alpha_{i}$’s are determined by the equation
$\displaystyle\widehat{e}_{1}\cdot\nabla\ln\frac{\alpha_{i}}{\|v\|}=C_{31}^{3}+\mu_{i}C_{12}^{3}.$
Furthermore, (2.24) is a locally bi-Hamiltonian system with a pair of local
Hamiltonian functions determined by
$\displaystyle J_{i}=(-1)^{i+1}\phi\nabla H_{i},$ (2.25)
where
$\displaystyle\phi=\frac{\alpha_{1}\alpha_{2}(\mu_{2}-\mu_{1})}{\|v\|}.$
(2.26)
## 3 Global existence of compatible bi-Hamiltonian structure in 3D
In this section, we investigate the conditions for which the local existence
theorem holds globally. To study the global properties of the vector field
$v$ by topological means, we relate the vector field to its
normal bundle. Let $E$ be the one-dimensional subbundle of $TM$ generated by
$v$. Let $Q=TM/E$ be the normal bundle of $v$. By using the cross product with
$\widehat{e}_{1}$, we can define a complex structure $\Lambda$ on the fibers
of $Q\rightarrow M$, and $Q$ becomes a complex line bundle over $M$.
### 3.1 Bi-Hamiltonian structure in 3D with differential forms
In order to obtain and express the obstructions to the global existence of bi-
Hamiltonian structures on orientable three manifolds by certain cohomology
groups and characteristic classes, we will reformulate the problem by using
differential forms. For this purpose, let $\boldsymbol{\Omega}$ be the volume
form associated to the Riemannian metric $\boldsymbol{g}$ of $M$. Then, there
is a local one-form $\boldsymbol{J}$ associated with a local Poisson bivector
field $\mathcal{J}$,
$\displaystyle\boldsymbol{J}=\iota_{\mathcal{J}}\boldsymbol{\Omega},$
which is called the local Poisson one-form. The bi-Hamiltonian system (1.5)
can be written as
$\displaystyle\iota_{v}\boldsymbol{\Omega}=\boldsymbol{J}_{1}\wedge{\rm
d}H_{2}=\boldsymbol{J}_{2}\wedge{\rm d}H_{1}.$ (3.1)
Note that, although the l.h.s. of this equality is globally defined, the r.h.s. is
defined only locally; therefore, the equality holds only locally. Now the Jacobi identity
is given by
$\displaystyle\boldsymbol{J}_{i}\wedge{\rm
d}\boldsymbol{J}_{i}=0\qquad\text{for}\quad i=1,2,$ (3.2)
and compatibility amounts to
$\displaystyle\boldsymbol{J}_{1}\wedge{\rm
d}\boldsymbol{J}_{2}=-\boldsymbol{J}_{2}\wedge{\rm d}\boldsymbol{J}_{1}.$
By (2.25), $\boldsymbol{J}_{1}$ and $\boldsymbol{J}_{2}$ can be chosen to be
proportional to ${\rm d}H_{1}$ and ${\rm d}H_{2}$, respectively, and hence
(3.1) takes the form
$\displaystyle\iota_{v}\boldsymbol{\Omega}=\phi{\rm d}H_{1}\wedge{\rm
d}H_{2}.$
The Jacobi identity for Poisson 1-forms (3.2) implies the existence of 1-forms
$\boldsymbol{\beta}_{i}$ such that
$\displaystyle{\rm
d}\boldsymbol{J}_{i}=\boldsymbol{\beta}_{i}\wedge\boldsymbol{J}_{i}$ (3.3)
for each $i=1,2$. In the next proposition we are going to show that the
compatibility of Poisson structures allows us to combine
$\boldsymbol{\beta}_{1}$ and $\boldsymbol{\beta}_{2}$ into a single one.
###### Proposition 3.1.
There is a $1$-form $\boldsymbol{\beta}$ such that
$\displaystyle{\rm
d}\boldsymbol{J}_{i}=\boldsymbol{\beta}\wedge\boldsymbol{J}_{i}$
for each $i=1,2$.
###### Proof.
Applying (3.3) to the compatibility condition
$\displaystyle\boldsymbol{J}_{1}\wedge{\rm
d}\boldsymbol{J}_{2}+\boldsymbol{J}_{2}\wedge{\rm d}\boldsymbol{J}_{1}=0,$
we get
$\displaystyle(\boldsymbol{\beta}_{1}-\boldsymbol{\beta}_{2})\wedge\boldsymbol{J}_{1}\wedge\boldsymbol{J}_{2}=0,$
which implies that
$\displaystyle\boldsymbol{\beta}_{1}-\boldsymbol{\beta}_{2}=b_{1}\boldsymbol{J}_{1}+b_{2}\boldsymbol{J}_{2},$
and therefore, we define
$\displaystyle\boldsymbol{\beta}=\boldsymbol{\beta}_{1}-b_{1}\boldsymbol{J}_{1}=\boldsymbol{\beta}_{2}+b_{2}\boldsymbol{J}_{2}.$
Hence
$\displaystyle\boldsymbol{\beta}\wedge\boldsymbol{J}_{i}=\boldsymbol{\beta}_{i}\wedge\boldsymbol{J}_{i}={\rm
d}\boldsymbol{J}_{i},$
and the proposition follows. ∎
Note that $\boldsymbol{\beta}$ is a 1-form on $TM$ rather than on $Q$; namely,
$\displaystyle\iota_{\widehat{e}_{1}}\boldsymbol{\beta}\neq 0$
in general. Now we are going to show that, by an appropriate change of Poisson
forms, we may reduce it to a connection 1-form on $Q$.
###### Lemma 3.2.
$\displaystyle\iota_{\widehat{e}_{1}}\boldsymbol{\beta}=\iota_{\widehat{e}_{1}}({\rm
d}\ln\phi),$
where $\phi$ is the function defined in (2.26).
###### Proof.
For the proof, we carry out the computation with Poisson vector fields, then
transform the result to differential forms. The Jacobi identity (1.4) implies
that $\nabla\times J_{i}$ is orthogonal to $J_{i}$ and therefore, we get
$\displaystyle\nabla\times
J_{i}=a_{i1}\widehat{e}_{1}+a_{i2}\widehat{e}_{1}\times J_{i}.$ (3.4)
By the definition of Poisson vector fields, we have
$\displaystyle J_{1}\times J_{2}=\phi\|v\|\widehat{e}_{1}.$
We can rewrite (3.4) in the form
$\displaystyle\nabla\times J_{i}=\frac{a_{i1}}{\phi\|v\|}J_{1}\times
J_{2}+a_{i2}\widehat{e}_{1}\times J_{i}.$ (3.5)
Using the compatibility condition (2.9), we obtain
$\displaystyle a_{i1}=(\nabla\times J_{i})\cdot\widehat{e}_{1},\qquad
a_{i2}=\frac{(\nabla\times J_{1})\cdot J_{2}}{\phi\|v\|}.$
Now we define
$\displaystyle\xi=\frac{a_{21}J_{1}-a_{11}J_{2}+((\nabla\times J_{1})\cdot
J_{2})\widehat{e}_{1}}{\phi\|v\|},$
and (3.5) becomes
$\displaystyle\nabla\times J_{i}=\xi\times J_{i}.$
After a bit of computation it is possible to show that
$\displaystyle\xi=\nabla\ln\phi+\widehat{e}_{1}\times\left(\frac{[\widehat{e}_{1}\times
J_{1},\widehat{e}_{1}\times
J_{2}]}{\phi\|v\|}-\widehat{e}_{1}\times\nabla\ln\|v\|\right).$
Hence, we have
$\displaystyle\widehat{e}_{1}\cdot\xi=\widehat{e}_{1}\cdot\nabla\ln\phi$
and defining
$\displaystyle\boldsymbol{\beta}=\ast\iota_{\xi}\boldsymbol{\Omega},$
the lemma follows. ∎
Now we define new Poisson 1-forms $\boldsymbol{K}_{i}$ by
$\displaystyle\boldsymbol{J}_{i}=\phi\boldsymbol{K}_{i}.$
Taking the exterior derivatives of both sides
$\displaystyle{\rm d}\boldsymbol{J}_{i}={\rm
d}\phi\wedge\boldsymbol{K}_{i}+\phi{\rm
d}\boldsymbol{K}_{i}=\boldsymbol{\beta}\wedge\phi\boldsymbol{K}_{i}$
and dividing both sides by $\phi$,
$\displaystyle{\rm d}\boldsymbol{K}_{i}=(\boldsymbol{\beta}-{\rm
d}\ln\phi)\wedge\boldsymbol{K}_{i}.$
Let
$\displaystyle\boldsymbol{\gamma}=\boldsymbol{\beta}-{\rm d}\ln\phi.$
Now, by the lemma above,
$\displaystyle\iota_{\widehat{e}_{1}}\boldsymbol{\gamma}=\iota_{\widehat{e}_{1}}\boldsymbol{\beta}-\iota_{\widehat{e}_{1}}({\rm
d}\ln\phi)=0,$ (3.6)
therefore,
$\displaystyle{\rm
d}\boldsymbol{K}_{i}=\boldsymbol{\gamma}\wedge\boldsymbol{K}_{i},$ (3.7)
where $\boldsymbol{\gamma}$ is a connection 1-form on $Q$.
### 3.2 The first obstruction: the Chern class of $\boldsymbol{Q}$
Now we try to find conditions for which a nonvanishing vector field $v$
satisfies
$\displaystyle\boldsymbol{w}=\iota_{v}\boldsymbol{\Omega}=\phi{\rm
d}H_{1}\wedge{\rm d}H_{2}$ (3.8)
for some globally defined functions $\phi$, $H_{1}$ and $H_{2}$. For a two-
form to be decomposed into the form (3.8), first of all, the two-form must be
written as a product of two globally defined, linearly independent
nonvanishing factors. However, such a decomposition may not exist globally.
Then, the question is to decompose $\boldsymbol{w}$ into a product of two
globally defined one forms $\boldsymbol{\rho}_{1}$ and $\boldsymbol{\rho}_{2}$
$\displaystyle\boldsymbol{w}=\boldsymbol{\rho}_{1}\wedge\boldsymbol{\rho}_{2}.$
(3.9)
Since $v$ is a nonvanishing vector field, $\boldsymbol{w}$ is a $2$-form of
constant rank $2$. If we let $S_{\boldsymbol{w}}$ be the sub-bundle of $TM$
on which $\boldsymbol{w}$ is of maximal rank, then we have
$S_{\boldsymbol{w}}\cong Q$ defined above. The following theorem states the
necessary and sufficient conditions for the decomposition of a two-form of
constant rank $2s$ in the large.
###### Theorem 3.3.
Let $\Sigma$ be an $\mathbb{R}^{n}$-bundle over a connected base space $M$.
Let $\boldsymbol{w}$ be a $2$-form on $\Sigma$ of constant rank $2s$. Let
$S_{\boldsymbol{w}}$ be the subbundle of $\Sigma$ on which $\boldsymbol{w}$ is
of maximal rank. $\boldsymbol{w}$ decomposes if and only if
* $i)$
$S_{\boldsymbol{w}}$ is a trivial bundle.
* $ii)$
The representation of its normalization as a map $w_{1}\colon M\rightarrow{\rm
SO}(2s)/{\rm U}(s)$ arising from any trivialization of $S_{\boldsymbol{w}}$
lifts to ${\rm SO}(2s)$ [3].
In our case, $s=1$; since ${\rm U}(1)\cong{\rm SO}(2)$, the quotient ${\rm
SO}(2)/{\rm U}(1)$ is a point and the lift to ${\rm SO}(2)$ exists trivially,
so the second condition in the theorem is satisfied. Hence, the
necessary and sufficient condition of decomposition is the triviality of
$S_{\boldsymbol{w}}\cong Q$. Since $Q$ is a complex line bundle, it is trivial
if and only if $\boldsymbol{c}_{1}(Q)=0$, or equivalently if it has a
nowhere-vanishing global section. Since the decomposition of the 2-form $\boldsymbol{w}$ into globally
defined 1-forms $\boldsymbol{\rho}_{1}$ and $\boldsymbol{\rho}_{2}$ is a
necessary condition for the existence of a global bi-Hamiltonian structure,
the vanishing of the first Chern class of $Q$ becomes a necessary condition.
However, this may not be sufficient since the existence of a decomposition in
the form (3.9) may not imply that the factors $\boldsymbol{\rho}_{i}$ satisfy
$\displaystyle\boldsymbol{\rho}_{i}\wedge{\rm d}\boldsymbol{\rho}_{i}=0.$
In order to determine the effect of vanishing Chern class on the constructions
made so far, we are going to investigate the equation (2.5) defining the
Poisson one-forms. Since our Poisson one-forms and related integrability
conditions are determined by the local solutions of (2.5), they are defined
locally on each chart. Let $\big{\\{}J_{i}^{p}\big{\\}}$ and
$\big{\\{}J_{i}^{q}\big{\\}}$ be the Poisson vector fields in charts
$(U_{p},x_{p})$ and $(U_{q},x_{q})$ around points $p\in M$ and $q\in M$,
respectively. Around the point $p\in M$, the Poisson vectors
$\big{\\{}J_{i}^{p}\big{\\}}$ are determined by $\mu_{i}^{p},\alpha_{i}^{p}$
and the local frame
$\big{\\{}\widehat{e}_{2}^{p},\widehat{e}_{3}^{p}\big{\\}}$. Given the local
frame, we can write (2.5) whose solutions are $\mu_{i}^{p}$’s, and using
$\mu_{i}^{p}$’s we can determine $\alpha_{i}^{p}$’s by the equation (2.13).
Now, if $\boldsymbol{c}_{1}(Q)=0$, which is a necessary condition for the
existence of a global bi-Hamiltonian structure, then we have a nowhere-vanishing
global section of $Q$, i.e., a global vector field normal to $v$. Using the
metric on $M$, we normalize this global section of $Q$ and take it as
$\widehat{e}_{2}$, then define $\widehat{e}_{3}=\widehat{e}_{1}\times\widehat{e}_{2}$. So we have the
global orthonormal frame field
$\\{\widehat{e}_{1},\widehat{e}_{2},\widehat{e}_{3}\\}$. In order to
understand the relation between local Poisson one-forms obtained in two
different coordinate neighborhoods, we first need the following lemmas:
###### Lemma 3.4.
If two solutions $\mu_{1}(s)$ and $\mu_{2}(s)$ of the Riccati equation
$\displaystyle\frac{{\rm d}\mu_{i}}{{\rm
d}s}=-C_{31}^{2}-\mu_{i}\big{(}C_{31}^{3}+C_{12}^{2}\big{)}-\mu_{i}^{2}C_{12}^{3}$
are known, then the general solution $\mu(s)$ is given by
$\displaystyle\mu-\mu_{1}=K(\mu-\mu_{2})e^{\int
C_{12}^{3}(\mu_{2}-\mu_{1}){\rm d}s},$
where $K$ is an arbitrary constant [7].
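Lemma 3.4 can be checked numerically. In the sketch below the coefficients are assumed constant placeholders ($A=C_{31}^{2}$, $B=C_{31}^{3}+C_{12}^{2}$, $C=C_{12}^{3}$); three solutions of the same Riccati equation are integrated, and the combination defining $K$ is verified to be constant along the flow:

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# Assumed constant coefficients (placeholders): A = C31^2, B = C31^3 + C12^2,
# C = C12^3, so that d(mu)/ds = -A - B*mu - C*mu^2 as in Lemma 3.4.
A, B, C = 0.5, 1.0, 0.2
f = lambda s, mu: -A - B * mu - C * mu**2

s = np.linspace(0.0, 4.0, 200)
sol = lambda mu0: solve_ivp(f, (s[0], s[-1]), [mu0], t_eval=s,
                            rtol=1e-10, atol=1e-12).y[0]
mu1, mu2, mu = sol(0.0), sol(-3.0), sol(1.0)  # two known + one general solution

# mu - mu1 = K (mu - mu2) exp( int C (mu2 - mu1) ds ), so K must be constant:
expo = cumulative_trapezoid(C * (mu2 - mu1), s, initial=0.0)
K = (mu - mu1) / ((mu - mu2) * np.exp(expo))
assert np.allclose(K, K[0], rtol=1e-3)
```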
###### Lemma 3.5.
If $\boldsymbol{c}_{1}(Q)=0,$ then two pairs of compatible Poisson vector
fields $\big{\\{}J_{i}^{p}\big{\\}}$ and $\big{\\{}J_{i}^{q}\big{\\}}$ on
$U_{p}$ and $U_{q}$ respectively, are related on $U_{p}\cap U_{q}$ by
$\displaystyle\frac{J_{i}^{q}}{\big{\|}J_{i}^{q}\big{\|}}=\frac{J_{i}^{p}}{\big{\|}J_{i}^{p}\big{\|}}.$
###### Proof.
Given the global frame field $\\{\widehat{e}_{2},\widehat{e}_{3}\\}$ defined
on coordinate neighborhoods $U_{p}$ and $U_{q}$, Riccati equations for
$\mu_{i}$’s can be written as
$\displaystyle\widehat{e}_{1}\cdot\nabla\mu_{i}^{r}=(\nabla\times\widehat{e}_{2})\cdot\widehat{e}_{2}+\mu_{i}^{r}\big{(}(\nabla\times\widehat{e}_{2})\cdot\widehat{e}_{3}+(\nabla\times\widehat{e}_{3})\cdot\widehat{e}_{2}\big{)}+\big{(}\mu_{i}^{r}\big{)}^{2}\big{(}\nabla\times\widehat{e}_{3}\big{)}\cdot\widehat{e}_{3}$
for $r=p,q$. Therefore, on $U_{p}\cap U_{q}$, $\mu_{i}^{p}$ and $\mu_{i}^{q}$
are four solutions of the same Riccati equation for $i=1,2$. By the lemma
above we have
$\displaystyle\mu_{i}^{q}-\mu_{1}^{p}=K_{i}^{pq}\big{(}\mu_{i}^{q}-\mu_{2}^{p}\big{)}e^{\int
C_{12}^{3}\big{(}\mu_{2}^{p}-\mu_{1}^{p}\big{)}{\rm d}s}.$ (3.10)
Now, using the compatibility condition (2.8),
$\displaystyle
C_{12}^{3}\big{(}\mu_{2}^{p}-\mu_{1}^{p}\big{)}=\widehat{e}_{1}\cdot\nabla\ln\frac{\alpha_{2}^{p}}{\alpha_{1}^{p}},$
(3.10) becomes
$\displaystyle\mu_{i}^{q}-\mu_{1}^{p}=K_{i}^{pq}\big{(}\mu_{i}^{q}-\mu_{2}^{p}\big{)}\frac{\alpha_{2}^{p}}{\alpha_{1}^{p}},$
(3.11)
where
$\displaystyle\widehat{e}_{1}\cdot\nabla K_{i}^{pq}=0.$ (3.12)
Multiplying both sides by $\alpha_{1}^{p}\alpha_{i}^{q}$ in (3.11), gives
$\displaystyle J_{i}^{q}\times J_{1}^{p}=K_{i}^{pq}J_{i}^{q}\times J_{2}^{p}.$
(3.13)
Rearranging (3.13), we obtain
$\displaystyle J_{i}^{q}\times\big{(}J_{1}^{p}-K_{i}^{pq}J_{2}^{p}\big{)}=0.$
Using (3.12) and the compatibility, we can take
$\displaystyle\widetilde{J}_{i}^{p}=J_{1}^{p}-K_{i}^{pq}J_{2}^{p}$
to be our new Poisson vector fields on the neighborhood $U_{p}$, and obtain
$\displaystyle J_{i}^{q}\times\widetilde{J}_{i}^{p}=0.$
By compatibility these new Poisson vector fields $\widetilde{J}_{i}^{p}$
produce functionally dependent Hamiltonians and therefore, for simplicity
of notation, we will assume without loss of generality that
$\displaystyle\widetilde{J}_{i}^{p}=J_{i}^{p}$
and the lemma follows. ∎
Then, we have the following result:
###### Theorem 3.6.
There exist two linearly independent global sections $\widehat{j}_{i}$ of $Q$
satisfying
$\displaystyle\widehat{j}_{i}\cdot\big{(}\nabla\times\widehat{j}_{i}\big{)}=0$
(3.14)
if and only if $\boldsymbol{c}_{1}(Q)=0$.
###### Proof.
The forward part is trivial since the existence of a global section of the
complex line bundle $Q$ implies that $Q$ is trivial, and hence
$\boldsymbol{c}_{1}(Q)$ vanishes. For the converse, we define
$\displaystyle\widehat{j}_{i}^{p}=\frac{J_{i}^{p}}{\big{\|}J_{i}^{p}\big{\|}}$
and the lemma implies that $\widehat{j}_{i}^{p}=\widehat{j}_{i}^{q}$ on $U_{p}\cap U_{q}$ and the
theorem follows. ∎
The lemma above states the reason why one may fail to extend a local pair of
compatible Poisson vector fields into a global one, even if
$\boldsymbol{c}_{1}(Q)=0$. In order to do so one should have
$J_{i}^{q}=J_{i}^{p}$ on $U_{p}\cap U_{q}$. However, it is not the Poisson vector
fields themselves but their unit vector fields that can be globalized. Since
$\displaystyle\widehat{e}_{1}\cdot\nabla\frac{\big{\|}J_{2}^{p}\big{\|}}{\big{\|}J_{1}^{p}\big{\|}}\neq
0$
in general, they may not lead to a pair of compatible Poisson structures. Now
we take $\widehat{j}_{1}$ as our first global Poisson vector field, and check
whether we can find another global Poisson vector field compatible with this
one by rescaling $\widehat{j}_{2}$.
### 3.3 Second obstruction: Bott class of the complex codimension 1 foliation
Since $v$ is a nonvanishing vector field on $M$, it defines a real codimension
two foliation on $M$ by orbits of $v$. Since $Q=TM/E$ is a complex line bundle
on $M$, this foliation has complex codimension one. Now, assuming that our
primary obstruction vanishes, i.e., that the Chern class is zero, we compute the
Bott class of the complex codimension one foliation as defined in [2] and
studied in detail in [1], and then show that the system admits two globally
defined compatible Poisson structures if and only if the Bott class is
trivial.
For the rest of our work, we will assume that $Q$ and its dual $Q^{\ast}$ are
trivial bundles. By (3.14), $Q^{\ast}$ has two global sections
$\widehat{\boldsymbol{j}}_{i}=\ast\big(\iota_{\widehat{j}_{i}}\boldsymbol{\Omega}\big)$
satisfying
$\displaystyle{\rm
d}\widehat{\boldsymbol{j}}_{i}=\boldsymbol{\Gamma}_{i}\wedge\widehat{\boldsymbol{j}}_{i}$
(3.15)
for globally defined $\boldsymbol{\Gamma}_{i}$’s. These
$\widehat{\boldsymbol{j}}_{i}$’s are related with the local Poisson one-forms
$\boldsymbol{J}_{i}^{p}$ by
$\displaystyle\boldsymbol{J}_{i}^{p}=\big{\|}\boldsymbol{J}_{i}^{p}\big{\|}\widehat{\boldsymbol{j}}_{i}.$
(3.16)
By (3.7), we have
$\displaystyle{\rm
d}\boldsymbol{J}_{i}^{p}=\boldsymbol{\gamma}^{p}\wedge\boldsymbol{J}_{i}^{p}.$
(3.17)
Inserting (3.16) and (3.17) into (3.15), we also have
$\displaystyle{\rm
d}\widehat{\boldsymbol{j}}_{i}=\big{(}\boldsymbol{\gamma}^{p}-{\rm
d}\ln\big{\|}\boldsymbol{J}_{i}^{p}\big{\|}\big{)}\wedge\widehat{\boldsymbol{j}}_{i}.$
(3.18)
Redefining $\boldsymbol{\Gamma}_{i}$’s if necessary, comparing (3.15) with
(3.18), we get
$\displaystyle\boldsymbol{\Gamma}_{i}=\boldsymbol{\gamma}^{p}-{\rm
d}\ln\big{\|}\boldsymbol{J}_{i}^{p}\big{\|}.$ (3.19)
###### Proposition 3.7.
Let $\boldsymbol{\kappa}$ be the curvature two-form of $Q$. There exists a
compatible pair of global Poisson structures if and only if
$\displaystyle\boldsymbol{\Xi}=(\boldsymbol{\Gamma}_{1}-\boldsymbol{\Gamma}_{2})\wedge\boldsymbol{\kappa}$
is exact.
###### Proof.
Since $\widehat{\boldsymbol{j}}_{1}$ and $\widehat{\boldsymbol{j}}_{2}$ may
not be compatible, we introduce a local Poisson form $\boldsymbol{j}^{p}$
defined on the coordinate neighborhood $U_{p}$ of $p\in M$, which is
compatible with $\widehat{\boldsymbol{j}}_{1}$ and parallel to
$\widehat{\boldsymbol{j}}_{2}$ i.e.,
$\displaystyle\boldsymbol{j}^{p}=f^{p}\widehat{\boldsymbol{j}}_{2}$ (3.20)
and
$\displaystyle\widehat{\boldsymbol{j}}_{1}\wedge{\rm
d}\boldsymbol{j}^{p}+\boldsymbol{j}^{p}\wedge{\rm
d}\widehat{\boldsymbol{j}}_{1}=0.$ (3.21)
Now (3.20) implies that
$\displaystyle{\rm d}\boldsymbol{j}^{p}=\big{(}\boldsymbol{\Gamma}_{2}+{\rm
d}\ln f^{p}\big{)}\wedge\boldsymbol{j}^{p}.$ (3.22)
Putting (3.15) and (3.22) into (3.21) and using (3.20), we get
$\displaystyle\big{(}\boldsymbol{\Gamma}_{1}-\boldsymbol{\Gamma}_{2}-{\rm
d}\ln
f^{p}\big{)}\wedge\widehat{\boldsymbol{j}}_{1}\wedge\boldsymbol{j}^{p}=0$
which implies
$\displaystyle(\boldsymbol{\Gamma}_{1}-\boldsymbol{\Gamma}_{2})\wedge\widehat{\boldsymbol{j}}_{1}\wedge\widehat{\boldsymbol{j}}_{2}={\rm
d}\ln
f^{p}\wedge\widehat{\boldsymbol{j}}_{1}\wedge\widehat{\boldsymbol{j}}_{2}.$
(3.23)
Our aim here is to find the obstruction to extending $f^{p}$ to $M$, or for
(3.23) to hold globally. For this purpose, we consider the connections on $Q$
defined by $\Gamma_{i}$’s. By (3.19), we define the curvature of these
connections to be
$\displaystyle\boldsymbol{\kappa}={\rm d}\boldsymbol{\Gamma}_{i}={\rm
d}\boldsymbol{\gamma}^{p}.$
Taking the exterior derivative of (3.17) and using (3.16), we get
$\displaystyle{\rm d}\boldsymbol{\gamma}^{p}\wedge\boldsymbol{J}_{i}^{p}={\rm
d}\boldsymbol{\gamma}^{p}\wedge\widehat{\boldsymbol{j}}_{i}=0,$
which leads to
$\displaystyle\boldsymbol{\kappa}={\rm
d}\boldsymbol{\gamma}^{p}=\varphi\widehat{\boldsymbol{j}}_{1}\wedge\widehat{\boldsymbol{j}}_{2}.$
(3.24)
Now multiplying both sides of (3.23) with $\varphi$,
$\displaystyle(\boldsymbol{\Gamma}_{1}-\boldsymbol{\Gamma}_{2})\wedge\boldsymbol{\kappa}={\rm
d}\ln f^{p}\wedge\boldsymbol{\kappa}={\rm d}\big{(}\big{(}\ln
f^{p}\big{)}\boldsymbol{\kappa}\big{)}$
and the proposition follows. ∎
Now we are going to show that the cohomology class of $\boldsymbol{\Xi}$
vanishes if and only if the Bott class of the complex codimension 1 foliation
vanishes. Since $Q$ is a complex line bundle, we have
$\displaystyle\boldsymbol{c}_{1}(Q)=[\boldsymbol{\kappa}],$
and the vanishing of $\boldsymbol{c}_{1}(Q)$, which is a necessary condition,
implies that the representative form is exact,
$\displaystyle\boldsymbol{c}_{1}={\rm d}\boldsymbol{h}_{1}.$
So we have
$\displaystyle\boldsymbol{c}_{1}=[\boldsymbol{\kappa}]=\big{[}{\rm
d}\boldsymbol{\gamma}^{p}\big{]},$
which implies that on $U_{p}$
$\displaystyle\boldsymbol{h}_{1}=\boldsymbol{\gamma}^{p}+{\rm d}\ln h^{p}.$
Then, the Bott class [2] becomes
$\displaystyle\boldsymbol{h}_{1}\wedge\boldsymbol{c}_{1}=\big{(}\boldsymbol{\gamma}^{p}+{\rm
d}\ln h^{p}\big{)}\wedge{\rm d}\boldsymbol{\gamma}^{p}={\rm d}\ln
h^{p}\wedge\boldsymbol{\kappa}+\boldsymbol{\gamma}^{p}\wedge{\rm
d}\boldsymbol{\gamma}^{p}.$
Now by (3.6) and (3.24) we have
$\displaystyle\boldsymbol{\gamma}^{p}\wedge{\rm d}\boldsymbol{\gamma}^{p}=0,$
and therefore,
$\displaystyle\boldsymbol{h}_{1}\wedge\boldsymbol{c}_{1}={\rm
d}\big{(}\big{(}\ln h^{p}\big{)}\boldsymbol{\kappa}\big{)}.$
Since $\boldsymbol{h}_{1}$ is globally defined, on $U_{p}\cap U_{q}$ we have
$\displaystyle\boldsymbol{h}_{1}=\boldsymbol{\gamma}^{p}+{\rm d}\ln
h^{p}=\boldsymbol{\gamma}^{q}+{\rm d}\ln h^{q}$
and
$\displaystyle\boldsymbol{\gamma}^{p}-\boldsymbol{\gamma}^{q}={\rm
d}\ln\frac{h^{q}}{h^{p}}.$ (3.25)
Now we have the following theorem:
###### Theorem 3.8.
The cohomology class of $\boldsymbol{\Xi}$ vanishes if and only if the Bott
class of the complex codimension one foliation defined by the nonvanishing
vector field vanishes.
###### Proof.
If the Bott class vanishes, then we have a globally defined function $h$ such
that
$\displaystyle{\rm d}((\ln h)\boldsymbol{\kappa})=0.$
Then, choosing $f=h$ leads to a compatible pair of global Poisson structures.
Conversely, if there is a pair of globally defined compatible Poisson
structures, then $\boldsymbol{\gamma}$ becomes a global form, and by (3.25) we
have
$\displaystyle{\rm d}\ln\frac{h^{q}}{h^{p}}=0$
on $U_{p}\cap U_{q}$. Therefore,
$\displaystyle\ln h^{q}-\ln h^{p}=c^{qp},$
where $c^{qp}$ is a constant on $U_{p}\cap U_{q}$. Now, fixing a point
$x_{0}\in U_{p}\cap U_{q}$
$\displaystyle c^{qp}=\ln h^{q}(x_{0})-\ln h^{p}(x_{0})=\ln c^{q}-\ln c^{p},$
we obtain
$\displaystyle\frac{h^{p}}{c^{p}}=\frac{h^{q}}{c^{q}}=h,$
where $h$ is a globally defined function, and
$\displaystyle{\rm d}\ln h={\rm d}\ln h^{p}.$
Therefore,
$\displaystyle[\boldsymbol{h}_{1}\wedge\boldsymbol{c}_{1}]=[{\rm d}((\ln
h)\boldsymbol{\kappa})]=0$
and the theorem follows. ∎
### Acknowledgements
We are indebted to Professor Turgut Önder for his help during this work. We
also thank the anonymous referees for their comments and corrections.
## References
* [1] Asuke T., A remark on the Bott class, Ann. Fac. Sci. Toulouse Math. 10 (2001), 5–21.
* [2] Bott R., Lectures on characteristic classes and foliations, in Lectures on Algebraic and Differential Topology (Second Latin American School in Math., Mexico City, 1971), Lecture Notes in Math., Vol. 279, Springer, Berlin, 1972, 1–94.
* [3] Dibag I., Decomposition in the large of two-forms of constant rank, Ann. Inst. Fourier (Grenoble) 24 (1974), 317–335.
* [4] Gümral H., Nutku Y., Poisson structure of dynamical systems with three degrees of freedom, J. Math. Phys. 34 (1993), 5691–5723.
* [5] Haas F., Goedert J., On the generalized Hamiltonian structure of $3$D dynamical systems, Phys. Lett. A 199 (1995), 173–179, math-ph/0211035.
* [6] Hernández-Bermejo B., New solutions of the Jacobi equations for three-dimensional Poisson structures, J. Math. Phys. 42 (2001), 4984–4996.
* [7] Ince E.L., Ordinary differential equations, Dover Publications, New York, 1944.
* [8] Laurent-Gengoux C., Pichereau A., Vanhaecke P., Poisson structures, Grundlehren der Mathematischen Wissenschaften, Vol. 347, Springer, Heidelberg, 2013.
* [9] Magri F., A simple model of the integrable Hamiltonian equation, J. Math. Phys. 19 (1978), 1156–1162.
* [10] Olver P.J., Canonical forms and integrability of bi-Hamiltonian systems, Phys. Lett. A 148 (1990), 177–187.
* [11] Santoprete M., On the relationship between two notions of compatibility for bi-Hamiltonian systems, SIGMA 11 (2015), 089, 11 pages, arXiv:1506.08675.
* [12] Stiefel E., Richtungsfelder und Fernparallelismus in $n$-dimensionalen Mannigfaltigkeiten, Comment. Math. Helv. 8 (1935), 305–353.
* [13] Weinstein A., The local structure of Poisson manifolds, J. Differential Geom. 18 (1983), 523–557.
# A unified framework for coordination of thermostatically controlled loads
Austin Coffman <EMAIL_ADDRESS> (University of Florida, Gainesville, FL, USA)
Ana Bušić <EMAIL_ADDRESS> (Inria, Paris, France)
Prabir Barooah <EMAIL_ADDRESS> (University of Florida, Gainesville, FL, USA)
###### Abstract
A collection of thermostatically controlled loads (TCLs) – such as air
conditioners and water heaters – can vary their power consumption within
limits to help the balancing authority of a power grid maintain demand supply
balance. Doing so requires loads to coordinate their on/off decisions so that
the aggregate power consumption profile tracks a grid-supplied reference. At
the same time, each consumer’s quality of service (QoS) must be maintained.
While there is a large body of work on TCL coordination, existing methods have
several limitations. One is that they do not provide guarantees on the reference
tracking performance and QoS maintenance. A second limitation of past works is
that they do not provide a means to compute a suitable reference signal for
the power demand of a collection of TCLs. In this work we provide a framework that
addresses these weaknesses. The framework enables coordination of an arbitrary
number of TCLs that: (i) is computationally efficient, (ii) is implementable
at the TCLs with local feedback and low communication, and (iii) enables
reference tracking by the collection while ensuring that temperature and
cycling constraints are satisfied at every TCL at all times. The framework is
based on a Markov model obtained by discretizing a pair of Fokker-Planck
equations derived in earlier work by Malhame and Chong [21]. We then use this
model to design randomized policies for TCLs. The balancing authority
broadcasts the same policy to all TCLs, and each TCL implements this policy
which requires only local measurement to make on/off decisions. Simulation
results are provided to support these claims.
###### keywords:
Distributed control, Grid support, Randomized control, Thermostatically
controlled loads.
††thanks: This paper was not presented at any IFAC meeting. Corresponding
author A. Coffman. The research reported here has been partially supported by
the NSF through awards 1646229 (CPS-ECCS) and 1934322 (CPS-ECCS), and the
French National Research Agency grant ANR-16-CE05-0008.
###### Contents
1. 1 Introduction
1. 1.1 Literature review and contribution
2. 1.2 Notation
2. 2 Modeling: Individual TCL
1. 2.1 Temperature dynamics of TCLs
1. 2.1.1 Policy (at the TCL)
2. 2.2 PDE model
3. 3 Markov model from PDE Discretization
1. 3.1 Spatial discretization
2. 3.2 Temporal discretization
4. 4 Discrete space model of a TCL: structure and grid friendly policies
1. 4.1 Discrete state space
2. 4.2 Conditional independence in $P_{k}$
1. 4.2.1 Constructing the factorization
3. 4.3 BA control command $=$ policy
5. 5 Proposed framework
1. 5.1 Individual TCL model with cycling
2. 5.2 Aggregate model of a collection of TCLs
1. 5.2.1 Evaluating the aggregate model
3. 5.3 Grid support Policy design
1. 5.3.1 Convex control synthesis
2. 5.3.2 Computational considerations
3. 5.3.3 Communication burden
6. 6 Numerical experiments
1. 6.1 Planning
2. 6.2 Real time control
7. 7 Conclusion
8. A Proofs
1. A.1 Proof of Lemma 1
2. A.2 Proof of Lemma 3
3. A.3 Proof of Lemma 4
4. A.4 Proof of Theorem 1
1. A.4.1 $\eta^{*}_{\text{CVX}}\leq\eta^{*}_{\text{NCVX}}$
2. A.4.2 $\eta^{*}_{\text{NCVX}}\leq\eta^{*}_{\text{CVX}}$
9. B PDE discretization
1. B.1 Internal CV’s
2. B.2 Boundary CV’s
1. B.2.1 Additional conditions
3. B.3 Overall system
## 1 Introduction
Many loads are flexible in their power demand: they can vary their demand
around a baseline without adversely affecting consumers’ quality of service
(QoS). The flexibility can be used by a balancing authority (BA) to balance
supply and demand in a power grid. The baseline demand refers to the power
demand under normal operation, when each load operates only to meet its
consumer’s QoS without any interference from the BA. Since the rated power of
each load is small, it is necessary to use a collection of loads. To provide
grid support, the collection has to vary its demand from its baseline. It is
envisioned that the BA would supply a reference signal for power demand and
the actions of the loads in a collection would be coordinated so that their
total demand tracks this reference.
Thermostatically controlled loads (TCLs) - such as residential air
conditioners, heat pumps, and water heaters - are recognized to be valuable
sources of flexible demand [4, 6, 23, 18]. For an air conditioner or a heat
pump, baseline demand is largely dictated by ambient weather conditions. There
are at least two QoS requirements: the indoor temperature must be maintained
within a prespecified range and compressor short-cycling must be avoided,
meaning, once the compressor turns on it cannot turn off until a prespecified
time period elapses, and vice versa. Coordination of TCLs involves two
conflicting requirements: (i) the TCLs collectively need to track the
reference power demand signal, and (ii) every TCL’s QoS needs to be maintained.
The actuation at each TCL is discrete: it can either be on or off. Direct load
control [7], in which a centralized controller at the BA directly commands
the on/off status of each TCL, is not scalable to large populations. A more
scalable idea, that subsequent works on TCL coordination use, is for the BA to
broadcast a low dimensional control command to all TCLs, which is translated
by each TCL into its actuation command with a local policy. To avoid confusion
between the decision making at the BA and a TCL, we use the word “policy” to
mean the algorithm at a TCL that makes on/off decisions. The literature on
decentralized coordination of TCLs differ in their choice of the broadcast
signal (i.e., BA’s control command) and the policy at the TCL that translates
this broadcast to on/off decisions. Coordination architectures can be divided
into two broad categories based on these choices: (i) thermostat set point
change [4, 1] and (ii) probabilistic control [23, 20, 6, 9]. These are
discussed in more detail in Section 1.1.
A framework for coordinating TCLs needs two parts. The coordination scheme is
one part. The other part is reference computation: the framework must provide
the BA with a method to determine a suitable reference signal for the TCLs.
That is, the reference must be such that the TCLs can collectively track the
signal while each TCL maintains its QoS. Otherwise, even the best coordination
scheme will fail to meet either the BA’s need, which is reference tracking, or
the consumers’ need, which is maintaining indoor temperature etc., or both.
This work presents a unified framework for coordination of a collection of
TCLs for providing grid support services. The framework enables both of the
above mentioned components, i.e., (i) planning a suitable reference for a
collection of TCLs and (ii) designing a randomized policy for coordination of
the individual TCLs, so that both the BA’s requirement and consumers’ QoS are
satisfied. In the proposed framework, the BA computes randomized control
policies for the TCLs and broadcasts them to all the TCLs. Each TCL receives
the same policy and implements it using locally measurable information. The
framework is computationally tractable for an arbitrary number of TCLs. The
communication burden is low: only a few numbers need to be broadcast by the BA
at every sampling instant. Feedback from TCLs to the BA can be infrequent.
Underlying the framework are: (i) a Markov chain model that is derived from
partial differential equations developed in the early work of Malhame and
Chong [21], (ii) state augmentation to incorporate cycling constraints, and
(iii) convexification of the non-convex problem that appears in the design of
the randomized control policy for the individual TCL. Additionally, we show
that the assumption about the effect of weather made in earlier work on
randomized control [3] is in fact true under certain conditions.
### 1.1 Literature review and contribution
Before reviewing coordination methods, we discuss two interrelated modeling
approaches that underpin many of the ideas in the TCL control architectures.
These are the Markov chain and partial differential equation (PDE) models [16,
21, 26, 17, 28, 24], which stem from the early work of Malhame and Chong [21].
In [21] a pair of coupled Fokker-Planck equations are developed to model a
collection of TCLs under thermostat control. The Fokker-Planck equations are
PDEs that describe the time evolution of certain probability density
functions (pdfs) over the state space of temperature and on/off mode. The PDEs
can be used to model the entire collection or a single TCL: the probability
that a single TCL is “on” is approximately the fraction of TCLs that are “on”.
Discretizing the PDE yields a Markov chain model, though some works have
obtained Markov models without using the PDEs. Hence, _one_ set of PDEs can
model a collection of TCLs. Thus, methods that base control design on the PDE
or Markov chain framework scale well with the number of TCLs.
Due to the lack of scalability of direct load control, we limit our attention
to the two broad classes mentioned earlier: (i) thermostat set point changes,
(ii) probabilistic policy. There are many forms of probabilistic policy, which
can be roughly subdivided into two sub categories: (ii-A) “bin switching” and
(ii-B) “randomized policy”. We discuss these in detail below.
In the thermostat setpoint change coordination architecture, a time-varying
thermostat set point is broadcast to all TCLs, and each TCL makes on/off
decisions based on this new setpoint [4, 1]. This approach may require an
extremely small change in the thermostat setpoint, far below the resolution of
the temperature sensor at each TCL, or a large change that violates occupant
comfort.
In a probabilistic policy architecture, the TCL policy - the mapping from BA’s
broadcast command to a TCL’s on/off decision - is a non-deterministic mapping.
Works in this category typically first model the population of TCLs under
thermostat control, which is a deterministic policy, as a Markov chain. The
continuous temperature range is divided into a number of discrete bins. A
finite dimensional state vector, a probability mass function, is then defined.
Each entry of the state vector represents “the fraction of TCLs that are on
(or off) and have temperature in a certain range.”
Since the basic Markov model is derived for the thermostat policy,
introduction of the BA’s control to manipulate TCLs’ on/off state is somewhat
ad-hoc. In the bin switching literature, the control command from the BA
is chosen so as to affect the fraction of TCLs in the temperature bins
directly. In [23], the BA’s control command is chosen to be another vector,
whose $i^{\text{th}}$ entry represents “the fraction of TCLs in bin $i$ to
increase/decrease”. A policy is then proposed to translate this command to
on/off action at each TCL, which requires knowledge of the state of the Markov
model. In [20], BA’s control command is chosen to be a scalar. The probability
of a TCL turning on or off is proportional to this scalar. Subsequent works
have proposed various refinements, such as having the BA’s command affect the
rate at which fractions switch rather than the fraction that switches [26].
Providing performance guarantees with the bin switching architecture has
proved challenging, either for reference tracking or for QoS maintenance at
individual TCLs.
An alternative to bin switching that still uses probabilistic on/off decision
making is randomized policy [3, 6]. A randomized policy is a specification of
the conditional probability of turning on or off given the current state.
On/off decisions are computed with the help of a random number generator and
the policy. In this architecture it is envisioned that the thermostat policy
at the TCL is replaced with a randomized policy. In [3, 6], the policy is
parameterized by a scalar $\zeta(t)$. Coordination of the population is then
achieved by appropriate design of $\zeta(t)$, which is computed and broadcast
by the BA. This architecture also uses a Markov model of the evolution of
binned temperature, but assumes a certain factorization: the next values of
the temperature and mode are conditionally independent given the current joint
pair of temperature and mode values under the effects of the randomized policy
and exogenous disturbances, especially weather. That is, the transition matrix
of the state process is a pointwise product of two controlled transition
matrices. In an optimal control setting, computation of the BA’s control
command, $\zeta(t)$, for reference tracking is a non-convex optimization
problem [11]. The probability of turning on when temperature exceeds the upper
limit, or off when temperature dips below the lower limit, is set to 1 by
design. This will ensure the temperature constraint is maintained. Attempts
have been made to maintain the cycling constraint [9]. But a formal design
method to incorporate the cycling constraint has been lacking.
A complete framework for coordination of TCL needs not only a control
algorithm to make decisions at TCLs, but also a method to compute a _feasible_
reference signal for the collection’s power demand. Feasible means that no TCL
needs to violate local constraints in order for the collection to track the
reference. The topic is sometimes described as “flexibility capacity” and has
been examined in many recent works, with various definitions of flexibility
[15, 25, 13, 10]. A unified treatment of reference design and coordination
algorithm design that would provide a complete framework is lacking.
In short, existing work on TCL coordination has a number of disadvantages.
Direct load control suffers from scalability/privacy issues and
thermostat set-point methods have implementation issues. Bin switching does
not provide guarantees on reference tracking and often requires solving a
challenging state estimation problem. Prior work on randomized control
requires non-convex optimization and is based on an assumed conditional
independence. Finally, there is a lack of unified treatment of the reference
design and policy design problems.
In this work we develop a unified framework for coordination of TCLs that
addresses the weaknesses of prior work described above. Our major
contributions are as follows.
1. 1.
We provide a complete framework that allows the BA to compute (a) an optimal
reference signal that is feasible for the collection and (b) optimal
randomized policies for the TCLs. When the TCLs implement these policies,
their total power demand collectively tracks the reference signal and the
policies guarantee that temperature and cycling QoS requirements at each TCL
are satisfied. Optimal reference means it is closest to what the BA wants
while being feasible for the TCLs. Implementation of the policy at a TCL is
easy; it requires only local measurements. The communication burden for
coordination is also low. At each sampling time, a randomized control policy -
parameterized by a few numbers - is broadcast to all TCLs. Feedback from TCLs
to the BA can be infrequent.
2. 2.
Our framework is based on a careful discretization of the partial differential
equation (PDE) model described in [21]. This discretization shows that a
certain “conditional independence” that was assumed in [3] indeed holds. The
conditional independence separates the effects of the policy at the TCL
(control) and weather (disturbance) on the transition matrix, and greatly
facilitates computation of policies.
3. 3.
Numerical experiments are provided to illustrate the efficacy of the
framework. Simulations show that the TCLs are able to track the optimal
reference collectively while each TCL maintains both its temperature and
cycling constraints. A Matlab implementation is made publicly available at
[8].
Figure 1 illustrates the two parts of the proposed framework.
Figure 1: Coordination architecture with the proposed framework.
The Markov model obtained by discretizing a PDE was presented in [12]. For
completeness, we include the discretization in this paper as an Appendix.
### 1.2 Notation
The symbol $\mathbb{1}$ denotes the vector of all ones, $\mathbf{e}_{i}$
denotes the $i^{\text{th}}$ canonical basis vector, and $\mathbf{0}$ denotes
the zero matrix or vector, all of appropriate dimension. For a vector $v$,
$\text{diag}(v)$ denotes the diagonal matrix with entries of $v$, i.e.,
$\text{diag}(v)\mathbb{1}=v$. Further, $\otimes$ denotes matrix Kronecker
product and $\mathbf{I}_{A}(\cdot)$ the indicator function of the set $A$.
## 2 Modeling: Individual TCL
A thermostatically controlled load (TCL) is an on/off device that ensures the
temperature of a given environment remains within a specified region, e.g., an
air conditioner. During its operation, the TCL must adhere to certain
operational requirements (QoS constraints). We consider two: the temperature
constraint and the cycling constraint. The temperature constraint is that the
TCL’s temperature must remain within a prespecified deadband,
$[\lambda^{\min},\lambda^{\max}]$. This is achieved by switching the TCL on or
off when it is too hot or cold. The cycling constraint is that the TCL can
only change from “on” to “off” or vice versa once every $\tau$ (discrete) time
instants, where $\tau$ is a prespecified constant. The cycling constraint is
to ensure the mechanical hardware is not damaged. In both cases, ensuring the
two constraints amounts to appropriately deciding when to switch the TCL on or
off.
### 2.1 Temperature dynamics of TCLs
The typical model for the TCL’s temperature $\theta(t)$ in the literature is
the following ordinary differential equation (ODE),
$\displaystyle\begin{split}\frac{d}{dt}\theta(t)&=f_{m}(\theta,t),\quad\text{with}\\\
f_{m}(\theta,t)&=-\frac{1}{RC}\left(\theta-\theta^{a}(t)\right)-m(t)\frac{\eta
P_{0}}{C}.\end{split}$ (1)
The rated electrical power consumption is denoted $P_{0}$ with coefficient of
performance (COP) $\eta$. The parameters $R$ and $C$ denote thermal resistance
and capacitance, respectively. The signal $\theta^{a}(t)$ is the ambient
temperature. The quantity $m(t)$ is the on/off mode, and in the following we
identify $m(t)=1$ and $m(t)=$ on, as well as $m(t)=0$ and $m(t)=$ off. We
denote arbitrary temperature values through the variable $\lambda$, and the
thermostat setpoint as $\lambda^{\text{set}}$. The values
$\lambda^{\text{max}}$ and $\lambda^{\text{min}}$ set the upper and lower
limit for the temperature deadband.
A model for the temperature state that accounts for modeling errors in (1) and
will be crucial in developing the content in Section 2.2 is the following Itô
stochastic differential equation (SDE),
$\displaystyle d\theta(t)=f_{m}(\theta,t)dt+\sigma dB(t).$ (2)
The term $B(t)$ is Brownian motion with parameter $\sigma>0$, and the quantity
$\sigma dB(t)$ captures modeling errors in (1). In either model, the baseline
power for the TCL is the value of the electrical power $P$ (in place of
$P_{0}$ in (1)) for which $f_{1}(\lambda^{\text{set}},t)=0$; solving yields:
$\displaystyle\text{Baseline
Power:}\quad\bar{P}^{\text{ind}}(t)=\frac{\theta^{a}(t)-\lambda^{\text{set}}}{\eta
R}.$ (3)
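As a quick numeric sanity check of (3), the following minimal sketch uses the $\eta$ and $R$ values of Table 1; the ambient temperature and setpoint are assumed illustrative values:

```python
# Baseline power (3): the steady electrical power that holds the TCL at its
# setpoint. eta and R are from Table 1; theta_a and lam_set are assumed.
eta = 2.5        # coefficient of performance
R = 2.0          # thermal resistance (deg C / kW)
theta_a = 32.0   # ambient temperature (deg C), assumed
lam_set = 21.0   # thermostat setpoint (deg C), middle of the deadband

P_bar_ind = (theta_a - lam_set) / (eta * R)   # equation (3)
print(f"baseline power: {P_bar_ind:.2f} kW")  # -> 2.20 kW per TCL
```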
For ${\sf{N_{tcl}}}$ TCLs the baseline power $\bar{P}(t)$ and maximum power
$P_{\text{\footnotesize{agg}}}$ are,
$\displaystyle\bar{P}(t)\triangleq{\sf{N_{tcl}}}\bar{P}^{\text{ind}}(t),\quad\text{and}\quad
P_{\text{\footnotesize{agg}}}\triangleq{\sf{N_{tcl}}}P_{0}.$ (4)
The total electrical power consumption of the collection, whether with
thermostat policy or some other policy, is denoted by $y(t)$:
$\displaystyle y(t)\triangleq P_{0}\sum_{\ell=1}^{{\sf{N_{tcl}}}}m^{\ell}(t)$
(5)
where $m^{\ell}(t)$ is the on/off state of the $\ell$-th TCL.
#### 2.1.1 Policy (at the TCL)
The mode state of a TCL evolves according to a policy. The following policy,
which we denote as the _thermostat policy_ , ensures the temperature
constraint:
$\displaystyle\lim_{\epsilon\rightarrow 0}\
m(t+\epsilon)=\begin{cases}1,&\theta(t)\geq\lambda^{\text{max}}.\\\
0,&\theta(t)\leq\lambda^{\text{min}}.\\\ m(t),&\text{o.w.}\end{cases}$ (6)
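To make the dynamics and the policy concrete, here is a minimal simulation sketch of the SDE (2) under the thermostat policy (6) via Euler-Maruyama for a cooling TCL; $R$, $C$, $\eta$, $P_{0}$, and the deadband are the Table 1 values, while the ambient temperature and noise level $\sigma$ are assumed for illustration:

```python
import numpy as np

# Euler-Maruyama simulation of (2) with the thermostat policy (6).
rng = np.random.default_rng(0)
R, C, eta, P0 = 2.0, 1.0, 2.5, 5.5       # deg C/kW, kWh/deg C, COP, kW
lam_min, lam_max = 20.0, 22.0            # deadband (deg C)
theta_a, sigma = 32.0, 0.05              # ambient temp (deg C), assumed noise
dt = 1.0 / 60.0                          # 1-minute steps (hours, since RC is in hours)

theta, m = 21.0, 0                       # initial temperature and mode
for k in range(24 * 60):                 # one simulated day
    f_m = -(theta - theta_a) / (R * C) - m * eta * P0 / C   # drift, as in (1)
    theta += f_m * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if theta >= lam_max:                 # too hot: switch on (cooling TCL)
        m = 1
    elif theta <= lam_min:               # too cold: switch off
        m = 0                            # otherwise keep the current mode
```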
We add the following set of assumptions about the individual TCL discussed so
far.
1. A.1
The thermostat policy does not violate the cycling constraint.
2. A.2
For all $t\geq 0$ and $\theta\in[\lambda^{\text{min}},\lambda^{\text{max}}]$,
$f_{\text{\footnotesize{on}}}(\theta,t)\leq 0$ and
$f_{\text{\footnotesize{off}}}(\theta,t)\geq 0$.
3. A.3
The TCL’s cycling and temperature constraints are simultaneously feasible.
The sizing/design of the TCL typically ensures that A.1 holds. Given A.1, we
defer further discussion of the cycling constraint until Section 5, since up
to that point the mode state is assumed to evolve according to (6).
Assumption A.2 states that when the TCL is on, the temperature does not
increase and when the TCL is off the temperature does not decrease. All prior
works focusing on cooling TCLs (e.g., air conditioners) implicitly make this
assumption. Every result that is to follow is also valid for heating TCLs
(e.g., a water heater or a heat pump) with a sign reversal.
Like A.2, assumption A.3 is implicit in any work that considers both the
TCL’s temperature and cycling constraints.
### 2.2 PDE model
We now describe a PDE model of a TCL’s temperature with thermostat policy
originally derived in [21]. Consider the following marginal pdfs
$\mu_{\text{\footnotesize{on}}},\mu_{\text{\footnotesize{off}}}$:
$\displaystyle\mu_{\text{\footnotesize{on}}}(\lambda,t)d\lambda$
$\displaystyle={\sf P}\left((\lambda<\theta(t)\leq\lambda+d\lambda),\
m(t)=\text{on}\right),$ (7)
$\displaystyle\mu_{\text{\footnotesize{off}}}(\lambda,t)d\lambda$
$\displaystyle={\sf P}\left((\lambda<\theta(t)\leq\lambda+d\lambda),\
m(t)=\text{off}\right),$ (8)
where ${\sf P}(\cdot)$ denotes probability, $\theta(t)$ evolves according to
(2) and for now $m(t)$ evolves according to (6). It was shown in [21] that the
densities $\mu_{\text{\footnotesize{on}}}$ and
$\mu_{\text{\footnotesize{off}}}$ satisfy the Fokker-Planck equations,
$\displaystyle\frac{\partial}{\partial
t}\mu_{\text{\footnotesize{on}}}(\lambda,t)$
$\displaystyle=\frac{\sigma^{2}}{2}\nabla^{2}_{\lambda}\mu_{\text{\footnotesize{on}}}(\lambda,t)-\nabla_{\lambda}\Big{(}f_{\text{\footnotesize{on}}}(\lambda,t)\mu_{\text{\footnotesize{on}}}(\lambda,t)\Big{)}$
(9) $\displaystyle\frac{\partial}{\partial
t}\mu_{\text{\footnotesize{off}}}(\lambda,t)$
$\displaystyle=\frac{\sigma^{2}}{2}\nabla^{2}_{\lambda}\mu_{\text{\footnotesize{off}}}(\lambda,t)-\nabla_{\lambda}\big{(}f_{\text{\footnotesize{off}}}(\lambda,t)\mu_{\text{\footnotesize{off}}}(\lambda,t)\big{)}$
(10)
that are coupled through their boundary conditions [21]. The boundary
conditions are listed in Appendix B.2.
###### Remark 1.
The coupled equations (9)-(10) can be used to model either: (i) a _single_ TCL
or (ii) a _collection_ of TCLs. For (i) the quantities (7)-(8) represent the
_probability_ that a single TCL’s temperature and on/off mode reside in the
respective region. For (ii) the quantities (7)-(8) represent the _fraction_ of
TCLs whose temperature and on/off mode reside in the respective region. How
the equations (9)-(10) (specifically their discretized form) can be used to
model an ensemble is discussed further in Section 5.2.
Figure 2: The control volumes (CVs). The colors correspond to the colors found
in Figure 3. The values in each CV represent the nodal temperature for the CV.
The arrows describe the sign of the convection of the TCL through the CVs. The
values are such that $N=m+q$. The terms involving $\alpha$ model rate of
transfer between the corresponding CVs due to the thermostat policy, where
$\alpha=\gamma+\frac{\sigma^{2}}{(\Delta\lambda)^{2}}$. The parameter
$\gamma>0$ is a design parameter; see Remark 4.3.
## 3 Markov model from PDE Discretization
We use the finite volume method (FVM) to discretize the PDEs (9) and (10). The
discretization of (9) and (10) yields a finite dimensional probabilistic model
for a single TCL (equation (16)). We discretize the PDEs (9) and (10) in such
a way that a control input for the BA can subsequently be identified; this
point is developed in Section 4, where the discretization performed here plays
a central role.
### 3.1 Spatial discretization
The FVM bins the continuous temperature range into $N$ control volumes (CVs).
The layout of the CVs is shown in Figure 2. The $N$ CVs for both the on and
off mode state, as shown in Figure 2, are defined through the nodal temperature
values ($\lambda_{\text{\footnotesize{on}}}$ and
$\lambda_{\text{\footnotesize{off}}}$) and their boundaries
($\lambda^{+}_{\text{\footnotesize{on}}}$ and
$\lambda^{+}_{\text{\footnotesize{off}}}$) and
($\lambda^{-}_{\text{\footnotesize{on}}}$ and
$\lambda^{-}_{\text{\footnotesize{off}}}$):
$\displaystyle\lambda_{\text{\footnotesize{on}}}=(\lambda^{i}_{\text{\footnotesize{on}}})_{i=1}^{N},\quad\lambda_{\text{\footnotesize{on}}}^{+}=\lambda_{\text{\footnotesize{on}}}+\frac{\Delta\lambda}{2},\quad\lambda_{\text{\footnotesize{on}}}^{-}=\lambda_{\text{\footnotesize{on}}}-\frac{\Delta\lambda}{2},$
$\displaystyle\lambda_{\text{\footnotesize{off}}}=(\lambda^{i}_{\text{\footnotesize{off}}})_{i=1}^{N},\quad\lambda_{\text{\footnotesize{off}}}^{+}=\lambda_{\text{\footnotesize{off}}}+\frac{\Delta\lambda}{2},\quad\lambda_{\text{\footnotesize{off}}}^{-}=\lambda_{\text{\footnotesize{off}}}-\frac{\Delta\lambda}{2},$
where $\Delta\lambda$ is the CV width. All intermediate values of
$\lambda_{\text{\footnotesize{on}}}$ and $\lambda_{\text{\footnotesize{off}}}$
are separated from each other by $\Delta\lambda$. The values in
$\lambda^{+}_{\text{\footnotesize{on}}}$ (respectively,
$\lambda^{+}_{\text{\footnotesize{off}}}$) are the right edges of the CVs and
the values $\lambda^{-}_{\text{\footnotesize{on}}}$ (respectively,
$\lambda^{-}_{\text{\footnotesize{off}}}$) are the left edges of the CVs, for
example, $\lambda^{1,-}_{\text{\footnotesize{off}}}=\lambda^{\text{low}}$. The
quantities $\lambda^{\text{min}}$ and $\lambda^{\text{max}}$ specify the
thermostat deadband, and are _different_ from the quantities
$\lambda^{\text{high}}$ and $\lambda^{\text{low}}$ (see Figure 2).
Figure 3: Sparsity pattern of the matrix $A(t)$ for $N=51$ CVs for both the on
and off state. The colors correspond to the colors found in Figure 2.
The steps taken to obtain the spatially discretized PDEs are detailed in
Appendix B. To give an overview, the discretization is done in two parts: (i)
for the internal CVs (Appendix B.1) and (ii) for the boundary CVs (Appendix
B.2). We describe here the end result of the derivation in Appendix B. First,
define the following quantities
$\displaystyle\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$
$\displaystyle\triangleq\mu_{\text{\footnotesize{off}}}(\lambda^{i},t)\Delta\lambda,\quad\text{and}$
(11) $\displaystyle\nu_{\text{\footnotesize{on}}}(\lambda^{i},t)$
$\displaystyle\triangleq\mu_{\text{\footnotesize{on}}}(\lambda^{i},t)\Delta\lambda,$
(12)
then construct the row vector
$\nu(t)=[\nu_{\text{\footnotesize{off}}}(t),\nu_{\text{\footnotesize{on}}}(t)]$,
with
$\displaystyle\nu_{\text{\footnotesize{off}}}(t)$
$\displaystyle\triangleq[\nu_{\text{\footnotesize{off}}}(\lambda^{1},t),\dots,\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)],\quad\text{and}$
(13) $\displaystyle\nu_{\text{\footnotesize{on}}}(t)$
$\displaystyle\triangleq[\nu_{\text{\footnotesize{on}}}(\lambda^{1},t),\dots,\nu_{\text{\footnotesize{on}}}(\lambda^{N},t)].$
(14)
By combining the ordinary differential equations (ODEs) for
$\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$ and
$\nu_{\text{\footnotesize{on}}}(\lambda^{i},t)$ over all $i$, we obtain the
linear time-varying system
$\displaystyle\frac{d}{dt}\nu(t)=\nu(t)A(t).$ (15)
The sparsity pattern of $A(t)$ is shown in Figure 3. The system (15) is the
spatially discretized version of the PDEs (9)-(10). The matrix $A(t)$ also
satisfies the properties of a transition rate matrix, described in the
following lemma.
###### Lemma 1.
For all $t$, the matrix $A(t)$ is a transition rate matrix. That is, for all
$t$
(i): $\displaystyle\quad A(t)\mathbb{1}=\mathbf{0}.$ (ii):
$\displaystyle\quad\text{for all}\ i,\ A_{i,i}(t)\leq 0,\ \text{and}\
\text{for all}\ j\neq i\ A_{i,j}(t)\geq 0.$
###### Proof.
See Appendix A.1. ∎
###### Remark 2.
The choice of the FVM and how we discretize the convection and diffusion terms
appearing in (9)-(10) is important for $A(t)$ to satisfy the conditions in
Lemma 1. This issue is well known in the computational fluid dynamics (CFD)
literature, and is also recognized in the related work [2]. If a finite
difference method had been used with
central differences for both diffusion and convection terms, the resulting
$A(t)$ would require restrictive conditions on both $\sigma^{2}$ and
$\Delta\lambda$ to satisfy the properties in Lemma 1 [27].
### 3.2 Temporal discretization
To temporally integrate the dynamics (15) we use a first-order Euler
approximation with time step $\Delta t>0$. Making the identifications
$\nu_{k}\triangleq\nu(t_{k})$ and $A_{k}\triangleq A(t_{k})$ we have
$\displaystyle\nu_{k+1}$ $\displaystyle=\nu_{k}P_{k},\quad\text{with}\quad
P_{k}=I+\Delta tA_{k}.$ (16)
In the continuous time setting elements of the vector $\nu(t)$ were referred
to as, for example, $\nu_{\text{\footnotesize{on}}}(\lambda^{i},t)$. The
counterpart to this, in the discrete time setting, is referring to elements of
$\nu_{k}$ as, for example, $\nu_{\text{\footnotesize{on}}}[\lambda^{i},k]$. We
further have the following.
###### Lemma 2.
The matrix $P_{k}$ is a Markov transition probability matrix if
$\displaystyle\forall\ i,\ \text{and}\ \forall\ k,\quad 0<\Delta
t\leq\left|[A_{k}]_{i,i}\right|^{-1}.$
where $[A_{k}]_{i,i}$ is the $i^{th}$ diagonal element of the matrix $A_{k}$.
###### Proof.
From Lemma 1 we have that $P_{k}\mathbb{1}=I\mathbb{1}+\Delta
tA_{k}\mathbb{1}=\mathbb{1}$ since $A_{k}\mathbb{1}=0$. Also from Lemma 1,
every element of $A_{k}$ is non-negative, save for the diagonal elements.
Under the hypothesis of the lemma, every diagonal element of $I+\Delta
tA_{k}$ then lies in $[0,1]$. ∎
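A small sketch of this construction, with a stand-in rate matrix (in the framework, $A_{k}$ comes from the FVM discretization of Appendix B):

```python
import numpy as np

# Form P_k = I + dt * A_k and check the conditions of Lemmas 1-2.
A_k = np.array([[-2.0,  2.0,  0.0],
                [ 1.0, -3.0,  2.0],
                [ 0.0,  1.0, -1.0]])

assert np.allclose(A_k.sum(axis=1), 0.0)      # Lemma 1 (i): A 1 = 0
assert np.all(np.diag(A_k) <= 0)              # Lemma 1 (ii), diagonal part
dt = 0.9 / np.max(np.abs(np.diag(A_k)))       # satisfies the Lemma 2 bound

P_k = np.eye(3) + dt * A_k                    # equation (16)
assert np.allclose(P_k.sum(axis=1), 1.0) and np.all(P_k >= 0)  # stochastic
```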
###### Remark 3.
The bound on the time step $\Delta t$ given in Lemma 2 is $O(\Delta\lambda)$,
which follows from the PDE discretization; see Appendix B. Since
$\Delta\lambda=\frac{\lambda^{\text{high}}-\lambda^{\text{min}}}{N}$, as the
temperature resolution $\Delta\lambda$ becomes finer the time resolution
$\Delta t$ must also become finer at the same rate. See also Remark 4.3 for a
related comment.
## 4 Discrete space model of a TCL: structure and grid-friendly policies
Recall that the dynamics (16) derived in the previous section were for the
thermostat policy. We now delve into the structure of these dynamics so as to
introduce a BA control input. We first formalize a discrete state space for
the dynamics (16). We will then show that the transition matrix in (16) can be
written as $P_{k}=\Phi G_{k}$ where $\Phi$ depends on the thermostat policy
and $G_{k}$ on the TCL temperature dynamics and weather. The isolation of the
policy then indicates how a BA could introduce grid-friendly policies in place
of the thermostat policy $\Phi$.
### 4.1 Discrete state space
When the conditions of Lemma 2 are met, $P_{k}$ is a transition matrix and
hence each $\nu_{k}$ is a marginal pmf whenever $\nu_{0}$ is a pmf. The
structure of this marginal is given from (7) for the on state (a similar
interpretation holds for the off state) as,
$\displaystyle\nu_{\text{\footnotesize{on}}}[\lambda^{i},k]$
$\displaystyle={\sf P}\left(\theta(t_{k})\in\text{CV}(i),\
m(t_{k})=\text{on}\right),$ (17)
where $\theta(t_{k})$ is the temperature. Now denote,
$\theta_{k}\triangleq\theta(t_{k})$, $m_{k}\triangleq m(t_{k})$, and
$\displaystyle
I_{k}\triangleq\sum_{i=1}^{N}i\mathbf{I}_{\text{CV}(i)}(\theta_{k},m_{k}).$
(18)
The quantity $I_{k}$ indicates which CV the TCL’s temperature resides in at
time $k$. It is also a function of $m_{k}$, since the CV index for the on mode
differs from that of the off mode. We then define the following discrete state
space:
$\displaystyle{\sf
Z}\triangleq\\{m\in\\{\text{\footnotesize{on}},\text{\footnotesize{off}}\\},\
I\in\\{1,\dots,N\\}\\},$ (19)
with cardinality $\left|{\sf Z}\right|=2N$. Using the newly defined quantity
$I_{k}$ we rewrite the marginals
$\nu_{\text{\footnotesize{on}}}[\lambda^{i},k]$ and
$\nu_{\text{\footnotesize{off}}}[\lambda^{i},k]$ as functions on ${\sf Z}$,
$\displaystyle\nu_{\text{\footnotesize{on}}}[\lambda^{i},k]$
$\displaystyle={\sf P}\left(I_{k}=i,\ m_{k}=\text{on}\right),\quad\text{and}$
(20) $\displaystyle\nu_{\text{\footnotesize{off}}}[\lambda^{i},k]$
$\displaystyle={\sf P}\left(I_{k}=i,\ m_{k}=\text{off}\right).$ (21)
From the above, the matrix $P_{k}$ (with the conditions of Lemma 2 satisfied)
is the transition matrix for the joint process $(I_{k},m_{k})$ on the state
space ${\sf Z}$. The dynamic equation $\nu_{k+1}=\nu_{k}P_{k}$ is then a
probabilistic model for a TCL with state space ${\sf Z}$ and operating under
the thermostat policy.
### 4.2 Conditional independence in $P_{k}$
In the following, we refer to the values of $I_{k}$ with $i$ and $j$ and the
values of $m_{k}$ with $u$ and $v$. We introduce the following notation to
refer to the elements of the transition matrix $P_{k}$:
$\displaystyle P_{k}((i,u),(j,v))\triangleq$ (22) $\displaystyle{\sf
P}\Big{(}I_{k+1}=j,\ m_{k+1}=v\ \Big{|}\ I_{k}=i,\ m_{k}=u,\
\theta^{a}_{k}=w_{k}\Big{)}.$
Recall, the matrix $P_{k}$ is derived for the thermostat policy. We will now
show that the matrix $P_{k}$ can be written as the product of two matrices.
One depends only on the thermostat policy (control) and the other depends only
on weather and TCL temperature dynamics. That is, we show that each entry of
$P_{k}$ factors as
$\displaystyle P_{k}((i,u),(j,v))=\phi^{\text{TS}}_{u}(v\ |\ i)P^{u}_{k}(i,j)$
(23)
where, for each given value of $\theta^{a}_{k}$, $P^{u}_{k}(i,j)$ is a
_controlled transition matrix_ on ${\sf Z}$:
$\displaystyle P^{u}_{k}(i,j)\triangleq{\sf P}\left(I_{k+1}=j\ |\ I_{k}=i,\
m_{k}=u,\ \theta^{a}_{k}=w_{k}\right)$ (24)
and $\phi^{\text{TS}}_{u}(v\ |\ i)$ is an instance of a _randomized policy_
$\phi_{u}(v\ |\ i)$ on ${\sf Z}$:
$\displaystyle\phi_{u}(v\ |\ i)\triangleq{\sf P}\left(m_{k+1}=v\ |\ I_{k}=i,\
m_{k}=u\right).$ (25)
We show the factorization (23) through construction next.
#### 4.2.1 Constructing the factorization
The quantity $\phi^{\text{TS}}_{u}(v\ |\ i)$ in (25) is the thermostat policy
on ${\sf Z}$, which is formally defined as follows.
###### Definition 1.
The thermostat policy on ${\sf Z}$ is specified by the two vectors,
$\phi^{\text{TS}}_{\text{\footnotesize{off}}},\phi^{\text{TS}}_{\text{\footnotesize{on}}}\in\mathbb{R}^{N}$,
where
$\phi^{\text{TS}}_{\text{\footnotesize{off}}}\triangleq\phi^{\text{TS}}_{\text{\footnotesize{off}}}(\text{\footnotesize{on}}\
|\ \cdot)=\mathbf{e}_{N}$,
$\phi^{\text{TS}}_{\text{\footnotesize{on}}}\triangleq\phi^{\text{TS}}_{\text{\footnotesize{on}}}(\text{\footnotesize{off}}\
|\ \cdot)=\mathbf{e}_{1}$, and
$\phi^{\text{TS}}_{\text{\footnotesize{off}}}(\text{\footnotesize{off}}\ |\
\cdot)\triangleq 1-\phi^{\text{TS}}_{\text{\footnotesize{off}}}$,
$\phi^{\text{TS}}_{\text{\footnotesize{on}}}(\text{\footnotesize{on}}\ |\
\cdot)\triangleq 1-\phi^{\text{TS}}_{\text{\footnotesize{on}}}$.
The quantity $P^{u}_{k}(i,j)$ in (24) represents the policy-free (open loop)
evolution of the TCL on ${\sf Z}$. That is, it describes how the TCL’s
temperature evolves under a fixed mode. We define matrices with entries
$P^{u}_{k}(i,j)$ next.
###### Definition 2.
Let
$P_{k}^{\text{\footnotesize{off}}},P_{k}^{\text{\footnotesize{on}}}\in\mathbb{R}^{N\times
N}$ have $(i,j)$ entries given by,
$\displaystyle P_{k}^{\text{\footnotesize{off}}}(i,j)$
$\displaystyle=P_{w_{k}}((i,\text{\footnotesize{off}}),(j,\text{\footnotesize{off}})),\quad
i\neq N\ \text{and}\ j\neq N,$ $\displaystyle
P_{k}^{\text{\footnotesize{on}}}(i,j)$
$\displaystyle=P_{w_{k}}((i,\text{\footnotesize{on}}),(j,\text{\footnotesize{on}})),\quad
i\neq 1\ \text{and}\ j\neq 1,$
with $P_{k}^{\text{\footnotesize{off}}}(N,N)=1$ and
$P_{k}^{\text{\footnotesize{on}}}(1,1)=1$.
The quantities defined in Definitions 1 and 2 correspond to entries of $P_{k}$.
To construct the promised factorization from these definitions, the idea is
to build the four sub-matrices of $P_{k}$ that correspond to all possible
combinations of
$u,v\in\\{\text{\footnotesize{on}},\text{\footnotesize{off}}\\}$ (see Figure
3). For example, the $\text{\footnotesize{off}}-\text{\footnotesize{off}}$
3). For example, the $\text{\footnotesize{off}}-\text{\footnotesize{off}}$
quadrant of $P_{k}$ is given by the matrix product
$\displaystyle\big{(}I-\text{diag}(\phi^{\text{TS}}_{\text{\footnotesize{off}}})\big{)}P^{\text{\footnotesize{off}}}_{k}.$
However, since the temperature associated with the $i^{th}$ CV for the on mode
is not the same as the temperature associated with the $i^{th}$ CV for the off
mode (see Figure 2), it is _not_ true that the
$\text{\footnotesize{off}}-\text{\footnotesize{on}}$ quadrant of $P_{k}$ is
given as
$\text{diag}(\phi^{\text{TS}}_{\text{\footnotesize{off}}})P^{\text{\footnotesize{off}}}_{k}$.
The entries of the matrix $P^{\text{\footnotesize{off}}}_{k}$ need to be re-
arranged so as to correctly account for the difference in CV index between the
on/off modes. We define such correctly re-arranged matrices next.
###### Definition 3.
Let $I^{\text{\footnotesize{off}}}=\\{m,\dots,N\\}$,
$I^{\text{\footnotesize{on}}}=\\{1,\dots,q\\}$, $m^{-}=m-1$, and
$S_{k}^{\text{\footnotesize{off}}},S_{k}^{\text{\footnotesize{on}}}\in\mathbb{R}^{N\times
N}$ with $(i,j)$ entries
$\displaystyle S_{k}^{\text{\footnotesize{off}}}(i,j-m^{-})$
$\displaystyle=\begin{cases}P_{k}^{\text{\footnotesize{off}}}(i,j)&i,j\in
I^{\text{\footnotesize{off}}}\\\ 0&\text{otherwise}.\end{cases}$ (26)
$\displaystyle S_{k}^{\text{\footnotesize{on}}}(i,j+m^{-})$
$\displaystyle=\begin{cases}P_{k}^{\text{\footnotesize{on}}}(i,j)&i,j\in
I^{\text{\footnotesize{on}}}\\\ 0&\text{otherwise}.\end{cases}$ (27)
The above definition is based on the construction that $N=q+m$. The quantities
in Definition 3 let us construct, e.g., the
$\text{\footnotesize{off}}-\text{\footnotesize{on}}$ quadrant of $P_{k}$ as
$\text{diag}(\phi_{\text{\footnotesize{off}}}^{\text{TS}})S_{k}^{\text{\footnotesize{off}}}$.
The next result shows that $P_{k}=\Phi^{\text{\footnotesize{TS}}}G_{k}$ under
certain conditions and for appropriate choices of the matrices
$\Phi^{\text{\footnotesize{TS}}}$ and $G_{k}$.
###### Lemma 4.1.
Let the time discretization period $\Delta t$ and the parameter $\alpha$ that
appears as a design choice in discretizing the PDEs to ODEs be chosen to
satisfy $\alpha=(\Delta t)^{-1}$. Let
$\Phi^{\text{TS}}_{\text{\footnotesize{off}}}=\text{diag}(\Phi^{\text{TS}}_{\text{\footnotesize{off}}})$
and
$\Phi^{\text{TS}}_{\text{\footnotesize{on}}}=\text{diag}(\Phi^{\text{TS}}_{\text{\footnotesize{on}}})$,
and
$\displaystyle\Phi^{\text{\footnotesize{TS}}}$
$\displaystyle\triangleq\begin{bmatrix}I-\Phi^{\text{TS}}_{\text{\footnotesize{off}}}&\Phi^{\text{TS}}_{\text{\footnotesize{off}}}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\Phi^{\text{TS}}_{\text{\footnotesize{on}}}&I-\Phi^{\text{TS}}_{\text{\footnotesize{on}}}\end{bmatrix}\quad\text{and}$
(28) $\displaystyle G_{k}$
$\displaystyle\triangleq\begin{bmatrix}\mathbf{0}&S_{k}^{\text{\footnotesize{off}}}&\mathbf{0}&P_{k}^{\text{\footnotesize{on}}}\\\
P_{k}^{\text{\footnotesize{off}}}&\mathbf{0}&S_{k}^{\text{\footnotesize{on}}}&\mathbf{0}\end{bmatrix}^{T},$
(29)
then
$\displaystyle P_{k}=\Phi^{\text{\footnotesize{TS}}}G_{k}.$ (30)
###### Proof 4.2.
See Appendix A.2.
###### Remark 4.3.
The condition $\alpha=1/\Delta t$ can be satisfied as long as time and
temperature discretization intervals are chosen to satisfy $\Delta
t<(\Delta\lambda)^{2}/\sigma^{2}$. To understand how, recall that in
discretizing the PDE into the coupled ODEs, a design parameter $\gamma>0$
appears: some rate of density is transferred out of the control volume
$\lambda^{N}_{\text{\footnotesize{off}}}$ and into the CV
$\lambda^{q}_{\text{\footnotesize{on}}}$ (as depicted in Figure 2) due to
thermostatic control. The rate of the density transfer is then given as
$-\gamma\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)$, where $\gamma>0$ is a
modeling choice and a constant of appropriate units that describes the
discharge rate. We then define $\alpha\triangleq D+\gamma$ where
$D=\frac{\sigma^{2}}{(\Delta\lambda)^{2}}$. Recall that $\sigma^{2}$ is the
variance in the Fokker-Planck equation (9)-(10) and $\Delta\lambda$ is the
temperature discretization interval. Thus, as long as $1/\Delta t>D$, a
positive $\gamma$ can be chosen while meeting the condition $\alpha=1/\Delta
t$. The condition $1/\Delta t>D$ is equivalent to $\Delta
t<(\Delta\lambda)^{2}/\sigma^{2}$.
###### Remark 4.4.
The conditional independence factorization (30) has been a useful assumption
in the design of algorithms in [3]. In the present work it is a byproduct of
our spatial and temporal discretization of the PDEs (9)-(10). There are other
works [2, 1, 25] that develop Markov models for TCLs through discretization of
PDEs. However, to our knowledge, our work is the first to uncover this
factorization.
Lemma 4.1 informs us how to define the dynamics of the marginals (20) under a
different policy than the thermostat policy, which is described next.
### 4.3 BA control command $=$ policy
In light of the previous section, an arbitrary randomized policy can replace
the thermostat policy to control the state process on ${\sf Z}$. That is
equivalent to replacing $\Phi^{\text{\footnotesize{TS}}}$ in (30) with a new
matrix $\Phi$ that corresponds to a policy designed for grid support. From the
viewpoint of the BA this randomized policy _is_ the control input that it must
design and broadcast to a TCL. The TCL now implements this policy to make
on/off decisions instead of using the thermostat policy. As we shall soon see,
if the BA appropriately designs and sends the randomized policy to multiple
TCLs it can achieve coordination of the TCLs for grid support.
To distinguish them from the thermostat policies
$\phi^{\text{TS}}_{\text{\footnotesize{off}}}$ and
$\phi^{\text{TS}}_{\text{\footnotesize{on}}}$ of the prior section, which only
maintain temperature, we denote the newly introduced policies for providing
grid support with the superscript ‘GS’. We require the policies,
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}$ and
$\phi^{\text{GS}}_{\text{\footnotesize{off}}}$, to have the following
structure
$\displaystyle\phi^{\text{GS}}_{\text{\footnotesize{off}}}(\text{on}\ |\
j)=\begin{cases}\kappa^{\text{\footnotesize{on}}}_{j},&(m+1)\leq
j\leq(N-1).\\\ 1,&j=N.\\\ 0,&\text{o.w.}\end{cases}$ (31)
$\displaystyle\phi^{\text{GS}}_{\text{\footnotesize{on}}}(\text{off}\ |\
j)=\begin{cases}\kappa^{\text{\footnotesize{off}}}_{j},&2\leq j\leq(q-1).\\\
1,&j=1.\\\ 0,&\text{o.w.}\end{cases}$ (32)
with $\phi^{\text{GS}}_{\text{\footnotesize{off}}}(\text{off}\ |\
\cdot)=1-\phi^{\text{GS}}_{\text{\footnotesize{off}}}(\text{on}\ |\ \cdot)$
and $\phi^{\text{GS}}_{\text{\footnotesize{on}}}(\text{on}\ |\
\cdot)=1-\phi^{\text{GS}}_{\text{\footnotesize{on}}}(\text{off}\ |\ \cdot)$
and
$\kappa^{\text{\footnotesize{on}}}_{j},\kappa^{\text{\footnotesize{off}}}_{j}\in[0,1]$
for all $j$. The policies could also be time-varying, for example:
$\kappa^{\text{\footnotesize{off}}}_{j}[k]$ and
$\kappa^{\text{\footnotesize{on}}}_{j}[k]$. The dependence of the policies on
time is denoted as $\phi^{\text{GS}}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}[k]$. Designing the grid support
control policies is then equivalent to choosing the values of
$\kappa^{\text{\footnotesize{on}}}_{j}[k]$ and
$\kappa^{\text{\footnotesize{off}}}_{j}[k]$ for all $j$ and $k$.
We have required $\phi^{\text{GS}}_{\text{\footnotesize{off}}}(\text{on}\ |\
j)=0$ for $1\leq j\leq m$ since the temperatures corresponding to these
indices are below the permitted deadband temperature, $\lambda^{\text{min}}$.
Hence, turning on at these temperatures does not make physical sense. The
arguments for the zero elements in
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}$ are symmetric.
###### Remark 4.5.
From the individual TCL’s perspective, implementing grid support randomized
policies of the form (31)-(32) is straightforward: (i) the TCL measures its
current temperature and on/off status, (ii) the TCL “bins” this temperature
value according to (18) and (iii) the TCL flips a coin to decide its next
on/off state according to the probabilities given in (31)-(32). Note that the
thermostat policy is a special case of the grid support control policy, and
both policies enforce the temperature constraint.
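A minimal sketch of such a TCL-side implementation is given below; for brevity it uses a single shared temperature grid for both modes (the paper uses separate on/off CV grids, Figure 2), and the bin edges and $\kappa$ values are made-up illustrations:

```python
import numpy as np

# TCL-side decision rule of Remark 4.5: bin the temperature, then flip a
# coin according to the broadcast probabilities kappa_on (31) / kappa_off (32).
rng = np.random.default_rng()
N = 12
edges = np.linspace(19.5, 22.5, N + 1)          # N CVs over an assumed span
kappa_on = np.zeros(N); kappa_on[8:11] = [0.1, 0.3, 0.6]   # assumed values
kappa_off = np.zeros(N); kappa_off[1:3] = [0.4, 0.2]       # assumed values

def next_mode(theta, m):
    j = int(np.clip(np.searchsorted(edges, theta) - 1, 0, N - 1))  # bin, cf. (18)
    if m == 0:                                   # currently off
        if j == N - 1:
            return 1                             # top bin: must turn on
        return int(rng.random() < kappa_on[j])   # coin flip
    if j == 0:
        return 0                                 # bottom bin: must turn off
    return int(rng.random() >= kappa_off[j])     # stay on w.p. 1 - kappa_off
```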
## 5 Proposed framework
We are now in a position to present our unified framework for coordination of
TCLs. We first expand the state of the model (16) so as to incorporate
cycling, following [19, 26]. We then shift the viewpoint from a single TCL to
that of a collection of TCLs (recall Remark 1) to develop our control-oriented
aggregate model. Using this model we develop a method for designing both the
reference and the policy through convex optimization.
### 5.1 Individual TCL model with cycling
We now augment the model for a TCL’s temperature evolution with cycling
dynamics. Recall the cycling constraint: as soon as a TCL switches its mode,
the TCL becomes stuck in that mode for $\tau$ time instants. This constraint
can be represented as the evolution of a state, specifically, a counter
variable. First, define the binary variable $s_{k}$: $s_{k}=1$ if the TCL is
stuck in the current mode at time $k$ and $s_{k}=0$ if it is not. The counter
variable is defined as follows
$\displaystyle{\sf{L}}_{k+1}\triangleq\begin{cases}{\sf{L}}_{k}+1,&s_{k}=1.\\\
0,&s_{k}=0.\end{cases}$ (33)
This variable denotes the time spent in the “stuck” mode ($s_{k}=1$). A TCL
has flexibility to help the grid only when ${\sf{L}}_{k}=0$, which means it is
not stuck in either the on or off mode. If ${\sf{L}}_{k}>0$, it is stuck in
either the on or off mode, and switching the mode to help the grid will
violate the cycling constraint.
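A minimal sketch of this bookkeeping, assuming the stuck indicator is raised by a mode switch and stays up for $\tau$ steps, consistent with (33):

```python
# Lockout counter of (33). A mode switch starts the count; after tau steps
# the counter resets and the TCL is again free to follow the grid support
# policy.
def update_counter(L, switched, tau):
    if switched:
        return 1          # fresh switch: lockout begins
    if 0 < L < tau:
        return L + 1      # still within the tau-step lockout
    return 0              # L == 0, or the lockout just expired
```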
Recall, the discrete state space ${\sf Z}$ for a TCL included binned
temperature and on/off mode. The space ${\sf Z}$, the policies
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}$ and
$\phi^{\text{GS}}_{\text{\footnotesize{off}}}$, the marginal pmf $\nu_{k}$,
and the transition matrix $P_{k}$ (and consequently its factors $\Phi$ and
$G_{k}$) now all need to be expanded to be defined over a state space
consisting of $(I_{k},m_{k},{\sf{L}}_{k})$. This expansion is described next.
We denote this newly expanded state space as the set of values: ${\sf
X}\triangleq$
$\displaystyle\Big{\\{}m\in\\{\text{\footnotesize{on}},\text{\footnotesize{off}}\\},\
I\in\\{1,\dots,N\\},\ {\sf{L}}\in\\{0,\dots,\tau\\}\Big{\\}},$ (34)
with cardinality $|{\sf X}|=2N(\tau+1)$. The policies on the expanded state
space are:
$\displaystyle\phi^{\text{E}}_{\text{\footnotesize{off}}}$
$\displaystyle=\mathbf{I}_{\\{0\\}}({\sf{L}})\phi^{\text{GS}}_{\text{\footnotesize{off}}}+(1-\mathbf{I}_{\\{0\\}}({\sf{L}}))\phi^{\text{TS}}_{\text{\footnotesize{off}}},\quad\text{and}$
(35) $\displaystyle\phi^{\text{E}}_{\text{\footnotesize{on}}}$
$\displaystyle=\mathbf{I}_{\\{0\\}}({\sf{L}})\phi^{\text{GS}}_{\text{\footnotesize{on}}}+(1-\mathbf{I}_{\\{0\\}}({\sf{L}}))\phi^{\text{TS}}_{\text{\footnotesize{on}}}.$
To ensure that the expanded policy (35) enforces the cycling constraint, we
impose the following restriction at the design stage: a TCL with
${\sf{L}}_{k}>0$ will only implement the thermostat policy, and a TCL with
${\sf{L}}_{k}=0$ will make on/off decisions based on the grid support policy.
This construction ensures a TCL will not violate its cycling and temperature
constraints under the conditions in Assumptions A.2 and A.3.
Each entry of the expanded policy is denoted as
$\phi^{\text{E}}_{\text{\footnotesize{off}}}(u\ |\ j,\ l)$ and
$\phi^{\text{E}}_{\text{\footnotesize{on}}}(u\ |\ j,\ l)$. The expanded
marginals are $\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k]$ and
$\nu_{\text{\footnotesize{on}}}[\lambda^{j},l,k]$, and
$\nu_{\text{\footnotesize{off}},l}$ (resp.,
$\nu_{\text{\footnotesize{on}},l}$) is shorthand for
$\nu_{\text{\footnotesize{off}}}[\cdot,l,k]$ (resp.,
$\nu_{\text{\footnotesize{on}}}[\cdot,l,k]$). In vectorized form, the expanded
marginal is
$\nu^{\text{E}}=[\nu_{\text{\footnotesize{off}}}^{\text{E}},\nu_{\text{\footnotesize{on}}}^{\text{E}}]$
where
$\nu_{\text{\footnotesize{off}}}^{\text{E}}=[\nu_{\text{\footnotesize{off}},0},\dots,\nu_{\text{\footnotesize{off}},\tau}]$
and
$\nu_{\text{\footnotesize{on}}}^{\text{E}}=[\nu_{\text{\footnotesize{on}},0},\dots,\nu_{\text{\footnotesize{on}},\tau}]$.
Define
$\displaystyle G^{\text{E}}_{k}$
$\displaystyle\triangleq\begin{bmatrix}\mathbf{0}&D_{\tau}\otimes
S_{k}^{\text{\footnotesize{on}}}&\mathbf{0}&C_{\tau}\otimes
P_{k}^{\text{\footnotesize{off}}}\\\ C_{\tau}\otimes
P_{k}^{\text{\footnotesize{on}}}&\mathbf{0}&D_{\tau}\otimes
S_{k}^{\text{\footnotesize{off}}}&\mathbf{0}\end{bmatrix}^{T},$ (36)
where
$D_{\tau}\triangleq\mathbb{1}^{T}\otimes\mathbf{e}_{2}\in\mathbb{R}^{(\tau+1)\times(\tau+1)}$
and
$\displaystyle C_{\tau}\triangleq\begin{bmatrix}1&0&\mathbf{0}_{\tau-1}^{T}\\\
\mathbf{0}_{\tau-1}&\mathbf{0}_{\tau-1}&I_{\tau-1}\\\
1&0&\mathbf{0}_{\tau-1}^{T}\end{bmatrix}\in\mathbb{R}^{(\tau+1)\times(\tau+1)}.$
(37)
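For concreteness, a sketch constructing $D_{\tau}$ and $C_{\tau}$ exactly as defined in (36)-(37), reading rows of $C_{\tau}$ as the current counter value (an interpretation, not stated explicitly in the text):

```python
import numpy as np

# Counter-dynamics matrices from (36)-(37) for tau = 5. Reading rows as the
# current counter value l: C_tau keeps l = 0 at 0, advances l -> l+1 for
# l = 1..tau-1, and resets l = tau to 0. D_tau = 1^T kron e_2 has every
# column equal to e_2 (all mass routed through counter value l = 1).
tau = 5
e2 = np.zeros((tau + 1, 1)); e2[1, 0] = 1.0
D_tau = np.kron(np.ones((1, tau + 1)), e2)       # every column equals e_2

C_tau = np.zeros((tau + 1, tau + 1))
C_tau[0, 0] = 1.0                                # no switch: stay at l = 0
for l in range(1, tau):
    C_tau[l, l + 1] = 1.0                        # lockout counting up
C_tau[tau, 0] = 1.0                              # lockout expires
```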
We define the matrix $\Phi_{k}^{\text{E}}$ as having the same structure as
(28), but with the expanded policies
$\phi^{\text{E}}_{\text{\footnotesize{off}}}$ and
$\phi^{\text{E}}_{\text{\footnotesize{on}}}$, i.e.,
$\displaystyle\Phi^{\text{E}}_{k}\triangleq\begin{bmatrix}I-\Phi^{\text{E}}_{\text{\footnotesize{off}}}[k]&\Phi^{\text{E}}_{\text{\footnotesize{off}}}[k]&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&\Phi^{\text{E}}_{\text{\footnotesize{on}}}[k]&I-\Phi^{\text{E}}_{\text{\footnotesize{on}}}[k]\end{bmatrix},$
(38)
where
$\Phi^{\text{E}}_{\text{\footnotesize{off}}}[k]\triangleq\text{diag}(\phi^{\text{E}}_{\text{\footnotesize{off}}}[k])$
and
$\Phi^{\text{E}}_{\text{\footnotesize{on}}}[k]\triangleq\text{diag}(\phi^{\text{E}}_{\text{\footnotesize{on}}}[k])$.
The model of a TCL with cycling dynamics and grid support policy becomes
$\displaystyle\nu^{\text{E}}_{k+1}=\nu^{\text{E}}_{k}\Phi_{k}^{\text{E}}G_{k}^{\text{E}}.$
(39)
The structure of the transition matrix $\Phi_{k}^{\text{E}}G_{k}^{\text{E}}$
is shown in Figure 4. For comparison, the transition matrix with policy
$\phi^{\text{GS}}$ and _without_ the cycle counter variable would simply be
the four red shaded blocks appearing in their respective quadrant. In the
expanded system, an on to off mode switch forces probability mass from the red
shaded region ($l=0$ and $m=\text{\footnotesize{on}}$) to the green shaded
region ($l=1$ and $m=\text{\footnotesize{off}}$). Mass must then transition
through the chain of $\tau$ green blocks until it reaches the red block again,
so as to respect the cycling constraint.
Figure 4: The sparsity pattern of the expanded transition matrix (the dots
represent non-zero entries in the matrix) with $\tau=5$. Each shaded block is
over the entire range of temperature values.
### 5.2 Aggregate model of a collection of TCLs
We now transition from the viewpoint of a single TCL to that of a collection
of ${\sf{N_{tcl}}}$ TCLs: $\ell=1,\dots,{\sf{N_{tcl}}}$. For example,
$m_{k}^{\ell}$ and $I_{k}^{\ell}$ are the mode and binned temperature of the
$\ell^{th}$ TCL at time $k$. Recall Remark 1, the model (39) also describes an
entire collection of TCLs. For a single TCL, we view the state
$\nu_{k}^{\text{E}}$ as a marginal but for a collection of TCLs we expect the
marginal pmf $\nu_{k}^{\text{E}}$ to approximate the histogram
$\displaystyle h_{k}[u,i,l]$
$\displaystyle\triangleq\frac{1}{{\sf{N_{tcl}}}}\sum_{\ell=1}^{{\sf{N_{tcl}}}}\Big{(}\mathbf{I}_{\\{i\\}}(I^{\ell}_{k})\mathbf{I}_{\\{u\\}}(m^{\ell}_{k})\mathbf{I}_{\\{l\\}}({\sf{L}}^{\ell}_{k})\Big{)},$
(40)
for each state $(u,i,l)\in{\sf X}$ as ${\sf{N_{tcl}}}\rightarrow\infty$. In
the same regard, we define
$\displaystyle\gamma_{k}^{\text{E}}\triangleq\nu_{k}^{\text{E}}C^{\text{E}},\quad\text{where}\quad
C^{\text{E}}\triangleq[\mathbf{0}^{T},P_{\text{\footnotesize{agg}}}\mathbb{1}^{T}]^{T},$
(41)
where $P_{\text{\footnotesize{agg}}}$ is the maximum possible power of the
collection, defined in (4). We expect $\gamma_{k}^{\text{E}}$ to approximate
the total power consumption $y_{k}$ of the collection of ${\sf{N_{tcl}}}$
TCLs:
$\displaystyle y_{k}$ $\displaystyle\triangleq
P_{0}\sum_{\ell=1}^{{\sf{N_{tcl}}}}m_{k}^{\ell}.$ (42)
which is the discrete-time equivalent of $y(t)$ defined in (5). That is, we
expect $\gamma_{k}^{\text{E}}\approx y_{k}$ for large ${\sf{N_{tcl}}}$, based
on a law of large numbers argument [5]. The _control oriented aggregate model
of a TCL collection_ is the dynamics (39) together with the output (41):
$\displaystyle\nu^{\text{E}}_{k+1}=\nu^{\text{E}}_{k}\Phi_{k}^{\text{E}}G_{k}^{\text{E}}.\qquad\text{and}\qquad\gamma_{k}^{\text{E}}=\nu_{k}^{\text{E}}C^{\text{E}}.$
(43)
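In code, one step of the aggregate model (43) is a pair of vector-matrix products; a sketch with stand-in data (the actual $\Phi^{\text{E}}_{k}$ and $G^{\text{E}}_{k}$ are the matrices (38) and (36)):

```python
import numpy as np

# One step of the aggregate model (43). Phi_E (nX x 2nX) and G_E (2nX x nX)
# are not built here; the sketch shows the state/output bookkeeping only.
N, tau = 12, 5
nX = 2 * N * (tau + 1)                 # |X| = 2N(tau+1)
P_agg = 110e3                          # kW, from (4) with the Table 1 values

nu = np.ones(nX) / nX                  # marginal pmf (assumed initial value)
C_E = np.concatenate([np.zeros(nX // 2), P_agg * np.ones(nX // 2)])  # (41)

# given Phi_E and G_E:  nu = nu @ Phi_E @ G_E    # dynamics in (43)
gamma = nu @ C_E                       # predicted aggregate power, in kW
```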
#### 5.2.1 Evaluating the aggregate model
Before proceeding to policy design with our developed model (43), we first
show that it is effective in modeling a population of TCLs. We do this by
comparing the state of the model to (40) and (42) obtained from a simulation
of ${\sf{N_{tcl}}}=$ 50,000 air conditioning TCLs.
The comparison results are shown in Figure 5 and Figure 6. The mode state of
each TCL evolves according to a control policy, where the
$\phi^{\text{GS}}_{\text{\footnotesize{off}}}$ and
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}$ portion are shown in Figure 6
(bottom). The policy is arbitrary, designed merely to be an example of a non-
thermostat policy. This policy satisfies the structure in (31) and (32) so
that both temperature and cycling constraints are satisfied at each TCL. The
temperature of each TCL evolves according to (2). We see that the state
$\nu^{\text{E}}_{k}$ matches the histogram $h_{k}$ of the collection both for
the devices that are not stuck (Figure 5 (top)) and for those that are stuck
(Figure 5 (bottom)). Additionally, the output of the aggregate model,
$\gamma_{k}^{\text{E}}$, matches its empirical counterpart $y_{k}$ (shown in
Figure 6 (top)).
Figure 5: (Top): Histogram of the collection for the devices that are on and
not stuck. (Bottom): Histogram of the collection for the devices that are on
and are stuck.
Figure 6: (Top): Comparison of the output of the expanded aggregate model
$\gamma_{k}^{\text{E}}$ and the ensembles power consumption $y_{k}$. (Bottom):
The policies $\phi^{\text{GS}}_{\text{\footnotesize{off}}}$ and
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}$ used for the numerical
experiment in Section 5.2.
### 5.3 Grid support policy design
The goal of coordinating TCLs is to help the BA balance supply and demand of
electricity in the grid. We denote by $r^{\text{BA}}_{k}$ the desired demand
from all flexible loads and batteries that would reduce the imbalance to 0. It
is unreasonable to expect any collection of TCLs to meet the entire desired
demand $r^{\text{BA}}_{k}$ while maintaining their QoS. Only a portion of
$r^{\text{BA}}_{k}$ can be supplied by TCLs, and we denote this portion by
$r_{k}$. Determining $r_{k}$ becomes an optimal control problem due to the
time coupling produced by the TCL dynamics. We consider a planning horizon of
$T_{\text{plan}}$. To simultaneously design grid support control policies
$\phi^{\text{GS}}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS}}_{\text{\footnotesize{on}}}[k]$ and determine a suitable
reference signal $r_{k}$ over $T_{\text{plan}}$ the BA solves the following
optimization problem,
$\displaystyle\eta^{*}=\min_{\nu^{\text{E}}_{k},\Phi^{\text{E}}_{k}}\ $
$\displaystyle\eta(\hat{\nu})=\sum_{k\in{\sf
T}}\Big{(}r^{\text{BA}}_{k}-\gamma^{\text{E}}_{k}\Big{)}^{2}$ (44) s.t.
$\displaystyle\nu^{\text{E}}_{k+1}=\nu^{\text{E}}_{k}\Phi_{k}^{\text{E}}G_{k}^{\text{E}},\quad\nu^{\text{E}}_{{\sf
T}(0)}=\hat{\nu},$ (45)
$\displaystyle\gamma^{\text{E}}_{k}=\nu_{k}^{\text{E}}C^{\text{E}},\quad\nu^{\text{E}}_{k}\in[0,1],\quad\Phi^{\text{E}}_{k}\in\varPhi.$
(46)
The solution at time $k$ is denoted $r_{k}\triangleq\gamma_{k}^{\text{E},*}$,
$\phi^{\text{GS},*}_{\text{\footnotesize{off}}}[k]$, and
$\phi^{\text{GS},*}_{\text{\footnotesize{on}}}[k]$. Here, ${\sf
T}\triangleq\\{{\sf T}(0),\dots,{\sf T}(0)+T_{\text{plan}}-1\\}$ is the index
set of times, ${\sf T}(0)$ denotes the initial time index, $\hat{\nu}$ is the
initial condition, and $\nu_{k}^{\text{E}}\in[0,1]$ holds elementwise. The set
$\varPhi$ collects all of the constraints on the policy. This includes the
equality constraints set by the structural requirements in (31)-(32) and (35)
as well as the structural requirement in (38). These constraints require
certain elements of the policy to be either zero or one. The policy should
also be a valid conditional pmf, with its elements in $[0,1]$. Hence, the set
$\varPhi$ is the following convex set
$\displaystyle\varPhi\triangleq\Big{\\{}\Phi\in\mathbb{R}^{|{\sf X}|\times
2|{\sf X}|}_{[0,1]}\ \big{|}\ \Phi\ \text{satisfies (38)},\
\mathbb{1}=\Phi\mathbb{1},\ \phi^{\text{GS}}_{\text{\footnotesize{off}}}\
\text{satisfies (31)},\ \phi^{\text{GS}}_{\text{\footnotesize{on}}}\
\text{satisfies (32)},\ \text{and}\
\phi^{\text{E}}_{\text{\footnotesize{off}}}\ \text{and}\
\phi^{\text{E}}_{\text{\footnotesize{on}}}\
\text{satisfy (35)}\Big{\\}}.$ (47)
where $\mathbb{R}^{m\times n}_{[0,1]}$ denotes the set of $m\times n$ matrices
with elements in $[0,1]$.
##### QoS + Solution of (44)
1. 1.
The equality constraints in $\varPhi$ are present to ensure the individual
TCL’s QoS constraints: the structure (35) ensures the cycling constraint and
the structure (31)-(32) ensures the temperature constraint. Recall that this
structure guarantees QoS by requiring the policy to place zero probability on
state transitions that would violate QoS.
2. 2.
A solution to (44) yields, for $k\in{\sf T}$, two things: (i) the optimal
randomized policies $\phi^{\text{GS},*}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS},*}_{\text{\footnotesize{on}}}[k]$ and (ii) an optimal
reference for the power demand of the TCL collection
$r_{k}(=\gamma^{\text{E},*}_{k})$. The reference is optimal in the following
sense: among all power demand signals the collection can track without
requiring any TCL to violate its local QoS constraints in so doing, it is the
closest to the BA’s desired demand $r^{\text{BA}}$ in 2-norm. The reference is
also the predicted power consumption of the TCLs whilst using the policies
$\phi^{\text{GS},*}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS},*}_{\text{\footnotesize{on}}}[k]$.
###### Remark 5.1.
Since the reference $r_{k}(=\gamma^{\text{E},*}_{k})$ from (44) is the best
the TCLs can do to help the BA without any TCL having to violate its QoS,
Problem (44) therefore also provides an answer to the “aggregate flexibility”
question: how much can a collection of TCLs vary their demand while
maintaining their local QoS constraints. This question has been investigated
by many works [25, 13, 10, 15].
#### 5.3.1 Convex control synthesis
The problem (44) is non-convex due to the product
$\nu^{\text{E}}_{k}\Phi_{k}^{\text{E}}$ in the constraint. A well-known
convexification remedy for (44) is to optimize over the marginal and the joint
distribution instead of the marginal and the policy [22, 2]. Using our
identified structure from Section 4.2, we construct the following joint
distribution (written in matrix form):
$\displaystyle
J_{k}=\text{diag}(\nu^{\text{E}}_{k})\Phi^{\text{E}}_{k}\in\mathbb{R}^{|{\sf
X}|\times 2|{\sf X}|}.$ (48)
By construction, we have that
$\nu^{\text{E}}_{k+1}=\mathbb{1}^{T}J_{k}G^{\text{E}}_{k}$ and
$(\nu^{\text{E}}_{k})^{T}=J_{k}\mathbb{1}$ since
$\mathbb{1}^{T}\text{diag}(\nu^{\text{E}}_{k})=\nu^{\text{E}}_{k}$ and
$\mathbb{1}=\Phi^{\text{E}}_{k}\mathbb{1}$. It is straightforward to convert
the constraint set $\Phi^{\text{E}}_{k}\in\varPhi$ to the new decision
variables. For the equality constraints in $\varPhi$: if
$\phi^{\text{E}}_{\text{\footnotesize{off}}}(u\ |\ j,l)=\kappa$, then in the
decision variables $J_{k}$ and $\nu_{k}^{\text{E}}$ we will have a linear
constraint of the form
$\displaystyle{\sf P}\left(m_{k+1}=u,\ I_{k}=j,\ {\sf{L}}_{k}=l,\
m_{k}=\text{off}\right)$
$\displaystyle=\kappa\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k],$ (49)
where the LHS of the above is some element in the matrix $J_{k}$. In addition
to the above equality constraints, requiring both $J_{k}$ and
$\nu_{k}^{\text{E}}$ to be within $[0,1]$ and the constraint
$(\nu^{\text{E}}_{k})^{T}=J_{k}\mathbb{1}$ will allow one to reconstruct a
policy $\Phi_{k}^{\text{E}}\in\varPhi$ from $J_{k}$ and $\nu_{k}^{\text{E}}$
(described shortly in Lemma 5.2). We denote the transcription of
$\Phi^{\text{E}}_{k}\in\varPhi$ to the new variables as
$(J_{k},\nu_{k}^{\text{E}})\in\bar{\varPhi}$. Optimizing over $J_{k}$ and
$\nu_{k}^{\text{E}}$ yields the convex program:
$\displaystyle\begin{split}\eta^{*}=&\min_{\nu^{\text{E}}_{k},J_{k}}\
\eta(\hat{\nu})=\sum_{k\in{\sf
T}}\Big{(}r^{\text{BA}}_{k}-\gamma^{\text{E}}_{k}\Big{)}^{2}\\\
\text{s.t.}\quad&\nu^{\text{E}}_{k+1}=\mathbb{1}^{T}J_{k}G^{\text{E}}_{k},\quad\nu^{\text{E}}_{{\sf
T}(0)}=\hat{\nu},\quad\gamma^{\text{E}}_{k}=\nu_{k}^{\text{E}}C^{\text{E}},\\\
&\nu^{\text{E}}_{k},J_{k}\in[0,1],\ (\nu^{\text{E}}_{k})^{T}=J_{k}\mathbb{1},\
(J_{k},\nu_{k}^{\text{E}})\in\bar{\varPhi}.\end{split}$ (50)
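Below is a minimal cvxpy sketch of (50) with toy dimensions and stand-in data; the QoS structure $(J_{k},\nu_{k}^{\text{E}})\in\bar{\varPhi}$ and the sparsity exploitation of Section 5.3.2 are omitted for brevity, leaving only the pmf/consistency constraints:

```python
import cvxpy as cp
import numpy as np

# Toy instance of the convex program (50); all data are stand-ins.
nX, T = 24, 10
G = [np.eye(2 * nX, nX) for _ in range(T)]     # stand-in for G_E_k
C_E = np.random.rand(nX)                       # stand-in output weights
r_BA = np.random.rand(T)                       # stand-in BA request
nu0 = np.ones(nX) / nX                         # initial marginal pmf

nu = [cp.Variable(nX, nonneg=True) for _ in range(T + 1)]
J = [cp.Variable((nX, 2 * nX), nonneg=True) for _ in range(T)]

cons = [nu[0] == nu0]
for k in range(T):
    cons += [nu[k + 1] == cp.sum(J[k], axis=0) @ G[k],  # nu_{k+1} = 1^T J_k G_k
             nu[k] == cp.sum(J[k], axis=1),             # nu_k^T = J_k 1
             J[k] <= 1]
    # (J_k, nu_k) in Phi-bar would add linear equalities like (49) here.

gamma = cp.hstack([nu[k] @ C_E for k in range(T)])      # outputs gamma_k
prob = cp.Problem(cp.Minimize(cp.sum_squares(r_BA - gamma)), cons)
prob.solve()
```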
Once the convex problem is solved, the grid support control policies need to
be recovered from its solution using the relation (48). If the matrix
$\text{diag}(\nu_{k}^{\text{E}})$ is invertible, then the policy can be
obtained trivially by inversion of $\text{diag}(\nu_{k}^{\text{E}})$. If
$\text{diag}(\nu_{k}^{\text{E}})$ is not invertible, then slight care is
required when reconstructing a policy from the solution of (50). We describe
this in the following Lemma.
###### Lemma 5.2.
Suppose for all $k\in{\sf T}$ that $\nu_{k}^{\text{E}}$ and $J_{k}$ satisfy
the constraints in problem (50). Then, there exist matrices
$H_{k}=H_{k}(\nu_{k}^{\text{E}})$ and $W_{k}=W_{k}(\nu_{k}^{\text{E}})$ such
that for all $k\in{\sf T}$ the quantity $\Phi_{k}^{\text{E}}=H_{k}J_{k}+W_{k}$
satisfies (48) and $\Phi_{k}^{\text{E}}\in\varPhi$.
###### Proof 5.3.
See Appendix A.3.
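As a concrete illustration, one natural implementation of this recovery step is sketched below; where $\nu_{k}^{\text{E}}$ has zero entries the corresponding policy row is unconstrained by $J_{k}$, so a default feasible row (e.g., the thermostat policy) is substituted:

```python
import numpy as np

# Policy recovery from (48): divide row i of J_k by nu_i where nu_i > 0;
# substitute a default feasible row (here, a thermostat-policy row) where
# nu_i = 0. The exact H_k, W_k of Lemma 5.2 are built in Appendix A.3.
def recover_policy(J, nu, Phi_default, tol=1e-12):
    Phi = Phi_default.copy()
    rows = nu > tol
    Phi[rows] = J[rows] / nu[rows, None]   # diag(nu)^{-1} J on supported rows
    return Phi
```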
Exact construction of $H_{k}$ and $W_{k}$ is given in the proof of Lemma 5.2.
_Hence, the proof of Lemma 5.2 provides an algorithm for computing grid
support control policies that are feasible for the problem (44) from the
solutions of the convex problem (50)._ Further the two problems have a certain
equivalence described here in the following Theorem.
###### Theorem 1.
Denote by $\eta^{*}_{\text{CVX}}$ the optimal cost for (50) and by
$\eta^{*}_{\text{NCVX}}$ the optimal cost for (44). Then
$\eta^{*}_{\text{CVX}}=\eta^{*}_{\text{NCVX}}$.
###### Proof 5.4.
See Appendix A.4.
This result, for a similar problem setup, is also reported in [2]. While we
have no guarantee on the difference of the argument minimizers (and hence the
policies obtained from each problem), Theorem 1 says that the policies will
produce the same tracking performance. Further, from Lemma 5.2, the policies
produced from either problem are guaranteed to ensure TCL QoS.
#### 5.3.2 Computational considerations
The dimension of the program (50) can be quite large, so that even though it
is convex, obtaining a solution requires care. We now discuss some practical
considerations that we found necessary when solving the problem (50).
Due to the structure of $\Phi^{\text{E}}_{k}$, we do not need to declare every
element of the matrix $J_{k}$ as a decision variable, since many of these
elements will be zero. For instance,
$\text{diag}(\nu^{\text{E}}_{k})\Phi^{\text{E}}_{k}$ is a block matrix in
which each block is itself diagonal. We express this as:
$\displaystyle\text{diag}(\nu^{\text{E}}_{k})\Phi^{\text{E}}_{k}=\begin{bmatrix}B_{\text{\footnotesize{off}},\text{\footnotesize{off}}}[k]&B_{\text{\footnotesize{off}},\text{\footnotesize{on}}}[k]&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&B_{\text{\footnotesize{on}},\text{\footnotesize{off}}}[k]&B_{\text{\footnotesize{on}},\text{\footnotesize{on}}}[k]\end{bmatrix}\triangleq\text{sparse}(J_{k})$
where, e.g.,
$B_{\text{\footnotesize{off}},\text{\footnotesize{off}}}[k]=\text{diag}(\nu^{\text{E}}_{\text{\footnotesize{off}}}[k])(I-\Phi^{\text{E}}_{\text{\footnotesize{off}}}[k])$.
The other diagonal matrices appearing in the block matrix above can be
inferred by carrying out the matrix multiplication.
If $J_{k}$ was declared directly as a decision variable the problem (50) would
have $(8N^{2}+2N(\tau+1))T_{\text{plan}}$ primal variables, whereas the
problem with $\text{sparse}(J_{k})$ as a decision variable only has
$2NT_{\text{plan}}(\tau+3)$ primal variables. As an example, consider $N=12$,
$T_{\text{plan}}=360$, and $\tau=5$, which are values used in numerical
results reported later. The problem (50) without the structure exploited has
$\approx 0.5$ million decision variables, but only $\approx 70{,}000$ when the
structure is exploited.
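The counts follow directly from the two formulas; a two-line check:

```python
# Verifying the primal variable counts for N = 12, T_plan = 360, tau = 5.
N, T_plan, tau = 12, 360, 5
dense = (8 * N**2 + 2 * N * (tau + 1)) * T_plan   # J_k declared dense
sparse = 2 * N * T_plan * (tau + 3)               # structure exploited
print(dense, sparse)                              # 466560, 69120
```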
We also have found it helpful to include constraints of the form,
$\displaystyle\phi^{\text{GS}}_{\text{\footnotesize{off}}}(\text{on}\ |\
j-1)\nu_{\text{\footnotesize{off}}}[\lambda^{j-1},0,k]\leq\phi^{\text{GS}}_{\text{\footnotesize{off}}}(\text{on}\
|\ j)\nu_{\text{\footnotesize{off}}}[\lambda^{j},0,k],$ (51)
$\displaystyle\phi^{\text{GS}}_{\text{\footnotesize{on}}}(\text{off}\ |\
j+1)\nu_{\text{\footnotesize{on}}}[\lambda^{j+1},0,k]\leq\phi^{\text{GS}}_{\text{\footnotesize{on}}}(\text{off}\
|\ j)\nu_{\text{\footnotesize{on}}}[\lambda^{j},0,k],$ (52)
so as to encourage the switching-on (resp., switching-off) probability to
increase as temperature increases (resp., decreases). Adding the constraints
(51)-(52) to the problem (50) is straightforward, as both the LHS and RHS of
the inequalities are elements of the matrix $J_{k}$.
The Matlab implementation of (50) and the algorithm to extract the policies
from $J_{k}$ (described in the proof of Lemma 5.2) are available at [8].
#### 5.3.3 Communication burden
Once solved, the policies obtained from (50) need to be sent to each
individual TCL. Many of the policy state values are constrained to either zero
or one, and these could be pre-programmed into each TCL. At each time index, $q-2$
(for the on-to-off policy) plus $N-m-1$ (for the off-to-on policy) numbers are
not constrained and need to be sent from the BA to each TCL. Recall that the
numbers $m$ and $q$ are temperature bin indices (see Figure 2) and $N$ is the
number of temperature bins. For illustrative purposes, consider the values
used in the numerical experiments reported in the sequel: $N=12$ with $q=10$ and
$m=2$, and a time discretization $\Delta t=1$ minute. Since $N=q+m$, the
BA has to broadcast $(q-2)+(q-1)=2q-3=17$ numbers every minute to the TCLs.
Each TCL receives the same 17 numbers.
Communication from TCLs to the BA - about their temperature and on/off state -
is needed at the beginning of every planning period so that the BA can
determine the initial condition $\hat{\nu}$ in (50). The frequency of this
feedback is a design choice. In our numerical simulations reported later, a
planning horizon of 6 hours was used, so this feedback was necessary only
once every six hours. More frequent loop closure may be needed for greater
robustness to uncertainty in weather prediction, etc., a topic outside the
scope of this paper.
## 6 Numerical experiments
Simulations involving the coordination of ${\sf{N_{tcl}}}=20{,}000$ TCLs through our
proposed framework are presented here. Recall the two parts of the coordination
architecture shown in Figure 1: (i) planning and (ii) real time control.
Planning refers to the solution of the problem (50) at the BA to compute the
following two things for the planning period ${\sf T}$:
1.
$r_{k}$: the reference power consumption of the TCL collection, given the
problem data $r^{\text{BA}}_{k}$.
2.
$\phi^{\text{GS},*}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS},*}_{\text{\footnotesize{on}}}[k]$: grid support control
policies for each TCL.
This computation is performed at ${\sf T}(0)$. Real time control is then the
implementation of the grid support policies by each TCL to make on/off
decisions in real time. We imagine the BA broadcasts the policies
$\phi^{\text{GS},*}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS},*}_{\text{\footnotesize{on}}}[k]$ at each $k$, though it can
also broadcast all the policies, for all $k\in{\sf T}$, at ${\sf T}(0)$ and
not broadcast again until the beginning of the next planning horizon.
The goal of the numerical simulations of real time control is to show the
following.
1.
When each TCL uses the policies
$\phi^{\text{GS},*}_{\text{\footnotesize{off}}}[k]$ and
$\phi^{\text{GS},*}_{\text{\footnotesize{on}}}[k]$ to decide on/off actuation,
the collection’s power demand indeed tracks $r_{k}$.
2.
Every TCL’s QoS constraints - both temperature and cycling - are satisfied at
all times.
The temperature of each TCL is computed in these simulations with the ODE
model (1).
Table 1: Simulation parameters

| Par. | Unit | Value | Par. | Unit | Value |
|---|---|---|---|---|---|
| ${\sf{N_{tcl}}}$ | N/A | $2\times 10^{4}$ | $\eta$ | $\frac{\text{kW-e}}{\text{kW-th}}$ | $2.5$ |
| $C$ | kWh$/^{\circ}$C | $1$ | $P_{0}$ | kW | $5.5$ |
| $\lambda^{\text{min}}$ | $^{\circ}$C | $20$ | $\lambda^{\text{max}}$ | $^{\circ}$C | $22$ |
| $(\Delta t)\tau$ | min | $5$ | $P_{\text{\footnotesize{agg}}}$ | MW | $110$ |
| $R$ | $^{\circ}$C$/$kW | $2$ | $\Delta t$ | min | $1$ |
| $q$ | N/A | $10$ | $m$ | N/A | $2$ |
| $N$ | N/A | $12$ | $T_{\text{plan}}$ | N/A | $360$ |
### 6.1 Planning
The demand signal requested by the BA to offset demand-supply imbalance,
$r^{\text{BA}}_{k}$, is chosen arbitrarily and shown in Figure 7 (top). It is
infeasible for the collection: sometimes negative and sometimes far higher
than the maximum power demand of the collection. This is done to simulate a
realistic scenario in which many sources of demand and generation, not just
TCLs, are managed by the BA.
The baseline demand trajectory is defined by the equation (4), which is
approximately the power consumption for this collection of air conditioners
under thermostat control. The ambient air temperature is time varying and is
obtained from wunderground.com for a typical summer day in Gainesville,
Florida, USA. The other parameters that affect the Markov model are shown in
Table 1.
Planning computations are done with Matlab and CVX [14] on a desktop Linux
machine, with $N=12$, and for a six-hour planning horizon with 1-minute
discretization ($T_{\text{plan}}=360$). The problem (50) takes about a minute
to solve. The quantity $r^{\text{BA}}_{k}$, the baseline power $\bar{P}_{k}$,
and the reference signal $r_{k}$, obtained from solving (50), are shown in
Figure 7 (top). The optimal reference for the collection, $r_{k}$, is as close
to $r^{\text{BA}}_{k}$ as the dynamics of the TCLs allow without violating
their QoS constraints; recall Remark 5.1. Figure 7 (bottom) shows the two grid
support control policies for one time instant.
### 6.2 Real time control
The power consumption of the collection making on/off decisions according to
the obtained policies is shown in Figure 8 (top). The figure shows that the
TCLs are able to collectively track the reference signal $r_{k}$. We emphasize
that the computational effort at each TCL is negligible. Recall Remark 4.5:
once a TCL receives a grid support policy ($\approx$ 17 floating point
numbers, see Section 5.3.3) it only has to measure its current state
(temperature and on/off mode) and generate a uniformly distributed random
number in $[0,1]$ to implement the policy.
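For concreteness, a minimal sketch of this local computation is given below in Python (the paper's implementation [8] is in Matlab; the function and argument names here are ours):

```python
import random

def tcl_control_step(mode, p_switch):
    """One real-time control step at a single TCL (illustrative sketch).

    mode     : current state, "on" or "off".
    p_switch : switching probability read from the broadcast grid support
               policy at the TCL's measured temperature bin and
               time-since-last-switch.
    """
    # The only local computation: draw U ~ Uniform[0,1] and compare.
    if random.random() < p_switch:
        return "off" if mode == "on" else "on"
    return mode
```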
Verification of the grid support policies in ensuring QoS is shown in Figure
8. The bottom plot shows a histogram of the times between switches for 300
randomly chosen TCLs. The middle plot shows a histogram of temperature from
200 randomly chosen TCLs' temperature trajectories. The histograms show that
the policies designed with (50) indeed satisfy the QoS constraints, which are
indicated by the vertical lines in the figures. Some TCLs do escape the
temperature deadband slightly, which is expected and occurs also under
thermostatic control: the sensor must _first_ register a value outside the
deadband in order to decide to switch the on/off state.
Figure 7: (Top): The quantity $r_{k}$ obtained from solving (50), the dashed
horizontal lines represent all of the TCLs on (top line) and off (bottom
line). (Bottom): Grid support control policies, obtained from solving (50), at
one time instance.
Figure 8: (Top): Reference tracking results for the TCLs under the influence
of the grid support control policies obtained by solving (50). (Middle):
Histogram of 200 TCLs' temperature trajectories over the entire simulation
horizon. (Bottom): Histogram of the time between switches over 3000 TCLs with
the vertical line representing the minimum allowable time between switches.
## 7 Conclusion
In this work we present a unified framework for the distributed control of
TCLs. The framework enables: (i) reference planning for a collection of TCLs
and (ii) design of a randomized control policy for the individual TCLs, so
that both the BA's requirement and consumers' QoS are satisfied. The resulting
framework is (i) scalable to an arbitrary number of loads and implemented
through _local_ feedback and minimal communication, (ii) able to guarantee
maintenance of both temperature and cycling constraints in each TCL, and (iii)
based on convex optimization. A Matlab/CVX implementation is publicly
available [8].
There are several avenues for future work. The optimal control problem is
solved in an open-loop fashion here. Feedback from TCLs is used only to
compute an initial condition that is needed as problem data for the off-line
planning problem. It is straightforward to close the loop between the TCL
collection and the BA with greater frequency for robustness to uncertainty in
weather forecasts and TCL parameters. It will be of interest to identify
scenarios where closing the loop, say, by using Model Predictive Control, is
(i) necessary, and (ii) at what frequency information should be communicated
from the TCLs to the BA. Another avenue is to investigate how the problem (50)
could be solved at each TCL, intermittently, instead of at the BA. Since the
computational power of the processor at each TCL is lower than that of the
processor at the BA, online distributed algorithms for convex optimization
could play a role. The Fokker-Planck equations from [21] that we used here are
convenient for modeling TCL populations with a small degree of heterogeneity.
Distributed computation of optimal policies locally at each TCL may help
extend the method to a highly heterogeneous population of TCLs.
## References
* [1] Saeid Bashash and Hosam K Fathy. Modeling and control of aggregate air conditioning loads for robust renewable power management. IEEE Transactions on Control Systems Technology, 21(4):1318–1327, 2013.
* [2] Emilio Benenati, Marcello Colombino, and Emiliano Dall’Anese. A tractable formulation for multi-period linearized optimal power flow in presence of thermostatically controlled loads. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 4189–4194. IEEE, 2019.
* [3] Ana Bušić and Sean Meyn. Distributed randomized control for demand dispatch. In IEEE Conference on Decision and Control, pages 6964–6971, 2016.
* [4] D.S. Callaway and I.A. Hiskens. Achieving controllability of electric loads. Proceedings of the IEEE, 99(1):184–199, 2011.
* [5] Yue Chen, Ana Bušić, and Sean P. Meyn. State estimation for the individual and the population in mean field control with application to demand dispatch. IEEE Transactions on Automatic Control, 62(3):1138–1149, 2017.
* [6] Yue Chen, Md Umar Hashmi, Joel Mathias, Ana Bušić, and Sean Meyn. Distributed control design for balancing the grid using flexible loads. In IMA Volume on the Control of Energy Markets and Grids, pages 1–26. 2017.
* [7] Chi-Min Chu and Tai-Lang Jong. A novel direct air-conditioning load control method. IEEE transactions on power systems, 23(3):1356–1363, 2008.
* [8] Austin Coffman. Distributed control of TCLs through convex optimization. https://gitlab.com/austinrc925/distributed-control-of-tcls-through-convex-optimization/-/tree/main, 2021.
* [9] Austin Coffman, Ana Bušić, and Prabir Barooah. Virtual energy storage from TCLs using QoS preserving local randomized control. In 5th ACM International Conference on Systems for Built Environments (BuildSys), page 10, November 2018.
* [10] Austin Coffman, Neil Cammardella, Prabir Barooah, and Sean P. Meyn. Aggregate capacity of TCLs with cycling constraints. arXiv:1909.11497, October 2020.
* [11] Austin R. Coffman, Ana Bušić, and Prabir Barooah. Aggregate capacity for TCLs providing virtual energy storage with cycling constraints. In IEEE Conference on Decision and Control, December 2019.
* [12] Austin R. Coffman, Ana Bušić, and Prabir Barooah. Control oriented modeling of TCLs. In American Control Conference, May 2021. Accepted; extended version in arXiv:2009.12960.
* [13] Austin R. Coffman, Zhong Guo, and Prabir Barooah. Characterizing capacity of flexible loads for providing grid support. IEEE Transactions on Power Systems, 36:2428–2437, May 2021.
* [14] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, February 2011.
* [15] H. Hao, B. M. Sanandaji, K. Poolla, and T. L. Vincent. Aggregate flexibility of thermostatically controlled loads. IEEE Transactions on Power Systems, 30(1):189–198, Jan 2015.
* [16] Emre C Kara, Mario Bergés, and Gabriela Hug. Impact of disturbances on modeling of thermostatically controlled loads for demand response. IEEE Transactions on Smart Grid, 6(5):2560–2568, 2015.
* [17] Adil Khurram, Roland Malhamé, Luis Duffaut Espinosa, and Mads Almassalkhi. Identification of hot water end-use process of electric water heaters from energy measurements. Electric Power Systems Research, 189:106625, 2020.
* [18] Zachary E. Lee, Qingxuan Sun, Zhao Ma, Jiangfeng Wang, Jason S. MacDonald, and K. Max Zhang. Providing grid services with heat pumps: A review. ASME Journal of Engineering for Sustainable Buildings and Cities, 1(1):011007, 2020.
* [19] M. Liu and Y. Shi. Model predictive control of aggregated heterogeneous second-order thermostatically controlled loads for ancillary services. IEEE Transactions on Power Systems, 31(3):1963–1971, May 2016.
* [20] M. Liu, Y. Shi, and X. Liu. Distributed MPC of aggregated heterogeneous thermostatically controlled loads in smart grid. IEEE Transactions on Industrial Electronics, 63(2):1120–1129, 2016.
* [21] Roland Malhame and Chee-Yee Chong. Electric load model synthesis by diffusion approximation of a high-order hybrid-state stochastic system. IEEE Transactions on Automatic Control, 30(9):854–860, 1985.
* [22] Alan S. Manne. Linear programming and sequential decisions. Management Science, 6(3):259–267, 1960.
* [23] J. L. Mathieu, S. Koch, and D. S. Callaway. State estimation and control of electric loads to manage real-time energy imbalance. IEEE Transactions on Power Systems, 28:430–440, 2013.
* [24] Md Salman Nazir and Ian Hiskens. Analysis of synchronization in load ensembles. Electric Power Systems Research, 190:106779, 2021.
* [25] Dario Paccagnan, Maryam Kamgarpour, and John Lygeros. On the range of feasible power trajectories for a population of thermostatically controlled loads. In 2015 54th IEEE Conference on Decision and Control (CDC), pages 5883–5888, 2015.
* [26] L. C. Totu, R. Wisniewski, and J. Leth. Demand response of a TCL population using switching-rate actuation. IEEE Transactions on Control Systems Technology, 25(5):1537–1551, 2017.
* [27] Henk Kaarle Versteeg and Weeratunge Malalasekera. An introduction to computational fluid dynamics: the finite volume method. Pearson education, 2007.
* [28] Wei Zhang, Jianming Lian, Chin-Yao Chang, and Karanjit Kalsi. Aggregated modeling and control of air conditioning loads for demand response. IEEE Transactions on Power Systems, 28(4):4655–4664, 2013.
## Appendix A Proofs
### A.1 Proof of Lemma 1
See Appendix B before reading this proof. Property (ii) is a consequence of
the upwind difference scheme used. We see that the diagonal coefficients for
the internal CVs are

off CVs: $\displaystyle\quad-\Big{(}F^{i,+}_{\text{\footnotesize{off}}}+D\Big{)},$ (A.61)

on CVs: $\displaystyle\quad\Big{(}F^{i,-}_{\text{\footnotesize{on}}}-D\Big{)}.$ (A.62)

From Assumption A.2 we have that $F^{i,-}_{\text{\footnotesize{on}}}\leq 0$
and $F^{i,+}_{\text{\footnotesize{off}}}\geq 0$, so that both of the above
terms are negative. The upwind scheme is what ensured that the appropriate
sign was attached to the convective terms so that the above coefficients are
negative. Similar arguments can be applied to the off-diagonal terms of the
internal CVs and to the boundary CVs.
To show property (i) we consider solely an internal CV for the off state, as
the arguments for all other CVs are identical in structure. Note that showing
$A(t)\mathbb{1}=\mathbf{0}$ is equivalent to
$\mathbb{1}^{T}\mathcal{A}(t)=\mathbf{0}^{T}$. Hence we need to show, for an
arbitrary $i$, that all coefficients acting on
$\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$ sum to $0$. We collect the
coefficients corresponding to $\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$:

From CV($i$): $\displaystyle\quad-F^{i,+}_{\text{\footnotesize{off}}}(t)-D.$

From CV($i-1$): $\displaystyle\quad\frac{D}{2}.$

From CV($i+1$): $\displaystyle\quad\frac{D}{2}+F^{i+1,-}_{\text{\footnotesize{off}}}(t).$

We require the sum of these coefficients to be zero for all $t$ and any
index $i$ of the internal off CVs; adding them yields

$\displaystyle F^{i+1,-}_{\text{\footnotesize{off}}}(t)-F^{i,+}_{\text{\footnotesize{off}}}(t)=\frac{f_{\text{\footnotesize{off}}}(\lambda^{i+1,-},t)-f_{\text{\footnotesize{off}}}(\lambda^{i,+},t)}{\Delta\lambda}=0,$

since by construction $\lambda^{i+1,-}=\lambda^{i,+}$ for the off CVs. This
procedure can be repeated for $\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$
with $i\in\\{1,m,N\\}$, i.e., the boundary CVs in the off state, and for all of
the on CVs in a similar fashion.
### A.2 Proof of Lemma 3
If $\alpha=(\Delta t)^{-1}$, the diagonal elements of $A_{k}$ containing
$\alpha$ go to zero and the corresponding non-diagonal elements go to 1. These
non-diagonal elements with value $1$ are the red dots in Figure 3 and
encapsulate the thermostat control law; hence the construction of
$\Phi^{\text{\footnotesize{TS}}}$ from the canonical basis vectors. Now,
multiplying out the matrix we have,
$\displaystyle\Phi^{\text{\footnotesize{TS}}}G_{k}=\begin{bmatrix}\big{(}I-\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{off}}}\big{)}P_{k}^{\text{\footnotesize{off}}}&\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{off}}}S_{k}^{\text{\footnotesize{off}}}\\\
\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{on}}}S_{k}^{\text{\footnotesize{on}}}&\big{(}I-\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{on}}}\big{)}P_{k}^{\text{\footnotesize{on}}}\end{bmatrix}$
(A.63)
where
$\big{(}I-\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{off}}}\big{)}P_{k}^{\text{\footnotesize{off}}}$
(respectively,
$\big{(}I-\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{on}}}\big{)}P_{k}^{\text{\footnotesize{on}}}$)
is the matrix $P_{k}^{\text{\footnotesize{off}}}$ (respectively,
$P_{k}^{\text{\footnotesize{on}}}$) but with the last (respectively, first)
row zeroed out. The exact opposite statement is true for
$\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{off}}}S_{k}^{\text{\footnotesize{off}}}$
and
$\Phi^{\text{\footnotesize{TS}}}_{\text{\footnotesize{on}}}S_{k}^{\text{\footnotesize{on}}}$.
Hence, by definition of the matrices in $G_{k}$ we have
$P_{k}=\Phi^{\text{\footnotesize{TS}}}G_{k}$ where each non-zero element holds
the interpretation (23).
### A.3 Proof of Lemma 4
We define the following transformation for $l\in\\{0,\dots,\tau\\}$ and
$j\in\\{1,\dots,N\\}$ as
$\displaystyle T(j,l)=lN+j$ (A.64)
that maps the integers $j$ and $l$ that label the state values to the absolute
index of either of the vectors $\nu^{\text{E}}_{\text{\footnotesize{off}}}$
and $\nu^{\text{E}}_{\text{\footnotesize{on}}}$. Now consider the following
two sets
$\displaystyle\mathcal{W}_{\text{\footnotesize{off}}}$
$\displaystyle\triangleq\Big{\\{}(u,j,l)\in{\sf X}\ \Big{|}\
\phi^{\text{E}}_{\text{\footnotesize{off}}}(u\ |\ j,\
l)=\beta_{\text{\footnotesize{off}}}(u,j,l)\Big{\\}}$ (A.65)
$\displaystyle\mathcal{W}_{\text{\footnotesize{on}}}$
$\displaystyle\triangleq\Big{\\{}(u,j,l)\in{\sf X}\ \Big{|}\
\phi^{\text{E}}_{\text{\footnotesize{on}}}(u\ |\ j,\
l)=\beta_{\text{\footnotesize{on}}}(u,j,l)\Big{\\}}.$ (A.66)
The values $\beta_{\text{\footnotesize{off}}}$ and
$\beta_{\text{\footnotesize{on}}}$ are chosen to ensure the structural
requirements in (31)-(32) and (35). For example, for $l=1$ and $u=\text{on}$
we have that
$\beta_{\text{\footnotesize{off}}}(\text{\footnotesize{on}},\cdot,1)=\phi^{\text{TS}}_{\text{\footnotesize{off}}}(\text{\footnotesize{on}}|\cdot)$
(and hence
$\beta_{\text{\footnotesize{off}}}(\text{\footnotesize{off}},\cdot,1)=1-\phi^{\text{TS}}_{\text{\footnotesize{off}}}(\text{\footnotesize{on}}|\cdot)$)
so as to enforce the structural requirement in (35). Define, for each $k\in{\sf
T}$, $u,v\in\\{\text{\footnotesize{on}},\text{\footnotesize{off}}\\}$,
$j\in\\{1,\dots,N\\}$, and $l\in\\{0,\dots,\tau\\}$, the following vectors
$\displaystyle h^{u}_{k}[T(j,l)]$
$\displaystyle\triangleq\begin{cases}(\nu_{u}^{\text{E}}[\lambda^{j},l,k])^{-1}&\text{if}\
\nu_{u}^{\text{E}}[\lambda^{j},l,k]>0.\\\ 0&\text{otherwise}.\end{cases}$
$\displaystyle w^{u,v}_{k}[T(j,l)]$
$\displaystyle\triangleq\begin{cases}\beta_{v}(u,j,l)&\text{if}\
(u,j,l)\in\mathcal{W}_{v}\ \text{and}\\\
&\nu_{v}^{\text{E}}[\lambda^{j},l,k]=0.\\\ 0.5&\text{if}\
(u,j,l)\notin\mathcal{W}_{v}\ \text{and}\\\
&\nu_{v}^{\text{E}}[\lambda^{j},l,k]=0.\\\ 0&\text{otherwise}.\end{cases}$
where $T(\cdot,\cdot)$ is defined in (A.64),
$\mathcal{W}_{\text{\footnotesize{off}}}$ in (A.65), and
$\mathcal{W}_{\text{\footnotesize{on}}}$ in (A.66). Let
$W^{u,v}_{k}=\text{diag}(w^{u,v}_{k})$ and
$H^{u}_{k}=\text{diag}(h^{u}_{k})$, and construct the following matrices
$\displaystyle H_{k}$
$\displaystyle=\begin{bmatrix}H_{k}^{\text{\footnotesize{off}}}&\mathbf{0}\\\
\mathbf{0}&H_{k}^{\text{\footnotesize{on}}}\end{bmatrix},\quad\text{and}$
(A.67) $\displaystyle W_{k}$
$\displaystyle=\begin{bmatrix}W^{\text{\footnotesize{off}},\text{\footnotesize{off}}}_{k}&W^{\text{\footnotesize{off}},\text{\footnotesize{on}}}_{k}&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{0}&W^{\text{\footnotesize{on}},\text{\footnotesize{off}}}_{k}&W^{\text{\footnotesize{on}},\text{\footnotesize{on}}}_{k}\end{bmatrix}.$
(A.68)
We first show that $\Phi_{k}^{\text{E}}=H_{k}J_{k}+W_{k}$ satisfies (48). Note
that $\text{diag}(\nu_{k}^{\text{E}})W_{k}=\mathbf{0}$ since, by construction,
if the $i^{th}$ row of $W_{k}$ has a non-zero entry then the $i^{th}$ diagonal
entry of $\text{diag}(\nu_{k}^{\text{E}})$ is zero. The product
$\text{diag}(\nu_{k}^{\text{E}})H_{k}$ is a diagonal matrix with entries
of either zero or one. The zero entries correspond to the zero entries of
$\nu_{k}^{\text{E}}$. In this case, the respective entries in $J_{k}$ are also
zero, so that $\text{diag}(\nu_{k}^{\text{E}})H_{k}J_{k}=J_{k}$ as desired.
We now show that $\Phi_{k}^{\text{E}}=H_{k}J_{k}+W_{k}\in\varPhi$. First
consider an arbitrary state indexed by $(\text{\footnotesize{off}},j,l)$ at
time $k$, if the corresponding value in
$\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k]>0$ then the two policy
values are defined as
$\displaystyle\frac{{\sf P}\left(m_{k+1}=\text{on},\ I_{k}=j,\
{\sf{L}}_{k}=l,\
m_{k}=\text{off}\right)}{\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k]}$
(A.69) $\displaystyle\frac{{\sf P}\left(m_{k+1}=\text{off},\ I_{k}=j,\
{\sf{L}}_{k}=l,\
m_{k}=\text{off}\right)}{\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k]}.$
(A.70)
If either of the above values is fixed in the constraint set $\varPhi$, then
the constraint (49) will ensure this. Further, since we have that
$\nu_{k}^{\text{E}},J_{k}\in[0,1]$ and that
$(\nu_{k}^{\text{E}})^{T}=J_{k}\mathbb{1}$ this ensures that the above policy
values are within $[0,1]$ and sum to 1. The above argument is valid for any
pair of state values such that the corresponding value of $\nu_{k}^{\text{E}}$
is non-zero. If the corresponding value of
$\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k]=0$ and the policy
(conditioned on this state value) has a constraint, the first if case in the
definition of $w^{u,v}_{k}$ ensures this constraint. Further, the constraint
values must also be chosen to ensure the respective policy values are in
$[0,1]$ and sum to one. Lastly, if
$\nu_{\text{\footnotesize{off}}}[\lambda^{j},l,k]=0$ and there is no
constraint for the policy conditioned on this state value the second if case
in the definition of $w^{u,v}_{k}$ ensures the policy value sums to 1 and the
respective elements are in $[0,1]$. Thus $\Phi_{k}^{\text{E}}\in\varPhi$ for
all $k\in{\sf T}$.
### A.4 Proof of Theorem 1
The proof structure is similar to the one in [2]. The idea is to exploit
the facts that: (i) $\nu_{k}^{\text{E}}$ is a decision variable for both
optimization problems (50) and (44), and (ii) the objective function is the
same for both problems and is solely a function of the marginal
$\nu_{k}^{\text{E}}$. We rewrite these problems compactly below,
$\displaystyle\eta^{*}_{\text{CVX}}$
$\displaystyle=\min_{(\nu^{\text{E}},J)\in X}\eta(\nu^{\text{E}}),$ (A.71)
$\displaystyle\eta^{*}_{\text{NCVX}}$
$\displaystyle=\min_{(\nu^{\text{E}},\Phi^{\text{E}})\in
Y}\eta(\nu^{\text{E}}),$ (A.72)
where the sets $X$ and $Y$ collect all of the relevant constraints for the
problems. The variables $\nu^{\text{E}}$, $\Phi^{\text{E}}$, and $J$ are
concatenated over the considered finite time horizon and hence are not
subscripted by $k$. We proceed by showing that
$\eta^{*}_{\text{CVX}}\leq\eta^{*}_{\text{NCVX}}$ and
$\eta^{*}_{\text{NCVX}}\leq\eta^{*}_{\text{CVX}}$, which gives the desired result.
#### A.4.1 $\eta^{*}_{\text{CVX}}\leq\eta^{*}_{\text{NCVX}}$
Pick any argument minimizer that achieves the value $\eta^{*}_{\text{NCVX}}$ and
denote the pair by
$(\nu^{\text{E}}_{\text{NCVX}},\Phi^{\text{E}}_{\text{NCVX}})$. Construct
$J$ through the relation (48), so that this constructed $J$ and
$\nu^{\text{E}}_{\text{NCVX}}$ (which is optimal for (44)) are also feasible
for (50), i.e., $(\nu^{\text{E}}_{\text{NCVX}},J)\in X$. This holds since
$\mathbb{1}^{T}\text{diag}(\nu^{\text{E}}_{k})=\nu^{\text{E}}_{k}$ and
$\mathbb{1}=\Phi^{\text{E}}_{k}\mathbb{1}$. Hence we have that
$\displaystyle\eta^{*}_{\text{CVX}}$
$\displaystyle=\min_{(\nu^{\text{E}},J)\in
X}\eta(\nu^{\text{E}})\leq\eta(\nu^{\text{E}}_{\text{NCVX}})=\eta^{*}_{\text{NCVX}}$
(A.73)
since by definition $\eta^{*}_{\text{CVX}}$ is the minimum value over the set
of feasible solutions.
#### A.4.2 $\eta^{*}_{\text{NCVX}}\leq\eta^{*}_{\text{CVX}}$
We take a pair $(\nu^{\text{E}}_{\text{CVX}},J_{\text{CVX}})$ that achieves the
optimal cost $\eta^{*}_{\text{CVX}}$ and construct a feasible solution for
(44), denoted $(\nu^{\text{E}}_{\text{NCVX}},\Phi^{\text{E}}_{\text{NCVX}})$,
as follows (for each $k$):
$\displaystyle\Phi^{\text{E}}_{k,\text{NCVX}}$
$\displaystyle=H_{k}J_{k}+W_{k},\quad\text{and}$ (A.74)
$\displaystyle\nu^{\text{E}}_{k,\text{NCVX}}$
$\displaystyle=\nu^{\text{E}}_{k,\text{CVX}}.$ (A.75)
where $H_{k}$ and $W_{k}$ are defined in Lemma 5.2. This constructed solution
is then feasible for (44) as the constraint
$\Phi^{\text{E}}_{\text{NCVX}}\in\varPhi$ is part of the result in Lemma 5.2
and
$\displaystyle\nu^{\text{E}}_{k,\text{NCVX}}\Phi^{\text{E}}_{k,\text{NCVX}}G_{k}^{\text{E}}$
$\displaystyle=\nu^{\text{E}}_{k,\text{NCVX}}\big{(}H_{k}J_{k}+W_{k}\big{)}G_{k}^{\text{E}}$
(A.76)
$\displaystyle=\mathbb{1}^{T}J_{k}G_{k}^{\text{E}}=\nu^{\text{E}}_{k+1,\text{NCVX}}.$
(A.77)
The fact that
$\nu^{\text{E}}_{k,\text{NCVX}}\big{(}H_{k}J_{k}+W_{k}\big{)}=\mathbb{1}^{T}J_{k}$
holds since $\nu^{\text{E}}_{k,\text{NCVX}}W_{k}=\mathbf{0}$ and
$\nu^{\text{E}}_{k,\text{NCVX}}H_{k}J_{k}=\mathbb{1}^{T}J_{k}$. The matrix
$W_{k}$ only has non-zero entries in rows whose index corresponds to a zero
entry of the row vector $\nu^{\text{E}}_{k,\text{NCVX}}$, so that the resulting
product is the zero vector. The product $\nu^{\text{E}}_{k,\text{NCVX}}H_{k}$
is a vector of ones and zeros; specifically, if the $i^{th}$ element of this
vector is zero then the entire $i^{th}$ row of the matrix $J_{k}$ is the zero
vector. Thus the equivalence between
$\nu^{\text{E}}_{k,\text{NCVX}}\big{(}H_{k}J_{k}+W_{k}\big{)}$ and
$\mathbb{1}^{T}J_{k}$ holds. Since the constructed solution is feasible, we
have that
$\displaystyle\eta^{*}_{\text{NCVX}}$
$\displaystyle=\min_{(\nu^{\text{E}},\Phi^{\text{E}})\in
Y}\eta(\nu^{\text{E}})\leq\eta(\nu^{\text{E}}_{\text{CVX}})=\eta^{*}_{\text{CVX}}$
(A.78)
since by definition $\eta^{*}_{\text{NCVX}}$ is the minimum value over the set
of feasible solutions.
## Appendix B PDE discretization
We denote the $i^{th}$ CV as CV($i$) and further adopt the following
notational simplifications,
$\displaystyle\mu_{\text{\footnotesize{off}}}(\lambda^{i},t)\triangleq\mu_{\text{\footnotesize{off}}}(\lambda^{i}_{\text{\footnotesize{off}}},t),\quad\text{and}\quad\mu_{\text{\footnotesize{on}}}(\lambda^{i},t)\triangleq\mu_{\text{\footnotesize{on}}}(\lambda^{i}_{\text{\footnotesize{on}}},t).$
Highlighted in red in Figure 2 are the two control volumes that assist in
enforcing the boundary conditions coinciding with the thermostat policy (6).
This is discussed further in Appendix B.2, where the boundary-condition CVs
are discretized.
### B.1 Internal CV’s
Consider the RHS of the pde (9) integrated over CV($i$):
$\displaystyle\int_{\text{CV(i)}}\bigg{(}\frac{\sigma^{2}}{2}\frac{\partial^{2}}{\partial\lambda^{2}}\big{(}\mu_{\text{\footnotesize{on}}}(\lambda,t)\big{)}-\frac{\partial}{\partial\lambda}\big{(}f_{\text{\footnotesize{on}}}(\lambda,t)\mu_{\text{\footnotesize{on}}}(\lambda,t)\big{)}\bigg{)}d\lambda$
$\displaystyle=\bigg{(}\frac{\sigma^{2}}{2}\frac{\partial}{\partial\lambda}\mu_{\text{\footnotesize{on}}}(\lambda,t)-f_{\text{\footnotesize{on}}}(\lambda,t)\mu_{\text{\footnotesize{on}}}(\lambda,t)\bigg{)}\bigg{|}_{\lambda^{i,-}}^{\lambda^{i,+}},$
(B.79)
where equality is by the divergence theorem [27]. Note, the points
$\lambda^{i,-}$ and $\lambda^{i,+}$ are not control volume variables, but
rather the boundaries of a single control volume. Hence, quantities in (B.79)
need to be approximated in terms of the nodal points of the neighboring
control volumes. The approximations for the partial derivative are,
$\displaystyle\frac{\partial}{\partial\lambda}\mu_{\text{\footnotesize{on}}}(\lambda^{i,+},t)$
$\displaystyle\approx\frac{\mu_{\text{\footnotesize{on}}}(\lambda^{i+1},t)-\mu_{\text{\footnotesize{on}}}(\lambda^{i},t)}{\Delta\lambda},\quad\text{and}$
(B.80)
$\displaystyle\frac{\partial}{\partial\lambda}\mu_{\text{\footnotesize{on}}}(\lambda^{i,-},t)$
$\displaystyle\approx\frac{\mu_{\text{\footnotesize{on}}}(\lambda^{i},t)-\mu_{\text{\footnotesize{on}}}(\lambda^{i-1},t)}{\Delta\lambda}.$
(B.81)
For the integrated convective term, we use the so-called upwind scheme [27].
This scheme selects the FVM equivalent of a forward or backward difference
based on the sign of the convective velocity $f_{\text{on}}(\lambda,t)$. By
Assumption A.2, $f_{\text{\footnotesize{on}}}(\lambda,t)\leq 0$, and the
upwind scheme prescribes:
$\displaystyle
f_{\text{\footnotesize{on}}}(\lambda^{i,-},t)\mu_{\text{\footnotesize{on}}}(\lambda^{i,-},t)$
$\displaystyle=f_{\text{\footnotesize{on}}}(\lambda^{i,-},t)\mu_{\text{\footnotesize{on}}}(\lambda^{i},t),\quad\text{and}$
$\displaystyle
f_{\text{\footnotesize{on}}}(\lambda^{i,+},t)\mu_{\text{\footnotesize{on}}}(\lambda^{i,+},t)$
$\displaystyle=f_{\text{\footnotesize{on}}}(\lambda^{i,+},t)\mu_{\text{\footnotesize{on}}}(\lambda^{i+1},t).$
(B.82)
When the TCL is off, $f_{\text{\footnotesize{off}}}(\lambda,t)\geq 0$ (also by
Assumption A.2), and the upwind scheme prescribes:
$\displaystyle
f_{\text{\footnotesize{off}}}(\lambda^{i,-},t)\mu_{\text{\footnotesize{off}}}(\lambda^{i,-},t)$
$\displaystyle=f_{\text{\footnotesize{off}}}(\lambda^{i,-},t)\mu_{\text{\footnotesize{off}}}(\lambda^{i-1},t),\
\text{and}$ $\displaystyle
f_{\text{\footnotesize{off}}}(\lambda^{i,+},t)\mu_{\text{\footnotesize{off}}}(\lambda^{i,+},t)$
$\displaystyle=f_{\text{\footnotesize{off}}}(\lambda^{i,+},t)\mu_{\text{\footnotesize{off}}}(\lambda^{i},t).$
(B.83)
We now return to the discretization of the PDE (9) over an arbitrary internal
CV. We approximate the LHS of (9) integrated over the control volume as,
$\displaystyle\int_{\text{CV(i)}}\frac{\partial}{\partial
t}\mu_{\text{\footnotesize{on}}}(\lambda,t)d\lambda\approx\frac{d}{dt}\mu_{\text{\footnotesize{on}}}(\lambda^{i},t)\Delta\lambda=\frac{d}{dt}\nu_{\text{\footnotesize{on}}}(\lambda^{i},t),$
where we have defined
$\displaystyle\nu_{\text{\footnotesize{on}}}(\lambda^{i},t)\triangleq\mu_{\text{\footnotesize{on}}}(\lambda^{i},t)\Delta\lambda.$
(B.84)
Now, denote the following
$\displaystyle
D\triangleq\frac{\sigma^{2}}{(\Delta\lambda)^{2}},\quad\text{and}\quad
F_{\text{\footnotesize{on}}}^{i}(t)\triangleq\frac{f_{\text{\footnotesize{on}}}(\lambda^{i},t)}{\Delta\lambda},$
(B.85)
where the quantities $F_{\text{\footnotesize{off}}}^{i}(t)$,
$F_{\text{\footnotesize{on}}}^{i,+}(t)/F_{\text{\footnotesize{off}}}^{i,+}(t)$,
and
$F_{\text{\footnotesize{on}}}^{i,-}(t)/F_{\text{\footnotesize{off}}}^{i,-}(t)$
are defined similarly to $F_{\text{\footnotesize{on}}}^{i}(t)$, e.g.,
$F_{\text{\footnotesize{off}}}^{i,+}(t)\triangleq
f_{\text{\footnotesize{off}}}(\lambda^{i,+},t)/\Delta\lambda$. Now, equating
the approximation of the RHS of (9) with the approximation of the LHS of (9),
we have,
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{on}}}(\lambda^{i},t)$
$\displaystyle=\Big{(}F_{\text{\footnotesize{on}}}^{i,-}(t)-D\Big{)}\nu_{\text{\footnotesize{on}}}(\lambda^{i},t)+\frac{D}{2}\nu_{\text{\footnotesize{on}}}(\lambda^{i-1},t)$
$\displaystyle+\Big{(}\frac{D}{2}-F_{\text{\footnotesize{on}}}^{i,+}(t)\Big{)}\nu_{\text{\footnotesize{on}}}(\lambda^{i+1},t).$
(B.86)
The spatial discretization for the PDE (10) is similar and yields,
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$
$\displaystyle=\frac{D}{2}\nu_{\text{\footnotesize{off}}}(\lambda^{i+1},t)-\Big{(}F_{\text{\footnotesize{off}}}^{i,+}(t)+D\Big{)}\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)$
$\displaystyle+\Big{(}\frac{D}{2}+F_{\text{\footnotesize{off}}}^{i,-}(t)\Big{)}\nu_{\text{\footnotesize{off}}}(\lambda^{i-1},t),$
(B.87)
where
$\nu_{\text{\footnotesize{off}}}(\lambda^{i},t)\triangleq\mu_{\text{\footnotesize{off}}}(\lambda^{i},t)\Delta\lambda.$
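A minimal Python sketch of the right-hand sides (B.86)-(B.87) is given below (ours, for illustration only; the boundary CVs and the thermostat source/sink terms of Appendix B.2 are omitted):

```python
import numpy as np

def internal_cv_rates(nu_on, nu_off, F_on_m, F_on_p, F_off_m, F_off_p, D):
    """Right-hand sides (B.86)-(B.87) for the internal CVs only.

    F_on_m[i] = f_on(lambda^{i,-}, t)/dlam, F_on_p[i] = f_on(lambda^{i,+}, t)/dlam,
    and similarly for the off state; D = sigma^2 / dlam^2.
    """
    d_on, d_off = np.zeros_like(nu_on), np.zeros_like(nu_off)
    for i in range(1, len(nu_on) - 1):       # internal on-state CVs, (B.86)
        d_on[i] = ((F_on_m[i] - D) * nu_on[i]
                   + 0.5 * D * nu_on[i - 1]
                   + (0.5 * D - F_on_p[i]) * nu_on[i + 1])
    for i in range(1, len(nu_off) - 1):      # internal off-state CVs, (B.87)
        d_off[i] = (0.5 * D * nu_off[i + 1]
                    - (F_off_p[i] + D) * nu_off[i]
                    + (0.5 * D + F_off_m[i]) * nu_off[i - 1])
    return d_on, d_off
```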
### B.2 Boundary CV’s
The boundary CVs are the CVs associated with the nodal values:
$\lambda_{\text{\footnotesize{on}}}^{1}$,
$\lambda_{\text{\footnotesize{on}}}^{q}$,
$\lambda_{\text{\footnotesize{on}}}^{N}$,
$\lambda_{\text{\footnotesize{off}}}^{1}$,
$\lambda_{\text{\footnotesize{off}}}^{m}$, and
$\lambda_{\text{\footnotesize{off}}}^{N}$. The superscript, for example the
integer $q$ in $\lambda_{\text{\footnotesize{on}}}^{q}$ represents the CV
index. All boundary CVs can be seen in Figure 2. Discretization of the
boundary CVs requires care for at least two reasons. First, this is typically
where one introduces the BCs of the PDE into the numerical approximation.
Second, on finite domains the endpoints present challenges; for example,
there is no variable $\mu_{\text{\footnotesize{on}}}(\lambda^{N+1},t)$ for
computation of the derivative values for node
$\lambda^{N}_{\text{\footnotesize{on}}}$.
The BC’s for the coupled PDEs (9)-(10) are [21]:
Absorbing Boundaries:
$\displaystyle\qquad\qquad\qquad\mu_{\text{\footnotesize{on}}}(\lambda^{\text{min}},t)=\mu_{\text{\footnotesize{off}}}(\lambda^{\text{max}},t)=0.$
(B.88) Conditions at Infinity:
$\displaystyle\qquad\qquad\qquad\mu_{\text{\footnotesize{on}}}(+\infty,t)=\mu_{\text{\footnotesize{off}}}(-\infty,t)=0.$
(B.89) Conservation of Probability:
$\displaystyle\frac{\partial}{\partial\lambda}\bigg{[}\mu_{\text{\footnotesize{on}}}(\lambda^{q,-},t)-\mu_{\text{\footnotesize{on}}}(\lambda^{q-1,+},t)-\mu_{\text{\footnotesize{off}}}(\lambda^{N-1,+},t)\bigg{]}=0.$
(B.90)
$\displaystyle\frac{\partial}{\partial\lambda}\bigg{[}\mu_{\text{\footnotesize{off}}}(\lambda^{m,+},t)-\mu_{\text{\footnotesize{on}}}(\lambda^{2,-},t)-\mu_{\text{\footnotesize{off}}}(\lambda^{m+1,-},t)\bigg{]}=0.$
(B.91) Continuity:
$\displaystyle\qquad\qquad\qquad\mu_{\text{\footnotesize{on}}}(\lambda^{q,-},t)=\mu_{\text{\footnotesize{on}}}(\lambda^{q-1,+},t).$
(B.92)
$\displaystyle\qquad\qquad\qquad\mu_{\text{\footnotesize{off}}}(\lambda^{m,+},t)=\mu_{\text{\footnotesize{off}}}(\lambda^{m+1,-},t).$
(B.93)
As we will see, implementing some of the above conditions will require a
bit of care, while others are trivial to enforce. For example, the
continuity conditions (B.92) and (B.93) are satisfied by default due to our
choice of CV structure, since for any $i$ we have
$\lambda_{\text{\footnotesize{off}}}^{i,-}=\lambda_{\text{\footnotesize{off}}}^{i-1,+}$
and
$\lambda_{\text{\footnotesize{off}}}^{i,+}=\lambda_{\text{\footnotesize{off}}}^{i+1,-}$.
Now focusing on the conditions at infinity BC (B.89), we enforce instead the
following conditions:
$\displaystyle\frac{\partial}{\partial\lambda}\mu_{\text{\footnotesize{off}}}(\lambda^{1,-},t)=0,\quad\text{and}\quad\frac{\partial}{\partial\lambda}\mu_{\text{\footnotesize{on}}}(\lambda^{N,+},t)=0.$
(B.94)
Our computational domain cannot extend to infinity, where the BC (B.89) is
required to hold, but the temperature values
$\lambda_{\text{\footnotesize{off}}}^{1}$ and
$\lambda_{\text{\footnotesize{on}}}^{N}$ are quite far from the deadband,
so the density there will be near zero.
Now, consider the spatial discretization of the CVs associated with the BC at
infinity. First, for the CV associated with the temperature
$\lambda^{1}_{\text{\footnotesize{off}}}$, the differential
equation is
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{off}}}(\lambda^{1},t)$
$\displaystyle=\Big{(}-F_{\text{\footnotesize{off}}}^{1,+}(t)-\frac{D}{2}\Big{)}\nu_{\text{\footnotesize{off}}}(\lambda^{1},t)$
(B.95)
$\displaystyle+\Big{(}\frac{D}{2}+F_{\text{\footnotesize{off}}}^{2,-}(t)\Big{)}\nu_{\text{\footnotesize{off}}}(\lambda^{2},t).$
Considering the CV associated with the temperature $\lambda^{N}_{\text{on}}$,
we have
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{on}}}(\lambda^{N},t)$
$\displaystyle=\Big{(}F_{\text{\footnotesize{on}}}^{N,+}(t)-\frac{D}{2}\Big{)}\nu_{\text{\footnotesize{on}}}(\lambda^{N},t)$
(B.96)
$\displaystyle+\Big{(}\frac{D}{2}-F_{\text{\footnotesize{on}}}^{N,+}(t)\Big{)}\nu_{\text{\footnotesize{on}}}(\lambda^{N-1},t).$
In the above we make the assumption that
$\nu_{\text{\footnotesize{off}}}(\lambda^{1,-}-\Delta\lambda,t)=0$ and
$\nu_{\text{\footnotesize{on}}}(\lambda^{N,+}+\Delta\lambda,t)=0$.
Now focus on the absorbing boundary (B.88) and conservation of probability
(B.90)-(B.91) boundary conditions. These BCs have the following meaning. The
condition (B.88) clamps the density at the end of the deadband to zero. BC
(B.90) reads: the net-flux across the temperature value
$\lambda^{q}_{\text{\footnotesize{on}}}$ is equal to the flux of density going
from off to on. In order to enforce both (B.90) and (B.91) we will model the
flux of density due to the thermostat control policy as a source/sink. Before
doing this, we mention some issues with enforcing the BC (B.88).
A TCL's temperature trajectory will not satisfy the BC (B.88), since to switch
its mode the TCL's temperature sensor will have to register a value outside
the deadband. Therefore, we introduce two additional CVs associated with the
temperatures $\lambda^{1}_{\text{\footnotesize{on}}}$ and
$\lambda^{N}_{\text{\footnotesize{off}}}$, which are shown in red in Figure 2.
We then transfer the BC (B.88) to one on the added CVs, which becomes:
$\displaystyle\mu_{\text{\footnotesize{on}}}(\lambda^{1,-},t)=\mu_{\text{\footnotesize{off}}}(\lambda^{N,+},t)$
$\displaystyle=0.$ (B.97)
As mentioned, to enforce the conservation of probability BC we use a
source/sink type argument, which we also apply to the added CVs. To see what
we mean, consider the following: some rate of density
is transferred out of the CV $\lambda^{N}_{\text{\footnotesize{off}}}$ and
into the CV $\lambda^{q}_{\text{\footnotesize{on}}}$ (as depicted in Figure 2)
due to thermostatic control. We model this transfer as a sink proportional to
$\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)$, with rate
$-\gamma\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)$, where
$\gamma>0$ is a modeling choice and a constant of appropriate units that
describes the discharge rate. We give insight below on how to select a value
for $\gamma$. Now, discretizing the CV corresponding to the nodal value
$\lambda^{N}_{\text{\footnotesize{off}}}$ subject to the BC (B.97) and this
sink, we obtain,
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)$
$\displaystyle=\Big{(}\frac{D}{2}+F_{\text{\footnotesize{off}}}^{N,-}(t)\Big{)}\nu_{\text{\footnotesize{off}}}(\lambda^{N-1},t)$
(B.98) $\displaystyle-\alpha\nu_{\text{\footnotesize{off}}}(\lambda^{N},t),$
where $\alpha\triangleq\big{(}\gamma+D\big{)}$. In obtaining the above, we
have made the reasonable assumption that
$\nu_{\text{\footnotesize{off}}}(\lambda^{N,+}+\Delta\lambda,t)=0$. The
quantity $\alpha\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)$ represents the
rate of change of density from the CV
$\lambda^{N}_{\text{\footnotesize{off}}}$ to the CV
$\lambda^{q}_{\text{\footnotesize{on}}}$, as depicted in Figure 2.
Consequently, to conserve probability, we must add this quantity as a source
to the ode for the CV $\lambda^{q}_{\text{\footnotesize{on}}}$, i.e.,
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{on}}}(\lambda^{q},t)$
$\displaystyle=\dots+\alpha\nu_{\text{\footnotesize{off}}}(\lambda^{N},t).$
(B.99)
The dots in equation (B.99) represent the portion of the dynamics of the
standard internal CV (i.e., the RHS of (B.86)) for the temperature node
$\lambda^{q}_{\text{\footnotesize{on}}}$. A similar argument is used for the
BC (B.91) with the CVs $\lambda^{1}_{\text{\footnotesize{on}}}$ and
$\lambda^{m}_{\text{\footnotesize{off}}}$, and the corresponding differential
equations are,
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{on}}}(\lambda^{1},t)$
$\displaystyle=\Big{(}\frac{D}{2}-F_{\text{\footnotesize{on}}}^{1,+}(t)\Big{)}\nu_{\text{\footnotesize{on}}}(\lambda^{2},t)-\alpha\nu_{\text{\footnotesize{on}}}(\lambda^{1},t),$
(B.100)
$\displaystyle\frac{d}{dt}\nu_{\text{\footnotesize{off}}}(\lambda^{m},t)$
$\displaystyle=\dots+\alpha\nu_{\text{\footnotesize{on}}}(\lambda^{1},t).$
(B.101)
To better understand the role of $\gamma$, consider the following example.
Choosing $\gamma$ so that $\alpha=(\Delta t)^{-1}$, where $\Delta
t$ is a time increment, has the following interpretation: all mass starting in
state $\nu_{\text{\footnotesize{off}}}(\lambda^{N},\cdot)$ at time $t$ is
transferred into the state $\nu_{\text{\footnotesize{on}}}(\lambda^{q},\cdot)$
by time $t+\Delta t$.
#### B.2.1 Additional conditions
Two additional conditions are enforced, namely that once mass is transferred
to the nodes $\lambda^{N}_{\text{\footnotesize{off}}}$ or
$\lambda^{1}_{\text{\footnotesize{on}}}$ it cannot “travel backwards.” For
example, mass is transferred from $\lambda^{N}_{\text{\footnotesize{off}}}$
entirely to the corresponding on temperature bin and no mass is transferred
backwards to $\lambda^{N-1}_{\text{\footnotesize{off}}}$. This corresponds to
setting: (i) the coefficient on
$\nu_{\text{\footnotesize{off}}}(\lambda^{N},t)$ in the ODE for
$\nu_{\text{\footnotesize{off}}}(\lambda^{N-1},t)$ to zero and (ii) the
coefficient on $\nu_{\text{\footnotesize{on}}}(\lambda^{1},t)$ in the ODE for
$\nu_{\text{\footnotesize{on}}}(\lambda^{2},t)$ to zero.
### B.3 Overall system
Now, combining the ODEs (B.86) and (B.87) for all of the internal CVs with
(B.95), (B.96), and (B.98)-(B.101) for the boundary CVs, we obtain the
linear time-varying system,
$\displaystyle\frac{d}{dt}\nu(t)=\nu(t)A(t).$ (B.102)
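In the row-vector convention of (B.102), a forward-Euler time step is simply $\nu(t+\Delta t)\approx\nu(t)+\Delta t\,\nu(t)A(t)$; a minimal Python sketch (ours) reads:

```python
import numpy as np

def euler_step(nu, A_t, dt):
    """One forward-Euler step of (B.102): d(nu)/dt = nu A(t), nu a row vector."""
    return nu + dt * (nu @ A_t)
```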
|
# Impact of surface anisotropy on the spin-wave dynamics in thin ferromagnetic
film
Krzysztof Szulc, Institute of Spintronics and Quantum Information, Faculty of
Physics, Adam Mickiewicz University, Poznań, Uniwersytetu Poznańskiego 2,
61-614 Poznań, Poland

Julia Kharlan, Institute of Spintronics and Quantum Information, Faculty of
Physics, Adam Mickiewicz University, Poznań, Uniwersytetu Poznańskiego 2,
61-614 Poznań, Poland; Institute of Magnetism, National Academy of Sciences of
Ukraine, 36b Vernadskogo Boulevard, 03142 Kyiv, Ukraine

Pavlo Bondarenko, Institute of Magnetism, National Academy of Sciences of
Ukraine, 36b Vernadskogo Boulevard, 03142 Kyiv, Ukraine

Elena V. Tartakovskaya, Institute of Spintronics and Quantum Information,
Faculty of Physics, Adam Mickiewicz University, Poznań, Uniwersytetu
Poznańskiego 2, 61-614 Poznań, Poland; Institute of Magnetism, National
Academy of Sciences of Ukraine, 36b Vernadskogo Boulevard, 03142 Kyiv, Ukraine

Maciej Krawczyk, Institute of Spintronics and Quantum Information, Faculty of
Physics, Adam Mickiewicz University, Poznań, Uniwersytetu Poznańskiego 2,
61-614 Poznań, Poland
###### Abstract
The spin-wave dynamics in a thin CoFeB film in the Damon-Eshbach geometry are
studied for three cases of boundary conditions: free boundary conditions,
symmetrical surface anisotropy, and one-sided surface anisotropy. The
analytical model created by Wolfram and De Wames is extended to include
perpendicular surface anisotropy in the boundary conditions. Its comparison
with numerical simulations demonstrates perfect agreement between the
approaches. The analysis of the dispersion relation indicates that the
presence of surface anisotropy increases the avoided-crossing size between the
Damon-Eshbach mode and perpendicular standing modes. Additionally,
asymmetrical one-sided surface anisotropy induces nonreciprocity in the
dispersion relation. An in-depth analysis of the avoided-crossing size is
conducted for systems with different boundary conditions, thicknesses, surface
anisotropy constant values, and external magnetic fields. It shows the
significant role of the strength of surface localization of the Damon-Eshbach
mode and the symmetry of the perpendicular standing modes in the
avoided-crossing broadening. Interestingly, for a specific set of parameters
the interaction between particular modes can be suppressed, resulting in a
mode crossing. Such a crossing, which occurs only on one side of the
dispersion relation in a one-sided surface anisotropy system, can be utilized
in nonreciprocal devices.
## I Introduction
In recent years, spin waves (SWs), which are collective, harmonic oscillations
of spins that propagate within magnetic materials, have received increased
attention due to their potential to transport and process information with the
reduction of Joule heating and energy dissipation [1]. One of the interesting
properties of propagating SWs in thin magnetic films in Damon-Eshbach (DE)
geometry [2] is the hybridization between the fundamental SW mode and
perpendicular standing SW (PSSW) modes [3, 4, 5, 6, 7, 8, 9]. This may result
in the formation of avoided crossings (ACs), which can be a crucial physical
characteristic for the development of magnonic devices such as filters and
phase shifters. However, the control of the dynamic magnetic properties is a
fundamental problem for the implementation of these devices.
It has been demonstrated that surface anisotropy significantly impacts the
dispersion relation and the AC size between the propagating SW mode and PSSW
modes [10]. Other studies have shown that surface anisotropy can be controlled
by the voltage applied across ferromagnetic-metal/insulator heterostructures
due to the charge accumulation at the interface [11, 12, 13], or across an
insulator/ferromagnet/insulator multilayer due to the influence of dielectric
polarization on the interface [14]. Therefore, it can be concluded that the
hybridization between the fundamental SW mode and higher-order PSSW modes
could be controlled by an electric field. However, there has been no
systematic study on the influence of surface anisotropy on the hybridization
between SW modes in a ferromagnetic film.
In general, there are two alternative approaches which can be used for the
analytical evaluation of dipole-exchange SW spectrum including interaction
between fundamental SW mode and PSSW modes. One approach, proposed by Wolfram
and De Wames [15, 16], involves solving a sixth-order differential equation
derived from Maxwell’s equations along with equations of the magnetization
motion. The extension of Damon and Eshbach’s theory for pure dipolar SWs by
including exchange interactions provides evidence that, as a result of
exchange, the surface and bulk modes mix. This theoretical approach was used
for explanation of the first experiments on magnon branch repulsion in thin
ferromagnetic films with in-plane magnetization [17, 18] and in thin
single-crystal disks of yttrium-iron garnet [19]. Much later, researchers applied the
same method to characterize SWs in infinitely long cylindrical wires with
magnetization along the wire [20, 21].
However, it turned out that the Wolfram and De Wames approach is not suitable
for a broad range of sample geometries and magnetic moment directions. In
fact, its effectiveness is limited to cases of unbroken symmetry in infinite
films, as well as in infinite wires with a magnetic moment along the wire
axis, as previously noted. For more general cases, Kalinikos and Slavin
proposed an alternative approach for mixed exchange boundary conditions in
thin films and the arbitrary direction of external magnetic field and magnetic
moment relative to the film plane [22, 23]. The first step of this method is
to solve Maxwell’s equations separately in the magnetostatic approximation
[24]. Then, the dynamical scalar potential obtained in the form of the
tensorial magnetostatic Green’s functions [25] is inserted into the equations
of motion for the magnetic moment (linearized Landau-Lifshitz equations), and
the resulting integro-differential equation is solved through perturbation
theory. This method has resolved most theoretical issues of spin dynamics in
laterally confined magnetic elements under different magnetic field
configurations. It has been previously applied to describe SW dynamics in
isolated magnetic stripes [26] as well as rectangular [27, 28], cylindrical
[29], and triangular [30] magnetic dots. A notable benefit of the Kalinikos
and Slavin method is that it utilizes a simple analytical formula to achieve
good agreement between theory and experiment for thin, circular nanoelements
with perpendicularly-magnetized states, such as rings [31] and dots [32]. In
more complex cases with broken cylindrical symmetry, it is necessary to
consider a greater number of perturbation theory terms (i.e., the interaction
of SW modes) [33, 34]. However, the applicability of this theory to any case
of nanostructures and geometry of applied fields is not in question. The
method of Wolfram and De Wames turned out to be somewhat forgotten, which
forced Harms and Duine [35] to "rediscover" this ansatz, since in some cases
it provides a more direct path to the result.
A comprehensive review of the two mentioned approaches with an analysis of
their applicability for various cases of the direction of the external field
and magnetization in a ferromagnetic film is given by Arias [36]. The
potential drawbacks of the Kalinikos-Slavin method were identified, including
possible inaccuracies of the results obtained in the region of hybridization
of SW modes, as well as the complexity of describing the interaction of
surface and bulk modes. The theoretical approach proposed in [36] is based on
the method developed by Wolfram and De Wames and provides strict solutions to
the problem. It is important to note that the hybridization of SWs was only
examined in the case of mixed symmetrical boundary conditions.
In this paper, we conduct a systematic analysis of the impact of surface
anisotropy on the SW hybridization that was presented in [10]. We are
confronted with a choice between the two methods described above for
calculating the dynamics of SWs. Following the conclusions of
Arias [36], the Wolfram and De Wames method not only leads to the goal more
efficiently in this case, despite the asymmetry of the boundary conditions,
but also provides a rigorous solution. This is in contrast to the Kalinikos-
Slavin perturbation theory, which requires a significant number of iterations
and provides only an approximate solution. Therefore, we compare the
dispersion relations of SWs in the DE geometry using symmetrical and
asymmetrical boundary conditions via the extended Wolfram and De Wames
approach. The results of the analytical calculations perfectly match the
numerical simulations for the example of a CoFeB thin film. We provide an
in-depth analysis of the dispersion relations, SW mode profiles, and the
effect of material parameters on the SW coupling in terms of the AC size.
## II Methods
### II.1 Investigated system
Figure 1: (a) A general schematic of the system and coordinate system. (b-d)
Schematics of the boundary conditions investigated in the manuscript: (b) free
boundary conditions, (c) symmetrical surface anisotropy, and (d) one-sided
surface anisotropy.
The system under investigation is presented in Fig. 1(a). It is a thin CoFeB
film of thickness $L$ magnetized in-plane in the $y$-direction by the external
magnetic field $H_{0}$. We consider the DE geometry, i.e., SWs propagating
along the $x$-direction, perpendicular to the external field $H_{0}$. The
$z$-axis corresponds to the direction perpendicular to the film plane, and the
surfaces of the film are located at $z=\pm L/2$. The following parameters
were used for CoFeB: saturation magnetization
$M_{\mathrm{S}}=1335~\mathrm{kA/m}$, exchange stiffness
$A_{\mathrm{ex}}=15~\mathrm{pJ/m}$, and gyromagnetic ratio
$\gamma=30~\mathrm{GHz/T}$. In this study, we consider three cases of boundary
conditions: free boundary conditions (FBC), where surface anisotropy is absent
from the system [Fig. 1(b)]; symmetrical surface anisotropy (SSA), i.e.,
surface anisotropy of equal strength present on both boundaries of the film
[Fig. 1(c)]; and one-sided surface anisotropy (OSA), where the bottom surface
has non-zero surface anisotropy while the top surface is described by FBC
[Fig. 1(d)].
### II.2 Analytical model
We use the approach proposed by Wolfram and De Wames [15, 16] to calculate the
dispersion relation in DE geometry in dipole-exchange regime and extend it to
include the perpendicular surface anisotropy introduced by Rado and Weertman
[37].
The magnetic free energy of the system can be presented as
$F=\int\left(-\mu_{0}\mathbf{H}_{0}\cdot\mathbf{M}+\frac{A_{\mathrm{ex}}}{M_{\mathrm{S}}^{2}}\left(\nabla\mathbf{M}\right)^{2}-\frac{1}{2}\mu_{0}\mathbf{H}_{\mathrm{d}}\cdot\mathbf{M}\right)\mathrm{d}V,$
(1)
where the three terms in the integrand represent, respectively, the Zeeman
energy, the exchange energy, and the magnetostatic energy; $\mathbf{M}$ is the
magnetization vector, $\mu_{0}$ is the vacuum permeability, and
$\mathbf{H}_{\mathrm{d}}$ is the demagnetizing field.
The dynamics of the magnetic system are described by the Landau-Lifshitz
equation
$\frac{\partial\mathbf{M}}{\partial
t}=-|\gamma|\mu_{0}\mathbf{M}\times\mathbf{H}_{\mathrm{eff}},$ (2)
where $\mathbf{H}_{\mathrm{eff}}=-\frac{1}{\mu_{0}}\frac{\delta
F}{\delta\mathbf{M}}$ is the effective magnetic field.
The demagnetizing field $\mathbf{H}_{\mathrm{d}}$ is derived from the Maxwell
equations in the magnetostatic approximation:
$\nabla\times\mathbf{H}_{\mathrm{d}}=0,\,\,\,\,\,\nabla\cdot\mathbf{B}=0,$ (3)
where $\mathbf{B}=\mu_{0}(\mathbf{H}_{\mathrm{d}}+\mathbf{M})$ is the magnetic
induction. Equation (3) enables the introduction of the magnetic scalar
potential $\varphi$, which satisfies
$\mathbf{H}_{\mathrm{d}}=-\nabla\varphi$. As a result, the magnetostatic
Maxwell equations are replaced with a single equation for the magnetic scalar
potential
$\Delta\varphi=\nabla\cdot\mathbf{M}.$ (4)
Thanks to the uniform magnetization, the Landau-Lifshitz equation can be
easily linearized. We assume that the static $y$-component of the
magnetization remains constant and is equal to the saturation magnetization
$M_{\mathrm{S}}$, while the dynamic component $\mathbf{m}=(m_{x},m_{z})$,
which is much smaller than the static component $M_{y}$ ($|\mathbf{m}|\ll
M_{\mathrm{S}}$), precesses in the $xz$-plane. Therefore,
$\mathbf{M}(x,y,z,t)=M_{\mathrm{S}}\hat{y}+\mathbf{m}(x,z)e^{i\omega t}$,
where $\omega=2\pi f$ is the angular frequency and $f$ is the frequency.
After linearization, the SW dynamics are described with a set of three coupled
equations:
$i\omega
m_{x}=\gamma\mu_{0}\left(H_{0}-\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}\Delta\right)m_{z}+M_{\mathrm{S}}\partial_{z}\varphi,$
(5) $-i\omega
m_{z}=\gamma\mu_{0}\left(H_{0}-\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}\Delta\right)m_{x}+M_{\mathrm{S}}\partial_{x}\varphi,$
(6) $\Delta\varphi-\partial_{x}m_{x}-\partial_{z}m_{z}=0.$ (7)
The solutions to Eqs. (5)-(7) take the form of plane waves. Due to the
system's symmetry, two wave vectors can be defined: the in-plane wave vector
$k$ (in the $x$-direction) and the out-of-plane wave vector $q$ (in the
$z$-direction), as shown in Fig. 1(a). As a result, we have
$(m_{x},m_{z},\varphi)\propto(m_{x0},m_{z0},\varphi_{0})e^{ikx}e^{iqz}$. The
system in the $x$-direction is infinite, therefore the wave vector $k$ can
only have real values for the solution to be physical. On the other hand, the
wave vector $q$ may take on complex values. For simplicity, we introduce the following dimensionless parameters: $\Omega=\frac{\omega}{\gamma\mu_{0}M_{\mathrm{S}}}$, $\Omega_{H}=\frac{H_{0}}{M_{\mathrm{S}}}$, $\lambda^{2}=\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}^{2}}$, and $\theta=\Omega_{H}+\lambda^{2}(k^{2}+q^{2})$. After
substituting the plane-wave solution into Eqs. (5)-(7) and expressing them in
the matrix form, we obtain
$\begin{pmatrix}i\Omega&\theta&iq\\ \theta&-i\Omega&ik\\ ik&iq&k^{2}+q^{2}\end{pmatrix}\begin{pmatrix}m_{x0}\\ m_{z0}\\ \varphi_{0}\end{pmatrix}=0.$ (8)
Setting the determinant of the 3×3 matrix in Eq. (8) to zero yields
$(k^{2}+q^{2})(\Omega^{2}-\theta^{2}-\theta)=0.$ (9)
As $\theta=\theta(q^{2})$, Eq. (9) is a third-degree polynomial equation in $q^{2}$. Two roots, $q=\pm ik$, are obtained by setting the first bracket to zero, whereas four roots, $q=\pm q_{1}$ and $q=\pm iq_{2}$ with $q_{1},q_{2}\in\mathbb{R}$, are obtained by setting the second bracket to zero. Setting the second bracket in Eq. (9) to zero also yields the dimensionless frequency
$\Omega=\sqrt{\theta(\theta+1)}.$ (10)
Let $\theta(q=q_{1})=\theta_{1}$ and $\theta(q=q_{2})=\theta_{2}$. Since
$q_{1}$ and $q_{2}$ correspond to the same frequency,
$\Omega=\sqrt{\theta_{1}(\theta_{1}+1)}=\sqrt{\theta_{2}(\theta_{2}+1)}$, and
therefore, $\theta_{2}=-(\theta_{1}+1)$. From this formula we obtain the relation between the wave vectors $k$, $q_{1}$, and $q_{2}$:
$q_{2}=\pm\sqrt{2k^{2}+q_{1}^{2}+\frac{2\Omega_{H}+1}{\lambda^{2}}}.$ (11)
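The factorization in Eq. (9) and the relation in Eq. (11) can be verified symbolically. The following sympy sketch (our own consistency check, not part of the original derivation) confirms both:

```python
import sympy as sp

# symbols: Omega (dimensionless frequency), theta, in/out-of-plane wave vectors
W, th, k, q = sp.symbols('Omega theta k q')

# the 3x3 matrix of Eq. (8)
M = sp.Matrix([[sp.I*W, th,      sp.I*q],
               [th,     -sp.I*W, sp.I*k],
               [sp.I*k, sp.I*q,  k**2 + q**2]])

# its determinant factorizes as in Eq. (9)
target = sp.expand((k**2 + q**2)*(W**2 - th**2 - th))
assert sp.simplify(sp.expand(M.det()) - target) == 0

# Eq. (11): with theta = Omega_H + lambda^2 (k^2 + q^2), the surface exchange
# solution e^{±q2 z} corresponds to q -> ±i q2, i.e. q^2 -> -q2^2
OH, lam, q1, q2 = sp.symbols('Omega_H lambda q1 q2', positive=True)
th1 = OH + lam**2*(k**2 + q1**2)
th2 = OH + lam**2*(k**2 - q2**2)
rel = sp.solve(sp.Eq(th2, -(th1 + 1)), q2**2)[0]
print(sp.simplify(rel))   # -> 2*k**2 + q1**2 + (2*Omega_H + 1)/lambda**2
```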
The solutions obtained for the out-of-plane wave vector $q$ can be interpreted as follows. Since our solution is a plane wave, the wave vector $q_{1}$ gives a sinusoidal volume contribution to the mode profile, while the wave vectors $k$ and $q_{2}$ correspond to exponentially decaying contributions localized at the surfaces. Since $k$ is also the propagating in-plane wave vector, this solution has the character of a DE mode. Moreover, since $\Omega_{H}\geq 0$, Eq. (11) implies $|q_{2}|\geq 1/\lambda$, indicating that $q_{2}$ has the character of a surface exchange mode.
The solution of Eq. (8) can be represented by a vector
$\begin{pmatrix}m_{x0}\\ m_{z0}\\ \varphi_{0}\end{pmatrix}=\begin{pmatrix}ik\theta-q\Omega\\ iq\theta+k\Omega\\ \Omega^{2}-\theta^{2}\end{pmatrix}C,$ (12)
where $C$ is an arbitrary constant. The general solution for the full vector
$(m_{x},m_{z},\varphi)$ is a superposition of six terms, one for each solution
of the wave vector $q$
$\begin{pmatrix}m_{x}\\ m_{z}\\ \varphi\end{pmatrix}=\left[\begin{pmatrix}X_{1}\\ Z_{1}\\ F_{1}\end{pmatrix}C_{1}e^{iq_{1}z}+\begin{pmatrix}X_{2}\\ Z_{2}\\ F_{2}\end{pmatrix}C_{2}e^{-iq_{1}z}+\begin{pmatrix}X_{3}\\ Z_{3}\\ F_{3}\end{pmatrix}C_{3}e^{kz}+\begin{pmatrix}X_{4}\\ Z_{4}\\ F_{4}\end{pmatrix}C_{4}e^{-kz}+\begin{pmatrix}X_{5}\\ Z_{5}\\ F_{5}\end{pmatrix}C_{5}e^{q_{2}z}+\begin{pmatrix}X_{6}\\ Z_{6}\\ F_{6}\end{pmatrix}C_{6}e^{-q_{2}z}\right]e^{ikx},$ (13)
where $X_{1}=ik\theta_{1}-q_{1}\Omega$, $X_{2}=ik\theta_{1}+q_{1}\Omega$, $X_{3}=X_{4}=ik$, $X_{5}=ik\theta_{2}-iq_{2}\Omega$, $X_{6}=ik\theta_{2}+iq_{2}\Omega$, $Z_{1}=k\Omega+iq_{1}\theta_{1}$, $Z_{2}=k\Omega-iq_{1}\theta_{1}$, $Z_{3}=-k$, $Z_{4}=k$, $Z_{5}=k\Omega-q_{2}\theta_{2}$, $Z_{6}=k\Omega+q_{2}\theta_{2}$, $F_{1}=F_{2}=\Omega^{2}-\theta_{1}^{2}$, $F_{3}=-(\Omega+\Omega_{H})$, $F_{4}=\Omega-\Omega_{H}$, and $F_{5}=F_{6}=\Omega^{2}-\theta_{2}^{2}$, as follows from Eq. (12).
As the system under consideration is an infinite film, boundary conditions must be applied on the top and bottom surfaces. Our goal is to extend the model derived by Wolfram and De Wames to include the perpendicular surface anisotropy, which requires extending the exchange boundary conditions with a term that depends on the surface anisotropy [37]:
$\left\{\begin{aligned}\partial_{z}m_{x}&=0\big|_{z=\pm L/2},\\ \partial_{z}m_{z}\mp\sigma_{\mathrm{t(b)}}m_{z}&=0\big|_{z=\pm L/2},\end{aligned}\right.$ (14)
where $\sigma_{\mathrm{t(b)}}=K_{\mathrm{s}}^{\mathrm{t(b)}}/A_{\mathrm{ex}}$ and $K_{\mathrm{s}}^{\mathrm{t(b)}}$ is the surface anisotropy constant of the top (bottom) surface.
Since the equation for the magnetic scalar potential [Eq. (7)] outside of the film gives $\Delta\varphi_{\mathrm{out}}=0$ and, subsequently, $-\varphi_{0}(k^{2}+q^{2})e^{ikx}e^{iqz}=0$, the asymptotic solutions for the magnetic scalar potential outside the film are

$\varphi_{\mathrm{out}}=\begin{cases}C_{7}e^{ikx}e^{-|k|z}&\text{for }z\geq L/2,\\ C_{8}e^{ikx}e^{|k|z}&\text{for }z\leq-L/2.\end{cases}$ (15)
As the tangential components of the demagnetizing field
$\mathbf{H}_{\mathrm{d}}$ are continuous across the surfaces of the film, the
magnetic scalar potential must also be continuous. Additionally, the normal
component of $\mathbf{B}$ must also be continuous. Therefore, this results in
the effective magnetostatic boundary conditions:
$\varphi=\varphi_{\mathrm{out}},$ (16)

$B_{z}=B_{z}^{\mathrm{out}},$ (17)

where $\varphi$ and $B_{z}$ are the magnetic scalar potential and the magnetic induction inside the magnetic material, and $\varphi_{\mathrm{out}}$ and $B_{z}^{\mathrm{out}}$ are the corresponding quantities outside it. Eq. (17) can be rewritten in terms of the scalar potential as

$\partial_{z}\varphi-m_{z}=\partial_{z}\varphi_{\mathrm{out}}.$ (18)
The complete set of boundary conditions in Eqs. (14), (16), and (18) evaluated
for the SW modes in Eq. (13) leads to the following degeneracy matrix
$\bm{A}$:
$\begin{split}\bm{A}=\left(\begin{matrix}iq_{1}X_{1}e^{iq_{1}\frac{L}{2}}&-iq_{1}X_{2}e^{-iq_{1}\frac{L}{2}}&kX_{3}e^{k\frac{L}{2}}\\ iq_{1}X_{1}e^{-iq_{1}\frac{L}{2}}&-iq_{1}X_{2}e^{iq_{1}\frac{L}{2}}&kX_{3}e^{-k\frac{L}{2}}\\ (iq_{1}-\sigma_{\mathrm{t}})Z_{1}e^{iq_{1}\frac{L}{2}}&(-iq_{1}-\sigma_{\mathrm{t}})Z_{2}e^{-iq_{1}\frac{L}{2}}&(k-\sigma_{\mathrm{t}})Z_{3}e^{k\frac{L}{2}}\\ (iq_{1}+\sigma_{\mathrm{b}})Z_{1}e^{-iq_{1}\frac{L}{2}}&(-iq_{1}+\sigma_{\mathrm{b}})Z_{2}e^{iq_{1}\frac{L}{2}}&(k+\sigma_{\mathrm{b}})Z_{3}e^{-k\frac{L}{2}}\\ [(iq_{1}+|k|)F_{1}-Z_{1}]e^{iq_{1}\frac{L}{2}}&[(-iq_{1}+|k|)F_{2}-Z_{2}]e^{-iq_{1}\frac{L}{2}}&[(k+|k|)F_{3}-Z_{3}]e^{k\frac{L}{2}}\\ [(iq_{1}-|k|)F_{1}-Z_{1}]e^{-iq_{1}\frac{L}{2}}&[(-iq_{1}-|k|)F_{2}-Z_{2}]e^{iq_{1}\frac{L}{2}}&[(k-|k|)F_{3}-Z_{3}]e^{-k\frac{L}{2}}\end{matrix}\right|\\ \left|\begin{matrix}-kX_{4}e^{-k\frac{L}{2}}&q_{2}X_{5}e^{q_{2}\frac{L}{2}}&-q_{2}X_{6}e^{-q_{2}\frac{L}{2}}\\ -kX_{4}e^{k\frac{L}{2}}&q_{2}X_{5}e^{-q_{2}\frac{L}{2}}&-q_{2}X_{6}e^{q_{2}\frac{L}{2}}\\ (-k-\sigma_{\mathrm{t}})Z_{4}e^{-k\frac{L}{2}}&(q_{2}-\sigma_{\mathrm{t}})Z_{5}e^{q_{2}\frac{L}{2}}&(-q_{2}-\sigma_{\mathrm{t}})Z_{6}e^{-q_{2}\frac{L}{2}}\\ (-k+\sigma_{\mathrm{b}})Z_{4}e^{k\frac{L}{2}}&(q_{2}+\sigma_{\mathrm{b}})Z_{5}e^{-q_{2}\frac{L}{2}}&(-q_{2}+\sigma_{\mathrm{b}})Z_{6}e^{q_{2}\frac{L}{2}}\\ [(-k+|k|)F_{4}-Z_{4}]e^{-k\frac{L}{2}}&[(q_{2}+|k|)F_{5}-Z_{5}]e^{q_{2}\frac{L}{2}}&[(-q_{2}+|k|)F_{6}-Z_{6}]e^{-q_{2}\frac{L}{2}}\\ [(-k-|k|)F_{4}-Z_{4}]e^{k\frac{L}{2}}&[(q_{2}-|k|)F_{5}-Z_{5}]e^{-q_{2}\frac{L}{2}}&[(-q_{2}-|k|)F_{6}-Z_{6}]e^{q_{2}\frac{L}{2}}\end{matrix}\right).\end{split}$ (19)
The condition $\det\bm{A}=0$ yields the allowed values of the wave vector $q$ and, consequently, the resonance frequencies as a function of the wave vector $k$. The eigenvectors of the matrix $\bm{A}$ provide the coefficients $C_{i}$ in Eq. (13).
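As an illustration of how the dispersion can be extracted in practice, the sketch below (our own, with assumed CoFeB-like parameters rather than the exact values used in this work) scans the dimensionless frequency at fixed $k$ and locates the minima of $|\det\bm{A}|$, assembling $\bm{A}$ from the coefficients listed after Eq. (13) and the boundary conditions of Eqs. (14), (16), and (18):

```python
import numpy as np

mu0 = 4e-7*np.pi
Ms, Aex, gamma = 1.3e6, 15e-12, 1.76e11   # A/m, J/m, rad/(s T); assumed values
L = 100e-9                                 # film thickness
sig_t, sig_b = 0.0, -1500e-6/Aex          # Eq. (14) pinning (OSA example)
lam2 = 2*Aex/(mu0*Ms**2)                  # lambda^2
OH = 50e-3/(mu0*Ms)                       # Omega_H for mu0 H0 = 50 mT

def detA(Omega, k):
    th1 = (-1 + np.sqrt(1 + 4*Omega**2))/2          # invert Eq. (10)
    q1 = np.sqrt(complex((th1 - OH)/lam2 - k**2))   # from theta1 definition
    q2 = np.sqrt(2*k**2 + q1**2 + (2*OH + 1)/lam2)  # Eq. (11)
    th2 = -(th1 + 1)
    O, ak = Omega, abs(k)
    # the six partial waves of Eq. (13): profiles ~ e^{s z}
    s = np.array([1j*q1, -1j*q1, k, -k, q2, -q2])
    X = np.array([1j*k*th1 - q1*O, 1j*k*th1 + q1*O, 1j*k, 1j*k,
                  1j*k*th2 - 1j*q2*O, 1j*k*th2 + 1j*q2*O])
    Z = np.array([k*O + 1j*q1*th1, k*O - 1j*q1*th1, -k, k,
                  k*O - q2*th2, k*O + q2*th2])
    F = np.array([O**2 - th1**2, O**2 - th1**2, -(O + OH), O - OH,
                  O**2 - th2**2, O**2 - th2**2])
    ep, em = np.exp(s*L/2), np.exp(-s*L/2)
    A = np.array([s*X*ep,              # d_z m_x = 0 at z = +L/2
                  s*X*em,              # d_z m_x = 0 at z = -L/2
                  (s - sig_t)*Z*ep,    # Eq. (14) at z = +L/2
                  (s + sig_b)*Z*em,    # Eq. (14) at z = -L/2
                  ((s + ak)*F - Z)*ep, # Eqs. (16),(18) at z = +L/2
                  ((s - ak)*F - Z)*em])# Eqs. (16),(18) at z = -L/2
    return np.linalg.det(A)

k = 1e7                                     # rad/m
Om = np.linspace(0.05, 3.0, 4000)
d = np.array([abs(detA(O, k)) for O in Om])
idx = np.where((d[1:-1] < d[:-2]) & (d[1:-1] < d[2:]))[0] + 1  # local minima
print(Om[idx]*gamma*mu0*Ms/(2*np.pi)/1e9)   # approximate mode frequencies, GHz
```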
Compared to the approach suggested by Kalinikos et al. [23], the solution presented above is exact within the examined geometry. Obtaining the coupled modes does not require calculating multiple integrals for the components of the demagnetizing tensor or expanding the dynamic magnetization components into a series, which simplifies the analytical calculations and significantly reduces the computation time.
### II.3 Numerical simulations
The Landau-Lifshitz equation in the linear approximation [Eqs. (5),(6)] and the magnetostatic Maxwell equation for the magnetic scalar potential [Eq. (7)], along with the boundary conditions for perpendicular surface anisotropy [Eq. (14)] and the magnetostatic potential [Eq. (15)], were solved numerically using finite-element method simulations in COMSOL Multiphysics [10]. The problem was solved in a 1D geometry with the $x$- and $y$-directions reduced. Eqs. (5)-(7) were modified accordingly to introduce the terms arising from the plane-wave ansatz $(m_{x},m_{z},\varphi)=(m_{x0},m_{z0},\varphi_{0})e^{ikx}$, representing the propagation of SWs in the $x$-direction. The dispersion relations were calculated using an eigenfrequency study.
## III Results and discussion
### III.1 Dispersion relation analysis
Figure 2: (a-d) Dispersion relations of the six lowest modes of a 100 nm-thick CoFeB film with (a) FBC, (b) SSA with $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700~\mathrm{\mu J/m^{2}}$, and (c,d) OSA with $K_{\mathrm{s}}^{\mathrm{t}}=0$ and $K_{\mathrm{s}}^{\mathrm{b}}=-1500~\mathrm{\mu J/m^{2}}$ for (c) negative and (d) positive wave vectors in the external magnetic field $\mu_{0}H_{0}=50$ mT. The plots present the comparison between the analytical model (orange lines) and numerical simulations (blue lines). Avoided crossings (ACs) are marked with labels. (e) The frequency difference between neighboring modes in the FBC system. In plots (a-e) the wave vector $k$ on the $x$-axis is presented on a logarithmic scale. (f-i) Close-ups of the ACs: (f) AC1, (g) AC2, (h) AC3, and (i) AC4. The plot axes show the wave vector and frequency values relative to the AC position calculated from Eqs. (20) and (24), respectively. The plots present numerical simulation results only, which are in agreement with the analytical results.
First, we study the effect of the surface anisotropy on the dispersion relation. We chose a CoFeB film thickness of $L=100$ nm and an external magnetic field $\mu_{0}H_{0}=50$ mT. We show the dispersion relation of the six lowest modes for three cases—free boundary conditions (FBC), i.e., $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=0$ [Fig. 2(a)]; symmetrical surface anisotropy (SSA) with $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700~\mathrm{\mu J/m^{2}}$ [Fig. 2(b)]; and one-sided surface anisotropy (OSA) with $K_{\mathrm{s}}^{\mathrm{t}}=0$ and $K_{\mathrm{s}}^{\mathrm{b}}=-1500~\mathrm{\mu J/m^{2}}$, separately for negative [Fig. 2(c)] and positive [Fig. 2(d)] wave vector $k$. The values of the surface anisotropy are comparable to those reported in the literature [38].
The dispersion relation calculated with the analytical model is shown as a dashed orange line, while the numerical simulation results are shown with a dashed blue line. Figs. 2(a-d) demonstrate the perfect agreement between the two methods, which yield identical outcomes. The dispersions are characteristic of a system in the DE geometry. Each plot consists of one branch with a significant slope in the center of the investigated range of wave vector $k$, displaying a DE surface mode character, while the remaining five are flat branches representing PSSW modes. All modes start to increase significantly in frequency at about $10^{7}$ rad/m as a result of the increasing contribution of the exchange interaction to the SW energy.
The positions of the PSSW modes at $k\approx 0$ are determined by the wave vector $q_{1}\approx n\pi/L$ ($n=1,2,3,\ldots$ is the PSSW mode number). In the presence of negative surface anisotropy, $q_{1}>n\pi/L$ for the corresponding PSSW modes (the reverse happens for positive surface anisotropy). The increase of the frequency of the DE mode correlates with the increase of its wave vector $q_{1}$ with increasing $k$. However, $q_{1}$ begins to decrease at some point, leading to $q_{1}\approx n\pi/L$ for very large wave vectors $k$. As in the case of $k\approx 0$, for very large $k$ in the presence of negative surface anisotropy $q_{1}>n\pi/L$ (the reverse happens for positive surface anisotropy). A detailed explanation of the correlation between the wave vectors $k$ and $q_{1}$ is provided in Appendix A. The DE mode
leading to the emergence of ACs. These ACs are labeled in Figs. 2(a-d) with
the abbreviation AC and a number indicating their sequence, beginning with the
lowest.
The discussion of ACs requires a precise definition of where an AC occurs. Neglecting the atomic distance limit, the theory provides an infinite number of SW modes. Though it is hypothetically possible for ACs to be present between all pairs of modes, it is apparent that the number of ACs is not infinite for a film of finite thickness. To identify the presence of an AC, we establish two distinct criteria. The first is the local minimum criterion: if the function representing the frequency difference between neighboring modes
represents the frequency difference between the neighboring modes
$\Delta f_{mn}=f_{m}(k)-f_{n}(k)$ (20)
(where $m$ and $n$ are mode numbers) has a local minimum $\Delta f_{\mathrm{AC}n}$, this minimum represents an AC (or simply a crossing if $\Delta f_{\mathrm{AC}}=0$). In this way, an AC can be defined for any boundary conditions, and multiple ACs are allowed if multiple local minima exist. The second is the frequency limit criterion, which can be clearly defined only for FBC. It states that an AC is present between the DE mode and the $n$-th PSSW mode if $f_{k\to\infty}^{\mathrm{DE}}>f_{k=0}^{n}$, where $f_{k\to\infty}^{\mathrm{DE}}$ is calculated for $A_{\mathrm{ex}}=0$ [2], i.e.,

$f_{k\to\infty}^{\mathrm{DE}}=\frac{\mu_{0}\gamma}{2\pi}\left(H_{0}+\frac{M_{\mathrm{S}}}{2}\right)$ (21)

and [39]

$f_{k=0}^{n}=\frac{\mu_{0}\gamma}{2\pi}\left[\left(H_{0}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}\left(\frac{n\pi}{L}\right)^{2}\right)\left(H_{0}+M_{\mathrm{S}}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}\left(\frac{n\pi}{L}\right)^{2}\right)\right]^{1/2}.$ (22)
This criterion is valid under the assumption that the contribution of the exchange interaction to the $k$ dependence of the frequency is identical for the DE and PSSW modes. In both cases the AC position is determined by the minimum of Eq. (20), which means that the choice of criterion does not influence the value of the AC size. In this paper, we present the results based on the local minimum criterion because of its broader definition; however, we also discuss the frequency limit criterion and its impact on the results.
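In practice, the local minimum criterion amounts to scanning the neighboring-branch distance of Eq. (20) for local minima and reading off the AC position and size via Eq. (24). A minimal sketch (our own helper, assuming the dispersion branches are available as sorted arrays) is:

```python
import numpy as np
from scipy.signal import argrelmin

def find_acs(k, freqs):
    """k: (Nk,) wave vectors; freqs: (Nmodes, Nk) branches sorted by frequency.
    Returns a list of (mode index n, k_AC, f_AC, AC size) tuples."""
    acs = []
    for n in range(freqs.shape[0] - 1):
        df = freqs[n + 1] - freqs[n]                    # Eq. (20)
        for i in argrelmin(df)[0]:                      # local minimum criterion
            f_ac = 0.5*(freqs[n, i] + freqs[n + 1, i])  # Eq. (24)
            acs.append((n + 1, k[i], f_ac, df[i]))
    return acs
```

The branches themselves can be generated, for example, with the determinant scan sketched in Sec. II.2.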
To address AC occurrence accurately, the frequency difference between
neighboring modes is presented as a function of wave vector $k$ in Fig. 2(e)
for the case of FBC, for which the dispersion relation is shown in Fig. 2(a).
In the range of small and large wave vectors, the distance between the modes
is almost constant. The discrepancy between these ranges is due to the fact that in the limit of small wave vectors the dispersion relation of the modes can be described by Eq. (22) [39], while in the large wave vector limit it is described by
$f_{n}=\frac{\mu_{0}\gamma}{2\pi}\left(H_{0}+M_{\mathrm{S}}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}k^{2}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}\left(\frac{n\pi}{L}\right)^{2}\right).$
(23)
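The frequency limit criterion itself is straightforward to evaluate from the closed-form expressions in Eqs. (21) and (22). A short numerical sketch (with assumed, illustrative CoFeB-like parameters, so the resulting numbers will differ slightly from those quoted in the text) is:

```python
import numpy as np

mu0 = 4e-7*np.pi
Ms, Aex, gamma = 1.3e6, 15e-12, 1.76e11   # A/m, J/m, rad/(s T); assumed values
H0, L = 50e-3/mu0, 100e-9                  # field (A/m), film thickness (m)

f_de_inf = mu0*gamma/(2*np.pi)*(H0 + Ms/2)            # Eq. (21)

def f_pssw(n):                                         # Eq. (22)
    h_ex = 2*Aex/(mu0*Ms)*(n*np.pi/L)**2               # exchange field of n-th PSSW
    return mu0*gamma/(2*np.pi)*np.sqrt((H0 + h_ex)*(H0 + Ms + h_ex))

for n in range(1, 6):
    # True -> an AC with the n-th PSSW mode is expected under this criterion
    print(n, f_pssw(n)/1e9, f_pssw(n) < f_de_inf)
```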
Table 1: AC sizes of AC1-AC5 for the FBC, SSA, and OSA systems, whose dispersion relations are shown in Figs. 2(a-d).

System | AC1 (MHz) | AC2 (MHz) | AC3 (MHz) | AC4 (MHz) | AC5 (MHz)
---|---|---|---|---|---
FBC | $11.14$ | $6.04$ | $158.8$ | $1137.2$ | $6022.4$
SSA | $21.03$ | $231.5$ | $280.6$ | $1327.7$ | $5781.5$
OSA ($k-$) | $162.8$ | $254.8$ | $565.6$ | $1389.2$ | $5474.4$
OSA ($k+$) | $122.8$ | $175.7$ | $24.67$ | $1396.0$ | $6119.0$
In the mid-range, each curve shown in Fig. 2(e) has a local minimum
corresponding to the AC, which is labeled and marked with an arrow. The first
three ACs are relatively small, not exceeding a size of 200 MHz. The AC4,
represented by a deep minimum, has a size of 1.14 GHz. On the other hand, AC5
has a very shallow minimum with a size of 6.02 GHz. Interestingly, it is not the global minimum, as according to Eq. (23) the distance between the modes can reach 5.99 GHz, a value in agreement with the analytical model. Nevertheless, according to the local minimum criterion, it is considered to be an AC. In the case of the frequency limit criterion, only the first three minima can be identified as ACs. AC4 does not meet this criterion, as $f_{k=0}^{n=4}=27.55$ GHz slightly exceeds $f_{k\to\infty}^{\mathrm{DE}}=26.66$ GHz.
Having presented the similarities between the systems, we now highlight the differences. Firstly, the symmetry of the system, specifically of the boundary conditions, leads to the symmetry of the dispersion relation with respect to the wave vector. Therefore, the FBC and SSA systems have symmetrical dispersions since $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}$. In contrast, the OSA system has different surface anisotropy constants on the top and bottom surfaces, resulting in a frequency difference between negative and positive wave vectors. Additionally, the presence of the negative surface anisotropy causes a slight increase in the frequency of all modes. Comparing the results in Figs. 2(a) and (b), for $K_{\mathrm{s}}=-700~\mathrm{\mu J/m^{2}}$ the increase does not exceed 1 GHz. Conversely, for a positive surface anisotropy, a decrease in frequency would be noted.
The most significant difference between the systems lies in the size of the ACs. Close-up plots of AC1-AC4 are shown in Figs. 2(f-i), respectively. They show the dispersion relation for values of the wave vector and frequency relative to the AC location ($k_{\mathrm{AC}}$, $f_{\mathrm{AC}}$), which is defined separately for each AC as follows: $k_{\mathrm{AC}_{n}}$ is the wave vector of the local minimum of Eq. (20), while $f_{\mathrm{AC}_{n}}$ is given by
$f_{\mathrm{AC}_{n}}=\frac{f_{n}(k_{\mathrm{AC}_{n}})+f_{n+1}(k_{\mathrm{AC}_{n}})}{2}.$
(24)
The values of the AC size for each case can be found in Table 1. AC1 [Fig. 2(f)] exhibits a negligible size for the FBC and SSA systems, but a more significant size of 162.8 MHz for negative and 122.8 MHz for positive wave vectors in the OSA system. As the dispersion relations for FBC and SSA are symmetrical, their AC sizes are always equal for negative and positive wave vectors. The size of AC2 [Fig. 2(g)] remains small only in the FBC system, whereas it opens up in the SSA and OSA systems, reaching sizes larger than that of AC1. In the OSA system, there is a slight asymmetry between the negative and positive wave vector ranges. AC3 [Fig. 2(h)] opens up in all the considered cases. The most interesting behavior occurs in the OSA system: in the range of negative wave vectors this AC is large, whereas in the range of positive wave vectors it is very small, measuring only 24.67 MHz. AC4 is much larger than the lower-order ACs, having a size above 1 GHz; however, its size is very similar in all of the systems [Fig. 2(i)]. AC5 presents a similar case, with its size being even larger, measuring above 5 GHz.
### III.2 Mode profiles
Figure 3: Distribution across the film thickness of the dynamic magnetization components $m_{x}$ (solid lines) and $m_{z}$ (dashed lines) for (a) the first mode at wave vector $k=0$, (b) the first mode at $k=-5\times 10^{8}$ rad/m, (c) the third mode at $k=-2.5\times 10^{6}$ rad/m, and (d) the third mode at $k=2.5\times 10^{6}$ rad/m. Mode profiles are presented for the systems with FBC (blue lines), SSA (orange lines), and OSA (green lines). The plots present numerical simulation results only, which are in agreement with the analytical results.

Figure 4: The AC size $\Delta f_{\mathrm{AC}}$ as a function of film thickness $L$ for the system with (a) FBC, (b) SSA, and OSA for (c) negative and (d) positive wave vector $k$. Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines. The $y$-axis is on a logarithmic scale. The plots present results of numerical simulations.
The surface anisotropy has a significant impact on the dynamic magnetization distribution of the SW modes, with mode profiles shown in Fig. 3. Firstly, we present the profile of the lowest frequency mode at $k=0$ in Fig. 3(a). Due to the low external field, the spin precession is strongly elliptical and dominated by the in-plane $m_{x}$ component. In the case of FBC (blue lines), the mode is uniform throughout the thickness. The negative surface anisotropy leads to a reduction of the SW amplitude close to the film boundary. The mode is symmetrical for SSA, while for OSA it becomes asymmetrical. Interestingly, although the surface anisotropy directly affects only the out-of-plane $m_{z}$ component, the in-plane $m_{x}$ component is also impacted. However, in the dipole-dominated low-wave-vector regime, the effect of surface anisotropy is generally small. The impact on the PSSW modes (not shown here) is even smaller. In contrast, the anisotropy has a substantial effect on the mode profiles in the exchange-dominated large-wave-vector region, as evidenced in Fig. 3(b) for the lowest frequency mode at $k=-5\times 10^{8}$ rad/m. In both the SSA and OSA cases, the mode amplitude is significantly lower near the boundary with surface anisotropy in comparison to the FBC case. Interestingly, in this case the $m_{z}$ component exceeds the $m_{x}$ component, and the precession is close to circular.
In Figs. 3(c,d), profiles of the third lowest mode are shown at $k$ between AC2 and AC3 for negative [$k=-2.5\times 10^{6}$ rad/m, Fig. 3(c)] and positive [$k=2.5\times 10^{6}$ rad/m, Fig. 3(d)] wave vectors. The mode has the character of a DE mode, although the first and second terms of Eq. (13), associated with the wave vector $q_{1}$, also have a significant impact on the mode shape, which results in the sinusoidal character of these profiles. Their contribution is enhanced when the surface anisotropy is present. The $m_{x}$ component is larger than the $m_{z}$ component, but the precession is less elliptical than at $k=0$. For both the FBC and SSA cases, where the boundary conditions are identical on both surfaces, the mode profiles for opposite wave vectors are mirror images of each other. However, this is not true for OSA, as the mode profiles differ between negative and positive wave vectors. For negative wave vectors [Fig. 3(c)], the contributions from the first and second terms in Eq. (13) are significantly stronger for both the $m_{x}$ and $m_{z}$ components.
### III.3 Analysis of thickness dependence
In the next step, we present a detailed analysis of the impact of the surface anisotropy on AC formation. Firstly, we study the effect of the film thickness $L$ on the AC size $\Delta f_{\mathrm{AC}}$ in four cases—FBC [Fig. 4(a)], SSA [Fig. 4(b)], and OSA for both negative [Fig. 4(c)] and positive [Fig. 4(d)] wave vector $k$. In general, an increase of the film thickness results in an increase in the number of ACs. This phenomenon is well explained by the frequency limit criterion. The thickness has no impact on the maximum DE frequency $f_{k\to\infty}^{\mathrm{DE}}$ [Eq. (21)]. In contrast, the formula for the PSSW frequency $f_{k=0}^{n}$ [Eq. (22)] includes the thickness in the denominator; thus, an increase of thickness results in a decrease of frequency. This allows a higher number of PSSW modes to satisfy the frequency limit criterion, resulting in more ACs. Another relevant effect is that the AC size decreases with increasing thickness.
Figure 4 shows that the rate of decrease of the AC size depends on the boundary conditions and on the parity of the AC number. In the FBC system [Fig. 4(a)], the AC size decreases rapidly, but much faster for even-numbered ACs than for odd-numbered ACs. In the case of SSA [Fig. 4(b)], the rate of decrease for odd-numbered ACs is slightly smaller, but for even-numbered ACs the change is significant: their decrease is much slower than that of the odd-numbered ACs. In the OSA system [Figs. 4(c,d)], the rate of decrease is similar across all ACs and comparable to that of the even-numbered ACs in the SSA system. This parity-dependent effect originates from the symmetry of the modes and of the boundary conditions. Due to the dominant contribution of the $k$-dependent term in the magnetization profile shape, the DE mode has a symmetry closer to the odd-numbered PSSW modes, which are connected with the odd-numbered ACs. In the case of FBC, there is no additional source of symmetry breaking and, therefore, the odd-numbered ACs are larger. On the other hand, SSA causes a symmetric disturbance of all modes, primarily affecting the dynamic magnetization amplitude in close proximity to the surface. Odd-numbered PSSW modes have opposite amplitudes on the opposite boundaries; therefore, the effect of the anisotropy on the mode symmetry cancels out. In contrast, both the DE mode and the even-numbered PSSW modes exhibit the same amplitude on the opposite surfaces, so the anisotropy breaks the symmetry of these modes and, as a result, they induce larger ACs. In the case of OSA, the asymmetry of the anisotropy generates an asymmetry in the mode profiles, leading to large ACs in all cases. An explanation based on a simplified model of mode profiles is provided in Appendix B.
The final effect is present only in the OSA system in the positive wave vector $k$ range [Fig. 4(d)]: a local minimum of the AC size as a function of thickness. Interestingly, this effect occurs only for odd-numbered ACs. Upon analyzing this effect, one may ask whether this local minimum reaches zero, or in other words, whether there exists a critical film thickness for which the AC does not occur, i.e., a mode crossing is present. Obviously, a numerical study of the AC size cannot provide a definitive answer, and we were not able to derive one from the analytical model. Nevertheless, the analysis of the mode profiles can resolve this issue.
Figure 5: (a) The AC1 size $\Delta f_{\mathrm{AC1}}$ as a function of film thickness $L$ for the system with OSA for positive wave vector $k$—a close-up of Fig. 4(d) around the AC1 local minimum in the small-thickness range. Inset plots show the magnetization profiles of the first (orange line) and second (green line) mode at $k_{\mathrm{AC1}}$ for a thickness of 39 nm (left) and 46 nm (right). (b) Dispersion relation of the three lowest modes for the system with OSA for positive wave vector $k$ for a thickness of 42.4 nm. Inset in the bottom-right corner: close-up of AC1 with three wave vectors marked—$k_{\mathrm{L}}=4.853\times 10^{6}$ rad/m, $k_{\mathrm{AC}}=5.033\times 10^{6}$ rad/m, and $k_{\mathrm{R}}=5.2\times 10^{6}$ rad/m. Inset at top: magnetization profiles of the first (orange line) and second (green line) mode at $k_{\mathrm{L}}$ (left), $k_{\mathrm{AC}}$ (center), and $k_{\mathrm{R}}$ (right). The plots present results of numerical simulations.
The close-up of the local minimum of AC1 [Fig. 4(d), solid blue line] is shown in Fig. 5(a). In this case, the thickness step in the simulations was 0.2 nm. The minimum value of $\Delta f_{\mathrm{AC1}}=1.33$ MHz was obtained for a thickness of 42.4 nm. Consider the magnetization profiles of the first and second modes at the wave vector $k_{\mathrm{AC}}$ [the inset plots in Fig. 5(a)] for thicknesses smaller (39 nm, left plot) and larger (46 nm, right plot) than the thickness of the AC1 size minimum. In both cases, the mode profiles are very similar, indicating a superposition of the DE and first PSSW modes. Most importantly, however, the modes are interchanged. For $L=39$ nm, the lower frequency mode (orange line) has a higher amplitude at the bottom of the film, while for $L=46$ nm the higher amplitude is at the top of the film. The higher frequency mode (green line) demonstrates the opposite trend. A detailed analysis of the profiles indicates that this behavior accompanies each local minimum of $\Delta f_{\mathrm{AC}}$, i.e., the mode profiles at $k_{\mathrm{AC}}$ interchange. According to our analysis, this indicates the existence of a critical thickness value $L_{\mathrm{C}}$ where a crossing occurs instead of an AC, i.e., where the gap between the first and second mode is absent. This observation suggests the possible occurrence of an accidental degeneracy in the system [40, 41], meaning that there are two solutions with the same values of wave vector and frequency.
Figure 6: (a) Dispersion relation of the lowest six modes of the system with SSA for $K_{\mathrm{s}}=2500~\mathrm{\mu J/m^{2}}$. (b) Frequency difference between the neighboring modes of the system with SSA presented in (a). The $x$-axis is on a logarithmic scale. (c-e) The AC size $\Delta f_{\mathrm{AC}}$ as a function of the surface anisotropy $K_{\mathrm{s}}$ for the system with (c) SSA and OSA for (d) negative and (e) positive wave vector $k$. Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines, and second-order ACs with dotted lines. The $y$-axis is on a logarithmic scale. The plots present results of numerical simulations.
Another observation concerns the system with the lowest value of $\Delta f_{\mathrm{AC1}}$, found for a thickness of 42.4 nm. Its dispersion relation is shown in Fig. 5(b). AC1 is not visible in the full dispersion and it remains too small to be seen even in a close-up of its vicinity (inset plot in the lower right corner). To study the mode profiles in the vicinity of AC1, we chose three wave vectors: $k_{\mathrm{AC}}=5.033\times 10^{6}$ rad/m, $k_{\mathrm{L}}=4.853\times 10^{6}$ rad/m, and $k_{\mathrm{R}}=5.2\times 10^{6}$ rad/m. The mode profiles are shown in the inset plot at the top of Fig. 5(b). For $k_{\mathrm{AC}}$ (middle plot), the profiles are similar to the case of $L=39$ nm. This suggests that the critical value of the thickness $L_{\mathrm{C}}>42.4$ nm. In the case of $k_{\mathrm{L}}$ (left plot), the profile of the first mode (orange line) has the character of a DE mode with a small amplitude reduction at the bottom due to the surface anisotropy. The second mode (green line) has the character of the first PSSW mode, with a slightly larger amplitude at the bottom than at the top. The modes at $k_{\mathrm{R}}$ (right plot) have the same character as the modes at $k_{\mathrm{L}}$, but their order is reversed. This clearly shows that far from the AC (where $f_{2}-f_{1}\gg\Delta f_{\mathrm{AC1}}$), the modes have the same character on both sides of the AC, as if the interaction between them were negligible. It is worth noting that this interchange is not as clear when $\Delta f_{\mathrm{AC}}$ is relatively large. In that case, the intermixing of the wave vector dependence with the short distance between the ACs relative to their size leads to a significant change in the mode profiles.
### III.4 Analysis of surface anisotropy constant dependence
The analysis presented above was done for the case where the surface anisotropy constant $K_{\mathrm{s}}$ has a negative value, resulting in a partial pinning condition for the out-of-plane dynamic component of the magnetization. Now we consider the case where $K_{\mathrm{s}}$ is positive, so that the magnetization amplitude close to the surface is enhanced. The dispersion relation for the system with SSA for $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=2500~\mathrm{\mu J/m^{2}}$ is shown in Fig. 6(a). The small-$k$ range is comparable to the case of negative $K_{\mathrm{s}}$. However, at about $k=10^{7}$ rad/m, the DE mode reaches a local maximum at about 23 GHz and acquires a negative group velocity. This effect is analogous to the effect of a volume perpendicular magnetic anisotropy [42]. On its way, the DE mode produces additional ACs, which did not occur in the case of FBC and negative $K_{\mathrm{s}}$. The frequency difference between the adjacent modes [Fig. 6(b)] shows that additional ACs are present for the first, second, and third PSSW modes. These ACs are marked with the letter 'x'. An AC5 is also present; however, it is not related to the AC5 occurring for negative anisotropy, and it is therefore also marked with 'x'.
Figure 7: The AC size $\Delta f_{\mathrm{AC}}$ as a function of the external magnetic field $B_{0}$ for the system with (a) FBC, (b) SSA, and OSA for (c) negative and (d) positive wave vector $k$. Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines. The $y$-axis is on a logarithmic scale. The plots present results of numerical simulations.
Next, we study the AC size as a function of the surface anisotropy constant $K_{\mathrm{s}}$ for the case of SSA [Fig. 6(c)] and OSA for negative [Fig. 6(d)] and positive [Fig. 6(e)] wave vector $k$. We calculated it numerically in the range from $-3000~\mathrm{\mu J/m^{2}}$ to $3000~\mathrm{\mu J/m^{2}}$ with a step of $100~\mathrm{\mu J/m^{2}}$. Almost all curves have a minimum similar to the one present in Fig. 4(d). A detailed analysis of the mode profiles agrees with the previous observation—in each case, the mode profiles at $k_{\mathrm{AC}}$ interchange, so we expect that for a critical value of $K_{\mathrm{s}}$ a crossing between the modes should occur. The position of the minimum depends on the AC parity. AC2 has the smallest size at $K_{\mathrm{s}}=0$ (however, we expect the critical value to be very low, i.e., $|K_{\mathrm{s}}^{\mathrm{critical}}|<50~\mathrm{\mu J/m^{2}}$). The odd-numbered ACs (AC1 and AC3) have the smallest size for positive $K_{\mathrm{s}}$ in the systems with SSA and with OSA at negative $k$, while for the system with OSA at positive $k$ the smallest value occurs for negative $K_{\mathrm{s}}$. Interestingly, for the system with OSA, $K_{\mathrm{s}}^{\mathrm{critical}}$ of the same AC differs between the positive and negative wave vector ranges, which means that the AC can be present on only one side of the dispersion relation while a crossing is present on the opposite side. In general, an AC tends to have a larger size for positive surface anisotropy than for negative surface anisotropy of the same magnitude. In addition, for a wide range of positive surface anisotropy, additional ACs (marked with the letter 'x') occur in all systems. Their source lies in the negative-slope range of the dispersion relation, as discussed above. Their minima also follow the rule of the mode profile interchange, so we expect these minima to go to zero as well.
### III.5 Analysis of external magnetic field dependence
Finally, the effect of the external magnetic field $B_{0}$ on the AC size is shown in Fig. 7. The ACs were calculated in the field range between 10 and 500 mT with a step of 10 mT. Almost all ACs grow with increasing external field. This observation correlates with the fact that $k_{\mathrm{AC}}$ also increases with the external field. The $k$-dependent terms in the DE mode profile [Eq. (13)] then give a stronger contribution and the profile asymmetry is increased, resulting in a stronger interaction with the PSSW modes and a larger AC size. The most remarkable example is AC4 in the FBC system, which increases by a factor of 4.06 in the investigated field range. On the other hand, AC5 in the SSA system increases by only 3% in the same range. Interestingly, in the OSA system, a local minimum for AC3 occurs in the positive wave vector range. The lowest detected value is 0.94 MHz at 280 mT. This minimum also originates from the mode profile interchange at $k_{\mathrm{AC}}$, indicating the closing of the AC gap. Towards lower fields, a local maximum with an AC size of 27.4 MHz is present at 100 mT, while towards higher fields the AC size increases up to 120 MHz at the upper limit of the study, 500 mT. The results show that the external field provides a simple way to control the AC size, and it is the most accessible control parameter from the experimental point of view.
## IV Conclusions
In this article, we provide a comprehensive investigation of the SW dynamics in a ferromagnetic film in the DE geometry in the presence of surface anisotropy, using an analytical model and numerical simulations. We compare three different cases: free boundary conditions, symmetrical surface anisotropy, and one-sided surface anisotropy. We show that the surface anisotropy significantly increases the size of the ACs between the DE and PSSW modes. In the case of OSA, the mirror symmetry breaking leads to a dispersion relation that is asymmetric with respect to the wave vector $k$, which particularly affects the AC size. The surface anisotropy also has a strong influence on the shape of the mode profiles.
We have studied in detail the impact of various parameters (i.e., film thickness, surface anisotropy constant, and external magnetic field) on the AC size. In general, the ACs shrink with increasing film thickness or decreasing external magnetic field. The parity of the AC also has a strong influence on its size. For a large positive surface anisotropy constant, the mode of DE character has a non-monotonic dispersion relation, which leads to the appearance of additional ACs at large wave vectors. In most cases, an increase in anisotropy leads to an increase in AC size. Interestingly, we found that under certain conditions the AC can close and turn into a crossing. This phenomenon, known as accidental degeneracy, occurs for some particular ACs when the value of the surface anisotropy constant, the layer thickness, or the external magnetic field is changed. In the system with SSA, it occurs for both negative and positive wave vectors, while in the system with OSA it occurs only on one side of the dispersion relation for a given set of parameters. The transition through the accidental degeneracy point in any parameter space is always associated with an exchange of the order of the mode profiles in the AC region. It is worth noting that the results shown in the paper are calculated for typical material parameters of CoFeB, but the presented effects are universal and should also occur in different materials.
The presence of surface anisotropy in magnetic thin films is ubiquitous. It is often considered a detrimental feature, but it can also be an essential property. The ability to control the anisotropy by voltage, as well as to control its effects with an external magnetic field, gives an additional advantage. Moreover, surface anisotropy of different strengths on opposite surfaces provides a simple way to induce nonreciprocity in the structure. We believe that surface anisotropy can be exploited in magnonic devices where asymmetrical transmission or the ability to control the propagation of SWs is a fundamental property.
###### Acknowledgements.
K.S. and M.K. acknowledge the financial support from National Science Centre,
Poland, grants no. UMO-2020/39/I/ST3/02413 and UMO-2021/41/N/ST3/04478. The
research leading to these results has received funding from the Norwegian
Financial Mechanism 2014-2021 project no. UMO-2020/37/K/ST3/02450. K.S.
acknowledges the financial support from the Foundation for Polish Science
(FNP). The dataset for this manuscript is available at https://doi.org/10.5281/zenodo.8382924.
## Appendix A Relation between wave vectors $k$ and $q_{1}$
Figure 8: Wave vector $\tilde{q}_{1}=q_{1}L$ as a function of wave vector $k$ for the six lowest modes of a CoFeB film of thickness $L=100$ nm with (a) FBC, (b) SSA with $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700~\mathrm{\mu J/m^{2}}$, and (c,d) OSA with $K_{\mathrm{s}}^{\mathrm{t}}=0$ and $K_{\mathrm{s}}^{\mathrm{b}}=-1500~\mathrm{\mu J/m^{2}}$ for (c) negative and (d) positive wave vector $k$ in the external magnetic field $\mu_{0}H_{0}=50$ mT. Horizontal dashed black lines represent the values $q_{1}=n\pi/L$ for $n=1,2,3,\ldots$ The plots present analytical results.
Figure 8 shows the wave vector $\tilde{q}_{1}=q_{1}L$ as a function of the wave vector $k$ for the three cases studied in the manuscript—FBC [Fig. 8(a)], SSA with $K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700~\mathrm{\mu J/m^{2}}$ [Fig. 8(b)], and OSA with $K_{\mathrm{s}}^{\mathrm{t}}=0$ and $K_{\mathrm{s}}^{\mathrm{b}}=-1500~\mathrm{\mu J/m^{2}}$ for negative [Fig. 8(c)] and positive [Fig. 8(d)] wave vector $k$. In the low wave vector range (up to about $10^{7}$ rad/m), the plots are very similar to the dispersion relations shown in Figs. 2(a-d), including the presence of the gaps between the modes. In the case of FBC, the PSSW modes are placed exactly at $q_{1}=n\pi/L$, while for the DE mode the value of $q_{1}$ increases and produces ACs with the PSSW modes exactly as in the dispersion relation. Nevertheless, the large value of $q_{1}$ for the DE mode is not decisive for the shape of the mode profile, since the coefficients in Eq. (13) associated with $q_{1}$ give a smaller contribution than those associated with $k$ (although this contribution is not negligible). In the case of SSA and OSA, the values of $q_{1}$ of the PSSW modes are larger than $n\pi/L$. Clearly, a larger value of $q_{1}$ results in a larger frequency of the PSSW modes according to Eq. (10).
In the large-$k$ range in the case of FBC, the values of $q_{1}$ return to $n\pi/L$, but this time with $n$ starting from 0. To achieve this, the value of $q_{1}$ of all modes in the range of the DE mode decreases in the dipole-exchange regime of the wave vector $k$. In the case of SSA and OSA, the values of $q_{1}$ are again larger than $n\pi/L$, but the difference is much larger than in the small-$k$ range.
## Appendix B Toy model of interaction between modes
Figure 9: The absolute value of the overlapping integral $I$ as a function of film thickness $L$ for the system with (a) FBC, (b) SSA, and OSA for (c) negative and (d) positive wave vector $k$. Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines. The $y$-axis is on a logarithmic scale.
The interaction between the modes can be explained using a simplified model of the mode profiles. In the case of FBC at $k=0$, the modes form a basis of cosine functions

$m_{n}=A_{n}\cos{\left(\frac{n\pi}{L}\left(z-\frac{L}{2}\right)\right)},$ (25)

where $m_{0}$ represents the DE mode and $m_{n>0}$ represents the $n$-th order PSSW modes. $A_{n}$ is a normalization constant which ensures that $\int_{-L/2}^{L/2}m_{n}^{2}\mathrm{d}z=1$.
Assume that in the regime of small $k$, the PSSW modes remain unchanged with
the change of the wave vector $k$, so their profiles are represented by Eq.
(25). On the other hand, the DE mode is described by the function
$m_{0}(k)=A_{0}e^{kz}.$ (26)
In the presence of negative surface anisotropy, the PSSW modes are “squeezed” to satisfy the boundary conditions. Due to this effect, their profiles are modified such that $q_{1}=(n+p_{n})\pi/L$, where $p_{n}$ is the relative shift of the wave vector. In the case of SSA, the mode profile is modified in the following way:

$m_{n}=A_{n}\cos{\left(\frac{(n+p_{n})\pi}{L}\left(z-\frac{n}{n+p_{n}}\frac{L}{2}\right)\right)}.$ (27)

In the case of OSA, the mode profile of the PSSW modes is represented by the function

$m_{n}=A_{n}\cos{\left(\frac{(n+p_{n})\pi}{L}\left(z-\frac{L}{2}\right)\right)}.$ (28)

We assume that the change in the DE mode due to the surface anisotropy is negligible. Based on the results in Fig. 8, we assume that $p_{n}$ has a constant value of 0.03 for all PSSW modes and all thicknesses.
The strength of the interaction between the modes is described by the overlapping integral
$I_{ij}=\int_{-L/2}^{L/2}m_{i}m_{j}\mathrm{d}z.$ (29)
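A minimal numerical sketch of this toy model (our own implementation; the value of $k_{\mathrm{AC}}$ and the grid resolution are assumed, illustrative choices) evaluates Eq. (29) between the DE profile of Eq. (26) and the PSSW profiles of Eqs. (25), (27), and (28):

```python
import numpy as np

def overlap(L, k, n, case='FBC', p=0.03):
    """|Overlapping integral|, Eq. (29), between the DE mode and n-th PSSW mode.
    Use k < 0 for the negative wave vector branch (relevant for OSA)."""
    z = np.linspace(-L/2, L/2, 2001)
    dz = z[1] - z[0]
    if case == 'FBC':
        m_n = np.cos(n*np.pi/L*(z - L/2))                     # Eq. (25)
    elif case == 'SSA':
        m_n = np.cos((n + p)*np.pi/L*(z - n/(n + p)*L/2))     # Eq. (27)
    else:  # 'OSA'
        m_n = np.cos((n + p)*np.pi/L*(z - L/2))               # Eq. (28)
    m_0 = np.exp(k*z)                                         # Eq. (26), DE mode
    for m in (m_n, m_0):                                      # normalize to 1
        m /= np.sqrt(np.sum(m**2)*dz)
    return abs(np.sum(m_0*m_n)*dz)                            # Eq. (29)

# e.g. the parity effect for FBC at an assumed k_AC ~ 5e6 rad/m:
for n in (1, 2, 3, 4):
    print(n, overlap(100e-9, 5e6, n))   # odd n give larger overlap than even n
```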
Figure 9 shows the overlapping integral between the DE mode and the $n$-th order PSSW mode at $k_{\mathrm{AC}_{n}}$ as a function of the layer thickness $L$. The model qualitatively reproduces the behavior shown in Fig. 4, showing that the overlapping integral is connected with the AC size. Firstly, there is an identical dependence on the PSSW mode parity. In the case of FBC [Fig. 9(a)], the overlapping integral has a larger value for the functions representing odd-numbered PSSW modes than for even-numbered PSSW modes. In the case of SSA [Fig. 9(b)], the overlapping integral for even-numbered PSSW modes grows above that for odd-numbered PSSW modes, which increases only slightly compared to the FBC case. In the case of OSA for negative wave vectors $k$ [Fig. 9(c)], the value of the overlapping integral is similar for all modes. In the case of positive wave vectors $k$ [Fig. 9(d)], we have successfully reproduced the presence of the minima for odd-numbered PSSW modes shown in Fig. 4(d). The positions of the minima—at 48, 104, 154, and 204 nm for the first, third, fifth, and seventh PSSW modes, respectively—are in good agreement with the positions of the minima in Fig. 4(d) (42, 98, 148, and 200 nm, respectively).
## References
* Chumak _et al._ [2015] A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Magnon spintronics, Nat. Phys. 11, 453 (2015).
* Damon and Eshbach [1961] R. Damon and J. Eshbach, Magnetostatic modes of a ferromagnet slab, J. Phys. Chem. Solids 19, 308 (1961).
* Grünberg _et al._ [1982] P. Grünberg, M. G. Cottam, W. Vach, C. Mayr, and R. E. Camley, Brillouin scattering of light by spin waves in thin ferromagnetic films (invited), J. Appl. Phys. 53, 2078 (1982).
* Zhang _et al._ [2021] Z. Zhang, H. Yang, Z. Wang, Y. Cao, and P. Yan, Strong coupling of quantized spin waves in ferromagnetic bilayers, Phys. Rev. B 103, 104420 (2021).
* Gladii _et al._ [2016] O. Gladii, M. Haidar, Y. Henry, M. Kostylev, and M. Bailleul, Frequency nonreciprocity of surface spin wave in permalloy thin films, Phys. Rev. B 93, 054430 (2016).
* Tacchi _et al._ [2019] S. Tacchi, R. Silvani, G. Carlotti, M. Marangolo, M. Eddrief, A. Rettori, and M. G. Pini, Strongly hybridized dipole-exchange spin waves in thin Fe-N ferromagnetic films, Phys. Rev. B 100, 104406 (2019).
* Wang _et al._ [2022] H. Wang, W. He, R. Yuan, Y. Wang, J. Wang, Y. Zhang, I. Medlej, J. Chen, G. Yu, X. Han, J.-P. Ansermet, and H. Yu, Hybridized propagating spin waves in a CoFeB/IrMn bilayer, Phys. Rev. B 106, 064410 (2022).
* Dreyer _et al._ [2021] R. Dreyer, N. Liebing, E. R. J. Edwards, A. Müller, and G. Woltersdorf, Spin-wave localization and guiding by magnon band structure engineering in yttrium iron garnet, Phys. Rev. Mater. 5, 064411 (2021).
* Song _et al._ [2021] W. Song, X. Wang, C. Jia, X. Wang, C. Jiang, D. Xue, and G. Chai, Nonreciprocal emergence of hybridized magnons in magnetic thin films, Phys. Rev. B 104, 014402 (2021).
* Vaňatka _et al._ [2021] M. Vaňatka, K. Szulc, O. Wojewoda, C. Dubs, A. V. Chumak, M. Krawczyk, O. V. Dobrovolskiy, J. W. Kłos, and M. Urbánek, Spin-wave dispersion measurement by variable-gap propagating spin-wave spectroscopy, Phys. Rev. Appl. 16, 054033 (2021).
* Rana and Otani [2019] B. Rana and Y. Otani, Towards magnonic devices based on voltage-controlled magnetic anisotropy, Commun. Phys. 2, 90 (2019).
* Wang _et al._ [2017] Q. Wang, A. V. Chumak, L. Jin, H. Zhang, B. Hillebrands, and Z. Zhong, Voltage-controlled nanoscale reconfigurable magnonic crystal, Phys. Rev. B 95, 134433 (2017).
* Rana and Otani [2018] B. Rana and Y. Otani, Voltage-controlled reconfigurable spin-wave nanochannels and logic devices, Phys. Rev. Appl. 9, 014033 (2018).
* Ibrahim _et al._ [2016] F. Ibrahim, H. X. Yang, A. Hallal, B. Dieny, and M. Chshiev, Anatomy of electric field control of perpendicular magnetic anisotropy at Fe/MgO interfaces, Phys. Rev. B 93, 014429 (2016).
* De Wames and Wolfram [1970] R. E. De Wames and T. Wolfram, Dipole-Exchange Spin Waves in Ferromagnetic Films, J. Appl. Phys. 41, 987 (1970).
* Wolfram [1970] T. Wolfram, Magnetostatic Surface Waves in Layered Magnetic Structures, J. Appl. Phys. 41, 4748 (1970).
* Camley and Grimsditch [1980] R. E. Camley and M. Grimsditch, Brillouin scattering from magnons in ferromagnetic thin films of polycrystalline iron, Phys. Rev. B 22, 5420 (1980).
* Kabos _et al._ [1984] P. Kabos, W. D. Wilber, C. E. Patton, and P. Grünberg, Brillouin light scattering study of magnon branch crossover in thin iron films, Phys. Rev. B 29, 6396 (1984).
* Henry _et al._ [1972] R. Henry, S. D. Brown, P. E. Wigen, and P. J. Besser, Magnetoexchange branch repulsion in thin single-crystal disks of yttrium iron garnet, Phys. Rev. Lett. 28, 1272 (1972).
* Arias and Mills [2001] R. Arias and D. L. Mills, Theory of spin excitations and the microwave response of cylindrical ferromagnetic nanowires, Phys. Rev. B 63, 134439 (2001).
* Rychły _et al._ [2018] J. Rychły, V. S. Tkachenko, J. W. Kłos, A. Kuchko, and M. Krawczyk, Spin wave modes in a cylindrical nanowire in crossover dipolar-exchange regime, J. Phys. D: Appl. Phys. 52, 075003 (2018).
* Kalinikos and Slavin [1986] B. A. Kalinikos and A. N. Slavin, Theory of dipole-exchange spin wave spectrum for ferromagnetic films with mixed exchange boundary conditions, J. Phys. C: Solid State Phys. 19, 7013 (1986).
* Kalinikos _et al._ [1990] B. A. Kalinikos, M. P. Kostylev, N. V. Kozhus, and A. N. Slavin, The dipole-exchange spin wave spectrum for anisotropic ferromagnetic films with mixed exchange boundary conditions, J. Phys. Condens. Matter 2, 9861 (1990).
* Akhiezer _et al._ [1968] A. Akhiezer, V. Bar’yakhtar, and S. Peletminskii, _Spin waves_ (North-Holland Pub. Co., Amsterdam, 1968).
* Guslienko and Slavin [2011] K. Y. Guslienko and A. N. Slavin, Magnetostatic Green’s functions for the description of spin waves in finite rectangular magnetic dots and stripes, J. Magn. Magn. Mater. 323, 2418 (2011).
* Guslienko _et al._ [2002] K. Y. Guslienko, S. O. Demokritov, B. Hillebrands, and A. N. Slavin, Effective dipolar boundary conditions for dynamic magnetization in thin magnetic stripes, Phys. Rev. B 66, 132402 (2002).
* Gubbiotti _et al._ [2004] G. Gubbiotti, M. Conti, G. Carlotti, P. Candeloro, E. D. Fabrizio, K. Y. Guslienko, A. Andre, C. Bayer, and A. N. Slavin, Magnetic field dependence of quantized and localized spin wave modes in thin rectangular magnetic dots, J. Phys. Condens. Matter 16, 7709 (2004).
* Bayer _et al._ [2005] C. Bayer, J. Jorzick, B. Hillebrands, S. O. Demokritov, R. Kouba, R. Bozinoski, A. N. Slavin, K. Y. Guslienko, D. V. Berkov, N. L. Gorn, and M. P. Kostylev, Spin-wave excitations in finite rectangular elements of ${\mathrm{Ni}}_{80}{\mathrm{Fe}}_{20}$, Phys. Rev. B 72, 064427 (2005).
* Guslienko and Slavin [2000] K. Y. Guslienko and A. N. Slavin, Spin-waves in cylindrical magnetic dot arrays with in-plane magnetization, J. Appl. Phys. 87, 6337 (2000).
* Kharlan _et al._ [2019] J. Kharlan, P. Bondarenko, M. Krawczyk, O. Salyuk, E. Tartakovskaya, A. Trzaskowska, and V. Golub, Standing spin waves in perpendicularly magnetized triangular dots, Phys. Rev. B 100, 184416 (2019).
* Zhou _et al._ [2021] X. Zhou, E. V. Tartakovskaya, G. N. Kakazei, and A. O. Adeyeye, Engineering spin wave spectra in thick ${\mathrm{Ni}}_{80}{\mathrm{Fe}}_{20}$ rings by using competition between exchange and dipolar fields, Phys. Rev. B 104, 214402 (2021).
* Kakazei _et al._ [2004] G. N. Kakazei, P. E. Wigen, K. Y. Guslienko, V. Novosad, A. N. Slavin, V. O. Golub, N. A. Lesnik, and Y. Otani, Spin-wave spectra of perpendicularly magnetized circular submicron dot arrays, Appl. Phys. Lett. 85, 443 (2004).
* Tartakovskaya _et al._ [2016] E. V. Tartakovskaya, M. Pardavi-Horvath, and R. D. McMichael, Spin-wave localization in tangentially magnetized films, Phys. Rev. B 93, 214436 (2016).
* Tartakovskaya [2005] E. V. Tartakovskaya, Quantized spin-wave modes in long cylindrical ferromagnetic nanowires in a transverse external magnetic field, Phys. Rev. B 71, 180404 (2005).
* Harms and Duine [2022] J. Harms and R. Duine, Theory of the dipole-exchange spin wave spectrum in ferromagnetic films with in-plane magnetization revisited, J. Magn. Magn. Mater. 557, 169426 (2022).
* Arias [2016] R. E. Arias, Spin-wave modes of ferromagnetic films, Phys. Rev. B 94, 134408 (2016).
* Rado and Weertman [1959] G. Rado and J. Weertman, Spin-wave resonance in a ferromagnetic metal, J. Phys. Chem. Solids 11, 315 (1959).
* Johnson _et al._ [1996] M. T. Johnson, P. J. H. Bloemen, F. J. A. den Broeder, and J. J. de Vries, Magnetic anisotropy in metallic multilayers, Rep. Prog. Phys. 59, 1409 (1996).
* Gurevich and Melkov [1996] A. G. Gurevich and G. A. Melkov, _Magnetization Oscillations and Waves_ (CRC Press, London, 1996).
* Herring [1937] C. Herring, Accidental degeneracy in the energy bands of crystals, Phys. Rev. 52, 365 (1937).
* Huang _et al._ [2011] X. Huang, Y. Lai, Z. H. Hang, H. Zheng, and C. T. Chan, Dirac cones induced by accidental degeneracy in photonic crystals and zero-refractive-index materials, Nat. Mater. 10, 582 (2011).
* Banerjee _et al._ [2017] C. Banerjee, P. Gruszecki, J. W. Klos, O. Hellwig, M. Krawczyk, and A. Barman, Magnonic band structure in a Co/Pd stripe domain system investigated by Brillouin light scattering and micromagnetic simulations, Phys. Rev. B 96, 024421 (2017).
# Constant Chemical Potential-Quantum Mechanical-Molecular Dynamics
simulations of the Graphene-electrolyte double layer
Nicodemo Di Pasquale (corresponding author) Department of Chemical Engineering, Brunel University London, Uxbridge, UB8 3PH, United Kingdom Aaron R. Finney Department of Chemical Engineering,
University College London, London, WC1E 7JE, United Kingdom Joshua Elliott
Department of Chemical Engineering, University of Manchester, Manchester, M13
9PL, United Kingdom Diamond Light Source, Harwell Science and Innovation
Park, Didcot, Oxfordshire OX11 8UQ, United Kingdom Paola Carbone Department
of Chemical Engineering, University of Manchester, Manchester, M13 9PL, United
Kingdom Matteo Salvalaglio Department of Chemical Engineering, University
College London, London, WC1E 7JE, United Kingdom
(October 2021)
###### Abstract
We present the coupling of two frameworks—the pseudo-open-boundary simulation method known as constant chemical potential molecular dynamics (C$\mu$MD), combined with quantum mechanical-molecular dynamics (QMMD) calculations—to describe the properties of graphene electrodes in contact with electrolytes.
The resulting C$\mu$QMMD model was applied to three ionic solutions (LiCl, NaCl, and KCl in water) at bulk concentrations ranging from 0.5 M up to 6 M, in contact with a charged graphene electrode. The approach described here provides a simulation protocol to control the concentration of the electrolyte solutions while including the effects of a fully polarizable electrode surface. Thanks to this coupling, we are able to accurately model both the electrode and the solution side of the double layer, and to provide a thorough analysis of the properties of electrolytes at charged interfaces, such as the screening ability of the electrolyte and the electrostatic potential profile. We also report the calculation of the integral electrochemical double layer capacitance over the whole range of concentrations analysed for each ionic species, while the QM simulations provide access to the differential and integral quantum capacitance. We highlight how subtle features, such as the adsorption of potassium at the interface or the tendency of the ions to form clusters, emerge from our simulations and contribute to explaining the ability of graphene to store charge, and we discuss the implications for desalination.
## 1 Introduction
Interest in graphene-based devices has grown in recent years, thanks to the versatility and physical characteristics of this new material, in particular for applications in which it is in contact with an electrolyte solution. The use of nanoporous graphene as a membrane for water desalination [1, 2] is one important example: the presence of pores of similar size to the electrolyte ions allows the selective passage of water through the membrane. Combined with the atomic-scale thickness of graphene, this can lead to desalination membranes with higher performance than common polymer-based ones [3]. Another promising, technologically relevant application is the use of graphene electrodes in electrochemical double layer (super)capacitor (EDLC) devices [4, 5, 6]. In fact, graphene [7, 8, 9, 10], porous activated carbon [11], and carbon nanotube [12, 13] electrodes potentially offer relatively high charge storage capacity and a favourable specific energy to power ratio, due to rapid charge-discharge cycling [8] controlled by changes of an applied potential, together with lifetimes that can reach millions of cycles [11].
Typically, charge storage at carbonaceous electrodes is a non-faradaic
process, where mobile ionic species accumulate at the interface between the
electrode and the liquid phase. An important class of systems of this kind,
which has gained lots of attention recently, is represented by cheap and easy-
to-prepare aqueous-based electrolytes in contact with a graphene electrode
[6]. Carbon-based EDLCs with aqueous-based electrolytes do not generally suffer from electrochemical degradation, can be non-toxic, and provide an attractive alternative solution to the problem of energy storage compared with traditional battery devices. Combined with a longer lifetime and high power density [14], these energy storage systems could be increasingly applied to power small electronic devices and for acceleration and braking in electric vehicles [5].
Several experimental works have been undertaken to understand the physicochemical properties of neutral and charged graphene interfaces in contact with electrolyte solutions and the nature of their charge storage capacity [15, 16, 17]. However, the delicate balance between hydration free energy and surface effects, which regulates the physisorption of ionic species at surfaces, has resulted in conflicting experimental findings (see [18] for a more detailed account). For instance, there are reports both supporting the
conclusion that the capacitance of graphene films is ion-independent [16], as
well as contrasting observations suggesting that basal capacitance is instead
ion-specific (with, for example, a greater propensity for Na+ and K+
adsorption over Li+ adsorption at negatively charged electrodes in the case of
group I cations)[17]. Atomic-scale defects in the graphitic surface, its
topography, dimensionality and chemical modifications are difficult to control
and have non-negligible effects on experimental measurements. As an example, mechanical cutting produces structural defects known as “dangling bonds”, which modify the measured capacitance of the sample [15, 19]. In this respect, a
model of the graphene interface and its interactions with an electrolyte
solution can exclude all the spurious effects coming from uncontrolled defects
and chemical modification of the surface. Molecular modelling and simulations
can help to improve understanding of the mechanisms involved in such complex
systems and guide the interpretation of experimental results.
Many key features of supercapacitive devices are underpinned by the properties
of the electrochemical double layer, and their responses to the charging of
the electrode. Gouy-Chapman theory [20, 21] describes the double layer as a
diffuse charged layer in the solution that compensates an applied surface
charge on the electrode. Modifications to this model include the adsorption of
counter-ions at the surface in the so-called Stern layer [22]. The development
of a mean-field theory based on the Poisson-Boltzmann lattice-gas model [23]
has shown that features not present in the Gouy-Chapman theory, such as steric
effects, ion correlations, and preferential adsorption [24, 25, 26] need to be
accounted for in order to correctly describe the interactions between the ions
and the electrode. Mechanistic insight into these kinds of effects and how they control charge storage can be gained by atomistic simulations of the
graphene/electrolyte interface; these also enable the evaluation of ensemble
properties, such as the free energy of adsorption of the ions at the interface
[27]. Furthermore, simulations can establish the effect of solution
concentration on ion accumulation at the electrode, their interfacial
structure, and dynamical properties.
In order to compare simulations with a macroscopic system, this adsorption
should ideally be modelled in the presence of bulk electroneutral solution
with fixed composition to ensure a constant driving force for the adsorption
at a charged surface. This can be achieved, for example, as shown by Finney et al. [28], who performed constant chemical potential MD (C$\mu$MD) simulations [29], which mimic open-boundary conditions. With C$\mu$MD, the authors simulated NaCl(aq) with concentrations
spanning $\sim 0.1-10$ M at graphite surfaces. Their results indicate that the
interface charge screening behaviour is a function of bulk solution
concentration, with a transition (at $\sim 1$M) from diffuse charge screening,
qualitatively consistent with the picture from simple mean field models, to a
complex multi-layered structuring that systematically either over or under
screens the surface potential. The multiple charged layers result from ion
finite-size effects, over-compensation of the surface charge by oppositely
charged ions closest to the surface, and non-idealities in solution, i.e.,
when the hypothesis of non-interaction between oppositely charged ions breaks down at large ion concentrations [30]. This last effect also has consequences for the conductance of the ions, which deviates from the prediction of the Nernst-Einstein equation [31].
Together with a constant driving force for ion adsorption from the bulk,
another important effect to consider in the description of such systems is the polarisation of the electrode induced by the adsorbing electrolytes [18].
Classical simulations typically model the non-bonded interactions between
atoms within the electrolyte and atoms belonging to the interface using
additive pairwise potentials such as the Lennard-Jones potential and Coulomb
interactions between fixed point atom charges. Polarisation can be introduced
using e.g., oscillating charge models, or by fitting short-range potentials to
binding energies obtained from ab initio methods [27, 32, 33]. However, these models may not accurately capture the complex many-body effects associated with charge polarization at the electrode-solution interface. Another way to
include polarisation in classical MD simulations is through the constant
potential method developed in [34]. This constant potential method has been
successfully deployed to describe the properties of the electrochemical double
layer of aqueous electrolytes and ionic liquids in contact with metal
electrodes such as Au and Cu. Despite its successes, one of the key
approximations of the constant potential method is that the electrode is fully
metallic and can perfectly screen charges, which is not the case for
(semimetallic) graphene [18].
On the other hand, a full Quantum Mechanical (QM) treatment of the
interactions between the electrolyte and the substrate is still unfeasible,
due to the length (tens of nm) and time (hundreds of ns) scales required for
modeling the effect of the aqueous electrolytes. However, while the full QM
model of the electrode/electrolyte system is out of reach, QM calculations can
be used to compute a set of atomic partial charges on the electrode in the
presence of the electrostatic potential arising from the position of the
electrolyte atoms. This is exactly the spirit of our QMMD scheme, where QM
calculations are coupled to MD simulations at fixed intervals of time
integration. As such, the surface atom partial charges within the classical
force field are updated on the fly. In a more recent development Machine
Learning models have recently proven to be a viable option in tuning the
surface polarization if the scope of the system becomes too large for QM
simulations. This is achieved by replacing the QM calculations with a Neural
Network (NN) model trained to reproduce results from a wide range of QM
calculations with varying distributions of electrolytes in solution. The NN
acts as a polarizable-like force field, combining fast classical MD
simulations with more accurate QM calculations of the interface polarization
[35].
The present work leverages the QMMD framework introduced in [18] and the C$\mu$MD method introduced in [29, 28]. The approach simultaneously captures surface
polarization and concentration effects that can modify the structure and
composition of the electrochemical double layer. We use the resulting
C$\mu$QMMD protocol to examine interfaces between aqueous alkali chloride
solutions at different concentrations with a graphene electrode surface,
elucidating complex interfacial structure, dynamics, and electrochemical
properties.
This paper is organized as follows: we first provide a brief overview of the
QMMD and C$\mu$MD protocols, pointing to the relevant literature for the
interested reader; we present the systems to which we apply the C$\mu$QMMD framework: a charged graphene electrode in contact with three different electrolyte solutions, NaCl(aq), LiCl(aq) and KCl(aq), at different concentrations.
We derive the electrical properties of the interface in terms of the screening
factor and electrical potential and calculate the total integral capacitance
of this system by deriving the quantum and electrical double layer
capacitance. Finally, we discuss the effects of complex solute speciation on
the performance of graphene-electrolyte devices and draw some conclusions
regarding this new proposed simulation scheme.
## 2 Computational Models
In order to capture the dynamic polarization of a charged graphene surface in
response to the evolving configuration of an electrolyte at a prescribed
concentration, we coupled the classical C$\mu$MD simulation to the electronic
structure theory calculations at regular time intervals. We will give a more
detailed account of both models (C$\mu$MD and QMMD) in the following sections,
while here we will only discuss their coupling.
A sketch of the sequence of operations involved is given in Figure 1. All the operations shown in Figure 1 are orchestrated by an in-house Python wrapper. During the MD time integration, performed with the GROMACS 2018.4 package [36], ion positions are passed to the Plumed software (v. 2.7) [37], patched with GROMACS, to compute the C$\mu$MD forces (see section 2.1 for more details). After the evolution of the atom positions, the final configuration of the electrolyte is extracted to compute the electrostatic potential. In turn, this latter quantity is used as input for the QM calculations performed with the DFTB+ software package [38]. From the QM results, the distribution of the charges on the graphene is extracted (see section 2.2 for more details) and used as input for the next iteration of the loop.
Figure 1: The computational workflow adopted in this work highlighting the two
“black boxes” (the MD software and the QM software) in the blue squares and
the operations included in the python wrapper (red squares).
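To make the control flow of Figure 1 concrete, the following is a minimal Python sketch of the coupling loop. All helper names (`run_md_segment`, `build_external_potential`, `run_dftb_point`) are hypothetical stand-ins for the wrapper's calls to GROMACS/PLUMED and DFTB+, and their bodies are stubs; this is a sketch of the protocol, not the actual wrapper.

```python
# Minimal sketch of the C-mu-QMMD coupling loop of Figure 1.
# The helper names are hypothetical stand-ins for the in-house wrapper's
# calls to GROMACS/PLUMED (classical side) and DFTB+ (quantum side).

N_CARBONS = 336          # graphene atoms in this work
SEGMENT_PS = 5.0         # QM/MD coupling interval used here
N_SEGMENTS = 1000        # number of 5 ps segments (placeholder)

def run_md_segment(surface_charges, picoseconds):
    """Stub: advance the C-mu-MD trajectory with the current graphene
    partial charges and return the final electrolyte configuration."""
    return {"positions": [], "point_charges": []}

def build_external_potential(configuration):
    """Stub: convert the electrolyte atoms into the background point
    charges that enter the DFTB+ calculation."""
    return configuration

def run_dftb_point(external_potential):
    """Stub: SCC-DFTB single point followed by Mulliken analysis,
    returning one partial charge per surface carbon atom."""
    return [0.0] * N_CARBONS

charges = [0.0] * N_CARBONS          # start from an unpolarized sheet
for segment in range(N_SEGMENTS):
    config = run_md_segment(charges, SEGMENT_PS)
    charges = run_dftb_point(build_external_potential(config))
```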
### 2.1 C$\mu$MD Model
Figure 2: Example configuration from a C$\mu$QMMD simulation of KCl(aq) in
contact with graphene in this work projected onto simulation $x,z$ dimensions.
K+, Cl-, O of water and C of graphene are shown by the pink, cyan, red and
grey spheres. The blue lines highlight the C$\mu$QMMD control and reservoir
regions, which also indicate the simulation cell boundaries. An extended
vacuum region, around 8 nm in $z$, is truncated in the image.
The graphene electrode we considered is located at $z=0$ and is in contact
with an electrolyte slab of thickness 8 nm. A further 8 nm of vacuum separates
the system from its periodically repeating images. The electrolyte phase is
divided into three regions: the first region extends from the graphene electrode up to a distance of 4 nm. The second is the control region, in which the target ion concentration is maintained. The third is the reservoir region, which provides the reservoir of ions used to adjust the concentration of the electrolytes
in the other regions. Figure 2 provides an example of the set-up adopted in
this work, where we highlighted the different C$\mu$MD simulation cell
regions.
The control of the concentration of the ions in solution is obtained by
applying a force at the edge of the reservoir region according to a continuous
function of the form,
$F_{i}^{\mu}(z)=k_{i}(n_{i}^{\mathrm{CR}}-n_{i}^{0})\left[\frac{1}{4\omega}\left(1+\mathrm{cosh}\left(\frac{z-z_{F}}{\omega}\right)\right)^{-1}\right].$
(1)
Here, $\omega$ was set to 0.2 nm, and represents the width of the force region (between the control and reservoir regions highlighted by the blue lines in Figure 2), while $k$ was $2\times 10^{4}$ kJ mol$^{-1}$ nm$^{-1}$, giving the correct
densities in the bulk (see [28] for a discussion on these parameters). $n^{0}$
is the target ion number density, while $n^{\mathrm{CR}}$ is the density
calculated instantaneously during time integration in the control region.
Finally, $z_{F}$ is the position in $z$ where the C$\mu$MD forces are applied.
In our simulations this is set to 5.5 nm beyond the graphene surface. Using
this approach, the densities of cations and anions are constrained in the
control region to maintain target concentrations of 0.5, 2.0, 3.0, 4.0, 4.4
and 6 M. At each MD time-step, ion positions are passed to Plumed in order to compute the C$\mu$MD forces, which act only on those ions in the region of $z_{F}$. No external forces are applied to the ions outside of this region,
and any local change in the ion density at the interface results from the
physical interactions between graphene and the solution.
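As an illustration of Equation 1, the short sketch below evaluates the restraining force profile with the parameters quoted above; the density values fed in are placeholders.

```python
import numpy as np

def cmumd_force(z, n_cr, n0, k=2.0e4, omega=0.2, z_f=5.5):
    """Equation (1): C-mu-MD restraining force at height z (nm), given the
    instantaneous (n_cr) and target (n0) number densities in the control
    region; k, omega and z_f take the values used in this work."""
    bell = 1.0 / (4.0 * omega * (1.0 + np.cosh((z - z_f) / omega)))
    return k * (n_cr - n0) * bell

# The bell-shaped factor confines the force to a region of width ~omega
# around z_F, so ions elsewhere feel only the physical interactions.
z = np.linspace(4.0, 7.0, 301)
force = cmumd_force(z, n_cr=1.05, n0=1.00)   # placeholder densities
```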
### 2.2 QMMD Model
The generality of electronic structure theory and its ability to reproduce the electronic charge density distribution in semiconductors, metals, and semimetals implies that the QMMD approach can describe both long- and short-ranged redistribution of the surface charge induced by the presence of the electrolyte. Within each iteration (see Figure 1) of our scheme, the fully
classical system is taken as input for a quantum mechanical calculation. The
simulation box is partitioned into surface atoms whose electronic structure is
explicitly treated, and electrolyte atoms that are converted into a set of
point charges. The point charges take the values of the partial charges
contained in the classical force field and form the background electrostatic
potential during the computation of the electron structure. Upon derivation of
the electronic structure, partitioning of the charge density via Mulliken
population analysis yields the set of surface atom partial charges, which are
then passed to the classical force field. Finally, a short MD trajectory on
the order of several picoseconds can then be carried out (in the presence of
the quantum mechanically polarized surface) to generate the electrolyte
configuration for the following iteration. In our simulations we employ a
coupling between QM and MD calculations of 5 ps. We previously found for this
class of systems that 5 ps represents a good compromise in terms of
computational accuracy of the computed charges (0.004 $e$) vs computing time
when compared with a QMMD simulation where the charges were updated at every
MD time step [18].
In practice, in order to describe the electronic structure of solid-
electrolyte interfaces on the length scales required, we leverage the self-
consistent charge Density Functional Tight-Binding (SCC-DFTB) [39] approach,
which is an approximation to Kohn-Sham Density Functional Theory. In our DFTB+ calculations, the interactions between the C atoms in the surface are described empirically by the mio-1-1 parameter set. The SCC
charge threshold and Fermi temperature have been set to $1\times 10^{-2}$
Hartree and 300 K, respectively. Whereas, on first inspection, these criteria
can be considered loose and should not be adopted for the calculation of the
total electronic energy, rigorous testing in our previous works [18, 40] found
that they provide a sufficiently accurate description of the surface charge
distribution with respect to fully converged simulations, at a fraction of the
computational cost. Finally, to compute the partial charges passed to the
graphene force field at each MD step, we perform a Mulliken population
analysis [41], which gives reasonable results for this class of systems[18,
40].
### 2.3 Simulations Details
In our simulations, we consider a graphene electrode composed of 336 carbon
atoms in contact with aqueous electrolyte solutions. We investigated three
electrolyte systems, NaCl, KCl and LiCl at concentrations ranging from 0.5 M
to 6 M. However, due to the solubility limit of KCl(aq) [42, 43], we limit the investigated concentrations to 4.4 M for the KCl system.
Our simulations are carried out at constant surface charge, which makes it
difficult to draw comparisons across different electrodes since the potential
applied is not necessarily constant. As such, when we compute the capacitance,
we use the potential drop of the neutral electrode as a reference. This
approach has been applied previously to compare the properties of the
electrochemical double layer for different electrolytes [44]. Each operating
condition was therefore repeated for two different total charges of the
electrode: a charged graphene layer with a constant surface charge [45] of $\sigma=-0.449$ e nm$^{-2}$ ($-0.0719$ C m$^{-2}$) and a neutral one ($\sigma=0$).
Structural analyses of the solutions are carried out using PLUMED by post-
processing the simulation trajectories. The first-shell coordination numbers
for cations with anions ($N_{\mathrm{X-Cl}}$) and cations with water oxygen
atoms ($N_{\mathrm{X-Ow}}$) were computed using a continuous switching
function:
$N=\frac{1}{M}\sum_{i}^{M}\exp\left(\frac{-(r-d_{0})^{2}}{2r_{0}^{2}}\right)$
(2)
where $r$ are distances between atoms, $r_{0}$ was 0.01 nm, and $d_{0}$ was
chosen such that the function goes smoothly from one to zero at the position
of the first minimum in radial distribution functions for the cations with
anions and water oxygen atoms. This ensured that a conservative definition of
first-shell coordination was adopted in the analyses. Coordination numbers
were evaluated in 1.3 nm regions in $z$ closest to the graphene surface and
3.5 nm from the surface, representing the double layer and bulk solution
regions, respectively. The first coordination sphere distributions for ions
were used to construct a graph of ion-ion contacts using the NetworkX Python
library [46]. This allowed us to identify and compute the size of the ion
clusters formed. Ion clusters at the interface and within the bulk were
identified by sampling the regions defined for computing the coordination
numbers.
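A sketch of how the switching function of Equation 2 translates into a coordination count is shown below. The distances are placeholders, and the clamp of the switching function to 1 for $r \leq d_0$ is an assumption, following the form of PLUMED's Gaussian switching function.

```python
import numpy as np

def switching(r, d0, r0=0.01):
    """Gaussian switching function of Equation (2): ~1 inside the first
    shell and decaying sharply past d0 (r0 = 0.01 nm as in the text).
    The clamp to 1 for r <= d0 is assumed here."""
    return np.where(r <= d0, 1.0, np.exp(-((r - d0) ** 2) / (2.0 * r0 ** 2)))

def mean_coordination(distance_matrix, d0):
    """Average first-shell coordination over M central ions, where
    distance_matrix[i, j] is the distance (nm) from central ion i to
    candidate neighbour j."""
    per_ion = switching(distance_matrix, d0).sum(axis=1)
    return per_ion.mean()

# Placeholder distances for 3 cations x 5 neighbours, with d0 set at the
# first RDF minimum (e.g. ~0.39 nm for K-Cl, see section 3.3).
rij = np.random.default_rng(0).uniform(0.25, 0.8, size=(3, 5))
print(mean_coordination(rij, d0=0.39))
```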
Molecular dynamics calculations in the NVT ensemble are carried out using
GROMACS [47, 48], version 2018.4. The leapfrog algorithm with a timestep of 1
fs was used to integrate the equations of motion at a constant temperature of
298.15 K, controlled with the Nosé-Hoover thermostat, with a relaxation time
of 0.1 ps. Long-range electrostatic interactions were treated using the
particle-mesh Ewald approach, with a cut-off of 1.4 nm. Non-bonded
interactions were computed using a Lennard-Jones 12-6 potential, truncated
smoothly at 1.0 nm using a switch function starting at a distance of 0.99 nm.
In all simulations, graphene carbon atoms were frozen, and water was modelled
using the SPC/E model [49] with the SETTLE algorithm used to maintain rigid
molecule geometries [50]. This choice is compatible with the Werder water-
graphene parameters that reproduce the experimentally measured graphene/water
contact angle [7, 51]. Ion force field parameters (for K+, Li+, Na+, Cl-), also compatible with the SPC/E model, are taken from the work of Joung and Cheatham [52]. In order to prevent water molecules and ions from escaping the
solution into the vacuum space, we added a fixed wall above the reservoir,
interacting with water molecules and ions only through a short-range Lennard-
Jones potential.
We equilibrated each system for 20 ns followed by 130 ns production runs to
collect data for subsequent analyses of the steady-state structure of the
interface. In all analyses discussed below, mean values and standard
deviations (error bars) are obtained via averaging performed using 5 ns
windows.
## 3 Results and Discussion
Thanks to the simulation protocol implemented, electroneutral solutions with
fixed ion concentrations can be maintained in the Control Region in Figure 2,
representing bulk solutions in equilibrium with the electrode-solution
interfaces. This allows us to compare the behaviour of different electrolytes
while controlling the electrolyte background concentration.
### 3.1 Density Profiles
We start this section by reporting in Figure 3 the concentration of the
different ionic species in solution as a function of the $z$-coordinate,
corresponding to the simulation cell direction orthogonal to the surface of
negatively charged graphene electrodes. As expected, these profiles show
preferential adsorption of cations at the electrode surfaces. For Na+ and Li+,
a sharp density peak is observed at a distance of $0.5$ nm from graphene,
followed by a second, less pronounced peak at $0.75$ nm. At the highest
concentrations, a third cation peak emerges around $1.15$ nm, which is more
pronounced for Li+. In contrast, in the case of K+, a small peak at $0.3$ nm
is followed by a much larger and relatively diffuse density peak at $0.6$ nm.
This is due to specific adsorption of the larger cation at the carbon surface,
a small number of which partially dehydrate to directly coordinate to carbon.
The difference in the $z$-density profiles for the different systems is less
notable when considering Cl- with respect to cations. At the lowest bulk
concentrations, there is a monotonically increasing density which reaches bulk
values around 1.5 nm from the graphene interface. As the concentration rises,
further density peaks are observed close to the carbon substrate, determined
by the emergence of a multi-layered electrical double-layer structure,
consistent with previously reported results [28, 18]. In such double-layer
configurations, adjacent solution layers, rich in cations or anions, arise at
the interface due to ion crowding (as in the case of the cations that are
attracted towards the negatively charged surface of the electrode) and ion
correlation (the localized positive excess charge in the closest layers to the
electrode, in turn, attracts the anions).
The results reported in Figure 3 are consistent with those of [40] with
NaCl(aq) and LiCl(aq) systems displaying, qualitatively, the same solution
side double layer structure. The case of KCl(aq) differs somewhat, with the same positions of the first two peaks in Figure 3(e) in both cases, but a different intensity compared with [40]. In turn, this intensity difference can
be due to the different classical force fields used for water, carbon and ions
as well as the use of scaled ionic charges not considered here. Besides these
rather minor differences, the results presented here seem to be robust with
respect to the chosen classical model. However, other results in the
literature (see [53]) show clear qualitative differences (in particular for
the KCl(aq) system where no adsorption is observed), most likely due to the
lack of dynamic polarization considered for the graphene electrodes.
Figure 3: Molar (M) density of the cations (top row: (a) Na+, (b) K+, (c) Li+) and the corresponding anions (bottom row: (d) Cl- (NaCl), (e) Cl- (KCl), (f) Cl- (LiCl)) for the three systems considered in this work. Black, green, magenta, red, orange and blue curves correspond to bulk solution concentrations of 0.5, 2, 3, 4, 4.4 and 6 M, respectively.
### 3.2 Electrical Double Layer Properties
In this section we will derive and analyze the electrical properties of the
electrode-electrolyte systems considered in this work.
#### Electrode Charge Screening
We begin by considering the screening factor [28] $f$ defined as:
$f(z)=-\int_{0}^{z}\frac{\rho_{ions}(z^{\prime})}{\sigma}\mbox{d}z^{\prime}$
(3)
where $\sigma$ is the surface charge density of the electrode interface and $\rho_{ions}(z)$ is the charge density of the ions only, which is considered a function of just the $z$ coordinate, i.e., it is averaged over the $x$ and $y$ coordinates.
The screening factor represents the extent to which the electrolyte phase
electrically screens the charged interface. When $f$ converges to a value of
one, the charge on the electrode is entirely shielded by the electrolyte. By
considering only the ions in the calculations of $f$, we can compare their
screening potential to predictions of simple mean field models. The
integration shown in Equation 3 and Equation 5 is performed numerically. Data
are first smoothed by applying the Savitzky-Golay [54] finite impulse response
smoothing filter of order 3 with a window width of 5 points, implemented in
Matlab. The smoothed curves obtained are then integrated using the trapezoidal
rule. Error bars are computed by error propagation through the integration
procedure.
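For concreteness, the same numerical recipe can be sketched in Python, with SciPy in place of Matlab; the density profile below is a synthetic placeholder.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.integrate import cumulative_trapezoid

def screening_factor(z, rho_ions, sigma):
    """Equation (3): f(z) = -(1/sigma) * cumulative integral of the ion
    charge density, after Savitzky-Golay smoothing (order 3, 5-point
    window) as described in the text."""
    rho_smooth = savgol_filter(rho_ions, window_length=5, polyorder=3)
    return -cumulative_trapezoid(rho_smooth, z, initial=0.0) / sigma

# Placeholder profile: a surface-localized counter-charge layer whose
# integral exactly compensates sigma drives f from 0 towards 1.
z = np.linspace(0.0, 4.0, 400)
sigma = -0.449                              # e nm^-2, as in this work
rho = -sigma * np.exp(-z / 0.5) / 0.5       # e nm^-3, integrates to -sigma
f = screening_factor(z, rho, sigma)         # approaches 1 at large z
```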
The screening factors for all systems are reported in Figure 4. When the
concentration of the ions is below 1 M, an under-screening near the interface
can be observed. $f$ increases smoothly to a value of one at around $z=2$ nm.
This is qualitatively consistent with the predictions of Gouy-Chapman’s
theory. For higher concentrations, however, $f$ transitions to over-screening
at relatively small values of the $z$ coordinate. The over-screening,
highlighted by the first peak at $z\approx 0.6$ nm reported in Figures 4(a),
4(b) and 4(c), depends both on the particular ion and the bulk concentration.
In particular, the LiCl system has the strongest over-screening effect on the
electrode across the entire concentration range considered. Over-screening is
a well-known effect in ionic liquids [25] and is usually considered unimportant in electrolyte solutions, as it only becomes apparent at relatively high concentrations [18, 28, 53]. The fact that over-screening
appears for higher concentration of the solute, in turn, can be linked
directly to the structuring of the ions near the interface observed in Figure
3. With the increase in concentration, the density of the cations closest to
the electrode increases with respect to their value in the solution bulk (see
Figure 3). The excess charge associated with this ion accumulation is balanced
in adjacent solution layers until the average bulk density is reached [14].
This description is consistent with our observations, where lithium and sodium
show a high degree of structuring near the interface relative to potassium
(i.e., multiple ion density peaks are observed, accompanied by a significant
over-screening effect). In contrast, potassium, with the lowest degree of
structuring near the interface, shows the smallest over-screening among the
three ion solutions considered. Moreover, for potassium, we observe a
variation in the slope of the screening factor when $z\approx 0.5$ nm, which
increases (becoming more pronounced) as a function of concentration. This
additional feature in the screening factor, absent in NaCl and LiCl, can be
explained by the direct coordination of the K+ (i.e., through the first
coordination sphere) to carbon atoms (as also observed in [40]), as opposed to
the behaviour of the cations in LiCl(aq) and NaCl(aq) systems (see the first
peak at $\approx$0.35 nm in Figure 3(b) with respect to the first peak at
$\approx$0.5 nm in Figures 3(c) and 3(a)).
Figure 4: Screening factor as defined in Equation 3 for (a) KCl, (b) LiCl and (c) NaCl, computed using ion solution charge densities only. We included only a subset of the concentrations for clarity; the results for all concentrations are reported in Fig. S3 of the SI.
#### Electrode Polarisation
The coordination of K+ with the carbon atoms of the graphene electrode is shown qualitatively in Figure 5 for the lowest (0.5 M) and the highest concentration (4.4 M) considered here. The plots in Figure 5 represent, for each concentration, the single snapshot from the 150 ns simulation with the highest number of potassium cations in direct contact with the interface (i.e., at a distance of $0.26$ nm from the interface). As expected, the number of K+ in direct contact with the interface increases as the bulk concentration of the cations increases, consistent with the observation in Figure 4 that the short-distance (from the electrode) behaviour of the screening factor increases with concentration.
The accumulation of K+ in the region near the negative electrode (see Figure 3(b)) results in an increased non-uniformity of the partial charge distribution on the electrode, with higher negative charges located on the carbons closest to the coordinated K+. This, in turn, demonstrates the importance of accounting for polarisation effects in systems where direct coordination of electrolytes to the electrode may occur.
Figure 5: Representative plots of the computed Mulliken charges on the charged graphene sheet in contact with KCl solutions at (a) 0.5 M and (b) 4.4 M. Circled X's mark the coordinates of K+ ions directly adsorbed on the surface.
Figure 6: Electrostatic potential as defined in Equation 5 for (a) KCl, (b) LiCl and (c) NaCl. We included only a subset of the concentrations for clarity; the results for all concentrations are reported in Fig. S3 of the SI.
#### Electrical Potential in the Double Layer
We calculated the electrical field $E(z)$ and the electrical potential,
$\psi(z)$ in the direction orthogonal to the interface using the Poisson
equation:
$-\frac{\mbox{d}^{2}\psi(z)}{\mbox{d}z^{2}}=\frac{\mbox{d}E(z)}{\mbox{d}z}=\frac{\rho(z)}{\epsilon(z)}$
(4)
where $\rho(z)$ is the charge density calculated for all atoms on the
perpendicular axis and we defined $\epsilon(z)=\epsilon_{r}(z)\epsilon_{0}$,
the product of the permittivity in vacuum $\epsilon_{0}$ and relative
permittivity $\epsilon_{r}$. It was reported that this latter quantity could
be a function of the distance from the electrode [55], a function of the
concentration of the electrolyte [16], or possibly both. Given such
uncertainties, we consider a constant relative permittivity equal to one in
this work. The electrical potential, $\psi$(z), is obtained from Equation 4 by
integrating twice with respect to the $z$-coordinate:
$\psi(z)=-\int_{0}^{z}\int_{0}^{z^{\prime}}\frac{\rho(\zeta)}{\epsilon(\zeta)}\mbox{d}\zeta\mbox{d}z^{\prime}$
(5)
The two integration constants in Equation 5 are chosen to set the
electrostatic field and potential equal to zero in the bulk, which amounts to
considering the bulk as the reference for the calculation of the electrostatic
potential.
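In practice, the double integral of Equation 5 reduces to two cumulative trapezoidal integrations, with field and potential shifted so that both vanish in the bulk. A sketch follows, where the choice of the last 1 nm of the profile as the bulk window is an assumption.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def potential_profile(z, rho, eps_r=1.0):
    """Equations (4)-(5): integrate rho/eps once to get E(z), and again
    (with a sign change) to get psi(z). With rho in C m^-3 and z in m,
    psi comes out in volts; eps_r = 1 as in the text."""
    eps0 = 8.8541878128e-12               # vacuum permittivity, F m^-1
    e_field = cumulative_trapezoid(rho / (eps_r * eps0), z, initial=0.0)
    bulk = z > z.max() - 1.0e-9           # last 1 nm taken as bulk (assumed)
    e_field -= e_field[bulk].mean()       # set E = 0 in the bulk
    psi = -cumulative_trapezoid(e_field, z, initial=0.0)
    return psi - psi[bulk].mean()         # set psi = 0 in the bulk
```

Subtracting the bulk averages implements the two integration constants mentioned above, making the electroneutral solution far from the electrode the reference for the potential.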
The results of Equation 5 are reported in Figure 6 for a selection of
concentrations (see Figure S.3 of the SM for the entire range of
concentrations). In stark contrast to the exponential behaviour predicted by
models based on the Gouy-Chapman double layer theory, which treats the solvent medium as a continuum with a known dielectric constant, atom/molecule finite-size effects give rise to an undulating $\psi(z)$ function in the interfacial region at all concentrations and in all systems. When calculating the charge distribution, we include all solution species, including the partial charges of water molecules.
Hence, it is unsurprising that the structuring of ions and water molecules at
the interface gives rise to a significant departure from the predictions of
simple mean field models. Indeed, these finite size effects are a well-
reported feature of electrode-electrolyte systems [56, 57].
From a relatively large negative value of the potential at the electrode, the
(partial) charges of ions and water give rise to fluctuations that attenuate
at larger values of $z$, where the bulk solution behaviour is recovered.
Generally, increasing the bulk solution concentration increases the amplitude of $\psi(z)$ fluctuations. Furthermore, it is evident from Figure 6(b) and
Figure 6(c) that the crowding of ions in the double-layer increases with
concentration as the positions of peaks and minima in $z$ shift to lower
values, a feature also observed by Finney et al. [28] with graphite and which
was related to changes in the screening factor. This concentration dependence
is less apparent in the case of KCl(aq), where the value of $\psi(z)$ at the
first maximum is less susceptible to changes in the concentration as opposed
to NaCl(aq) and LiCl(aq).
#### Electrical Double Layer Capacitance
The total capacitance $C_{TOT}$ in these kinds of systems is usually considered as composed of two independent components combined in series: the Electrochemical Double-Layer Capacitance (EDLC), $C_{EDL}$, and the quantum capacitance (or space charge capacitance), $C_{Q}$, which depends on the spatial distribution of the charges on the graphene [40]. The total capacitance is then given by
$\frac{1}{C_{TOT}}=\frac{1}{C_{EDL}}+\frac{1}{C_{Q}}$ (6)
From Figure 6 we can easily derive the potential drop, $\Delta\psi$, across the interface as $\Delta\psi=\Delta\psi^{-}-\Delta\psi_{ref}$, where $\Delta\psi^{-}$ and $\Delta\psi_{ref}$ represent the potential drop at the interface with respect to the bulk for the charged and neutral electrodes, respectively. (A more precise notation for the potential drop across the interface would have been $\Delta\Delta\psi=\Delta\psi^{-}-\Delta\psi_{ref}$ [40].) As a reference for the calculation of the potential drop, we use the potential at the interface of a neutral electrode with all other conditions unchanged. We report the calculation of the potential across the system for a neutral electrode in the SI (see Figure S.4 of the SI) along with the potential drop at the interface ($\Delta\psi_{ref}$) (see Table S.2 of the SI). With this definition of the potential drop, the EDLC can be obtained as
$C_{EDL}=\frac{\sigma}{\Delta\psi}$ (7)
The quantum capacitance instead is obtained by calculating the differential
quantum capacitance $C_{Q}^{diff}$ according to [40]:
$C_{Q}^{diff}(\psi)=\frac{e^{2}}{4k_{B}T}\int_{-\infty}^{\infty}D(E)\,\mathrm{sech}^{2}\left(\frac{E+e\psi}{2k_{B}T}\right)\mbox{d}E$
(8)
where $e$ is the elementary charge, $E$ is the energy relative to the Fermi level, $D(E)$ is the density of states at a given energy, $k_{B}$ is the
Boltzmann constant, and $T$ is the temperature. By integrating the
differential quantum capacitance with respect to the potential $\psi$ up to
the potential drop $\Delta\psi$ calculated for each system, we obtain the
integral quantum capacitance $C_{Q}$:
$C_{Q}=\frac{1}{\Delta\psi}\int_{0}^{\Delta\psi}C_{Q}^{diff}(\psi)\mbox{d}\psi$
(9)
For more detailed information about the calculation of the quantum capacitance
we refer the reader to our previous work [40].
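A numerical sketch of Equations 8 and 9 is given below. The density of states used is the ideal linear-band form for graphene, included purely as an illustration (in this work $D(E)$ comes from the QM calculations), and SI units are assumed throughout.

```python
import numpy as np
from scipy.integrate import trapezoid

E_CH = 1.602176634e-19          # elementary charge, C
KB_T = 1.380649e-23 * 300.0     # J (T = 300 K, matching the Fermi smearing)

def cq_differential(psi, energies, dos):
    """Sketch of Equation (8) with thermal-broadening argument
    (E + e*psi)/(2 kB T); energies in J, dos in states J^-1 m^-2,
    result in F m^-2."""
    kernel = 1.0 / np.cosh((energies + E_CH * psi) / (2.0 * KB_T)) ** 2
    return E_CH ** 2 / (4.0 * KB_T) * trapezoid(dos * kernel, energies)

def cq_integral(delta_psi, energies, dos, n=400):
    """Sketch of Equation (9): average of C_Q^diff between 0 and delta_psi (V)."""
    psis = np.linspace(0.0, delta_psi, n)
    vals = np.array([cq_differential(p, energies, dos) for p in psis])
    return trapezoid(vals, psis) / delta_psi

# Placeholder DOS: ideal graphene's linear bands, D(E) = 2|E|/(pi (hbar v_F)^2).
HBAR_VF = 1.054571817e-34 * 1.0e6             # hbar * v_F, J m
E = np.linspace(-1.0, 1.0, 2001) * E_CH       # +/- 1 eV around the Fermi level
D = 2.0 * np.abs(E) / (np.pi * HBAR_VF ** 2)
print(cq_integral(-1.0, E, D) * 100.0)        # F m^-2 -> microF cm^-2
```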
concentration (M) | $\Delta\psi$ (V) | $C_{EDL}$ | $C_{Q}$ | $C_{TOT}$
---|---|---|---|---
**KCl** | | | |
0.5 | -1.03 | 6.95 | 10.56 | 4.19
2.0 | -1.01 | 7.10 | 10.31 | 4.20
3.0 | -1.00 | 7.16 | 10.23 | 4.21
4.0 | -0.984 | 7.28 | 9.81 | 4.18
4.4 | -0.988 | 7.25 | 10.07 | 4.22
**LiCl** | | | |
0.5 | -1.05 | 6.82 | 10.78 | 4.18
2.0 | -1.01 | 7.09 | 10.31 | 4.20
3.0 | -1.00 | 7.16 | 10.21 | 4.21
4.0 | -1.00 | 7.16 | 10.21 | 4.21
4.4 | -1.09 | 6.57 | 11.25 | 4.15
6.0 | -1.02 | 7.02 | 10.44 | 4.20
**NaCl** | | | |
0.5 | -1.05 | 6.82 | 10.78 | 4.18
2.0 | -1.02 | 7.02 | 10.44 | 4.20
3.0 | -0.997 | 7.18 | 10.17 | 4.21
4.0 | -0.996 | 7.19 | 10.15 | 4.21
4.4 | -0.986 | 7.26 | 10.05 | 4.22
6.0 | -1.00 | 7.16 | 10.21 | 4.21

Table 1: Electrostatic potential drop ($\Delta\psi$, in V) across the interface, electrochemical double layer capacitance $C_{EDL}$, quantum capacitance $C_{Q}$, and total capacitance $C_{TOT}$ (in $\mu$F cm$^{-2}$) for each concentration considered (in M).
The results for $C_{Q}$, $C_{EDL}$, and $C_{TOT}$ for all of the systems
considered are reported in Table 1. The data show that the total capacitance is practically constant across the whole concentration range and for all solution types. The largest variation in $C_{TOT}$ we obtained among all the systems is $\approx$2% (between LiCl(aq) and KCl(aq) at 4.4 M). This result contrasts with the different behaviour of the three cations in solution and near the electrode interfaces, as highlighted in the discussion of the number density of ionic species at the interface (see Figure 3) and their screening effect on the charge of the electrode (Figure 4), and as further discussed in the following section in relation to their clustering properties.
An important point we want to highlight here is that such differences in the behaviour of the cations in solution can be correctly captured through the use of a simulation protocol that combines pseudo-open boundary conditions, i.e., C$\mu$MD, to maintain constant-composition electroneutral bulk solutions beyond the double layer, with a quantum mechanical description of the distribution of partial charges on the electrode. However, while the capacitance is a critical parameter for the application of these systems as supercapacitors, we showed here that the physics of the interfaces between graphene electrodes and electrolytes is much richer than what is captured by this quantity alone.
### 3.3 Ion Association
An often overlooked effect in alkali chloride solutions is the tendency of ions to associate, forming clusters. In particular, even simple salt solutions exhibit significant non-ideal behaviour at high concentrations.
Recent experiments [58] and simulations [59] have shown that extended liquid-
like clusters exist in bulk NaCl(aq) at high concentrations and the extent of
these ionic networks is promoted in the double layer at carbon surfaces [28].
Since the effectiveness of the graphene-electrolyte devices often depends on
the ability to ‘build up the double layer’ (i.e., accumulate ions from the
bulk solution in the interfacial region), the structure and mobility of ion
species can be essential to this.
#### Ion Clusters
To identify and characterise ion associates in the simulations in this work, pairwise RDFs were computed (see Fig. S.1 of the Supplementary Material (SM)), and the first minima in these informed truncation distances ($r_{c}$) for first-sphere ion-ion coordination: $r_{c}=0.29$, 0.34 and 0.39 nm for Li–Cl, Na–Cl and K–Cl, respectively, reflecting the different sizes of the cations. Clusters were identified as fully connected networks in the graph of adjacent ion-ion connections according to this geometric criterion, regardless of their total charge or lifetime.
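A sketch of this graph-based cluster identification with NetworkX is shown below; a single cutoff and random placeholder coordinates are used for brevity, whereas the analysis uses the salt-specific cutoffs quoted above.

```python
import numpy as np
import networkx as nx

def ion_clusters(positions, charges, r_c):
    """Build the ion-ion contact graph (pair distance < r_c) and return
    (size, net charge) for every connected component. In the analysis,
    r_c is salt-specific (0.29, 0.34, 0.39 nm for Li-Cl, Na-Cl, K-Cl)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(positions)))
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < r_c:
                g.add_edge(i, j)
    return [(len(c), sum(charges[k] for k in c))
            for c in nx.connected_components(g)]

# Placeholder snapshot: 20 ions (alternating +1/-1 charges) in a 2 nm box.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 2.0, size=(20, 3))
q = [1 if i % 2 == 0 else -1 for i in range(20)]
for size, charge in ion_clusters(pos, q, r_c=0.39):
    print(size, charge)
```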
Figure 7 provides the average first-sphere coordination number between cations
and O of water (see Figure 7(a)) as well as cations and anions for all
systems, calculated using Equation 2.
Figure 7: Coordination numbers for the different systems at the different concentrations: (a) cation/water, (b) cation/anion.
The results shown in Figure 7 indicate no significant surface effect on the
coordination of cations with water or chloride when ions in the interface
($0<z<2.5$ nm) and bulk ($2.5<z<4.5$ nm) regions were investigated. There is a
slight increase in the mean cation-anion coordination, and a concomitant
decrease in cation-water coordination, at the interface compared to the bulk;
however, this difference is within the margin of error. Generally, the effect
of increasing concentration is to increase the number of cation-anion
contacts, particularly for KCl(aq), where the coordination number is more than
double that of the other systems for all concentrations (and with Li–Cl
coordination being negligible even at 6 M). From the largest to smallest
variation in the coordination number we can write
$\mbox{K}^{+}\rightarrow\mbox{Na}^{+}\rightarrow\mbox{Li}^{+}$. This trend
follows the decrease of the ion radius and it is likely due to the stronger
binding of water in the solvation spheres of smaller cations. Furthermore, the
average cation-water coordination number is unchanging with a concentration
within the margin of error.
In simulations of NaCl(aq) in contact with graphite [28], the substrate was found to increase cation-anion correlations in the double layer with respect to the bulk, particularly beyond $5$ M. It is important to note that different models (due to the different system) were used, and also that the system size likely plays a role in the extent to which clusters can grow (e.g., through the system-size dependence of the availability of ions to form associates, and through the extent to which finite-size and percolating clusters may form in effectively confined canonical systems).
The change in coordination for different salts is reflected in the cluster
size probability distributions presented in Figure 8 for the case of 4.4 M (we
report the results for the entire range of concentrations in Figure S.5 of the
SM). There is a clear difference in the extent to which clusters can grow,
with lithium forming clusters containing at most four ions and potassium
forming much larger networks containing as many as 35 ions. Even at the
highest concentrations, the majority of the Li+ are dispersed in solution,
fully solvated in their first shell. A snapshot of a configuration obtained
during the simulation of KCl at 4.4 M is shown in Figure 8. Although the most
probable clusters contain only a few ions (for clusters composed of five ion
units, we obtained a relative frequency of 0.01), larger species do contribute
to the charge storage capacity and must be considered. What we observe is a
stronger tendency of the potassium to associate into large aggregates—albeit
ones which are highly dynamic on the timescales of the simulations—compared to
sodium or lithium.
Since the KCl(aq) system shows the formation of large aggregates of ions, it
is interesting to study the relative frequency of the charge of these
aggregates. In Figure 9 we plot the 2-dimensional histogram showing the
relative frequencies of the charge vs. the cluster size for the KCl(aq)
system. The histogram is skewed towards positive charges, with the appearance
of clusters containing an excess of positive charge as large as +7e, although
the majority of the clusters are neutral.
Figure 8: Left: histogram of the relative frequency of clusters of different sizes at a concentration of 4.4 M; the inset shows the same quantity for the 0.5 M case. Right: an example of a cluster composed of 26 ions for the KCl system at 4.4 M.
Figure 9: Two-dimensional histogram (charge vs. size of the clusters) for the KCl(aq) system at the largest concentration considered (4.4 M).
#### Ion Mobilities
As well as a high capacity to store charge, an optimal charge storage device
must also be a good electrical conductor. In this regard, we might expect the
conductivity of solutions to decrease when clusters are present. Indeed, this
can be perceived as a relative decrease in the activity of charge carriers due
to increasingly non-ideal solutions. To test this, we calculated the
conductivity of bulk NaCl(aq) solutions with concentrations ranging from 1–10
M from the ion diffusion coefficients calculated by Finney and Salvalaglio in
finite size systems and in the dilute limit [59, 60]. To determine
conductivity we use the Nernst-Einstein equation:
$\sigma_{NE}=\frac{e^{2}}{Vk_{\mathrm{B}}T}(N_{+}z_{+}^{2}D_{+}+N_{-}z_{-}^{2}D_{-})$
(10)
where $e$, $V$, $k_{\mathrm{B}}$ and $T$ are the elementary charge, simulation
cell volume, Boltzmann’s constant and temperature, respectively. $N$ and $D$
are the total number of ions and diffusion coefficients for ions with charge
indicated by the subscript. Furthermore, we assume that, given the highly
dynamic nature of the clusters observed in solution, the valency of ionic
species, $z$, is equal to one.
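Equation 10 in code, using placeholder ion counts and volume together with the dilute-limit diffusion coefficients quoted below in the text (converted to SI):

```python
KB = 1.380649e-23        # Boltzmann constant, J K^-1
E_CH = 1.602176634e-19   # elementary charge, C

def nernst_einstein(volume, temperature, n_plus, d_plus, n_minus, d_minus,
                    z_plus=1, z_minus=1):
    """Equation (10): ideal conductivity (S m^-1) from ion counts and
    diffusion coefficients; volume in m^3, D in m^2 s^-1, and z = 1 for
    both species as assumed in the text."""
    prefactor = E_CH ** 2 / (volume * KB * temperature)
    return prefactor * (n_plus * z_plus ** 2 * d_plus +
                        n_minus * z_minus ** 2 * d_minus)

# Illustrative numbers: ~1 M NaCl corresponds to ~602 ion pairs per
# 1000 nm^3 (= 1e-24 m^3); dilute-limit D values converted from cm^2 s^-1.
sigma = nernst_einstein(volume=1.0e-24, temperature=298.15,
                        n_plus=602, d_plus=1.223e-9,
                        n_minus=602, d_minus=1.282e-9)
print(sigma)   # S m^-1; of the order of experimental 1 M NaCl conductivity
```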
Figure 10: Solution conductivities, $\sigma_{NE}$, of bulk NaCl(aq) solutions
calculated for a range of concentrations. To this aim, the Nernst-Einstein
equation was adopted where ion diffusion coefficients were determined from
simulations at finite concentration, $D_{ion}$ (blue), or from a single
simulation at the dilute limit, $D_{ion}^{0}$ (red). Dashed lines are a guide
for the eye, while error bars indicate uncertainties in the conductivities
associated with the calculated $D$ value from Refs. [59] and [60].
Figure 10 provides the solution conductivities for NaCl(aq) where either the
diffusion of ions in finite concentration simulations was used ($D_{ion}$) or
the diffusion of ions in the dilute limit ($D_{ion}^{0}$) was considered. For
the latter, ions are assumed to be completely dispersed, as association, even beyond the second solvation sphere, did not occur in simulations at the dilute limit. For the estimate of $D_{ion}^{0}$, Finney and Salvalaglio [60]
performed extended simulations of a single cation and anion in 4,000 water
molecules; here, $D_{+}^{0}=1.223\pm 0.005\times 10^{-5}$ cm2 s-1 and
$D_{-}^{0}=1.282\pm 0.008\times 10^{-5}$ cm2 s-1. In all cases, diffusion
coefficients were corrected to account for simulation finite size effects
[61]. Unsurprisingly, a linear correlation in $\sigma_{NE}$ as a function of
concentration is found when a constant $D_{ion}^{0}$ is used for the diffusion
of ions, independent of concentration. This is inaccurate at relatively high
concentrations, given the simulation and experimental observations of ion-ion
correlations[59, 58].
When accounting for the non-idealities in the solution and the formation of
clusters explicitly in the diffusion of ions, we find that the solution
conductivity reaches an upper limit between 4 and 5 M. At the lowest
concentrations (1–2 M), the conductivity from finite concentration and dilute
simulation data agree, and the simulation predictions match well with
experimental measurements [62]. A crossover in the conductivity behaviour from
the ‘pseudo-ideal’ to non-ideal regime occurs between 2 and 3 M. Therefore,
over a wide concentration range up to the salt solubility, non-idealities will
affect the performance of electrical devices; depending upon the chosen
application, electrolytes should be chosen to minimize these effects.
## 4 Conclusions
In this work, we presented an extended set of simulations describing the interface between three different electrolyte solutions (KCl(aq), LiCl(aq), and NaCl(aq)) and the surface of a negatively charged graphene electrode. To investigate these systems, we combined QM/MD and C$\mu$MD
methodologies into a new simulation framework. QM/MD models of the graphene
electrode in contact with an electrolyte enabled the explicit coupling of the
electrode polarizability with the instantaneous configuration of the
electrolyte. The latter was maintained in equilibrium with a liquid phase at
constant bulk concentration thanks to the C$\mu$MD model, which mimics open-
boundary conditions.
We performed a thorough analysis of the interaction of the ions with the
electrode by showing the different behaviour of the three cations in the
double layer, focusing on K+, which, according to our results, is able to
directly adsorb at the electrode surface at shorter distances compared to Li+
and Na+, modifying the screening effect of the solution.
Calculations of the integral capacitance indicated no concentration dependence
or specific ion effects, with a total capacitance of around 4.2 µF cm-2 across
all systems. However, the lack of variation in capacitance hides the rich
electrolyte solution behaviour, particularly for the ions close to the
electrode. We showed, for example, that large KCl clusters emerge in solution,
which might be important when considering properties associated with ion
mobility and charge transfer.
Our results indicate that accurate models of the interface, able to account for the position-dependent non-ideality of electrolyte solutions, better capture the configurational and dynamical details underpinning the electrochemical behaviour of interfaces at the atomistic level, details that are often overshadowed by aggregated quantities such as the integral capacitance. We plan to extend our calculations to include a range of
charged electrodes, both positive and negative, and further investigate ion
dynamics in solution.
## 5 Acknowledgements
We acknowledge the support provided by IT Services and the use of the Computational Shared Facility (CSF) at the University of Manchester. NDP,
JDE and PC thank the European Union’s Horizon 2020 research and innovation
programme project VIMMP under Grant Agreement No. 760907. ARF and MS
acknowledge funding from an EPSRC Programme Grant (Grant EP/R018820/1), which
funds the Crystallisation in the Real World consortium.
## References
* Cohen-Tanugi and Grossman [2012] David Cohen-Tanugi and Jeffrey C Grossman. Water desalination across nanoporous graphene. _Nano letters_ , 12(7):3602–3608, 2012.
* Heiranian et al. [2021] Mohammad Heiranian, Yechan Noh, and Narayana R Aluru. Dynamic and weak electric double layers in ultrathin nanopores. _The Journal of Chemical Physics_ , 154(13):134703, 2021.
* Surwade et al. [2015] Sumedh P Surwade, Sergei N Smirnov, Ivan V Vlassiouk, Raymond R Unocic, Gabriel M Veith, Sheng Dai, and Shannon M Mahurin. Water desalination using nanoporous single-layer graphene. _Nature nanotechnology_ , 10(5):459–464, 2015.
* Simon and Gogotsi [2008] Patrice Simon and Yury Gogotsi. Materials for electrochemical capacitors. In _Nanoscience and technology: a collection of reviews from Nature journals_ , pages 320–329. World Scientific, 2008.
* Wang et al. [2021] Yifan Wang, Lin Zhang, Haoqing Hou, Wenhui Xu, Gaigai Duan, Shuijian He, Kunming Liu, and Shaohua Jiang. Recent progress in carbon-based materials for supercapacitor electrodes: a review. _Journal of Materials Science_ , 56(1):173–200, 2021.
* Elliott et al. [2022a] J. Elliott, A. A. Papaderakis, R. Dryfe, and P. Carbone. The electrochemical double layer at the graphene/aqueous electrolyte interface: what we can learn from simulations, experiments, and theory. _Journal of Materials Chemistry C_ , 2022a.
* Wang et al. [2009] Yan Wang, Zhiqiang Shi, Yi Huang, Yanfeng Ma, Chengyang Wang, Mingming Chen, and Yongsheng Chen. Supercapacitor devices based on graphene materials. _Journal of Physical Chemistry C_ , 113(30):13103–13107, 2009.
* Liu et al. [2010] Chenguang Liu, Zhenning Yu, David Neff, Aruna Zhamu, and Bor Z Jang. Graphene-based supercapacitor with an ultrahigh energy density. _Nano letters_ , 10(12):4863–4868, 2010.
* Yu et al. [2010] Aiping Yu, Isaac Roes, Aaron Davies, and Zhongwei Chen. Ultrathin, transparent, and flexible graphene films for supercapacitor application. _Applied physics letters_ , 96(25):253105, 2010.
* Zhang et al. [2010] Li Li Zhang, Rui Zhou, and XS Zhao. Graphene-based materials as supercapacitor electrodes. _Journal of Materials Chemistry_ , 20(29):5983–5992, 2010.
* Zhu et al. [2011] Y. Zhu, S. Murali, M. D. Stoller, K. J. Ganesh, W. Cai, P. J. Ferreira, A. Pirkle, R. M. Wallace, K. A. Cychosz, M. Thommes, D. Su, E. A. Stach, and R. S. Ruoff. Carbon-based supercapacitors produced by activation of graphene. _science_ , 332(6037):1537–1541, 2011.
* An et al. [2001] Kay Hyeok An, Won Seok Kim, Young Soo Park, Young Chul Choi, Seung Mi Lee, Dong Chul Chung, Dong Jae Bae, Seong Chu Lim, and Young Hee Lee. Supercapacitors using single-walled carbon nanotube electrodes. _Advanced Materials_ , 13(7):497–500, 2001.
* Yang et al. [2019] Zhoufei Yang, Jiarui Tian, Zefang Yin, Chaojie Cui, Weizhong Qian, and Fei Wei. Carbon nanotube-and graphene-based nanomaterials and applications in high-voltage supercapacitor: A review. _Carbon_ , 141:467–480, 2019.
* Merlet et al. [2013] C. Merlet, B. Rotenberg, P. A. Madden, and M. Salanne. Computer simulations of ionic liquids at electrochemical interfaces. _Physical Chemistry Chemical Physics_ , 15(38):15781–15792, 2013.
* Iamprasertkun et al. [2019] P. Iamprasertkun, W. Hirunpinyopas, A. Keerthi, B. Wang, B. Radha, M. A. Bissett, and R. A. W. Dryfe. Capacitance of basal plane and edge-oriented highly ordered pyrolytic graphite: specific ion effects. _The Journal of Physical Chemistry Letters_ , 10(3):617–623, 2019.
* Yang et al. [2017] H. Yang, J. Yang, Z. Bo, X. Chen, X. Shuai, J. Kong, J. Yan, and K. Cen. Kinetic-dominated charging mechanism within representative aqueous electrolyte-based electric double-layer capacitors. _The Journal of Physical Chemistry Letters_ , 8(15):3703–3710, 2017.
* Qu et al. [2008] QT Qu, B Wang, LC Yang, Y Shi, S Tian, and YP Wu. Study on electrochemical performance of activated carbon in aqueous Li2SO4, Na2SO4 and K2SO4 electrolytes. _Electrochemistry Communications_ , 10(10):1652–1655, 2008.
* Elliott et al. [2020] J. D. Elliott, A. Troisi, and P. Carbone. A QM/MD coupling method to model the ion-induced polarization of graphene. _J. Chem. Theory Comput._ , 16(8):5253–5263, 2020\.
* Wang et al. [2008] Xinran Wang, Scott M Tabakman, and Hongjie Dai. Atomic layer deposition of metal oxides on pristine and functionalized graphene. _Journal of the American Chemical Society_ , 130(26):8152–8153, 2008.
* Gouy [1910] M. Gouy. Sur la constitution de la charge électrique à la surface d’un électrolyte. _J. Phys. Theor. Appl._ , 9(1):457–468, 1910.
* Chapman [1913] D. L. Chapman. LI. A contribution to the theory of electrocapillarity. _The London, Edinburgh, and Dublin philosophical magazine and journal of science_ , 25(148):475–481, 1913.
* Stern [1924] Otto Stern. Zur theorie der elektrolytischen doppelschicht. _Zeitschrift für Elektrochemie und angewandte physikalische Chemie_ , 30(21-22):508–516, 1924.
* Popović and Šiber [2013] Marko Popović and Antonio Šiber. Lattice-gas poisson-boltzmann approach for sterically asymmetric electrolytes. _Physical Review E_ , 88(2):022302, 2013.
* Borukhov et al. [1997] Itamar Borukhov, David Andelman, and Henri Orland. Steric effects in electrolytes: A modified poisson-boltzmann equation. _Physical review letters_ , 79(3):435, 1997.
* Fedorov and Kornyshev [2008] M. V. Fedorov and A. A. Kornyshev. Ionic liquid near a charged wall: Structure and capacitance of electrical double layer. _The Journal of Physical Chemistry B_ , 112(38):11868–11872, 2008. doi: 10.1021/jp803440q.
* Howard et al. [2010] J. J. Howard, J. S. Perkyns, and B. M. Pettitt. The behavior of ions near a charged wall-dependence on ion size, concentration, and surface charge. _The journal of physical chemistry B_ , 114(18):6074–6083, 2010.
* Misra and Blankschtein [2021a] Rahul Prasanna Misra and Daniel Blankschtein. Ion adsorption at solid/water interfaces: Establishing the coupled nature of ion–solid and water–solid interactions. _The Journal of Physical Chemistry C_ , 125(4):2666–2679, 2021a.
* Finney et al. [2021] Aaron R Finney, Ian J McPherson, Patrick R Unwin, and Matteo Salvalaglio. Electrochemistry, ion adsorption and dynamics in the double layer: a study of NaCl(aq) on graphite. _Chemical science_ , 12(33):11166–11180, 2021.
* Perego et al. [2015] C. Perego, M. Salvalaglio, and M. Parrinello. Molecular dynamics simulations of solutions at constant chemical potential. _The Journal of Chemical Physics_ , 142(14):144113, 2015. doi: 10.1063/1.4917200.
* Doblhoff-Dier and Koper [2021] K. Doblhoff-Dier and M. T. M. Koper. Modeling the Gouy–Chapman diffuse capacitance with attractive ion–surface interaction. _The Journal of Physical Chemistry C_ , 125(30):16664–16673, 2021.
* France-Lanord and Grossman [2019] Arthur France-Lanord and Jeffrey C Grossman. Correlations from ion pairing and the nernst-einstein equation. _Physical review letters_ , 122(13):136001, 2019\.
* Misra and Blankschtein [2021b] Rahul Prasanna Misra and Daniel Blankschtein. Uncovering a Universal Molecular Mechanism of Salt Ion Adsorption at Solid/Water Interfaces. _Langmuir_ , 37(2):722–733, January 2021b.
* Williams et al. [2017] C. D. Williams, J. Dix, A. Troisi, and P. Carbone. Effective Polarization in Pairwise Potentials at the Graphene–Electrolyte Interface. _J. Phys. Chem. Lett._ , 8(3):703, February 2017\.
* Siepmann and Sprik [1995] J Ilja Siepmann and Michiel Sprik. Influence of surface topology and electrostatic potential on water/electrode systems. _The Journal of chemical physics_ , 102(1):511, 1995.
* Di Pasquale and Davidchack [2020] N. Di Pasquale and R. L. Davidchack. Shuttleworth Equation: A Molecular Simulations Perspective. _Journal of Chemical Physics_ , 153:154705, 2020.
* Abraham et al. [2015] M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, and E. Lindahl. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. _SoftwareX_ , 1:19–25, 2015.
* Tribello et al. [2014] G. A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni, and G. Bussi. Plumed 2: New feathers for an old bird. _Computer physics communications_ , 185(2):604–613, 2014.
* Hourahine et al. [2020] B. Hourahine, B. Aradi, V. Blum, F. Bonafé, A. Buccheri, C. Camacho, C. Cevallos, M. Y. Deshaye, T. Dumitrică, A. Dominguez, S. Ehlert, M. Elstner, T. van der Heide, J. Hermann, S. Irle, J. J. Kranz, C. Köhler, T. Kowalczyk, T. Kubař, I. S. Lee, V. Lutsker, R. J. Maurer, S. K. Min, I. Mitchell, C. Negre, T. A. Niehaus, A. M. N. Niklasson, A. J. Page, A. Pecchia, G. Penazzi, M. P. Persson, J. Řezáč, C. G. Sánchez, M. Sternberg, M. Stöhr, F. Stuckenberg, A. Tkatchenko, V. W.-z. Yu, and T. Frauenheim. DFTB$+$, a software package for efficient approximate density functional theory based atomistic simulations. _The Journal of Chemical Physics_ , 152(12):124101, 2020. doi: 10.1063/1.5143190.
* Elstner et al. [1998] Marcus Elstner, Dirk Porezag, G Jungnickel, J Elsner, M Haugk, Th Frauenheim, Sandor Suhai, and Gotthard Seifert. Self-consistent-charge density-functional tight-binding method for simulations of complex materials properties. _Physical Review B_ , 58(11):7260, 1998.
* Elliott et al. [2022b] J. D. Elliott, M. Chiricotto, A. Troisi, and P. Carbone. Do specific ion effects influence the physical chemistry of aqueous graphene-based supercapacitors? perspectives from multiscale qmmd simulations. _arXiv_ , 2022b. doi: 10.48550/ARXIV.2203.02469.
* Mulliken [1955] Robert S Mulliken. Electronic population analysis on lcao–mo molecular wave functions. i. _The Journal of Chemical Physics_ , 23(10):1833–1840, 1955.
* Haynes et al. [2016] William M Haynes, David R Lide, and Thomas J Bruno. _CRC handbook of chemistry and physics_. CRC press, 2016.
* Zeron et al. [2019] I. M. Zeron, J. L. F. Abascal, and C. Vega. A force field of Li+, Na+, K$+$, Mg2+, Ca2+, Cl-, and SO${}_{4}^{2-}$ in aqueous solution based on the TIP4P/2005 water model and scaled charges for the ions. _The Journal of chemical physics_ , 151(13):134504, 2019.
* Ho and Striolo [2013] Tuan A Ho and Alberto Striolo. Capacitance enhancement via electrode patterning. _The Journal of Chemical Physics_ , 139(20):204708, 2013.
* Xu et al. [2020] Kui Xu, Hui Shao, Zifeng Lin, Céline Merlet, Guang Feng, Jixin Zhu, and Patrice Simon. Computational insights into charge storage mechanisms of supercapacitors. _Energy & environmental materials_, 3(3):235–246, 2020.
* Hagberg et al. [2008] Aric Hagberg, Pieter Swart, and Daniel S Chult. Exploring network structure, dynamics, and function using networkx. Technical report, Los Alamos National Lab.(LANL), Los Alamos, NM (United States), 2008.
* Berendsen et al. [1995] H. J. C. Berendsen, D. van der Spoel, and R. van Drunen. Gromacs: a message-passing parallel molecular dynamics implementation. _Computer physics communications_ , 91(1-3):43–56, 1995.
* van der Spoel et al. [2005] D. van der Spoel, E. Lindhal, B. Hess, G. Groenhof, A. E. Mark, and H. J. C. Berendsen. Gromacs: Fast, flexible and free. _J. Comput. Chem._ , 26(16):1701–1718, 2005.
* Berendsen et al. [1987] H. J. C. Berendsen, J. R. Grigera, and T. P. Straatsma. The missing term in effective pair potentials. _Journal of Physical Chemistry_ , 91(24):6269–6271, 1987.
* Miyamoto and Kollman [1992] S. Miyamoto and P. A. Kollman. SETTLE: An analytical version of the SHAKE and RATTLE algorithm for rigid water molecules. _J. Comput. Chem_ , 13:952–962, 1992.
* Huang et al. [2012] Yi Huang, Jiajie Liang, and Yongsheng Chen. An overview of the applications of graphene-based materials in supercapacitors. _small_ , 8(12):1805–1834, 2012.
* Joung and Cheatham [2008] I. S. Joung and T. E. III Cheatham. Determination of alkali and halide monovalent ion parameters for use in explicitly solvated biomolecular simulations. _The journal of physical chemistry B_ , 112(30):9020–9041, 2008.
* Dočkal et al. [2022] J. Dočkal, M. Lísal, and F. Moučka. Molecular dynamics of the interfacial solution structure of alkali-halide electrolytes at graphene electrodes. _Journal of Molecular Liquids_ , 353:118776, 2022.
* Savitzky and Golay [1964] Abraham Savitzky and Marcel JE Golay. Smoothing and differentiation of data by simplified least squares procedures. _Analytical chemistry_ , 36(8):1627–1639, 1964\.
* Olivieri et al. [2021] Jean-François Olivieri, James T Hynes, and Damien Laage. Confined water’s dielectric constant reduction is due to the surrounding low dielectric media and not to interfacial molecular ordering. _The Journal of Physical Chemistry Letters_ , 12(17):4319–4326, 2021.
* Kornyshev [2007] Alexei A Kornyshev. Double-layer in ionic liquids: paradigm change? _The Journal of Physical Chemistry B_ , 111(20):5545–5557, 2007.
* Snook and van Megen [1981] Ian Snook and William van Megen. Finite ion size effects in the electrical double layer—a monte carlo study. _The Journal of Chemical Physics_ , 75(8):4104–4106, 1981.
* Hwang et al. [2021] Hyerim Hwang, Yong Chan Cho, Sooheyong Lee, Yun-Hee Lee, Seongheun Kim, Yongjae Kim, Wonhyuk Jo, Patrick Duchstein, Dirk Zahn, and Geun Woo Lee. Hydration breaking and chemical ordering in a levitated NaCl solution droplet beyond the metastable zone width limit: evidence for the early stage of two-step nucleation. _Chemical Science_ , 12(1):179–187, 2021. doi: 10.1039/d0sc04817h. URL https://doi.org/10.1039%2Fd0sc04817h.
* Finney and Salvalaglio [2022] Aaron R Finney and Matteo Salvalaglio. Multiple pathways in nacl homogeneous crystal nucleation. _Faraday Discussions_ , 2022.
* Finney and Salvalaglio [2021] A. R. Finney and M. Salvalaglio. Bridging the gap between mesoscopic and molecular models of solid/liquid interfaces out-of-equilibrium. _arXiv preprint arXiv:2109.00568_ , 2021.
* Yeh and Hummer [2004] I. Yeh and G. Hummer. System-Size Dependence of Diffusion Coefficients and Viscosities from Molecular Dynamics Simulations with Periodic Boundary Conditions. _The Journal of Physical Chemistry B_ , 108(40):15873–15879, October 2004.
* Widodo et al. [2018] C. S. Widodo, H. Sela, and D. R. Santosa. The effect of nacl concentration on the ionic nacl solutions electrical impedance value using electrochemical impedance spectroscopy methods. In _AIP Conference Proceedings_ , volume 2021, page 050003. AIP Publishing LLC, 2018.
|
# Variational Loop Vertex Expansion
Vasily Sazonov
Université Paris-Saclay, CEA, List,
F-91120, Palaiseau, France
###### Abstract
Loop Vertex Expansion (LVE) was developed to construct QFT models with local
and non-local interactions. Using LVE, one can prove the analyticity of the
free energies and cumulants of various vector, matrix, or tensor-type models
in a finite cardioid-like domain in the complex plane of the coupling
constant. Here, applying the idea of choosing the initial approximation
depending on the coupling constant, we construct the analytic continuation of
the free energy of the quartic matrix model beyond the standard LVE cardioid,
over the branch cut and for arbitrarily large couplings.
## 1 Introduction
The present work stems from the constructive field theory method of the Loop
Vertex Expansion (LVE) and ideas of the variational perturbation theory. The
concept of LVE was first introduced in [1] as a constructive approach for
quartic matrix models aimed to provide bounds that are uniform in the size of
the matrix. In its original form, LVE combines an intermediate field
representation with replica fields and a forest formula [2, 3] to express the
free energy of the theory through a convergent sum over trees. Unlike
conventional constructive methods, this loop vertex expansion does not rely on
cluster expansions and does not entail conditions related to small/large field
considerations.
Like Feynman’s perturbative expansion, the LVE provides a straightforward way
to calculate connected quantities. In this method, the theory’s partition
function is represented as a sum over forests, and its logarithm is
essentially the same sum but constrained to connected forests – trees. This
property arises from the fact that the amplitudes factorize over the connected
components of the forest. The functional integrands associated with each
forest or tree exhibit absolute and _uniform_ convergence for all field
values. Together with the non-proliferation of trees (in comparison to Feynman
diagrams) this leads to the convergence of the LVE in the ’pacman’-like or
cardioid-like domains, see Fig. 1.
Figure 1: The left-hand side represents the typical LVE 'pacman'-type domain
of analyticity, with the radius $r_{\alpha}$ shrinking as the angle $\alpha$
decreases. On the right, we present the cardioid domain, which can be
understood as a union of 'pacman'-type domains.
The convergence of the LVE implies the Borel summability of the standard
perturbation series [4], and the LVE directly computes the Borel sum.
Essentially, the loop vertex expansion performs an _explicit repacking_ of
infinitely many subsets of Feynman amplitude components, leading to a
convergent expansion rather than a divergent one [5].
In the context of combinatorial field theories involving matrices and tensors
[6, 7], appropriately rescaled to possess a non-trivial $N\to\infty$ limit [8,
9, 10, 11], the Borel summability achieved by the LVE is _uniform_ with
respect to the model’s size $N$, [1, 12, 13, 14]. The LVE method is extendable
to ordinary field theories with cutoffs, as discussed in [15]. An adapted
multiscale version, known as MLVE [16], incorporates renormalization
techniques [17, 18, 19, 20, 21], although it’s worth noting that models
developed so far are limited to the superrenormalizable type. The MLVE is
particularly effective in resumming renormalized series for non-local field
theories of matrix or tensorial types. Originally developed for the quartic
interactions, the LVE method was then generalized to higher-order interactions
under the name of the Loop Vertex Representation [22, 23, 24].
The main principle of Variational Perturbation Theory (VPT) is the use of an
initial approximation chosen depending on the coupling constant and/or on the
order of the perturbative expansion, or optimized with respect to these
parameters. Such ideas were applied under different names: Variational
Perturbation Theory [25, 26], Optimized Perturbation Theory [27], Convergent
Perturbation Theory [28], Delta Expansion [29], etc., with varying levels of
rigor and to a wide range of physical problems. The applications of VPT
include: energy levels of the quantum anharmonic oscillator [30, 31, 29];
critical indices in scalar quantum field theories [32]; optimization of QCD
perturbative computations [27, 33]; computations in lattice models with real
and complex actions [34, 35, 36]; coefficients of the $\frac{1}{D}$ expansion
in the vector model [37]. VPT was also used for constructing strong-coupling
expansions [38]. It was shown that VPT (Delta Expansion) should be applicable
in cases where the standard perturbation theory is not Borel summable [29].
It is worth noting that the proof of convergence of the VPT (Delta Expansion)
for the double-well anharmonic oscillator is based on the analyticity of its
energy levels [39] in a region of the coupling-constant Riemann surface
larger than the one required for Borel summability in the single-well case.
In the current work, we unite the LVE method with the VPT idea of an initial
approximation depending on the coupling constant. Taking the quartic matrix
model as an example, we construct the LVE with a modified initial
approximation depending on the coupling constant and prove the convergence of
the corresponding series for the free energy of the model for arbitrarily
large coupling constants with arguments $-\frac{3\pi}{2}<\phi<\frac{3\pi}{2}$.
The latter result is not optimal and might be improved up to
$-2\pi+\epsilon<\phi<2\pi-\epsilon$, where $\epsilon\geq\epsilon_{0}$,
$\epsilon_{0}>0$. This improvement, together with the extension of the current
results to cumulants and a constructive version of the $1/N$ expansion, is a
natural outcome of the method, left for further exploration. We also aim to
investigate connections with resurgence theory in the spirit of the recent
analysis of the vector model [40].
## 2 Statement of the result
To illustrate the main advantages of the Loop Vertex Expansion modified by the
initial approximation depending on the coupling, we study a quartic matrix
model. The partition function of this model is defined by
$\displaystyle{\cal Z}[\lambda,N]=\frac{1}{Z_{0}}\int
dM\exp\Big{\\{}-\operatorname{Tr}(MM^{\dagger})-\frac{\lambda}{2N}\operatorname{Tr}([MM^{\dagger}]^{2})\Big{\\}}\,,$
(1)
where $M$ are complex $N\times N$ matrices, and
$\displaystyle Z_{0}=\int dM\exp\Big\{-\operatorname{Tr}(MM^{\dagger})\Big\}\,.$ (2)
The measure $dM$ is given by
$dM=\pi^{N}\prod_{1\leq i,j\leq N}d\text{Re}(M_{ij})d\text{Im}(M_{ij})\,.$ (3)
The core object of our study here is the free energy of the model, defined as
$\displaystyle F[\lambda,N]=-\frac{1}{N^{2}}\log{\cal Z}[\lambda,N]\,.$ (4)
As mentioned in the introduction, the standard LVE tools allow one to prove
the analyticity of the free energy (4) or of the cumulants of the model in
'pacman'-like or cardioid-like domains, as in Fig. 1. The sharpest result
regarding the model (1) was obtained by LVE in [12]; among other results, it
was shown there that the free energy (4) is analytic in the cardioid domain of
the coupling constant $\lambda$,
${\cal
C}=\Big{\\{}\lambda\in\mathbb{C}\,\Big{|}\arg\lambda=\phi\,,4|\lambda|<\cos^{2}\Big{(}{\frac{\phi}{2}}\Big{)}\Big{\\}}\,.$
(5)
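For orientation, the membership condition defining ${\cal C}$ is easy to test numerically. The following minimal sketch (ours, not from [12]) uses the principal branch of the argument and therefore only probes the $-\pi<\phi\leq\pi$ sheet of the Riemann surface:

```python
import cmath
import math

def in_cardioid(lam: complex) -> bool:
    """Test the defining condition of the cardioid C in Eq. (5):
    4*|lambda| < cos^2(phi/2), with phi = arg(lambda)."""
    phi = cmath.phase(lam)  # principal argument, phi in (-pi, pi]
    return 4 * abs(lam) < math.cos(phi / 2) ** 2

print(in_cardioid(0.1))           # True:  0.4 < 1
print(in_cardioid(0.5))           # False: 2.0 > 1
print(in_cardioid(0.05 + 0.05j))  # True:  ~0.28 < cos^2(pi/8) ~ 0.85
```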
The main result of the current paper, obtained by merging the ideas of
variational perturbation theory and the loop vertex expansion, is given by the
following theorem.
###### Theorem 1.
For any $\lambda\in{\cal X}$, where ${\cal X}={\cal C}\cup{\cal Y}$, and
${\cal Y}$ is defined as a subset of the Riemann surface of $\sqrt{\lambda}$
with $\lambda\neq 0$ and $|\phi|<\frac{3\pi}{2}$, the free energy
$F[\lambda,N]$ of the model (1) is analytic in $\lambda$ uniformly in $N$.
## 3 New initial approximation and intermediate field representation
Hereafter we assume that $\lambda\neq 0$. The case of $\lambda=0$ is trivial,
and the vicinity of $\lambda=0$ can be easily treated with the standard LVE
tools [12]. As a first step of our program, we change the Gaussian part of the
action, which we treat as the unperturbed measure; in other words, we shift
the initial approximation. Then, introducing the intermediate field
representation, we obtain
$\displaystyle{\cal Z}[\lambda,N]=\frac{1}{Z_{0}}\int dM\int
dA\,\exp\Big{\\{}-\frac{1}{2}\operatorname{Tr}(A^{2})-a\operatorname{Tr}(MM^{\dagger})$
$\displaystyle+\mathrm{i}\operatorname{Tr}\Big{(}A\big{[}\sqrt{\frac{\lambda}{N}}MM^{\dagger}+\frac{(1-a)\sqrt{N}}{\sqrt{\lambda}}\mathbb{1}\big{]}\Big{)}+\operatorname{Tr}\big{[}\frac{(1-a)^{2}N}{2\lambda}\mathbb{1}\big{]}\Big{\\}}\,,$
(6)
where $\mathrm{Re}\,a>0$, the field $A$ is realized by $N\times N$ Hermitian
matrices, and the integral over $A$ is assumed to be normalized.
The integral over the initial degrees of freedom, matrices $M$ and
$M^{\dagger}$, is Gaussian, so it can be evaluated. The corresponding
covariance is given by
$\big{(}a-\mathrm{i}\sqrt{\frac{\lambda}{N}}A\big{)}\otimes 1$, and using that
$\displaystyle\det\left[\Big{(}a-\mathrm{i}\sqrt{\frac{\lambda}{N}}A\Big{)}\otimes
1\right]=a^{N}\exp\bigg{\\{}N\operatorname{Tr}\log\Big{(}1-\mathrm{i}\sqrt{\frac{\lambda}{a^{2}N}}A\Big{)}\bigg{\\}}\,,$
(7)
we obtain
$\displaystyle{\cal Z}[\lambda,N]$ $\displaystyle=$
$\displaystyle\frac{e^{\frac{N^{2}(1-a)^{2}}{2\lambda}}a^{N}}{\widetilde{Z}_{0}}\int
dA\,\exp\bigg{\\{}-\frac{1}{2}\operatorname{Tr}(A^{2})$ (8) $\displaystyle-$
$\displaystyle
N\operatorname{Tr}\log\left(1-\mathrm{i}\sqrt{\frac{\lambda}{a^{2}N}}A\right)-\mathrm{i}\operatorname{Tr}\bigg{(}A\frac{(1-a)\sqrt{N}}{\sqrt{\lambda}}\bigg{)}\bigg{\\}}\,.$
Then, we define the non-polynomial interaction, as
$\displaystyle{\cal
S}[\lambda,N,a](A)=\operatorname{Tr}\log\left(1-\mathrm{i}\sqrt{\frac{\lambda}{a^{2}N}}A\right)+\frac{\mathrm{i}}{\sqrt{N}}\operatorname{Tr}\bigg{(}A\frac{(1-a)}{\sqrt{\lambda}}\bigg{)}\,,$
(9)
and the normalization, as
$\displaystyle
K[\lambda,N,a]=\frac{e^{\frac{N^{2}(1-a)^{2}}{2\lambda}}a^{N}}{\widetilde{Z}_{0}}\,.$
(10)
Thus, the partition function can be written in the following form
$\displaystyle{\cal Z}[\lambda,N]=K[\lambda,N,a]\int
dA\,\exp\bigg{\\{}-\frac{1}{2}\operatorname{Tr}(A^{2})-N{\cal
S}[\lambda,N,a](A)\bigg{\\}}\,.$ (11)
Let us study the analytic properties of the partition function. We first have
the following bound.
###### Lemma 1.
Let $\frac{\lambda}{a^{2}}=\rho\mathrm{e}^{\mathrm{i}\theta}$ with $\rho>0$,
then:
$\Big{\|}\Big{(}1-\mathrm{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A\Big{)}^{-1}\Big{\|}\leq\frac{1}{\cos\frac{\theta}{2}}\,,$
(12)
where $\Big{\|}\cdot\Big{\|}$ stands for the operator norm.
###### Proof.
The first step is to rewrite the resolvent, as an integral,
$\Big{(}1-\mathrm{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A\Big{)}^{-1}=\frac{a\sqrt{N}}{\sqrt{\lambda}}\int_{0}^{\infty}d\alpha\,\exp\Big{\\{}-\alpha\frac{a\sqrt{N}}{\sqrt{\lambda}}+\alpha\mathrm{i}A\Big{\\}}\,.$
(13)
Using the latter representation, we immediately arrive at the bound for the
operator norm,
$\displaystyle\Big\|\Big(1-\mathrm{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A\Big)^{-1}\Big\|\leq\frac{|a|\sqrt{N}}{|\sqrt{\lambda}|}\int_{0}^{\infty}d\alpha\,\exp\Big\{-\alpha\,\text{Re}\Big(\frac{a\sqrt{N}}{\sqrt{\lambda}}\Big)\Big\}\,\Big\|\exp\Big\{\mathrm{i}\alpha A\Big\}\Big\|=\frac{1}{\cos\frac{\theta}{2}}\,,$ (14)
where we used that $\|\exp\{\mathrm{i}\alpha A\}\|=1$ for Hermitian $A$ and
that $\text{Re}\big(\frac{a}{\sqrt{\lambda}}\big)=\big|\frac{a}{\sqrt{\lambda}}\big|\cos\frac{\theta}{2}$.
∎
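The bound of Lemma 1 is straightforward to probe numerically. Below is a small Monte Carlo sanity check (an illustration of ours, not part of the proof) with random Hermitian matrices; here `c` plays the role of $\frac{\sqrt{\lambda}}{a\sqrt{N}}$, whose argument equals $\theta/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

theta = 0.8 * np.pi                 # any theta in (-pi, pi)
c = 0.3 * np.exp(1j * theta / 2)    # c stands for sqrt(lambda)/(a*sqrt(N))
bound = 1 / np.cos(theta / 2)

worst = 0.0
for _ in range(2000):
    A = random_hermitian(8)
    R = np.linalg.inv(np.eye(8) - 1j * c * A)
    worst = max(worst, np.linalg.norm(R, ord=2))

print(worst, "<=", bound)           # sampled norms never exceed 1/cos(theta/2)
```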
Hereafter, we write the parameter $a$ describing the initial approximation in
the following form,
$\displaystyle
a=x|\sqrt{\lambda}|e^{\mathrm{i}\psi}\,,\qquad-\frac{\pi}{2}<\psi<\frac{\pi}{2}\,,\qquad
x>0\,.$ (15)
Writing (11) as
${\cal Z}[\lambda,N]=K[\lambda,N,a]\int
dA\,\frac{\exp\bigg{\\{}-\frac{1}{2}\operatorname{Tr}(A^{2})-\mathrm{i}\operatorname{Tr}\bigg{(}A\frac{(1-a)\sqrt{N}}{\sqrt{\lambda}}\bigg{)}\bigg{\\}}}{\Big{[}\det\Big{(}1-\mathrm{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A\Big{)}\Big{]}^{N}}\,,$
(16)
and using Lemma 1, one can show that this integral is convergent for
$\theta\in(-\pi,\pi)$, or equivalently for
$\lambda/a^{2}\in{\mathbb{C}}-{\mathbb{R}}^{-}$.
analytic for $\theta\in(-\pi,\pi)$, and $\theta=\phi-2\psi$, we have the
following result.
###### Proposition 1.
${\cal Z}[N,\lambda]$ is analytic in $\lambda$ on the Riemann surface of the
square root with $-2\pi<\phi<2\pi$.
Note that, since ${\cal Z}[N,\lambda]$ may be zero for certain values of
$\lambda$, its analyticity does not imply the analyticity of the free energy.
## 4 The Loop Vertex Expansion
Using the representation (11) for the partition function, we expand the
interaction part of the exponent into a Taylor series, which is convergent
if $\lambda/a^{2}\in{\mathbb{C}}-{\mathbb{R}}^{-}$,
$\displaystyle{\cal
Z}=K[\lambda,N,a]\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\int
d\mu(A)\,\bigg{[}N{\cal S}[\lambda,N,a](A)\bigg{]}^{n}\,,$ (17)
where $d\mu(A)=dA\exp\{-\frac{1}{2}\operatorname{Tr}(A^{2})\}$ is the
normalized Gaussian measure for the Hermitian matrices $A$ (to simplify the
notation, we drop the arguments $\lambda$ and $N$ of ${\cal Z}$ and write
${\cal S}(A)$ for ${\cal S}[\lambda,N,a](A)$).
Applying the replica trick, we replace (at each order $n$) the integral over a
single matrix $A$ by an integral over $n$ copies of $N\times N$ Hermitian
matrices, $A=(A_{i})_{1\leq i\leq n}$. The measure of the Gaussian integral
over the replicated matrices $A$ is normalized, with a degenerate covariance
$C_{ij}=1$. For any real positive symmetric matrix $C_{ij}$, the Gaussian
integral obeys
$\int
d\mu_{C}(A)\,A_{i|ab}A_{j|cd}=C_{ij}\,\delta_{ad}\delta_{bc}\,,~{}~{}~{}\int
d\mu_{C}(A)=1\,,$ (18)
where $A_{i|ab}$ is the matrix element of $A_{i}$ in row $a$ and column $b$.
The degenerate covariance in the Gaussian integral is equivalent to the
insertion of $n-1$ Dirac $\delta$-functions
$\delta(A_{1}-A_{2})\cdots\delta(A_{n-1}-A_{n})$. In Feynman diagrams, the
uniform covariance connects the various replicas together (with the
appropriate weights).
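The single-replica case of (18), $\int d\mu(A)\,A_{ab}A_{cd}=\delta_{ad}\delta_{bc}$, can itself be verified by sampling. The sketch below (illustrative only) draws Hermitian matrices from the normalized measure $dA\,e^{-\frac{1}{2}\operatorname{Tr}(A^{2})}$ and compares the empirical second moment with the prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 4, 50_000

def sample_A():
    """Hermitian matrix distributed as dA exp(-Tr(A^2)/2): unit-variance
    real diagonal; off-diagonal entries with variance 1/2 per real and
    imaginary part."""
    d = rng.normal(size=N)
    off = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    U = np.triu(off, 1)
    return U + U.conj().T + np.diag(d)

acc = np.zeros((N, N, N, N), dtype=complex)
for _ in range(samples):
    A = sample_A()
    acc += np.einsum('ab,cd->abcd', A, A)
acc /= samples

target = np.einsum('ad,bc->abcd', np.eye(N), np.eye(N))  # delta_ad * delta_bc
print(np.max(np.abs(acc - target)))  # O(1/sqrt(samples)) statistical error
```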
Then, the partition function is given by
$\displaystyle{\cal
Z}=K[\lambda,N,a]\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\int
d\mu_{C}(A)\prod_{i=1}^{n}\bigg{[}N{\cal S}(A_{i})\bigg{]}\,.$ (19)
To take the logarithm of the partition function, we convert the latter
expansion into a sum over forests by applying the Brydges-Kennedy-Abdesselam-
Rivasseau forest formula [2, 3]. The first step is to replace the covariance
$C_{ij}=1$ by $C_{ij}(x)=x_{ij}$, $x_{ij}=x_{ji}$ (which at the end should be
evaluated at $x_{ij}=1$) for $i\neq j$, and $C_{ii}(x)=1$. Then,
$\displaystyle{\cal Z}$ $\displaystyle=$ $\displaystyle
K[\lambda,N,a]\sum_{F\,\text{labeled
forest}}\frac{(-1)^{n}}{n!}\int_{0}^{1}\prod_{(i,j)\in
F}dt_{ij}\,\,\left(\prod_{(i,j)\in F}\frac{\partial}{\partial x_{ij}}\right)$
$\displaystyle\times$ $\displaystyle\bigg{\\{}\int
d\mu_{C(x)}(A)\prod_{i=1}^{n}\bigg{[}N{\cal
S}(A_{i})\bigg{]}\bigg{\\}}\bigg{|}_{x_{ij}=v^{F}_{ij}}\,,$
where $n$ is the number of vertices of the forest $F$, $i$ and $j$ label the
forest's vertices, there is a weakening parameter $t_{ij}$ for each edge
$(i,j)$ of the forest, and
$v^{F}_{ij}=\left\\{\begin{array}[]{ccl}\inf_{(k,l)\in{P}_{i\leftrightarrow
j}^{{F}}}t_{kl}&\text{if}&{P}_{i\leftrightarrow j}^{{F}}\,\text{exists}\\\
0&\text{if}&{P}_{i\leftrightarrow j}^{{F}}\,\text{does not
exist}\end{array}\right.\,,$ (21)
${P}_{i\leftrightarrow j}^{{F}}$ is the unique path in the forest $F$ joining
$i$ and $j$ (the infimum is taken to be $1$ if $i=j$).
Applying the following lemma from [12], we can take the logarithm.
###### Lemma 2.
Let ${\cal W}(T)$ be the weight of a tree $T$, not depending on the labels of
the tree vertices, and let the weight of a forest ${\cal W}(F)$ be defined as
the product of the weights of its trees. Then, in the sense of formal series,
we have
$\log\sum_{F\text{ labeled forests}}\frac{{\cal W}(F)}{|V(F)|!}=\sum_{T\text{
labeled trees}}\frac{{\cal W}(T)}{|V(T)|!}\,,$ (22)
where $|V(F)|$ and $|V(T)|$ are the number of vertices in $F$ and $T$
correspondingly.
Since the differentiation with respect to $x_{ij}$ and the Gaussian
integration factorize over the trees in the forest $F$, we obtain
$\displaystyle\log{\cal Z}$ $\displaystyle=$ $\displaystyle\log
K[\lambda,N,a]+\sum_{T\,\text{labeled
trees}}\frac{(-1)^{n}}{n!}\int_{0}^{1}\prod_{(i,j)\in
T}dt_{ij}\,\left(\prod_{(i,j)\in T}\frac{\partial}{\partial x_{ij}}\right)$
(23) $\displaystyle\times$ $\displaystyle\bigg{\\{}\int
d\mu_{C(x)}(A)\prod_{i=1}^{n}\bigg{[}N{\cal
S}(A_{i})\bigg{]}\bigg{\\}}\bigg{|}_{v^{T}_{ij}}\,,$ (24) $\displaystyle
v^{T}_{ij}$ $\displaystyle=$ $\displaystyle\inf_{(k,l)\in{P}_{i\leftrightarrow
j}^{T}}t_{kl}\,.$ (25)
where ${P}_{i\leftrightarrow j}^{T}$ stands for the unique path joining
vertices $i$ and $j$ in the tree $T$. Expressing the Gaussian integral as a
differential operator,
$\displaystyle\int
d\mu_{C(x)}(A)F(A)=\left[e^{\frac{1}{2}\sum_{i,j}x_{ij}\operatorname{Tr}\left[\frac{\partial}{\partial
A_{i}}\frac{\partial}{\partial A_{j}}\right]}F(A)\right]_{A_{i}=0}\,,$ (26)
we see that
$\frac{\partial}{\partial x_{ij}}\bigg{(}\int
d\mu_{C(x)}(A)F(A)\bigg{)}=\frac{1}{2}\int
d\mu_{C(x)}(A)\,\operatorname{Tr}\left[\frac{\partial}{\partial
A_{i}}\frac{\partial}{\partial A_{j}}\right]F(A)\,.$ (27)
The latter differential operator acts on the vertices $i$ and $j$ and connects
them by an edge.
The first derivative of the loop vertex (non-polynomial action ${\cal
S}(A_{i})$) is given by
$\displaystyle\frac{\partial}{\partial
A_{i|cd}}\bigg{[}\operatorname{Tr}\log\left(1-\mathrm{i}\sqrt{\frac{\lambda}{a^{2}N}}A\right)+\frac{\mathrm{i}}{\sqrt{N}}\operatorname{Tr}\bigg{(}A\frac{(1-a)}{\sqrt{\lambda}}\bigg{)}\bigg{]}=$
$\displaystyle\frac{\sqrt{\lambda}}{a\sqrt{N}}\bigg{(}1-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A_{i}\Big{)}_{cd}^{-1}+\mathrm{i}\frac{(1-a)}{\sqrt{\lambda}\sqrt{N}}\mathbb{1}_{cd}\,.$
(28)
Only the first term of (28) is relevant when applying further derivatives.
Therefore, all other derivatives can be computed using the following recursive
relation,
$\frac{\partial}{\partial
A_{i|ab}}\frac{\sqrt{\lambda}}{a\sqrt{N}}\Big{(}1-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A_{i}\Big{)}_{cd}^{-1}=\mathrm{i}\frac{\lambda}{a^{2}N}\Big{(}1-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A_{i}\Big{)}_{ca}^{-1}\Big{(}1-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A_{i}\Big{)}_{bd}^{-1}\,.$
(29)
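The recursion (29) is the resolvent identity $dR=R\,(\mathrm{i}c\,dA)\,R$, with $c=\frac{\sqrt{\lambda}}{a\sqrt{N}}$, written in components and with the matrix entries treated as independent variables. A finite-difference sanity check of ours (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 5, 0.2 + 0.1j               # c stands for sqrt(lambda)/(a*sqrt(N))
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (A + A.conj().T) / 2           # Hermitian base point

def R(M):
    """Resolvent (1 - i*c*M)^(-1)."""
    return np.linalg.inv(np.eye(n) - 1j * c * M)

a, b, p, q = 1, 3, 0, 4            # differentiate w.r.t. A_ab, look at R_pq
eps = 1e-7
E = np.zeros((n, n)); E[a, b] = 1.0  # A_ab treated as an independent entry

fd = (R(A + eps * E)[p, q] - R(A)[p, q]) / eps  # finite difference
exact = 1j * c * R(A)[p, a] * R(A)[b, q]        # the pattern of Eq. (29)
print(abs(fd - exact))             # ~1e-7, i.e. at the discretization error
```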
Let $V(T)$ be the set of all vertices of the tree, and $E(T)$ the set of its
edges. We observe that in each tree $T$ with $|V(T)|>2$ there are two types of
vertices: internal vertices and leaves. If a vertex is differentiated only
once, it is a leaf, and it brings a contribution of the form (28). We say that
leaf vertices have only one corner. Multiple derivatives acting on the same
vertex (corresponding to multiple edges hooked to it) can act on either of the
corners of the vertex, splitting it into two corners and eliminating the
constant term of (28), if it was present; see Fig. 2.
Figure 2: From left to right: a vertex without corners, corresponding to a
trivial tree; a vertex with only one corner, a leaf; a vertex with two corners.
Each derivative brings an additional factor of $\frac{1}{\sqrt{N}}$, and
consequently, each edge of any tree comes with a factor $\frac{1}{N}$. In the
following, we attribute the factors $\frac{1}{N}$ to the edges and define the
corner operator as
${\cal C}=\begin{cases}\frac{\sqrt{\lambda}}{a}\Big(1-\text{i}\sqrt{\frac{\lambda}{a^{2}N}}A_{i}\Big)_{cd}^{-1}+\mathrm{i}\frac{(1-a)}{\sqrt{\lambda}}\mathbb{1}_{cd}\,,&\text{if there is only one corner in the vertex,}\\ \frac{\sqrt{\lambda}}{a}\Big(1-\text{i}\sqrt{\frac{\lambda}{a^{2}N}}A_{i}\Big)_{cd}^{-1}\,,&\text{if there are more corners.}\end{cases}$ (30)
Therefore, the logarithm of ${\cal Z}$ can be expressed as
$\displaystyle\log{\cal Z}=$ $\displaystyle\log
K[\lambda,N,a]+\sum_{T\,\atop\text{LVE tree}}{\cal A}_{T}[\lambda,N]\,,$ (31)
$\displaystyle{\cal A}_{T}[\lambda,N]=$
$\displaystyle\frac{N^{|V(T)|-|E(T)|}}{|V(T)|!}\int_{0}^{1}\prod_{e\in
E(T)}dt_{e}\,$ (32) $\displaystyle\times\int
d\mu_{C_{T}}(A)\,\operatorname{Tr}\Big{[}\mathop{\overrightarrow{\prod}}\limits_{c\in\partial
T\,\text{corner}}{\cal C}_{c}(i_{c})\Big{]}\,,$ (33)
where $i_{c}$ labels the vertex to which the corner $c$ is attached, the
covariance $C_{T}$ is
$(C_{T})_{ij}=\inf_{(k,l)\in P^{T}_{i\leftrightarrow j}}t_{kl}$ (34)
and the infimum is taken to be $1$ if $i=j$.
## 5 Bounds
The expansion (33) contains three types of contributions, which should be
considered separately: a trivial tree with only one vertex, a tree with two
vertices, and all other trees. In the following we treat these cases step by
step.
### 5.1 Trivial tree
As always in the LVE formalism, there is a trivial tree with only one vertex,
see Fig. 2. It requires special treatment. The contribution of the trivial
tree is given by
$\displaystyle{\cal A}_{T_{1}}=\int
d\mu_{C(x)}(A)\bigg{[}N\operatorname{Tr}\log\left(1-\mathrm{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A\right)+\mathrm{i}\operatorname{Tr}\bigg{(}A\frac{(1-a)\sqrt{N}}{\sqrt{\lambda}}\bigg{)}\bigg{]}\,.$
(35)
The second term gives zero after the integration, and the first can be bounded
by integrating by parts as
$\displaystyle{\cal A}_{T_{1}}$ $\displaystyle=$ $\displaystyle N\int
dA\,e^{-\frac{1}{2}\operatorname{Tr}\big{[}A^{2}\big{]}}\operatorname{Tr}_{i=j}\log(\mathbb{1}-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A_{ij})$
(36) $\displaystyle=$ $\displaystyle N\int
dA\,e^{-\frac{1}{2}\operatorname{Tr}\big{[}A^{2}\big{]}}\operatorname{Tr}_{i=j}\int_{0}^{1}dt\,\frac{-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}tA_{ik}}{(\mathbb{1}-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A)_{kj}}$
$\displaystyle=$ $\displaystyle\int
d\mu(A)\,\operatorname{Tr}_{i=j}\int_{0}^{1}dt\,\frac{-\frac{\lambda}{a^{2}}t}{(\mathbb{1}-\text{i}\frac{\sqrt{\lambda}}{a\sqrt{N}}A)^{2}_{ij}}$
Since the integrand is analytic for $\theta\in(-\pi,\pi)$, and
$\theta=\phi-2\psi$, employing (15), we arrive at the following conclusion.
###### Lemma 3.
The amplitude of the trivial tree, ${\cal A}_{T_{1}}$, is analytic in
$\lambda$ and bounded on the Riemann surface of the square root with
$-2\pi<\phi<2\pi$.
### 5.2 Two-vertex tree
To derive the bound for the tree with two vertices, see Fig. 3, we start with
a general bound for the corner operators that is also useful for all other
trees. Using Lemma 1, we obtain
$\|{\cal
C}\|\leq\begin{cases}\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}+\Bigg{|}\frac{(1-a)}{\sqrt{\lambda}}\Bigg{|}\,,\text{
if there is only one corner in the vertex}\\\ ~{}\\\
\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}\,,\text{
if there are more corners}\end{cases}\,.$ (37)
Figure 3: Two-vertex tree.
Recalling the representation (15), for sufficiently large $x$ we can simplify
the one-corner bound in (37) by requiring that
$\displaystyle\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}+\Bigg{|}\frac{(1-a)}{\sqrt{\lambda}}\Bigg{|}\leq\Bigg{|}\frac{3(1-a)}{2\sqrt{\lambda}}\Bigg{|}\,.$
(38)
It is not hard to see that the latter inequality is valid when
$\displaystyle x\geq x_{1}\,,\qquad
x_{1}=\frac{\cos\frac{\theta}{2}+\sqrt{\cos^{2}\frac{\theta}{2}+8|\lambda|\cos\frac{\theta}{2}}}{2|\sqrt{\lambda}|\cos\frac{\theta}{2}}\,.$
(39)
Note that, since $-\pi<\theta<\pi$, we have $\cos\frac{\theta}{2}>0$.
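The threshold (39) can also be scanned numerically; in the representation (15), the worst case of (38) is $\psi=0$, where $|1-a|$ is minimal. A small sketch of ours, with arbitrary illustrative values of $|\lambda|$ and $\theta$:

```python
import numpy as np

lam_abs, theta = 2.0, 0.6 * np.pi          # arbitrary |lambda|, theta in (-pi, pi)
s, ct = np.sqrt(lam_abs), np.cos(theta / 2)
x1 = (ct + np.sqrt(ct**2 + 8 * lam_abs * ct)) / (2 * s * ct)   # Eq. (39)

for x in np.linspace(x1, 5 * x1, 50):
    for psi in np.linspace(-0.49 * np.pi, 0.49 * np.pi, 50):
        a = x * s * np.exp(1j * psi)       # representation (15)
        lhs = 1 / (x * ct) + abs(1 - a) / s
        rhs = 1.5 * abs(1 - a) / s
        assert lhs <= rhs + 1e-9           # Eq. (38) holds for all x >= x1
print("Eq. (38) verified for x >= x1 =", x1)
```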
In the two-vertex tree, both vertices have only one corner. Therefore, using
(38) and taking into account that there are $2$ vertices, $1$ edge, and one
factor of $N$ coming from the trace, we can bound the amplitude of this tree as
$\displaystyle|{\cal A}_{T_{2}}|\leq\frac{9N^{2}}{8}\int
d\mu_{C(x)}(A)\Bigg{|}\frac{(1-a)}{\sqrt{\lambda}}\Bigg{|}^{2}\,.$ (40)
Due to the analyticity of the integrand for $\theta\in(-\pi,\pi)$ and the
latter bound, we arrive at the following.
###### Lemma 4.
The amplitude of the two-vertex tree, ${\cal A}_{T_{2}}$, is analytic in
$\lambda$ and bounded on the Riemann surface of the square root with
$\lambda\neq 0$, $-2\pi<\phi<2\pi$.
### 5.3 Other trees
Obviously, the bound for the leaves is larger than that for the internal
vertices. If we use it for all vertices, we can bound the amplitude of each
tree as
$\displaystyle\bigg{|}\operatorname{Tr}\Big{[}\mathop{\overrightarrow{\prod}}\limits_{c\in\partial
T\,\text{corner}}{\cal C}_{c}(i_{c})\Big{]}\bigg{|}$ $\displaystyle\leq
N\Bigg{(}\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}+\Bigg{|}\frac{(1-a)}{\sqrt{\lambda}}\Bigg{|}\Bigg{)}^{2|E(T)|}\,.$
(41)
However, the best results given by this bound are achieved when one takes
$a=1$, which corresponds to the classic LVE results. This happens because it
is impossible to make both terms,
$\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}$ and
$\Bigg{|}\frac{(1-a)}{\sqrt{\lambda}}\Bigg{|}$, small simultaneously. To
overcome this difficulty, we observe that:
* •
We can always make the term
$\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}$ as
small as needed by varying the ratio $\frac{\sqrt{\lambda}}{a}$ (considering
large $x$).
* •
The factor
$\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}+\Bigg{|}\frac{(1-a)}{\sqrt{\lambda}}\Bigg{|}$,
for which hereafter we will use the bound (38), comes only from the leaf
vertices, and in a general tree there are not so many leaves; see Fig. 4.
Figure 4: 1) A tree with the maximal number of leaves at a given order of the
loop vertex expansion. 2) A tree with the minimal number of leaves at a given
order of the LVE. 3) Representation of a typical LVE tree with relatively few
leaves.
Inspired by the latter observations, we first establish a bound for the corner
operators of a general tree.
###### Lemma 5.
If a tree has $n_{l}$ leaves, $n_{i}$ internal vertices, and
$n_{l}+n_{i}=|V(T)|>2$, its amplitude given by a trace of the product of the
corner operators is bounded by
$\displaystyle\bigg{|}\operatorname{Tr}\Big{[}\mathop{\overrightarrow{\prod}}\limits_{c\in\partial
T_{(n_{i},n_{l})}\,\text{corner}}{\cal C}_{c}(i_{c})\Big{]}\bigg{|}$
$\displaystyle\leq
N\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}^{2n_{i}-2}\Bigg{|}\frac{3(1-a)}{2a\cos\frac{\theta}{2}}\Bigg{|}^{n_{l}}\,.$
(42)
###### Proof.
We prove (42) by induction. For the base, it is enough to check (42) for the
tree with $n_{i}=1$ and $n_{l}=2$. According to (38), two leaves bring the
factor
$\displaystyle\Bigg{|}\frac{3(1-a)}{2\sqrt{\lambda}}\Bigg{|}^{2}\,,$ (43)
and two corner operators of the internal vertex give
$\displaystyle\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}^{2}\,.$
(44)
Altogether, taking into account the factor $N$ coming from the trace, we
obtain exactly (42) for $n_{i}=1$ and $n_{l}=2$.
Then, for the induction step, we assume that (42) is valid for all trees with
$n_{v}$ vertices, $n_{l}+n_{i}=n_{v}$, and prove that it is then valid for all
trees with $(n_{v}+1)$ vertices. The trees with $(n_{v}+1)$ vertices can be
obtained from the trees with $n_{v}$ vertices by increasing either the number
of leaves or the number of internal vertices. In the first case, see Fig. 5,
the bound (42) acquires an extra factor
$\displaystyle\Bigg{|}\frac{3(1-a)}{2\sqrt{\lambda}}\Bigg{|}\,,$ (45)
from the leaf, and an additional factor
$\displaystyle\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}$
(46)
from the new corner of the internal vertex where the new leaf is attached.
Altogether, this gives (42) with $n_{i}$ internal vertices and $(n_{l}+1)$
leaves.
Figure 5: Possible ways to add a leaf to a tree.
The second case, see Fig. 6, when the number of vertices is augmented by
increasing the number of internal vertices, can be interpreted as attaching a
new leaf to an already existing leaf, since the only way to obtain an
additional internal vertex is to convert a leaf into one. The contribution of
the original leaf is preserved, shifted to the contribution of the new leaf.
In addition, there is a contribution from the two corner operators of the new
internal vertex, the same as (44). Altogether, this gives (42) with
$(n_{i}+1)$ internal vertices and $n_{l}$ leaves.
Figure 6: Possible ways to add an internal vertex to a tree.
∎
To prove the convergence of the series (33), we split the sum over the LVE
trees as
$\displaystyle\sum_{T\,\atop\text{LVE
tree}}=\sum_{2<|V(T)|<60}+\sum_{T_{\geq}}+\sum_{T_{<}}\,,$ (47)
where the first sum runs over all LVE trees with $2<|V(T)|<60$ vertices,
$T_{\geq}$ denotes the LVE trees with $\alpha|V(T)|$ leaves or more and
$|V(T)|\geq 60$ vertices, and $T_{<}$ stands for the LVE trees with fewer than
$\alpha|V(T)|$ leaves and $|V(T)|\geq 60$ vertices.
Let us now bound each sum of (47) separately. We start with the following
lemma.
###### Lemma 6.
For any $x$ satisfying
$\displaystyle x\geq x_{2}\,,\qquad
x_{2}=\frac{|\sqrt{\lambda}|+1}{|\sqrt{\lambda}|}\,,$ (48)
it is easy to see that
$\displaystyle\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}\leq\Bigg{|}\frac{(1-a)}{a\cos\frac{\theta}{2}}\Bigg{|}\leq\frac{\sqrt{2}}{\cos\frac{\theta}{2}}\,,$
(49)
and obviously
$\displaystyle\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}\leq\Bigg{|}\frac{3(1-a)}{2a\cos\frac{\theta}{2}}\Bigg{|}\leq\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\,.$
(50)
###### Proof.
Recall that $\sqrt{\lambda}=e^{\text{i}\phi/2}|\sqrt{\lambda}|$, $\lambda\neq
0$, and that according to the representation (15),
$a=x|\sqrt{\lambda}|e^{\mathrm{i}\psi}$, $x>0$,
$-\frac{\pi}{2}<\psi<\frac{\pi}{2}$. Then, the first inequality in (49)
transforms to
$\displaystyle\frac{1}{x}\leq\Bigg{|}\frac{1-|\sqrt{\lambda}|xe^{\mathrm{i}\psi}}{|\sqrt{\lambda}|x}\Bigg{|}\,,$
(51)
and it is enough to satisfy it for $\psi=0$, which is the worst case over
$\psi$. Simplifying the absolute value for $x\geq\frac{1}{|\sqrt{\lambda}|}$,
we arrive at (48). The second inequality in (49) rewrites as
$\displaystyle\Bigg{|}\frac{(1-|\sqrt{\lambda}|xe^{\mathrm{i}\psi})}{|\sqrt{\lambda}|x\cos\frac{\theta}{2}}\Bigg{|}\leq\frac{\sqrt{2}}{\cos\frac{\theta}{2}}\,.$
(52)
Using the upper bound for the left-hand side of (52), we find that it is
enough to satisfy
$\displaystyle\Bigg{|}\frac{\sqrt{1+|\lambda|x^{2}}}{|\sqrt{\lambda}|x}\Bigg{|}\leq\sqrt{2}\,,$
(53)
which holds for $x\geq\frac{1}{|\sqrt{\lambda}|}$. ∎
###### Lemma 7.
For $-\pi<\theta<\pi$, the sum of the absolute values of the tree amplitudes
from the finite set of trees in (47) is bounded by a constant,
$\displaystyle\sum_{2<|V(T)|<60}|{\cal A}_{T}|<\mathrm{const}\,.$ (54)
###### Proof.
Follows from Lemmas 5 and 6. ∎
Now we can estimate how many trees have more than $\alpha|V(T)|$ leaves (we
will be interested in $1>\alpha>1/2$).
###### Lemma 8.
The number of trees with $|V(T)|$ vertices and $\alpha|V(T)|$ or more leaves
with $1>\alpha>1/2$, $|T_{\geq}|$ is bounded by
$\displaystyle|T_{\geq}|$ $\displaystyle\leq$
$\displaystyle(|V(T)|-\left\lceil\alpha|V(T)|\right\rceil+1)\big{(}|V(T)|!\big{)}$
(55) $\displaystyle\times$ $\displaystyle
2^{|V(T)|-3}e^{\left\lceil\alpha|V(T)|\right\rceil}\bigg{(}\frac{1-\alpha}{\alpha}\bigg{)}^{\left\lceil\alpha|V(T)|\right\rceil}\,.$
###### Proof.
The number of labeled trees with $n$ vertices and $k$ leaves is given by
$\displaystyle{\cal N}(n,k)=\frac{n!}{k!}S(n-2,n-k)\,,$ (56)
where $S(n-2,n-k)$ is the Stirling number of the second kind. We can bound
${\cal N}(n,k)$ using an upper bound for the Stirling numbers of the second
kind,
$\displaystyle S(n-2,n-k)\leq\binom{n-3}{n-k-1}(n-k)^{k}\leq
2^{n-3}(n-k)^{k}\,,$ (57)
and a lower bound for the factorial,
$\displaystyle k!\geq e^{-k}(k+1)^{k}\,.$ (58)
Thus, we have
$\displaystyle{\cal N}(n,k)\leq n!\frac{e^{k}}{k^{k}}2^{n-3}(n-k)^{k}\,.$ (59)
Applying this to $n=|V(T)|$ and $k=\left\lceil\alpha|V(T)|\right\rceil$, the
smallest integer larger than or equal to $\alpha|V(T)|$, we obtain
$\displaystyle{\cal
N}(|V(T)|,\left\lceil\alpha|V(T)|\right\rceil)\leq\big{(}|V(T)|!\big{)}2^{|V(T)|-3}e^{\left\lceil\alpha|V(T)|\right\rceil}\bigg{(}\frac{1-\alpha}{\alpha}\bigg{)}^{\left\lceil\alpha|V(T)|\right\rceil}\,,$
(60)
where we have used an obvious relation
$\displaystyle\frac{|V(T)|-\left\lceil\alpha|V(T)|\right\rceil}{\left\lceil\alpha|V(T)|\right\rceil}\leq\frac{1-\alpha}{\alpha}\,.$
(61)
If we take $\frac{e}{1+e}<\alpha<1$, then $e\frac{1-\alpha}{\alpha}<1$, and
this quantity decreases as $\alpha$ increases, which in turn decreases the
bound (60). Therefore, for larger values of $\alpha$ (i.e., larger numbers of
leaves) we have fewer trees. Consequently, we can bound the number of trees
that have $\alpha|V(T)|$ leaves or more simply by multiplying (60) by
$(|V(T)|-\left\lceil\alpha|V(T)|\right\rceil+1)$. ∎
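Formula (56) can be cross-checked for small $n$ by brute force over Prüfer sequences, since the leaves of a labeled tree are exactly the labels absent from its Prüfer sequence. A short sketch of ours (not part of the proof):

```python
from itertools import product
from math import factorial

def stirling2(n, k):
    """Stirling numbers of the second kind, via the standard recurrence."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def trees_with_k_leaves(n, k):
    """Brute force over Pruefer sequences: a labeled tree on n vertices
    corresponds to a sequence in {0,...,n-1}^(n-2), and its leaves are
    exactly the labels absent from that sequence."""
    return sum(1 for seq in product(range(n), repeat=n - 2)
               if n - len(set(seq)) == k)

n = 6
for k in range(2, n):
    formula = factorial(n) // factorial(k) * stirling2(n - 2, n - k)
    print(k, trees_with_k_leaves(n, k), formula)   # the two counts agree
```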
Now we are ready to bound the sum of the amplitudes of all trees $T_{\geq}$
with $\alpha|V(T)|$ leaves or more.
###### Lemma 9.
For $-\frac{\pi}{2}<\theta<\frac{\pi}{2}$ and $\alpha=\frac{59}{60}$, the sum
of absolute values of amplitudes of the trees $T_{\geq}$ is smaller than the
sum of an absolutely convergent series,
$\displaystyle\sum_{T_{\geq}}\big{|}{\cal
A}_{T_{\geq}}\big{|}\leq\frac{N^{2}e}{472}\sum_{v=60}^{\infty}(\frac{1}{60}v+1)\,\Bigg{(}\frac{7}{8}\Bigg{)}^{v}\,.$
(62)
###### Proof.
First of all, we choose $x\geq x_{3}$, $x_{3}=\max\{x_{1},x_{2}\}$. Then,
according to Lemmas 5 and 6, the contributions of the corner operators of each
tree from $T_{\geq}$ can be bounded as
$\displaystyle\bigg{|}\operatorname{Tr}\Big{[}\mathop{\overrightarrow{\prod}}\limits_{c\in\partial
T_{\geq}\,\text{corner}}{\cal C}_{c}(i_{c})\Big{]}\bigg{|}\leq
N\Bigg{(}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\Bigg{)}^{2n_{i}+n_{l}-2}\leq
N\Bigg{(}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\Bigg{)}^{2|V(T)|}\,.$ (63)
Taking into account the number of trees in $T_{\geq}$ at each order of LVE by
lemma 8, we obtain
$\displaystyle\sum_{T_{\geq}}\big{|}{\cal A}_{T_{\geq}}\big{|}$
$\displaystyle\leq$
$\displaystyle\frac{N^{2}}{8}\sum_{v=60}^{\infty}(v-\left\lceil\alpha
v\right\rceil+1)\,2^{v}e^{\left\lceil\alpha
v\right\rceil}\bigg{|}\frac{1-\alpha}{\alpha}\bigg{|}^{\left\lceil\alpha
v\right\rceil}\Bigg{(}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\Bigg{)}^{2v}$
(64) $\displaystyle\leq$
$\displaystyle\frac{N^{2}}{8}\sum_{v=60}^{\infty}(v-\alpha v+1)\,e^{\alpha
v+1}\bigg{|}\frac{1-\alpha}{\alpha}\bigg{|}^{\alpha
v+1}\Bigg{(}\frac{3}{\cos\frac{\theta}{2}}\Bigg{)}^{2v}\,.$
Now let us take $\alpha$ in such a way that
$\displaystyle\Bigg{(}\frac{3}{\cos\frac{\theta}{2}}\Bigg{)}^{2}\,e^{\alpha}\Bigg{|}\frac{1-\alpha}{\alpha}\Bigg{|}^{\alpha}<1\,.$
(65)
It is easy to see that, for any value of $\cos\frac{\theta}{2}$, one can
choose $\alpha$ sufficiently close to $\alpha=1$ such that the inequality
(65) is satisfied. However, to handle our bounds we need to fix a certain
constant value of $\alpha$. For this, we need to additionally restrict the
values of $\theta$. For simplicity, hereafter we consider
$-\frac{\pi}{2}<\theta<\frac{\pi}{2}$. Then, it is enough to find $\alpha$
satisfying
$\displaystyle
18\,e^{\alpha}\Bigg{|}\frac{1-\alpha}{\alpha}\Bigg{|}^{\alpha}<1\,.$ (66)
If, for instance, we take $\alpha=\frac{59}{60}$, the left-hand side of (66)
is $\approx 0.873<\frac{7}{8}$ (note that $\frac{59}{60}>\frac{e}{1+e}$). This
completes the proof. ∎
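Both numerical claims of the proof, the value of the left-hand side of (66) at $\alpha=\frac{59}{60}$ and the convergence of the majorant series in (62), are quick to verify (our own check):

```python
import math

alpha = 59 / 60
lhs = 18 * math.exp(alpha) * ((1 - alpha) / alpha) ** alpha
print(lhs)          # ~0.873 < 7/8 = 0.875, as claimed after Eq. (66)

# The majorant series of Eq. (62) converges rapidly:
tail = sum((v / 60 + 1) * (7 / 8) ** v for v in range(60, 2000))
print(tail)         # finite; terms beyond v ~ 2000 are negligible
```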
###### Lemma 10.
For $-\frac{\pi}{2}<\theta<\frac{\pi}{2}$ and $\alpha=\frac{59}{60}$, the sum
of absolute values of amplitudes of the trees $T_{<}$ is smaller than the sum
of an absolutely convergent series,
$\displaystyle\sum_{T_{<}}\big{|}{\cal A}_{T_{<}}\big{|}$ $\displaystyle\leq$
$\displaystyle
N^{2}\frac{2(x_{4}\cos\frac{\theta}{2})^{5}\sqrt{\lambda}}{3(x_{4}\sqrt{\lambda}-1)}\sum_{v=60}^{\infty}\frac{1}{v^{2}}\Bigg{(}\frac{1}{2}\Bigg{)}^{v}\,,$
(67)
where
$\displaystyle
x_{4}=\max\\{x3,\frac{2^{30}e^{30}}{\cos\frac{\theta}{2}}\Big{(}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\Big{)}^{59/30}\\}+1\,.$
(68)
###### Proof.
Recall that, for $x\geq x_{3}$,
$\displaystyle\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}<\Bigg{|}\frac{(1-a)}{a\cos\frac{\theta}{2}}\Bigg{|}<\frac{\sqrt{2}}{\cos\frac{\theta}{2}}\,.$
(69)
Taking this into account, we conclude that the corner operators of trees with
fewer than $\alpha|V(T)|$ leaves, with $\alpha=\frac{59}{60}$, are bounded by
the corner-operator bounds for trees from $T_{<}$ with the maximal possible
number of leaves, $(\left\lceil\frac{59}{60}|V(T)|\right\rceil-1)$.
Note that for any
$x>\max\\{x_{3},\frac{2^{30}e^{30}}{\cos\frac{\theta}{2}}\Big{(}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\Big{)}^{59/30}\\}$,
we have
$\displaystyle
e\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}^{\frac{1}{30}}\bigg{(}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\bigg{)}^{\frac{59}{60}}<\frac{1}{2}\,.$
(70)
Therefore, fixing $x_{4}$ as in (68), we obtain
$\displaystyle\bigg{|}\operatorname{Tr}\Big{[}\mathop{\overrightarrow{\prod}}\limits_{c\in\partial
T_{n_{l},n_{i}}\,\text{corner}}{\cal C}_{c}(i_{c})\Big{]}\bigg{|}\leq$
$\displaystyle
N\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}^{2|V(T)|-2\left\lceil\frac{59}{60}|V(T)|\right\rceil-2}\Bigg{|}\frac{3(1-a)}{2a\cos\frac{\theta}{2}}\Bigg{|}^{\left\lceil\frac{59}{60}|V(T)|\right\rceil-1}\leq$
$\displaystyle
N\Bigg{|}\frac{\sqrt{\lambda}}{a}\frac{1}{\cos\frac{\theta}{2}}\Bigg{|}^{\frac{1}{30}|V(T)|-4}\Bigg{|}\frac{3\sqrt{2}}{2\cos\frac{\theta}{2}}\Bigg{|}^{\frac{59}{60}|V(T)|}\leq$
$\displaystyle\frac{2(x_{4}\cos\frac{\theta}{2})^{5}\sqrt{\lambda}}{3(x_{4}\sqrt{\lambda}-1)}\frac{N}{e^{|V(T)|}}\Big{(}\frac{1}{2}\Big{)}^{|V(T)|}\,.$
(71)
The number of all labeled trees with $n$ vertices is given by Cayley's
formula, $n^{n-2}$; this can be employed as an upper bound for the number of
trees in $T_{<}$. Combining it with the factor $\frac{1}{|V(T)|!}$ present in
(33) and using the lower bound for the factorial, we obtain
$\displaystyle\frac{n^{n-2}}{n!}\leq\frac{n^{n-2}}{\Big(\frac{n+1}{e}\Big)^{n}}\leq\frac{e^{n}}{n^{2}}\,.$ (72)
Combining (71) with (72), and (33), we obtain (67). ∎
Equations (33) and (47) and Lemmas 3, 4, 7, 9, and 10, together with Theorem 1
from [12], give the proof of Theorem 1 of the current paper.
## References
* [1] V. Rivasseau, "Constructive Matrix Theory," JHEP 0709 (2007) 008, arXiv:0706.1224 [hep-th].
* [2] D. Brydges and T. Kennedy, "Mayer expansions and the Hamilton-Jacobi equation," Journal of Statistical Physics 48, 19 (1987).
* [3] A. Abdesselam and V. Rivasseau, "Trees, forests and jungles: A botanical garden for cluster expansions," arXiv:hep-th/9409094.
* [4] A. D. Sokal, "An Improvement of Watson's Theorem on Borel Summability," J. Math. Phys. 21, 261 (1980).
* [5] V. Rivasseau and Z. Wang, "How to Resum Feynman Graphs," Annales Henri Poincaré 15, no. 11, 2069 (2014), arXiv:1304.5913 [math-ph].
* [6] R. Gurau and J. P. Ryan, "Colored Tensor Models - a review," SIGMA 8, 020 (2012), arXiv:1109.4812 [hep-th].
* [7] R. Gurau, "Random Tensors," Oxford University Press (2016).
* [8] G. 't Hooft, "A planar diagram theory for strong interactions," Nucl. Phys. B 72, 461 (1974).
* [9] R. Gurau, "The 1/N expansion of colored tensor models," Annales Henri Poincaré 12, 829 (2011), arXiv:1011.2726 [gr-qc].
* [10] R. Gurau and V. Rivasseau, "The 1/N expansion of colored tensor models in arbitrary dimension," Europhys. Lett. 95, 50004 (2011), arXiv:1101.4182 [gr-qc].
* [11] R. Gurau, "The complete 1/N expansion of colored tensor models in arbitrary dimension," Annales Henri Poincaré 13, 399 (2012), arXiv:1102.5759 [gr-qc].
* [12] R. Gurau and T. Krajewski, "Analyticity results for the cumulants in a random matrix model," Annales de l'Institut Henri Poincaré D 2, no. 2, 169-228 (2015).
* [13] R. Gurau, "The 1/N Expansion of Tensor Models Beyond Perturbation Theory," Commun. Math. Phys. 330, 973 (2014), arXiv:1304.2666 [math-ph].
* [14] T. Delepouve, R. Gurau and V. Rivasseau, "Universality and Borel Summability of Arbitrary Quartic Tensor Models," arXiv:1403.0170 [hep-th].
* [15] J. Magnen and V. Rivasseau, "Constructive $\phi^{4}$ field theory without tears," Annales Henri Poincaré 9, 403 (2008), arXiv:0706.2457 [math-ph].
* [16] R. Gurau and V. Rivasseau, "The Multiscale Loop Vertex Expansion," Annales Henri Poincaré 16, no. 8, 1869 (2015), arXiv:1312.7226 [math-ph].
* [17] T. Delepouve and V. Rivasseau, "Constructive Tensor Field Theory: The $T^{4}_{3}$ Model," arXiv:1412.5091 [math-ph].
* [18] V. Lahoche, "Constructive Tensorial Group Field Theory II: The $U(1)-T^{4}_{4}$ Model," arXiv:1510.05051 [hep-th].
* [19] V. Rivasseau and F. Vignes-Tourneret, "Constructive tensor field theory: The $T^{4}_{4}$ model," arXiv:1703.06510 [math-ph].
* [20] V. Rivasseau, "Constructive Tensor Field Theory," SIGMA 12, 085 (2016), arXiv:1603.07312 [math-ph].
* [21] V. Rivasseau and Z. Wang, "Corrected loop vertex expansion for $\Phi_{2}^{4}$ theory," J. Math. Phys. 56, no. 6, 062301 (2015), arXiv:1406.7428 [math-ph].
* [22] V. Rivasseau, "Loop Vertex Expansion for Higher Order Interactions," arXiv:1702.07602 [math-ph].
* [23] T. Krajewski, V. Rivasseau and V. Sazonov, "Constructive matrix theory for higher-order interaction," Annales Henri Poincaré 20, 3997-4032 (2019).
* [24] T. Krajewski, V. Rivasseau and V. Sazonov, "Constructive matrix theory for higher-order interaction II: Hermitian and real symmetric cases," Annales Henri Poincaré 23, no. 10, 3431-3452 (2022).
* [25] R. P. Feynman and H. Kleinert, "Effective classical partition functions," Physical Review A 34, no. 6, 5080 (1986).
* [26] H. Kleinert and W. Janke, "Convergence behavior of variational perturbation expansion - A method for locating Bender-Wu singularities," Physics Letters A 206, no. 5-6, 283-289 (1995).
* [27] P. M. Stevenson, "Optimized perturbation theory," Physical Review D 23, no. 12, 2916 (1981).
* [28] B. S. Shaverdyan and A. G. Ushveridze, "Convergent perturbation theory for the scalar $\phi^{2p}$ field theories; the Gell-Mann-Low function," Physics Letters B 123, no. 5, 316-318 (1983).
* [29] R. Guida, K. Konishi and H. Suzuki, "Improved convergence proof of the delta expansion and order dependent mappings," Annals of Physics 249, no. 1, 109-145 (1996).
* [30] W. E. Caswell, "Accurate energy levels for the anharmonic oscillator and a summable series for the double-well potential in perturbation theory," Annals of Physics 123, no. 1, 153-184 (1979).
* [31] I. G. Halliday and P. Suranyi, "Anharmonic oscillator: A new approach," Physical Review D 21, no. 6, 1529 (1980).
* [32] H. Kleinert, "Strong-coupling behavior of $\varphi^{4}$ theories and critical exponents," Physical Review D 57, no. 4, 2264 (1998).
* [33] P. M. Stevenson, "Optimization and the ultimate convergence of QCD perturbation theory," Nuclear Physics B 231, no. 1, 65-90 (1984).
* [34] A. Ivanov and V. Sazonov, "Convergent series for lattice models with polynomial interactions," Nuclear Physics B 914, 43-61 (2017).
* [35] A. Ivanov and V. Sazonov, "Infinite lattice models by an expansion with a non-Gaussian initial approximation," Physics Letters B 796, 52-58 (2019).
* [36] V. Sazonov, "Convergent series for polynomial lattice models with complex actions," Modern Physics Letters A 34, no. 30, 1950243 (2019).
* [37] S. F. Brandt and A. Pelster, "Large-D expansion from variational perturbation theory," Journal of Mathematical Physics 46, no. 11 (2005).
* [38] W. Janke, "Variational Perturbation Theory: a Powerful Method for Deriving Strong-Coupling Expansions," in Fluctuating Paths and Fields: Festschrift Dedicated to Hagen Kleinert on the Occasion of His 60th Birthday, pp. 301-314 (2001).
* [39] B. Simon and A. Dicke, "Coupling constant analyticity for the anharmonic oscillator," Annals of Physics 58, no. 1, 76-136 (1970).
* [40] D. Benedetti, R. Gurau, H. Keppler and D. Lettera, "The small-$N$ series in the zero-dimensional $O(N)$ model: constructive expansions and transseries," arXiv:2210.14776 (2022).
# Ionization-induced Long-lasting Orientation of Symmetric-top Molecules
Long Xu, Ilia Tutunnikov, Yehiam Prior, and Ilya Sh. Averbukh
AMOS and Department of Chemical and Biological Physics, The Weizmann Institute of Science, Rehovot 7610001, Israel
(Long Xu and Ilia Tutunnikov contributed equally to this work.)
###### Abstract
We theoretically consider the phenomenon of field-free long-lasting
orientation of symmetric-top molecules ionized by two-color laser pulses. The
anisotropic ionization produces a significant long-lasting orientation of the
surviving neutral molecules. The degree of orientation increases with both the
pulse intensity and, counterintuitively, with the rotational temperature. The
orientation may be enhanced even further by using multiple delayed two-color
pulses. The long-lasting orientation may be probed by even harmonic generation
or by Coulomb-explosion-based methods. The effect may enable the study of
relaxation processes in dense molecular gases, and may be useful for molecular
guiding and trapping by inhomogeneous fields.
_Introduction_.—Field-free oriented molecules are essential in many studies,
such as ultrafast dynamic imaging, molecular tomography, and electron
diffraction, to name just a few. A much more comprehensive list of
applications may be found in a recent review by Koch et al. Koch _et al._
(2019). Naturally, for practical applications, a sizable degree of orientation
is beneficial. One of the tools for inducing the molecular orientation is a
non-resonant two-color laser pulse consisting of the fundamental wave (FW) and
its second harmonic (SH). Using such fields, two different orientation
mechanisms have been identified and studied theoretically and experimentally
Kanai and Sakai (2001); De _et al._ (2009); Oda _et al._ (2010); Spanner
_et al._ (2012); Frumker _et al._ (2012a); Znakovskaya _et al._ (2014);
Kraus _et al._ (2015). The first orientation mechanism, which is dominant at
low to moderate (non-ionizing) intensities, relies on the interaction of the
external fields with the molecular hyperpolarizability, which results in
asymmetric torques that orient the molecules along the polarization direction
of the SH field Kanai and Sakai (2001); De _et al._ (2009); Oda _et al._
(2010); Lin _et al._ (2018); Xu _et al._ (2021a). At high (ionizing)
intensities, the dominant orientation mechanism Spanner _et al._ (2012) is
different: the probability of ionization depends on the molecular orientation
with respect to the polarization direction of the asymmetric electric field of
the two-color pulse Spanner _et al._ (2012); Frumker _et al._ (2012a);
Znakovskaya _et al._ (2014). As a result, immediately after the pulse, the
angular distribution of the surviving neutral molecules is asymmetric and has
a non-zero orientation on average. Note that for linear molecules, this
orientation disappears shortly after the excitation, but it periodically
reemerges due to the phenomenon of rotational quantum revivals Sh. Averbukh
and Perelman (1989); Robinett (2004).
Here, we theoretically investigate the ionization-induced orientation of
_symmetric-top molecules_ excited by intense two-color femtosecond laser
pulses. We demonstrate that in addition to the transient post-pulse
orientation, and unlike linear molecules, there also exists a significant
long-lasting orientation in these molecules. Long-lasting means that the
orientation exists not only at the revival times but between the revivals too.
In other words, the orientation signal has a non-zero baseline. Within the
idealized model of non-interacting rigid rotors used here, this orientation
lasts indefinitely. In practice, however, it will eventually be suppressed by
additional physical effects, e.g., by intermolecular collisions in gas cell
experiments. Related effects of long-lasting orientation have been recently
investigated in chiral Milner _et al._ (2019); Tutunnikov _et al._ (2020,
2021); Xu _et al._ (2021a) and other non-linear Xu _et al._ (2020, 2021b,
2021a) molecules excited by non-ionizing THz and laser pulses.
In what follows, we present our numerical analysis, outline our results on
significant long-lasting orientation, and discuss its dependence on intensity
and temperature. We conclude with a discussion of the experimental feasibility
of observing the predicted effect.
_Numerical methods_.—In our analysis, we simulate the rotational dynamics of
symmetric-top molecules within the rigid rotor approximation both classically
and quantum-mechanically. The symmetric-top molecules are excited by a two-
color laser pulse, consisting of the co-linearly polarized and phase-locked FW
field and its SH. The electric field is described by
$\mathbf{E}(t)=E_{0}f(t)[\cos(\omega t)+\varepsilon\cos(2\omega
t+\phi_{0})]\mathbf{e}_{Z},$ (1)
where $E_{0}$ and $\omega$ are the peak amplitude and the carrier frequency of
the FW, respectively. The laser pulse envelope is defined by $f(t)=\exp[-2\ln
2\,(t^{2}/\sigma^{2})]$, where $\sigma$ is the full width at half maximum
(FWHM) of the pulse intensity profile, and $\mathbf{e}_{Z}$ is a unit vector
along the laboratory $Z$ axis. Here, we set $\varepsilon=1$ and $\phi_{0}=0$.
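For illustration, Eq. (1) is straightforward to evaluate numerically. The sketch below (with placeholder parameter values in arbitrary units, not necessarily the settings used in the simulations reported here) also exhibits the sign asymmetry of the two-color waveform for $\varepsilon=1$ and $\phi_{0}=0$, which underlies the orientation-dependent ionization discussed below:

```python
import numpy as np

def two_color_field(t, E0=1.0, omega=1.0, sigma=40.0, eps=1.0, phi0=0.0):
    """Z-component of the field in Eq. (1). All parameter values here are
    illustrative placeholders (arbitrary units)."""
    envelope = np.exp(-2 * np.log(2) * t**2 / sigma**2)  # sigma = intensity FWHM
    return E0 * envelope * (np.cos(omega * t) + eps * np.cos(2 * omega * t + phi0))

t = np.linspace(-100, 100, 8001)
E = two_color_field(t)
# For eps = 1 and phi0 = 0 the waveform is asymmetric in sign:
print(E.max(), -E.min())   # ~2.0 vs ~1.125: positive peaks dominate
```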
The Hamiltonian describing molecular rotation driven by a two-color laser
pulse is given by $H(t)=H_{r}+H_{\mathrm{int}}$, where $H_{r}$ is the
rotational kinetic energy Hamiltonian and the interaction Hamiltonian is given
by
$H_{\mathrm{int}}=V_{\mathrm{pol}}+V_{\mathrm{hyp}}+V_{\mathrm{ion}}.$ (2)
The field-polarizability and field-hyperpolarizability interaction terms are
defined as Buckingham (2007)
$V_{\mathrm{pol}}=-\frac{1}{2}\sum_{i,j}\alpha_{ij}E_{i}E_{j},\quad V_{\mathrm{hyp}}=-\frac{1}{6}\sum_{i,j,k}\beta_{ijk}E_{i}E_{j}E_{k},$ (3)
where $E_{i}$, $\alpha_{ij}$, and $\beta_{ijk}$ are the components of the
electric field vector, polarizability tensor, and hyperpolarizability tensor,
respectively.
Figure 1: (a) $\mathrm{CH_{3}F}$ molecule. Atoms are color-coded: black,
carbon; gray, hydrogen; green, fluorine. $\theta$ is the angle between the
molecular $a$ axis and the laboratory $Z$ axis, and $\chi$ represents the
rotation angle about the $a$ axis. (b) Structure factor $G(\theta,\chi)$ [see
Eqs. (7) and (8)] determining the angle dependence of the ionization rate.
The ionization depletion term $V_{\mathrm{ion}}$ is sensitive to molecular
orientation. For linear molecules, e.g., $\mathrm{HCl}$ Akagi _et al._ (2009)
and $\mathrm{CO}$ Li _et al._ (2011); Wu _et al._ (2012); Spanner _et al._
(2012), the ionization rate depends on the angle between the molecular axis
and the polarization direction. For symmetric-top molecules, e.g.,
$\mathrm{CH_{3}F}$ and $\mathrm{CH_{3}Br}$, belonging to the $C_{3v}$ point
group, the ionization rate depends on two angles, $\theta$ and $\chi$ (see
Fig. 1) Kraus _et al._ (2015).
In this work we consider $\mathrm{CH_{3}F}$ as our example symmetric-top
molecule. The ionization process is modeled using a complex absorbing
potential
$V_{\mathrm{ion}}=-\frac{i}{2}\Gamma(\theta,\chi,t),$ (4)
where the ionization rate $\Gamma(\theta,\chi,t)$ is defined as Kraus _et
al._ (2015) (within the weak-field asymptotic theory)
$\Gamma(\theta,\chi,t)=\begin{cases}W(t)|G(\theta,\chi)|^{2},&E(t)>0,\\ W(t)|G(\pi-\theta,\pi+\chi)|^{2},&E(t)<0.\end{cases}$ (5)
Here, $E(t)=\mathbf{e}_{Z}\cdot\mathbf{E}(t)$, $W(t)$ is the field factor, and
$G(\theta,\chi)$ is the structure factor. The field factor is given by Kraus
_et al._ (2015)
$W(t)=\frac{\kappa}{2}\left(\frac{4\kappa^{2}}{|E(t)|}\right)^{2/\kappa-1}\exp\left[-\frac{2\kappa^{3}}{3|E(t)|}\right],$
(6)
where $\kappa=\sqrt{2I_{p}}$, and $I_{p}$ is the field-free energy of the
highest occupied molecular orbital (HOMO). We use the following model for the
structure factor
$G(\theta,\chi)=\left[\sin(\theta)+\frac{3}{2}\sin(2\theta)\right]G_{1}(\chi),$
(7)
where
$G_{1}(\chi)=\begin{cases}\sqrt{1+\sin(3\chi)},&0\leq\chi<7\pi/6,\\ -\sqrt{1+\sin(3\chi)},&7\pi/6\leq\chi<11\pi/6,\\ \sqrt{1+\sin(3\chi)},&11\pi/6\leq\chi<2\pi.\end{cases}$ (8)
$G(\theta,\chi)$ defined in Eqs. (7) and (8) closely approximates the
structure factor of field-dressed HOMO of $\mathrm{CH_{3}F}$ with the largest
dipole moment Kraus _et al._ (2015) (the orbital from which the strong field
ionization preferentially occurs). The definition in Eq. (5) accounts for the
oscillations of the laser electric field along the $Z$ axis. We quantify the
degree of orientation of surviving neutral molecules using the thermally
averaged quantum expectation value of $\cos(\theta)$, $\braket{\cos(\theta)}$.
Further details on the quantum simulations can be found in Xu _et al._
(2021b); Tutunnikov _et al._ (also see Sec. I of the Supplemental Material Sup).
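As an illustration of the model in Eqs. (4)–(8), a minimal sketch of the ionization rate might look as follows (assuming atomic units; the function and variable names are ours, not the authors'):

```python
import numpy as np

# Illustrative sketch of the ionization model, Eqs. (5)-(8).
def structure_factor(theta, chi):
    """G(theta, chi) of Eqs. (7) and (8)."""
    chi = np.mod(chi, 2.0 * np.pi)
    sign = -1.0 if 7.0 * np.pi / 6.0 <= chi < 11.0 * np.pi / 6.0 else 1.0
    g1 = sign * np.sqrt(1.0 + np.sin(3.0 * chi))
    return (np.sin(theta) + 1.5 * np.sin(2.0 * theta)) * g1

def field_factor(E_abs, kappa):
    """W(t) of Eq. (6); kappa = sqrt(2*Ip) with Ip the HOMO energy."""
    return (kappa / 2.0) * (4.0 * kappa**2 / E_abs) ** (2.0 / kappa - 1.0) \
        * np.exp(-2.0 * kappa**3 / (3.0 * E_abs))

def ionization_rate(theta, chi, E_t, kappa):
    """Gamma(theta, chi, t) of Eq. (5); the sign of E(t) selects the geometry."""
    if E_t == 0.0:
        return 0.0
    W = field_factor(abs(E_t), kappa)
    if E_t > 0.0:
        return W * structure_factor(theta, chi) ** 2
    return W * structure_factor(np.pi - theta, np.pi + chi) ** 2
```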
In classical simulations, we use the Monte Carlo approach to simulate the
behavior of a classical ensemble consisting of $N=10^{7}$ sample molecules. A
detailed description can be found in Tutunnikov _et al._ (2019); Xu _et al._
(2021b) (also see Sec. I of the Supplemental Material Sup). Following the
ionization depletion, the classical degree of orientation is given by
$\displaystyle\braket{\cos(\theta)}(t)=\sum\limits_{n=1}^{N}\rho(\theta_{n},\chi_{n},t)\cos(\theta_{n}),$
(9)
where the relative weight (non-ionized fraction) of the $n$-th molecule is
$\displaystyle\rho(\theta_{n},\chi_{n},t)=N_{\mathrm{neu}}^{-1}\exp\left[-\int_{0}^{t}\Gamma(\theta_{n},\chi_{n},t^{\prime})\,dt^{\prime}\right],$
(10)
and the total number of surviving neutral molecules is
$\displaystyle
N_{\mathrm{neu}}=\sum\limits_{n=1}^{N}\exp\left[-\int_{0}^{t}\Gamma(\theta_{n},\chi_{n},t^{\prime})\,dt^{\prime}\right],$
(11)
Here, $\theta_{n}$ and $\chi_{n}$ are the time-dependent angles of the $n$-th
molecule. The population of surviving neutral molecules is defined as
$N_{\mathrm{neu}}/N$.
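A minimal sketch of this weighting (Eqs. (9)–(11)), assuming each trajectory's ionization rate has been sampled on a common time grid and that `theta` holds the angles $\theta_{n}$ at the observation time (the array names are illustrative):

```python
import numpy as np

# Survival weights from the accumulated ionization along each trajectory.
# gamma_hist: shape (N, T), Gamma(theta_n, chi_n, t') samples per molecule;
# t_grid: shape (T,); theta: angles theta_n at the observation time.
def classical_orientation(theta, gamma_hist, t_grid):
    integral = np.trapz(gamma_hist, t_grid, axis=1)  # exponent in Eq. (10)
    weights = np.exp(-integral)        # un-normalized survival probabilities
    n_neu = weights.sum()              # Eq. (11): surviving neutral molecules
    rho = weights / n_neu              # Eq. (10): relative weights
    return np.sum(rho * np.cos(theta))  # Eq. (9): degree of orientation
```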
_Results_.—Molecular parameters of $\mathrm{CH_{3}F}$ are provided in Sec. II
of the Supplemental Material Sup. Figure 2 shows the classically and quantum-mechanically calculated time-dependent orientation factor of neutral molecules following a single two-color pulse applied at $t=0$. The parameters
used in this calculation are: the rotational temperature is
$T=300\,\mathrm{K}$, the laser wavelengths are 800 nm (FW) and 400 nm (SH),
the peak intensity is $7\times 10^{13}\,\mathrm{W/cm^{2}}$, and
$\sigma=20\,\mathrm{fs}$, see Eq. (1).
The three panels of Fig. 2 show the orientation factor obtained for various
combinations of interaction terms [see Eq. (2)]. All cases include the field-
polarizability interaction, $V_{\mathrm{pol}}\propto\cos^{2}(\theta)$, which is
a symmetric function of $\theta$ (about $\theta=\pi/2$). A torque-kick by such
a potential results in molecular alignment only (for a review, see Stapelfeldt and Seideman (2003)). The two other terms, $V_{\mathrm{hyp}}$ and $V_{\mathrm{ion}}$, are asymmetric functions of $\theta$ and thus induce
molecular orientation. All three panels depict the immediate response of
$\braket{\cos(\theta)}$ to the laser excitation near $t=0$. This transient
orientation effect is similar to the one studied in linear molecules excited
by two-color laser pulses De _et al._ (2009); Oda _et al._ (2010); Wu and
Zeng (2010); Mun and Sakai (2018); Mun _et al._ (2019); Mun and Kim (2020);
Mellado-Alcedo _et al._ (2020); Wang and Henriksen (2020). At room
temperature and field parameters used here, the transient molecular
orientation induced by the field-hyperpolarizability interaction alone [Fig.
2(a)] is negligible compared to the orientation resulting from the ionization
depletion [Fig. 2(b)]. Accordingly, the curves in Fig. 2(b) and Fig. 2(c) [in
which all the interaction terms are included, see Eq. (2)] are almost
indistinguishable. These results are consistent with previous results reported
for linear molecules Spanner _et al._ (2012); Znakovskaya _et al._ (2014),
where it was shown that at high (ionizing) intensities, the orientation
mechanism of ionization depletion dominates.
Figure 2: Time-dependent orientation factor at $T=300\,\mathrm{K}$ for
different orientation mechanisms calculated classically and quantum-
mechanically. Here the field intensity is $I_{0}=7\times
10^{13}\,\mathrm{W/cm^{2}}$ and the pulse duration is
$\sigma=20\,\mathrm{fs}$. About 40% of neutral molecules survive after the
ionization.
In this work, we focus on the long-lasting orientation existing in symmetric-top molecules. Figures 2(b) and 2(c) show that under the stated conditions and for these molecules, following the transient orientation the degree of orientation does not return to zero but persists at a constant value until the first revival and beyond. _This effect of ionization-induced long-lasting orientation does not exist in linear molecules, and it is the main result of this Letter_.
Note that the quantum and classical results agree well on the short time
scale, during the initial transient response, and predict the same degree of
long-lasting orientation on the long time scale, suggesting that the long-
lasting orientation stems from a classical origin. Long-lasting orientation
has been previously observed in chiral Milner _et al._ (2019); Tutunnikov
_et al._ (2020, 2021); Xu _et al._ (2021a) and studied in other non-linear
molecules Xu _et al._ (2020, 2021b, 2021a) excited by non-ionizing THz and
laser pulses. In these cases, the orientation mechanisms, including the
interactions with the polarizability and hyperpolarizability, were considered.
Here, the ionization depletion mechanism gives rise to
unprecedented degrees of long-lasting orientation at room temperature. The
degree of long-lasting orientation ($\approx-0.033$), as shown in Figs. 2(b)
and 2(c), is an order of magnitude higher than values reported in previous
studies.
_Mechanism_.—Next, we discuss the mechanism behind this large ionization-
induced long-lasting orientation. Under field-free conditions, the symmetry
axis (dipole) of symmetric-top molecules precesses around the (conserved,
space-fixed) vector of angular momentum, whereas linear molecules rotate in a
plane perpendicular to the angular momentum Landau and Lifshitz (1976). It is
this precession that is the source of the long-lasting orientation in
symmetric-top molecules. The degree of long-lasting orientation (with respect
to the $Z$ axis) of a single symmetric-top molecule is given by the
combination of three quantities $L_{a}L_{Z}/L^{2}$ Xu _et al._ (2020, 2021b)
(see also Sec. III of the Supplemental Material Sup), where $L_{a}$ and
$L_{Z}$ are the projections of the angular momentum along the molecular
symmetry axis [the molecular $a$ axis in Fig. 1(a)] and the laboratory $Z$
axis, respectively, and $L$ is the magnitude of the angular momentum. Note
that in the presence of $Z$-polarized pulses, $L_{a}$ and $L_{Z}$ are
conserved quantities.
Initially, before the laser pulse, the molecules are isotropically distributed
in space, namely there is an equal number of molecules having positive (along
the $Z$ axis) and negative (against the $Z$ axis) long-lasting orientations.
Therefore, the ensemble-averaged long-lasting orientation
$\braket{L_{a}L_{Z}/L^{2}}$ vanishes Xu _et al._ (2021b). The co-linearly
polarized two-color laser pulse preferentially ionizes molecules oriented more
or less along its polarization axis (with $\theta<\pi/2$) [see Fig. 1(b)].
Thus, this selective ionization depletion breaks the symmetry of the molecular
ensemble, generating a non-zero long-lasting orientation of the surviving (not
ionized) neutral molecules, as shown in Figs. 2(b) and 2(c).
For symmetric-top molecules, within the model used here, the long-lasting
orientation is permanent. In experiments, however, this orientation will be
gradually destroyed by other physical effects such as intermolecular
collisions, centrifugal distortion, and radiation emission caused by rapidly
rotating molecular permanent dipole moments. Furthermore, while the
centrifugal distortion is known to lead to the decay of the revivals’ peaks
due to the dephasing of the rotational states Damari _et al._ (2016);
Babilotte _et al._ (2016), the average dipole remains almost unchanged (see
Xu _et al._ (2020)). The radiative emission gradually decreases the
rotational energy Damari _et al._ (2016); Babilotte _et al._ (2016), but the
relative energy loss during a single revival is negligible for a rarefied
molecular gas.
_Intensity dependence_.—One of the ways to enhance the degree of the
ionization-induced long-lasting orientation is by increasing the laser pulse
energy – the peak intensity and/or pulse duration – or by distributing the energy over several pulses. Figure 3 shows the classically calculated long-lasting
orientation factor, $\braket{\cos(\theta)}_{p}$, and the population of
surviving neutral molecules as functions of the laser intensity for single or
multiple delayed pulses. The value of $\braket{\cos(\theta)}_{p}$ is taken
after the pulse(s), when the orientation factor reaches a constant value. As
expected, with the increasing input energy, the long-lasting orientation
factor (in absolute value) initially grows [for $I_{0}<7\times
10^{13}\,\mathrm{W/cm^{2}}$, see Fig. 3(a)], while the population of surviving
neutral molecules decreases monotonically [see Fig. 3(b)].
After reaching its maximum value, the long-lasting orientation decreases with
increasing laser intensity. The reason for this decrease is the structure
factor $G(\theta,\chi)$ shown in Fig. 1(b). Due to the anisotropy of the
ionization process, molecules at $\theta\approx 0,\,0.6\pi,\,\pi$ have a
relatively low ionization rate. Therefore, when the ionization saturates at high laser intensities, only these molecules survive, yet they contribute relatively little to the long-lasting orientation. The combination of an increased ionization yield and the counterproductive selection of these low-contributing molecules limits the usefulness of increasing the laser intensity beyond a certain point.
Figure 3: Classically calculated (a) long-lasting orientation and (b)
population of neutral molecules after the pulse(s) as functions of the laser
intensity. The time delay between pulses is 0.5 ps. Here $T=300\,\mathrm{K}$
and $\sigma=20\,\mathrm{fs}$. Both orientation mechanisms are taken into
account.
A way to avoid the limits imposed on high intensities is to use multiple
pulses instead of a single strong pulse. Consider the application of several
delayed two-color pulses. Figure 3(a) shows that the maximum long-lasting
orientation is about $-0.033$ for one pulse, $-0.069$ for two pulses, and
$-0.107$ for three pulses. Here, the time delay between consecutive pulses is set to 0.5 ps, a relatively long delay that allows the molecules to rotate between the pulses. This way, after each pulse, additional neutral molecules
rotate to an orientation more favorable for ionization by the next pulse. This
approach overcomes the ionization saturation limit that exists in the case of
a single pulse, and is similar to the use of multiple pulses for achieving
enhanced alignment while avoiding ionization Leibscher _et al._ (2003, 2004).
Naturally, adding more pulses results in a progressively lower population of
surviving neutral molecules. Nevertheless, for a fixed population, a higher
long-lasting orientation is achieved by applying multiple delayed pulses. For
example, for $N_{\mathrm{neu}}/N\approx 40\%$, the long-lasting orientation is
about $-0.033$ for one pulse, $-0.049$ for two pulses, and $-0.054$ for three
pulses. _This observation is an additional main result of this Letter._
Optimization of the time delay may allow further enhancement of the long-
lasting orientation.
_Temperature dependence_.—Next, we consider the temperature dependence of the
long-lasting orientation as depicted in Fig. 4. At $T=0$, the long-lasting
orientation is zero for all three curves shown in Fig. 4(a), as a result of
$L_{a}=L_{Z}=0$ (these are conserved quantities). There is an optimal
temperature ($T<1\,\mathrm{K}$ in this example) at which the long-lasting
orientation induced by the field-hyperpolarizability interaction (together
with the field-polarizability interaction) reaches the maximum ($\approx
0.006$). Above the optimal temperature, the long-lasting orientation decays
with increasing temperature Xu _et al._ (2021b). In sharp contrast, the
ionization-induced long-lasting orientation factor (in absolute value)
increases monotonically with the temperature, which is another principal
finding of this work.
Figure 4: Classically calculated (a) long-lasting orientation factor and (b)
population of neutral molecules as functions of temperature for different
orientation mechanisms. The laser parameters are the same as in Fig. 2:
$I_{0}=7\times 10^{13}\,\mathrm{W/cm^{2}}$, $\sigma=20\,\mathrm{fs}$. The
population for the hyperpolarizability-only mechanism remains 1, as no ionization occurs.
As mentioned above, the long-lasting orientation is given by
$L_{a}L_{Z}/L_{f}^{2}$, where $L_{f}=|\mathbf{L}_{f}|$ is the magnitude of the
angular momentum after the pulse. $L_{Z}$ and $L_{a}$ are conserved in the
case of excitation by linearly polarized laser pulses and thus are functions
of temperature only. The short two-color pulse has a two-fold effect: (i) it
(almost) instantaneously ionizes the molecules at particular angles (the
effect described by $V_{\mathrm{ion}}$), and (ii) it changes the molecular
angular momentum, $\mathbf{L}_{f}=\mathbf{L}_{i}+\delta\mathbf{L}$, where
$\mathbf{L}_{i}$ is the initial angular momentum, and $\delta\mathbf{L}$ is
the change caused by $V_{\mathrm{pol}}$. Around room temperature (in the
examples considered here), the effect of $V_{\mathrm{pol}}$ becomes
negligible, i.e., $L_{a}L_{Z}/L_{f}^{2}\approx L_{a}L_{Z}/L_{i}^{2}$, and the
degree of long-lasting orientation reaches a constant value determined by
$V_{\mathrm{ion}}$. At lower temperatures, $V_{\mathrm{pol}}$ has a
detrimental effect by increasing the total angular momentum, such that $L_{a}L_{Z}/L_{f}^{2}<L_{a}L_{Z}/L_{i}^{2}$, effectively lowering the long-lasting orientation.
At temperatures above room temperature, the ionization can no longer be
considered instantaneous because the molecules rotate fast enough to change
their orientation during the pulse. Accordingly, more molecules get ionized,
as seen from the populations in Fig. 4(b). The higher the ionization yield,
the higher the asymmetry of the molecular ensemble, which manifests as a higher long-lasting orientation.
_Conclusions_.—We have theoretically demonstrated a sizable ionization-induced
long-lasting orientation of symmetric-top molecules excited by two-color laser
pulses. The mechanism leading to this observation is the selective ionization
of the polar molecules at particular angles with respect to the laser’s
electric field, and the ability of symmetric-top molecules to precess around
the fixed angular momentum vector. We show that using a proper sequence of
delayed two-color pulses allows for enhancing the long-lasting orientation of
neutral molecules without drastic depletion of their population. Due to the
required precession, the enhanced orientation favors high temperature (room
temperature and above in our examples). The long-lasting orientation may be
measured with the help of Coulomb explosion Znakovskaya _et al._ (2014), or
by observing second (or higher order) harmonic generation in the gas phase,
which is sensitive to the lack of inversion symmetry Frumker _et al._ (2012b,
c); Kraus _et al._ (2012). Long-lasting orientation can pave the way to the
study of relaxation processes in dense molecular gases that are not otherwise
accessible to direct probing (see analogous applications of persistent
alignment Hartmann and Boulet (2012); Vieillard _et al._ (2013)). In
addition, molecular focusing, guiding, and trapping by inhomogeneous fields
rely on molecular orientation Gershnabel and Sh. Averbukh (2011a, b); Küpper
_et al._ (2012), and their observation may be facilitated by this long-lasting
orientation effect.
###### Acknowledgements.
L.X. is a recipient of the Sir Charles Clore Postdoctoral Fellowship. This
research was made possible in part by the historic generosity of the Harold
Perlman Family.
## References
* Koch _et al._ (2019) C. P. Koch, M. Lemeshko, and D. Sugny, Quantum control of molecular rotation, Rev. Mod. Phys. 91, 035005 (2019).
* Kanai and Sakai (2001) T. Kanai and H. Sakai, Numerical simulations of molecular orientation using strong, nonresonant, two-color laser fields, J. Chem. Phys 115, 5492 (2001).
* De _et al._ (2009) S. De, I. Znakovskaya, D. Ray, F. Anis, N. G. Johnson, I. A. Bocharova, M. Magrakvelidze, B. D. Esry, C. L. Cocke, I. V. Litvinyuk, and M. F. Kling, Field-Free Orientation of CO Molecules by Femtosecond Two-Color Laser Fields, Phys. Rev. Lett. 103, 153002 (2009).
* Oda _et al._ (2010) K. Oda, M. Hita, S. Minemoto, and H. Sakai, All-optical molecular orientation, Phys. Rev. Lett. 104, 213901 (2010).
* Spanner _et al._ (2012) M. Spanner, S. Patchkovskii, E. Frumker, and P. Corkum, Mechanisms of two-color laser-induced field-free molecular orientation, Phys. Rev. Lett. 109, 113001 (2012).
* Frumker _et al._ (2012a) E. Frumker, C. T. Hebeisen, N. Kajumba, J. B. Bertrand, H. J. Wörner, M. Spanner, D. M. Villeneuve, A. Naumov, and P. B. Corkum, Oriented rotational wave-packet dynamics studies via high harmonic generation, Phys. Rev. Lett. 109, 113901 (2012a).
* Znakovskaya _et al._ (2014) I. Znakovskaya, M. Spanner, S. De, H. Li, D. Ray, P. Corkum, I. V. Litvinyuk, C. L. Cocke, and M. F. Kling, Transition between mechanisms of laser-induced field-free molecular orientation, Phys. Rev. Lett. 112, 113005 (2014).
* Kraus _et al._ (2015) P. M. Kraus, O. I. Tolstikhin, D. Baykusheva, A. Rupenyan, J. Schneider, C. Z. Bisgaard, T. Morishita, F. Jensen, L. B. Madsen, and H. J. Wörner, Observation of laser-induced electronic structure in oriented polyatomic molecules, Nat. Commun. 6, 7039 (2015).
* Lin _et al._ (2018) K. Lin, I. Tutunnikov, J. Qiang, J. Ma, Q. Song, Q. Ji, W. Zhang, H. Li, F. Sun, X. Gong, H. Li, P. Lu, H. Zeng, Y. Prior, I. Sh. Averbukh, and J. Wu, All-optical field-free three-dimensional orientation of asymmetric-top molecules, Nat. Commun. 9, 5134 (2018).
* Xu _et al._ (2021a) L. Xu, I. Tutunnikov, Y. Prior, and I. Sh. Averbukh, Three dimensional orientation of small polyatomic molecules excited by two-color femtosecond pulses, J. Phys. B 54, 164003 (2021a).
* Sh. Averbukh and Perelman (1989) I. Sh. Averbukh and N. F. Perelman, Fractional revivals: Universality in the long-term evolution of quantum wave packets beyond the correspondence principle dynamics, Phys. Lett. A 139, 449 (1989).
* Robinett (2004) R. W. Robinett, Quantum wave packet revivals, Phys. Rep 392, 1 (2004).
* Milner _et al._ (2019) A. A. Milner, J. A. M. Fordyce, I. MacPhail-Bartley, W. Wasserman, V. Milner, I. Tutunnikov, and I. Sh. Averbukh, Controlled Enantioselective Orientation of Chiral Molecules with an Optical Centrifuge, Phys. Rev. Lett. 122, 223201 (2019).
* Tutunnikov _et al._ (2020) I. Tutunnikov, J. Floß, E. Gershnabel, P. Brumer, I. Sh. Averbukh, A. A. Milner, and V. Milner, Observation of persistent orientation of chiral molecules by a laser field with twisted polarization, Phys. Rev. A 101, 021403(R) (2020).
* Tutunnikov _et al._ (2021) I. Tutunnikov, L. Xu, R. W. Field, K. A. Nelson, Y. Prior, and I. Sh. Averbukh, Enantioselective orientation of chiral molecules induced by terahertz pulses with twisted polarization, Phys. Rev. Res. 3, 013249 (2021).
* Xu _et al._ (2020) L. Xu, I. Tutunnikov, E. Gershnabel, Y. Prior, and I. Sh. Averbukh, Long-lasting molecular orientation induced by a single terahertz pulse, Phys. Rev. Lett. 125, 013201 (2020).
* Xu _et al._ (2021b) L. Xu, I. Tutunnikov, Y. Prior, and I. Sh. Averbukh, Long-lasting orientation of symmetric-top molecules excited by two-color femtosecond pulses, Front. Phys. 9, 689635 (2021b).
* Buckingham (2007) A. D. Buckingham, Permanent and induced molecular moments and long-range intermolecular forces, in _Advances in Chemical Physics_ (John Wiley & Sons, New York, 2007) pp. 107–142.
* Akagi _et al._ (2009) H. Akagi, T. Otobe, A. Staudte, A. Shiner, F. Turner, R. Dörner, D. M. Villeneuve, and P. B. Corkum, Laser tunnel ionization from multiple orbitals in HCl, Science 325, 1364 (2009).
* Li _et al._ (2011) H. Li, D. Ray, S. De, I. Znakovskaya, W. Cao, G. Laurent, Z. Wang, M. F. Kling, A. T. Le, and C. L. Cocke, Orientation dependence of the ionization of CO and NO in an intense femtosecond two-color laser field, Phys. Rev. A 84, 043429 (2011).
* Wu _et al._ (2012) J. Wu, L. P. H. Schmidt, M. Kunitski, M. Meckel, S. Voss, H. Sann, H. Kim, T. Jahnke, A. Czasch, and R. Dörner, Multiorbital tunneling ionization of the CO molecule, Phys. Rev. Lett. 108, 183001 (2012).
* (22) I. Tutunnikov, L. Xu, Y. Prior, and I. Sh. Averbukh, Echo-enhanced molecular orientation at high temperatures, arXiv:2207.08274 .
* (23) See Supplemental Material at URL for the further details on the quantum and classical simulations (Sec. I), for the molecular properties of methyl fluoride (Sec. II), as well as for the derivation of the long-lasting orientation factor in the classical model (Sec. III), which includes Refs. Buckingham (2007); Tolstikhin _et al._ (2011); Kraus _et al._ (2015); Kallush and Fleischer (2015); McDowell (1990); Sidje (1998); Goldstein _et al._ (2002); Kuipers (1999); Coutsias and Romero (2004); Tutunnikov _et al._ (2019); Xu _et al._ (2021b); Johnson (2019); Chong (1992); Bieri _et al._ (1981).
* Tolstikhin _et al._ (2011) O. I. Tolstikhin, T. Morishita, and L. B. Madsen, Theory of tunneling ionization of molecules: Weak-field asymptotics including dipole effects, Phys. Rev. A 84, 053423 (2011).
* Kallush and Fleischer (2015) S. Kallush and S. Fleischer, Orientation dynamics of asymmetric rotors using random phase wave functions, Phys. Rev. A 91, 063420 (2015).
* McDowell (1990) R. S. McDowell, Rotational partition functions for symmetric-top molecules, J. Chem. Phys. 93, 2801 (1990).
* Sidje (1998) R. B. Sidje, Expokit: A software package for computing matrix exponentials, ACM Trans. Math. Softw. 24, 130 (1998).
* Goldstein _et al._ (2002) H. Goldstein, C. Poole, and J. Safko, _Classical Mechanics_ (Addison Wesley, San Francisco, CA, 2002).
* Kuipers (1999) J. B. Kuipers, _Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace and Virtual Reality_ (Princeton University Press, Princeton, N.J., 1999).
* Coutsias and Romero (2004) E. A. Coutsias and L. Romero, The quaternions with an application to rigid body dynamics, Sandia Technical Report, SAND2004-0153 (2004).
* Johnson (2019) R. D. Johnson, _NIST Computational chemistry comparison and benchmark database, Release 20_ , Tech. Rep. (2019).
* Chong (1992) D. P. Chong, Theoretical calculations of dipole moments, polarizabilities, and hyperpolarizabilities of HF, OCS, $\mathrm{O_{3}}$, $\mathrm{CH_{3}F}$, and $\mathrm{CH_{3}Cl}$ by local density approximation, J. Chin. Chem. Soc. 39, 375 (1992).
* Bieri _et al._ (1981) G. Bieri, L. Åsbrink, and W. Von Niessen, 30.4-nm He(II) photoelectron spectra of organic molecules: Part IV. fluoro-compounds (C, H, F), J. Electron Spectrosc. Relat. Phenom. 23, 281 (1981).
* Tutunnikov _et al._ (2019) I. Tutunnikov, J. Floß, E. Gershnabel, P. Brumer, and I. Sh. Averbukh, Laser-induced persistent orientation of chiral molecules, Phys. Rev. A 100, 043406 (2019).
* Stapelfeldt and Seideman (2003) H. Stapelfeldt and T. Seideman, Colloquium: Aligning molecules with strong laser pulses, Rev. Mod. Phys. 75, 543 (2003).
* Wu and Zeng (2010) J. Wu and H. Zeng, Field-free molecular orientation control by two ultrashort dual-color laser pulses, Phys. Rev. A 81, 053401 (2010).
* Mun and Sakai (2018) J. H. Mun and H. Sakai, Improving molecular orientation by optimizing relative delay and intensities of two-color laser pulses, Phys. Rev. A 98, 013404 (2018).
* Mun _et al._ (2019) J. H. Mun, H. Sakai, and R. González-Férez, Orientation of linear molecules in two-color laser fields with perpendicularly crossed polarizations, Phys. Rev. A 99, 053424 (2019).
* Mun and Kim (2020) J. H. Mun and D. E. Kim, Field-free molecular orientation by delay- and polarization-optimized two fs pulses, Sci. Rep 10, 18875 (2020).
* Mellado-Alcedo _et al._ (2020) D. Mellado-Alcedo, N. R. Quintero, and R. González-Férez, Linear polar molecule in a two-color cw laser field: A symmetry analysis, Phys. Rev. A 102, 023110 (2020).
* Wang and Henriksen (2020) S. Wang and N. E. Henriksen, Optimal field-free molecular orientation with nonresonant two-color adiabatic-turn-on and sudden-turn-off laser pulses, Phys. Rev. A 102, 063120 (2020).
* Landau and Lifshitz (1976) L. Landau and E. Lifshitz, _Mechanics_, 3rd ed. (Butterworth-Heinemann, Oxford, 1976).
* Damari _et al._ (2016) R. Damari, S. Kallush, and S. Fleischer, Rotational control of asymmetric molecules: Dipole- versus polarizability-driven rotational dynamics, Phys. Rev. Lett. 117, 103001 (2016).
* Babilotte _et al._ (2016) P. Babilotte, K. Hamraoui, F. Billard, E. Hertz, B. Lavorel, O. Faucher, and D. Sugny, Observation of the field-free orientation of a symmetric-top molecule by terahertz laser pulses at high temperature, Phys. Rev. A 94, 043403 (2016).
* Leibscher _et al._ (2003) M. Leibscher, I. Sh. Averbukh, and H. Rabitz, Molecular alignment by trains of short laser pulses, Phys. Rev. Lett. 90, 213001 (2003).
* Leibscher _et al._ (2004) M. Leibscher, I. Sh. Averbukh, and H. Rabitz, Enhanced molecular alignment by short laser pulses, Phys. Rev. A 69, 013402 (2004).
* Frumker _et al._ (2012b) E. Frumker, C. T. Hebeisen, N. Kajumba, J. B. Bertrand, H. J. Wörner, M. Spanner, D. M. Villeneuve, A. Naumov, and P. B. Corkum, Oriented rotational wave-packet dynamics studies via high harmonic generation, Phys. Rev. Lett. 109, 113901 (2012b).
* Frumker _et al._ (2012c) E. Frumker, N. Kajumba, J. B. Bertrand, H. J. Wörner, C. T. Hebeisen, P. Hockett, M. Spanner, S. Patchkovskii, G. G. Paulus, D. M. Villeneuve, A. Naumov, and P. B. Corkum, Probing polar molecules with high harmonic spectroscopy, Phys. Rev. Lett. 109, 233904 (2012c).
* Kraus _et al._ (2012) P. M. Kraus, A. Rupenyan, and H. J. Wörner, High-harmonic spectroscopy of oriented OCS molecules: Emission of even and odd harmonics, Phys. Rev. Lett. 109, 233903 (2012).
* Hartmann and Boulet (2012) J.-M. Hartmann and C. Boulet, Quantum and classical approaches for rotational relaxation and nonresonant laser alignment of linear molecules: A comparison for co2 gas in the nonadiabatic regime, J. Chem. Phys. 136, 184302 (2012).
* Vieillard _et al._ (2013) T. Vieillard, F. Chaussard, F. Billard, D. Sugny, O. Faucher, S. Ivanov, J.-M. Hartmann, C. Boulet, and B. Lavorel, Field-free molecular alignment for probing collisional relaxation dynamics, Phys. Rev. A 87, 023409 (2013).
* Gershnabel and Sh. Averbukh (2011a) E. Gershnabel and I. Sh. Averbukh, Electric deflection of rotating molecules, J. Chem. Phys. 134, 054304 (2011a).
* Gershnabel and Sh. Averbukh (2011b) E. Gershnabel and I. Sh. Averbukh, Deflection of rotating symmetric top molecules by inhomogeneous fields, J. Chem. Phys. 135, 084307 (2011b).
* Küpper _et al._ (2012) J. Küpper, F. Filsinger, G. Meijer, and H. Stapelfeldt, Manipulating the motion of complex molecules: Deflection, focusing, and deceleration of molecular beams for quantum-state and conformer selection, in _Methods in Physical Chemistry_ (John Wiley & Sons, Ltd, 2012) Chap. 1, pp. 1–28.
# Generalized Relation Learning with Semantic Correlation Awareness
for Link Prediction
Yao Zhang1, Xu Zhang1, Jun Wang2, Hongru Liang1,
Wenqiang Lei3†, Zhe Sun4, Adam Jatowt5, Zhenglu Yang1†
†Corresponding authors.
###### Abstract
Developing link prediction models to automatically complete knowledge graphs (KGs) has recently been the focus of significant research interest. The current methods for the link prediction task have two inherent problems: 1) the relation distributions in KGs are usually unbalanced, and 2) many unseen relations occur in practical situations. These two problems limit
the training effectiveness and practical applications of the existing link
prediction models. We advocate a holistic understanding of KGs and we propose
in this work a unified Generalized Relation Learning framework GRL to address
the above two problems, which can be plugged into existing link prediction
models. GRL conducts a generalized relation learning, which is aware of
semantic correlations between relations that serve as a bridge to connect
semantically similar relations. After training with GRL, the closeness of
semantically similar relations in vector space and the discrimination of
dissimilar relations are improved. We perform comprehensive experiments on six
benchmarks to demonstrate the superior capability of GRL in the link
prediction task. In particular, GRL is found to enhance the existing link
prediction models making them insensitive to unbalanced relation distributions
and capable of learning unseen relations.
## Introduction
Figure 1: (a) The unbalanced relation distribution in the FB15K-237 dataset
where relations are sorted according to their frequency. (b) The existence of many unseen relations. Three film-related relations are respectively categorized into the
many-shot class, few-shot class, and zero-shot class as marked.
Knowledge graphs (KGs), representing facts in semantic graph structures, have
been applied to multiple artificial intelligence tasks, e.g., recommendation
(Lei et al. 2020a; Wang et al. 2021), dialogue generation (Moon et al. 2019;
Lei et al. 2020b), and question answering (Christmann et al. 2019; Zhu et al.
2021). In KGs, facts are formed as triples, (head entity, relation, tail
entity), where the head entity is linked to the tail entity via the relation.
New knowledge emerges continuously, and hence the issue of incompleteness of
KGs has triggered wide research interests in link prediction task, which
requires predicting the missing links in KGs (Seyed and David 2018). The
mainstream link prediction models (Bordes et al. 2013; Dettmers et al. 2018)
learn the embeddings of entities and relations, and then use a score function
to estimate the validity of triples. However, we argue that the embedding learning used by mainstream link prediction models suffers from two key problems:
* Unbalanced relation distribution. As shown in Figure 1, the relation distribution in an off-the-shelf KG learning resource (i.e., FB15K-237 (Toutanova and Chen 2015)) is quite unbalanced. For example, the frequencies of the two relations /film/film/language and /film/film/edited differ greatly. Mainstream link prediction models assume enough training instances for all relations and pay less attention to few-shot relations, disregarding the fact that few-shot relation learning may strongly influence the overall learning performance.
* Existence of unseen relations. Real-world KGs tend to be open and evolve
quickly, and accordingly there is a large number of zero-shot relations unseen
in the off-the-shelf learning resources, for example, the relation
/film/film/sequel in Figure 1. The unseen relations are beyond the capacity of
mainstream link prediction models, as there are no training instances to learn
their embeddings. This problem may restrict the use of these models in
downstream tasks.
Recently, some efforts have been made to address the above problems. Xiong et al. (2018), Shi and Weninger (2018), and Chen et al. (2019a) adopted meta-learning or metric-based approaches to train on limited training samples and perform fast learning on new few-shot relations. These studies show promise in few-shot relation learning; however, they have difficulty in tackling unbalanced relation distributions, which is mainly attributed to the excessive time cost required for training numerous relations. More recently, Chen et al. (2019b) and Qin et al. (2020) predicted unseen relations by extracting information from textual descriptions. They were able to successfully complete the unseen relation prediction task. However, these models are not appropriate for the link prediction task, since textual descriptions tend to be noisy and also cannot build a bridge between seen and unseen relations. In general, an ideal link prediction model should be able to jointly learn many-, few-, and zero-shot relations.
Regarding the joint relation learning, we noticed that semantic correlations,
which denote the similarities of relations in semantics, can serve as a bridge
to connect the learning of many-, few-, and zero-shot relations. Take Figure
1 as an instance. The many-shot relation “/film/film/language”, few-shot
relation “/film/film/edited”, and zero-shot relation “/film/film/sequel” are
all related to “film”. Based on the assumption that semantically similar
relations should be located near each other in embedding space (Yang et al.
2015), it makes sense to exploit semantic correlations, such as the one in the
above-mentioned example, to accomplish the joint relation learning. Inspired
by this, we propose a Generalized Relation Learning framework (abbreviated to
GRL) based on learning semantic correlations. GRL can be plugged into a
mainstream link prediction model to make it (1) insensitive to unbalanced
relation distributions and (2) capable of learning zero-shot relations.
Specifically, GRL is plugged into a link prediction model after the embedding
learning stage. To optimize the relation embedding, GRL extracts rich semantic
correlations through an attention mechanism, fuses different relations, and
minimizes the classification-aware loss to implicitly embed these semantic correlations in the relation embeddings. Then, the closeness of
semantically similar relations in vector space and the discrimination of
dissimilar relations can be improved. In this way, few-shot relations can
learn knowledge from the semantically similar many-shot relations; for zero-
shot relations, their most semantically similar relation can also be
predicted. In our experiments, we improve two base models (DistMult (Yang et
al. 2015) and ConvE (Dettmers et al. 2018)) by incorporating the proposed GRL
framework on all relation classes, i.e., many, few, and zero-shot relations.
Our work is an important step towards a holistic understanding of KGs and a
generalized solution of relation learning for the link prediction task.
Our contributions are as follows:
* We carefully consider two key problems of the embedding learning used by mainstream link prediction models and highlight the necessity of jointly learning many-, few-, and zero-shot relations.
* We propose the GRL framework, which leverages the rich semantic correlations between relations to make link prediction models insensitive to unbalanced relation distributions and capable of learning zero-shot relations.
* We perform experiments on six benchmarks to evaluate the link prediction capability of GRL, and show that GRL lets the base link prediction models perform well across many-, few-, and zero-shot relations.
## Related Work
Since KGs are populated by automatic text processing, they are often incomplete, and it is usually infeasible to manually add all the relevant facts to them. Hence, many studies have approached the task of predicting missing links in KGs.
Mainstream link prediction models widely use embedding-based methods to map
entities and relations into continuous low-dimensional vector space and use a
score function to predict whether the triples are valid. They can be broadly
classified as translational based (Bordes et al. 2013; Wang et al. 2014; Lin
et al. 2015; Ji et al. 2016), multiplicative based (Nickel, Tresp, and Kriegel
2011; Yang et al. 2015; Trouillon et al. 2016), and neural network based
(Dettmers et al. 2018; Schlichtkrull et al. 2018). These models are based on the implicit assumption that all relations are distributed within the dataset in a balanced way. Hence, they perform poorly in few-shot relation learning scenarios because they neglect the imbalanced distributions; they also cannot properly handle zero-shot relations, since they retain only the knowledge of existing relations and learn no information about unseen ones.
Few-shot relation learning models attempt to adopt the meta-learning (Chen et
al. 2019a; Lv et al. 2019) and metric-based (Xiong et al. 2018; Wang et al.
2019) methods to learn knowledge from only a few samples. However, the few-
shot learning models are computationally expensive because they need to spend
extra time retraining on each few-shot relation (meta-learning), or need to
compare the few-shot relations one by one (metric-based). In practice, the
many-shot and few-shot scenarios are not explicitly distinguished. Zero-shot
relation learning models aim to learn relations that are unseen in the
training set. Researchers have proposed several models to deal with zero-shot
relations by leveraging information from textual descriptions (Chen et al.
2019b; Qin et al. 2020). They perform well on predicting the zero-shot
relations, but are not appropriate in the link prediction task because textual
descriptions could be noisy and a bridge connecting seen and unseen relations
could be missing.
Figure 2: The illustration of GRL, which consists of the intuitive explanation
(a), the base model (b) and the detailed architecture (c). The base model
denotes the mainstream link prediction model. GRL is plugged after the
embedding component of the base model and contains three components:
attention, fusion, and classifier.
In this work, we focus on jointly learning many-, few-, and zero- shot
relations without requiring extra textual knowledge. Recently, some computer
vision works (Ye et al. 2019; Shi et al. 2019) have attempted to approach the
generalized image classification. Nonetheless, they are not designed for
coping with graph structures, e.g., KGs. We leverage in this work the rich
semantic correlations between relations as a bridge to connect the learning of
many-, few-, and zero-shot relations. Zhang et al. (2019) integrated the rich semantic correlations between specific hierarchical relations into relation extraction. That method performs well only on hierarchical relations, and since it predicts relations from text, it does not address the link prediction task.
## Method
Figure 2 provides the illustration of the proposed framework GRL. The figure
consists of three parts: the intuitive explanation of GRL in Figure 2 (a),
base model shown in Figure 2 (b), and the detailed architecture in Figure 2
(c).
As the intuitive explanation shows, GRL utilizes the semantic correlations between many-shot and few-shot relations so that the relation embedding learning can benefit from semantically similar relations. We devise three
modules, i.e., Attention, Fusion and Classifier, to embed and fuse the rich
semantic correlations among many-shot and few-shot relations in the training
phase; and to select the most similar relation embedding for zero-shot
relations in the testing phase. In this way, GRL can improve the performance
on all relation classes, i.e., many, few, and zero-shot relations. The base
model denotes the existing mainstream link prediction model consisting of an
embedding component and a score function component. GRL can be plugged between
the embedding and the score function components to make it (1) insensitive to
imbalanced relation distributions and (2) capable of detecting zero-shot
relations.
Before delving into the model description, we first formally represent a KG as
a collection of triples
$\mathcal{T}=\left\{(e_{h},r,e_{t})\,|\,e_{h}\in\mathcal{E},e_{t}\in\mathcal{E},r\in\mathcal{R}\right\}$,
where $\mathcal{E}$ and $\mathcal{R}$ are the entity and relation sets,
respectively. Each directed link in KG represents a triple (i.e., $e_{h}$ and
$e_{t}$ are represented as the nodes and $r$ as the labeled edge between
them). The link prediction task is to predict whether a given triple
$(e_{h},r,e_{t})$ is valid or not. In particular, for the zero-shot relations,
we need to emphasize that we mainly focus on predicting the validity of the
triple with a zero-shot relation, rather than predicting the zero-shot
relations, i.e., the relation prediction task (Chen et al. 2019b; Qin et al.
2020). However, GRL also has the ability to predict the most semantically similar relation of a given zero-shot relation by learning from the many- and few-shot relations, not from textual descriptions.
### Base Model
We select a mainstream link prediction model as the base model and apply GRL
to it. The base model can be seen as multi-layer neural network consisting of
an embedding component and a score function component. For the base link
prediction model, given an input triple $(e_{h},r,e_{t})$, the embedding
component maps the head and tail entities $(e_{h},e_{t})$ and the relation $r$
to their distributed embedding representations
$(\bm{e}_{h},\bm{r},\bm{e}_{t})$ through the entity and relation embedding
layers, respectively. After the embedding representations are obtained, the
score function component is adopted to calculate the likelihood of
$(\bm{e}_{h},\bm{r},\bm{e}_{t})$ being a valid fact. The following binary
cross entropy loss is used to train model parameters:
$\mathcal{L}_{s}=-\frac{1}{N}\sum_{i=1}^{N}(t_{i}\log
p(s_{i})+(1-t_{i})\log(1-p(s_{i}))),$ (1)
where $s_{i}$ is the score of the $i$-th input triple, $t_{i}$ is the ground-truth label ($t_{i}=1$ if the input triple is valid and 0 otherwise), and $N$ is the number of input triples.
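In a PyTorch implementation, this loss might be computed as follows (a hedged sketch, assuming the model emits raw scores to which the sigmoid $p(\cdot)$ is applied; the tensor names are ours):

```python
import torch.nn.functional as F

# Illustrative score loss of Eq. (1): `scores` are raw model outputs s_i,
# `targets` are the 0/1 ground-truth labels t_i.
def score_loss(scores, targets):
    # binary_cross_entropy_with_logits applies the sigmoid p(s_i) internally
    return F.binary_cross_entropy_with_logits(scores, targets.float())
```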
### GRL Framework
The loss used by mainstream link prediction models is score-oriented and lacks
an in-depth exploration of rich semantic correlations in KGs. We propose the
GRL framework to learn appropriate representations for relations by embedding
semantic correlations into classification-aware optimization. GRL contains
three specific modules:
1) Attention Module, which builds the knowledge-aware attention distribution
and the relational knowledge vector. The aim of this module is to extract the
semantic correlations and the degree of these correlations.
2) Fusion Module, which fuses the relational knowledge vector with the joint
vector obtained from the attention module. This module realizes the fusion of
different relations, according to semantic correlations.
3) Classifier Module, which calculates the classification-aware loss to implicitly embed the rich semantic correlations in the relation embeddings. In this way, both the compactness of semantically similar relations and the discrimination of dissimilar relations can be enhanced.
The following is a detailed introduction to each module.
Attention Module
Joint Block. The classification-aware loss is calculated by the relation
classification results based on the head and tail entities from the given
triple $(e_{h},r,e_{t})$. Inspired by (Qin et al. 2020), the joint vector of
the head and tail entities has the ability to represent the potential relation
between them. The head and tail entities representations (i.e., $\bm{e}_{h}$
and $\bm{e}_{t}$) are jointed together at the joint block for which we adopt
three different alternatives:
$\bm{j}=\begin{cases}\bm{e}_{h}-\bm{e}_{t},&sub,\\ \bm{e}_{h}\otimes\bm{e}_{t},&multiply,\\ W_{1}[\bm{e}_{h};\bm{e}_{t}]+b_{1},&concat,\end{cases}$ (2)
where $\otimes$ denotes the element-wise multiplication operator, and $W_{1}$ and $b_{1}$ are learnable parameters.
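A minimal PyTorch sketch of the three alternatives in Eq. (2) follows (illustrative, not the released implementation; `mode` selects the variant):

```python
import torch
import torch.nn as nn

# Illustrative joint block, Eq. (2): combines head and tail embeddings.
class JointBlock(nn.Module):
    def __init__(self, dim, mode="concat"):
        super().__init__()
        self.mode = mode
        self.fc = nn.Linear(2 * dim, dim)  # W_1 and b_1 of the concat variant

    def forward(self, e_h, e_t):
        if self.mode == "sub":
            return e_h - e_t
        if self.mode == "multiply":
            return e_h * e_t  # element-wise product
        return self.fc(torch.cat([e_h, e_t], dim=-1))
```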
Relation Memory Block. Memory blocks that store class information are widely used in image classification (Snell, Swersky, and Zemel 2017; Karlinsky et al. 2019; Liu et al. 2019). Following these studies, we design a relation
memory block to store all relation information by sharing parameters with the
relation embedding layer as
$\bm{M}=\left\{\bm{r}_{1},\bm{r}_{2},\ldots,\bm{r}_{K-1},\bm{r}_{K}\right\},$
(3)
where $\bm{M}\in\mathbb{R}^{K\times dim}$ and $K$ is the number of relation classes.
As the training progresses, the relation embedding layer and relation memory
block are updated synchronously.
Relational Knowledge. To realize the classification-aware optimization
objective, we extract useful relational knowledge from the relation memory
block to enrich the joint vector. The semantic correlation degree between
different relations may vary; thus, we adopt the attention mechanism to
customize specific relational knowledge for each joint vector. Concretely, the
relational knowledge vector $\bm{rk}$ is computed as a weighted sum of each
relation representation in the relation memory block $M$, i.e.,
$\bm{rk}=\alpha_{sim}\bm{M}$, where $\alpha_{sim}\in\mathbb{R}^{K}$ represents
the knowledge-aware attention distribution.
Attention Distribution. The knowledge-aware attention distribution
$\alpha_{sim}$ describes the similarity between the joint vector and each
relation representation in the relation memory block. We estimate
$\alpha_{sim}$ as
$\alpha_{sim}=softmax(\bm{j}\bm{M}^{\top}),$ (4)
where $softmax$ is the activation function, and $\bm{M}^{\top}$ represents the transpose of $\bm{M}$. Note that the attention value of the ground-
truth relation is masked with 0.
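The attention of Eq. (4) and the relational knowledge vector $\bm{rk}=\alpha_{sim}\bm{M}$ could be sketched as follows (illustrative; `gt_idx` holds the ground-truth relation indices used for masking):

```python
import torch
import torch.nn.functional as F

# Illustrative knowledge-aware attention, Eq. (4), and rk = alpha_sim @ M.
def relational_knowledge(j, M, gt_idx=None):
    alpha = F.softmax(j @ M.t(), dim=-1)      # (batch, K) attention weights
    if gt_idx is not None:                    # mask the ground-truth relation
        alpha = alpha.scatter(1, gt_idx.unsqueeze(1), 0.0)
    return alpha @ M                          # weighted sum over the memory
```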
Fusion Module
In this module, the joint vector and relational knowledge vector are fused.
Intuitively, the proportion of fusion is different for each joint vector.
Inspired by the pointer-generator network (See, Liu, and Manning 2017) that
facilitates copying words from the source text during the generation of new words, we
propose a soft switch, that is, the fusion probability $p_{f}\in[0,1]$, to
adaptively adjust the fusion proportion between the joint vector and
relational knowledge vector. The fusion probability $p_{f}$ is estimated
according to the joint vector as $p_{f}=sigmoid(FC(\bm{j}))$, where $FC$ is a fully connected neural network, and $sigmoid$ is the activation function.
Finally, we obtain the following fusion vector $\bm{f}$ over the joint vector
$\bm{j}$ and relational knowledge vector $\bm{rk}$ as
$\bm{f}=(1-p_{f})\bm{j}+p_{f}\bm{rk}.$ (5)
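A sketch of this soft-switch fusion (illustrative):

```python
import torch
import torch.nn as nn

# Illustrative fusion module, Eq. (5): p_f, estimated from the joint vector,
# adaptively mixes j with the relational knowledge vector rk.
class FusionModule(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, 1)

    def forward(self, j, rk):
        p_f = torch.sigmoid(self.fc(j))  # fusion probability in [0, 1]
        return (1.0 - p_f) * j + p_f * rk
```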
Classifier Module
Classification-aware Loss. The fusion vector $\bm{f}$ is mapped to a class
probability through the classifier block as
$D\sim softmax(\bm{f}^{\top}W_{c}),$ (6)
where $W_{c}\in\mathbb{R}^{dim\times K}$ is the classification weight matrix,
and $softmax$ is the activation function.
Given the ground truth relation $r_{i}$ from the $i$-th input
$(e_{h_{i}},r_{i},e_{t_{i}})$, we adopt cross entropy to assess the
classification-aware loss as
$\mathcal{L}_{c}=-\frac{1}{N}\sum_{i=1}^{N}\log
p(r_{i}|(e_{h_{i}},e_{t_{i}})),$ (7)
where $p(r_{i}|(e_{h_{i}},e_{t_{i}}))\in D_{i}$ is the probability of the
ground truth relation $r_{i}$.
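Eqs. (6) and (7) amount to a standard softmax classifier over the $K$ relation classes; a hedged sketch (the names are ours):

```python
import torch.nn.functional as F

# Illustrative classifier block: f^T W_c gives class logits (Eq. 6), trained
# with cross entropy against the ground-truth relation ids (Eq. 7).
def classification_loss(f, W_c, relation_ids):
    logits = f @ W_c  # (batch, K)
    return F.cross_entropy(logits, relation_ids)  # softmax + NLL
```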
| | $|\mathcal{E}|$ | $|\mathcal{R}|$ | Train | Valid | Test
---|---|---|---|---|---
YAGO3-10 | 123k | 37 | 1M | 5k | 5k
FB15K-237 | 15k | 237 | 273k | 18k | 20k
NELL-995 | 75k | 200 | 150k | 543 | 4k
Kinship | 104 | 25 | 9k | 1k | 1k
WN18 | 41k | 18 | 141k | 5k | 5k
NELL-ONE | 69k | 358 | 190k | 1k | 2k
Table 1: Statistics of datasets. $|\mathcal{E}|$ and $|\mathcal{R}|$ represent the cardinalities of the entity and relation sets.
| | YAGO3-10 | FB15K-237 | NELL-995 | Kinship | WN18
---|---|---|---|---|---
| | MRR | HITS@10 | HITS@1 | MRR | HITS@10 | HITS@1 | MRR | HITS@10 | HITS@1 | MRR | HITS@10 | HITS@1 | MRR | HITS@10 | HITS@1
ComplEx | 36.0 | 55.0 | 26.0 | 24.7 | 42.8 | 15.8 | 48.2 | 60.6 | 39.9 | 82.3 | 97.1 | 73.3 | 94.1 | 94.7 | 93.6
R-GCN | - | - | - | 24.8 | 41.7 | 15.3 | 12.0 | 18.8 | 8.2 | 10.9 | 23.9 | 3.0 | 81.4 | 96.4 | 69.7
ConvKB | - | - | - | 28.9 | 47.1 | 19.8 | 43.0 | 54.5 | 37.0 | 61.4 | 95.3 | 43.6 | - | - | -
D4-STE | 47.2 | 64.3 | 38.1 | 32.0 | 50.2 | 23.0 | - | - | - | - | - | - | 94.6 | 95.2 | 94.2
D4-Gumbel | 38.8 | 57.3 | 29.4 | 30.0 | 49.6 | 20.4 | - | - | - | - | - | - | 94.6 | 95.4 | 94.2
DistMult | 34.0 | 54.0 | 24.0 | 24.1 | 41.9 | 15.5 | 48.5 | 61.0 | 40.1 | 51.6 | 86.7 | 36.7 | 82.2 | 93.6 | 72.8
+GRL | 41.2 | 59.9 | 31.1 | 25.8 | 43.9 | 16.9 | 54.3 | 64.6 | 47.6 | 52.2 | 86.4 | 37.3 | 86.1 | 95.2 | 79.2
($\pm$ sd) | (0.3) | (1.0) | (0.1) | (0.2) | (0.3) | (0.1) | (0.2) | (0.3) | (0.3) | (0.2) | (0.8) | (0.2) | (1.0) | (0.4) | (1.1)
ConvE | 52.0 | 66.0 | 45.0 | 31.6 | 49.1 | 23.9 | 49.1 | 61.3 | 40.3 | 83.3 | 98.1 | 73.8 | 94.2 | 95.5 | 93.5
+GRL | 55.4 | 69.0 | 47.4 | 32.6 | 50.2 | 24.8 | 49.4 | 60.6 | 41.5 | 83.4 | 97.8 | 74.5 | 94.8 | 95.7 | 94.4
($\pm$ sd) | (1.0) | (0.1) | (0.1) | (0.3) | (0.2) | (0.3) | (0.2) | (0.3) | (0.3) | (0.2) | (0.5) | (0.5) | (0.1) | (0.4) | (0.0)
Table 2: Link prediction results (mean $\pm$ sd) of the compared models (%): the best results are marked in bold (pairwise t-test at 5% significance level).
| | YAGO3-10 | NELL-995
---|---|---
| | Many | Few | All | Many | Few | All
DistMult | 38.1 | 26.7 | 34.0 | 52.6 | 41.9 | 48.5
DistMult+GRL | 44.8 | 34.2 | 41.2 | 57.3 | 48.8 | 54.3
(Increment) | ($\uparrow$6.7) | ($\uparrow$7.5) | ($\uparrow$7.2) | ($\uparrow$4.7) | ($\uparrow$6.9) | ($\uparrow$5.8)
ConvE | 57.9 | 20.0 | 52.4 | 52.0 | 42.2 | 49.1
ConvE+GRL | 59.4 | 24.6 | 55.4 | 52.4 | 43.9 | 49.4
(Increment) | ($\uparrow$1.5) | ($\uparrow$4.6) | ($\uparrow$3.0) | ($\uparrow$0.4) | ($\uparrow$1.7) | ($\uparrow$0.3)
Table 3: Link prediction results with the increment (%) on many-shot and few-
shot sub-groups, and entire test set.
Most Similar Relation. Existing mainstream link prediction models have
achieved impressive performance, yet they can only learn the patterns observed
in the closed datasets, thereby limiting their scalability for handling the
rapidly evolving KGs. Specifically, when a zero-shot relation $r_{z}$ (i.e.,
one not existing in the training set) occurs between an entity pair
$(e_{h},e_{t})$, it is almost impossible for the existing models to
distinguish whether this new triple $(e_{h},r_{z},e_{t})$ is valid or not. Every $r_{z}$ will then be mapped to an ‘unknown’ vector $\bm{u}$ by the embedding component, and the newly constructed triple representation $(\bm{e}_{h},\bm{u},\bm{e}_{t})$ will receive a low score. To alleviate this defect, GRL substitutes the most semantically similar relation to enhance the learning ability of the base model on zero-shot relations. We argue
that the relation corresponding to the maximum similarity in $\alpha_{sim}$ best reflects the semantic relation between the two entities.
Therefore, we use the vector of the most similar relation $\bm{r}_{ms}$ to
replace the vector $\bm{u}$ and evaluate the newly constructed triple
representation $(\bm{e}_{h},\bm{r}_{ms},\bm{e}_{t})$.
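This replacement step could be sketched as follows (illustrative; it selects $\bm{r}_{ms}$ via the argmax of the similarity distribution):

```python
import torch

# Illustrative zero-shot replacement: pick the most similar seen relation
# from the attention distribution and substitute its embedding for u.
def most_similar_relation(j, M):
    alpha = torch.softmax(j @ M.t(), dim=-1)
    ms_idx = alpha.argmax(dim=-1)  # index of the most similar relation
    return M[ms_idx]               # r_ms, used in place of the unknown vector
```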
### Learning Scheme
We follow the definition of score-aware loss in existing base models and
propose a classification-aware loss to train the model. The overall
optimization follows the joint learning paradigm that is defined as a weighted
combination of constituent losses as
$\mathcal{L}=\mathcal{L}_{s}+\lambda\mathcal{L}_{c},$ where $\lambda$ is a
hyper-parameter to balance the importance between the score-aware loss and
classification-aware loss for optimization.
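Putting the pieces together, the joint objective could be sketched as follows, reusing the illustrative loss functions above ($\lambda=0.1$, the value used in our experimental configuration):

```python
# Illustrative joint objective: L = L_s + lambda * L_c (lambda = 0.1).
loss = score_loss(scores, targets) + 0.1 * classification_loss(f, W_c, relation_ids)
loss.backward()
```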
## Experiments and Results
### Datasets
We select two categories of datasets to comprehensively evaluate GRL as
follows, whose statistical descriptions are shown in Table 1:
* Imbalanced datasets: YAGO3-10 (Mahdisoltani, Biega, and Suchanek 2015),
FB15K-237 (Toutanova and Chen 2015), NELL-995 (Xiong, Hoang, and Wang 2017),
Kinship (Lin, Socher, and Xiong 2018), and WN18 (Bordes et al. 2013). These
datasets contain both many-shot and few-shot relations.
* Few-shot dataset: NELL-ONE (Xiong et al. 2018), which is specially constructed
for the few-shot learning task in KG. The relations with less than 500 but
more than 50 training triples are selected as testing data.
### Baselines
We adopt two embedding-based models, DistMult (Yang et al. 2015) and ConvE
(Dettmers et al. 2018), as the base models of our proposed modules, and
compare the two enhanced models with the following popular relation prediction
models: RESCAL (Nickel, Tresp, and Kriegel 2011), TransE (Bordes et al. 2013),
DistMult (Yang et al. 2015), ComplEx (Trouillon et al. 2016), ConvE (Dettmers
et al. 2018), ConvKB (Nguyen et al. 2018), D4-STE, D4-Bumbel (Xu and Li 2019),
and TuckER (Balazevic, Allen, and Hospedales 2019). Besides the above general
models, we test two additional models, GMatching (Xiong et al. 2018) and CogKR
(Du et al. 2019), which are designed specifically for the few-shot relation
learning.
### Experimental Configuration
We implement the base models and our proposed two modules in PyTorch (Paszke
et al. 2017). Throughout the experiments, we optimize the hyperparameters in
a grid search setting for the best mean reciprocal rank (MRR) on the
validation set. We use Adam to optimize all the parameters with initial
learning rate at 0.003. The dimensions of entity and relation embeddings are
both set to 200. The loss weight $\lambda$ is set to 0.1. According to the
frequency of relations, we take the top 20% and bottom 80% of relations as
many-shot and few-shot relation classes, respectively. The experimental
results of our model are averaged across three training repetitions, and
standard deviations (sd) are also reported.
### Experiment I: Link Prediction
#### Setting
We follow the evaluation protocol of (Dettmers et al. 2018): each input
$(e_{h},r,e_{t})$ is converted to two queries, that is, tail query
$(e_{h},r,?)$ and head query $(?,r,e_{t})$; then, the ranks of correct
entities are recorded among all entities for each query, excluding other
correct entities that were observed in any of the train/valid/test sets for
the same query. We use the filtered HITS@1, 5, 10, and MRR as evaluation
metrics.
#### Results
Table 2 records the results on five imbalanced datasets, which reflect the
general performance of the compared models in solving the link prediction
task. It shows that the two base models (DistMult and ConvE) are generally
improved by incorporating the proposed GRL framework: GRL improves DistMult by
an average of 3.84% and ConvE by an average of 1.08% under the MRR evaluation.
In particular, the enhanced model ConvE+GRL generally outperforms the compared
models on YAGO3-10, FB15K-237, Kinship, and WN18, and the enhanced model
DistMult+GRL also performs well on NELL-995. We also evaluate the performance
of GRL in learning many-shot and few-shot relations, and show the MRR results
of DistMult, DistMult+GRL, ConvE, and ConvE+GRL on YAGO3-10 and NELL-995
(c.f. Table 3). The results indicate that GRL achieves consistent improvements
on both the “many-shot” and “few-shot” sub-groups. We conjecture that this is
because many-shot relation learning can also benefit from useful implicit
information carried by few-shot relations, even though many-shot relations
already have abundant training samples. From this perspective, it is sensible
for mainstream link prediction models to adopt GRL to address the imbalanced
relation issue.
Model | MRR | HITS@10 | HITS@5 | HITS@1
---|---|---|---|---
TransE† | 9.3 | 19.2 | 14.1 | 4.3
GMatching† | 18.8 | 30.5 | 24.3 | 13.3
CogKR∗ | 25.6 | 35.3 | 31.4 | 20.5
DistMult† | 10.2 | 17.7 | 12.6 | 6.6
DistMult+GRL | 14.4 | 23.0 | 18.2 | 9.8
($\pm$ sd) | (2.0) | (2.1) | (1.9) | (2.3)
ConvE∗ | 17.0 | 30.6 | 23.0 | 10.5
ConvE+GRL | 25.6 | 38.9 | 33.6 | 18.8
($\pm$ sd) | (2.3) | (3.7) | (3.1) | (2.1)
Table 4: Few-shot relation learning results (mean $\pm$ sd) on NELL-ONE
dataset (%): the results marked by ‘$\dagger$’ or ‘$\ast$’ are taken from
(Xiong et al. 2018; Du et al. 2019).
### Experiment II: Few-shot Relation Learning
#### Setting
To further evaluate the performance of GRL on few-shot relation learning,
which is tricky for a link prediction model when training instances of a
relation are scarce, we test approaches on the NELL-ONE dataset, wherein each
test relation has only one instance in the training set. We follow the
evaluation protocol and metrics of (Xiong et al. 2018).
#### Results
Table 4 shows that GRL consistently improves both base models, by an average
of 4.2% and 8.6% in MRR, respectively. For ConvE in particular, incorporating
GRL helps it outperform the other approaches on three metrics. CogKR, a
path-learning based model, performs best under HITS@1. The reason might be
that test queries are easy to complete by finding KG paths on few-shot
relation datasets such as NELL-ONE. Although there is only one training
instance for each test relation, GRL can effectively embed the few-shot
relations by learning from the semantically similar relations in the many-shot
class.
### Experiment III: Zero-shot Relation Learning
#### Setting
To evaluate the performance of GRL on zero-shot relations, we construct a
testing set containing 500 triples whose relations are unseen in the training
phase. The testing triples are randomly sampled from the FB15K dataset (Bordes
et al. 2013), and the training set is FB15K-237, which ensures the
authenticity of the triples. We adopt a fundamental testing protocol that
quantitatively scores triples with zero-shot relations.
Most existing zero-shot relation studies depend on textual descriptions, while
the zero-shot learning addressed in this work does not require such
information. Therefore, we select the GMatching model (Xiong et al. 2018) for
comparison, which can predict similar relations by learning a matching metric
without any additional information. We use the classical method TransE (Bordes
et al. 2013) to learn the relation embeddings on the FB15K dataset and
calculate the similarity between each zero-shot relation and the predicted
relation.
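One way to compute this similarity is sketched below; since the exact metric is not specified in the text, cosine similarity over the TransE embeddings is assumed purely for illustration.

```python
import numpy as np

def relation_similarity(zero_shot_emb: np.ndarray, predicted_emb: np.ndarray) -> float:
    """Cosine similarity between a zero-shot relation embedding (learned by
    TransE on FB15K) and the embedding of the relation predicted for it."""
    num = float(np.dot(zero_shot_emb, predicted_emb))
    den = float(np.linalg.norm(zero_shot_emb) * np.linalg.norm(predicted_emb)) + 1e-12
    return num / den
```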
Figure 3: Zero-shot relation learning results: (a) the average score of the
testing triples, and (b) the average similarity between the zero-shot relation
and the predicted relation.
#### Results
Figure 3 (a) shows the average score of the testing triples with zero-shot
relations. Note that we use the fusion vector as the embedding of a zero-shot
relation. We can see that the two base models (DistMult and ConvE) cannot
achieve a good average score because all zero-shot relations are identified as
a single ‘unknown’ relation. When GRL is plugged in, the two enhanced models
(DistMult+GRL and ConvE+GRL) are both boosted in learning positive relations,
indicating that the GRL framework can effectively improve the base models’
ability to validate triples with zero-shot relations. Figure 3 (b) shows the
performance on predicting zero-shot relations. The base models perform worse
due to their superficial way of embedding zero-shot relations, as mentioned
before. When equipped with GRL, the enhanced models perform better than
GMatching, indicating that learning from the semantic correlations between
unseen and seen relations is comparably effective to learning from neighbor
information.
# | Model | YAGO3-10 | NELL-ONE
---|---|---|---
(1) | ConvE | 52.0 | 17.0
(2) | ConvE+GRL ($p_{f}=0$) | 52.6 | 23.3
(3) | ConvE+GRL ($p_{f}=0.5$) | 53.9 | 24.7
(4) | ConvE+GRL ($p_{f}=1$) | 52.2 | 20.3
(5) | ConvE+Direct | 51.2 | 10.5
(6) | ConvE+GRL | 55.4 | 25.6
Table 5: Ablation study (MRR, %) on YAGO3-10 and NELL-ONE.
## Further Analysis of GRL
### Ablation Study
Study of Fusion Probability To assess the effect of the fusion vector, we
compare three variants from the fusion probability perspective based on ConvE;
see Table 5 (2)-(4). The three variants are: only using the joint vector
(i.e., $p_{f}=0$), only using the relational knowledge vector (i.e.,
$p_{f}=1$), and using the joint and relational knowledge vectors with equal
weight (i.e., $p_{f}=0.5$). Compared with these three variants, adaptively
fusing the joint and relational knowledge vectors (i.e., ConvE+GRL) performs
best, which suggests that the semantic correlations in the relational
knowledge vectors help the base model learn more appropriate representations
of relations and thus boost the general performance. Moreover, the adaptive
fusion probability improves the flexibility of the fusion operator.
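The fusion semantics probed by these variants can be sketched as a convex combination; this is an illustration of the $p_{f}$ semantics in Table 5, not a reproduction of the exact fusion operator.

```python
import torch

def fuse(joint_vec: torch.Tensor, knowledge_vec: torch.Tensor,
         p_f: torch.Tensor) -> torch.Tensor:
    """p_f = 0 keeps only the joint vector, p_f = 1 only the relational
    knowledge vector; a learned (adaptive) p_f interpolates between them."""
    return (1.0 - p_f) * joint_vec + p_f * knowledge_vec
```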
Direct Fusion vs. GRL We now test a direct fusion method that fuses the
relational knowledge vector with the relation representation as the updated
relation representation, without considering the classification-aware loss.
Table 5 (5) shows the MRR performance of ConvE when enhanced by this direct
method. Rich semantic correlations in KGs cannot be adequately learned by the
direct method because it merely leverages superficial semantic correlations
rather than embedding them into relation vectors. Moreover, the direct method
makes embedding learning more confusing, especially on few-shot relation data
such as NELL-ONE.
### Case Study
Visualization of Knowledge-aware Attention GRL is able to make the base model
fully learn semantic correlations between relations. To verify this, we
display the attention distributions of the base model (ConvE) and the enhanced
model (ConvE+GRL) on FB15K-237 in Figure 4, showing the average attention
distribution over the 237 relation classes, where each row represents one type
of relation. The base model learns little about the semantic correlations
between relations, while the enhanced model (ConvE+GRL) captures them clearly.
The attention distribution of few-shot relations is more diffuse than that of
many-shot relations due to insufficient training data.
Figure 4: Case study: knowledge-aware attention cases with a heat map.
Figure 5: Case study: t-SNE visualization of relation embeddings in FB15K-237
(better viewed in color). Semantically similar relations get closer after
plugging in GRL.
Visualization of Relation Embedding We also show in Figure 5 the t-SNE (Maaten
and Hinton 2008) plot of all relations of FB15K-237 in the embedding space. To
provide more insight, we highlight the relations associated with “film”. Stars
and triangles represent the many-shot and few-shot relations, respectively. We
can see that the many-shot and few-shot relations are more compact for the
enhanced model than for the base model.
## Conclusion and Future Work
In this work, we study two natural problems in the link prediction task: 1)
imbalanced relation distribution, and 2) unseen relations. To address them, we
focus on generalized relation learning and propose a framework, GRL, that uses
semantic correlations among relations as a bridge to connect semantically
similar relations. Through extensive experiments on six datasets, we
demonstrate the effectiveness of GRL and provide a comprehensive insight into
generalized relation learning on KGs. There are a few loose ends for further
investigation. We will consider combining external textual information with
the semantic knowledge of KGs to facilitate relation learning. We will also
try to deploy GRL in downstream applications that involve generalized relation
learning scenarios to gain more insights.
## Acknowledgments
This work was supported in part by the Ministry of Education of Humanities and
Social Science project under grant 16YJC790123 and the Natural Science
Foundation of Shandong Province under grant ZR2019MA049.
## References
* Balazevic, Allen, and Hospedales (2019) Balazevic, I.; Allen, C.; and Hospedales, T. 2019. TuckER: Tensor Factorization for Knowledge Graph Completion. In _Proceedings of EMNLP-IJCNLP_ , 5184–5193.
* Bordes et al. (2013) Bordes, A.; Usunier, N.; Garcia-Durán, A.; Weston, J.; and Yakhnenko, O. 2013. Translating Embeddings for Modeling Multi-relational Data. In _Proceedings of NeurIPS_ , 2787–2795.
* Chen et al. (2019a) Chen, M.; Zhang, W.; Zhang, W.; Chen, Q.; and Chen, H. 2019a. Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs. In _Proceedings of EMNLP-IJCNLP_ , 4216–4225.
* Chen et al. (2019b) Chen, W.; Zhu, H.; Han, X.; Liu, Z.; and Sun, M. 2019b. Quantifying Similarity between Relations with Fact Distribution. In _Proceedings of ACL_ , 2882–2894.
* Christmann et al. (2019) Christmann, P.; Saha Roy, R.; Abujabal, A.; Singh, J.; and Weikum, G. 2019. Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion. In _Proceedings of CIKM_ , 729–738.
* Dettmers et al. (2018) Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2D Knowledge Graph Embeddings. In _Proceedings of AAAI_ , 1811–1818.
* Du et al. (2019) Du, Z.; Zhou, C.; Ding, M.; Yang, H.; and Tang, J. 2019. Cognitive Knowledge Graph Reasoning for One-shot Relational Learning. _arXiv preprint arXiv:1906.05489_ .
* Ji et al. (2016) Ji, G.; Liu, K.; He, S.; and Zhao, J. 2016. Knowledge Graph Completion with Adaptive Sparse Transfer Matrix. In _Proceedings of AAAI_ , 985–991.
* Karlinsky et al. (2019) Karlinsky, L.; Shtok, J.; Harary, S.; Schwartz, E.; Aides, A.; Feris, R.; Giryes, R.; and Bronstein, A. M. 2019. RepMet: Representative-Based Metric Learning for Classification and Few-Shot Object Detection. In _Proceedings of CVPR_ , 5197–5206.
* Lei et al. (2020a) Lei, W.; He, X.; de Rijke, M.; and Chua, T.-S. 2020a. Conversational Recommendation: Formulation, Methods, and Evaluation. In _Proceedings of SIGIR_ , 2425–2428.
* Lei et al. (2020b) Lei, W.; Zhang, G.; He, X.; Miao, Y.; Wang, X.; Chen, L.; and Chua, T.-S. 2020b. Interactive Path Reasoning on Graph for Conversational Recommendation. In _Proceedings of KDD_ , 2073–2083.
* Lin, Socher, and Xiong (2018) Lin, X. V.; Socher, R.; and Xiong, C. 2018. Multi-Hop Knowledge Graph Reasoning with Reward Shaping. In _Proceedings of EMNLP_ , 3243–3253.
* Lin et al. (2015) Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In _Proceedings of AAAI_ , 2181–2187.
* Liu et al. (2019) Liu, Z.; Miao, Z.; Zhan, X.; Wang, J.; Gong, B.; and Yu, S. X. 2019. Large-Scale Long-Tailed Recognition in an Open World. In _Proceedings of CVPR_ , 2537–2546.
* Lv et al. (2019) Lv, X.; Gu, Y.; Han, X.; Hou, L.; Li, J.; and Liu, Z. 2019. Adapting Meta Knowledge Graph Information for Multi-Hop Reasoning over Few-Shot Relations. In _Proceedings of EMNLP-IJCNLP_ , 3374–3379.
* Maaten and Hinton (2008) Maaten, L. v. d.; and Hinton, G. 2008. Visualizing Data Using t-SNE. _Journal of Machine Learning Research_ 9: 2579–2605.
* Mahdisoltani, Biega, and Suchanek (2015) Mahdisoltani, F.; Biega, J.; and Suchanek, F. 2015. YAGO3: A Knowledge Base from Multilingual Wikipedias. In _Proceedings of CIDR_.
* Moon et al. (2019) Moon, S.; Shah, P.; Kumar, A.; and Subba, R. 2019. OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs. In _Proceedings of ACL_ , 845–854.
* Nguyen et al. (2018) Nguyen, D. Q.; Nguyen, T. D.; Nguyen, D. Q.; and Phung, D. 2018. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In _Proceedings of NAACL_ , 327–333.
* Nickel, Tresp, and Kriegel (2011) Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2011. A Three-Way Model for Collective Learning on Multi-Relational Data. In _Proceedings of ICML_ , 809–816.
* Paszke et al. (2017) Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic Differentiation in PyTorch. In _Proceedings of NeurIPS Workshop_.
* Qin et al. (2020) Qin, P.; Wang, X.; Chen, W.; Zhang, C.; Xu, W.; and Wang, W. Y. 2020. Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs. In _Proceedings of AAAI_ , 8673–8680.
* Schlichtkrull et al. (2018) Schlichtkrull, M.; Kipf, T. N.; Bloem, P.; Van Den Berg, R.; Titov, I.; and Welling, M. 2018. Modeling Relational Data with Graph Convolutional Networks. In _Proceedings of ESWC_ , 593–607.
* See, Liu, and Manning (2017) See, A.; Liu, P. J.; and Manning, C. D. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In _Proceedings of ACL_ , 1073–1083.
* Seyed and David (2018) Seyed, M. K.; and David, P. 2018. SimplE Embedding for Link Prediction in Knowledge Graphs. In _Proceedings of NeurIPS_ , 4284–4295.
* Shi and Weninger (2018) Shi, B.; and Weninger, T. 2018. Open-World Knowledge Graph Completion. In _Proceedings of AAAI_ , 1957–1964.
* Shi et al. (2019) Shi, X.; Salewski, L.; Schiegg, M.; Akata, Z.; and Welling, M. 2019. Relational Generalized Few-Shot Learning. _arXiv preprint arXiv:1907.09557_ .
* Snell, Swersky, and Zemel (2017) Snell, J.; Swersky, K.; and Zemel, R. 2017. Prototypical Networks for Few-shot Learning. In _Proceedings of NeurIPS_ , 4077–4087.
* Toutanova and Chen (2015) Toutanova, K.; and Chen, D. 2015. Observed Versus Latent Features for Knowledge Base and Text Inference. In _Proceedings of CVSC_ , 57–66.
* Trouillon et al. (2016) Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; and Bouchard, G. 2016. Complex Embeddings for Simple Link Prediction. In _Proceedings of ICML_ , 2071–2080.
* Wang et al. (2021) Wang, X.; Huang, T.; Wang, D.; Yuan, Y.; and Chua, T. S. 2021. Learning Intents behind Interactions with Knowledge Graph for Recommendation.
* Wang et al. (2019) Wang, Z.; Lai, K.; Li, P.; Bing, L.; and Lam, W. 2019. Tackling Long-Tailed Relations and Uncommon Entities in Knowledge Graph Completion. In _Proceedings of EMNLP-IJCNLP_ , 250–260.
* Wang et al. (2014) Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In _Proceedings of AAAI_ , 1112–1119.
* Xiong, Hoang, and Wang (2017) Xiong, W.; Hoang, T.; and Wang, W. Y. 2017. DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning. In _Proceedings of EMNLP_ , 564–573.
* Xiong et al. (2018) Xiong, W.; Yu, M.; Chang, S.; Guo, X.; and Wang, W. Y. 2018. One-Shot Relational Learning for Knowledge Graphs. In _Proceedings of EMNLP_ , 1980–1990.
* Xu and Li (2019) Xu, C.; and Li, R. 2019. Relation Embedding with Dihedral Group in Knowledge Graph. In _Proceedings of ACL_ , 263–272.
* Yang et al. (2015) Yang, B.; Yih, W.-t.; He, X.; Gao, J.; and Deng, L. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In _Proceedings of ICLR_.
* Ye et al. (2019) Ye, H.-J.; Hu, H.; Zhan, D.-C.; and Sha, F. 2019. Learning Classifier Synthesis for Generalized Few-Shot Learning. _arXiv preprint arXiv:1906.02944_ .
* Zhang et al. (2019) Zhang, N.; Deng, S.; Sun, Z.; Wang, G.; Chen, X.; Zhang, W.; and Chen, H. 2019. Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks. In _Proceedings of NAACL_ , 3016–3025.
* Zhu et al. (2021) Zhu, F.; Lei, W.; Wang, C.; Zheng, J.; Poria, S.; and Chua, T.-S. 2021. Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering.
# An Overview of the State of the Art for Practical Quantum Key Distribution
Daniel D. Moskovich Center for Quantum Information Science and Technology,
Ben-Gurion University of the Negev, Israel
(Date: 9 April, 2015)
###### Abstract.
This is an overview of the state of the art for quantum key distribution (QKD)
as of March 2015. It is written by a non-expert for non-experts. Additions and
corrections are welcome.
## 1\. Introduction
The goal of this overview is to concisely summarize, in a way that is
accessible to a non-expert, where practical Quantum Key Distribution (QKD)
stands now in early 2015, and what seem to be promising directions for the
near future, to the best of the author’s knowledge and understanding.
We begin with a general overview of what QKD is, followed by a discussion of
the major practical QKD players at the moment, a discussion of protocols, and
a discussion of photon sources, transmission, and detection. We conclude with
a section on attacks against QKD.
## 2\. What is quantum key distribution?
Quantum key distribution (QKD) uses principles of quantum information theory
to ensure secure communication (Weisner, 1983; Bennett & Brassard, 1984;
Ekert, 1991). Its goal is for two parties called (A)lice and (B)ob to share a
secret key made up of $0$ s and $1$ s which they will later use to encrypt and
decrypt communications between them. The information used to compose the key
is carried between Alice and Bob on qubits (two state quantum systems).
The _SECOQC White Paper_ convincingly argued that QKD is a form of _trusted
courier_ (Alice hands a message to somebody she trusts, who carries that to
Bob), so that it is useful in contexts in which a trusted courier would be
useful (SECOQC, 2007). With idealized hardware and with perfect accuracy, the
advantages of QKD over classical trusted courier methods would be:
* •
Mathematically-proven security against all classical and quantum attacks.
* •
Alice and Bob can detect any active attempt by an eavesdropper (E)ve to
eavesdrop on the key distribution process.
The _Black Paper of Quantum Cryptography_ convincingly argued that real-life
QKD security is less than perfect, so that each different QKD setup should be
carefully and individually studied to assure its security (Scarani &
Kurtsiefer, 2014). All security threats discovered so far have been
redeemable, and we have no reason to believe that any given QKD setup cannot
be made perfectly secure against all known attacks in principle.
Real world QKD has become a focus of interest for industrial players, for
governments, and for security agencies.
## 3\. Fundamental challenges
A number of fundamental challenges to the widescale use of QKD have been
identified (Pritchard & Till, 2010).
1. (1)
Limited transmission rate and range. Both the range and the maximal bit-rate
of QKD are low compared to classical communications. It is considered
technologically impossible, for instance, to transmit a polarized photon
reliably over more than 400km of fiber, although quantum repeaters will allow
for longer range QKD.
2. (2)
QKD protocols are fundamentally point-to-point, and do not integrate with
packet-based protocols such as those used on the internet.
3. (3)
QKD requires expensive special-purpose hardware such as single-photon sources
and detectors. Such hardware is difficult to upgrade and to maintain.
4. (4)
QKD addresses only one aspect of the security problem. For example
authentication and integrity are not covered and must be handled classically.
5. (5)
There is nothing fundamentally wrong with existing classical cryptographic
techniques. Even if some classical ciphers (e.g. RSA) may be cracked using
quantum algorithms at some unspecified point in the future, other classical
ciphers are being developed that would be immune to quantum attacks.
6. (6)
Because it is a new technology, there are potential discovered and
undiscovered vulnerabilities in practical QKD systems. Indeed, several
proposed conditions for unconditional security of practical implementations
were found wanting and had to be revised, e.g. because key-length is finite
while many security proofs assume infinite-length keys (Inamori, Lütkenhaus &
Mayers, 2007; Tomamichel et al., 2012), or because the quantum effect of
_locking_ might allow unexpectedly large information leak during the error-
correction and privacy amplification steps (König et al., 2007; Iwakoshi &
Hirota, 2014; Yuen, 2013; Portmann & Renner, 2014). Commercial QKD systems
have been successfully hacked (Section 12.4). Because QKD is unconditionally
secure while the security of any quantum protocol implementation is a
probabilistic quantitative matter, these vulnerabilities will in principle
never be fatal flaws. But each new vulnerability might require re-tuning of
parameters and modifications to technological implementation.
## 4\. Advantages of QKD
1. (1)
QKD provides the possibility to establish a secret key in a way that is
provably secure against eavesdropping. Moreover, QKD can be composed with
other encryptions, so as to provide an additional layer of security for an
already secure message. For example a message encrypted using an RSA public
key may be once again encrypted using a quantum key. To intercept the message,
an attacker would have to break both the quantum key and the classical key.
2. (2)
Eavesdropping can be detected, following which countermeasures may be adopted.
This capability distinguishes QKD from all other encryption methods.
3. (3)
Expertise and knowledge gained in QKD research will, in large part, be useful
for developing technologies in future manifestations of the coming quantum
revolution predicted by Michael Berry when he said, “It is easy to predict
that in the twenty-first century, it will be quantum mechanics that influences
all our lives” (Berry, 1998).
## 5\. World QKD projects
### 5.1. Large scale networks
[Figure: Microwave apparatus used in quantum experiments. Retrieved from http://www.foxnews.com/tech/2013/05/08/quantum-network-secretly-running-for-2-years/]
In the last 10 years, a number of multi-user QKD networks have been
constructed. All use relay between trusted nodes and optical switching. The
first of these was the $10$–node DARPA Quantum Network which has been
operating since 2004 (Elliot et al., 2005). It uses active optical switching
(i.e. an electrically powered switching device similar to a router) to
distribute the key between pairs of nodes. It is being developed by BBN
Technologies, Harvard University, Boston University and QinetiQ, with the
support of the US Defense Advanced Research Projects Agency (DARPA). The
SECOQC (Secure Communication Based on Quantum Cryptography) Quantum Network is
an EU project which integrated several different QKD systems into one quantum
backbone (QBB) network, developing a cross-platform interface
(http://www.secoqc.net/). This provided impetus for the European
Telecommunications Standards Institute (ETSI) to launch an industry
specification group for QKD (ISG-QKD) in order to create universally accepted
QKD standards (ETSI, 2015). The Swiss Quantum Network and the Durban Network
are testing long-term QKD operation in field environments
(http://swissquantum.idquantique.com and (Mirza & Petruccione, 2010)).
Transparent network implementations of QKD using only beam splitters, which
facilitate secure communication without requiring clients to be reconfigured,
have been demonstrated by several groups (Telecordia Technologies, Universidad
Politécnica de Madrid and Telefónica Investigación y Desarrollo, and two teams
from the University of Science and Technology of China). The Tokyo QKD network
used a central Key Management Service (KMS) and newer technologies to increase
its speed to the point of transmitting a QKD-secured live teleconference
between two nodes (Sasaki et al., 2011). This is suitable for government or
municipal settings in which one central body controls the flow of information.
Mitsubishi combined this system with an application for secure telephony to
demonstrate QKD-secured mobile telephony (Mitsubishi, 2015). Finally, Los
Alamos National Laboratory (LANL) runs a hub-and-spokes one-to-many quantum
network (Hughes et al., 2013). The LANL photon generator has been miniaturized
to around the size of a house key.
China is currently constructing a 1200-mile line between Beijing and Shanghai
as part of a proposed $20$-node QKD network which it aims to complete in 2016.
Its current network, the Hefei-Chaohu-Wuhu wide area QKD network, is the
largest in the world (Wang et al., 2014). To overcome the need to use trusted
nodes, where one compromised node could impact the security of the entire
network, there has been work aimed at using techniques of classical multiple
access optical communication in the quantum context (Sarwar Pasha & Bala Ram,
2014). Such technologies have been applied for one part of the DARPA network,
and also for an experimental three-node network at NIST (see
http://www.nist.gov/itl/quantum/threeusernetwork.cfm).
Taking the above technologies into account, the Engineering and Physical
Sciences Research Council (EPSRC) estimated in their 2014 report that hand-held QKD
systems should be commercially available “with sufficient investment and
encouragement” within 4–7 years, and that long-range highly-connected quantum
networks should become available within 10–25 years (EPSRC, 2014).
### 5.2. University Centres
There are a growing number of university centers in the world which specialize
in quantum communication. We list a few of the most active.
The world’s foremost dedicated quantum communications center is the Group of
Applied Physics (GAP) at Geneva University (http://www.unige.ch/gap/quantum/)
and their commercial spinoff company Id Quantique
(http://www.idquantique.com/). They have developed what is today the world’s
best single photon detector (Korzh _et al._ , 2014) with which they have
achieved the current world record distance for QKD through fiber (Korzh _et
al._ , 2015). They also produce and sell photon detectors and random number
generators using patented technologies.
The Centre for Quantum Technologies (CQT) in Singapore, directed by Ekert who
developed the E91 protocol, specializes in quantum hacking
(http://www.quantumlah.org/). They have developed several successful attacks,
which have taken advantage both of side-channels (e.g. (Lamas-Linares &
Kurtsiefer, 2007)) and of erroneous parameters in security proofs (e.g.
(Gerhardt et al., 2011)).
The Institute of Quantum Computing (IQC) in Waterloo also has a research group
working on QKD. It is directed by Norbert Lütkenhaus, who previously worked at
MagiQ to develop practical QKD. Vadim Makarov of that group discovered some
successful side-channel attacks against QKD (e.g. (Makarov _et al._ , 2006;
Makarov, 2009)).
The Key Laboratory of Quantum Information is China’s leading quantum
information center, which is creating the world’s longest and most
sophisticated QKD networks
(http://en.physics.ustc.edu.cn/research_9/Quantum/201107/t20110728_116550.html).
[Figure: Artist’s conception of quantum key distribution over free-space ground-to-satellite quantum information links. Retrieved from http://www.esa.int/ESA]
### 5.3. Commercial companies
A number of commercial companies sell QKD systems and related devices. MagiQ
Technologies in the US sells the QPN-8505, a QKD system which combines BB84
QKD with classical 3DES and AES encryption (http://www.magiqtech.com/). It
works using decoy-state optimized BB84, with a secure key rate of 256Hz over
100km, or 140km with decoy states. In Europe the leading QKD company is Id
Quantique, whose flagship product is the Clavis2, a pure QKD system
(http://www.idquantique.com/). Clavis2 implements both BB84 and SARG04, with
secure key-rates of around 1KHz on a 25km fiber. SeQureNet is a Paris-based
company that produces QKD parts and specializes in continuous-variable
(CV) QKD (http://sequrenet.com). Quintessence Labs in Australia provides true
random number generators (http://www.quintessencelabs.com/).
## 6\. Protocols
### 6.1. BB84
The most widely used QKD protocol, which was also the world’s first QKD
protocol, was developed by Bennett and Brassard in 1984, and is called BB84.
It is typically divided into three layers: the physical layer, in which the
quantum communication is carried out; the key-extraction layer, in which the
actual key is extracted from the qubits that Alice sent to Bob; and the
key-application layer, where the secret key is used to encode a communication
such as a telephone or video conversation (Bennett et al., 1992; Gisin et al.,
2002).
In the physical layer, Alice sends random bits on photons, $1$ with 50%
probability and $0$ with 50% probability, encoded either in the so-called $X$
basis or in the so-called $Z$ basis, each with 50% probability. Bob measures
each photon he receives in a random basis, either the $X$ basis with 50%
probability or the $Z$ basis with 50% probability. This is the
hardware-intensive portion of the protocol, for which good random-number
generators, single-photon sources, and single-photon detectors are required.
In the key-extraction layer, BB84 becomes classical. The first classical
sublevel is called _sifting_. Alice and Bob both reveal which bases they used
over a public channel, and discard the bits which they measured in different
bases. The second sublevel is called _authentication_. In it, Alice and Bob
compare some of their sifted bits over the public channel to determine whether
eavesdropping has occurred. If the bits they compare differ more than can be
accounted for by random noise, then they can conclude that Eve has
eavesdropped, and adopt countermeasures. The reason they can make this
deduction is that Eve’s direct attack, intercept-resend (Section 12.1), would
involve measuring some of the bits sent by Alice and sending them on to Bob.
But since Eve does not know which basis was originally used by Alice, she will
choose the wrong basis with 50% probability, in which case she sends a
wrong-basis qubit to Bob, and Bob’s measurement of that qubit then yields an
erroneous sifted bit with 50% probability. The third sublevel is called _error
correction_. In it, Alice and Bob apply classical error-correcting algorithms
to remedy the effect of random errors caused by channel noise and non-ideal
equipment. The fourth and final sublevel is called _privacy amplification_ ,
in which Alice and Bob apply classical cryptography algorithms to minimize the
effect on the final key of any under-the-radar eavesdropping by Eve. In other
words, security of a QKD key is always a quantitative affair because of
non-ideal equipment and channel noise, so some non-trivial information might
have been picked up by Eve without being detected in the authentication phase.
But the amount of leaked information is guaranteed to be below a certain
threshold, and privacy amplification can negate the knowledge about the final
key which that partial information imparts.
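The physical and sifting layers are easy to capture in a toy simulation; the sketch below assumes ideal, noiseless hardware and is purely illustrative (it also reproduces the roughly 25% error rate of the intercept-resend attack of Section 12.1).

```python
import random

def bb84_sift(n=10_000, eve=False):
    """Toy BB84: returns the error rate of the sifted key."""
    alice_key, bob_key = [], []
    for _ in range(n):
        bit, a_basis = random.randint(0, 1), random.choice("XZ")
        send_bit, send_basis = bit, a_basis
        if eve:                                   # intercept-resend in a random basis
            e_basis = random.choice("XZ")
            send_bit = bit if e_basis == a_basis else random.randint(0, 1)
            send_basis = e_basis
        b_basis = random.choice("XZ")
        b_bit = send_bit if b_basis == send_basis else random.randint(0, 1)
        if a_basis == b_basis:                    # sifting: keep matching bases only
            alice_key.append(bit)
            bob_key.append(b_bit)
    return sum(a != b for a, b in zip(alice_key, bob_key)) / len(alice_key)

print(bb84_sift(eve=False))   # ~0.00 on a noiseless channel
print(bb84_sift(eve=True))    # ~0.25: eavesdropping is detectable
```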
[Figure: The BB84 protocol. Retrieved from http://swissquantum.idquantique.com/?Key-Sifting]
### 6.2. Modified BB84 protocols
The best known modification of BB84 is SARG04, which adapts it for use with
attenuated laser pulses (Scarani et al., 2004). SARG04 is more robust than
BB84 against so-called ‘coherent attacks’, but unfortunately it performs worse
against certain ‘incoherent attacks’ (Branciard et al., 2005).
Lo, Chau, and Ardehali present a modification of BB84 which essentially
doubles its efficiency (Lo _et al._ , 2005b). The key differences are that
significantly different probabilities are assigned to the two bases so that
few bits are discarded, and that key extraction is performed separately for
data in each of the bases. The Cambridge–Toshiba team further improved
efficiency and included decoy states, developing a new protocol called T12
(Lucamarini _et al._ , 2013). The authors prove it to be unconditionally
secure. As of February 2015, this is the protocol with which the highest
ranges and secure key rates have been obtained (Korzh _et al._ , 2015).
Decoy-state QKD addresses the problem that the secure key rate of a quantum
key from a coherent source scales like the square of the transmittance (the
proportion of photons that make it through from Alice to Bob) of the medium,
so that generating a secure key takes impractically long over long distances.
For decoy-state QKD, the key rate scales like the transmittance itself.
Three-source decoy-state QKD was used in (Lucamarini _et al._ , 2013).
Decoy-state QKD also works with non-coherent sources such as PDC sources (Ma &
Lo, 2008).
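The practical impact of this scaling difference is easy to see numerically. In the sketch below, the 0.2 dB/km fiber loss is a typical figure for telecom fiber, assumed here for illustration rather than taken from the text.

```python
# Key-rate scaling with transmittance eta: ~eta^2 (coherent source, no decoys)
# versus ~eta (decoy-state QKD). Assumes 0.2 dB/km fiber loss.
for km in (50, 100, 150, 200):
    eta = 10 ** (-0.2 * km / 10)
    print(f"{km:>3} km: eta = {eta:.1e}   eta^2 = {eta**2:.1e}")
```

At 200km, for example, $\eta^{2}$ is four orders of magnitude below $\eta$, which is why the decoy-state scaling matters at long range.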
Additionally, there has been work on measuring-device independent (MDI) QKD,
in which Alice and Bob independently prepare phase randomized coherent pulses
in one of the four BB84 states (with decoy states) and send these to an
untrusted third party, Charlie. Charlie then performs Bell state measurements
(BSM), and announces to Alice and Bob over a public channel the successful BSM
events. Alice and Bob can get a sifted key by dropping events where they sent
pulses in different bases (Wang, 2013). This has been implemented and gives
good key rates in the laboratory (Tang et al., 2014). A further improvement
has been examined, using four-source decoy states (Jiang et al., 2015).
There has been recent work to modify the BB84 protocol to deal with higher bit
error rates on the sifted key, in order to distribute quantum keys for longer
distances without using repeaters in a way that is compatible with optical
amplification (Hughes & Norholdt, 2014).
### 6.3. Continuous Variable (CV) QKD
Continuous-variable (CV) QKD protocols employ continuous or discrete
modulations of the quadratures of an electromagnetic field. CV-QKD setups rely
on a coherent detection between the quantum signal and a classical reference
signal, and their implementation requires only standard telecom components.
They are compatible with wavelength division multiplexing, which greatly eases
their deployment into telecommunication networks. They should be easier to
integrate on silicon photonics chips (Jouguet et al., 2013; Kumar, Qin, &
Alléaume, 2014).
The bottleneck for CV-QKD is a classical cryptography problem, that of error-
correction. For a long time, the range of CV-QKD was limited to 25-30km. New
error-correcting codes have improved this range to 80km (Jouguet et al.,
2012). SeQureNet’s Cygnus module for CV-QKD features this range
(http://sequrenet.com/products.html). Currently, CV-QKD keyrates are
competitive with DV-QKD keyrates up to about 30km. But it may be more
difficult to increase ranges for CV-QKD than for DV-QKD because security
proofs for CV-QKD are penalized heavily for finite size effects. An additional
concern is that CV-QKD is a newer technology, and therefore has different
vulnerabilities, some of which may be unmapped. Several potential
vulnerabilities have been identified and addressed in (Jouguet, Kunz-Jacques,
& Diamanti, 2013; Huang et al., 2014; Kunz-Jacques & Jouguet, 2015).
In light of the above, CV-QKD should be considered a promising future
technology for medium-range QKD. The state of the art for CV-QKD is surveyed
in (Jouguet et al., 2014).
### 6.4. Entanglement-based protocols
There are a number of protocols involving entangled pairs of photons, chief
among these being E91 (Ekert, 1991). In E91, Alice and Bob each have half of
an entangled state (EPR pair, or singlet). The working concept of this scheme
is that there is nothing for Eve to intercept, as the qubit state manifests
only after a measurement has been made. If Eve attempts an intercept-resend
attack, her measurement will break the entanglement between the photons.
The protocol proceeds as follows. Alice and Bob each choose one of two
different measurement bases, each with 50% probability. After performing their
measurements, they disclose which bases they used over a public channel. If
the results of measurements made in different bases violate a Bell inequality,
then the state was still entangled and there has been no eavesdropping.
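The Bell-test step can be illustrated with the CHSH quantity. The sketch below uses the textbook singlet correlation $E(a,b)=-\cos(a-b)$ and the standard analyzer angles; it illustrates the principle only, not any particular implementation.

```python
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

# Standard CHSH angles: |S| <= 2 for any local hidden-variable model,
# while an undisturbed singlet reaches 2*sqrt(2) ~ 2.828.
a1, a2, b1, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * math.sqrt(2))   # both ~2.828; a drop toward 2 signals Eve
```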
Despite being theoretically more secure than BB84 and its variants (fewer
side-channels and thus fewer bits required for a secure key), entanglement-
based protocols are not currently considered to be practical for long-range
large-scale systems because of the difficulty of controlling entangled pairs
caused by decoherence (Scarani & Kurtsiefer, 2014).
### 6.5. Counterfactual QKD
Tae-Gon Noh has demonstrated that QKD can be achieved ostensibly without
sending the key through the quantum channel (Noh, 2009). The quantum principle
in play is that the possibility of sending a photon can be detected even if
the photon is seemingly not actually sent. Counterfactual QKD has been
demonstrated experimentally in the laboratory (Liu et al., 2011).
## 7\. Real-time key extraction
Key generation bandwidth in a pure CPU-based implementation has been shown to
saturate at rates of around 1MHz (Restelli et al., 2009). High speed QKD
networks routinely exceed this data rate— for example, the NIST system
generates sifted keys at around 2MHz and has a maximal capacity above 30MHz.
For secure real-time practical applications, GHz data rates are anticipated.
In order to shift the bottleneck from the classical computation layer back to
the physical layer where it should be, hardware-based implementations have
become necessary.
Sifting is computationally straightforward, and is relatively easy to perform
at high speed. There is good privacy-amplification software which can work
directly on a CPU-based system (Zhang et al., 2014), and the Wegman–Carter
strongly universal hashing method, as used by e.g. Id Quantique, is also good.
It is the error correction step which is complicated and which sets a hard
upper limit on the secure key rate.
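To make the privacy-amplification step concrete, here is a minimal sketch of Toeplitz-matrix hashing, one standard universal hash family of the Wegman–Carter type mentioned above; the key sizes and seed handling are illustrative only.

```python
import numpy as np

def toeplitz_hash(key_bits: np.ndarray, out_len: int, seed: int = 0) -> np.ndarray:
    """Compress `key_bits` to `out_len` bits with a random binary Toeplitz
    matrix; the seed would be agreed publicly by Alice and Bob."""
    rng = np.random.default_rng(seed)
    n = len(key_bits)
    diag = rng.integers(0, 2, n + out_len - 1)       # defines all diagonals
    T = np.array([[diag[i - j + n - 1] for j in range(n)] for i in range(out_len)])
    return (T @ key_bits) % 2

sifted = np.random.default_rng(1).integers(0, 2, 128)
final_key = toeplitz_hash(sifted, out_len=64)        # shorter but more private key
```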
The Cascade error-correction algorithm, developed for QKD in (Brassard &
Salvail, 1994), is the fastest at current data rates, and is best implemented
in a Field Programmable Gate Array (FPGA) because it requires many simple but
different logical bit-level operations. When the NIST QKD network began to
exceed data rates of 1MHz, implementation of the Cascade algorithm was moved
to hardware (Mink et al., 2006). The maximal throughput they were able to
achieve was 12MHz in theory, but they were not able to approach that limit in
a practical system due to timing jitter in their photon detectors (Mink &
Bienfang, 2013). The Wuhu metropolitan area QKD network uses a similar FPGA-
based system (Zhang et al., 2012).
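The core primitive of Cascade is a binary search over block parities. A minimal sketch of this BINARY step is given below; it assumes exactly one error in the block, and the full algorithm iterates such searches over shuffled blocks of increasing size.

```python
def binary_locate(alice: list, bob: list) -> int:
    """Locate a single bit error in a block whose total parities differ,
    exchanging one parity bit per halving over the public channel."""
    lo, hi = 0, len(alice)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if sum(alice[lo:mid]) % 2 != sum(bob[lo:mid]) % 2:
            hi = mid          # error lies in the left half
        else:
            lo = mid          # error lies in the right half
    return lo                 # index of the erroneous bit in Bob's block
```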
For next-generation real-time error correction as data rates push towards the
GHz mark, the Low-Density Parity-Check (LDPC) algorithm is expected to replace
the Cascade algorithm for error correction (Elkouss et al., 2009). The LDPC
algorithm requires 20 to 30 bytes of memory per bit of data being corrected,
as opposed to 1 or 2 bytes for the Cascade algorithm. On an FPGA, LDPC
implementation rates of up to 607MHz have been reported (Mhaske et al., 2015).
The current fastest implementation of the LDPC algorithm runs on a GPU-based
system (Falcão et al., 2009) and has been tested for QKD (Martinez-Mateo et
al., 2013; Dixon & Sato, 2014). For even faster rates, LDPC performance of 47
GHz has been reported for a custom chip implementation but not in the context
of QKD (Zhang et al., 2009).
## 8\. Hardware: Photon sources
A common method of encoding qubits is the use of polarized photons (less
common methods include time-bin encoding (Marcikic et al., 2002) and frequency
encoding (Zhu et al., 2011)). To preclude photon number splitting attacks,
each qubit should be sent on a single photon.
The ideal single-photon source would send a single photon 100% of the time
whenever the user wishes (“on demand”), would send multiple photons 0% of the
time, and the photons it sends would be indistinguishable.
If a photon cannot be sent 100% of the time on demand, the detector might have
to be left on for a longer time, increasing ‘dark count’ (detection of photons
which were not sent to it) and thus increasing noise. If the source were to
send multiple photons, then Eve would be able to intercept one photon and
transmit the remaining photons to Bob, executing a _photon number splitting
attack_. And if photons were distinguishable, then intercepting one photon
could give non-trivial information about another photon.
Photon sources are classified as _deterministic_ versus _probabilistic_. A
deterministic single photon source emits a single photon on demand, whereas a
probabilistic source might emit more than one photon, and its photon emission
timing might not be entirely on demand. One should note, however, that even the most
‘deterministic’ photon source might in practice exhibit probabilistic
behaviour because, for example, photons might get lost during emission with
some probability (“ _extraction loss_ ”).
A common measure for the efficiency of a single-photon source is the _$2$ nd
order correlation function_ $g^{(2)}$. If $g^{(2)}=1$, this means that the
number of photons emitted by the source follows a Poisson distribution, which
is the distribution one would expect from completely random and uncorrelated
emissions. It is usually assumed that $g^{(2)}=1$ for attenuated lasers,
although, as pointed out by the European Telecom Standards Institute, standard
number GS QKD 003 Section 6.4.1 (ETSI, 2015), experiment has not always borne
this out and perhaps further study is necessary e.g. when the diode is biased
close to the lasing threshold (Dixon _et al._ , 2009). The $g^{(2)}<1$
situation is referred to as _photon antibunching_. In this case the
probability of emitting one photon relative to the probability of emitting
multiple photons is higher than in a Poisson process. The ideal state is
$g^{(2)}=0$, which means that we get a single photon $100\%$ of the time.
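In practice $g^{(2)}(0)$ is estimated from a Hanbury Brown–Twiss measurement: a beam splitter with one detector at each output port, counting singles and coincidences per pulse. A sketch of the standard estimator follows; the example numbers are hypothetical.

```python
def g2_zero(n_pulses: int, clicks_1: int, clicks_2: int, coincidences: int) -> float:
    """g2(0) ~ P(coincidence) / (P(click 1) * P(click 2)), all per pulse:
    1 for Poissonian light, < 1 for antibunching, 0 for an ideal source."""
    p1, p2, pc = clicks_1 / n_pulses, clicks_2 / n_pulses, coincidences / n_pulses
    return pc / (p1 * p2)

# Hypothetical strongly antibunched source: g2(0) = 0.1
print(g2_zero(n_pulses=10**6, clicks_1=50_000, clicks_2=50_000, coincidences=250))
```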
The most common single photon sources are attenuated lasers, in which a laser
beam is sent through a powerful attenuator which weakens it to the point that
the probability of emitting one photon is greater than the probability of
emitting multiple photons. Attenuated lasers are relatively cheap, convenient,
and robust.
When higher performance (lower $g^{(2)}$) is desired, the most common single-
photon sources make use of parametric down-conversion (PDC). This type of
source is not on-demand, but it probabilistically produces a pair of photons,
one of which can be used as a _heralding photon_ to instruct the detector to
activate. This is a major advantage in QKD where it is important to minimize
detector dark-count. The heralding photon could also form an entangled pair
with the first photon, although here PDC makes it difficult in general to
obtain the desired wavelength and phase-matching for the pair (Eisaman _et
al._ , 2011).
A promising future technology is the use of nitrogen vacancy (NV) color
centers in diamond as single photon sources. An NV center is a defect in a
diamond lattice which occurs when a nitrogen atom is substituted for a carbon
atom, leaving a vacancy next to it. As single photon sources, NV centers are
on-demand and exhibit low $g^{(2)}$. The current challenges are that they are
not identical, although some tunability has been demonstrated (Tamarat et al.,
2006), and that the ‘shelving level’ reduces their efficiency. There are
several proposed approaches to solving these problems (e.g. (Babinec _et al._
, 2010)); but it is the promise of a single photon coupled with a long-lived
spin qubit (the vacancy itself is an excellent qubit) that makes NV centers
especially promising. Note also that must be thousands of optical defects in
solids which could potentially be used for single-photon generation; only two
of these have so far been seriously studied in this context (Santori, Fattal,
& Yamamoto, 2010).
[Figure: An NV center used as a single photon source. Retrieved from http://xqp.physik.uni-muenchen.de/research/single_photon/index.html]
There are many other single-photon sources, including single atoms, ions, and
molecules, ensembles, quantum dots, nanowires, four-wave mixing, and
mesoscopic quantum wells, but these do not currently seem as suitable for QKD
as the sources discussed above.
## 9\. Transmission
Quantum key distribution can be performed through fiber, through free space,
or (experimentally) bounced off a satellite. The principles of sending photons
through fiber and through free space are the same, but fiber provides a
channel in which the amount of noise can be determined and even controlled to
some extent, whereas the amount of noise in a free-space channel is unknown
(although sometimes one may try to estimate it as in e.g. (Gabay & Arnon,
2005)) and typically is changing. Wavelengths used to send photons are
typically around 800nm for free space and around 1550nm for fiber.
Experimental QKD usually uses dark fibers with no other signals passing
through them, but real-world applications will typically involve sending
messages through bright fibers which are carrying other signals. Scattering
effects in bright fibers raise the BER of Alice’s transmissions, and cause
more of Alice’s photons to ‘get lost on the way’. Despite this, by smartly
time-filtering QKD photons and other communication photons, in 2012 a team
from Toshiba was able to obtain a secure bit-rate of 507KHz over a 95km bright
fiber, several orders of magnitude beyond what had been achieved previously
(Patel et al., 2012).
The greatest distances over which positive key rates have been experimentally
obtained are 307km through fiber (Korzh _et al._ , 2015) and 144km through
free space, between two Canary Islands (Ursin _et al._ , 2007). The problem
with free-space transmission is atmospheric turbulence: random fluctuations in
the refractive index of air. One potential solution is to bounce the polarized
photons off satellites. The distance to the International Space Station is
400km, but the atmospheric thickness involved is an order of magnitude smaller
than in the Canary Islands experiment. In 2014 a team from the University of
Padua bounced photons off four satellites to show feasibility (Vallone _et
al._ , 2014); China claims to have done so as well, and aims to have a
dedicated QKD satellite in orbit by 2016 (Yikra, 2014). When such technologies
become practically viable, they will significantly increase free-space QKD
ranges.
## 10\. Hardware: Photon detection
For Bob to receive qubits from Alice in the form of single photon
polarizations, Bob needs to have a good single photon detector. The main
technological bottleneck in the development of practical and secure QKD
systems for short to medium distances is the development of good single photon
detectors. Thus, in the last few years, any improvement to single photon
detection technology has immediately led to improved QKD capabilities. Our
main references for this section are (Eisaman _et al._ , 2011) and (Hadfield,
2009).
[Figure: Single crystal diamond nanowires for photon detection. Retrieved from http://www.osa-opn.org/opn/media/Images/photocontests/gallery09_36.jpg]
An ideal single photon detector for QKD should have 100% _detection
efficiency_ (every photon sent to the detector should be successfully
detected), 0% _dark count_ (the detector should not detect photons which were
not sent to it), no _dead time_ (the recovery time for the detector after it
has detected a photon until it can detect another photon), and no _timing
jitter_ (the time between the photon’s arrival and its registration by the
detector). Additionally, an ideal detector would have complete _photon number
resolution_ , meaning that it would be able to count the number of photons it
had received. It would also be _asynchronous_ , meaning that it need not know
the arrival times of photons in advance.
Low detection efficiencies and high dark counts create noise in the
communication channel, reducing its capacity. A low-capacity channel is
vulnerable to an intercept-resend attack, because it is difficult to detect
eavesdropping in the presence of random noise (Section 12.1). High dead time
reduces the channel bit rate, and creates a vulnerability to faked-state
attacks and to time-shift attacks (Makarov _et al._ , 2006; Burenkov _et al._
, 2010; Makarov, 2009). Timing jitter can lead to a leak of secret key
information (Lamas-Linares & Kurtsiefer, 2007). Poor or nonexistent photon
number resolution creates a vulnerability to photon number splitting attacks.
A single photon detector typically works by converting a photon into a charge
carrier which in turn triggers an avalanche process in a physical system which
is held very close to a critical state, leading to a macroscopic current
pulse.
Superconducting nanowire single photon detectors (SNSPD) are the best single
photon detectors known currently. They were first developed in 2001. They have
high detection efficiency (ten times better than the best semiconductor
detectors), low dark count, low dead time, and low time jitter. They are also
fully asynchronous. Their drawback is that they require cryogenic temperatures
($<3K$) to operate, which makes them bulky and expensive, and has limited
their uses outside the lab. There is work being done, however, on variations
of SNSPD that can function at temperatures of over $20K$, making them a
promising future technology for military and government applications (Wang,
Miki, & Fujiwara, 2009).
Single photon avalanche detectors (SPADs) are the single photon detectors
which are most currently used in practice. They are cheap and compact, with
high detection efficiency and low time jitter. They are also fully
asynchronous. The challenge in building good SPADs has been _afterpulsing_ ,
which is the phenomenon of a spontaneous dark count occurring shortly after a
photon detection. If we wait until afterpulsing ends before reactivating the
SPAD, then we increase the dead time.
There has recently been dramatic progress in SPAD design. In 2013, the
University of Geneva Applied Physics team developed an InGaAs negative
feedback avalanche diode (NFAD) single photon detector whose performance
rivals many SNSPD systems, but which operates at temperatures of approximately
150–220 K (as opposed to $<3K$ for SNSPD systems) (Korzh _et al._ , 2014).
Using these InGaAs NFADs, the same team were able in February 2015 to
demonstrate provably secure QKD transmission over 307km of optical fiber,
which is the current record (Korzh _et al._ , 2015).
## 11\. Auxiliary systems
### 11.1. Random number generators
A QKD system is only as good as its random number generator. If Eve can
predict Alice and Bob’s random choices, then she can read the entire key.
Random number generators are the entire selling point of Quintessence Labs.
CPU-based random number generators are trusted for many classical cryptography
tasks, and are implemented in most operating systems. The numbers they produce
are not truly random, however, and therefore they are usually referred to as
_pseudorandom number generators_. When higher speeds are required and when
stronger random numbers are needed, hardware-based implementations are
preferred. These come in two flavours— they either use filtered random
physical processes within the FPGA as random number seeds (Tsoi et al., 2007;
Kwok & Lam, 2006), or they use less random seeds and strong permutations
(Alimohammad et al., 2008; Cheung et al., 2007; Xiang & Benkrid, 2009).
Currently, both alternatives are considered cryptographically equivalent.
In QKD, in order to physically guarantee unconditional security, quantum
effects are desired for use as true random number generators (TRNG). A quick
and dirty way to do this, for Alice at least, is to send an unpolarized single
photon through a beam splitter: if it comes out one end, count that as a 0,
and if it comes out the other end, count it as a 1. A more sophisticated
version of this scheme, which eliminates the bias caused by an imperfect beam
splitter, is marketed by Id Quantique, and another by Quintessence Labs,
reaching rates of 16 KHz. Looking to the future, an experimental idea with
great promise is to use quantum vacuum fluctuations for high-bandwidth truly
random number generation of up to 100GHz (Jofre et al., 2011).
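On the classical side, residual bias in such a raw bit stream can be removed with the von Neumann extractor, sketched below; this is a standard debiasing technique, not necessarily the one used in the commercial products just mentioned.

```python
def von_neumann_extract(raw_bits):
    """Map bit pairs 01 -> 0 and 10 -> 1, discarding 00 and 11; the output
    is unbiased provided the raw bits are independent and identically biased."""
    return [a for a, b in zip(raw_bits[::2], raw_bits[1::2]) if a != b]
```

The price of the debiasing is throughput: at least three quarters of the raw bits are discarded on average.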
We note that TRNGs arise as commercial spinoffs of QKD projects.
### 11.2. Memories and repeaters
[Figure: The concept of a quantum repeater. Retrieved from http://www.uqcc.org/research/index.html]
To extend the range of QKD beyond a few hundred kilometers, quantum repeaters
will be necessary, which in turn will require quantum memories. A quantum
memory absorbs a photon, stores its quantum states for as long as possible,
and releases it on demand. A key feature is that it does not break
entanglement. The primary candidates for practical quantum memories for QKD in
the near future are Raman gas based quantum memories (Simon et al., 2010) and
quantum memories using NV centers (de Riedmatten & Afzelius, 2015). The
advantages of the former include its greater capacity, while the advantages of
the latter include that it is solid-state and that it allows longer storage
times. It is still unclear which of these approaches will be best.
It is still unclear which repeater technology will be best, although the first
quantum repeaters which outperform direct transmission will probably be based
on atomic ensembles, linear optics, and photon counting (Sangouard et al.,
2011).
Quantum memories and repeaters are expected to become a commercial technology
within 10-15 years.
## 12\. Attacks
While QKD protocols operating under certain conditions are unconditionally
secure, practical implementations have been successfully attacked. None of
these attacks is fatal to the QKD concept, as effective countermeasures to
each attack have been devised, but it is generally agreed that the security of
each setup should be the object of a dedicated study whose goal is to find and
patch all vulnerabilities (Scarani & Kurtsiefer, 2014).
### 12.1. Intercept-resend
The simplest and most direct attack against BB84 and its relatives is for Eve
to intercept a photon sent by Alice to Bob, to measure that photon, to prepare
her own photon encoding the bit which she measured, and to send that photon
off to Bob. Because Eve doesn’t know in which basis Alice’s photon was sent,
she’ll measure in the wrong basis approximately half of the time, and she will
send a photon in the wrong basis to Bob approximately half of the time. An
intercept-resend attack thus introduces a bit error rate (BER) of around 25%,
although weaknesses in certain practical systems allow modified versions of
this attack to introduce BERs of 19.7% (Xi, Qi, & Lo, 2010). Because
acceptable BERs in commercial systems are around 8%, practical QKD is indeed
secure against pure intercept-resend attacks, which are caught during the
error-correction key-establishment phase: if the error rate is too high, then
Eve has been there.
### 12.2. Photon Number Splitting (PNS)
Due to hardware limitations, most photon sources used in QKD are not true
single photon sources in the sense that there is a non-negligible probability
that they will generate multiple photons to transmit a single qubit. If Alice
sends two or more identical photons to Bob, then Eve can split off one photon
and send the remaining photons through. Eve stores the photon she has split
off in quantum memory until Alice has revealed her encoding basis. Then Eve measures
her photons in the correct basis and gains information about the key.
A successful photon number splitting attack requires sophisticated equipment—
Eve must be able to count photons and to split off just one to quantum memory
while sending others through. Moreover, various countermeasures have been
developed. Better single-photon sources and modifications of the BB84
protocol, such as for instance SARG04, make successful PNS attacks much more
difficult to carry out. Another solution is to use decoy states, in which
photons are randomly sent at different intensities. The security of decoy
state QKD against PNS attacks was proven in (Lo _et al._ , 2005a). Because a
successful photon number splitting attack is much more difficult to carry out
against decoy-state QKD, we can use attenuated lasers as photon sources when
transmitting keys, increasing secure key rates (Yuan, 2007). The current
state of the art for decoy-state QKD is a 320 MHz system over 200 km of
fiber, yielding a 15 Hz secure key rate (Liu _et al._ , 2010).
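The root of the problem is Poissonian photon statistics: a weak coherent pulse with mean photon number μ emits n photons with probability e^(-μ)μ^n/n!. The sketch below is our illustration (the μ values are typical textbook choices, not figures from the cited experiments); it shows the fraction of non-empty pulses that carry two or more photons and are therefore PNS-taggable.

```python
# Fraction of non-empty weak-coherent pulses that carry >= 2 photons and are
# therefore vulnerable to photon number splitting (illustrative sketch).
from math import exp, factorial

def poisson(n: int, mu: float) -> float:
    return exp(-mu) * mu**n / factorial(n)

for mu in (0.1, 0.5):
    p_empty = poisson(0, mu)
    p_multi = 1 - p_empty - poisson(1, mu)          # P(n >= 2)
    taggable = p_multi / (1 - p_empty)              # among pulses Eve can see at all
    print(f"mu={mu}: P(n>=2)={p_multi:.4f}, taggable fraction={taggable:.4f}")
```

Lowering μ suppresses multi-photon emission but leaves most pulses empty, throttling the key rate; decoy states break this trade-off by letting Alice detect PNS activity statistically, so a larger μ can be used safely.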
### 12.3. Timing attacks
When different light sources are used for beams in different polarizations,
and/or different detectors are used to make different measurements, it may be
possible to ‘listen in’ on which bit was sent or which basis was used
without actually intercepting a photon. Such side-channels were evident
already in the first implementations of QKD, as noted by Brassard (Brassard,
2005):
> The funny thing is that, while our theory had been serious, our prototype
> was mostly a joke. Indeed, the largest piece in the prototype was the power
> supply needed to feed in the order of one thousand volts to Pockels cells,
> used to turn photon polarization. But power supplies make noise, and not the
> same noise for the different voltages needed for different polarizations.
> So, we could literally hear the photons as they flew, and zeroes and ones
> made different noises. Thus, our prototype was unconditionally secure
> against any eavesdropper who happened to be deaf ! :-)
It is therefore critical to the security of the QKD system that different
light-sources and detectors be indistinguishable to Eve. One particular
vulnerability is that different light sources and detectors may not be
perfectly synchronized, so that Eve can figure out which detector clicked, for
example, by examining the time signature publicly announced by Bob in order to
determine which photon he detected of the photons sent by Alice. Such an
attack could read-off $\geq 25\%$ of the key for a detector mismatch of $0.5$
nanoseconds, an amount that could easily go unnoticed (Lamas-Linares &
Kurtsiefer, 2007). An attempt to carry out such an attack against a commercial
system was unsuccessful because of several practical difficulties (Zhao et
al., 2008).
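As a rough illustration of why sub-nanosecond mismatches matter, the sketch below is a toy model of ours, not the analysis of the cited papers: the two detectors' click times are treated as Gaussians whose means differ by the mismatch, and Eve thresholds the publicly announced timestamp at the midpoint.

```python
# Toy model: Eve's probability of identifying which detector clicked from a
# public timestamp, given a mean timing mismatch and Gaussian jitter.
from math import erf, sqrt

def eve_guess_accuracy(mismatch_ns: float, jitter_ns: float) -> float:
    # Two Gaussians with means +/- mismatch/2 and std jitter; a midpoint
    # threshold is correct with probability Phi(mismatch / (2 * jitter)).
    return 0.5 * (1 + erf(mismatch_ns / (2 * sqrt(2) * jitter_ns)))

for jitter in (0.3, 0.5, 1.0):
    print(f"jitter={jitter} ns -> Eve guesses correctly "
          f"{eve_guess_accuracy(0.5, jitter):.1%} of the time")
```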
### 12.4. Trojan attacks
Figure: The logo of the Quantum Hacking Lab. Retrieved from http://www.vad1.com/lab/
In a Trojan attack, Eve shines bright light at either Alice or Bob,
determining which basis was used by analyzing the reflection. The Trojan
attack has successfully read off complete keys both in the lab (Gerhardt et
al., 2011) and against commercial QKD systems, the QPN-5505 from MagiQ
Technologies and the Clavis2 from ID Quantique (Lydersen et al., 2010). This
has been the most powerful and best-performing hack on QKD so far. Although these specific
attacks can be protected against, Trojan attacks using pulses of different
wavelengths may still be able to hack complete keys, and we still do not know
the full scope of the vulnerability of practical QKD systems to Trojan attacks
(Jain et al., 2014, 2015).
### 12.5. Other side-channel attacks
Many other attacks against QKD systems have been investigated, and new
vulnerabilities are periodically discovered. Some of these attacks (denial of
service, man-in-the-middle,…) can be carried out against classical systems as
well, and the vulnerability of QKD to these attacks is identical to the
vulnerability of any classical protocol. Other attacks, which take advantage of
a weakness in an auxiliary system (e.g., a randomization attack, in which the
random bases are successfully computed by Eve because the random number
generator is faulty), can be counteracted by using better hardware. Of course,
each side-channel attack must be investigated and ruled out for each setup.
## 13\. Conclusion
Quantum Key Distribution (QKD) is a modern form of trusted courier, which in
principle allows Alice to communicate a message to Bob with complete
confidence that the message will not be eavesdropped on during transmission.
Real-life QKD security, however, is a quantitative issue, and each setup
should be individually studied to ensure its security. QKD is currently a
focus of interest for many private, governmental, and military groups all over
the world.
Current state of the art setups still use the first QKD protocol, BB84, and
its variants. The Cascade protocol is still the fastest for error correction,
but LDPC is expected to overtake it as key rates rise. In both cases, hardware
implementation using FPGAs is the current state of the art and is likely to
remain so for the next decade at least.
Qubits are typically transmitted as polarized photons. Attenuated laser pulses
with decoy states are the current state-of-the-art photon source, despite not
being true single-photon sources. NV centers are a promising
future technology. Polarized photons can be transmitted through fiber, through
air (free space), or bounced off satellites. There are various attempts to
send polarized photons via bright fibers through which other messages are
travelling, but the key rates being obtained are still quite low.
Photon detectors are the main technological bottleneck for practical QKD. The
current state of the art is InGaAs NFADs. A promising future technology is
the SNSPD, which currently requires cryogenic temperatures to operate, but
future SNSPDs may be able to operate above 20 K.
Memories and repeaters, which are thought to be required for QKD at ranges
over around 400km, are still in the experimental stage, and it is too early to
say which technology will be best.
Security of QKD is a well-studied field, and there have been numerous attempts
to attack QKD implementations both using standard attacks and also using side-
channel attacks. Only one of these attacks, a Trojan attack, has successfully
stolen a secret key, and the vulnerability it highlights can be plugged. QKD
of course has the same vulnerability to classical attacks such as denial-of-
service and man-in-the-middle as classical implementations.
QKD is an exciting emerging technology which is beginning to enter the
marketplace. We expect it to be successful in its own right, and also to serve
as a stepping stone towards greater and higher goals in quantum communication
and computation.
## 14\. Acknowledgement
The author wishes to thank Judith Kupferman for useful comments, and Paul
Jouguet and Takehisa Iwakoshi for corrections and references.
## References
* Alimohammad et al. (2008) Alimohammad, A., Fard, S.F., Cockburn, B.F., & Schlegel, C. 2008 A Compact and Accurate Gaussian Variate Generator. _IEEE T. VLSI Syst._ , 16(5), 517–527.
* Babinec _et al._ (2010) Babinec, T.M., Hausmann, B.J., Khan, M., Zhang, Y., Maze, J.R., Hemmer, P.R., & Lončar, M. 2010 A diamond nanowire single-photon source. _Nat. Nanotechnol._ , 5(3), 195–199.
* Bennett & Brassard (1984) Bennett, C.H. & Brassard, G. 1984 Quantum cryptography: Public key distribution and coin tossing. In _Proceedings of IEEE International Conference on Computers, Systems and Signal Processing_ , volume 175, page 8. http://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf
* Bennett et al. (1992) Bennett, C.H., Bessette, F., Brassard, G., Salvail, L., & Smolin, J. 1992 Experimental quantum cryptography. _J. Cryptology_ , 5(1), 3–28.
* Berry (1998) Berry, M. 1998 Foreword, in _Introduction to Quantum Computation and Information_ eds. Lo, H.-K., Popescu S., & Spiller, T. World Scientific, Singapore, hardcover 1998, paperback 2001.
* Branciard et al. (2005) Branciard, C., Gisin, N., Kraus, B., & Scarani, V. 2005 Security of two quantum cryptography protocols using the same four qubit states. _Phys. Rev. A_ , 72(3), 032301. arXiv:quant-ph/0505035
* Brassard & Salvail (1994) Brassard, G., & Salvail, L. 1994 Secret-key reconciliation by public discussion. In _Advances in Cryptology EUROCRYPT93_ , Springer Berlin Heidelberg, 410–423.
* Brassard (2005) Brassard, G. 2005 Brief history of quantum cryptography: A personal perspective. In _Theory and Practice in Information-Theoretic Security, 2005_. IEEE Information Theory Workshop, 19–23. arXiv:quant-ph/0604072
* Burenkov _et al._ (2010) Burenkov, V., Qi, B., Fortescue, B., & Lo H.-K. 2010 Security of high speed quantum key distribution with finite detector dead time. _Quantum Inf. Comput._ , 14(3-4), 217–235. arXiv:1005.0272
* Cheung et al. (2007) Cheung, R.C.C., Lee, D.-U., Luk, W., & Villasenor, J.D. 2007 Hardware Generation of Arbitrary Random Number Distributions From Uniform Distributions Via the Inversion Method. _IEEE T. VLSI Syst._ , 15(8), 952–962.
* Comandar _et al._ (2014) Comandar, L.C., Fröhlich, B., Lucamarini, M., Patel, K.A., Sharpe, A.W., Dynes, J.F., Yuan, Z.L., Penty, R.V., & Shields, A.J. 2014 Room temperature single-photon detectors for high bit rate quantum key distribution. _Appl. Phys. Lett._ , 104, 021101. arXiv:1402.2210
* de Riedmatten & Afzelius (2015) de Riedmatten, H., & Afzelius, M. 2015 Quantum Light Storage in Solid State Atomic Ensembles. Preprint. arXiv:1502.00307
* Dixon _et al._ (2009) Dixon, A.R., Dynes, J.F., Yuan, Z.L., Sharpe, A.W., Bennett, A.J., & Shields, A.J. 2009 Ultrashort dead time of photon-counting InGaAs avalanche photodiodes _Appl. Phys. Lett._ , 94(23), 231113. arXiv:0905.2931
* Dixon & Sato (2014) Dixon, A.R., & Sato, H. 2014 High speed and adaptable error correction for megabit/s rate quantum key distribution. _Scientific Reports_ , 4.
* Eisaman _et al._ (2011) Eisaman, M.D., Fan, J., Migdall, A., & Polyakov, S.V. 2011 Single-photon sources and detectors. _Rev. Sci. Instrum._ , 82(7), 071101.
* Ekert (1991) Ekert, A.K. 1991 Quantum cryptography based on Bell’s theorem. _Phys. Rev. Lett._ 67(6), 661.
* Elkouss et al. (2009) Elkouss, D., Leverrier, A., Alléaume, R., & Boutros, J.J. 2009 Efficient reconciliation protocol for discrete-variable quantum key distribution. In _IEEE International Symposium on Information Theory, 2009_ , ISIT 2009, 1879–1883).
* Elliot et al. (2005) Elliott, C., Colvin, A., Pearson, D., Pikalo, O., Schlafer, J., & Yeh, H. 2005 Current status of the DARPA quantum network. In _Defense and Security_ , International Society for Optics and Photonics, 138–149. arXiv:quant-ph/0503058
* EPSRC (2014) UK Quantum Technology Landscape, 2014. Retrieved from http://www.epsrc.ac.uk/newsevents/pubs/dstl-uk-quantum-technology-landscape-2014/
* ETSI (2015) European Telecom Standards Institute, Standards for Quantum Key Distribution. Retrieved from http://www.etsi.org/technologies-clusters/technologies/quantum-key-distribution
* Falcão et al. (2009) Falcão, G., Silva, V., & Sousa, L. 2009 How GPUs can outperform ASICs for fast LDPC decoding. In _Proceedings of the 23rd international conference on Supercomputing_ , ACM, 390–399.
* Gabay & Arnon (2005) Gabay, M., & Arnon, S. 2005 Effect of turbulence on a quantum-key distribution scheme based on transformation from the polarization to the time domain: laboratory experiment. _Opt. Engineering_ , 44(4), 045002.
* Gerhardt et al. (2011) Gerhardt, I., Liu, Q., Lamas-Linares, A., Skaar, J., Kurtsiefer, C., & Makarov, V. 2011 Full-field implementation of a perfect eavesdropper on a quantum cryptography system. _Nat. Commun._ , 2, 349. arXiv:1011.0105
* Gisin et al. (2002) Gisin, N., Ribordy, G., Tittel, W., & Zbinden, H. 2002 Quantum cryptography. _Rev.Mod.Phys._ , 74, 145–190.
* Hadfield (2009) Hadfield, R.H. 2009 Single-photon detectors for optical quantum information applications. _Nat. Photonics_ , 3(12), 696–705.
* Huang et al. (2014) Huang, J.Z., Kunz-Jacques, S., Jouguet, P., Weedbrook, C., Yin, Z.Q., Wang, S., Chen, W., Guo, G.C., & Han, Z.F. (2014). Quantum hacking on quantum key distribution using homodyne detection. _Phys. Rev. A_ , 89(3), 032304. arXiv:1402.6921
* Hughes et al. (2013) Hughes, R.J., Nordholt, J.E., McCabe, K.P., Newell, R.T., Peterson, C.G., & Somma, R.D. 2013 Network-Centric Quantum Communications. In _Frontiers in Optics_ , Optical Society of America. arXiv:1305.0305
* Hughes & Norholdt (2014) Hughes, R.J., & Nordholt, J.E. 2014 Long-range quantum cryptography: Amplified quantum key distribution (AQKD). Preprint. arXiv:1406.6990
* Inamori, Lütkenhaus & Mayers (2007) Inamori, H., Lütkenhaus, N., & Mayers, D. 2007 Unconditional security of practical quantum key distribution. _Eur. Phys. J. D._ , 41(3), 599–627. arXiv:quant-ph/0107017
* Iwakoshi & Hirota (2014) Iwakoshi, T., & Hirota, O. 2014 Misinterpretation of statistical distance in security of quantum key distribution shown by simulation. In _SPIE Security + Defence_ , International Society for Optics and Photonics, 92540L–92540L.
* Jain et al. (2014) Jain, N., Anisimova, E., Khan, I., Makarov, V., Marquardt, Ch., & Leuchs, G. 2014 Trojan-horse attacks threaten the security of practical quantum cryptography. _New J. Phys._ , 16(12), 123030. arXiv:1406.5813
* Jain et al. (2015) Jain, N., Stiller, B., Khan, I., Makarov, V., Marquardt, Ch., & Leuchs, G. 2015 Risk analysis of Trojan-horse attacks on practical quantum key distribution systems. _IEEE J. Sel. Top. Quantum Electron._ 21, 6600710. arXiv:1408.0492
* Jiang et al. (2015) Jiang, H., Gao, M., Wang, H., Li, H., & Ma, Z. 2015 Four-intensity Decoy-state Quantum Key Distribution with enhanced resistance against statistical fluctuation. Preprint. arXiv:1502.02249
* Jofre et al. (2011) Jofre, M., Curty, M., Steinlechner, F., Anzolin, G., Torres, J.P., Mitchell, M.W. & Pruneri, V. 2011 True random numbers from amplified quantum vacuum. _Opt. Express_ , 19(21), 20665–20672.
* Jouguet et al. (2012) Jouguet, P., Kunz-Jacques, S., Leverrier, A., Grangier, P., & Diamanti, E. 2013 Experimental demonstration of long-distance continuous-variable quantum key distribution. _Nat. Photonics_ , 7(5), 378–381. arXiv:1210.6216
* Jouguet, Kunz-Jacques, & Diamanti (2013) Jouguet, P., Kunz-Jacques, S., & Diamanti, E. 2013 Preventing calibration attacks on the local oscillator in continuous-variable quantum key distribution. _Phys. Rev. A_ , 87(6), 062313. arXiv:1304.7024
* Jouguet et al. (2013) Jouguet, P., Kunz-Jacques, S., Kumar, R., Qin, H., Gabet, R., Diamanti, E., & Alléaume, R. 2013 Experimental demonstration of the coexistence of continuous-variable quantum key distribution with an intense DWDM classical channel. In _Proc. 3rd Ann. Conf. Quantum Cryptography_.
* Jouguet et al. (2014) Jouguet, P., Elkouss, D., & Kunz–Jacques, S. 2014 High-bit-rate continuous-variable quantum key distribution. _Phys. Rev. A_ 90, 042329.
* König et al. (2007) König, R., Renner, R., Bariska, A., & Maurer, U. 2007 Small accessible quantum information does not imply security. _Phys. Rev. Lett._ , 98(14), 140502. arXiv:quant-ph/0512021
* Korzh _et al._ (2014) Korzh, B., Walenta, N., Lunghi, T., Gisin, N., & Zbinden, H. 2014 Free-running InGaAs single photon detector with 1 dark count per second at 10% efficiency. _Appl. Phys. Lett._ , 104, 081108. arXiv:1312.2636
* Korzh _et al._ (2015) Korzh, B., Lim, C.C.W., Houlmann, R., Gisin, N., Li M.J., Nolan, D., Sanguinetti, B., Thew, R., & Zbiden, H. 2015 Provably secure and practical quantum key distribution over 307km of optical fibre. _Nat. Photonics_. arXiv:1407.7427
* Kumar, Qin, & Alléaume (2014) Kumar, R., Qin, H., & Alléaume, R. 2014 Coexistence of continuous variable QKD with intense DWDM classical channels. Preprint. arXiv:1412.1403
* Kunz-Jacques & Jouguet (2015) Kunz-Jacques, S., & Jouguet, P. 2015 Robust Shot Noise Measurement for Continuous Variable Quantum Key Distribution. _Phys. Rev. A_ 91, 022307. arXiv:1406.7554
* Kwok & Lam (2006) Kwok, S.H.M., & Lam, E.Y. 2006 FPGA–based High-speed True Random Number Generator for Cryptographic Applications. _TENCON 2006. 2006 IEEE Region 10 Conference_ , 1–4 & 14–17.
* Lamas-Linares & Kurtsiefer (2007) Lamas-Linares, A., & Kurtsiefer, C. 2007 Breaking a quantum key distribution system through a timing side channel. _Opt. Express_ , 15, 9388. arXiv:0704.3297
* Liu _et al._ (2010) Liu, Y., Chen, T.-Y., Wang, J., Cai, W.-Q., Wan, X., Chen, L.-K., Wang, J.-H., Liu, S.-B. Liang, H., Yang, L., Peng, C.-Z. Chen, K., Chen, Z.-B. & Pan J.-W. 2010 Decoy–state quantum key distribution with polarized photons over 200 km. _Opt. Express_ , 18 (2010) 8587–8594. arXiv:0908.4063
* Liu et al. (2011) Liu, Y., Ju, L., Liang, X.L., Tang, S.B., Tu, G.L.S., Zhou, L., Peng, C.-Z., Chen, K. Chen, T.-Y. Chen, Z.-B. & Pan, J.W. 2012 Experimental demonstration of counterfactual quantum communication. _Phys. Rev. Lett._ , 109(3), 030501. arXiv:1107.5754
* Lo _et al._ (2005a) Lo, H.K., Ma, X., & Chen, K. 2005 Decoy state quantum key distribution. _Phys. Rev. Lett._ , 94(23), 230504. arXiv:quant-ph/0411004
* Lo _et al._ (2005b) Lo, H.-K., Chau, H.F. & Ardehali, M. 2005 Efficient quantum key distribution scheme and proof of its unconditional security. _J. of Crypt._ , 18, 133. arXiv:quant-ph/0011056
* Lucamarini _et al._ (2013) Lucamarini, M., Patel, K.A., Dynes, J.F., Fröhlich, B., Sharpe, A.W., Dixon, A.R., Yuan, Z.L., Penty, R.V., & Shields, A. J. 2013 Efficient decoy-state quantum key distribution with quantified security. _Opt. Express_ , 21(21), 24550–24565. http://arxiv.org/abs/1310.0240
* Lydersen et al. (2010) Lydersen, L., Wiechers, C., Wittmann, C., Elser, D., Skaar, J., & Makarov, V. 2010 Hacking commercial quantum cryptography systems by tailored bright illumination. _Nat. Photonics_ , 4(10), 686–689. arXiv:1008.4593
* Ma & Lo (2008) Ma, X., & Lo, H.K. 2008 Quantum key distribution with triggering parametric down-conversion sources. _New J. Phys._ , 10(7), 073018.
* Makarov _et al._ (2006) Makarov, V., Anisimov, A. & Skaar, J. 2006 Effects of detector efficiency mismatch on security of quantum cryptosystems. _Phys. Rev. A_ , 74, 022313; Erratum _Phys. Rev. A_ , 78, 019905. arXiv:quant-ph/0511032
* Makarov (2009) Makarov, V. 2009 Controlling passively quenched single photon detectors by bright light. _New J. Phys._ , 11(6), 065003. arXiv:quant-ph/0707.3987
* Marcikic et al. (2002) Marcikic, I., De Riedmatten, H., Tittel, W., Scarani, V., Zbinden, H., & Gisin, N. 2002 Time-bin entangled qubits for quantum communication created by femtosecond pulses. _Phys. Rev. A_ , 66(6), 062308. arXiv:quant-ph/0205144
* Martinez-Mateo et al. (2013) Martinez-Mateo, J., Elkouss, D., & Martin, V. 2013 Key Reconciliation for High Performance Quantum Key Distribution. _Scientific Reports_ , 3. http://www.nature.com/srep/2013/130402/srep01576/full/srep01576.html
* Mhaske et al. (2015) Mhaske, S., Kee, H., Ly, T., Aziz, A., & Spasojevic, P. 2015 High-throughput FPGA-based QC-LDPC decoder architecture. Preprint. arXiv:quant-ph/1503.02986.
* Mink et al. (2006) Mink, A., Tang, X., Ma, L., Nakassis, T., Hershman, B., Bienfang, J.C., Su, D., Boisvert, R., Clark, C.W., & Williams, C. J. 2006 High speed quantum key distribution system supports one-time pad encryption of real-time video. In _Defense and Security Symposium_ , International Society for Optics and Photonics, 62440M.
* Mink & Bienfang (2013) Mink, A., & Bienfang, J. 2013 QKD on a Board Limited by Detector Rates in a Free-Space Environment. In _ICQNM 2013, The Seventh International Conference on Quantum, Nano and Micro Technologies_ , 28–33.
* Mirza & Petruccione (2010) Mirza, A., & Petruccione, F. 2010 Realizing long-term quantum cryptography. _JOSA B_ , 27(6), A185–A188.
* Mitsubishi (2015) Quantum Cryptography: Realizing next-generation security that cannot be broken. [R& D Highlights] 2015. Retrieved from http://www.mitsubishielectric.com/company/rd/research/highlights/communications/quantum.html
* Nazarathy et al. (2006) Nazarathy, M., Tselniker, I., Regev, Y., Orenstein, M., & Katz, M. 2006 Integrated–optical realizations of quantum key distribution over maximally unbiased bases. _IEEE J. Sel. Top. Quant._ , 12(4), 897–913.
* Noh (2009) Noh, T.-G. 2009 Counterfactual quantum cryptography. _Phys. Rev. Lett._ , 103(23), 230501. arXiv:0809.3979
* Patel et al. (2012) Patel, K.A., Dynes, J.F., Choi, I., Sharpe, A.W., Dixon, A.R., Yuan, Z.L., Penty, R.V. & Shields, A.J. 2012 Coexistence of high-bit-rate quantum key distribution and data on optical fiber. _Phys. Rev. X_ , 2(4), 041010. arXiv:1212.0033
* Portmann & Renner (2014) Portmann, C., & Renner, R. 2014 Cryptographic security of quantum key distribution. Preprint. arXiv:1409.3525
* Pritchard & Till (2010) Pritchard, J. & Till, S. 2014 UK Quantum Technology Landscape 2014. Engineering and Physical Sciences Research Council Report DSTL/PUB75620 Retrieved from http://www.epsrc.ac.uk/newsevents/pubs/dstl-uk-quantum-technology-landscape-2014/
* Restelli et al. (2009) Restelli, A., Bienfang, J.C., Mink, A., & Clark, C.W. 2009 Quantum key distribution at GHz transmission rates. In _SPIE OPTO: Integrated Optoelectronic Devices_ , International Society for Optics and Photonics, 72360L.
* Sangouard et al. (2011) Sangouard, N., Simon, C., De Riedmatten, H., & Gisin, N. 2011 Quantum repeaters based on atomic ensembles and linear optics. _Rev. Mod. Phys._ , 83(1), 33. http://www.arxiv.org/abs/quant-ph/0906.2699
* Santori, Fattal, & Yamamoto (2010) Santori, C., Fattal, D., & Yamamoto, Y. 2010 _Single-photon Devices and Applications_. John Wiley & Sons.
* Sarwar Pasha & Bala Ram (2014) Sarwar Pasha M.D. & Bala Ram, A. 2014 Key pre-distribution using quantum key channel— A survey. _International Journal of Computer Science and Mobile Applications_ , 2(3) 85–101.
* Sasaki et al. (2011) Sasaki, M., Fujiwara, M., Ishizuka, H., Klaus, W., Wakui, K., Takeoka, M., Tanaka, A., Yoshino, K., Nambu, Y., Takahashi, S., Tajima, A., Tomita, A., Domeki, T., Hasegawa, T., Sakai, Y., Kobayashi, H., Asai, T., Shimizu, K., Tokura, T., Tsurumaru, T., Matsui, M., Honjo, T., Tamaki, K., Takusue, H., Tokura, Y., Dynes, J.F., Dixon, A.R., Sharpe, A.W., Yuan, Z.-L., Shields, A.J., Uchikoga, S., Legré, M., Robyr, S., Trinkler, P., Monat, L., Page, J.-B., Ribordy, G., Poppe, A., Allacher, A., Maurhart, O., Länger, T., Peev, M., & Zeilinger, A. 2011 Field test of quantum key distribution in the Tokyo QKD Network. _Opt. Express_ , 19(11), 10387–10409. arXiv:1103.3566
* Scarani et al. (2004) Scarani, V., Acín, A., Ribordy, G. & Gisin, N. 2004 Quantum Cryptography Protocols Robust against Photon Number Splitting Attacks for Weak Laser Pulse Implementations. _Phys. Rev. Lett._ 92, 057901.
* Scarani & Kurtsiefer (2014) Scarani, V., & Kurtsiefer, C. 2014 The black paper of quantum cryptography: Real implementation problems. _Theor. Comput. Sci._ , 560, 27–32. arXiv:0906.4547
* SECOQC (2007) Alléaume, R., Bouda, J., Branciard, C., Debuisschert, T., Dianati, M., Gisin, N., Godfrey, M., Grangier, P., Länger, T., Leverrier, A., Lütkenhaus, N., Painchault, P., Peev, M., Poppe, A., Pornin, T., Rarity, J., Renner, R., Ribordy, G., Riguidel, M., Salvail, L., Shields, A., Weinfurter, H., & Zeilinger, A. 2007 SECOQC White Paper on Quantum Key Distribution and Cryptography. Preprint. arXiv:quant-ph/0701168
* Simon et al. (2010) Simon, C., Afzelius, M., Appel, J., Boyer de La Giroday, A., Dewhurst, S.J., Gisin, N., Hu., C.-Y., Jelezko, F., Kröhl, S., Müller, J.H., Nunn, J., Polzik, E., Rarity, J., de Riedmatten, H., Rosenfeld, W., Shields, A.J., Sköld, N., Stevenson, R.M., Thew, R., Walmsley, I., Weber, M., Weinfurter, H., Wrachtrup, J., & Young, R.J. 2010 Quantum memories. _Eur. Phys. J. D._ , 58(1), 1–22. arXiv:1003.1107
* Tamarat et al. (2006) Tamarat, Ph., Gaebel, T., Rabeau, J.R., Khan, M., Greentree, A.D, Wilson, H., Hollenberg, L.C.L., Prawer, S., Hemmer, P., Jelezko, F., & Wrachtrup J. 2006 Stark shift control of single optical centers in diamond. _Phys. Rev. Lett._ , 97(8), 083002. arXiv:quant-ph/0607170
* Tang et al. (2014) Tang, Z., Liao, Z., Xu, F., Qi, B., Qian, L., & Lo, H.K. 2014 Experimental demonstration of polarization encoding measurement-device-independent quantum key distribution. _Phys. Rev. Lett._ , 112(19), 190503. http://2013.qcrypt.net/contributions/Tang-abstract.pdf
* Tomamichel et al. (2012) Tomamichel, M., Lim, C.C.W., Gisin, N., & Renner, R. 2012 Tight finite-key analysis for quantum cryptography. _Nat. Commun._ , 3, 634.
* Tsoi et-al. (2007) Tsoi, K.H., Leung, K.H., & Leong, P.H.W. 2007 High performance physical random number generator. _IET Comput. Digit. Tech._ , 1(4), 349–352.
* Ursin _et al._ (2007) Ursin, R., Tiefenbacher, F., Schmitt-Manderbach, T., Weier, H., Scheidl, T., Lindenthal, M., Blauensteiner, B., Jennewein, T., Perdigues, J., Trojek, P., Ömer, B., Fürst, M., Meyenburg, M., Rarity, J., Sodnik, Z., Barbieri, C., Weinfurter, H.& Zeilinger, A. (2007) Entanglement-based quantum communication over 144 km. _Nature Phys._ , 3(7), 481–486.
* Vallone _et al._ (2014) Vallone, G., Bacco, D., Dequal, D., Galarin, S., Luceri, V., Bianco, G., & Villoresi, P. 2014 Experimental Satellite Quantum Communications Preprint. arXiv:quant-ph/1406.4051
* Wang, Miki, & Fujiwara (2009) Wang, Z., Miki, S., & Fujiwara, M. 2009 Superconducting nanowire single-photon detectors for quantum information and communications. _IEEE J. Sel. Top. Quantum Electron._ , 15(6), 1741–1747.
* Wang (2013) Wang, X.B. 2013 Three-intensity decoy-state method for device-independent quantum key distribution with basis-dependent errors. _Phys. Rev. A_ , 87(1), 012320. arXiv:1308.5677
* Wang et al. (2014) Wang, S., Chen, W., Yin, Z.-Q., Li, H.-W., He, D.-Y., Li, Y.-H., Wang, D., Chen, H., Han, Y.-G., Huang, J.-Z., Guo, J.-F., Hao, P.-L., Li, M., Zhang, C.-M., Liu, D., Liang, W.-Y., Miao, C.-H., Wu, P., Guo, G.-C., & Han, Z.F. 2014 Field and long-term demonstration of a wide area quantum key distribution network. _Opt. Express_ , 22(18), 21739–21756. arXiv:1409.1568
* Wiesner (1983) Wiesner, S. 1983 Conjugate coding. _SIGACT News_ , 15(1), 78–88. Original manuscript written circa 1969.
* Xi, Qi, & Lo (2010) Xu, F., Qi, B., & Lo, H.K. 2010 Experimental demonstration of phase-remapping attack in a practical quantum key distribution system. _New J. Phys._ , 12(11), 113026. arXiv:1005.2376
* Xiang & Benkrid (2009) Xiang, T., & Benkrid, K. 2009 Mersenne Twister Random Number Generation on FPGA, CPU and GPU. _NASA/ESA Conference on Adaptive Hardware and Systems_ , 460–464.
* Yikra (2014) Yikra,B. 2014 Researchers bounce polarized photons off satellites to show feasibility of space based quantum communications. _Phys.org_. Retrieved from http://phys.org/news/2014-06-polarized-photons-satellites-feasibility-space.html
* Yuan (2007) Yuan, Z.L., Sharpe, A.W., Shields, A.J. 2007 Unconditionally secure one-way quantum key distribution using decoy pulses. _Appl. Phys. Lett._ , 90(1), 011118. arXiv:quant-ph/0610015
* Yuen (2013) Yuen, H.P. 2013 Essential elements lacking in security proofs for quantum key distribution. In _Proceedings of SPIE- The International Society for Optical Engineering_. arXiv:quant-ph/1310.0842
* Zhang et al. (2009) Zhang, Z., Anantharam, V., Wainwright, M.J., & Nikolic, B. 2009 A 47 Gb/s LDPC decoder with improved low error rate performance. In _2009 Symposium on VLSI Circuits_ , IEEE, 286–287.
* Zhang et al. (2012) Zhang, H.F., Wang, J., Cui, K., Luo, C.L., Lin, S.Z., Zhou, L., Liang, H., Chen, T.-Y., Chen, K., & Pan, J.W. 2012 A real-time QKD system based on FPGA. _J. Lightwave Technol._ , 30(20), 3226–3234. arXiv:quant-ph/1301.2383
* Zhang et al. (2014) Zhang, C.-M., Li M., Huang J.-Z., Treeviriyanupab, P., Li, H.-W., Li, F.-Y., Wang, C., Yin, Z.-Q., Chen, W., Sripimanwat, K., & Han, Z.-F., 2013 Fast implementation of length-adaptive privacy amplification in quantum key distribution. _Chinese Phys. B_ , 23(9), 090310.
* Zhao et al. (2008) Zhao, Y., Fung, C.H.F., Qi, B., Chen, C., & Lo, H.K. 2008 Quantum hacking: Experimental demonstration of time-shift attack against practical quantum-key-distribution systems. _Phys. Rev. A_ , 78(4), 042333. arXiv:0704.3253
* Zhu et al. (2011) Zhu, C.-H., Pei, C.-X., Quan, D.-X., Gao, J.-L., Chen, N., Yi, Y.-H., 2010 A new quantum key distribution scheme based on frequency and time coding. _Chinese Phys. Lett._ , 27(9), 090301. arXiv:1003.2472
# PP-MeT: a Real-world Personalized Prompt based Meeting Transcription System
###### Abstract
Speaker-attributed automatic speech recognition (SA-ASR) improves the accuracy
and applicability of multi-speaker ASR systems in real-world scenarios by
assigning speaker labels to transcribed texts. However, SA-ASR poses unique
challenges due to factors such as speaker overlap, speaker variability,
background noise, and reverberation. In this study, we propose PP-MeT system,
a real-world personalized prompt based meeting transcription system, which
consists of a clustering system, target-speaker voice activity detection (TS-
VAD), and TS-ASR. Specifically, we utilize target-speaker embedding as a
prompt in TS-VAD and TS-ASR modules in our proposed system. In contrast with
previous systems, we fully leverage pre-trained models for system
initialization, thereby bestowing our approach with heightened
generalizability and precision. Experiments on M2MeT2.0 Challenge dataset show
that our system achieves a cp-CER of 11.27% on the test set, ranking first in
both fixed and open training conditions.
Index Terms— SA-ASR, TS-VAD, TS-ASR, personalized prompt, M2MeT2.0 Challenge
## 1 Introduction
The rapid advancements in deep learning have led to remarkable strides in
automatic speech recognition (ASR), substantially enhancing its overall
performance. Despite these achievements, ASR systems continue to face
challenges in real-world far-field scenarios, such as meetings or home
parties, where background noise, unavoidable reverberation, and overlapping
speech from multiple speakers can significantly degrade their performance. In
order to develop a robust ASR system in such challenging acoustic
environments, numerous research studies have concentrated on multi-channel
multi-party speech recognition and diarization within dinner party scenarios
[1, 2].
The objective of the M2MeT2.0 challenge [3, 4] is to address the ASR task in
multi-party meetings, which involves providing precise transcriptions and
identifying the corresponding speakers. To advance the practical application
of current multi-speaker speech recognition systems, the M2MeT2.0 Challenge
evaluates the task of speaker-attributed ASR (SA-ASR) and includes two
sub-tracks: a fixed training condition track and an open training condition
track. Unlike traditional ASR systems that transcribe speech without
considering speaker identities, SA-ASR goes a step further by associating each
recognized word or phrase with the corresponding speaker, which improves the
accuracy and applicability of multi-speaker ASR systems in real-world
scenarios. SA-ASR faces unique challenges due to factors such as speaker
overlap, speaker variability, background noise, and reverberation. Overcoming
these challenges involves developing advanced algorithms and techniques for
speaker diarization, speech separation, and speaker recognition to accurately
attribute spoken words to their respective speakers. The development of SA-ASR
systems has the potential to improve the performance and usability of speech
recognition in scenarios where multiple speakers are present, enabling
applications that require speaker-specific information and analysis.
In this study, we present the PP-MeT system, a personalized-prompt based
meeting transcription system designed to address the ASR task in multi-party
meetings. Our approach comprises three essential components: a clustering
system, target-speaker voice activity detection (TS-VAD), and target-speaker
ASR (TS-ASR). To enhance the system’s performance and applicability, we
integrate target-speaker embeddings as prompts within the TS-VAD and TS-ASR
modules. Leveraging pre-trained models during system initialization further
empowers our approach, granting it superior generalizability and precision. In
experiments conducted on the M2MeT2.0 dataset, our integrated PP-MeT system
achieves a concatenated minimum permutation character error rate (cp-CER) of
only 11.27% on the test set, achieving the top position in both fixed and open
training conditions. We also release our inference system with pre-trained
models at https://github.com/XimalayaEverestIntelligentLab/M2MET2.0.
The rest of this paper is organized as follows. In Section 2, we detail the
architecture of the PP-MeT system. Datasets and experimental setup are
described in Section 3. Section 4 presents the experimental results of
M2MeT2.0 Challenge test set and our ablation study. Finally, we conclude in
Section 5.
## 2 Proposed System Description
The overview of our proposed PP-MeT system for the M2MeT2.0 Challenge is shown
in Figure 1.
Fig. 1: The overview of our proposed PP-MeT system.
### 2.1 Speaker Embedding System
As M2MeT2.0 encourages the participants to use pre-trained models, we use two
pre-trained models
(https://github.com/wenet-e2e/wespeaker/blob/master/docs/pretrained.md)
from the WeSpeaker toolkit [5, 6]. One is ResNet34 from the CN-Celeb example,
and the other is ResNet34-LM, which is obtained by further training ResNet34
with a large-margin technique. We also train a ResNet34 model with the
SpeechBrain toolkit (https://github.com/speechbrain/speechbrain/tree/develop)
to introduce diversity into our speaker embedding models. We will refer to
these three speaker embedding models as SV-1/2/3 and the corresponding
personalized prompts as Prompt-1/2/3 for simplicity.
### 2.2 Clustering System
Before proceeding to the TS-VAD and TS-ASR systems, we need to estimate the
number of speakers and initialize personalized prompts using a clustering
algorithm. First, we extract voiced speech segments based on VAD results for
each session. Then we split each segment into subsegments using a fixed 3 s
window size and 1.5 s window shift. After that, we use a speaker embedding
model to extract an embedding for each subsegment. Finally, we feed the
L2-normalized embeddings into the clustering algorithm and obtain the number
of speakers for each session, as well as the label for each subsegment.
We use the DOVER-Lap toolkit [7] (https://github.com/desh2608/dover-lap) to
merge clustering results from different channels and speaker embedding models.
We compare auto-tuning spectral clustering with normalized maximum eigengap
(NME-SC) [8] against the agglomerative hierarchical clustering (AHC)
algorithm. As NME-SC outperforms AHC by a large margin, we use the NME-SC
results to initialize the personalized prompts.
After obtaining the clustering result, for each speaker we extract speech that
contains only the target speaker as personalized speech. Then we repeat the
speaker embedding extraction steps over the personalized speech and use the
mean-pooled L2-normalized speaker embedding as the personalized prompt.
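A schematic of the sub-segmentation and prompt-initialization steps is sketched below; `embed` stands in for any of the SV-1/2/3 models, and the function names are ours, not part of the released code.

```python
# Sketch of 3 s / 1.5 s sub-segmentation and personalized prompt extraction.
import numpy as np

def subsegment(wav: np.ndarray, sr: int, win_s: float = 3.0, shift_s: float = 1.5):
    win, shift = int(win_s * sr), int(shift_s * sr)
    return [wav[s:s + win] for s in range(0, max(len(wav) - win, 0) + 1, shift)]

def personalized_prompt(clean_segments, embed) -> np.ndarray:
    """Mean-pool the L2-normalized embeddings of one speaker's clean speech."""
    embs = np.stack([embed(seg) for seg in clean_segments])
    embs /= np.linalg.norm(embs, axis=1, keepdims=True)  # L2-normalize each row
    return embs.mean(axis=0)
```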
### 2.3 TS-VAD System
As the clustering system cannot handle overlapping speech, it results in a
high miss error in multi-party meeting scenarios. To further reduce DER, we
use a TS-VAD system to give a more accurate estimate of the speaker labels.
We use a ResNet34 model as the backbone of our TS-VAD system, the same
architecture as the speaker embedding model. First, we extract the pooling
layer input as frame-level speaker embeddings. Then we apply stats-pooling
with a 3-second stride to extract frame-level mean and standard deviation
features, and concatenate them with the original frame-level speaker
embeddings. We apply mean-pooling and attention-pooling to the frame-level
speaker embeddings and the personalized prompts, respectively. After that, we
use a conformer decoder layer to explore the relationship between the
frame-level speaker embeddings and the personalized prompts: the frame-level
speaker embedding features are fed as the conformer decoder input, and each
personalized prompt serves as the decoder memory. Finally, we concatenate the
conformer decoder outputs and use a BiLSTM layer to explore the relationships
among the speakers. The BiLSTM output is fed into a fully-connected (FC) layer
with a sigmoid activation function to generate the final TS-VAD probabilities
[9, 10]. The detailed TS-VAD model structure is shown in Figure 2.
Fig. 2: TS-VAD model structure
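This head can be rendered at the shape level as follows. This is our sketch, not the released code: PyTorch has no built-in conformer decoder, so a standard transformer decoder layer stands in for it, and the stats-pooling concatenation is omitted for brevity.

```python
# Shape-level PyTorch sketch of the TS-VAD head (prompts as decoder memory).
import torch
import torch.nn as nn

class TSVADHead(nn.Module):
    def __init__(self, emb_dim=256, hidden=512, n_speakers=4, heads=8):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=emb_dim, nhead=heads,
                                           dim_feedforward=hidden, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)  # conformer stand-in
        self.bilstm = nn.LSTM(emb_dim * n_speakers, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_speakers)

    def forward(self, frame_emb, prompts):
        # frame_emb: (B, T, emb_dim); prompts: (B, n_speakers, emb_dim)
        per_speaker = [self.decoder(frame_emb, prompts[:, s:s + 1, :])
                       for s in range(prompts.size(1))]    # one prompt as memory each
        x = torch.cat(per_speaker, dim=-1)                 # (B, T, emb_dim * n_speakers)
        x, _ = self.bilstm(x)                              # model inter-speaker relations
        return torch.sigmoid(self.fc(x))                   # frame/speaker activity probs
```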
### 2.4 TS-ASR system
Far-field ASR poses a greater challenge than ASR of speech captured by a
close-proximity microphone due to the degraded quality of the signal. To
address this, we apply speech enhancement with two key components. First, we
employ a dereverberation method based on weighted prediction error (WPE) [11]
to mitigate the effects of late reverberation. In the challenge, we utilize a
GPU-accelerated version of WPE with the following parameters: taps=12,
delay=2, iterations=3. Second, to further attenuate late reverberation and
minimize noise interference, the weighted delay-and-sum acoustic beamforming
(BeamformIt) method [12] is employed.
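For reference, the quoted parameters map directly onto the open-source nara_wpe package; the sketch below shows the CPU call pattern with assumed STFT settings, unlike the GPU-accelerated variant used in the challenge.

```python
# Offline WPE dereverberation with nara_wpe (taps=12, delay=2, iterations=3).
import numpy as np
from nara_wpe.wpe import wpe
from nara_wpe.utils import stft, istft

def dereverberate(multichannel_wav: np.ndarray, size: int = 512, shift: int = 128):
    # multichannel_wav: (channels, samples)
    Y = stft(multichannel_wav, size=size, shift=shift)   # (C, frames, freq_bins)
    Y = Y.transpose(2, 0, 1)                             # wpe expects (freq, C, frames)
    Z = wpe(Y, taps=12, delay=2, iterations=3)
    return istft(Z.transpose(1, 2, 0), size=size, shift=shift)  # back to (C, samples)
```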
As M2MeT2.0 requires participants to give a transcription for each speaker, we
upgrade the traditional ASR model into a TS-ASR system with a personalized
prompt module, which enables it to yield different transcriptions given
different personalized prompts [13, 14]. We feed the personalized prompt into
an FC layer and take the Hadamard product with the output of the first layer
of both the ASR encoder and decoder. As our TS-ASR model makes little
modification to the traditional ASR model, we can easily adapt a pre-trained
ASR model into a TS-ASR model. We use the Unified-Conformer [15] model
pre-trained on WeNetSpeech
(https://github.com/wenet-e2e/wenet/blob/main/docs/pretrained_models.md)
from [16] as the TS-ASR model backbone. The detailed TS-ASR model structure is
shown in Figure 3.
Fig. 3: TS-ASR model structure
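The prompt module itself is tiny; a minimal PyTorch sketch with the dimensions stated above (the module name is ours) looks as follows.

```python
# Personalized prompt injection: project the 256-d prompt to the model width
# and gate the first encoder/decoder layer output with a Hadamard product.
import torch.nn as nn

class PromptModulation(nn.Module):
    def __init__(self, prompt_dim=256, model_dim=512):
        super().__init__()
        self.proj = nn.Linear(prompt_dim, model_dim)

    def forward(self, hidden, prompt):
        # hidden: (B, T, model_dim) first-layer output; prompt: (B, prompt_dim)
        gate = self.proj(prompt).unsqueeze(1)  # (B, 1, model_dim), broadcast over T
        return hidden * gate                   # elementwise (Hadamard) product
```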
## 3 Experimental Setup
### 3.1 Datasets
The original M2MeT1.0 dataset [3] contains 118.75 hours of speech data in
total. The dataset is divided into 104.75 hours for training, 4 hours for
development (denoted as Dev 1.0), and 10 hours as test set (denoted as Test
1.0) for scoring and ranking in the M2MeT1.0 Challenge. Test 1.0 is used as
the development set in the M2MeT2.0 Challenge, which uses a new 10-hour
dataset (denoted as Test 2.0) as its test set. AISHELL4 [17] is a real-recorded Mandarin
speech dataset collected by 8-channel circular microphone array for speech
processing in a conference scenario. This dataset consists of 211 recorded
meeting sessions, each containing 4 to 8 speakers, with a total length of 120
hours, aiming to bridge the advanced research on multi-speaker processing and
the practical application scenarios. CN-Celeb [18] is a large-scale speaker
recognition dataset collected ‘in the wild’. This dataset contains more than
130,000 utterances from 1,000 Chinese celebrities, and covers 11 different
genres in the real world.
Both M2MeT and AISHELL4 datasets are far-field multi-channel datasets, while
the CN-Celeb dataset is a near-field dataset. Figure 4 shows the data
preparation. Using oracle VAD, the non-overlapping speech of each speaker is
obtained from both near-field and far-field data, and the personalized prompt
is then extracted. The M2MeT dataset is processed according to the given prior
information into continuous voiced speech. All far-field multi-channel
datasets are pre-processed to generate enhanced data by WPE and beamforming.
The original far-field 8-channel data and the enhanced data together compose
the speech of each speaker, which is used in the subsequent training process.
Fig. 4: Data preparation before training.
The data flow of each training process is shown in Figure 5. The near-field
data is processed into 3-second uniform segments and used in speaker embedding
training. In TS-VAD model training, the continuous voiced speech and
non-overlapping speech with online augmentation are processed into 16-second
uniform segments, and the target-speaker embedding is used as a prompt. Moreover, the
speech segment of each speaker and the personalized prompt are used in TS-ASR
model training.
Fig. 5: Data flow in each training process.
### 3.2 System Setup
For all systems, we use 80-dimension log-mel filter bank (Fbank) feature as
input. The Fbank feature is extracted using a 25ms window length and 10ms
window shift.
#### 3.2.1 Speaker embedding system
We use CN-Celeb data [18] to train our speaker embedding models and split each
utterance into 3 s uniform-length segments. When iterating over all segments,
we introduce diversity by randomly offsetting the start frame of each segment
by -1.5 s to 1.5 s. All three speaker embedding models are trained using the
AAM-softmax loss [19] and generate 256-dimensional speaker embeddings as
output. We use a cyclical learning rate policy to dynamically adjust the
learning rate over 16 epochs.
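For concreteness, an AAM-softmax (ArcFace-style [19]) head can be written as below; this is an illustrative implementation with typical margin and scale values, not the exact challenge configuration.

```python
# Illustrative AAM-softmax head: add an angular margin on the target class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMSoftmax(nn.Module):
    def __init__(self, emb_dim=256, n_classes=1000, margin=0.2, scale=32.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.margin, self.scale = margin, scale

    def forward(self, emb, labels):
        cos = F.linear(F.normalize(emb), F.normalize(self.weight))  # cosine logits
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        is_target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(is_target, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, labels)
```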
#### 3.2.2 TS-VAD system
We use the M2MeT2.0 training data and AISHELL4 data for training. For each
session, we first extract and combine all voiced speech as our real training
data. Then, for each speaker, we extract and combine speech that contains only
the target speaker as personalized speech. Finally, we initialize Prompt-1/2/3
using the personalized speech. If the number of speakers is less than 4, we
pad Prompt-1/2/3 with zero vectors.
During training, we split the real training data into 16 s segments and
iterate over each segment. We also perform an online data simulation by
choosing personalized speech from random speakers to fill up the voiced region
of the real data [20]. It is important that the randomly chosen speakers come
from the same session; otherwise the model may learn the background noise
characteristics of each session rather than the essential differences between
speakers.
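A toy version of this simulation step (function names and the waveform representation are ours) is sketched below.

```python
# Toy online overlap simulation: overlay personalized speech from other
# speakers of the SAME session onto the voiced region of a real chunk.
import random
import numpy as np

def simulate_overlap(real_chunk, voiced_mask, session_speech, n_extra=2):
    """real_chunk: float array (samples,); voiced_mask: bool (samples,);
    session_speech: dict speaker_id -> list of clean float utterances."""
    mixed = real_chunk.copy()
    voiced_idx = np.flatnonzero(voiced_mask)  # assumed non-empty here
    for spk in random.sample(list(session_speech), k=n_extra):
        utt = random.choice(session_speech[spk])
        start = int(random.choice(voiced_idx))       # anchor inside voiced region
        end = min(start + len(utt), len(mixed))
        mixed[start:end] += utt[: end - start]       # additive overlap
    return mixed
```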
We train three TS-VAD models based on SV-1/2/3 and Prompt-1/2/3. For all
TS-VAD models, the conformer decoder uses 2 layers, a 256-dimensional input, a
512-dimensional hidden size, and 8 heads. The BiLSTM uses 2 layers, a
1024-dimensional input, and 512 hidden dimensions.
TS-VAD training consists of three key stages. In stage 1, we copy the
pre-trained speaker embedding parameters into the TS-VAD model, freeze the
backbone, and train the model on real and simulated data until convergence
with a 1e-3 learning rate. In stage 2, we train the whole model on real and
simulated data until convergence with a 1e-4 learning rate. In stage 3, we
finetune the whole model on real data only with a 1e-5 learning rate. We
choose the model with the lowest DER on Test 1.0 for decoding.
During TS-VAD decoding, we initialize Prompt-1/2/3 from the clustering system.
We can iterate the TS-VAD system by re-initializing Prompt-1/2/3 using the
TS-VAD system outputs.
#### 3.2.3 TS-ASR system
We use the WeNet toolkit and its Unified-Conformer model pre-trained on
WeNetSpeech as the backbone. Since the M2MeT2.0 and AISHELL4 training data
comprise multiple channels, we feed the model with raw mean-pooled data on the
one hand, and with enhanced single-channel data on the other. Additionally, we
incorporate speed augmentation during training. Note that when the audio speed
is altered, the corresponding personalized prompt for that speed variation
must be generated as well.
We also train three TS-ASR systems based on Prompt-1/2/3. For all TS-ASR
models, we use a 12-layer conformer encoder with a 512-dimensional output,
2048-dimensional linear units, and 8 attention heads, and a 3-layer
bi-transformer decoder with 2048-dimensional linear units and 8 attention
heads. For the personalized prompt module, we feed the 256-dimensional
personalized prompt into an FC layer, project it into a 512-dimensional
vector, and take the Hadamard product with the first-layer output of both the
encoder and decoder.
TS-ASR training also consists of three key stages. In stage 1, we freeze the
Unified-Conformer backbone and only train the personalized prompt module using
raw and enhanced data. In stage 2, we train the whole model with a 1e-4
learning rate. In stage 3, we finetune the whole model with a 1e-5 learning
rate using enhanced data.
## 4 Experimental Results
### 4.1 Results on M2MeT2.0 Challenge
M2MeT2.0 challenge uses concatenated minimum permutation character error rate
(cp-CER) as the evaluation metric. It computes the minimum CER given all
speaker permutations, which requires the system to give the correct
transcription and speaker label. The calculation of cp-CER is divided into
three steps. First, recognition results and reference transcriptions belonging
to the same speaker are concatenated on the timeline in a session. Second, the
character error rate (CER) of all permutations of speakers is calculated.
Finally, the lowest CER is selected as the cp-CER.
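These three steps translate directly into code; the sketch below assumes the third-party `editdistance` package for Levenshtein distance and an equal number of reference and hypothesis speakers (pad with empty strings otherwise).

```python
# cp-CER: concatenate per speaker (step 1), score every speaker permutation
# (step 2), and keep the lowest character error rate (step 3).
from itertools import permutations
import editdistance  # assumed third-party Levenshtein package

def cp_cer(refs_by_spk, hyps_by_spk):
    """Each list element is one speaker's time-ordered concatenated text."""
    n_ref_chars = sum(len(r) for r in refs_by_spk)
    best = float("inf")
    for perm in permutations(range(len(hyps_by_spk))):
        errors = sum(editdistance.eval(ref, hyps_by_spk[p])
                     for ref, p in zip(refs_by_spk, perm))
        best = min(best, errors / n_ref_chars)
    return best
```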
Table 1 presents the cp-CER results of the official baseline and each
competition system. Our system achieves 15.05%, 16.84%, and 11.27% cp-CER on
Dev 1.0, Test 1.0, and Test 2.0, respectively. Notice that the cp-CER on Dev
1.0 and Test 1.0 is achieved using the oracle diarization result. Our PP-MeT
system outperforms the official baseline by up to 30.28% absolute cp-CER,
thanks to the enhanced dataset and advanced model architectures, achieving
first place in the challenge.
System | Dev 1.0 | Test 1.0 | Test 2.0
---|---|---|---
PP-MeT (Rank 1st) | 15.05 | 16.84 | 11.27
Rank 2nd Team | – | – | 18.64
Rank 3rd Team | – | – | 22.83
Rank 4th Team | – | – | 23.51
Rank 5th Team | – | – | 24.82
Official Baseline | 47.4 | 52.57 | 41.55
Table 1: The cp-CER (%)$\downarrow$ results of each competition system on the
M2MeT Dev 1.0, Test 1.0, and Test 2.0.
### 4.2 Ablation Study
We conduct a detailed ablation study to better understand the contribution of
each system to the cp-CER, and the significance of pre-trained models.
Clustering Method | SV Model | Ch. 1 | Ch. 2 | Ch. 3 | Ch. 4 | Ch. 5 | Ch. 6 | Ch. 7 | Ch. 8 | DOVER-Lap (channels) | DOVER-Lap (models)
---|---|---|---|---|---|---|---|---|---|---|---
NME-SC | SV-1 | 16.87 | 16.21 | 16.89 | 17.49 | 18.61 | 18.00 | 16.09 | 18.85 | 16.40 | 15.22
NME-SC | SV-2 | 15.96 | 16.22 | 16.41 | 16.39 | 16.86 | 16.57 | 16.55 | 17.99 | 15.22 | 15.22
NME-SC | SV-3 | 17.26 | 16.18 | 17.16 | 16.97 | 17.07 | 16.55 | 16.52 | 17.11 | 15.75 | 15.22
AHC | SV-1 | 28.22 | 27.11 | 26.92 | 25.79 | 25.90 | 24.52 | 26.61 | 26.25 | 22.95 | 22.43
AHC | SV-2 | 26.48 | 23.85 | 26.13 | 26.70 | 25.28 | 25.61 | 24.58 | 26.65 | 22.43 | 22.43
AHC | SV-3 | 25.90 | 26.74 | 29.33 | 27.19 | 27.51 | 27.02 | 26.09 | 25.87 | 22.94 | 22.43
Table 2: DER (%) $\downarrow$ results for each clustering system on Test 1.0. The last two columns are DOVER-Lap fusions across channels and across the three embedding models, respectively (the model fusion yields one value per clustering method, repeated across rows).
#### 4.2.1 Clustering System
As the clustering system provides the estimated number of speakers and rough
speaker labels, its performance determines the upper bound of the whole PP-MeT
system. In Table 2, we study the impact of different speaker embedding models
and clustering algorithms in the clustering system.
SV-1/2/3 achieve EERs of 7.13%, 6.49%, and 7.06% on the CN-Celeb dev trials,
respectively. The threshold for AHC clustering is tuned on Dev 1.0. Results
show that for every model and channel, NME-SC outperforms AHC significantly.
DOVER-Lap makes the clustering result more stable by leveraging clustering
results from different channels and models.
As the accuracy of the speaker embeddings directly affects the quality of the
clustering, DER clearly correlates with speaker embedding performance. The
lowest DER is achieved by SV-2, which also achieves the lowest EER on the
CN-Celeb trials.
#### 4.2.2 TS-VAD system
In Table 3, we study the impact of the pre-trained speaker embedding model and
of different model architectures in the TS-VAD system.
Results show that the pre-trained model contributes heavily to the performance
of the TS-VAD system. If the TS-VAD backbone parameters are randomly
initialized, the model only achieves 13.28% DER on Test 1.0, which is only
slightly better than the clustering system. Moreover, the TS-VAD backbone
should match that of the personalized prompt: if we initialize the TS-VAD
backbone parameters using a pre-trained ECAPA-TDNN speaker embedding model and
train with Prompt-1, it achieves 7.68% DER, much worse than its counterpart
using a matched speaker embedding model and prompt. These results demonstrate
the importance of pre-trained models in the TS-VAD system; using a matched
speaker embedding model for initialization and personalized prompts makes it
easier to explore the relationship between the frame-level speaker embeddings
and the personalized prompts.
We can also observe that the DER drops moderately if we iterate the TS-VAD
system by refining Prompt-1/2/3 using the TS-VAD system outputs.
Initialization Model Parameters | Personalized Prompt | DER (%) Iter. 0 $\downarrow$ | DER (%) Iter. 1 $\downarrow$
---|---|---|---
SV-1 | Prompt-1 | 5.22 | 4.87
SV-2 | Prompt-2 | 4.52 | 4.25
SV-3 | Prompt-3 | 4.02 | 3.64
DOVER-Lap fusion of SV-1/2/3 | – | 3.19 | 2.99
Random | Prompt-1 | 13.28 | –
ECAPA | Prompt-1 | 7.68 | –
Table 3: DER results for each TS-VAD model on Test 1.0.
#### 4.2.3 TS-ASR System
In Table 4, we study the impact of pre-trained models and personalized prompts
in the TS-ASR system.
First, we try to finetune the pre-trained Unified-Conformer ASR model directly
without any structural modification. The pre-trained ASR model achieves 32.63%
and 35.89% cp-CER on Dev 1.0 and Test 1.0, respectively. After finetuning the
model on M2MeT2.0 and AISHELL4 data, the cp-CER drops to 22.55% and 26.43%.
However, this improvement is largely due to the model's performance on
non-overlapping speech; the cp-CER fails to decrease further because the
traditional ASR model cannot handle overlapping speech.
Then, we try to train the TS-ASR model from scratch with Prompt-1. However,
the TS-ASR model with a Unified-Conformer backbone fails to converge,
demonstrating the necessity of a pre-trained ASR backbone in our TS-ASR
system.
Finally, we train three TS-ASR models based on Prompt-1/2/3. The cp-CER of the
TS-ASR model with a pre-trained ASR backbone and Prompt-1/2/3 drops
dramatically on both Dev 1.0 and Test 1.0. The pre-trained ASR model with
Prompt-2 achieves the lowest cp-CER, which indicates that the performance of
the pre-trained speaker embedding model also affects the performance of TS-ASR
on overlapped speech.
We also try to finetune the TS-ASR model further using LF-MMI with the k2
toolkit (https://github.com/k2-fsa/k2) and to introduce LM information by
decoding with an HLG graph. However, the cp-CER fails to drop on either Dev
1.0 or Test 1.0. This is because, in the multi-party meeting scenario, the
transcriptions of each session are highly irrelevant to external text, so
external LM information cannot help decrease the cp-CER.
In Table 4, the Test 1.0 (TS-VAD) cp-CER is calculated using segments and
prompts from the TS-VAD system. The gap between the two Test 1.0 columns
reflects the degradation introduced by the TS-VAD system, which is
approximately 2%. We obtain the final results by fusing the outputs of the
individual systems with the SCTK rover toolkit
(https://github.com/usnistgov/SCTK).
Personalized Prompt | Dev 1.0 | Test 1.0 (oracle) | Test 1.0 (TS-VAD)
---|---|---|---
– | 22.55 | 26.43 | –
Prompt-1 | 15.35 | 17.20 | 19.45
Prompt-2 | 15.13 | 17.08 | 19.06
Prompt-3 | 15.28 | 17.16 | 19.06
Rover | 15.05 | 16.84 | 18.92
Table 4: cp-CER (%) results for each TS-ASR model on Dev 1.0 and Test 1.0,
with oracle segmentation and with TS-VAD segmentation.
## 5 Conclusion
In this paper, we present our PP-MeT system for the Multi-channel Multi-party
Meeting Transcription Challenge 2.0 (M2MeT2.0) to address the ASR task in a
multi-party meeting scenario. Compared with conventional systems, we
incorporate target-speaker embeddings as personalized prompts in both the
TS-VAD and TS-ASR stages. Moreover, to further enhance the system's robustness
and reduce the training cost, pre-trained models are used in our system's
initialization, enabling fast adaptation across all modules. Experimental
results show that the proposed system outperforms conventional systems by a
large margin.
In future work, we plan to explore the potential of expanding personalized
prompts on the time axis. Additionally, we aim to enhance the TS-ASR model by
jointly training the speaker embedding module with the ASR backbone, further
improving its performance.
## References
* [1] Jon Barker, Shinji Watanabe, Emmanuel Vincent, and Jan Trmal, “The fifth ‘CHiME’ speech separation and recognition challenge: dataset, task and baselines,” arXiv preprint arXiv:1803.10609, 2018.
* [2] Shinji Watanabe, Michael Mandel, Jon Barker, Emmanuel Vincent, Ashish Arora, Xuankai Chang, Sanjeev Khudanpur, Vimal Manohar, Daniel Povey, Desh Raj, et al., “Chime-6 challenge: Tackling multispeaker speech recognition for unsegmented recordings,” arXiv preprint arXiv:2004.09249, 2020.
* [3] Fan Yu, Shiliang Zhang, Yihui Fu, Lei Xie, Siqi Zheng, Zhihao Du, Weilong Huang, Pengcheng Guo, Zhijie Yan, Bin Ma, et al., “M2met: The icassp 2022 multi-channel multi-party meeting transcription challenge,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 6167–6171.
* [4] Yuhao Liang, Mohan Shi, Fan Yu, Yangze Li, Shiliang Zhang, Zhihao Du, Qian Chen, Lei Xie, Yanmin Qian, Jian Wu, Zhuo Chen, Kong Aik Lee, Zhijie Yan, and Hui Bu, “The second multi-channel multi-party meeting transcription challenge (M2MeT 2.0): A benchmark for speaker-attributed ASR,” arXiv preprint arXiv:2309.13573, 2023.
* [5] Hongji Wang, Chengdong Liang, Shuai Wang, Zhengyang Chen, Binbin Zhang, Xu Xiang, Yanlei Deng, and Yanmin Qian, “Wespeaker: A research and production oriented speaker embedding learning toolkit,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
* [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
# Self-Supervised Visual Place Recognition by Mining Temporal and Feature Neighborhoods
Chao Chen1, Xinhao Liu1, Xuchu Xu1, Yiming Li1, Li Ding2, Ruoyu Wang1, and
Chen Feng✉,1
✉ Corresponding author. 1Chao Chen, Xinhao Liu, Xuchu Xu, Yiming Li, Ruoyu Wang, and Chen Feng are with New York University, Brooklyn, NY 11201, USA <EMAIL_ADDRESS>. 2Li Ding is with University of Rochester, Rochester, NY 14627, USA <EMAIL_ADDRESS>.
###### Abstract
Visual place recognition (VPR) using deep networks has achieved state-of-the-art performance. However, most such methods require a training set with ground-truth sensor poses to obtain positive and negative samples of each observation's spatial neighborhood for supervised learning. When such information is unavailable, temporal neighborhoods from a sequentially collected data stream can be exploited for self-supervised training, although we find their performance suboptimal. Inspired by noisy label learning,
we propose a novel self-supervised framework named TF-VPR that uses temporal
neighborhoods and learnable feature neighborhoods to discover unknown spatial
neighborhoods. Our method follows an iterative training paradigm which
alternates between: (1) representation learning with data augmentation, (2)
positive set expansion to include the current feature space neighbors, and (3)
positive set contraction via geometric verification. We conduct comprehensive
experiments on both simulated and real datasets, with either RGB images or
point clouds as inputs. The results show that our method outperforms our
baselines in recall rate, robustness, and heading diversity, a novel metric we
propose for VPR. Our code and datasets can be found at
https://ai4ce.github.io/TF-VPR/.
## I Introduction
Visual place recognition (VPR), which aims to identify previously visited
places based on current visual observation, is a well-known problem in
computer vision and plays a crucial role in autonomous robots [1]. Meanwhile,
VPR is closely related to re-localization [2], loop closure detection [3], and
image retrieval [4]. Despite all the efforts, VPR remains a difficult task due
to various challenges such as perceptual aliasing and view direction
differences [5, 6]. Classic VPR methods based on hand-crafted feature matching
do not require supervised learning, but are less robust to the challenges
mentioned above [7, 8, 9]. Therefore, learning-based methods have been
proposed to learn local or global feature descriptors [10] by place
classification [11] or contrastive-like similarity learning [12]. Some works
also use a short image sequence instead of a single image to mitigate the
issue of perceptual aliasing [13, 14, 15].
So far, most learning-based VPR methods are supervised, focusing on either
learning better feature representations or designing robust matching
strategies. They assume that accurate positions (and sometimes orientations)
are available in their training set, for obtaining the positive and negative
neighbors of each visual observation [12, 10], or defining place categories
[11]. However, this information can be onerous to obtain, due to GPS errors, unavailability indoors, and SLAM robustness challenges, especially at large scale. Considering humans' extraordinary VPR ability, which does not seem to require ground-truth pose information for its training, we ask the following question: is it possible to relax such an assumption and design a learning-based VPR approach without pose-dependent supervision?
(a) A partial sensory stream & 3 types of neighbors (red/blue/green circles).
(b) A spatial neighborhood’s iterative update.
Figure 1: Our idea is based on the interconnections between the temporal,
spatial, and feature neighborhoods in sensory data: a query’s spatial
neighborhood expands from its temporal to feature neighbors, then contracts to
exclude wrong neighbors, iterated in training until such neighborhoods’
convergence.
To achieve this goal, our main idea is to leverage fixed temporal
neighborhoods and learnable feature neighborhoods to discover the unknown
spatial neighborhoods (which require ground truth pose to compute), leading to
a self-supervised VPR method shown in Fig. 1. We are inspired by research work
utilizing sensory streams (RGB videos or point cloud sequences) to obtain the
positive and negative neighbors in the temporal domain such as [16]. However,
we find that VPR learned from temporal cues alone misses spatially neighboring places with large viewpoint differences, because temporal neighbors tend to share similar viewpoints.
This is suboptimal in applications such as visual navigation or loop closure
for SLAM. To automatically discover the true spatial neighbors with diverse
viewpoints, we propose a novel iterative learning strategy inspired by noisy
label training such as bootstrapping [17]. More specifically, we exploit the
temporal information for label initialization as shown in Fig. 1(a). We
further augment the labels by simulating observations perceived from different
view directions. Afterward, a feature representation will be learned based on
the current labels. Finally, as shown in Fig. 1(b), we re-label the dataset
using the learned feature space by: (1) adding feature-space neighbors as
tentative positive labels and (2) rejecting false positives via geometric
verification, in order to further refine the feature representation. Note that
the above steps are iteratively conducted until convergence.
To evaluate our self-supervised VPR with Temporal and Feature neighborhood
interactions (TF-VPR), we simulate datasets with different input modalities,
and develop a real-world dataset, NYU-VPR-360, comprising two scenes with 38,426
GPS-georeferenced images in total. All the datasets are sequentially-collected
sensory streams. Meanwhile, we develop a novel metric to measure the heading
diversity of the retrieval results. In summary, our contributions are:
1. We propose a novel self-supervised VPR solution termed TF-VPR that eliminates pose-dependent supervision by mining temporal and feature neighborhoods.
2. We propose a new evaluation metric to assess the heading diversity of VPR retrieval results.
3. We conduct comprehensive experiments in both simulation and the real world to demonstrate the advantages of our solution compared with other baseline methods. Our code and data will be released with this paper.
## II Related Work
Visual place recognition. Visual place recognition (VPR) is the problem of
identifying a previously visited place based on visual information [5].
Existing VPR methods mainly lie in two categories: (1) traditional VPR
techniques using hand-crafted features [18, 19, 20, 21, 22, 23], and (2)
state-of-the-art VPR techniques using deep learning [12, 24, 25, 26, 27].
Among those, NetVLAD [12] is a seminal deep-learning-based VPR framework,
followed by various research extensions such as (1) learning powerful feature
representation [28, 29], (2) designing robust matching strategies [13, 14,
15], and (3) investigating different input modalities in VPR [30, 31, 32, 33].
However, most methods are supervised by pose-dependent data [24, 25, 12, 26].
To relax such a constraint, several attempts have been made [34, 35, 36, 37],
yet they fail to handle diverse revisiting viewpoints. The most relevant work to
our method is the semi-parametric topological memory (SPTM) [16], which
utilizes temporal positives and negatives to train a binary classification
network for adding edges for topological mapping, similar to VPR. However,
since temporal neighbors tend to have very similar viewpoints, SPTM still
struggles to recognize revisits of the same place from different viewpoints.
To the best of our knowledge, no research has addressed self-supervised VPR
that can recognize places observed from various viewpoints as either 2D images
or 3D point clouds.
Noisy label learning. Noisy labels become a problem as training data size
increases, resulting in degraded performance [38]. To mitigate the issues of
noisy labels, several learning attempts [39] have been made from directions
including latent variable optimization [40], loss function design [41], and
pseudo-label-based self-training [42, 17, 43, 44]. Among all pseudo-label
methods, label refurbishment was first introduced by bootstrapping [17]. Later, a self-training-based approach with an iterative workflow was proposed [43], which served as the baseline architecture for the more recent confidence-regularized self-training framework [44]. Our method's iterative training paradigm takes inspiration from [43, 44].
Contrastive learning. It is a self-supervised learning technique to find
feature representations that differentiate similar data pairs from dissimilar
ones without labels. Data augmentation is often used, and the learning
objective is to decrease feature distances between the original and augmented
images (positive samples), while increasing those distances between different
images (negative samples) [45, 46, 47, 48]. In VPR, NetVLAD [12] uses the triplet loss, which is similar to contrastive learning, yet relies on ground-truth pose to define positive/negative samples. Building on NetVLAD, we
further adopt data augmentation to synthesize observations captured by
different viewpoints in order to learn more robust features for VPR.
VPR evaluation. There are several evaluation metrics for visual place recognition. The popular AUC-PR [1] provides a good overview of precision and recall performance but is less indicative when the ground-truth match can take multiple values. Recall Rate@N, as used in [49, 50, 12], is designed for such cases: a query counts as successful if a correct retrieval appears in the top-N results, and multiple correct matches are neither penalized nor rewarded. However, existing VPR metrics rarely evaluate the viewpoint
diversity of the retrieved results [5], which is important in downstream
applications such as SLAM. In this work, we develop such a metric to fill this
gap. It assesses a VPR model’s capacity to recognize places revisited from
different directions.
## III Methodology
Figure 2: Overview of TF-VPR. Labeling, training, expansion, and contraction
are four major steps in our approach. Labels can be refurbished by iteratively
learning feature representations (training), adding feature neighborhoods
(expansion), and removing false positives by geometric verification
(contraction). Initial labels are generated by temporal adjacency.
We focus on a robot that is collecting a sensory stream of surround-view
visual observations while navigating in a certain area. Our goal is to enable
the robot to achieve visual place recognition (VPR) in the same spatial area
where the data stream is collected, without relying on any frame-wise pose
information. To this end, we utilize a learnable neural network $f_{\theta}$
parameterized by $\theta$ to map each visual observation to discriminative
feature space for VPR. However, it is non-trivial to train such a neural
network without ground-truth labels. In this work, we propose a novel
iterative learning paradigm based on the following intuition: for the $i$-th
query observation ${\bf{\mathbf{q}}}_{i}$, its spatial positive set
$\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}$ could be inferred by its temporal
positive set $\mathcal{P}^{t}_{{\bf{\mathbf{q}}}_{i}}$ together with its
tentative feature-space positive set
$\mathcal{P}^{f}_{{\bf{\mathbf{q}}}_{i}}$.
Our objective: auto-labeling with self-supervised learning. Different from
existing supervised VPR methods that need to address the generalizability of
learned models, we aim to solve VPR only for a certain spatial area where the
data has been collected. Given an observation sequence, we want to
automatically label each data frame’s spatial topology without ground truth
poses for supervision. Similar to DeepMapping [51], we do not expect the
learned $f_{\theta}$ to generalize its VPR ability either to other areas or to
the same area but under different times/seasons/weather conditions than what
has been covered in the training dataset. Note that this is non-trivial and
useful in the real world, because (1) ground-truth pose information is not always
easy to obtain, and (2) achieving our objective would enable auto-labeling of
large-scale datasets without relying on ground truth pose information, which
can be further used to provide training dataset for supervised VPR methods.
### III-A Overall framework
Given a sensory stream $\\{{\bf{\mathbf{o}}}_{i}\\}_{i=1}^{I}$ which can cover
the entire spatial area (each observation denoted by ${\bf{\mathbf{o}}}_{i}$
is either an RGB image or a LiDAR point cloud), we aim to discover the spatial
connectivity for these observations without any spatial information.
Specifically, we propose a self-supervised learning paradigm that utilizes the
interconnection of spatial, temporal, and feature neighborhoods to iteratively
refine the noisy pseudo-labels as well as the feature representations, as
shown in Fig. 2. The final output is the discovered spatial connectivity for
the given observation sequence, as well as the learned neural network
$f_{\theta}$ which can map each observation ${\bf{\mathbf{o}}}_{i}$ to a
discriminative feature space that supports VPR. Note that the optimized
$f_{\theta}$ can also discover the spatial connectivity for a new observation
sequence collected in the same spatial area: for each query observation
${\bf{\mathbf{q}}}_{i}$, a set of nearest feature neighbors
$\mathcal{P}^{f}_{{\bf{\mathbf{q}}}_{i}}=\\{{\bf{p}}_{j}\\}_{j\neq i}$ which
can approximate the spatial neighbors is retrieved based on the Euclidean
distance in the learned feature space:
$d\left({\bf{\mathbf{q}}}_{i},{\bf{p}}_{j}\right)=||f_{\theta}\left({\bf{\mathbf{q}}}_{i}\right)-f_{\theta}\left({\bf{p}}_{j}\right)||$.
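As a concrete illustration, the minimal sketch below (our own, not the authors' released code; the network stub `f_theta` and variable names are assumptions) retrieves feature-space neighbors by this Euclidean distance:
```python
import torch

def retrieve_feature_neighbors(f_theta, query, observations, k=10):
    """Retrieve the k nearest feature-space neighbors of a query.

    f_theta: trained network mapping observations to feature vectors.
    query: a single observation tensor (no batch dim); observations: the
    full stream, stacked along dim 0. Returns indices of the k closest frames.
    """
    with torch.no_grad():
        q_feat = f_theta(query.unsqueeze(0))         # (1, D)
        o_feats = f_theta(observations)              # (I, D)
    # d(q_i, p_j) = ||f_theta(q_i) - f_theta(p_j)|| (Euclidean)
    dists = torch.cdist(q_feat, o_feats).squeeze(0)  # (I,)
    return torch.topk(dists, k, largest=False).indices
```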
The four sub-modules are detailed next.
### III-B Label initialization
Temporal adjacency between any two data frames generally implies their spatial
adjacency, but not vice versa [16]. Therefore, we could utilize temporal
adjacency to generate noisy labels for each observation. Note that the term
noisy means that such labels may include false negatives, e.g., they overlook positive labels for observations that are spatially adjacent yet temporally distant. In this work, the labels are initialized solely with the
knowledge of temporal adjacency: given a query ${\bf{q}}_{i}$, the temporal
positive set is
$\mathcal{P}^{t}_{{\bf{\mathbf{q}}}_{i}}=\\{{\bf{p}}_{j}\\}_{|i-j|<n}$, and
the temporal negative set is
$\mathcal{N}^{t}_{{\bf{\mathbf{q}}}_{i}}=\\{{\bf{n}}_{v}\\}_{|i-v|>u\times
n}$, where $i,j,v$ denote the frame indices of the observations, $n$ controls
the size of temporal positives, and $u$ controls the range of temporal negatives.
After initialization, the feature will be refined iteratively by inferring
spatial neighborhoods according to the known temporal neighborhoods and
current feature neighborhoods.
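A minimal sketch of this initialization (our illustration; the defaults mirror the $n=5$, $u=2$ setting used later for the point cloud experiments):
```python
def init_temporal_labels(i, num_frames, n=5, u=2):
    """Noisy pseudo-labels for query frame i from temporal adjacency.

    Positives: frames j with |i - j| < n (excluding i itself);
    negatives: frames v with |i - v| > u * n.
    """
    positives = [j for j in range(num_frames) if 0 < abs(i - j) < n]
    negatives = [v for v in range(num_frames) if abs(i - v) > u * n]
    return positives, negatives
```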
### III-C Training
In order to learn a more robust feature representation for recognizing places
viewed from diverse directions, we augment the training set by simulating
various sensor headings, i.e., an observed panoramic image/point cloud is
randomly rolled/rotated around the sensor’s vertical axis. Meanwhile, the
pseudo ground-truth for the $e$-th training epoch includes spatial positives
denoted by $\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$ and spatial negatives
denoted by $\mathcal{N}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$, both for the $i$-th
query ${\bf{q}}_{i}$. Note that they are initialized by
$\mathcal{P}^{t}_{{\bf{\mathbf{q}}}_{i}}$ and
$\mathcal{N}^{t}_{{\bf{\mathbf{q}}}_{i}}$ for $e=1$ and will be iteratively
updated. For the training objective, we employ the weakly supervised ranking loss
denoted by $L_{\theta}$ for the $i$-th training tuple
$({\bf{q}}_{i},\\{{\bf{p}}_{j}\\},\\{{\bf{n}}_{v}\\})$, following [12]:
$L_{\theta}=\sum_{i}\sum_{v}l\left(\min_{j}d\left({\bf{q}}_{i},{\bf{p}}_{j}\right)+m-d\left({\bf{q}}_{i},{\bf{n}}_{v}\right)\right),$
(1)
where ${\bf{p}}_{j}\in\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$ and
${\bf{n}}_{v}\ \in\mathcal{N}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$, $d$ is the
Euclidean distance in feature space, $l$ is the hinge loss: $l(x)=\max(x,0)$,
and $m$ defines the margin (set to $0.2$ in this paper).
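The heading augmentation and the loss of Eq. (1) can be sketched as follows (a simplified illustration with assumed tensor shapes, not the released training code):
```python
import torch

def heading_augment(panorama):
    """Simulate a new sensor heading by horizontally rolling a panoramic
    image tensor of shape (C, H, W); point clouds would instead be rotated
    about the sensor's vertical axis."""
    shift = int(torch.randint(0, panorama.shape[-1], (1,)))
    return torch.roll(panorama, shifts=shift, dims=-1)

def ranking_loss(q_feat, pos_feats, neg_feats, margin=0.2):
    """Eq. (1) for one training tuple (q, {p_j}, {n_v}).

    q_feat: (D,); pos_feats: (P, D); neg_feats: (V, D).
    Uses the best-matching positive against every negative.
    """
    d_pos = torch.norm(pos_feats - q_feat, dim=1)  # d(q, p_j)
    d_neg = torch.norm(neg_feats - q_feat, dim=1)  # d(q, n_v)
    # hinge l(x) = max(x, 0) applied to min_j d(q, p_j) + m - d(q, n_v)
    return torch.clamp(d_pos.min() + margin - d_neg, min=0).sum()
```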
### III-D Expansion
At the initial stage (epoch $e=1$), the current positive label set
$\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$ has a limited size since we only
incorporate the temporal positives. Actually, there could be a number of
spatial positives in $\mathcal{N}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$ which are
temporally distant yet spatially adjacent to the query. To improve the labels,
we need to expand our positive set $\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$
by including current feature space neighbors. Specifically, once the
$f_{\theta}$ is trained for one epoch with current labels, the $K$ nearest
neighbors (KNN) could be retrieved as the feature neighbors.
Dynamic KNN. However, it is hard to use the same $K$ for different query
observations, because places like intersections naturally have larger spatial
neighborhoods than hallways. Thus, we need a frame-specific threshold. To
design such a mechanism, we rely on the following hypothesis: if an observation is closer to the query than at least one of the query's temporal neighbors in the feature space, then such an observation is more likely to be a ground-truth spatial neighbor of the query. Therefore, we first select $K$
nearest feature neighbors as the feature neighborhood candidates
$\\{{\bf{\mathbf{o}}}_{k}\\}_{k=1}^{K}$. Then we utilize the maximum feature
distance between the query and its temporal neighbors as a frame-specific
threshold $\tau_{i}$ to determine which candidates should be included in the
expanded positive set:
$\tau_{i}=\max\limits_{{\bf{p}}_{j}\in\mathcal{P}^{t}_{{\bf{\mathbf{q}}}_{i}}}d\left({\bf{q}}_{i},{\bf{p}}_{j}\right),$
(2)
where $\mathcal{P}^{t}_{{\bf{\mathbf{q}}}_{i}}$ is the temporal neighborhood
set for query ${\bf{q}}_{i}$, $i$ and $j$ are the frame indices. Finally, any
candidate ${\bf{o}}_{k}$ (selected from KNN) with a smaller feature distance
to the query than the threshold $\tau_{i}$ will form the feature neighborhood
set:
$\mathcal{P}^{f}_{{\bf{\mathbf{q}}}_{i}}=\\{{\bf{\mathbf{o}}}_{k}\\}_{d\left({\bf{q}}_{i},{\bf{\mathbf{o}}}_{k}\right)<\tau_{i}}$.
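A sketch of the expansion step under these definitions (our illustration; `all_feats` holds features of the whole stream, and the query's own index is assumed excluded from the candidates):
```python
import torch

def dynamic_knn_expansion(q_feat, all_feats, temporal_pos_idx, K=20):
    """Expand the positive set with feature-space neighbors (Sec. III-D).

    The frame-specific threshold tau_i of Eq. (2) is the maximum feature
    distance from the query to its temporal positives; only KNN candidates
    closer than tau_i are kept.
    """
    dists = torch.norm(all_feats - q_feat, dim=1)       # (I,)
    tau_i = dists[temporal_pos_idx].max()               # Eq. (2)
    cand = torch.topk(dists, K, largest=False).indices  # KNN candidates
    return [int(j) for j in cand if dists[j] < tau_i]   # feature positives
```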
### III-E Contraction
To avoid potential false positives caused by noisy feature space during
learning, we employ geometric verification to check the validity of feature
neighborhoods $\mathcal{P}^{f}_{{\bf{\mathbf{q}}}_{i}}$ before using it to
update $\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$. We adopt different
distance measures (independent of the neural network) based on input modality:
the number of matching points for images [52] and the Chamfer distance after
ICP for point clouds [53]. The verification strategy mirrors the dynamic KNN selection: we consider the maximum distance
between the query and its temporal neighborhoods as the threshold. Then any
candidate with a distance value within the threshold will pass the
verification. The verified feature neighborhoods denoted by
$\hat{\mathcal{P}}^{f}_{{\bf{\mathbf{q}}}_{i}}$ are permanently added into the
positive set:
$\mathcal{P}^{(e+1)}_{{\bf{q}}_{i}}=\mathcal{P}^{(e)}_{{\bf{q}}_{i}}\cup\hat{\mathcal{P}}^{f}_{{\bf{q}}_{i}},$
(3)
where $e$ is the training epoch index. Note that
$\mathcal{N}_{{\bf{\mathbf{q}}}_{i}}^{(e)}$ will also be updated similarly.
Afterward, the model will be trained with the updated labels until
convergence.
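A sketch of this contraction step (our own illustration; `geo_dist` stands in for the modality-specific measure, e.g. a negated SIFT match count for images or Chamfer distance after ICP for point clouds, with smaller meaning geometrically closer in both conventions here):
```python
def contract_positive_set(query, frames, feat_pos_idx, temporal_pos_idx,
                          pos_set, geo_dist):
    """Geometric verification and positive-set update (Sec. III-E, Eq. 3)."""
    # Frame-specific threshold: the worst geometric distance among the
    # query's temporal neighbors, which are assumed spatially adjacent.
    tau_geo = max(geo_dist(query, frames[j]) for j in temporal_pos_idx)
    # Keep only feature neighbors that pass geometric verification.
    verified = {j for j in feat_pos_idx
                if geo_dist(query, frames[j]) <= tau_geo}
    return pos_set | verified  # Eq. (3): union with verified positives
```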
## IV Experiments
We test TF-VPR in three different kinds of environments: simulated 2D point
clouds [51], simulated RGB images [54], and real-world RGB images. Our
codebase uses PyTorch [55] with network parameters optimized using Adam [56].
The learning rate is tuned to 0.001. We compare TF-VPR with both supervised
and self-supervised baseline methods.
### IV-A Evaluation metrics
Recall rate (Recall@N) is the ratio of successful retrievals to the total number of queries, where a successful retrieval means at least one of the top-N retrieved results is also a ground-truth spatial neighbor of the query.
Ground truth can be obtained by K-D tree search on geographical location
$(x,y,z)$ within a certain radius $R$. Note that when computing this metric,
we need to exclude the temporal neighbors of a query from its top-$N$
retrievals. This is because in our auto-labeling setup, the temporal-based
methods can easily overfit the temporal neighborhood, leading to uninformative
evaluation with recall rates that are always close to $100\%$ if the temporal
neighbors are kept in ground truth.
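The metric can be sketched as below (our illustration with assumed parameter values; ground-truth positions are used for evaluation only):
```python
from scipy.spatial import cKDTree

def recall_at_n(positions, retrievals, n_excl=10, radius=5.0, N=10):
    """Recall@N with temporal-neighbor exclusion.

    positions: (I, 2 or 3) array of ground-truth locations.
    retrievals[i]: frame indices ranked by feature distance for query i.
    """
    tree = cKDTree(positions)
    hits = 0
    for i, ranked in enumerate(retrievals):
        gt = set(tree.query_ball_point(positions[i], r=radius)) - {i}
        # exclude the query's closest temporal neighbors before taking top-N
        top_n = [j for j in ranked if abs(j - i) > n_excl][:N]
        hits += any(j in gt for j in top_n)
    return hits / len(retrievals)
```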
Heading diversity (HD) measures the diversity of sensor headings of the true
positives (w.r.t. the query’s heading) among the top-$|GT|$ retrievals. $|GT|$
is the size of the ground-truth set, obtained based on a specific radius $R$ as described in Recall@N. We believe that positives with headings different from the query's are more valuable in downstream
applications. Thus, we evenly divide the 360∘ range of headings into 8 angular
bins in our setup, ignoring the first and last bins because they contain
positives with similar headings w.r.t. the query. In this case, the $m$-th bin
covers the heading difference range ${\mathcal{Q}_{m}}$ as:
$\mathcal{Q}_{m}=[m\times 45^{\circ},(m+1)\times 45^{\circ}],\,m\in\\{1,2,3,4,5,6\\}.$ (4)
Then, we define HD for a query $\bf{q}$ as the bin coverage ratio between the
true positives and the ground truth:
$\text{HD(\bf{q})}=\frac{\sum_{m\in{[1...6]}}\mathds{1}(\exists\bf{x}\in\tilde{\mathcal{P}}^{|GT|}_{{\bf{\mathbf{q}}}}\land(\theta_{\bf{q}}-\theta_{\bf{x}})\in\mathcal{Q}_{m})}{\varepsilon+\sum_{m\in{[1...6]}}\mathds{1}(\exists\bf{y}\in\mathcal{P}^{|GT|}_{{\bf{\mathbf{q}}}}\land(\theta_{\bf{q}}-\theta_{\bf{y}})\in\mathcal{Q}_{m})},$
(5)
where $\theta_{\bf{q}}$ and $\theta_{\bf{x}}$ are respectively the heading of
the query and a frame ${\bf{x}}$,
$\tilde{\mathcal{P}}^{|GT|}_{{\bf{\mathbf{q}}}}$ is the set of true positives
in the top-$|GT|$ retrievals, $\mathcal{P}^{|GT|}_{{\bf{\mathbf{q}}}}$ is the
ground truth positive set, $\varepsilon$ is an arbitrarily small positive
quantity to avoid zero division error, and $\mathds{1}(\cdot)$ is the
indicator function. See Fig. 3 for an example. Finally, we report the averaged
HD for all queries.
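A sketch of HD for one query, following Eqs. (4) and (5) (our illustration; headings are in degrees):
```python
def heading_diversity(theta_q, true_pos, gt_pos, headings, eps=1e-8):
    """Heading diversity (Eq. 5) as a bin-coverage ratio.

    true_pos / gt_pos: frame indices of the true positives among the
    top-|GT| retrievals and of the ground-truth set, respectively.
    """
    def covered_bins(indices):
        bins = set()
        for j in indices:
            diff = (theta_q - headings[j]) % 360.0
            m = int(diff // 45.0)      # Eq. (4): 45-degree angular bins
            if 1 <= m <= 6:            # ignore the first and last bins
                bins.add(m)
        return bins
    return len(covered_bins(true_pos)) / (eps + len(covered_bins(gt_pos)))
```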
Figure 3: Heading diversity illustration. The angle represents the heading
difference between the query and the evaluated positives. HD represents how
many angular bins are covered by true positives vs. that by the ground truth.
The figure gives an example of how to calculate HD. Excluding the first and
last bins, $\mathcal{P}^{f}_{{\bf{\mathbf{q}}}_{i}}$ contains 5 retrieved non-
temporal positives, 4 of which are true positives, and they fall into 3
different bins, while ground truth covers 5 bins, so HD is $3/5$ (not $4/5$).
### IV-B Experiments on simulated point cloud dataset
Dataset. We generate the 2D point cloud dataset using the tool provided in
[51]. Specifically, we create a large environment as a $1024\times 1024$
binary image in which black and white pixels respectively represent the
occupied and free-space locations in the 2D environment. We then manually
generate a set of trajectories in this environment, each of which contains a
sequence of $2048$ poses. At each pose, a point cloud scan is simulated by
finding the intersection points between 2D LiDAR rays and occupied space in
the environment. The simulated 2D point cloud dataset contains $3$ different
environments and $18$ trajectories in each environment. Each scan contains
$256$ points.
Baseline methods. We compare TF-VPR with the following baselines: (1)
PointNetVLAD [30] trained with pose-based supervision, (2) PointNetVLAD
trained with temporal pseudo labels as in [16] (SPTM), (3) SPTM with data
augmentation (SPTM+A), (4) SPTM with feature-neighbor KNN expansion
(SPTM+F(K)) , and (5) SPTM+A+F(K). Note that the contraction step is not used
for this simulated toy dataset, because we found the expansions are all
correct in this case.
Implementation details. The network architecture for the SPTM baseline is the
same as PointNetVLAD. We select $n=5$ and $u=2$ in Section III-B. The value of
$n$ depends on the sampling rate of the sensor. Additionally, similar to the
training in [30], we randomly select only $2$ positives and $18$ negatives to
speed up the loss computation. The output feature dimension is $512$. Based on
the sampling rate of the sensor, for each query, we exclude its closest $10$
temporal neighbors from the top-N retrievals as explained in IV-A.
Figure 4: Qualitative VPR results on 2D point cloud dataset. The first row
shows the results of SPTM [16] and the second row shows the results of TF-VPR.
The first column is the query point cloud and columns 2–5 are the top 4
retrievals. Green and red respectively indicate true and false positives.
Figure 5: Recall@10 and HD with respect to training epoch on 2D point cloud
dataset. F denotes method with feature neighborhoods, A denotes method with
data augmentation, (K) denotes method with KNN during expansion, (D) denotes
method with dynamic KNN in expansion. Figure 6: Per-frame performance
visualization over ground truth trajectories on point cloud (top), simulated
RGB (middle), and NYU-VPR-360 (bottom) datasets. The left block shows the
Recall@10 and the right block shows the HD metrics. Each query’s metrics at
epoch 30 are color-coded.
Data augmentation. Fig. 5 shows a significant improvement in both recall@10
and HD after augmentation is added to SPTM. SPTM fails to retrieve the true
neighbors from different headings, leading to insufficient retrievals.
Differently, SPTM+A uses augmented positives in the triplet loss, leading to a
network that is insensitive to the orientation of the point cloud inputs.
Thus, in Fig. 4, with augmentation, TF-VPR demonstrates its capability to
retrieve positives from diverse directions.
Feature-neighbor expansion. SPTM+F(K) performs slightly worse than the original SPTM as shown in Fig. 5, because adding positives only from the same direction does not provide extra valuable training data. In comparison, combining data augmentation with feature-space expansion (SPTM+A+F(K)) helps the expansion step discover true neighbors from different headings more effectively, thereby improving performance.
Dynamic KNN. We further study the effectiveness of dynamic KNN compared to
fixed KNN in TF-VPR, i.e., SPTM+A+F(D) vs. SPTM+A+F(K). The difference is
negligible on the simulated point cloud dataset; however, the difference is larger on the NYU-VPR-360 dataset (Section IV-D).
Performance difference between TF-VPR and other baselines. In Table I, among
all of these methods, TF-VPR achieves recall rate and heading diversity comparable to PointNetVLAD, and outperforms SPTM by a large margin.
Moreover, Fig. 6 visualizes the per-frame retrieval quality over the datasets’
ground truth trajectories, and SPTM clearly performs the worst with more
failures.
TABLE I: Quantitative results on 2D simulated point cloud data. We report the best recall rate (R) and heading diversity (HD).
Method | R@10 | R@5 | R@1 | HD
---|---|---|---|---
SPTM [16] | 93.36 | 91.06 | 79.10 | 4.25
TF-VPR (Ours) | 99.71 | 99.66 | 98.73 | 93.62
PointNetVLAD [30] | 100.00 | 100.00 | 100.00 | 89.38
### IV-C Experiments on photorealistic RGB dataset
TABLE II: Quantitative results on photorealistic RGB data. We report the best recall rate (R) and heading diversity (HD) for three rooms
in habitat-sim (Goffs, Micanopy, and Spotswood).
Scene | Goffs | Micanopy | Spotswood
---|---|---|---
Metric | R@10 | R@5 | R@1 | HD | R@10 | R@5 | R@1 | HD | R@10 | R@5 | R@1 | HD
SPTM [16] | 95.52 | 94.69 | 90.15 | 55.28 | 95.13 | 94.32 | 90.06 | 56.37 | 96.33 | 95.65 | 92.16 | 54.41
SPTM (epoch 30) [16] | 94.50 | 93.28 | 88.03 | 49.61 | 92.89 | 90.20 | 78.27 | 50.47 | 93.53 | 91.23 | 81.52 | 49.69
PCL [57] | 49.27 | 42.86 | 27.00 | 0.35 | 60.36 | 54.18 | 37.73 | 0.19 | 57.36 | 51.66 | 35.33 | 0.03
VLAD [9, 8] | 42.48 | 30.69 | 12.54 | 0.07 | 49.23 | 37.82 | 17.36 | 0.00 | 53.52 | 41.23 | 16.53 | 0.12
TF-VPR (Ours) | 96.31 | 96.00 | 93.62 | 67.67 | 95.59 | 95.21 | 93.42 | 65.97 | 96.91 | 96.57 | 95.35 | 69.39
TF-VPR (Ours) (epoch 30) | 96.15 | 95.82 | 93.31 | 66.54 | 95.46 | 94.93 | 92.82 | 63.94 | 96.66 | 96.32 | 94.66 | 65.59
NetVLAD [12] | 99.63 | 99.16 | 96.36 | 60.48 | 99.48 | 99.26 | 97.38 | 63.38 | 99.56 | 99.17 | 96.60 | 64.64
Figure 7: Recall@10 and HD vs. training epoch on the photorealistic dataset
(Goffs). V denotes method with geometric verification. Other abbreviations
follow Fig. 5. Figure 8: Qualitative VPR results on the photorealistic RGB
dataset. The upper part shows an example where TF-VPR outperforms SPTM [16],
and the lower part depicts a challenging example in which none of the methods
retrieve the data correctly. Green and red indicate true and false non-temporal spatial positives, respectively. Although the frames in the first row appear to be positives, their positions are still far from the query. Due to the high frame density in the habitat-sim dataset, frames can only be considered positives if they are no more than $20$ cm from the query.
Dataset. TF-VPR has also been tested via the habitat-sim simulator [54] on the Gibson photorealistic RGB dataset [58], which provides panoramic RGB images for a variety of indoor scans. We capture RGB images with a panoramic camera mounted on a robot moving randomly in the virtual environment, collecting a total of $33,679$ RGB images in three Gibson rooms. Each image is downsampled
to $256\times 64$ pixels. In contrast to other datasets, this simulated RGB
dataset contains a large number of revisits of places from both similar and
different directions, which is useful for testing recall rate and heading
diversity for VPR.
Baseline methods. The following baselines are evaluated: (1) NetVLAD [12]
trained with pose-based supervision, (2) NetVLAD trained with temporal pseudo
labels as in [16] (SPTM), (3) prototypical contrastive learning (PCL) [57] as
another self-supervised VPR method used in visual navigation [37], (4) VLAD
[9, 8] as a classic non-deep-learning VPR, and the previous ablation study
baselines.
Implementation details. To speed up computation, the output dimension is set
to $512$. Our implementation is based on the code from NetVLAD. The method for
converting NetVLAD to SPTM is similar to Section IV-B. Similarly, we also
choose $n=5$ and $u=2$ as in Section III-B, and exclude each query's closest $30$ temporal neighbors from the top-N retrievals as explained in IV-A.
For the newly involved baselines, PCL is implemented with its default settings; we tune the total number of clusters by trying $200$, $500$, and $1000$ and selecting the best result. For the VLAD baseline, we use the classic 128-dimensional SIFT
features, and a cluster size of $32$. The raw VLAD descriptor dimension of
$32\times 128$ is further reduced to $512$ by PCA.
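One possible reading of this VLAD baseline is sketched below (our illustration, not the authors' code; the k-means centroids are assumed to be fitted on training-set SIFT descriptors, and a PCA fitted on training descriptors would reduce the 32×128 output to 512 dimensions):
```python
import numpy as np
import cv2

def vlad_descriptor(image_gray, centers):
    """VLAD over 128-d SIFT features with a 32-word vocabulary.

    image_gray: uint8 grayscale image; centers: (32, 128) k-means centroids.
    Returns an L2-normalized 32*128 raw descriptor (before PCA).
    """
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(image_gray, None)  # (M, 128) or None
    vlad = np.zeros_like(centers, dtype=np.float64)
    if desc is not None:
        # assign each local descriptor to its nearest visual word
        assign = np.argmin(
            np.linalg.norm(desc[:, None, :] - centers[None], axis=2), axis=1)
        for k in range(len(centers)):  # accumulate residuals per word
            vlad[k] = (desc[assign == k] - centers[k]).sum(axis=0)
    vlad = vlad.flatten()
    return vlad / (np.linalg.norm(vlad) + 1e-12)
```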
Importance of contraction. Geometric verification becomes more important for
RGB data: the generated positive candidates are not as trustworthy as in the toy dataset of Section IV-B, so including geometric verification helps maintain or even increase accuracy. From Fig. 7, we can observe that both metrics of all methods other than TF-VPR decline over training epochs, because they overfit temporal neighbors, as depicted in the first stage of Fig. 1(b), and tend to miss true spatial neighbors. To prevent this,
the feature space neighborhood together with the verification discovers more
reliable positives in $\hat{\mathcal{P}}^{f}_{{\bf{\mathbf{q}}}_{i}}$ to be
added into $\mathcal{P}_{{\bf{\mathbf{q}}}_{i}}$, as depicted in the third
stage of Fig. 1(b). The stability of the model performance over training
epochs is critical for self-supervised VPR in reality because we would not
know when to stop training without ground truth poses.
Ablation study on habitat-sim. As shown in Fig. 7 and Table II, TF-VPR
outperforms all unsupervised baselines, and approaches the supervised NetVLAD
performance in both recall rate and heading diversity (HD). Notably, HD is improved by a large margin, particularly for scenes with a high number of
revisits from different directions. On average, TF-VPR improves recall rate by
$1\%$ and heading diversity by $10\%$. Fig. 6 visualizes the retrieval quality
on the simulated RGB dataset. SPTM makes reasonably accurate estimates in recall@10 and HD. TF-VPR outperforms SPTM, but the improvement is modest; in particular, the HD improvement on the RGB dataset is smaller than on the point cloud dataset, because RGB images are less robust to environmental changes such as lighting and seasons. The qualitative results are in Fig. 8.
PCL, VLAD, and a conventional visual SLAM system. Table II shows the poor performance of PCL and VLAD. We believe PCL might not be a good VPR solution because most contrastive learning methods form disjoint clusters for each category, which is unsuitable for representing the continuous feature space of vision-based SLAM problems. Similarly, VLAD performs poorly when the input resolution is low and the output dimension is small. Moreover, we test a conventional visual SLAM system, OpenVSLAM [59]; however, it easily loses track of odometry, with a total of $17.75\%$ of the frames lost during tracking. OpenVSLAM builds a disjoint topology graph while tracking odometry, and its recall rate is $54.98\%$ versus $96.23\%$ for TF-VPR. Given this poor performance, we do not use OpenVSLAM as a benchmark.
### IV-D Experiments on NYU-VPR-360 dataset
Figure 9: Recall@10 and HD vs. training epoch on NYU-VPR-360 (scene 1).
Abbreviations follow Fig. 7. TABLE III: Quantitative results on the NYU-VPR-360 dataset. We report the best recall rate (R) and heading diversity (HD) for two scenes (Scene 1, Scene 2).
Scene | Scene 1 | Scene 2
---|---|---
Metric | R@10 | R@5 | R@1 | HD | R@10 | R@5 | R@1 | HD
SPTM [16] | 69.02 | 60.08 | 48.27 | 79.49 | 65.97 | 62.52 | 53.25 | 50.79
SPTM (epoch 30) [16] | 60.16 | 50.27 | 39.34 | 75.46 | 58.12 | 54.06 | 43.92 | 46.39
PCL [57] | 64.75 | 57.30 | 40.99 | 0.04 | 62.80 | 58.42 | 46.96 | 13.46
VLAD [9, 8] | 37.31 | 27.89 | 10.53 | 0.01 | 21.23 | 16.92 | 8.78 | 0.01
TF-VPR (Ours) | 71.94 | 63.89 | 52.56 | 83.34 | 66.16 | 62.05 | 52.49 | 51.19
TF-VPR (Ours) (epoch 30) | 70.97 | 62.61 | 50.99 | 80.79 | 64.18 | 60.09 | 51.44 | 49.33
NetVLAD [12] | 100.00 | 100.00 | 100.00 | 86.38 | 100.00 | 100.00 | 99.99 | 54.45
NYU-VPR-360 dataset. There are several VPR datasets with panoramic images [60,
50], yet few of them have repeated visits to the same place from a variety of
angles. To show the ability of our method to retrieve images with different headings, we propose the NYU-VPR-360 dataset, captured by a GoPro MAX (a dual-
lens 360∘ camera with GPS recording), which is composed of sequentially
collected panoramic RGB images of street views in New York City. The GoPro
camera was mounted on the top of the driving vehicle. We utilize the GPS
readings of the camera to provide the ground truth of spatial neighborhoods.
Note that we select the panoramic images from the whole video to make them
synchronized with GPS. The dataset is composed of two driving trajectories,
covering an area of approximately $80,000m^{2}$. There are over $15,000$
images of $3840\times 1920$ pixels in the dataset with their corresponding
locations for each scene. Most junctions are traversed in at least two different driving directions, with the exception of a few intersections that are not, for traffic reasons.
Baseline methods. Following IV-C, we use SPTM [16], NetVLAD [12], VLAD [9, 8],
and PCL [57] as baselines.
Implementation details. The images are resized to $128\times 64$ pixels. We
set $n=10$ and $u=5$ as described in Section III-B. For scene 1 and scene 2,
based on different sampling rates of the sensors, for each query, we
respectively exclude its closest $30$ and $100$ temporal neighbors from the
top-N retrievals as explained in IV-A.
Comparison with baselines. The advantage of our method is more pronounced on the NYU-VPR-360 dataset, as shown by the qualitative results in Fig. 10. As shown in Fig. 6, TF-VPR improves the recall rate. Furthermore, as shown in Table III, TF-VPR surpasses the other baselines in recall rate and heading diversity in scene 1 by about $2\%$ and $4\%$ respectively. Performance in scene 2 does not improve because it contains no spatial positives from different headings. More importantly, the
performance gap between TF-VPR and other baselines becomes larger over epochs.
As shown in Fig. 9, the recall@10 of TF-VPR outperforms the baselines by
approximately $8\%$-$10\%$ at epoch $30$ as discussed in IV-C.
Dynamic KNN mechanism. The distinction between fixed-$K$ nearest-neighbor selection and the dynamic KNN described in Section III-D is sharper in our real-world experiment. This shows that dynamic KNN is a better mechanism for selecting feature neighborhoods: its flexibility allows the model to find an adequate number of neighbors for each location in the dataset.
Figure 10: Qualitative VPR results on NYU-VPR-360 dataset. The upper part
shows an example where TF-VPR outperforms SPTM, and the lower part shows a
challenging example in the dataset where only the top-1 retrieval of TF-VPR
and VLAD is correct. Green and red respectively indicate true and false
positives.
## V Conclusion
We propose TF-VPR as a self-supervised auto-labeling method for determining
the unknown spatial neighbors from the fixed temporal neighbors and learnable
feature neighbors. Extensive experiments show that TF-VPR not only improves
the recall rate over the existing method but also can retrieve spatial
positives with more diverse viewpoints on various datasets. TF-VPR enables
easier use of VPR in real-world robotics and computer vision applications, and
can be applied to most existing deep-learning-based VPR methods.
Acknowledgment. This work is supported by NSF grants under CMMI-1932187,
CNS-2121391, and EEC-2036870.
## References
* [1] S. Lowry, _et al._ , “Visual place recognition: A survey,” _T-RO_ , 2015.
* [2] E. Brachmann and C. Rother, “Visual camera re-localization from RGB and RGB-D images using DSAC,” _TPAMI_ , 2021.
* [3] A. Angeli, S. Doncieux, J.-A. Meyer, and D. Filliat, “Real-time visual loop-closure detection,” in _2008 ICRA_ , 2008.
* [4] F. Radenović, A. Iscen, G. Tolias, Y. Avrithis, and O. Chum, “Revisiting oxford and paris: Large-scale image retrieval benchmarking,” in _CVPR_ , 2018.
* [5] M. Zaffar, _et al._ , “VPR-Bench: An open-source visual place recognition evaluation framework with quantifiable viewpoint and appearance change,” _IJCV_ , 2021.
* [6] D. Sheng, _et al._ , “NYU-VPR: Long-term visual place recognition benchmark with view direction and data anonymization influences,” in _IROS_ , 2021.
* [7] D. G. Lowe, “Object recognition from local scale-invariant features,” in _ICCV_ , 1999.
* [8] H. Jégou, M. Douze, C. Schmid, and P. Pérez, “Aggregating local descriptors into a compact image representation,” in _CVPR_ , 2010.
* [9] R. Arandjelovic and A. Zisserman, “All about vlad,” in _CVPR_ , June 2013\.
* [10] P.-E. Sarlin, C. Cadena, R. Siegwart, and M. Dymczyk, “From coarse to fine: Robust hierarchical localization at large scale,” in _CVPR_ , 2019.
* [11] Z. Chen, _et al._ , “Deep learning features at scale for visual place recognition,” in _ICRA_ , 2017.
* [12] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “Netvlad: Cnn architecture for weakly supervised place recognition,” in _CVPR_ , June 2016.
* [13] S. Garg, M. Vankadari, and M. Milford, “Seqmatchnet: Contrastive learning with sequence matching for place recognition & relocalization,” in _CoRL_. PMLR, 2022.
* [14] A. Forechi, A. F. De Souza, C. Badue, and T. Oliveira-Santos, “Sequential appearance-based global localization using an ensemble of knn-dtw classifiers,” in _IJCNN_ , 2016.
* [15] O. Vysotska and C. Stachniss, “Lazy data association for image sequences matching under substantial appearance changes,” _RA-L_ , 2015.
* [16] N. Savinov, A. Dosovitskiy, and V. Koltun, “Semi-parametric topological memory for navigation,” in _ICLR_ , 2018.
* [17] S. E. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich, “Training deep neural networks on noisy labels with bootstrapping,” in _ICLR (Workshop)_ , 2015.
* [18] M. Cummins and P. Newman, “FAB-MAP: Probabilistic localization and mapping in the space of appearance,” _IJRR_ , 2008.
* [19] N. Cummins, Mark and Paul, “Appearance-only SLAM at large scale with FAB-MAP 2.0,” _IJRR_ , 2011.
* [20] M. J. Milford, G. F. Wyeth, and D. Prasser, “Ratslam: a hippocampal model for simultaneous localization and mapping,” in _ICRA_ , 2004.
* [21] D. Galvez-Lopez and J. D. Tardos, “Real-time loop detection with bags of binary words,” in _IROS_ , 2011.
* [22] M. J. Milford and G. F. Wyeth, “Seqslam: Visual route-based navigation for sunny summer days and stormy winter nights,” in _ICRA_ , 2012.
* [23] G. Costante, T. A. Ciarfuglia, P. Valigi, and E. Ricci, “A transfer learning approach for multi-cue semantic place recognition,” in _IROS_ , 2013.
* [24] N. Sünderhauf, _et al._ , “Place recognition with convnet landmarks: Viewpoint-robust, condition-robust, training-free,” in _Robotics: Science and Systems XI_. Robotics: Science and Systems Conference, 2015.
* [25] M. Lopez-Antequera, R. Gomez-Ojeda, N. Petkov, and J. Gonzalez-Jimenez, “Appearance-invariant place recognition by discriminatively training a convolutional neural network,” _Pattern Recognition Letters_ , 2017.
* [26] T. Naseer, G. L. Oliveira, T. Brox, and W. Burgard, “Semantics-aware visual localization under challenging perceptual conditions,” in _ICRA_ , 2017.
* [27] S. Garg, N. Suenderhauf, and M. Milford, “Lost? appearance-invariant place recognition for opposite viewpoints using visual semantics,” _arXiv preprint arXiv:1804.05526_ , June 2018.
* [28] C. Choy, J. Park, and V. Koltun, “Fully convolutional geometric features,” in _ICCV_ , 2019.
* [29] H. Deng, T. Birdal, and S. Ilic, “Ppfnet: Global context aware local features for robust 3D point matching,” in _CVPR_ , 2018.
* [30] M. A. Uy and G. H. Lee, “PointNetVLAD: Deep point cloud based retrieval for large-scale place recognition,” in _CVPR_ , 2018.
* [31] Z. Liu, _et al._ , “Lpd-net: 3d point cloud learning for large-scale place recognition and environment analysis,” in _ICCV_ , 2019.
* [32] X. Chen, _et al._ , “Overlapnet: Loop closing for lidar-based slam,” in _Robotics: Science and Systems_ , 2020.
* [33] K. Cai, B. Wang, and C. X. Lu, “Autoplace: Robust place recognition with low-cost single-chip automotive radar,” in _ICRA_ , 2022.
* [34] S. Lowry and M. J. Milford, “Supervised and unsupervised linear learning techniques for visual place recognition in changing environments,” _T-RO_ , 2016.
* [35] N. Merrill and G. Huang, “Lightweight unsupervised deep loop closure,” in _Robotics: Science and Systems_ , 2018.
* [36] X. Gao and T. Zhang, “Unsupervised learning to detect loops using deep neural networks for visual SLAM system,” _Autonomous robots_ , 2017.
* [37] O. Kwon, N. Kim, Y. Choi, H. Yoo, J. Park, and S. Oh, “Visual graph memory with unsupervised representation for visual navigation,” in _ICCV_ , 2021.
* [38] N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari, “Learning with noisy labels,” _NeurIPS_ , 2013.
* [39] H. Song, M. Kim, D. Park, Y. Shin, and J.-G. Lee, “Learning from noisy labels with deep neural networks: A survey,” _TNNLS_ , 2022.
* [40] Z. Yu, _et al._ , “Simultaneous edge alignment and learning,” in _ECCV_ , 2018.
* [41] G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu, “Making deep neural networks robust to label noise: A loss correction approach,” in _CVPR_ , 2017.
* [42] D.-H. Lee _et al._ , “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” in _ICML_ , 2013.
* [43] Y. Zou, Z. Yu, B. Kumar, and J. Wang, “Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,” in _ECCV_ , 2018.
* [44] Y. Zou, Z. Yu, X. Liu, B. Kumar, and J. Wang, “Confidence regularized self-training,” in _ICCV_ , 2019.
* [45] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in _CVPR_ , 2020.
* [46] A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with contrastive predictive coding,” _arXiv preprint arXiv:1807.03748_ , 2018\.
* [47] Y. Tian, D. Krishnan, and P. Isola, “Contrastive multiview coding,” in _ECCV_. Springer, 2020.
* [48] Y. Tian, C. Sun, B. Poole, D. Krishnan, C. Schmid, and P. Isola, “What makes for good views for contrastive learning?” _NeurIPS_ , 2020.
* [49] R. Arandjelović and A. Zisserman, “Visual vocabulary with a semantic twist,” in _ACCV_. Springer, 2014.
* [50] A. Torii, R. Arandjelovic, J. Sivic, M. Okutomi, and T. Pajdla, “24/7 place recognition by view synthesis,” in _CVPR_ , 2015.
* [51] L. Ding and C. Feng, “Deepmapping: Unsupervised map estimation from multiple point clouds,” in _CVPR_ , 2019.
* [52] P. C. Ng and S. Henikoff, “Sift: Predicting amino acid changes that affect protein function,” _Nucleic acids research_ , 2003.
* [53] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas, “Learning representations and generative models for 3d point clouds,” in _ICML_. PMLR, 2018.
* [54] M. Savva, _et al._ , “Habitat: A platform for embodied AI research,” in _ICCV_ , 2019.
* [55] A. Paszke, _et al._ , “Automatic differentiation in PyTorch,” in _NIPS Autodiff Workshop_ , 2017.
* [56] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _ICLR_ , 2015.
* [57] J. Li, P. Zhou, C. Xiong, and S. C. Hoi, “Prototypical contrastive learning of unsupervised representations,” in _ICLR_ , 2021.
* [58] F. Xia, A. R. Zamir, Z.-Y. He, A. Sax, J. Malik, and S. Savarese, “Gibson env: real-world perception for embodied agents,” in _CVPR_ , 2018.
* [59] S. Sumikura, M. Shibuya, and K. Sakurada, “Openvslam: A versatile visual slam framework,” in _ACM Multimedia_ , 2019.
* [60] A. Torii, J. Sivic, T. Pajdla, and M. Okutomi, “Visual place recognition with repetitive structures,” in _CVPR_ , 2013.
# FWD: Real-time Novel View Synthesis with Forward Warping and Depth
Ang Cao Chris Rockwell Justin Johnson
University of Michigan Ann Arbor
<EMAIL_ADDRESS>
###### Abstract
Novel view synthesis (NVS) is a challenging task requiring systems to generate
photorealistic images of scenes from new viewpoints, where both quality and
speed are important for applications. Previous image-based rendering (IBR)
methods are fast, but have poor quality when input views are sparse. Recent
Neural Radiance Fields (NeRF) and generalizable variants give impressive
results but are not real-time. In our paper, we propose a generalizable NVS
method with sparse inputs, called FWD, which gives high-quality synthesis in real-time. With explicit depth and differentiable rendering, it achieves results competitive with the SOTA methods at a 130–1000$\times$ speedup and with better perceptual quality. If available, sensor depth can be seamlessly integrated during either training or inference to improve image quality while retaining real-time speed. With the growing prevalence of depth sensors, we hope that methods making use of depth will become increasingly useful.
## 1 Introduction
Given several posed images, _novel view synthesis_ (NVS) aims to generate
photorealistic images depicting the scene from unseen viewpoints. This long-
standing task has applications in graphics and VR/AR, bringing life to still images. It requires a deep understanding of geometry and semantics, making it an appealing testbed for visual understanding.
Figure 1: Real-time Novel View Synthesis. We present a real-time and
generalizable method to synthesize images from sparse inputs. NeRF variants
model the scene via an MLP, which is queried millions of times during
rendering and leads to low speeds. Our method utilizes explicit depths and
point cloud renderers for fast rendering, inspired by SynSin [82]. The model
is trained end-to-end with a novel fusion transformer to give high-quality
results, where regressed depths and features are optimized for synthesis.
Early work on NVS focused on _image-based rendering_ (IBR), where models
generate target views from a set of input images. Light field [39] or proxy
geometry (like mesh surfaces) [12, 24, 61, 62] are typically constructed from
posed inputs, and target views are synthesized by resampling or blending
warped inputs. Requiring dense input images, these methods are limited by 3D
reconstruction quality, and can perform poorly with _sparse_ input images.
Recently, Neural Radiance Fields (NeRF) [48] have become the leading methods
for NVS, using MLPs to represent the 5D radiance field of the scene
_implicitly_. The color and density of each sampling point are queried from
the network and aggregated by volumetric rendering to get the pixel color.
With dense sampling points and a _differentiable renderer_ , explicit geometry isn’t needed, and densities optimized for synthesis quality are learned. Despite impressive results, these methods are not _generalizable_ , requiring MLP fitting for each scene with dense inputs. They are also extremely _slow_ because of the tremendous number of MLP queries needed for a single image.
Generalizable NeRF variants like PixelNeRF [89], IBRNet [78], and MVSNeRF [9] have emerged very recently, synthesizing novel views of unseen scenes without per-scene optimization by conditioning an MLP on sparse inputs. However, they still query the MLP millions of times, leading to slow speeds. Despite progress in accelerating NeRF with per-scene optimization [88, 27, 18], fast and generalizable NeRF variants remain under-explored.
In this paper, we target a _generalizable_ NVS method with _sparse_ inputs,
avoiding the need for dense view collections. Both _real-time speed_ and _high-quality_
synthesis are expected, allowing interactive applications. Classical IBR
methods are fast but require dense input views for good results. Generalizable
NeRF variants show excellent quality without per-scene optimization but
require intense computations, leading to slow speeds. Our method, termed FWD,
achieves this target by Forward Warping features based on Depths.
Our key insight is that explicitly representing the _depth_ of each input
pixel allows us to apply _forward warping_ to each input view using a
differentiable point cloud renderer. This avoids the expensive volumetric
sampling used in NeRF-like methods, enabling real-time speed while maintaining
high image quality. This idea is deeply inspired by the success of SynSin
[82], which employs a differentiable point cloud renderer for single image
NVS. Our paper extends SynSin to multiple inputs settings and explores
effective and efficient methods to fuse multi-view information.
Like prior NVS methods, our approach can be trained with RGB data only, but it
can be progressively enhanced if noisy sensor depth data is available during
training or inference. Depth sensors are becoming more prevalent in consumer
devices such as the iPhone 13 Pro and the LG G8 ThinQ, making RGB-D data more
accessible than ever. For this reason, we believe that methods making use of
RGB-D will become increasingly useful over time.
Our method estimates depths for each input view to build a point cloud of
latent features, then synthesizes novel views via a point cloud renderer. To
alleviate the inconsistencies between observations from various viewpoints, we
introduce a view-dependent feature MLP into point clouds to model view-
dependent effects. We also propose a novel Transformer-based fusion module to
effectively combine features from multiple inputs. A refinement module is
employed to inpaint missing regions and further improve synthesis quality. The
whole model is trained end-to-end to minimize photometric and perceptual
losses, learning depth and features optimized for synthesis quality.
Our design possesses several advantages compared with existing methods. First,
it gives both high-quality and high-speed synthesis. Using explicit point
clouds enables real-time rendering. Meanwhile, the differentiable renderer and end-to-end training enable high-quality synthesis. Also, compared to NeRF-like methods, which cannot synthesize whole images during training because of intensive computation, our method can easily utilize a perceptual loss and a refinement module, which noticeably improve the visual quality of synthesis. Moreover, our model can seamlessly integrate sensor depths to
further improve synthesis quality. Experimental results support these
analyses.
We evaluate our method on the ShapeNet and DTU datasets, comparing it with
representative NeRF-variants and IBR methods. It outperforms existing methods,
considering speed and quality jointly: compared to IBR methods we improve both
speed and quality; compared to recent NeRF-based methods we achieve
competitive quality at real-time speeds (130–1000$\times$ speedup). A user
study demonstrates that our method gives the most perceptually pleasing
results among all methods. The code is available at
https://github.com/Caoang327/fwd_code.
## 2 Related Work
Novel view synthesis is a long-standing problem in computer vision, allowing
for the generation of novel views given several scene images. A variety of 3D
representations (both implicit and explicit) have been used for NVS, including
depth and multi-plane images [74, 94, 72, 57, 7, 67], voxels [69, 21], meshes
[61, 23, 28, 62], point clouds [82, 40, 64] and neural scene representations
[65, 41, 19, 34, 47, 55, 48]. In this work, we use point clouds as our 3D
representations for computational and memory efficiency.
Image-based Rendering. IBR synthesizes novel views from a set of reference
images by weighted blending [15, 39, 20, 24, 57, 61, 12, 62]. They generally
estimate proxy geometry from dense captured images for synthesis. For
instance, Riegler et al. [61] use multi-view stereo [66, 87, 77, 45, 29]
to produce scene mesh surface and warps source view images to target views
based on proxy geometry. Despite promising results in some cases, they are
essentially limited by the quality of 3D reconstructions, where dense inputs
(tens to hundreds) with large overlap and reasonable baselines are necessary
for decent results. These methods estimate geometry as an intermediate task
not directly optimized for image quality. In contrast, we take sparse views as input
and learn depth jointly to optimize for synthesis quality.
Neural Scene Representations. Recent work uses implicit scene representations
for view synthesis [65, 41, 19, 34, 47, 55]. Given many views, neural radiance
fields (NeRF) show impressive results [48, 92, 46, 56, 81], but require
expensive per-scene optimization. Recent methods [78, 89, 75, 9, 31]
generalize NeRF without per-scene optimization by learning a shared prior,
with sparse inputs. However, these methods require expensive ray sampling and
therefore are very slow. In contrast, we achieve significant speedups using
explicit representations. Some concurrent work accelerates NeRF by
reformulating the computation [18], using precomputation [88, 27], or adding
view dependence to explicit 3D representations [41, 83, 2, 8, 49]; unlike
ours, these all require dense input views and per-scene optimization.
Utilizing RGB-D in NVS. The growing availability of annotated depth maps [13,
5, 10, 1, 71, 68] facilitates depth utilization in NVS [54, 40, 26], where depth serves as extra supervision or as network input. Our method utilizes explicit
depths as 3D representations, allowing using sensor depths as additional
inputs for better quality. Given the increasing popularity of depth sensors,
integrating sensor depths is a promising direction for real-world
applications.
Figure 2: System Overview. Given a sparse set of images, we construct a point
cloud $\mathcal{P}_{i}$ for each image $I_{i}$ using Feature Network $f$,
View-Dependent Feature MLP $\psi$, and Depth Network $d$. Besides images, $d$
takes MVS estimated depths or sensor depths as inputs and regresses refined
depths. Per-pixel features $F^{\prime}_{i}$ are regressed by $f$ and $\psi$
based on images and relative view changes. A differentiable point cloud
renderer $\pi$ is employed to project and render point clouds to target views.
We use a Transformer $T$ to fuse rendered results from an arbitrary number of
inputs and apply a refinement module $R$ for the final results. The model is trained with
photometric loss and content loss.
Depth has been used in neural scene representations for speedups [51, 73],
sparser inputs [16], and dynamic scenes [84]. However, these works still require
per-scene optimization. Utilizing RGB-D inputs to accelerate generalizable
NeRFs such as [89, 78] is still an open problem.
Differentiable Rendering and Refinement. We use advances in differentiable
rendering [42, 35, 11, 52, 43] to learn 3D end-to-end. Learned geometries rely
heavily on rendering and refinement [90, 86, 3, 79] to quickly synthesize
realistic results. Refinement has improved dramatically owing to generative
modeling [38, 36, 91, 95] and rendering frameworks [60, 32, 50, 30]. Instead
of aggregating information across viewpoints before rendering [44], we render
viewpoints separately and fuse using a Transformer [76, 17, 4], enabling
attention across input views.
## 3 Method
Given a sparse set of input images $\\{I_{i}\\}_{i=1}^{N}$ and corresponding
camera poses $\\{R_{i},T_{i}\\}$, our goal is to synthesize a novel view with
camera pose $\\{R_{t},T_{t}\\}$ fast and effectively. The depths
$\\{D^{sen}_{i}\\}$ of $I_{i}$ captured from sensors are optionally available,
which are generally incomplete and noisy.
The key insight of our method is that using explicit depths and forward warping
enables real-time rendering with tremendous speedups. Meanwhile, to
alleviate quality degradations caused by inaccurate depth estimations, a
differentiable renderer and well-designed fusion & refinement modules are
employed, encouraging the model to learn geometry and features optimized for
synthesis quality.
As illustrated in Figure 2, with estimated depths, input view $I_{i}$ is
converted to a 3D point cloud $\mathcal{P}_{i}$ containing geometries and
view-dependent semantics of the view. A differentiable neural point cloud
renderer $\pi$ is used to project point clouds to target viewpoints. Rather
than directly aggregating point clouds across views before rendering, we
propose a Transformer-based module $T$ that fuses rendered results at the target view.
Finally, a refinement module $R$ is employed to generate final outputs. The
whole model is trained end-to-end with photometric and perceptual loss.
### 3.1 Point Cloud Construction
We use point clouds to represent scenes due to their efficiency, compact
memory usage, and scalability to complex scenes. For input view $I_{i}$, point
cloud $\mathcal{P}_{i}$ is constructed by estimating depth $D_{i}$ and feature
vectors $F^{\prime}_{i}$ for each pixel in the input image, then projecting
the feature vectors into 3D space using known camera intrinsics. The depth
$D_{i}$ is estimated by a _depth network_ $d$; features $F^{\prime}_{i}$ are
computed by a _spatial feature encoder_ $f$ and _view-dependent MLP_ $\psi$.
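To make the construction concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of unprojecting per-pixel features with a refined depth map into a camera-space point cloud; the tensor shapes and helper name are assumptions.

```python
import torch

def build_point_cloud(feats, depth, K):
    """Unproject per-pixel features into a camera-space point cloud.

    feats: (C, H, W) view-dependent features F'_i
    depth: (H, W) refined depth D_i
    K:     (3, 3) camera intrinsics
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(3, -1).float()
    rays = torch.linalg.inv(K) @ pix            # back-projected pixel rays
    pts = (rays * depth.reshape(1, -1)).t()     # (H*W, 3) 3D points
    return pts, feats.reshape(feats.shape[0], -1).t()  # points and (H*W, C) features
```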
Spatial Feature Encoder $f$. Scene semantics of input view $I_{i}$ are mapped
to per-pixel feature vectors $F_{i}$ by spatial feature encoder $f$. Each
feature vector in $F_{i}$ is 61-dimensional and is concatenated with the RGB
channels, giving 64 dimensions in total. $f$ is built on the BigGAN architecture
[3].
Depth Network $d$. Estimating depth from a single image suffers from
scale/shift ambiguity, loses valuable multi-view cues, and leads to
inconsistent estimations across views. Applying multi-view stereo (MVS)
algorithms [66, 87, 77, 85] solely on sparse inputs is challenging because of
the limited overlap and huge baselines between input views, leading to inaccurate and low-confidence
estimations. Therefore, we employ a hybrid design cascading a U-Net after the
MVS module. The U-Net takes image $I_{i}$ and estimated depths from the MVS
module as inputs, refining depths with multi-view stereo cues and image cues.
PatchmatchNet [77] is utilized as the MVS module, which is fast and
lightweight.
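A hedged sketch of this cascade is below, including the sensor-depth path described next; `mvs` and `unet` are placeholder modules standing in for PatchmatchNet and the refinement U-Net, and the exact input formats are assumptions.

```python
import torch
import torch.nn as nn

class HybridDepth(nn.Module):
    """MVS initialization followed by U-Net refinement (illustrative only)."""
    def __init__(self, mvs, unet):
        super().__init__()
        self.mvs, self.unet = mvs, unet

    def forward(self, image, src_images, cams, sensor_depth=None):
        # FWD-D setting: a (noisy, incomplete) sensor depth replaces the MVS estimate.
        init = sensor_depth if sensor_depth is not None \
            else self.mvs(image, src_images, cams)
        # Refine using both multi-view cues (init) and image cues.
        return self.unet(torch.cat([image, init], dim=1))
```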
Depth Estimation with sensor depths. As stated, the U-Net receives an initial
depth estimate from the MVS module and outputs a refined depth used to build
the point cloud. If a sensor depth $D^{sen}_{i}$ is available, it is directly
input to the U-Net as the initial depth estimate. In this setting, the U-Net
serves as a completion and refinement module taking $D^{sen}_{i}$ and $I_{i}$
as inputs, since $D^{sen}_{i}$ is usually noisy and incomplete. During
training, a loss $\mathcal{L}_{s}$ is employed to encourage the U-Net output
to match the sensor depth:
$\displaystyle\mathcal{L}_{s}=\|M_{i}\odot D_{i}-M_{i}\odot D^{sen}_{i}\|$ (1)
where $M_{i}$ is a binary mask indicating valid sensor depths.
View-Dependent Feature MLP $\psi$. The appearance of a point can vary
across views because of lighting and view direction, causing inconsistency
between multiple views. Therefore, we propose to inject view-direction changes
into the scene semantics to model these view-dependent effects. An MLP $\psi$ is
designed to compute view-dependent features $F^{\prime}_{i}$ by taking $F_{i}$
and relative view changes $\Delta v$ from input to target view as inputs. For
each point in the cloud, $\Delta v$ is calculated based on normalized view
directions $v_{i}$ and $v_{t}$, from the point to camera centers of input view
$i$ and target view $t$. The relative view direction change is calculated as:
$\displaystyle\Delta v=[(v_{i}-v_{t})/\|v_{i}-v_{t}\|,\ v_{i}\cdot
v_{t}],\qquad v_{i},v_{t}\in\mathbb{R}^{3},$ (2)
and the view-dependent feature $F^{\prime}_{i}$ is
$\displaystyle F^{\prime}_{i}=\psi(F_{i},\delta(\Delta v))$ (3)
where $\delta$ is a two-layer MLP mapping $\Delta v$ to a 32-dimensional vector
and $\psi$ is also a two-layer MLP.
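A minimal sketch of Eqs. (2)-(3) follows; the 4-d $\Delta v$ and the 32-dimensional $\delta(\Delta v)$ match the text, while other layer shapes are our assumptions.

```python
import torch
import torch.nn as nn

class ViewDependentMLP(nn.Module):
    def __init__(self, feat_dim=64, dv_dim=32):
        super().__init__()
        # delta: two-layer MLP mapping the 4-d view change to a 32-d vector
        self.delta = nn.Sequential(nn.Linear(4, dv_dim), nn.ReLU(),
                                   nn.Linear(dv_dim, dv_dim))
        # psi: two-layer MLP producing the view-dependent feature F'_i
        self.psi = nn.Sequential(nn.Linear(feat_dim + dv_dim, feat_dim),
                                 nn.ReLU(), nn.Linear(feat_dim, feat_dim))

    def forward(self, F_i, v_i, v_t):
        # v_i, v_t: (P, 3) unit directions from each point to the camera centers
        diff = v_i - v_t
        dv = torch.cat([diff / diff.norm(dim=-1, keepdim=True).clamp_min(1e-8),
                        (v_i * v_t).sum(-1, keepdim=True)], dim=-1)  # Eq. (2)
        return self.psi(torch.cat([F_i, self.delta(dv)], dim=-1))    # Eq. (3)
```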
### 3.2 Point Cloud Renderer
To observe the constructed point cloud $\mathcal{P}_{i}$ at target views, we
employ a neural point cloud renderer $\pi$. $\mathcal{P}_{i}$ is first
transformed to target view coordinates based on camera poses and then rendered
by $\pi$. The rendered feature maps $\tilde{F}_{i}$ share the same dimension
as feature $F^{\prime}_{i}$ at each pixel. With explicit geometry
transformation, our rendered results are geometrically consistent and correct
across views.
We use the _differentiable_ renderer design of [82], which splats 3D points
onto the image plane and obtains pixel values by blending point features. The
blending weights are computed based on _z_-buffer depths and the distances
between pixels and point centers. It is implemented using PyTorch3D [60].
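As a rough sketch of this setup with PyTorch3D's point cloud renderer (our illustration; the camera parameters and rasterization settings are assumptions, not the paper's exact configuration):

```python
import torch
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (PerspectiveCameras, PointsRasterizationSettings,
                                PointsRasterizer, PointsRenderer, AlphaCompositor)

# pts: (P, 3) points transformed to target-view coordinates; feats: (P, C) features
pts, feats = torch.rand(1000, 3), torch.rand(1000, 64)
cameras = PerspectiveCameras()  # target-view pose would be supplied here
raster_settings = PointsRasterizationSettings(
    image_size=256, radius=0.01, points_per_pixel=8)
renderer = PointsRenderer(
    rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
    compositor=AlphaCompositor())  # blends the splatted points at each pixel
rendered = renderer(Pointclouds(points=[pts], features=[feats]))  # (1, H, W, C)
```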
This fully differentiable renderer allows our model to be trained end-to-end,
where photometric and perceptual loss gradients can be propagated to points’
position and features. In this way, the model learns to estimate depths and
features optimized for synthesis quality, leading to superior results. We
demonstrate its effectiveness in experiments.
### 3.3 Fusion and Refinement
Unlike SynSin [82], which uses a single image for NVS, our method must fuse
multi-view inputs. A naïve fusion transforms each point cloud to the target
view and aggregates them into a single large cloud for rendering. Despite its
efficiency, this is vulnerable to inaccurate depths, since points with wrong
depths may occlude points from other views, leading to degraded results.
Methods like PointNet [58] could be applied to the aggregated point cloud for
refinement, but they are not efficient for large numbers of points.
Instead, we render each point cloud individually at target viewpoints and fuse
rendered results by a fusion Transformer $T$. A refinement module $R$ is used
to inpaint missing regions, decode feature maps and improve synthesis quality.
Figure 3: Fusion Transformer. We use a lightweight Transformer $T$ to fuse the
features from $N$ input views at each pixel. We use a learnable token to query
the fusion results.
Fusion Transformer $T$. Given an arbitrary number of rendered feature maps
$\\{\tilde{F}_{i}\\}$, fusion should be effective, fast, and permutation
invariant. Inspired by the success of Transformers, we propose a pixel-wise
Transformer $T$ for fusion, detailed in Figure 3. At each pixel, $T$ takes the
rendered feature vectors as inputs and queries the fused result using a
learnable “token”. Applied to features, $T$ exploits semantics for fusion.
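A minimal sketch of such a pixel-wise fusion module, assuming a single cross-attention layer with a learnable query token (the paper's exact architecture may differ):

```python
import torch
import torch.nn as nn

class PixelFusionTransformer(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.token = nn.Parameter(torch.randn(1, 1, dim))  # learnable query token
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):
        # feats: (B*H*W, N, dim) rendered feature vectors from N views per pixel;
        # attention over the view axis is permutation invariant.
        q = self.token.expand(feats.shape[0], -1, -1)
        fused, _ = self.attn(q, feats, feats)
        return fused.squeeze(1)  # (B*H*W, dim) fused feature per pixel
```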
Rendered results may lose geometry cues useful for fusion when projected from
3D to 2D. For instance, depths may reveal occlusion relationships across views,
and relative view changes from input to target views relate to each input's
importance for fusion. We therefore also explored using geometry features as
positional encodings, but found them unhelpful.
Refinement Module $R$. Built with 8 ResNet [22] blocks, $R$ decodes fused
feature maps $\tilde{F}$ to RGB images $\tilde{I}$ at target views. It
inpaints regions invisible for inputs in a semantically and geometrically
meaningful manner. Also, it corrects local errors caused by inaccurate depths
and improves perceptual quality based on the semantics contained in the feature
maps, leading to coherent and high-quality synthesis.
### 3.4 Training and Implementation Details
Our model is trained end-to-end with photometric $\mathcal{L}_{l_{2}}$ and
perceptual $\mathcal{L}_{c}$ losses between generated and ground-truth target
images. The whole loss function is:
$\displaystyle\mathcal{L}=\lambda_{l_{2}}\mathcal{L}_{l_{2}}+\lambda_{c}\mathcal{L}_{c}$
(4)
where $\lambda_{l_{2}}=5.0$ and $\lambda_{c}=1.0$. The model is trained
end-to-end on four 2080Ti GPUs for 3 days, using Adam [37] with learning rate
$10^{-4}$ and $\beta_{1}{=}0.9,\beta_{2}{=}0.999$. When sensor depths are
available as inputs, $\mathcal{L}_{s}$ is used with $\lambda_{s}=5.0$.
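Put together, the training objective can be sketched as follows (our paraphrase of Eqs. (1) and (4); `perceptual` stands in for the content loss, and the L1 form of the depth term is an assumption since the text does not specify the norm):

```python
import torch.nn.functional as F

def total_loss(pred, gt, perceptual, depth=None, sensor_depth=None, mask=None):
    # Eq. (4): lambda_l2 = 5.0, lambda_c = 1.0
    loss = 5.0 * F.mse_loss(pred, gt) + 1.0 * perceptual(pred, gt)
    if sensor_depth is not None:
        # Eq. (1) with lambda_s = 5.0, restricted to valid sensor-depth pixels
        loss = loss + 5.0 * (mask * (depth - sensor_depth)).abs().mean()
    return loss
```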
Table 1: Model variant settings. We predefine three model variants with
different settings. FWD utilizes a pre-trained MVS module, through which it
gains access to depths during training.
Name | Test Depth | Train Depth | Depth Network | MVS Module | Losses
---|---|---|---|---|---
FWD-U | | | MVS + U-Net | Random init. | $\mathcal{L}_{l_{2}}+\mathcal{L}_{c}$
FWD | | ✓ | MVS + U-Net | Pre-trained | $\mathcal{L}_{l_{2}}+\mathcal{L}_{c}$
FWD-D | ✓ | ✓ | RGB-D + U-Net | - | $\mathcal{L}_{l_{2}}+\mathcal{L}_{c}+\mathcal{L}_{s}$
## 4 Experiments
The goal of our paper is _real-time_ and _generalizable_ novel view synthesis
with _sparse_ inputs, which can optionally use sensor depths. To this end, our
experiments aim to identify the speed and quality at which our method can
synthesize novel images and explore the advantage of explicit depths. We
evaluate our methods on ShapeNet [6] and DTU [33] datasets, comparing results
with the SOTA methods and alternative approaches. Experiments take place with
held-out test scenes and no per-scene optimization. We conduct ablations to
validate the effectiveness of designs.
Metrics. We conduct A/B tests to measure visual quality, in which workers
select the image most similar to the ground truth from competing methods.
Automatic image quality metrics including PSNR, SSIM [80] and LPIPS [93] are
also reported, and we find LPIPS best reflects the image quality as perceived
by humans. Frames per second (FPS) during rendering is measured on the same
platform (single 2080Ti GPU with 4 CPU cores). All evaluations are conducted
using the same protocol (same inputs and outputs).
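For reference, the automatic metrics can be computed as in the hedged sketch below (scikit-image for PSNR/SSIM and the `lpips` package for LPIPS; the paper's exact evaluation code and LPIPS backbone are not specified, so these are assumptions):

```python
import lpips
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='vgg')  # backbone choice is an assumption

def evaluate(pred, gt):
    """pred, gt: (H, W, 3) float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=1.0)
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
    lp = lpips_fn(to_t(pred), to_t(gt)).item()  # LPIPS expects [-1, 1] inputs
    return psnr, ssim, lp
```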
Model Variants. Three models are evaluated with varying access to depths
during training and testing, as defined in Table 1. FWD utilizes a pretrained
PatchmatchNet [77] as the MVS module for depth estimations, which is also
updated during end-to-end training with photometric and perceptual loss. FWD-U
learns depth estimations in an Unsupervised manner, sharing the same model and
settings as FWD while PatchmatchNet is randomly initialized without any
pretraining. FWD-D takes sensor depths as additional inputs during both
training and inference. It doesn’t use any MVS module since sensor depths
provide abundant geometry cues. To pretrain PatchmatchNet, we follow typical
MVS settings and use the same data split as for NVS.
[Figure 4 image grid: columns Input | PixelNeRF | FWD-U | GT.]
Figure 4: Qualitative results of category-agnostic NVS on ShapeNet. We test
the capacity of our model by training it across 13 ShapeNet categories with
single-view input, and compare with PixelNeRF [89]. No ground-truth depths are
available during training or inference. Our results have better visual quality
and details.
Table 2: Category-agnostic NVS on ShapeNet. Quantitative results for
category-agnostic view synthesis are presented.
| 1-view | 2-view
---|---|---
model | PSNR | SSIM | LPIPS | FPS | PSNR | SSIM | LPIPS | FPS
DVR [53] | 22.70 | 0.860 | 0.130 | 1.5 | - | - | - | -
SRN [70] | 23.28 | 0.849 | 0.139 | 24 | - | - | - | -
PixelNeRF | 26.80 | 0.910 | 0.108 | 1.2 | 28.88 | 0.936 | 0.076 | 1.1
FWD-U | 26.66 | 0.911 | 0.055 | 364 | 28.43 | 0.931 | 0.043 | 336
[Figure 5 image grid: 3 input views of a held-out scene | novel views from FWD-D, FWD, and FWD-U.]
Figure 5: View synthesis results from FWD. We show view synthesis results
with 3 input views on the DTU dataset from FWD-D (row 1), FWD (row 2), and
FWD-U (row 3). Our methods synthesize high-quality and geometrically correct
novel views in real time.
### 4.1 ShapeNet Benchmarks
We first evaluate our model on the category-agnostic synthesis task on
ShapeNet [6]. Following the setting of [89], we train and evaluate a single
model on 13 ShapeNet categories. Each instance contains 24 fixed views of 64
$\times$ 64 resolution. During training, one random view is selected as input
and the rest serve as target views. For testing, we synthesize all other views
from a fixed informative view. The model is finetuned with two random input
views for the 2-view experiments. We find that the U-Net is sufficient for
good results on this dataset without the MVS module.
Qualitative comparisons to PixelNeRF are shown in Figure 4, where FWD-U gives
noticeably superior results. Our synthesized results are more realistic and
match the target views more closely, while PixelNeRF's results tend to be
blurry. We observe the same trend on the DTU benchmark and evaluate visual
quality quantitatively there.
We show quantitative results in Table 2, adding SRN [70] and DVR [53] as other
baselines. Our method outperforms others significantly for LPIPS, indicating a
much better perceptual quality, as corroborated by qualitative results.
PixelNeRF has a slightly better PSNR while its results are blurry. Most
importantly, FWD-U runs at a speed of over 300 FPS, which is 300$\times$
faster than PixelNeRF.
### 4.2 DTU MVS Benchmarks
We also evaluate our model on the DTU MVS dataset [33], a real-scene dataset
consisting of 103 scenes. Each scene contains one or multiple objects placed
on a table, while images and incomplete depths are collected by a camera and a
structured-light scanner mounted on an industrial robot arm. Corresponding
camera poses are provided.
As stated in [89], this dataset is challenging since it consists of complex
real scenes without apparent semantic similarities across scenes. Also, images
are taken under varying lighting conditions with distinct color
inconsistencies between views. Moreover, with fewer than 100 scenes available
for training, it is prone to overfitting.
We follow the same training and evaluation pipelines as PixelNeRF [89] for all
methods to give a fair comparison. The data consists of 88 training and 15
test scenes, between which there are no shared or highly similar scenes.
Images are down-sampled to a resolution of 300 $\times$ 400. For training,
three input views are randomly sampled, with the rest as target views. For
inference, we choose three fixed informative input views and synthesize other
views of the scene.
[Figure 6 image grid: columns Input Image | PixelNeRF | IBRNet | FWD-U | FWD | Blending+R | FWD-D | Target View.]
Figure 6: Qualitative Comparison. We compare synthesis results from different
methods with 3 input views (one of them shown in the figure). Our methods give
geometrically consistent and visually appealing results, while other methods
suffer from artifacts at some views. Unlike the other methods, FWD-D and
Blending+R have access to sensor depths as inputs during inference.
Figure 7: User study on DTU. We conduct a user study by asking subjects to
select the results most similar to the ground truth. The numbers indicate the
percentage of preference. Methods are grouped based on whether depths are used
during testing. We also report FWD vs. FWD-D, showing the advantages of sensor
depths.
Baselines. We evaluate a set of representatives of generalizable NeRF and IBR
methods in two different scenarios: with RGB or RGB-D available as inputs
during inference.
PixelNeRF [89], IBRNet [78] and MVSNeRF [9] are the SOTA generalizable NeRF
variants, taking RGB as inputs. We use the official PixelNeRF model trained on
DTU MVS and carefully retrain IBRNet and MVSNeRF with the same 3-input-view
settings. PixelNeRF-DS is also included as reported in [16], which is
PixelNeRF supervised with depths. Please note that our settings are very
different from the evaluations used in the original papers of IBRNet and MVSNeRF.
A series of IBR methods are also evaluated. Since COLMAP [66] fails to give
reasonable outputs with sparse input images, methods relying on COLMAP, such
as FVS [61] and DeepBlending [25], cannot estimate scene geometry in this
setting. For these methods, we use depths captured by sensors as the estimated
depths, which should give an upper bound on their performance. To better cope
with missing regions, we add our refinement module to DeepBlending [25] and
retrain it on the DTU dataset; we term this variant Blending-R.
For fairness, we evaluate all methods using the same protocol, which is
distinct from some of their original settings. Although we try our best to
adapt these methods, our reported results may still not perfectly reflect
their true capacity.
Qualitative Results. Synthesis results are shown in Figure 5, where high-
quality and geometrically correct novel views are synthesized in real-time
(over 35 FPS) under significant viewpoint changes. Our refinement module
faithfully inpaints invisible regions; also, synthesized images have good
shadows, light reflections, and varying appearances across views, showing the
efficacy of view-dependent MLP. With sensor depths, results can be further
improved.
We show comparisons to baselines in Figure 6. Our methods provide noticeably
better results than the baselines across different depth settings. For models
without depths at test time, IBRNet and PixelNeRF give blurry results in areas
of high detail, such as the buildings in the top row, while our FWD-U and FWD
give more realistic and sharper images. With sensor depths at test time, the
baseline Blending-R produces more cogent outputs but still struggles to
distinguish objects from the background, such as in the middle row, while
FWD-D gives faithful synthesis with clear boundaries.
Quantitative Results. We evaluate synthesis quality quantitatively through a
user study following a standard A/B paradigm. Workers choose the image closest
to the ground truth between competing methods and are monitored using a
qualifier and sentinel examples. All views in the test set (690 in total) are
evaluated, and each view is judged by three workers.
In Figure 7, the user study results support the qualitative observations.
Among all baselines, with and without test depths, users choose our method as
more closely matching the ground-truth images most of the time. FWD-U is
selected over PixelNeRF in 65.6% of examples and over IBRNet in 77.8%. Also,
over 90% of workers prefer FWD-D to FWD, showing the advantage of using sensor
depths.
We show automated view synthesis metrics and speeds in Table 3. Across all
depth availability settings, our method is competitive with the SOTA baselines
while being significantly faster. FWD-D runs in real time and gives
substantially better image quality than the others. FWD has metrics
competitive with PixelNeRF-DS while being 1000$\times$ faster. Notably, NeRF
variants such as PixelNeRF, IBRNet,
MVSNeRF, and PixelNeRF-DS are at least two orders of magnitude slower.
The exception to this highly competitive performance is the weaker PSNR and
SSIM of our unsupervised FWD-U against PixelNeRF and IBRNet. However, FWD-U
has noticeably better perceptual quality with the best LPIPS, and human raters
prefer it to other methods in A/B tests. The visual quality in Figure 6 also
illustrates the disparity between comparisons using PSNR and LPIPS. Meanwhile,
FWD-U is over $1000\times$ faster than PixelNeRF and over $100\times$ faster
than IBRNet. Depth estimation, rendering, and the CNN may introduce tiny pixel
shifts, which harm the PSNR of our method. NeRF-like methods are trained to
optimize an L2 loss for each pixel independently, leading to blurry results.
Among all methods without test depths, FWD has the best results. Although it
uses a pretrained MVS module, we believe this comparison is still reasonable
since pretrained depth modules are easy to obtain. Also, training depths can
be easily computed from the training images since they are dense.
Baseline comparisons also show that IBR methods are fast but do not give
images competitive with our method. Our method outperforms them in both
perceptual quality and standard metrics, showing the efficacy of the proposed
designs. Note that Blending+R does not support a variable number of inputs and
that our refinement module improves its results significantly. We also compare
FWD-U with SynSin [82], which receives only a single input image, showing the
benefits of using multi-view inputs in NVS.
### 4.3 Ablations and Analysis
We evaluate the effectiveness of our designs and study depth in more detail
through ablation experiments.
Effects of Fusion Transformer. We design a model without the Transformer,
which concatenates point clouds across views into a bigger one for subsequent
rendering and refinement. Its results in the FWD-U setting are shown in Figure
8. The ablated version is vulnerable to the inaccurate depths learned in an
unsupervised manner and synthesizes “ghost objects”, since points with bad
depths occlude points from other views.
We repeat the same ablation in the FWD-D setting, shown in Table 4, where
sensor depths give much better depth estimates. The ablated model has notably
worse results on all metrics, indicating that the proposed method is not only
robust to inaccurate depth estimates but also fuses semantic features
effectively.
Table 3: Quantitative comparison on DTU real images. We compare our method
with representatives of generalizable NeRF variants and IBR methods for image
quality and rendering speed. Our method achieves a significantly better
speed-quality tradeoff, indicating the effectiveness and efficiency of our design.
$\dagger$ Unlike other methods, SynSin receives only one image as input.
Test | Train | Model | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | FPS$\uparrow$
---|---|---|---|---|---|---
RGB | RGB | PixelNeRF [89] | 19.24 | 0.687 | 0.399 | 0.03
IBRNet [78] | 18.86 | 0.695 | 0.387 | 0.27
MVSNeRF [9] | 17.13 | 0.611 | 0.444 | 0.34
SynSin [82] $\dagger$ | 15.66 | 0.564 | 0.388 | 51.8
FWD-U | 17.42 | 0.598 | 0.341 | 35.4
RGB | RGB-D | PixelNeRF-DS [16] | 19.87 | 0.710 | 0.370 | 0.03
FWD | 20.15 | 0.721 | 0.259 | 35.4
RGB-D | RGB-D | Blending-R [25] | 16.98 | 0.661 | 0.351 | 41.8
FVS [61] | 15.92 | 0.733 | 0.267 | 9.70
FWD-D | 21.98 | 0.791 | 0.208 | 43.2
Effects of View-Dependent MLP. For ablation, we remove the view-dependent
feature MLP and report the results in Table 4. Removing this module reduces
the model's ability to produce view-dependent appearances, leading to worse
performance on all metrics. We show more results in the Supplement.
Depth Analysis and Ablations. We visualize depths in Figure 9. Estimating
depths from sparse inputs is challenging and gives less accurate results
because of the huge baselines between inputs. We show depths estimated by
PatchmatchNet, filtered based on the confidence scores. Refinement is
therefore essential in our design to propagate multi-view geometry cues to the
whole image; our end-to-end model learns this via the synthesis losses.
We ablate the depth network in Table 5 and report the depth error
$\delta_{3cm}$, the percentage of estimated depths within 3 cm of the sensor
depths. The MVS module is critical (row 2) for geometrically consistent
depths. The U-Net further refines depths and improves synthesis quality (row
3). PatchmatchNet has its own shallow refinement layer, which already gives
decent refinements. Learning unsupervised MVS and NVS jointly from scratch is
challenging (row 4); training the depth network without supervision [14] first
may give a good initialization for further joint training.
[Figure 8 image grid: columns Input View | FWD-U | w/o Transformer | Target View.]
Figure 8: Ablation on Fusion Transformer. We show results for FWD-U with and
without Transformer-based fusion.
Table 4: Ablation Studies. We show the effectiveness of Transformer fusion and
the view-dependent MLP by an ablation study on FWD-D. These designs improve
synthesis quality noticeably while maintaining real-time rendering speed.
Model | PSNR | SSIM | LPIPS | FPS
---|---|---|---|---
Full model | 21.98 | 0.791 | 0.208 | 43.2
w/o Transformer | 20.95 | 0.748 | 0.241 | 48.4
w/o View dependence | 21.16 | 0.769 | 0.212 | 44.0
## 5 Conclusion
We propose a real-time and generalizable method for NVS with sparse inputs by
using explicit depths. This method inherits the core idea of SynSin while
extending it to the more challenging multi-view input setting. Our experiments
show that estimating depths can give impressive results at real-time speed,
outperforming existing methods. Moreover, the proposed method can utilize
sensor depths seamlessly and improve synthesis quality significantly. With the
increasing availability of mobile depth sensors, we believe our method has
exciting real-world 3D applications. We acknowledge the potential for the
technology to be used for negative purposes by nefarious actors, such as
synthesizing fake images for deception.
There are also challenges and limitations yet to be explored. 1) Although
using explicit depths gives tremendous speedups, it potentially introduces a
reliance on depth in our model. We designed a hybrid depth regressor to
improve depth quality by combining MVS and single-image depth estimation, and
employed effective fusion and refinement modules to reduce the degradation
caused by inaccurate depths. Despite these designs, the depth estimator may
still work poorly in some challenging settings (such as very wide camera
baselines), which would influence the synthesis results. Exploring other depth
estimation methods like MiDaS [59] could be an interesting direction for
future work.
2) The potential capacity of our method is not fully explored. Like SynSin
[82], our model (the depth/feature networks and refinement module especially)
benefits from large-scale training data, while the DTU MVS dataset is small
and easy to overfit during training. Evaluating our method on large-scale
datasets like Hypersim [63], which is very challenging for NeRF-like methods,
would potentially reveal more of our model's advantages.
3) Although our method gives more visually appealing results, our PSNR and
SSIM are lower than those of NeRF-like methods. We hypothesize that our
refinement module is not perfectly trained to decode RGB colors from feature
vectors because of the limited training data. Tiny misalignments introduced
during rendering may also harm the PSNR, although they are not perceptually
visible.
[Figure 9 image grid: columns input image | sensor depths | FWD-D | filtered MVS | FWD | FWD-U.]
Figure 9: Depth visualizations. We visualize the normalized inverse depths
involved in our method. Sensor depths are incomplete because of hardware
limitations, and MVS-estimated depths are inaccurate, with many low-confidence
predictions. This demonstrates the necessity of depth completion and
refinement.
Table 5: Depth network ablation and error. We ablate the depth network and
compute the error $\delta_{3cm}$, the percentage of predicted depths within 3
cm of the sensor depths.
Test | Train | Model | PSNR | SSIM | LPIPS | $\delta_{3cm}$
---|---|---|---|---|---|---
RGB | RGB-D | FWD | 20.15 | 0.721 | 0.259 | 79.07
RGB | RGB-D | -w/o MVS | 16.69 | 0.594 | 0.357 | 61.62
RGB | RGB-D | -w/o U-Net | 19.10 | 0.702 | 0.285 | 73.62
RGB | RGB | FWD-U | 17.42 | 0.598 | 0.341 | 54.27
Acknowledgement. Toyota Research Institute provided funds to support this
work. We thank Dandan Shan, Hao Ouyang, Jiaxin Xie, Linyi Jin, Shengyi Qian
for helpful discussions.
## References
* [1] Henrik Aanæs, Rasmus Ramsbøl Jensen, George Vogiatzis, Engin Tola, and Anders Bjorholm Dahl. Large-scale data for multiple-view stereopsis. IJCV, 120(2):153–168, 2016.
* [2] Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks, 2021.
* [3] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019.
* [4] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213–229. Springer, 2020.
* [5] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. International Conference on 3D Vision (3DV), 2017.
* [6] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. Technical Report arXiv:1512.03012, 2015.
* [7] Gaurav Chaurasia, Sylvain Duchene, Olga Sorkine-Hornung, and George Drettakis. Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics (TOG), 32(3):1–12, 2013.
* [8] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. TensoRF: Tensorial radiance fields, 2022.
* [9] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14124–14133, 2021.
* [10] Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. Single-image depth perception in the wild. In NeurIPS, volume 29, 2016.
* [11] Wenzheng Chen, Jun Gao, Huan Ling, Edward J Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to predict 3d objects with an interpolation-based differentiable renderer. In NeurIPS, 2019.
* [12] Inchang Choi, Orazio Gallo, Alejandro Troccoli, Min H Kim, and Jan Kautz. Extreme view synthesis. In ICCV, pages 7781–7790, 2019.
* [13] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017\.
* [14] Yuchao Dai, Zhidong Zhu, Zhibo Rao, and Bo Li. Mvs2: Deep unsupervised multi-view stereo with multi-view symmetry. In 2019 International Conference on 3D Vision (3DV), pages 1–8, 2019.
* [15] Paul E Debevec, Camillo J Taylor, and Jitendra Malik. Modeling and rendering architecture from photographs: A hybrid geometry-and image-based approach. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 11–20, 1996.
* [16] Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised nerf: Fewer views and faster training for free. arXiv preprint arXiv:2107.02791, 2021.
* [17] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [18] Stephan J Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, and Julien Valentin. Fastnerf: High-fidelity neural rendering at 200fps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14346–14355, 2021.
* [19] Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, and Thomas Funkhouser. Local deep implicit functions for 3d shape. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4857–4866, 2020.
* [20] Steven J Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F Cohen. The lumigraph. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 43–54, 1996.
* [21] Pengsheng Guo, Miguel Angel Bautista, Alex Colburn, Liang Yang, Daniel Ulbricht, Joshua M Susskind, and Qi Shan. Fast and Explicit Neural View Synthesis. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3791–3800, 2022.
* [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
* [23] Peter Hedman and Johannes Kopf. Instant 3d photography. TOG, 37(4):1–12, 2018.
* [24] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (TOG), 37(6):1–15, 2018.
* [25] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (TOG), 37(6):257:1–257:15, 2018.
* [26] Peter Hedman, Tobias Ritschel, George Drettakis, and Gabriel Brostow. Scalable inside-out image-based rendering. ToG, 35(6):1–11, 2016.
* [27] Peter Hedman, Pratul P Srinivasan, Ben Mildenhall, Jonathan T Barron, and Paul Debevec. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5875–5884, 2021.
* [28] Ronghang Hu, Nikhila Ravi, Alexander C Berg, and Deepak Pathak. Worldsheet: Wrapping the world in a 3d sheet for view synthesis from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12528–12537, 2021.
* [29] Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, and Jia-Bin Huang. Deepmvs: Learning multi-view stereopsis. Conference on Computer Vision and Pattern Recognition (CVPR), 2018\.
* [30] Rui Huang, Wanyue Zhang, Abhijit Kundu, Caroline Pantofaru, David A Ross, Thomas Funkhouser, and Alireza Fathi. An lstm approach to temporal 3d object detection in lidar point clouds. arXiv preprint arXiv:2007.12392, 2020.
* [31] Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5885–5894, 2021.
* [32] Krishna Murthy Jatavallabhula, Edward Smith, Jean-Francois Lafleche, Clement Fuji Tsang, Artem Rozantsev, Wenzheng Chen, Tommy Xiang, Rev Lebaredian, and Sanja Fidler. Kaolin: A pytorch library for accelerating 3d deep learning research. arXiv preprint arXiv:1911.05063, 2019.
* [33] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engin Tola, and Henrik Aanæs. Large scale multi-view stereopsis evaluation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 406–413. IEEE, 2014.
* [34] Chiyu Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Nießner, Thomas Funkhouser, et al. Local implicit grid representations for 3d scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6001–6010, 2020.
* [35] Yue Jiang, Dantong Ji, Zhizhong Han, and Matthias Zwicker. Sdfdiff: Differentiable rendering of signed distance fields for 3d shape optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1251–1261, 2020.
* [36] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In CVPR, pages 8110–8119, 2020.
* [37] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
* [38] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, pages 105–114, 2017.
* [39] Marc Levoy and Pat Hanrahan. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 31–42, 1996.
* [40] Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, and Angjoo Kanazawa. Infinite nature: Perpetual view generation of natural scenes from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14458–14467, 2021.
* [41] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. Advances in Neural Information Processing Systems, 33:15651–15663, 2020.
* [42] Shichen Liu, Weikai Chen, Tianye Li, and Hao Li. Soft rasterizer: Differentiable rendering for unsupervised single-view mesh reconstruction. ICCV, 2019.
* [43] Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, and Zhaopeng Cui. Dist: Rendering deep implicit signed distance function with differentiable sphere tracing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2019–2028, 2020.
* [44] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. arXiv preprint arXiv:1906.07751, 2019.
* [45] Xuan Luo, Jia-Bin Huang, Richard Szeliski, Kevin Matzen, and Johannes Kopf. Consistent video depth estimation. ACM Transactions on Graphics (TOG), 39(4):71–1, 2020.
* [46] Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In CVPR, 2021.
* [47] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4460–4470, 2019.
* [48] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
* [49] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. arXiv:2201.05989, Jan. 2022.
* [50] Mahyar Najibi, Guangda Lai, Abhijit Kundu, Zhichao Lu, Vivek Rathod, Thomas Funkhouser, Caroline Pantofaru, David Ross, Larry S Davis, and Alireza Fathi. Dops: learning to detect 3d objects and predict their 3d shapes. In CVPR, pages 11913–11922, 2020.
* [51] Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H Mueller, Chakravarty R Alla Chaitanya, Anton Kaplanyan, and Markus Steinberger. DONeRF: Towards real-time rendering of compact neural radiance fields using depth oracle networks. In Computer Graphics Forum, volume 40, pages 45–59. Wiley Online Library, 2021.
* [52] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3504–3515, 2020.
* [53] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [54] David Novotny, Ben Graham, and Jeremy Reizenstein. Perspectivenet: A scene-consistent image generator for new view synthesis in real indoor environments. Advances in Neural Information Processing Systems, 32:7601–7612, 2019.
* [55] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 165–174, 2019.
* [56] Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5865–5874, 2021.
* [57] Eric Penner and Li Zhang. Soft 3d reconstruction for view synthesis. ToG, 36(6):1–11, 2017.
* [58] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652–660, 2017.
* [59] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020.
* [60] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv preprint arXiv:2007.08501, 2020.
* [61] Gernot Riegler and Vladlen Koltun. Free view synthesis. In ECCV, pages 623–640. Springer, 2020.
* [62] Gernot Riegler and Vladlen Koltun. Stable view synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
* [63] Mike Roberts, Jason Ramapuram, Anurag Ranjan, Atulit Kumar, Miguel Angel Bautista, Nathan Paczan, Russ Webb, and Joshua M Susskind. Hypersim: A photorealistic synthetic dataset for holistic indoor scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10912–10922, 2021.
* [64] Chris Rockwell, David F. Fouhey, and Justin Johnson. Pixelsynth: Generating a 3d-consistent experience from a single image. In ICCV, 2021.
* [65] Robin Rombach, Patrick Esser, and Björn Ommer. Geometry-free view synthesis: Transformers and no 3d priors. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14356–14366, 2021.
* [66] Johannes Lutz Schönberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision (ECCV), 2016.
* [67] Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3d photography using context-aware layered depth inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8028–8038, 2020.
* [68] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, pages 746–760, 2012.
* [69] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3d feature embeddings. In CVPR, pages 2437–2446, 2019.
* [70] Vincent Sitzmann, Michael Zollhoefer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
* [71] Shuran Song, Samuel P Lichtenberg, and Jianxiong Xiao. Sun rgb-d: A rgb-d scene understanding benchmark suite. In CVPR, pages 567–576, 2015.
* [72] Pratul P Srinivasan, Richard Tucker, Jonathan T Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. Pushing the boundaries of view extrapolation with multiplane images. In CVPR, pages 175–184, 2019.
* [73] Karl Stelzner, Kristian Kersting, and Adam R Kosiorek. Decomposing 3d scenes into objects via unsupervised volume segmentation. arXiv preprint arXiv:2104.01148, 2021.
* [74] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Multi-view 3d models from single images with a convolutional network. In European Conference on Computer Vision, pages 322–337. Springer, 2016.
* [75] Alex Trevithick and Bo Yang. Grf: Learning a general radiance field for 3d representation and rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15182–15192, 2021.
* [76] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
* [77] Fangjinhua Wang, Silvano Galliani, Christoph Vogel, Pablo Speciale, and Marc Pollefeys. Patchmatchnet: Learned multi-view patchmatch stereo. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14194–14203, 2021.
* [78] Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul P Srinivasan, Howard Zhou, Jonathan T Barron, Ricardo Martin-Brualla, Noah Snavely, and Thomas Funkhouser. Ibrnet: Learning multi-view image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690–4699, 2021.
* [79] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In CVPR, pages 8798–8807, 2018.
* [80] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
* [81] Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, and Victor Adrian Prisacariu. Nerf–: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064, 2021.
* [82] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7467–7477, 2020.
* [83] Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn. Nex: Real-time view synthesis with neural basis expansion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8534–8543, 2021.
* [84] Wenqi Xian, Jia-Bin Huang, Johannes Kopf, and Changil Kim. Space-time neural irradiance fields for free-viewpoint video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9421–9431, 2021.
* [85] Jiaxin Xie, Chenyang Lei, Zhuwen Li, Li Erran Li, and Qifeng Chen. Video depth estimation by fusing flow-to-depth proposals. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10100–10107, 2020.
* [86] Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multi-scale neural patch synthesis. In CVPR, pages 6721–6729, 2017.
* [87] Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, and Long Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision (ECCV), pages 767–783, 2018.
* [88] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. PlenOctrees for real-time rendering of neural radiance fields. In arXiv, 2021.
* [89] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4578–4587, 2021.
* [90] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In CVPR, pages 5505–5514, 2018.
* [91] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In ICML, pages 7354–7363. PMLR, 2019.
* [92] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
* [93] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pages 586–595, 2018.
* [94] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH, 2018.
* [95] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
Department of Mathematics, University of Milan,
via Saldini 50, 10133 Milan, Italy
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
# A set–valued framework for birth–and–growth process
Giacomo Aletti Enea G. Bongiorno Vincenzo Capasso
###### Abstract
We propose a set–valued framework for the well–posedness of birth–and–growth
processes. Our birth–and–growth model is rigorously defined as a suitable
combination, involving the Minkowski sum and the Aumann integral, of two very
general set–valued processes representing nucleation and growth, respectively.
The simplicity of this geometrical approach allows us to avoid the problems
arising from an analytical definition of the growth front, such as boundary
regularity. In this framework, growth is generally anisotropic and, from a
mesoscale point of view, it is not local, i.e. at any fixed time instant,
growth is the same at each point of space.
## Introduction
Nucleation and growth processes arise in several natural and technological
applications (cf. [8, 7] and the references therein) such as, for example,
solidification and phase–transition of materials, semiconductor crystal
growth, biomineralization, and DNA replication (cf., e.g., [15]).
A _birth–and–growth process_ is a family of random closed sets (RaCS) given by
$\Theta_{t}=\bigcup_{n:T_{n}\leq t}\Theta_{T_{n}}^{t}(X_{n})$, for
$t\in\mathbb{R}_{+}$, where $\Theta^{t}_{T_{n}}\left(X_{n}\right)$ is the RaCS
obtained as the evolution up to time $t>T_{n}$ of the germ born at (random)
time $T_{n}$ in (random) location $X_{n}$, according to some growth model.
An analytical approach is often used to model birth–and–growth processes; in
particular, it is assumed that growth is driven by a non–negative normal
velocity, i.e. at every instant $t$, a boundary point $x\in\partial\Theta_{t}$
“grows” along the outward unit normal (e.g. [3, 22, 13, 4, 6, 5, 11]). Thus,
growth is pointwise isotropic; i.e. given a point belonging to
$\partial\Theta_{t}$, the growth rate is independent of the outward normal
direction. Note that the existence of the outward normal vector imposes a
regularity condition on $\partial\Theta_{t}$ and also on the nucleation
process (it cannot be a point process).
This paper is an attempt to offer an original alternative approach, based on a
purely geometric and stochastic point of view, that avoids regularity
assumptions in describing birth–and–growth processes. In particular, the
Minkowski sum (already employed in [19] to describe self–similar growth for a
single convex germ) and the Aumann integral are used here to derive a
mathematical model of such processes. This model, which emphasizes geometric
growth without regularity assumptions on $\partial\Theta_{t}$, is rigorously
defined as a suitable combination of two very general set–valued processes
representing nucleation $\{B_{t}\}_{t\in[t_{0},T]}$ and growth
$\{G_{t}\}_{t\in[t_{0},T]}$, respectively:
$\Theta_{t}=\Bigl(\Theta_{t_{0}}\oplus\int_{t_{0}}^{t}G_{s}\,ds\Bigr)\cup\bigcup_{s\in[t_{0},t]}dB_{s},\qquad d\Theta_{t}=\oplus\,G_{t}\,dt\cup dB_{t}\quad\textrm{ or }\quad\Theta_{t+dt}=(\Theta_{t}\oplus G_{t}\,dt)\cup dB_{t}.$
Roughly speaking, the increment $d\Theta_{t}$, during an infinitesimal time
interval $dt$, is an enlargement due to an infinitesimal Minkowski addend
$G_{t}\,dt$ followed by the union with the infinitesimal nucleation $dB_{t}$.
As a consequence of the definition of the Minkowski sum, at every instant $t$,
each point $x\in\Theta_{t}$ (and hence each point $x\in\partial\Theta_{t}$)
grows by $G_{t}\,dt$, and no regularity assumptions on the boundary are
required. We thus deal with _non-local_ growth; i.e. the growth is the same
Minkowski addend for every $x\in\Theta_{t}$. Nevertheless, under the mesoscale
hypothesis we may consider regions of constant growth, as described, for
example, in [6]. On the other hand, growth is anisotropic whenever $G_{t}$ is
not a ball.
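As a purely illustrative sanity check (ours, not part of the paper), the discrete-time recursion $\Theta_{t+dt}=(\Theta_{t}\oplus G_{t}\,dt)\cup dB_{t}$ can be mimicked on a binary grid, where Minkowski dilation by a structuring element plays the role of $\oplus\,G_{t}\,dt$ and random seeds play the nucleation $dB_{t}$:

```python
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(0)
theta = np.zeros((100, 100), dtype=bool)   # Theta_{t0}: empty initial phase
G = np.ones((3, 3), dtype=bool)            # growth set; anisotropic if not a ball
for t in range(20):
    theta = binary_dilation(theta, structure=G)   # Theta_t (+) G_t dt
    theta |= rng.random(theta.shape) < 1e-4       # union with nucleation dB_t
```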
The aim of this paper is to ensure the well–posedness of such a model and,
hence, to show that the above “integral” and “differential” notations are
meaningful.
Building on this well–posedness, the authors show in [1] how the model leads
to different and significant statistical results.
The article is organized as follows. Section 0.1 contains some assumptions
about (random) closed sets and their basic properties. The model assumptions
are collected in Section 0.2, and integrability properties of the growth
process are studied in Section 0.3. For the sake of simplicity, we present the
main results of the paper (which imply the well-posedness of the model) in
Section 0.4, while the corresponding proofs are given in Section 0.4.1.
Finally, Section 0.5 proposes a discrete-time point of view, also justifying
the integral and differential notations.
## 0.1 Preliminary results
Let $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, $\mathbb{R}_{+}$ be the sets of
all non–negative integer, integer, real and non–negative real numbers
respectively. Let $\mathfrak{X}$, ${\mathfrak{X}^{*}}$, $B^{*}_{1}$ be a
Banach space, its dual space and the unit ball of the dual space centered in
the origin respectively. We shall consider
$\mathfrak{P}^{\,0}(\mathfrak{X})$, the family of all subsets of
$\mathfrak{X}$, and $\mathfrak{P}(\mathfrak{X})=\mathfrak{P}^{\,0}(\mathfrak{X})\setminus\{\emptyset\}$;
$\mathbb{F}^{\,0}(\mathfrak{X})$, the family of all closed subsets of
$\mathfrak{X}$, and $\mathbb{F}(\mathfrak{X})=\mathbb{F}^{\,0}(\mathfrak{X})\setminus\{\emptyset\}$.
The suffixes $c$ and $b$ denote convexity and boundedness properties
respectively (e.g. $\mathbb{F}^{\,0}_{bc}(\mathfrak{X})$ denotes the family of
all closed, bounded and convex subsets of $\mathfrak{X}$).
For all $A,B\in\mathfrak{P}^{\,0}(\mathfrak{X})$ and
$\alpha\in\mathbb{R}_{+}$, let us define
$A+B=\left\{a+b:a\in A,\ b\in B\right\}=\bigcup_{b\in B}(b+A)$ (Minkowski sum),
$\alpha\cdot A=\alpha A=\left\{\alpha a:a\in A\right\}$ (scalar product).
By definition, for all $A\in\mathfrak{P}^{\,0}(\mathfrak{X})$ and
$\alpha\in\mathbb{R}_{+}$, we have $\emptyset+A=\emptyset=\alpha\emptyset$. It
is well known that $+$ is a commutative and associative operation with a
neutral element, but $(\mathfrak{P}(\mathfrak{X}),+)$ is not a group (cf.
[20]). The following relations are useful in the sequel (see [21]): for all
$A,B,C\in\mathfrak{P}(\mathfrak{X})$,
$(A\cup B)+C=(A+C)\cup(B+C)$, and if $B\subseteq C$ then $A+B\subseteq A+C$.
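As a simple illustration (ours, not taken from [20, 21]): in
$\mathfrak{X}=\mathbb{R}$, for $A=[0,1]$ and $B=\{0,2\}$ one has
$A+B=[0,1]\cup[2,3]$ and $2A=[0,2]$, whereas $B+B=\{0,2,4\}\neq
2B=\{0,4\}$; hence $A+A=2A$ may fail for non-convex sets.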
In the following, we shall work with closed sets. In general, if
$A,B\in\mathbb{F}^{\,0}(\mathfrak{X})$ then $A+B$ does not belong to
$\mathbb{F}^{\,0}(\mathfrak{X})$ (e.g., in $\mathfrak{X}=\mathbb{R}$ let
$A=\left\\{n+1/n:n>1\right\\}$ and $B=\mathbb{Z}$, then
$\left\\{1/n=\left(n+1/n\right)+(-n)\right\\}\subset A+B$ and $1/n\downarrow
0$, but $0\not\in A+B$). In view of this fact, we define $A\oplus
B=\overline{A+B}$ where $\overline{(\cdot)}$ denotes the closure in
$\mathfrak{X}$.
For any $A,B\in\mathbb{F}(\mathfrak{X})$ the _Hausdorff distance_ (or
_metric_) is defined by
$\delta_{H}(A,B)=\max\left\\{\sup_{a\in A}\inf_{b\in
B}\left\|a-b\right\|_{\mathfrak{X}},\sup_{b\in B}\inf_{a\in
A}\left\|a-b\right\|_{\mathfrak{X}}\right\\}.$
For all $(x^{*},A)\in B^{*}_{1}\times\mathbb{F}(\mathfrak{X})$, the _support
function_ is defined by $s(x^{*},A)=\sup_{a\in A}x^{*}(a)$. It can be proved
(cf. [14, 2]) that for each $A,B\in\mathbb{F}_{bc}(\mathfrak{X})$,
$\delta_{H}(A,B)=\sup\left\\{\left|s(x^{*},A)-s(x^{*},B)\right|:x^{*}\in
B^{*}_{1}\right\\}.$ (1)
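For finite point sets, $\delta_{H}$ can be checked numerically; the following is a small illustration of the definition (ours), using SciPy's directed Hausdorff distance:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0]])
# delta_H is the maximum of the two directed Hausdorff distances
d_H = max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
print(d_H)  # sqrt(2): the point (1, 0) of A is farthest from B
```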
Let $(\Omega,\mathfrak{F})$ be a measurable space with $\mathfrak{F}$ complete
with respect to some $\sigma$-finite measure, let
$X:\Omega\to\mathfrak{P}^{\,0}(\mathfrak{X})$ be a set–valued map, and
let $D(X)=\left\{\omega\in\Omega:X(\omega)\neq\emptyset\right\}$ denote the
domain of $X$, and
$X^{-1}(A)=\left\{\omega\in\Omega:X(\omega)\cap A\neq\emptyset\right\}$, for
$A\subset\mathfrak{X}$, the inverse image of $A$ under $X$.
Roughly speaking, $X^{-1}(A)$ is the set of all $\omega$ such that $X(\omega)$
hits set $A$.
Different definitions of measurability for set–valued functions have been
developed over the years by several authors (cf. [17, 10, 2, 16] and references therein).
Here, $X$ is _measurable_ if, for each $O$, open subset of $\mathfrak{X}$,
$X^{-1}(O)\in\mathfrak{F}$.
###### Proposition 0.1.1
_(See [17])_ $X:\Omega\to\mathfrak{P}^{\,0}(\mathfrak{X})$ is a measurable
set–valued map if and only if $D(X)\in\mathfrak{F}$, and $\omega\mapsto
d(x,X(\omega))$ is a measurable function of $\omega\in D(X)$ for each
$x\in\mathfrak{X}$.
From now on, $\mathcal{U}[\Omega,\mathfrak{F},\mu;\mathbb{F}(\mathfrak{X})]$
($=\mathcal{U}[\Omega;\mathbb{F}(\mathfrak{X})]$ if the measure $\mu$ is
clear) denotes the family of $\mathbb{F}(\mathfrak{X})$–valued measurable maps
(analogous notation holds whenever $\mathbb{F}(\mathfrak{X})$ is replaced by
another family of subsets of $\mathfrak{X}$).
Let $(\Omega,\mathfrak{F},\mathbb{P})$ be a complete probability space and let
$X\in\mathcal{U}[\Omega,\mathfrak{F},\mathbb{P};\mathbb{F}(\mathfrak{X})]$,
then $X$ is called a _random closed set_ (RaCS).
It can be proved (see [18]) that, if $X,X_{1},X_{2}$ are RaCS and if $\xi$ is
a measurable real–valued function, then $X_{1}\oplus X_{2}$, $X_{1}\ominus
X_{2}$, $\xi X$ and $(\textrm{Int}\ X)^{C}$ are RaCS. Moreover, if
$\left\\{X_{n}\right\\}_{n\in\mathbb{N}}$ is a sequence of RaCS then
$X=\overline{\bigcup_{n\in\mathbb{N}}X_{n}}$ is so.
Let $(\Omega,\mathfrak{F},\mu)$ be a finite measure space (although most of
the results remain valid for $\sigma$-finite measure spaces). The _Aumann
integral_ of
$X\in\mathcal{U}[\Omega,\mathfrak{F},\mu;\mathbb{F}(\mathfrak{X})]$ is defined
by
$\int_{\Omega}Xd\mu=\left\\{\int_{\Omega}xd\mu:x\in S_{X}\right\\},$
where $S_{X}=\left\\{x\in L^{1}[\Omega;\mathfrak{X}]:x\in X\
\mu-\textrm{a.e.}\right\\}$ and $\int_{\Omega}xd\mu$ is the usual Bochner
integral in $L^{1}[\Omega;\mathfrak{X}]$. Moreover,
$\int_{A}Xd\mu=\left\\{\int_{A}xd\mu:x\in S_{X}\right\\}$ for
$A\in\mathfrak{F}$. If $\mu$ is a probability measure, we denote the Aumann
integral by $\mathbb{E}{X}=\int_{\Omega}Xd\mu$.
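For a concrete one–dimensional illustration: when $X(t)=[l(t),u(t)]$ is an interval–valued map with integrable endpoint functions, the Aumann integral is again an interval, $\int X\,d\mu=[\int l\,d\mu,\int u\,d\mu]$, since the integrable selections sweep out every value in between. A numerical sketch (the particular $l$ and $u$ below are arbitrary choices):

```python
import numpy as np

# Aumann integral of the interval-valued map X(t) = [l(t), u(t)] on [0, 1]:
# for convex (interval) values in R it reduces to [int l dt, int u dt].
def integral(f, t):
    # simple trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t) / 2.0))

t = np.linspace(0.0, 1.0, 10001)
l = -t             # lower endpoint selection
u = t ** 2 + 1.0   # upper endpoint selection
print((integral(l, t), integral(u, t)))  # approx (-0.5, 4/3)
```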
Let $X\in\mathcal{U}[\Omega,\mathfrak{F},\mu;\mathbb{F}(\mathfrak{X})]$; $X$ is said to be
_integrably bounded_, and we shall write $X\in
L^{1}[\Omega,\mathfrak{F},\mu;\mathbb{F}(\mathfrak{X})]=L^{1}[\Omega;\mathbb{F}(\mathfrak{X})]$,
if $\left\|X\right\|_{h}\in L^{1}[\Omega,\mathfrak{F},\mu;\mathbb{R}]$, where $\left\|A\right\|_{h}=\delta_{H}\left(A,\\{0\\}\right)=\sup_{a\in A}\left\|a\right\|_{\mathfrak{X}}$ for $A\in\mathbb{F}(\mathfrak{X})$.
## 0.2 Model assumptions
Let us consider
$\begin{array}[]{rl}\Theta_{t}=&\left(\Theta_{t_{0}}\oplus\int_{t_{0}}^{t}G_{s}ds\right)\cup\bigcup_{s\in[t_{0},t]}dB_{s}\\\
d\Theta_{t}=&\oplus G_{t}dt\cup dB_{t}\qquad\textrm{ or
}\qquad\Theta_{t+dt}=(\Theta_{t}\oplus G_{t}dt)\cup dB_{t}.\end{array}$ (2)
In fact, the above equation is not a definition since, for example, problems arise
in handling non–countable unions of (random) closed sets. The well–posedness of
(2) and hence the existence of such a process are the main purpose of this
paper.
From now on, let us consider the following assumptions.
* (A-0)
* -
$(\mathfrak{X},\left\|\cdot\right\|_{\mathfrak{X}})$ is a reflexive Banach
space with separable dual space
$({\mathfrak{X}^{*}},\left\|\cdot\right\|_{{\mathfrak{X}^{*}}})$, (then,
$\mathfrak{X}$ is separable too, see [12, Lemma II.3.16 p. 65]).
* -
$[t_{0},T]\subset\mathbb{R}$ is the _time observation interval_ (or _time
interval_),
* -
$\left(\Omega,\mathfrak{F},\left\\{\mathfrak{F}_{t}\right\\}_{t\in[t_{0},T]},\mathbb{P}\right)$
is a filtered probability space, where the filtration
$\left\\{\mathfrak{F}_{t}\right\\}_{t\in[t_{0},T]}$ is assumed to satisfy the
usual conditions.
(_Nucleation Process_). $B=\left\\{B(\omega,t)=B_{t}:\omega\in\Omega,\
t\in[t_{0},T]\right\\}$ is a process with non–empty closed values, i.e.
$B:\Omega\times[t_{0},T]\to\mathbb{F}(\mathfrak{X})$ such that
* (A-1)
$B(\cdot,t)\in\mathcal{U}[\Omega,\mathfrak{F}_{t},\mathbb{P};\mathbb{F}(\mathfrak{X})]$,
for every $t\in[t_{0},T]$, i.e. $B_{t}$ is an _adapted_ (to
$\left\\{\mathfrak{F}_{t}\right\\}_{t\in[t_{0},T]}$) process.
* (A-2)
$B_{t}$ is increasing: for every $t,s\in[t_{0},T]$ with $s<t$, $B_{s}\subseteq
B_{t}$.
(_Growth Process_). $G=\left\\{G_{t}=G(\omega,t):\omega\in\Omega,\
t\in[t_{0},T]\right\\}$ is a process with non–empty closed values, i.e.
$G:\Omega\times[t_{0},T]\to\mathbb{F}(\mathfrak{X})$ such that
* (A-3)
for every $\omega\in\Omega$ and $t\in[t_{0},T]$, $0\in G(\omega,t)$.
* (A-4)
for every $\omega\in\Omega$ and $t\in[t_{0},T]$, $G(\omega,t)$ is convex, i.e.
$G:\Omega\times[t_{0},T]\to\mathbb{F}_{c}(\mathfrak{X})$.
* (A-5)
there exists $K\in\mathbb{F}_{b}(\mathfrak{X})$ such that
$G(\omega,t)\subseteq K$ for every $t\in[t_{0},T]$ and $\omega\in\Omega$.
As a consequence, $G(\omega,t)\in\mathbb{F}_{b}(\mathfrak{X})$ and
$\left\|G(\omega,t)\right\|_{h}\leq\left\|K\right\|_{h}$,
$\forall(\omega,t)\in\Omega\times[t_{0},T]$.
In order to establish the well–posedness of the integral $\int_{t_{0}}^{t}G_{s}ds$
in (2), let us consider a suitable measurability hypothesis for $G$, analogous
to the predictability assumption for real–valued processes.
An $\mathbb{F}(\mathfrak{X})$–valued process
$G=\left\\{G_{t}\right\\}_{t\in[t_{0},T]}$ has _left continuous trajectories_
on $[t_{0},T]$ if, for every $\overline{t}\in(t_{0},T]$,
$\lim_{t\uparrow\overline{t}}\delta_{H}\left(G(\omega,t),G(\omega,\overline{t})\right)=0,\qquad\textrm{a.s.}$
The $\sigma$-algebra on $\Omega\times[t_{0},T]$ generated by the processes
$\left\\{G_{t}\right\\}_{t\in[t_{0},T]}$ with left continuous trajectories on
$[t_{0},T]$, is called the _previsible_ (or _predictable_) $\sigma$-algebra
and it is denoted by $\mathcal{P}$.
###### Proposition 0.2.1
__The previsible $\sigma$-algebra is also generated by the collection of
sets $A\times\left\\{t_{0}\right\\}$ where $A\in\mathfrak{F}_{t_{0}}$ and
$A\times(s,t]$ where $A\in\mathfrak{F}_{s}$ and $(s,t]\subset[t_{0},T]$.
Proof. Let the $\sigma$-algebra generated by the above collection of sets be
denoted by $\mathcal{P}^{\prime}$. We shall show
$\mathcal{P}=\mathcal{P}^{\prime}$. Let $G$ be a left continuous process and
let $\alpha=(T-t_{0})$, consider for $n\in\mathbb{N}$
$G_{n}(\omega,t)=\left\\{\begin{array}[]{ll}G(\omega,t_{0}),&t=t_{0}\\\ \\\
G\left(\omega,t_{0}+\frac{k\alpha}{2^{n}}\right),&\begin{array}[]{c}\left(t_{0}+\frac{k\alpha}{2^{n}}\right)<t\leq\left(t_{0}+\frac{(k+1)\alpha}{2^{n}}\right)\\\
k\in\left\\{0,\ldots,(2^{n}-1)\right\\}\end{array}\end{array}\right.$
It is clear that $G_{n}$ is $\mathcal{P}^{\prime}$-measurable, since $G$ is
adapted. As $G$ is left continuous, the above sequence of left-continuous
processes converges pointwise (with respect to $\delta_{H}$) to $G$ when $n$
tends to infinity, so $G$ is $\mathcal{P}^{\prime}$-measurable, thus
$\mathcal{P}\subseteq\mathcal{P}^{\prime}$.
Conversely, consider $A\times(s,t]\in\mathcal{P}^{\prime}$ with
$(s,t]\subset[t_{0},T]$ and $A\in\mathfrak{F}_{s}$. Let
$b\in\mathfrak{X}\setminus\\{0\\}$ and $G$ be the process
$G(\omega,v)=\left\\{\begin{array}[]{ll}b,&v\in(s,t],\ \omega\in A\\\
0,&\textrm{otherwise}\end{array}\right.$
This function is adapted and left continuous; hence
$\mathcal{P}^{\prime}\subseteq\mathcal{P}$. $\blacksquare$
Then let us consider the following assumption.
* (A-6)
$G$ is $\mathcal{P}$-measurable.
## 0.3 Growth process properties
Theorem 0.3.2 is the main result in this section. It shows that
$\omega\mapsto\int_{a}^{b}G(\omega,\tau)d\tau$ is a RaCS with non–empty
bounded convex values. This is the first step toward establishing the
well–posedness of (2).
###### Proposition 0.3.1
__Suppose (A-3), …, (A-6), and let $\mu_{\lambda}$ be the
Lebesgue measure on $[t_{0},T]$, then
* •
$G(\omega,\cdot)\in\mathcal{U}\left[[t_{0},T],\mathcal{B}_{[t_{0},T]},\mu_{\lambda};\mathbb{F}_{bc}(\mathfrak{X})\right]$
for every $\omega\in\Omega$.
* •
$G(\cdot,t)\in\mathcal{U}[\Omega,\widetilde{\mathfrak{F}}_{t^{-}},\mathbb{P};\mathbb{F}_{bc}(\mathfrak{X})]$
for each $t\in[t_{0},T]$, where $\widetilde{\mathfrak{F}}_{t^{-}}$ is the so
called _history $\sigma$-algebra_ i.e.
$\widetilde{\mathfrak{F}}_{t^{-}}=\sigma\left(\mathfrak{F}_{s}:0\leq
s<t\right)\subseteq\mathfrak{F}$.
* •
$G\in\
L^{1}[[t_{0},T],\mathcal{B}_{[t_{0},T]},\mu_{\lambda};\mathbb{F}_{bc}(\mathfrak{X})]\cap
L^{1}[\Omega,\mathfrak{F},\mathbb{P};\mathbb{F}_{bc}(\mathfrak{X})]$
Proof. Assumptions (A-3) and (A-4) imply that $G$ takes
non–empty convex values. The measurability and integrability properties are
consequences of (A-6) and (A-5) respectively.
$\blacksquare$
###### Theorem 0.3.2
__Suppose (A-3), …, (A-6). For every $a,b\in[t_{0},T]$,
the integral $\int_{a}^{b}G(\omega,\tau)d\tau$ is non–empty and the set–valued
map
$\begin{array}[]{rccl}G_{a,b}:&\Omega&\to&\mathfrak{P}(\mathfrak{X})\\\
&\omega&\mapsto&\int_{a}^{b}G(\omega,\tau)d\tau\end{array}$
is measurable. Moreover, $G_{a,b}$ is a non–empty, bounded convex RaCS.
In order to prove Theorem 0.3.2, let us consider the following properties of
real–valued processes.
A real–valued process $X=\left\\{X_{t}\right\\}_{t\in[t_{0},T]}$ is
_predictable_ with respect to filtration
$\left\\{\mathfrak{F}_{t}\right\\}_{t\in\mathbb{R}_{+}}$, if it is measurable
with respect to the _predictable $\sigma$-algebra_ $\mathcal{P}_{\mathbb{R}}$,
i.e. the $\sigma$-algebra generated by the collection of random sets
$A\times\left\\{0\right\\}$ where $A\in\mathfrak{F}_{0}$ and $A\times(s,t]$
where $A\in\mathfrak{F}_{s}$.
###### Proposition 0.3.3
_(See [9, Propositions 2.30, 2.32 and 2.41])_ Let
$X=\left\\{X_{t}\right\\}_{t\in[t_{0},T]}$ be a predictable real–valued
process, then $X$ is
$(\mathfrak{F}\otimes\mathcal{B}_{[t_{0},T]},\mathcal{B}_{\mathbb{R}})$-measurable.
Further, for every $\omega\in\Omega$, the trajectory
$X(\omega,\cdot):[t_{0},T]\to\mathbb{R}$ is
$(\mathcal{B}_{[t_{0},T]},\mathcal{B}_{\mathbb{R}})$-measurable.
###### Lemma 0.3.4
__Let $x^{*}$ be an element of the unit ball $B^{*}_{1}$ of the dual space;
then $G\mapsto s(x^{*},G)$ is a measurable map.
Proof. By definition $s(x^{*},G)=\sup\left\\{x^{*}(g):g\in G\right\\}$. Since
$\mathfrak{X}$ is separable (A-0), there exists
$\left\\{g_{n}\right\\}_{n\in\mathbb{N}}\subset G$ such that
$G=\overline{\left\\{g_{n}\right\\}}$. Then, for every $x^{*}\in B^{*}_{1}$ we
have
$s(x^{*},G)=\sup_{g\in G}x^{*}(g)=\sup_{n\in\mathbb{N}}x^{*}(g_{n}).$
Since $x^{*}$ is a continuous map, $s(x^{*},\cdot)$ is measurable.
$\blacksquare$
Proof of Theorem 0.3.2. At first, we prove that $G_{a,b}$ is a measurable
map. From Proposition 0.3.1, integral
$G_{a,b}=\int_{a}^{b}G(\omega,\tau)d\tau$ is well defined for all
$\omega\in\Omega$. Assumption (A-3) implies $0\in
G_{a,b}(\omega)\neq\emptyset$ for every $\omega\in\Omega$. Hence, the domain
of $G_{a,b}$ is the whole $\Omega$ for all $a,b\in[t_{0},T]$
$D\left(G_{a,b}\right)=\left\\{\omega\in\Omega:G_{a,b}(\omega)\neq\emptyset\right\\}=\Omega\in\mathfrak{F}.$
Thus, by Proposition 0.1.1 and for a fixed couple $a,b\in[t_{0},T]$, $G_{a,b}$
is (weakly) measurable if and only if, for every $x\in\mathfrak{X}$, the map
$\omega\mapsto
d\left(x,\int_{a}^{b}G(\omega,\tau)d\tau\right)=\delta_{H}\left(x,\int_{a}^{b}G(\omega,\tau)d\tau\right)$
(3)
is measurable. Equation (1) guarantees that (3) is measurable if and only if,
for every $x\in\mathfrak{X}$, the map
$\omega\mapsto\sup_{x^{*}\in
B^{*}_{1}}\left|s(x^{*},x)-s\left(x^{*},\int_{a}^{b}G(\omega,\tau)d\tau\right)\right|$
is measurable. The above supremum can be computed over a countable family
$\left\\{x^{*}_{i}\right\\}_{i\in\mathbb{N}}$ dense in $B^{*}_{1}$ (such a
family exists since ${\mathfrak{X}^{*}}$ is assumed separable (A-0)):
$\omega\mapsto\sup_{i\in\mathbb{N}}\left|s(x^{*}_{i},x)-s\left(x^{*}_{i},\int_{a}^{b}G(\omega,\tau)d\tau\right)\right|.$
It can be proved ([18, Theorem 2.1.12 p. 46]) that
$s\left(x^{*},\int_{a}^{b}G(\omega,\tau)d\tau\right)=\int_{a}^{b}s\left(x^{*},G(\omega,\tau)\right)d\tau,\qquad\forall
x^{*}\in B^{*}_{1}$
and therefore, since $s(x^{*}_{i},x)$ is a constant, $G_{a,b}$ is measurable
if, for every $x^{*}\in\left\\{x_{i}^{*}\right\\}_{i\in\mathbb{N}}$, the
following map
$\begin{array}[]{ccl}(\Omega,\mathfrak{F})&\to&(\mathbb{R},\mathcal{B}_{\mathbb{R}})\\\
\omega&\mapsto&\int_{a}^{b}s\left(x^{*},G(\omega,\tau)\right)d\tau\end{array}$
(4)
is measurable. Note that $s(x^{*},G(\cdot,\cdot))$, as a map from
$\Omega\times[t_{0},T]$ to $\mathbb{R}$, is predictable since it is the
composition of a predictable map (A-6) with a measurable one (see
Lemma 0.3.4):
$\begin{array}[]{rccccc}s\left(x^{*},G(\cdot,\cdot)\right):&(\Omega\times[t_{0},T],\mathcal{P})&\to&(\mathbb{F}(\mathfrak{X}),\sigma_{f})&\to&(\mathbb{R},\mathcal{B}_{\mathbb{R}})\\\
&(\omega,t)&\mapsto&G(\omega,t)&\mapsto&s\left(x^{*},G(\omega,t)\right)\end{array}$
thus, by Proposition 0.3.3, it is a $\mathcal{P}$-measurable map and hence (4)
is a measurable map.
In view of the first part, it remains to prove that $G_{a,b}$ is a bounded
convex set for a.e. $\omega\in\Omega$. Since $\mathfrak{X}$ is reflexive
(A-0), by Proposition 0.3.1 we have that $G_{a,b}$ is closed ([18,
Theorem 2.2.3]). Further, $G_{a,b}$ is also convex (see [18, Theorem 2.1.5 and
Corollary 2.1.6]).
To conclude the proof, it is sufficient to show that $G_{a,b}$ is included in
a bounded set:
$\int_{a}^{b}G(\omega,\tau)d\tau=\left\\{\int_{a}^{b}g(\omega,\tau)d\tau:g(\omega,\cdot)\in G(\omega,\cdot)\subseteq K\right\\}\subseteq\left\\{\int_{a}^{b}kd\tau:k\in K\right\\}=\left\\{(b-a)k:k\in K\right\\}=(b-a)K.$
$\blacksquare$
## 0.4 Geometric Random Process
For the sake of simplicity, let us present the main results, whose proofs will
be given in Section 0.4.1.
Let us assume conditions from (A-0) to (A-6). For every
$t\in[t_{0},T]\subset\mathbb{R}$, $n\in\mathbb{N}$ and
$\Pi=\left(t_{i}\right)_{i=0}^{n}$ partition of $[t_{0},t]$, let us define
$s_{\Pi}(t)=\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{i=1}^{n}\left(\Delta B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right)$ (5)
$S_{\Pi}(t)=\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{i=1}^{n}\left(\Delta B_{t_{i}}\oplus\int_{t_{i-1}}^{t}G(\tau)d\tau\right)$ (6)
where $\Delta B_{t_{i}}=B_{t_{i}}\setminus B_{t_{i-1}}^{o}$ ($B_{t_{i-1}}^{o}$
denotes the interior set of $B_{t_{i-1}}$) and where the integral is in the
Aumann sense with respect to the Lebesgue measure $d\tau=d\mu_{\lambda}$. We
write $s_{\Pi}$ and $S_{\Pi}$ instead of $s_{\Pi}(t)$ and $S_{\Pi}(t)$ when
the dependence on $t$ is clear.
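To make the role of $s_{\Pi}$ and $S_{\Pi}$ concrete, the following toy sketch works on $\mathfrak{X}=\mathbb{R}$ with a single growing nucleus $B_{t}=[-b(t),b(t)]$ and constant growth $G\equiv[-g,g]$, so that $\int_{s}^{t}G(\tau)d\tau=[-(t-s)g,(t-s)g]$ and all the sets involved are finite unions of intervals; the particular $b$ and $g$ are arbitrary choices. Refining the partition shrinks the gap between the two sums, in agreement with Proposition 0.4.5 below.

```python
import numpy as np

# Toy sketch of the lower/upper sums (5)-(6) on X = R. Sets are unions of
# closed intervals, stored as (lo, hi) pairs. B_t = [-b(t), b(t)], G = [-g, g].
b = lambda t: 0.1 + 0.2 * t   # nucleus radius at time t (arbitrary choice)
g = 0.5                        # growth speed (arbitrary choice)

def sums(t0, t, n):
    ts = np.linspace(t0, t, n + 1)
    base = [(-b(t0) - g * (t - t0), b(t0) + g * (t - t0))]  # B_{t0} (+) int G
    lower, upper = list(base), list(base)
    for i in range(1, n + 1):
        # Delta B_{t_i} = B_{t_i} \ int B_{t_{i-1}} consists of two thin shells
        for sgn in (-1, 1):
            lo, hi = sorted((sgn * b(ts[i - 1]), sgn * b(ts[i])))
            lower.append((lo - g * (t - ts[i]), hi + g * (t - ts[i])))          # as in (5)
            upper.append((lo - g * (t - ts[i - 1]), hi + g * (t - ts[i - 1])))  # as in (6)
    return lower, upper

# The gap between the rightmost points of S_Pi and s_Pi vanishes as |Pi| -> 0:
for n in (2, 8, 32, 128):
    lower, upper = sums(0.0, 1.0, n)
    print(n, max(h for _, h in upper) - max(h for _, h in lower))
```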
Proposition 0.4.1 guarantees that both $s_{\Pi}$ and $S_{\Pi}$ are well
defined RaCS; further, Proposition 0.4.3 shows $s_{\Pi}\subseteq S_{\Pi}$ as a
consequence of integration over different time intervals: if the time interval
of integration of $G$ increases, then the integral of $G$ does not decrease with
respect to set-inclusion (Lemma 0.4.2). Proposition 0.4.4 means that
$\left\\{s_{\Pi}\right\\}$ ($\left\\{S_{\Pi}\right\\}$) increases (decreases)
whenever a refinement of $\Pi$ is considered. At the same time, Proposition
0.4.5 implies that $s_{\Pi}$ and $S_{\Pi}$ become closer to each other (in the
Hausdorff distance sense) as the partition $\Pi$ becomes finer. The “limit” is
independent of the choice of the refinement, as a consequence of Proposition
0.4.6.
Corollary 0.4.7 means that, given any
$\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$ refinement sequence of
$[t_{0},t]$, the random closed sets $s_{\Pi_{j}}$ and $S_{\Pi_{j}}$ play the
same role that lower sums and upper sums have in classical analysis when we
define the Riemann integral. In fact, if $\Theta_{t}$ denotes their limit
value (see (7)), $s_{\Pi_{j}}$ and $S_{\Pi_{j}}$ are a lower and an upper
approximation of $\Theta_{t}$ respectively. Note that, as a consequence of
the monotonicity of $s_{\Pi_{j}}$ and $S_{\Pi_{j}}$, we avoid the problems that
may arise from the uncountable union in the integral expression (2).
###### Proposition 0.4.1
__Let $\Pi$ be a partition of $[t_{0},t]$. Both $s_{\Pi}$ and $S_{\Pi}$,
defined in (5) and (6), are RaCS.
###### Lemma 0.4.2
__Let $X\in L^{1}[I,\mathfrak{F},\mu_{\lambda};\mathbb{F}(\mathfrak{X})]$,
where $I$ is a bounded interval of $\mathbb{R}$, such that $0\in X$
$\mu_{\lambda}$-almost everywhere on $I$ and let $I_{1},I_{2}$ be two other
intervals of $\mathbb{R}$ with $I_{1}\subset I_{2}\subset I$. Then
$\int_{I_{1}}X(\tau)d\tau\subseteq\int_{I_{2}}X(\tau)d\tau.$
###### Proposition 0.4.3
__Let $\Pi$ be a partition of $[t_{0},t]$. Then $s_{\Pi}\subseteq S_{\Pi}$
almost surely.
###### Proposition 0.4.4
__Let $\Pi$ and $\Pi^{\prime}$ be two partitions of $[t_{0},t]$ such that
$\Pi^{\prime}$ is a refinement of $\Pi$. Then, almost surely,
$s_{\Pi}\subseteq s_{\Pi^{\prime}}$ and $S_{\Pi^{\prime}}\subseteq S_{\Pi}$.
###### Proposition 0.4.5
__Let $\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$ be a refinement sequence of
$[t_{0},t]$ (i.e. $\left|\Pi_{j}\right|\to 0$ if $j\to\infty$). Then, almost
surely, $\lim_{j\to\infty}\delta_{H}\left(s_{\Pi_{j}},S_{\Pi_{j}}\right)=0$.
###### Proposition 0.4.6
__Let $\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$ and
$\left\\{\Pi_{l}^{\prime}\right\\}_{l\in\mathbb{N}}$ be two distinct
refinement sequences of $[t_{0},t]$, then, almost surely,
$\lim_{\textrm{\tiny$\begin{array}[]{c}j\rightarrow\infty\\\
l\rightarrow\infty\end{array}$}}\delta_{H}\left(s_{\Pi_{j}},s_{\Pi^{\prime}_{l}}\right)=0\qquad\textrm{and}\qquad\lim_{\textrm{\tiny$\begin{array}[]{c}j\rightarrow\infty\\\
l\rightarrow\infty\end{array}$}}\delta_{H}\left(S_{\Pi_{j}},S_{\Pi^{\prime}_{l}}\right)=0.$
###### Corollary 0.4.7
__For every $\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$ refinement sequence of
$[t_{0},t]$, the following limits exist
$\overline{\left(\bigcup_{j\in\mathbb{N}}s_{\Pi_{j}}\right)},\
\overline{\left(\lim_{j\rightarrow\infty}s_{\Pi_{j}}\right)},\
\lim_{j\rightarrow\infty}S_{\Pi_{j}},\ \bigcap_{j\in\mathbb{N}}S_{\Pi_{j}},$
(7)
and they are equal almost surely. The convergence is taken with respect to
the Hausdorff distance.
We are now ready to define the continuous time stochastic process.
###### Definition 0.4.8
__Assume (A-0), …, (A-6). For every $t\in[t_{0},T]$, let
$\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$ be a refinement sequence of the
time interval $[t_{0},t]$ and let $\Theta_{t}$ be the RaCS defined by
$\overline{\left(\bigcup_{j\in\mathbb{N}}s_{\Pi_{j}}(t)\right)}=\overline{\left(\lim_{j\rightarrow\infty}s_{\Pi_{j}}(t)\right)}=\Theta_{t}=\lim_{j\rightarrow\infty}S_{\Pi_{j}}(t)=\bigcap_{j\in\mathbb{N}}S_{\Pi_{j}}(t),$
then, the family $\Theta=\left\\{\Theta_{t}:t\in[t_{0},T]\right\\}$ is called
_geometric random process G-RaP_ (on $[t_{0},T]$).
###### Theorem 0.4.9
__Let $\Theta$ be a G-RaP on $[t_{0},T]$, then $\Theta$ is a non-decreasing
process with respect to the set inclusion, i.e.
$\mathbb{P}\left(\Theta_{s}\subseteq\Theta_{t},\ \forall t_{0}\leq s<t\leq
T\right)=1.$
Moreover, $\Theta$ is adapted with respect to filtration
$\left\\{\mathfrak{F}_{t}\right\\}_{t\in[t_{0},T]}$.
###### Remark 0.4.10
__We point out that the assumptions we considered on
$\left\\{B_{t}\right\\}$ and $\left\\{G_{t}\right\\}$ are general enough that a
wide family of classical random sets and evolution processes can be described
(for example, the Boolean model is a birth–and–growth process with “null growth”).
### 0.4.1 Proofs of Propositions in Section 0.4
Proof of Proposition 0.4.1. For every $i\in\left\\{1,\ldots,n\right\\}$, the
integrals $\int_{t_{i-1}}^{t}G(\tau)d\tau$ and $\int_{t_{i}}^{t}G(\tau)d\tau$ are RaCS (Theorem 0.3.2). Thus,
measurability Assumption (A-1) on $B$ guarantees that, for every
$t_{i}\in\Pi$, $B_{t_{i}}$, $\Delta B_{t_{i}}$, $\left(\Delta
B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right)$, and hence $s_{\Pi}$ and
$S_{\Pi}$ are RaCS. $\blacksquare$
Proof of Lemma 0.4.2. Let $y\in\left(\int_{I_{1}}X(\tau)d\tau\right)$, then
there exists $x\in S_{X}$, for which
$y=\left(\int_{I_{1}}x(\tau)d\tau\right)$. Let us define on $I_{2}(\supset
I_{1})$
$x^{\prime}(\tau)=\left\\{\begin{array}[]{ll}x(\tau),&\tau\in I_{1}\\\
0,&\tau\in I_{2}\setminus I_{1}\end{array}\right.$
then $x^{\prime}\in S_{X}$ and
$y=\left(\int_{I_{2}}x^{\prime}(\tau)d\tau\right)\in\left(\int_{I_{2}}X(\tau)d\tau\right)$.
$\blacksquare$
Proof of Proposition 0.4.3. The thesis is a consequence of Lemma 0.4.2 and of
the properties of Minkowski addition; in fact
$\left(\int_{t_{i-1}}^{t}G(\tau)d\tau\right)\subseteq\left(\int_{t_{i}}^{t}G(\tau)d\tau\right)$
implies $s_{\Pi}\subseteq S_{\Pi}$. $\blacksquare$
Proof of Proposition 0.4.4. Let $\Pi^{\prime}$ be a refinement of partition
$\Pi$ of $[t_{0},t]$, i.e. $\Pi\subset\Pi^{\prime}$. We prove that
$s_{\Pi}\subseteq s_{\Pi^{\prime}}$ ($S_{\Pi^{\prime}}\subseteq S_{\Pi}$ is
analogous). It is sufficient to show the thesis only for
$\Pi^{\prime}=\Pi\cup\left\\{\overline{t}\right\\}$ where
$\Pi=\left\\{t_{0},\ldots,t_{n}\right\\}$ with $t_{0}<\ldots<t_{n}=t$ and
$\overline{t}\in(t_{0},t)$. Let $i\in\left\\{0,\ldots,(n-1)\right\\}$ be such
that $t_{i}\leq\overline{t}\leq t_{i+1}$ then
$s_{\Pi}=\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{\substack{j=1\\\ j\neq i+1}}^{n}\left(\Delta B_{t_{j}}\oplus\int_{t_{j}}^{t}G(\tau)d\tau\right)\cup\left[\left(B_{t_{i+1}}\setminus B_{t_{i}}^{o}\right)\oplus\int_{t_{i+1}}^{t}G(\tau)d\tau\right]$
and
$s_{\Pi^{\prime}}=\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{\substack{j=1\\\ j\neq i+1}}^{n}\left(\Delta B_{t_{j}}\oplus\int_{t_{j}}^{t}G(\tau)d\tau\right)\cup\left[\left(B_{\overline{t}}\setminus B_{t_{i}}^{o}\right)\oplus\int_{\overline{t}}^{t}G(\tau)d\tau\right]\cup\left[\left(B_{t_{i+1}}\setminus B_{\overline{t}}^{o}\right)\oplus\int_{t_{i+1}}^{t}G(\tau)d\tau\right]$
Hence, in order to prove that $s_{\Pi}\subseteq s_{\Pi^{\prime}}$, it suffices
to show that
$\left[\left(B_{\overline{t}}\setminus B_{t_{i}}^{o}\right)\oplus\int_{\overline{t}}^{t}G(\tau)d\tau\right]\cup\left[\left(B_{t_{i+1}}\setminus B_{\overline{t}}^{o}\right)\oplus\int_{t_{i+1}}^{t}G(\tau)d\tau\right]\supseteq\left[\left(B_{t_{i+1}}\setminus B_{t_{i}}^{o}\right)\oplus\int_{t_{i+1}}^{t}G(\tau)d\tau\right].$
This inclusion is a consequence of
$\left(\int_{\overline{t}}^{t}G(\tau)d\tau\right)\supseteq\left(\int_{t_{i+1}}^{t}G(\tau)d\tau\right)$
(Lemma 0.4.2) and of the distributivity of Minkowski addition over unions. $\blacksquare$
Proof of Proposition 0.4.5. Let $\Pi_{j}=\left(t_{i}\right)_{i=0}^{n}$ be the
$j$-partition of the refinement sequence
$\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$, then
$\delta_{H}\left(s_{\Pi_{j}},S_{\Pi_{j}}\right)=\max\left\\{\sup_{x\in
s_{\Pi_{j}}}d(x,S_{\Pi_{j}}),\sup_{y\in S_{\Pi_{j}}}d(y,s_{\Pi_{j}})\right\\}$
where $d(x,S_{\Pi_{j}})=\inf_{y\in
S_{\Pi_{j}}}\left\|x-y\right\|_{\mathfrak{X}}$. By Proposition 0.4.3,
$s_{\Pi_{j}}\subseteq S_{\Pi_{j}}$ then
$\sup_{x\in s_{\Pi_{j}}}d(x,S_{\Pi_{j}})=0$
and hence we have to prove that, whenever $j\to\infty$ (i.e.
$\left|\Pi_{j}\right|\to 0$),
$\delta_{H}\left(s_{\Pi_{j}},S_{\Pi_{j}}\right)=\sup_{y\in
S_{\Pi_{j}}}d(y,s_{\Pi_{j}})=\sup_{y\in S_{\Pi_{j}}}\inf_{x\in
s_{\Pi_{j}}}\left\|x-y\right\|_{\mathfrak{X}}\longrightarrow 0.$
For every $\omega\in\Omega$, let $y$ be any element of $S_{\Pi_{j}}(\omega)$,
then we distinguish two cases:
(1) if
$y\in\left(B_{t_{0}}(\omega)\oplus\int_{t_{0}}^{t}G(\omega,\tau)d\tau\right)$,
then it is also an element of $s_{\Pi_{j}}(\omega)$, and hence
$d\left(s_{\Pi_{j}}(\omega),y\right)=0$.
(2) if
$y\not\in\left(B_{t_{0}}(\omega)\oplus\int_{t_{0}}^{t}G(\omega,\tau)d\tau\right)$,
then there exists $k\in\left\\{1,\ldots,n\right\\}$ such that
$y\in\left(\Delta B_{t_{k}}(\omega)\oplus\int_{t_{k-1}}^{t}G(\omega,\tau)d\tau\right).$
By definition of $\oplus$, for every $\omega\in\Omega$, there exists a sequence
$\left\\{y_{m}\right\\}_{m\in\mathbb{N}}\subseteq\left(\Delta B_{t_{k}}(\omega)+\int_{t_{k-1}}^{t}G(\omega,\tau)d\tau\right)$
such that $\lim_{m\to\infty}y_{m}=y$. Then, for every $\omega\in\Omega$, there
exist $h_{m}\in\Delta B_{t_{k}}(\omega)$ and
$g_{m}\in\left(\int_{t_{k-1}}^{t}G(\omega,\tau)d\tau\right)$ such that
$y_{m}=(h_{m}+g_{m})$ and hence
$y=\lim_{m\to\infty}(h_{m}+g_{m})=\lim_{m\to\infty}y_{m},$
where the convergence is in the Banach norm; then let
$\overline{m}\in\mathbb{N}$ be such that
$\left\|y-y_{m}\right\|_{\mathfrak{X}}<\left|\Pi_{j}\right|$ for every
$m>\overline{m}$.
Note that, for every $\omega\in\Omega$ and $m\in\mathbb{N}$, by the definition
of the Aumann integral, there exists a selection $\widehat{g_{m}}(\cdot)$ of
$G(\omega,\cdot)$ (i.e. $\widehat{g_{m}}(t)\in G(\omega,t)$
$\mu_{\lambda}$-a.e.) such that
$g_{m}=\int_{t_{k-1}}^{t}\widehat{g_{m}}(\tau)d\tau\qquad\textrm{ and }\qquad y_{m}=h_{m}+\int_{t_{k-1}}^{t}\widehat{g_{m}}(\tau)d\tau.$
For every $\omega\in\Omega$, let us consider
$x_{m}=h_{m}+\int_{t_{k}}^{t}\widehat{g_{m}}(\tau)d\tau;$
then $x_{m}\in s_{\Pi_{j}}(\omega)$ for all $m\in\mathbb{N}$. Moreover, the
following chain of inequalities holds for all $m>\overline{m}$ and
$\omega\in\Omega$:
$\begin{array}[]{rcl}\inf_{x^{\prime}\in s_{\Pi_{j}}}\left\|x^{\prime}-y\right\|_{\mathfrak{X}}&\leq&\left\|x_{m}-y\right\|_{\mathfrak{X}}\leq\left\|x_{m}-y_{m}\right\|_{\mathfrak{X}}+\left\|y_{m}-y\right\|_{\mathfrak{X}}\\\ &\leq&\left\|\int_{t_{k-1}}^{t_{k}}\widehat{g_{m}}(\tau)d\tau\right\|_{\mathfrak{X}}+\left|\Pi_{j}\right|\leq\int_{t_{k-1}}^{t_{k}}\left\|\widehat{g_{m}}(\tau)\right\|_{\mathfrak{X}}d\tau+\left|\Pi_{j}\right|\\\ &\leq&\int_{t_{k-1}}^{t_{k}}\left\|G(\tau)\right\|_{h}d\tau+\left|\Pi_{j}\right|\leq\left|t_{k}-t_{k-1}\right|\left\|K\right\|_{h}+\left|\Pi_{j}\right|\\\ &\leq&\left|\Pi_{j}\right|\left(\left\|K\right\|_{h}+1\right)\stackrel{{\scriptstyle j\to\infty}}{{\longrightarrow}}0\end{array}$
since $\left\|K\right\|_{h}$ is a positive constant. By the arbitrariness of
$y\in S_{\Pi_{j}}(\omega)$ we obtain the thesis. $\blacksquare$
Proof of Proposition 0.4.6. Let $\Pi_{j}$ and $\Pi_{l}^{\prime}$ be two
partitions of the two distinct refinement sequences
$\left\\{\Pi_{j}\right\\}_{j\in\mathbb{N}}$ and
$\left\\{\Pi_{l}^{\prime}\right\\}_{l\in\mathbb{N}}$ of $[t_{0},t]$. Let
$\Pi^{\prime\prime}=\Pi_{j}\cup\Pi_{l}^{\prime}$ be the refinement of both
$\Pi_{j}$ and $\Pi_{l}^{\prime}$. Then Proposition 0.4.4 and Proposition 0.4.3
imply that $s_{\Pi_{j}}\subseteq s_{\Pi^{\prime\prime}}\subseteq
S_{\Pi^{\prime\prime}}\subseteq S_{\Pi_{l}^{\prime}}$. Therefore
$s_{\Pi_{j}}\subseteq S_{\Pi_{l}^{\prime}}$ for every $j,l\in\mathbb{N}$. Then
$\overline{\left(\bigcup_{j\in\mathbb{N}}s_{\Pi_{j}}\right)}\subseteq\bigcap_{l\in\mathbb{N}}S_{\Pi_{l}^{\prime}}.$
Analogously
$\overline{\left(\bigcup_{l\in\mathbb{N}}s_{\Pi_{l}^{\prime}}\right)}\subseteq\bigcap_{j\in\mathbb{N}}S_{\Pi_{j}}.$
Proposition 0.4.5 concludes the proof. $\blacksquare$
In order to prove Theorem 0.4.9, let us consider the following Lemma.
###### Lemma 0.4.11
__Let $s,t\in[t_{0},T]$ with $t_{0}<s<t$ and let $\Pi^{s}$ and $\Pi^{t}$ be
two partitions of $[t_{0},s]$ and $[t_{0},t]$ respectively, such that
$\Pi^{s}\subset\Pi^{t}$. Then
$s_{\Pi^{s}}(s)\subseteq s_{\Pi^{t}}(t)\qquad\textrm{ and }\qquad
S_{\Pi^{s}}(s)\subseteq S_{\Pi^{t}}(t).$
Proof. The proofs of the two inclusions are similar. Let us prove that
$s_{\Pi^{s}}(s)\subseteq s_{\Pi^{t}}(t)$.
Since $\Pi^{s}\subset\Pi^{t}$, then $\Pi^{s}=\left(t_{i}\right)_{i=0}^{n}$ and
$\Pi^{t}=\Pi^{s}\cup\left(t_{i}\right)_{i=n+1}^{n+m}$ with $m\in\mathbb{N}$.
By Lemma 0.4.2, we have that
$\begin{array}[]{rcl}s_{\Pi^{s}}(s)&=&\left(B_{t_{0}}\oplus\int_{t_{0}}^{s}G(\tau)d\tau\right)\cup\bigcup_{i=1}^{n}\left(\Delta B_{t_{i}}\oplus\int_{t_{i}}^{s}G(\tau)d\tau\right)\\\ &\subseteq&\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{i=1}^{n}\left(\Delta B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right)\\\ &\subseteq&\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{i=1}^{n}\left(\Delta B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right)\cup\bigcup_{i=n+1}^{n+m}\left(\Delta B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right)\end{array}$
i.e. $s_{\Pi^{s}}(s)\subseteq s_{\Pi^{t}}(t)$. $\blacksquare$
Proof of Theorem 0.4.9. For every $s,t\in[t_{0},T]$ with $s<t$, let
$\left\\{\Pi^{s}_{i}\right\\}_{i\in\mathbb{N}}$ and
$\left\\{\Pi^{t}_{i}\right\\}_{i\in\mathbb{N}}$ be two refinement sequences of
$[t_{0},s]$ and $[t_{0},t]$ respectively, such that
$\Pi^{s}_{i}\subset\Pi^{t}_{i}$ for every $i\in\mathbb{N}$. Then, by Lemma
0.4.11, $S_{\Pi^{s}_{i}}\subseteq S_{\Pi^{t}_{i}}$. Now, as $i$ tends to
infinity, we obtain
$\Theta_{s}=\bigcap_{i\in\mathbb{N}}S_{\Pi^{s}_{i}}\subseteq\bigcap_{i\in\mathbb{N}}S_{\Pi^{t}_{i}}=\Theta_{t}.$
For the second part, note that Theorem 0.3.2 still holds with $\mathfrak{F}$
replaced by $\mathfrak{F}_{t}$, so that for every $s\in[t_{0},T]$, the family
$\left\\{\int_{s}^{t}G(\omega,\tau)d\tau\right\\}_{t\in[s,T]}$ is a process
adapted to the filtration $\left\\{\mathfrak{F}_{t}\right\\}_{t\in[t_{0},T]}$.
This fact together with Assumption (A-1) guarantees that
$\left\\{S_{\Pi}\right\\}_{t\in[s,T]}$ is adapted for every partition $\Pi$ of
$[s,T]$ and hence $\Theta$ is adapted too. $\blacksquare$
## 0.5 Discrete time case and infinitesimal notations
Let us consider $\Theta_{s}$ and $\Theta_{t}$ with $s<t$. Let
$\left\\{\Pi^{s}_{j}\right\\}_{j\in\mathbb{N}}$ and
$\left\\{\Pi^{t}_{j}\right\\}_{j\in\mathbb{N}}$ be two refinement sequences of
$[t_{0},s]$ and $[t_{0},t]$ respectively, such that
$\Pi^{s}_{j}\subset\Pi^{t}_{j}$ for every $j\in\mathbb{N}$ (i.e.
$\Pi_{j}^{s}=\left(t_{i}\right)_{i=0}^{n}$ and
$\Pi_{j}^{t}=\Pi_{j}^{s}\cup\left(t_{i}\right)_{i=n+1}^{n+m}$ with
$n,m\in\mathbb{N}$). A direct computation gives
$s_{\Pi^{t}_{j}}=\left(s_{\Pi^{s}_{j}}\oplus\int_{s}^{t}G(\tau)d\tau\right)\cup\bigcup_{i=n+1}^{n+m}\left(\Delta
B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right).$
Then, by Definition 0.4.8, whenever $\left|\Pi^{t}_{j}\right|\to 0$, we obtain
$\Theta_{t}=\left(\Theta_{s}\oplus\int_{s}^{t}G(\tau)d\tau\right)\cup\lim_{\left|\Pi^{t}_{j}\right|\rightarrow
0}\bigcup_{i=n+1}^{n+m}\left(\Delta
B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right).$ (10)
The following notations
$G_{k}=\int_{s}^{t}G(\tau)d\tau\qquad\textrm{and}\qquad
B_{k}=\lim_{\left|\Pi^{t}_{j}\right|\rightarrow
0}\bigcup_{i=n+1}^{n+m}\left(\Delta
B_{t_{i}}\oplus\int_{t_{i}}^{t}G(\tau)d\tau\right)$
lead us to the set-valued discrete time stochastic process
$\Theta_{k}=\left\\{\begin{array}[]{ll}(\Theta_{k-1}\oplus G_{k})\cup
B_{k},&k\geq 1,\\\ B_{0},&k=0.\end{array}\right.$
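A toy sketch of this discrete time recursion on $\mathfrak{X}=\mathbb{R}$, with sets represented as finite unions of closed intervals, constant growth $G_{k}=[-g,g]$ and prescribed nucleations $B_{k}$ (all particular values below are arbitrary choices):

```python
# Discrete time recursion Theta_k = (Theta_{k-1} (+) G_k) U B_k on X = R,
# with sets stored as sorted lists of disjoint closed intervals (lo, hi).
def dilate(sets, g):
    # Minkowski sum with G = [-g, g] widens every interval by g on each side
    return [(lo - g, hi + g) for lo, hi in sets]

def merge(sets):
    # normalise a union of intervals into disjoint sorted intervals
    out = []
    for lo, hi in sorted(sets):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

g = 0.3
B = [[(0.0, 0.1)], [(2.0, 2.1)], [], [(-1.5, -1.4)]]  # nucleations at k = 0, 1, 3
Theta = B[0]
for k in range(1, len(B)):
    Theta = merge(dilate(Theta, g) + B[k])   # (Theta_{k-1} (+) G_k) U B_k
print(Theta)  # [(-1.5, -1.4), (-0.9, 1.0), (1.4, 2.7)]
```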
In view of this, we are able to justify the infinitesimal notations introduced in
(2). In particular, from Equation (10), whenever
$\left|\Pi^{t}_{j}\right|\rightarrow 0$, we obtain
$\Theta_{t}=\left(B_{t_{0}}\oplus\int_{t_{0}}^{t}G(\tau)d\tau\right)\cup\bigcup_{s=t_{0}}^{t}\left(dB_{s}\oplus\int_{s}^{t}G(\tau)d\tau\right),\qquad
t\in[t_{0},T].$
Moreover, with a little abuse of this infinitesimal notation, we get two
differential formulations
$d\Theta_{t}=\oplus G_{t}dt\cup
dB_{t}\qquad\textrm{and}\qquad\Theta_{t+dt}=(\Theta_{t}\oplus G_{t}dt)\cup
dB_{t}.$
## References
* [1] G. Aletti, E. G. Bongiorno, and V. Capasso. Statistical aspects of set–valued continuous time stochastic processes. (submitted).
* [2] J. Aubin and H. Frankowska. Set–valued Analysis, volume 2 of Systems & Control: Foundations & Applications. Birkhäuser Boston Inc., 1990.
* [3] G. Barles, H. M. Soner, and P. E. Souganidis. Front propagation and phase field theory. SIAM J. Control Optim., 31(2):439–469, 1993.
* [4] M. Burger. Growth of multiple crystals in polymer melts. European J. Appl. Math., 15(3):347–363, 2004.
* [5] M. Burger, V. Capasso, and A. Micheletti. An extension of the Kolmogorov–Avrami formula to inhomogeneous birth–and–growth processes. In Math Everywhere (G. Aletti et al., Eds). Springer, Berlin, 63–76, 2007.
* [6] M. Burger, V. Capasso, and L. Pizzocchero. Mesoscale averaging of nucleation and growth models. Multiscale Model. Simul., 5(2):564–592 (electronic), 2006.
* [7] V. Capasso, editor. Mathematical Modelling for Polymer Processing. Polymerization, Crystallization, Manufacturing. Mathematics in Industry, Vol. 2, Springer–Verlag, Berlin, 2003.
* [8] V. Capasso. On the stochastic geometry of growth. In Morphogenesis and Pattern Formation in Biological Systems (Sekimura, T. et al. Eds). Springer, Tokyo, 45–58, 2003.
* [9] V. Capasso and D. Bakstein. An Introduction to Continuous–Time Stochastic Processes. Modeling and Simulation in Science, Engineering and Technology. Birkhäuser Boston Inc., 2005.
* [10] C. Castaing and M. Valadier. Convex Analysis and Measurable Multifunctions. Lecture Notes in Mathematics, Vol. 580, Springer–Verlag, Berlin, 1977.
* [11] S. N. Chiu. Johnson–Mehl tessellations: asymptotics and inferences. In Probability, finance and insurance, pages 136–149. World Sci. Publ., River Edge, NJ, 2004.
* [12] N. Dunford and J. T. Schwartz. Linear Operators. Part I. Wiley Classics Library. John Wiley & Sons Inc., New York, 1988.
* [13] H. J. Frost and C. V. Thompson. The effect of nucleation conditions on the topology and geometry of two–dimensional grain structures. Acta Metallurgica, 35:529–540, 1987.
* [14] E. Giné, M. G. Hahn, and J. Zinn. Limit theorems for random sets: an application of probability in Banach space results. In Probability in Banach Spaces, IV (Oberwolfach, 1982). Lecture Notes in Mathematics, Vol. 990, 112–135, Springer, Berlin, 1983.
* [15] J. Herrick, S. Jun, J. Bechhoefer, and A. Bensimon. Kinetic model of DNA replication in eukaryotic organisms. J.Mol.Biol., 320:741–750, 2002.
* [16] F. Hiai and H. Umegaki. Integrals, conditional expectations, and martingales of multivalued functions. J. Multivariate Anal., 7(1):149–182, 1977.
* [17] C. J. Himmelberg. Measurable relations. Fund. Math., 87:53–72, 1975.
* [18] S. Li, Y. Ogura, and V. Kreinovich. Limit Theorems and Applications of Set–Valued and Fuzzy Set–Valued Random Variables. Vol. 43 of Theory and Decision Library. Series B: Mathematical and Statistical Methods. Kluwer Academic Publishers Group, Dordrecht, 2002.
* [19] A. Micheletti, S. Patti, and E. Villa. Crystal growth simulations: a new mathematical model based on the Minkowski sum of sets. In Industry Days 2003-2004 (D. Aquilano et al. Eds), volume 2 of The MIRIAM Project, pages 130–140. Esculapio, Bologna, 2005.
* [20] H. Rådström. An embedding theorem for spaces of convex sets. Proc. Amer. Math. Soc., 3:165–169, 1952.
* [21] J. Serra. Image Analysis and Mathematical Morphology. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], London, 1984.
* [22] Bo Su and Martin Burger. Global weak solutions of non-isothermal front propagation problem. Electron. Res. Announc. Amer. Math. Soc., 13:46–52 (electronic), 2007.
# Faster Relative Entropy Coding with
Greedy Rejection Coding
Gergely Flamich, Department of Engineering, University of Cambridge, <EMAIL_ADDRESS>
Stratis Markou∗, Department of Engineering, University of Cambridge, <EMAIL_ADDRESS>
José Miguel Hernández Lobato, Department of Engineering, University of Cambridge, <EMAIL_ADDRESS>
∗Equal contribution.
###### Abstract
Relative entropy coding (REC) algorithms encode a sample from a target
distribution $Q$ using a proposal distribution $P$ using as few bits as
possible. Unlike entropy coding, REC does not assume discrete distributions or
require quantisation. As such, it can be naturally integrated into
communication pipelines such as learnt compression and differentially private
federated learning. Unfortunately, despite their practical benefits, REC
algorithms have not seen widespread application, due to their prohibitively
slow runtimes or restrictive assumptions. In this paper, we make progress
towards addressing these issues. We introduce Greedy Rejection Coding (GRC),
which generalises the rejection-based algorithm of Harsha et al. (2007) to
arbitrary probability spaces and partitioning schemes. We first show that GRC
terminates almost surely and returns unbiased samples from $Q$, after which we
focus on two of its variants: GRCS and GRCD. We show that for continuous $Q$
and $P$ over $\mathbb{R}$ with unimodal density ratio $dQ/dP$, the expected
runtime of GRCS is upper bounded by $\beta
D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(1)$ where $\beta\approx 4.82$, and its
expected codelength is optimal. This makes GRCS the first REC algorithm with
guaranteed optimal runtime for this class of distributions, up to the
multiplicative constant $\beta$. This significantly improves upon the previous
state-of-the-art method, A* coding (Flamich et al., 2022). Under the same
assumptions, we experimentally observe and conjecture that the expected
runtime and codelength of GRCD are upper bounded by
$D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(1)$. Finally, we evaluate GRC in a
variational autoencoder-based compression pipeline on MNIST, and show that a
modified ELBO and an index-compression method can further improve compression
efficiency.
## 1 Introduction and motivation
Over the past decade, the development of excellent deep generative models
(DGMs) such as variational autoencoders (VAEs; Vahdat & Kautz, 2020; Child,
2020), normalising flows (Kingma et al., 2016) and diffusion models (Ho et
al., 2020) demonstrated great promise in leveraging machine learning (ML) for
data compression. Many recent learnt compression approaches have significantly
outperformed the best classical hand-crafted codecs across a range of domains
including, for example, lossless and lossy compression of images and video
(Zhang et al.; Mentzer et al., 2020, 2022).
#### Transform coding.
Most learnt compression algorithms are transform coding methods: they first
map a datum to a latent variable using a learnt transform, and encode it using
entropy coding (Ballé et al., 2020). Entropy coding assumes discrete variables
while the latent variables in DGMs are typically continuous, so most transform
coding methods quantize the latent variable prior to entropy coding.
Unfortunately, quantization is a non-differentiable operation. Thus, state-of-
the-art DGMs trained with gradient-based optimisation must resort to some
continuous approximation to quantisation during training and switch to hard
quantisation for compression. Previous works have argued that using
quantisation within learnt compression is restrictive or otherwise harmful,
and that a method which naturally interfaces with continuous latent variables
is needed (Havasi et al., 2018; Flamich et al., 2020; Theis & Agustsson, 2021;
Flamich et al., 2022).
#### Relative entropy coding.
In this paper, we study relative entropy coding (REC; Havasi et al., 2018;
Flamich et al., 2020), an alternative to quantization and entropy coding. A
REC algorithm uses a proposal distribution $P$, and a public source of
randomness $S$, to produce a random code which represents a single sample from
a target distribution $Q$. Thus REC does not assume discrete distributions and
interfaces naturally with continuous variables. Remarkably, REC has
fundamental advantages over quantization in lossy compression with realism
constraints (Theis & Agustsson; Theis et al., 2022). More generally, it
finds application across a range of settings including, for example,
differentially private compression for federated learning (Shah et al., 2022).
#### Limitations of existing REC algorithms.
While algorithms for solving REC problems already exist, most of them suffer
from limitations that render them impractical. These limitations fall into
three categories: prohibitively long runtimes, overly restrictive assumptions,
or additional coding overheads. In this work, we study and make progress
towards addressing these limitations.
#### General-purpose REC algorithms.
On the one hand, some REC algorithms make very mild assumptions and are
therefore applicable in a wide range of REC problems (Harsha et al., 2007; Li
& El Gamal, 2018). Unfortunately, these algorithms have prohibitively long
runtimes. This is perhaps unsurprising in light of a result by Agustsson &
Theis (2020), who showed that without additional assumptions on $Q$ and $P$,
the worst-case expected runtime of any general-purpose REC algorithm scales as
$\smash{2^{D_{\mathrm{KL}}[Q\|P]}}$, which is impractically slow. There are
also REC algorithms which accept a desired runtime as a user-specified
parameter, at the expense of introducing bias in their samples (Havasi et al.,
2018; Theis & Yosri, 2022). Unfortunately, in order to reduce this bias to
acceptable levels, these algorithms require runtimes of an order of
$\smash{2^{D_{\mathrm{KL}}[Q\|P]}}$, and are therefore also impractical.
#### Faster algorithms with additional assumptions.
On the other hand, there exist algorithms which make additional assumptions in
order to achieve faster runtimes. For example, dithered quantisation (Ziv,
1985; Agustsson & Theis, 2020) achieves an expected runtime of
$D_{\mathrm{KL}}[Q\|P]$, which is optimal since any REC algorithm has an
expected runtime of at least $D_{\mathrm{KL}}[Q\|P]$. However, it requires
both $Q$ and $P$ to be uniform distributions, which limits its applicability.
Recently, Flamich et al. (2022) introduced A∗ coding, an algorithm based on A∗
sampling (Maddison et al., 2014) which, under assumptions satisfied in
practice, achieves an expected runtime of $D_{\infty}[Q\|P]$. Unfortunately,
this runtime is sub-optimal and is not always practically fast, since
$D_{\infty}[Q\|P]$ can be arbitrarily large for fixed $D_{\mathrm{KL}}[Q\|P]$.
Further, as discussed in Flamich et al. (2022) this runtime also comes at a
cost of an additional, substantial, overhead in codelength, which limits the
applicability of A∗ coding.
Figure 1: An illustration of the relations between the variants of GRC, introduced in this work, and the variants of A∗ coding, organised by partitioning scheme (sample, global, or dyadic partitioning) and search strategy (branch & bound search versus rejection coding). Algorithms in purple are introduced in this work. The algorithms of Harsha et al. (2007) and Li & El Gamal (2018) are equivalent to GRCG and Global A∗ coding respectively.
#### Our contributions.
In this work, we address some of these limitations. First, we propose greedy
rejection coding (GRC), a REC algorithm based on rejection sampling. Then,
inspired by A* coding (Flamich et al., 2022), we develop GRCS and GRCD, two
variants of GRC that partition the sample space to dramatically speed up
termination. Figure 1 illustrates the relations between GRC and its variants
with existing algorithms. We analyze the correctness and the runtime of these
algorithms and, in particular, prove that GRCS has an optimal codelength and
order-optimal runtime on a wide class of one-dimensional problems. In more
detail, our contributions are:
* •
We introduce Greedy Rejection Coding (GRC), which generalises the algorithm of
Harsha et al. (2007) to arbitrary probability spaces and partitioning schemes.
We prove that under mild conditions, GRC terminates almost surely and returns
an unbiased sample from $Q$.
* •
We introduce GRCS and GRCD, two variants of GRC for continuous distributions
over $\mathbb{R}$, which adaptively partition the sample space to dramatically
improve their convergence, inspired by AS∗ and AD∗ coding (Flamich et al.,
2022), respectively.
* •
We prove that whenever $dQ/dP$ is unimodal, the expected runtime and
codelength of GRCS is $\mathcal{O}(D_{\mathrm{KL}}[Q\|P])$. This significantly
improves upon the $\mathcal{O}(D_{\infty}[Q\|P])$ runtime of AS∗ coding, which
is always larger than that of GRCS. This runtime is order-optimal, while
making far milder assumptions than, for example, dithered quantisation.
* •
We provide clear experimental evidence for and conjecture that whenever
$dQ/dP$ is unimodal, the expected runtime and codelength of GRCD are
$D_{\mathrm{KL}}[Q\|P]$. This also significantly improves over the
$D_{\infty}[Q\|P]$ empirically observed runtime of AD∗ coding.
* •
We implement a compression pipeline with VAEs, using GRC to compress MNIST
images. We propose a modified ELBO objective and show that this, together with
a practical method for compressing the indices returned by GRC further improve
compression efficiency.
## 2 Background and related work
#### Relative entropy coding.
First, we define REC algorithms. Definition 1 is stricter than the one given
by Flamich et al. (2022), as it has a stronger condition on the the expected
codelength of the algorithm. In this paper, all logarithms are base 2, and all
divergences are measured in bits.
###### Definition 1 (REC algorithm).
Let $(\mathcal{X},\Sigma)$ be a measurable space, let $\mathcal{R}$ be a set
of pairs of distributions $(Q,P)$ over $(\mathcal{X},\Sigma)$ such that
$D_{\mathrm{KL}}[Q\|P]<\infty$ and $\mathcal{P}$ be the set of all
distributions $P$ such that $(Q,P)\in\mathcal{R}$ for some distribution $Q$.
Let $S=(S_{1},S_{2},\dots)$ be a publicly available sequence of independent
and fair coin tosses, with corresponding probability space
$(\mathcal{S},\mathcal{F},\mathbb{P})$ and let $\mathcal{C}=\\{0,1\\}^{*}$ be
the set of all finite binary sequences. A REC algorithm is a pair of functions
$\mathtt{enc}:\mathcal{R}\times\mathcal{S}\to\mathcal{C}$ and
$\mathtt{dec}:\mathcal{C}\times\mathcal{P}\times\mathcal{S}\to\mathcal{X}$,
such that for each $(Q,P)\in\mathcal{R}$, the outputs of the encoder
$C=\mathtt{enc}(Q,P,S)$ and the decoder $X=\mathtt{dec}(P,C,S)$ satisfy
$X\sim
Q\quad\text{and}\quad\mathbb{E}_{S}[|C|]=D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(\log
D_{\mathrm{KL}}[Q\|P]),$ (1)
where $|C|$ is the length of the string $C$. We call $\mathtt{enc}$ the
encoder and $\mathtt{dec}$ the decoder.
In practice, $S$ is implemented with a pseudo-random number generator (PRNG)
with a public seed. In the remainder of this section, we discuss relevant REC
algorithms, building up to GRC in section 3.
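As a minimal illustration of this convention (not part of the formal definition), the encoder and decoder can instantiate identical pseudo-random streams from a shared public seed:

```python
import numpy as np

# The public randomness S realised as a seeded PRNG: the encoder and decoder
# construct identical streams, so the decoder can replay the encoder's draws.
def shared_source(seed=0):
    return np.random.default_rng(seed)

enc_rng, dec_rng = shared_source(), shared_source()
assert enc_rng.uniform() == dec_rng.uniform()  # the two streams coincide
```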
Figure 2: Example run of Harsha et al. (2007), for a pair of continuous $Q$
and $P$ over $[0,1]$. The green and red regions correspond to acceptance and
rejection regions at each step. Here the algorithm rejects the first two
samples and accepts the third one, terminating at the third step.
#### Existing REC algorithms.
While there are many REC algorithms already, they suffer from various issues
limiting their applicability in practice. Our proposed algorithm, Greedy
Rejection Coding (GRC), is based on and generalises the rejection-based
algorithm of Harsha et al. (2007), by drawing inspiration from A∗ coding
(Flamich et al., 2022). Specifically, A∗ coding can be viewed as a
generalisation of an algorithm due to Li & El Gamal (2018). The former
generalises the latter by introducing a partitioning scheme to speed up
termination. In an analogous fashion, GRC generalises Harsha et al. (2007) by
also introducing partitioning schemes, to speed up termination and achieve
optimal runtimes. Here we discuss relevant algorithms, building up to GRC in
section 3.
#### REC with rejection sampling.
Harsha et al. (2007) introduced a REC algorithm based on rejection sampling,
which we generalise and extend in this work. While this algorithm was
originally presented for discrete $Q$ and $P$, we will show that it can be
generalised to arbitrary probability spaces. In this section, we present this
generalised version and in section 3 we further extend it to arbitrary
partitioning schemes (see definition 5). The generalisation to arbitrary
probability spaces relies on the Radon-Nikodym derivative $dQ/dP$, which is
guaranteed to exist since $Q\ll P$ by definition 1. When $Q$ and $P$ both have
densities, $dQ/dP$ coincides with the density ratio.
At each step, the algorithm draws a sample from $P$ and performs an accept-
reject step, as illustrated in fig. 2. If it rejects the sample, it rules out
part of $Q$ corresponding to the acceptance region, adjusts the proposal to
account for the removed mass, and repeats until acceptance. More formally,
define $T_{0}$ to be the zero-measure on $(\mathcal{X},\Sigma)$, and
recursively for $d\in\mathbb{N}$, set:
$T_{d+1}(S)\stackrel{{\scriptstyle\text{def}}}{{=}}T_{d}(S)+A_{d+1}(S),\qquad A_{d+1}(S)\stackrel{{\scriptstyle\text{def}}}{{=}}\int_{S}\alpha_{d+1}(x)\,dP(x),$ (2)
$t_{d}(x)\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{dT_{d}}{dP}(x),\qquad\alpha_{d+1}(x)\stackrel{{\scriptstyle\text{def}}}{{=}}\min\left\\{\frac{dQ}{dP}(x)-t_{d}(x),\ 1-T_{d}(\mathcal{X})\right\\},$ (3)
$X_{d}\sim P,\qquad U_{d}\sim\text{Uniform}(0,1),\qquad\beta_{d+1}(x)\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{\alpha_{d+1}(x)}{1-T_{d}(\mathcal{X})},$ (4)
for all $x\in\mathcal{X},S\in\Sigma$. The algorithm terminates at the first
occurrence of $U_{d}\leq\beta_{d+1}(X_{d})$. The $T_{d}$ measure corresponds
to the mass that has been ruled out up to and including the $d^{\text{th}}$
rejection: $T_{1}(\mathcal{X}),T_{2}(\mathcal{X})$ and $T_{3}(\mathcal{X})$
are the sums of the blue and green masses in the left, middle and right plots
of fig. 2 respectively. The $A_{d}$ measure corresponds to the acceptance mass
at the $d^{\text{th}}$ step: $A_{1}(\mathcal{X}),A_{2}(\mathcal{X})$ and
$A_{3}(\mathcal{X})$ are the masses of the green regions in the left, middle
and right plots of fig. 2 respectively. Lastly, $t_{d},\alpha_{d}$ are the
Radon-Nikodym derivatives i.e., roughly speaking, the densities, of
$T_{d},A_{d}$ with respect to $P$, and $\beta_{d+1}(X_{d})$ is the probability
of accepting the sample $X_{d}$.
Here, the encoder $\mathtt{enc}$ amounts to keeping count of the number of
rejections that occur up to the first acceptance, setting $C$ equal to this
count and returning $X$ and $C$. The decoder $\mathtt{dec}$ amounts to drawing
$C+1$ samples from $P$, using the same seed as the encoder, and returning the
last of these samples. While this algorithm is elegantly simple and achieves
optimal codelengths, Flamich & Theis (2023) showed its expected runtime is
$\smash{2^{D_{\infty}[Q\|P]}}$, where
$D_{\infty}[Q\|P]=\sup_{x\in\mathcal{X}}\log(dQ/dP)(x)$ is the Rényi
$\infty$-divergence. Unfortunately, this is prohibitively slow in most
practical cases.
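For intuition, the following is a hedged sketch of this scheme for finite discrete $Q$ and $P$, so that the Radon–Nikodym derivatives in eqs. 2 to 4 become ratios of probability vectors; the function names and the discrete restriction are ours, not from Harsha et al. (2007):

```python
import numpy as np

# Rejection-based REC for discrete Q, P (probability vectors). The code C is
# the number of rejections before acceptance; the decoder replays C + 1 draws
# from P with the shared seed and returns the last one.
def rec_encode(Q, P, seed=0):
    rng = np.random.default_rng(seed)
    t = np.zeros_like(Q)   # per-atom mass of the ruled-out measure T_d
    T = 0.0                # total ruled-out mass T_d(X)
    d = 0
    while True:
        x = rng.choice(len(P), p=P)
        u = rng.uniform()
        alpha = min(Q[x] / P[x] - t[x] / P[x], 1.0 - T)  # eq. (3)
        if u <= alpha / (1.0 - T):                       # accept w.p. beta, eq. (4)
            return x, d
        a = np.minimum(Q - t, (1.0 - T) * P)             # accepted mass A_{d+1}, eq. (2)
        t, T = t + a, T + a.sum()
        d += 1

def rec_decode(P, C, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(C + 1):
        x = rng.choice(len(P), p=P)
        rng.uniform()  # consume the uniform to keep the streams aligned
    return x

Q = np.array([0.7, 0.2, 0.1])
P = np.array([1 / 3, 1 / 3, 1 / 3])
x, C = rec_encode(Q, P)
assert rec_decode(P, C) == x  # the decoder recovers the encoder's sample
```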
#### REC with Poisson & Gumbel processes.
Li & El Gamal (2018) introduced a REC algorithm based on Poisson processes,
referred to as Poisson Functional Representation (PFR). PFR assumes that
$dQ/dP$ is bounded above, and relies on the fact that (Kingman, 1992), if
$T_{n}$ are the ordered arrival times of a homogeneous Poisson process on
$\mathbb{R}^{+}$ and $X_{n}\sim P$, then
$N\stackrel{{\scriptstyle\text{def}}}{{=}}\operatorname*{arg\,min}_{n\in\mathbb{N}}\left\\{T_{n}\frac{dP}{dQ}(X_{n})\right\\}\implies
X_{N}\sim Q,$ (5)
Therefore, PFR casts the REC problem into an optimisation, or search, problem,
which can be solved in finite time almost surely. The PFR encoder draws pairs
of samples $T_{n},X_{n}$, until it solves the search problem in eq. 5, and
returns $X=X_{N},C=N-1$. The decoder can recover $X_{N}$ from $(P,C,S)$, by
drawing $N$ samples from $P$, using the same random seed, and keeping the last
sample. While, like the algorithm of Harsha et al. (2007), PFR is elegantly
simple and achieves optimal codelengths, its expected runtime is also
$2^{D_{\infty}[Q\|P]}$ (Maddison, 2016).
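A hedged sketch of the PFR search in eq. 5 for finite discrete $Q$ and $P$: the arrival times of a unit-rate Poisson process are cumulative sums of standard exponentials, and since every future score is at least $T_{n}/\sup(dQ/dP)$, the search can stop once that bound exceeds the current minimum (this early-stopping rule is a standard device, not spelled out in Li & El Gamal (2018)):

```python
import numpy as np

# Poisson Functional Representation: find the arrival index N minimising
# T_n * (dP/dQ)(X_n), as in eq. 5, assuming dQ/dP is bounded and Q, P > 0.
def pfr_encode(Q, P, seed=0):
    rng = np.random.default_rng(seed)
    M = (Q / P).max()               # sup dQ/dP, assumed finite
    T, best, n = 0.0, np.inf, 0
    while True:
        T += rng.exponential(1.0)   # next Poisson arrival time
        if T / M >= best:           # no later arrival can beat the minimum
            return x_N, N
        x = rng.choice(len(P), p=P)
        score = T * P[x] / Q[x]
        if score < best:
            best, N, x_N = score, n, x
        n += 1

Q = np.array([0.7, 0.2, 0.1])
P = np.array([1 / 3, 1 / 3, 1 / 3])
x, N = pfr_encode(Q, P)
print(x, N)  # N is the zero-indexed minimising arrival; the decoder replays
             # N + 1 draws from P with the shared seed and keeps the last
```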
#### Fast REC requires additional assumptions.
These algorithms’ slow runtimes are perhaps unsurprising considering Agustsson
& Theis’s result, which shows under the computational hardness assumption
$\mathrm{RP}\neq\mathrm{NP}$ that without making additional assumptions on $Q$
and $P$, there is no REC algorithm whose expected runtime scales polynomially
in $D_{\mathrm{KL}}[Q\|P]$. Therefore, in order to achieve faster runtimes, a REC
algorithm must make additional assumptions on $Q$ and $P$.
#### A∗ coding.
To this end, Flamich et al. (2022) proposed: (1) a set of appropriate
assumptions which are satisfied by many deep latent variable models in
practice and (2) a REC algorithm, referred to as A∗ coding, which leverages
these assumptions to achieve a substantial speed-up over existing methods. In
particular, A∗ coding generalizes PFR by introducing a partitioning scheme,
which splits the sample space $\mathcal{X}$ in nested partitioning subsets, to
speed up the solution of eq. 5. Drawing inspiration from this, our proposed
algorithm generalises eqs. 2, 3 and 4 in an analogous manner (see fig. 1),
introducing partitioning processes (definition 2) to speed up the algorithm’s
termination.
###### Definition 2 (Partitioning process).
A partitioning process is a process $Z:\mathbb{N}^{+}\to\Sigma$ such that
$Z_{1}=\mathcal{X},\quad Z_{2n}\cap Z_{2n+1}=\emptyset,\quad Z_{2n}\cup Z_{2n+1}=Z_{n}.$ (6)
In other words, a partitioning process $Z$ is a process indexed by the heap
indices of an infinite binary tree, where the root node is $\mathcal{X}$ and
any two children nodes $Z_{2n},Z_{2n+1}$ partition their parent node $Z_{n}$.
In section 3 we present specific choices of partitioning processes which
dramatically speed up GRC.
#### Greedy Poisson Rejection Sampling.
Contemporaneously with our work, Flamich (2023) introduces a rejection sampler based
on Poisson processes, which can be used as a REC algorithm referred to as
Greedy Poisson Rejection Sampling (GPRS). Similar to GRC and A* coding, GPRS
partitions the sample space to speed up the convergence to the accepted
sample. Furthermore, a variant of GPRS also achieves order-optimal runtime for
one-dimensional distribution pairs with a unimodal density ratio. However, the
construction of their method is significantly different from ours, relying
entirely on Poisson processes. Moreover, GPRS requires numerically solving a
certain ODE, while our method does not, making it potentially more favourable
in practice. We believe establishing a closer connection between GPRS and GRC
is a promising future research direction.
## 3 Greedy Rejection Coding
#### Generalising Harsha et al. (2007).
In this section we introduce Greedy Rejection Coding (GRC; definition 5),
which generalises the algorithm of Harsha et al. (2007) in two ways. First,
GRC can be used with distributions over arbitrary probability spaces.
Therefore, it is applicable to arbitrary REC problems, including REC with
continuous distributions. Second, similar to A∗ coding, GRC can be combined
with arbitrary partitioning processes, allowing it to achieve optimal runtimes
given additional assumptions on the REC problem, and an appropriate choice of
partitioning process.
(a) Sample & accept or reject. (b) Partition & sample $b_{1}\in\\{\mathbf{0},\mathbf{1}\\}$. (c) Sample & accept or reject. (d) Sample & accept or reject. (e) Partition & sample $b_{1}\in\\{\mathbf{0},\mathbf{1}\\}$. (f) Sample & accept or reject.
Figure 3: Illustrations of the two variants of GRC considered in this work.
(a) to (c) show GRC with the sample-splitting partitioning process (GRCS). (d)
to (f) show GRC with the dyadic partition process (GRCD). GRC interleaves
accept-reject steps with partitioning steps. In the former, it draws a sample
and either accepts or rejects it. In the latter, it partitions the sample
space and randomly chooses one of the partitions, ruling out large parts of
the sample space and speeding up termination.
### 3.1 Algorithm definition
#### Overview.
Before specifying GRC, we summarise its operation with an accompanying
illustration. On a high level, GRC interleaves accept-reject steps with
partitioning steps, where the latter are determined by a partitioning process.
Specifically, consider the example in figs. 3(d), 3(e) and 3(f), where $Q$ and
$P$ are distributions over $\mathcal{X}=[0,1]$, and $Z$ is the partitioning
process defined by
$Z_{n}=[L,R]\implies Z_{2n}=[L,M),Z_{2n+1}=[M,R],\text{ where }M=(L+R)/2.$ (7)
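For concreteness, the heap-indexed subsets $Z_{n}$ of this dyadic partitioning process can be computed directly from the binary expansion of the heap index; a small sketch:

```python
# The dyadic partitioning process of eq. (7) on X = [0, 1]: the bits of the
# heap index n (after the leading 1) encode the left/right path from the root.
def Z(n):
    L, R = 0.0, 1.0
    for bit in bin(n)[3:]:   # bin(n) = '0b1...'; skip the '0b1' prefix
        M = (L + R) / 2.0
        L, R = (L, M) if bit == '0' else (M, R)
    return (L, R)

assert Z(1) == (0.0, 1.0)    # the root is the whole space
assert Z(2) == (0.0, 0.5) and Z(3) == (0.5, 1.0)
assert Z(5) == (0.25, 0.5)   # path: left child of the root, then right child
```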
In each step $d=1,2,\dots$, GRC maintains a heap index $I_{d}$ of an infinite
binary tree, and an active subset $S_{d}=Z_{I_{d}}\subseteq\mathcal{X}$ of the
sample space, initialised as $I_{0}=1$ and $S_{1}=Z_{1}=\mathcal{X}$
respectively.
#### Accept-reject step.
In each step, GRC draws a sample from the restriction of $P$ to $S_{d}$,
namely $P|_{S_{d}}/P(S_{d})$, and either accepts or rejects it. If the sample
is accepted, the algorithm terminates. Otherwise, GRC performs a partitioning
step, as shown in fig. 3(e).
#### Partitioning step.
In each partitioning step, GRC partitions $S_{d}=Z_{I_{d}}$ into $Z_{2I_{d}}$
and $Z_{2I_{d}+1}$, as specified by the partitioning process $Z$. It then
samples a Bernoulli random variable $b_{d}$, whose outcomes have probabilities
proportional to the mass of $Q$ which has not been accounted for, up to and
including step $d$, within the partitions $Z_{2I_{d}}$ and $Z_{2I_{d}+1}$
respectively. In fig. 3(e), these two masses correspond to the purple and
orange areas, and the algorithm has sampled $b_{d}=1$. Last, GRC updates the
heap index to $I_{d+1}=2I_{d}+b_{d}$ and the active subset to
$S_{d+1}=Z_{I_{d+1}}$. GRC proceeds by interleaving accept-reject and
partitioning steps until an acceptance occurs.
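To make the bookkeeping concrete, the following minimal Python sketch walks down the dyadic partition of eq. 7 while maintaining the heap index exactly as described above, via $I_{d+1}=2I_{d}+b_{d}$; the Bernoulli outcomes are hard-coded purely for illustration (in GRC they are drawn as described in the partitioning step).

```python
# A minimal sketch of the partitioning bookkeeping, for the dyadic process
# of eq. 7 on [0, 1]. The outcomes b_1, b_2, b_3 are hard-coded here purely
# for illustration; in GRC they are sampled from a Bernoulli distribution.
L, R, I = 0.0, 1.0, 1            # S_1 = Z_1 = [0, 1], heap index I = 1
for b in [1, 0, 1]:              # example Bernoulli outcomes
    M = (L + R) / 2              # split point of eq. 7
    L, R = (M, R) if b else (L, M)
    I = 2 * I + b                # heap index update I_{d+1} = 2 I_d + b_d
    print(f"I = {I}, S = [{L}, {R}]")
# prints I = 3, S = [0.5, 1.0]; then I = 6, S = [0.5, 0.75]; then I = 13, ...
```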
#### Algorithm specification.
The aforementioned algorithm can be formalised in terms of probability
measures over arbitrary spaces and arbitrary partitioning processes. Above,
algorithms 1 and 2 describe Harsha et al.’s rejection sampler and our
generalisation of it, respectively. For the sake of keeping the exposition
lightweight, we defer the formal measure-theoretic definition of GRC to the
appendix (see definition 5 in section A.1), and refer to algorithm 2 as a
working definition here.
Algorithm 1 Harsha et al.’s rejection algorithm; equivalent to GRC with a global partition
1:Target $Q$, proposal $P$, space $\mathcal{X}$
2:$d\leftarrow 0,T_{0}\leftarrow 0$
3:while True do
4: $X_{d+1}\sim P$
5: $U_{d+1}\sim\text{Uniform}(0,1)$
6: $\beta_{d+1}\leftarrow\texttt{AcceptProb}(Q,P,X_{d+1},T_{d})$
7: if $U_{d+1}\leq\beta_{d+1}$ then
8:  return $X_{d+1},d$
9: end if
10: $T_{d+1}\leftarrow\texttt{RuledOutMass}(Q,P,T_{d})$
11: $d\leftarrow d+1$
12:end while
Algorithm 2 GRC with partition process $Z$; differences to Harsha et al.’s algorithm shown in green in the original
1:Target $Q$, proposal $P$, space $\mathcal{X}$, partition $Z$
2:$d\leftarrow 0,T_{0}\leftarrow 0$
3:$I_{0}\leftarrow 1,S_{1}\leftarrow\mathcal{X}$
4:while True do
5: $X_{I_{d}}\sim P|_{S_{d}}/P(S_{d})$
6: $U_{I_{d}}\sim\text{Uniform}(0,1)$
7: $\beta_{I_{d}}\leftarrow\texttt{AcceptProb}(Q,P,X_{I_{d}},T_{d})$
8: if $U_{I_{d}}\leq\beta_{I_{d}}$ or $d=D_{\max}$ then
9:  return $X_{I_{d}},I_{d}$
10: end if
11: $p\leftarrow\texttt{PartitionProb}(Q,P,T_{d},Z_{2I_{d}},Z_{2I_{d}+1})$
12: $b_{d}\sim\text{Bernoulli}(p)$
13: $I_{d+1}\leftarrow 2I_{d}+b_{d}$ and $S_{d+1}\leftarrow Z_{I_{d+1}}$
14: $T_{d+1}\leftarrow\texttt{RuledOutMass}(Q,P,T_{d},S_{d+1})$
15: $d\leftarrow d+1$
16:end while
#### Comparison to Harsha et al.
While algorithms 1 and 2 are similar, they differ in two notable ways. First, rather than drawing a sample from $P$, GRC draws a sample from the restriction of $P$ to an active subset $S_{d}=Z_{I_{d}}\subseteq\mathcal{X}$, namely $P|_{S_{d}}/P(S_{d})$. Second, GRC updates its active subset at each step, setting it to one of the children of $Z_{I_{d}}$, namely either $Z_{2I_{d}}$ or $Z_{2I_{d}+1}$, by drawing $b_{d}\sim\text{Bernoulli}(p)$ and setting $S_{d+1}=Z_{2I_{d}+b_{d}}$. This partitioning mechanism, which does not appear in algorithm 1, yields a different variant of GRC for each choice of partitioning process $Z$. In fact, as shown in Proposition 1 below, algorithm 1 is a special case of GRC with $S_{d}=\mathcal{X}$ for all $d$. See section A.2 for the proof.
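To make this comparison concrete, the following is a minimal runnable sketch of the global-partition special case (algorithm 1) for discrete distributions; the pmfs `q` and `p` are represented as NumPy vectors, and the vector `T` plays the role of the ruled-out mass $T_{d}$ (all names here are illustrative, not from the paper).

```python
import numpy as np

def grc_global(q, p, rng):
    """Algorithm 1 (GRC with the global partition) for discrete pmfs.
    A sketch: q, p are pmf vectors with q absolutely continuous w.r.t. p.
    Returns the accepted symbol and the number of rejections d."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    T = np.zeros_like(q)                 # per-symbol mass of Q accounted for
    d = 0
    while True:
        x = rng.choice(len(p), p=p)      # X_{d+1} ~ P
        leftover = 1.0 - T.sum()         # 1 - T_d(X)
        alpha = min(q[x] - T[x], p[x] * leftover)       # acceptable mass at x
        if rng.random() <= alpha / (p[x] * leftover):   # AcceptProb
            return x, d
        T = T + np.minimum(q - T, p * leftover)         # RuledOutMass
        d += 1

rng = np.random.default_rng(0)
print(grc_global([0.7, 0.2, 0.1], [0.25, 0.5, 0.25], rng))
```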
###### Proposition 1 (Harsha et al. (2007) is a special case of GRC).
Let $Z$ be the global partitioning process over $\Sigma$, defined as
$Z_{1}=\mathcal{X},~{}~{}~{}Z_{2n}=Z_{n},~{}~{}~{}Z_{2n+1}=\emptyset,~{}~{}\text{
for all }~{}n=1,2,\dots.$ (8)
Harsha et al. (2007) is equivalent to GRC using this $Z$ and setting $C=D^{*}$
instead of $C=I_{D^{*}}$. We refer to this algorithm as Global GRC, or GRCG
for short.
#### Partitioning processes and additional assumptions.
While Proposition 1 shows that Harsha et al.’s algorithm is equivalent to GRC
with a particular choice of $Z$, a range of other choices of $Z$ is possible,
and this is where we can leverage additional structure. In particular, we show
that when $Q$ and $P$ are continuous distributions over $\mathbb{R}$ with a
unimodal density ratio $dQ/dP$, we can dramatically speed up GRC with an
appropriate choice of $Z$. In particular, we will consider the sample-
splitting and dyadic partitioning processes from Flamich et al. (2022), given
in Definitions 3 and 4.
###### Definition 3 (Sample-splitting partitioning process).
Let $\mathcal{X}=\mathbb{R}\cup\\{-\infty,\infty\\}$ and $P$ a continuous
distribution. The sample-splitting partitioning process is defined as
$Z_{n}=[a,b],a,b\in\mathcal{X}\implies
Z_{2n}=[a,X_{n}],~{}~{}Z_{2n+1}=[X_{n},b],\text{ where }X_{n}\sim
P|_{Z_{n}}/P(Z_{n}).$
In other words, in the sample-splitting process, $Z_{n}$ are intervals of
$\mathbb{R}$, each of which is partitioned into sub-intervals $Z_{2n}$ and
$Z_{2n+1}$ by splitting at the sample $X_{n}$ drawn from
$P|_{Z_{n}}/P(Z_{n})$. We refer to GRC with the sample-splitting partitioning
process as GRCS.
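Assuming a proposal with a tractable CDF (here a standard Gaussian via SciPy, purely for illustration), one sample-splitting step can be sketched as follows:

```python
from scipy.stats import norm

def sample_split(a, b, rng, dist=norm):
    """One step of the sample-splitting process (definition 3): draw
    X ~ P restricted to [a, b] by inverse-CDF sampling, then split at X."""
    Fa, Fb = dist.cdf(a), dist.cdf(b)
    x = dist.ppf(Fa + (Fb - Fa) * rng.random())   # X ~ P|_{[a,b]} / P([a,b])
    return (a, x), (x, b)                         # Z_{2n}, Z_{2n+1}
```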
###### Definition 4 (Dyadic partitioning process).
Let $\mathcal{X}=\mathbb{R}\cup\\{-\infty,\infty\\}$ and $P$ a continuous
distribution. The dyadic partitioning process is defined as
$Z_{n}=[a,b],a,b\in\mathcal{X}\implies
Z_{2n}=[a,c],~{}~{}Z_{2n+1}=[c,b],\text{ such that }P(Z_{2n})=P(Z_{2n+1}).$
Similar to the sample-splitting process, in the dyadic process $Z_{n}$ are
intervals of $\mathbb{R}$. However, in the dyadic process, $Z_{n}$ is
partitioned into sub-intervals $Z_{2n}$ and $Z_{2n+1}$ such that
$P(Z_{2n})=P(Z_{2n+1})$. We refer to GRC with the dyadic partitioning process
as GRCD.
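In contrast to the sample-splitting process, the dyadic split is deterministic given $P$: the split point is the conditional median of the interval, which can be computed from the CDF. A sketch, again assuming a Gaussian proposal via SciPy:

```python
from scipy.stats import norm

def dyadic_split(a, b, dist=norm):
    """One step of the dyadic process (definition 4): split [a, b] at the
    point c with P([a, c]) = P([c, b]), i.e. the conditional median."""
    Fa, Fb = dist.cdf(a), dist.cdf(b)
    c = dist.ppf((Fa + Fb) / 2.0)
    return (a, c), (c, b)

# The root interval splits at the median of P (0 for a standard Gaussian),
# and each child again splits at its conditional median (the quartiles).
left, right = dyadic_split(-float("inf"), float("inf"))
print(left, right)          # (-inf, 0.0) (0.0, inf)
print(dyadic_split(*left))  # ((-inf, -0.674...), (-0.674..., 0.0))
```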
#### GRC with a tunable codelength.
Flamich et al. presented a depth-limited variant of AD∗ coding, DAD∗ coding,
in which the codelength $|C|$ can be provided as a tunable input to the
algorithm. Fixed-codelength REC algorithms are typically approximate because
they introduce bias in their samples, but are nevertheless useful in certain
contexts, such as for coding a group of random variables with the same fixed
codelength. GRCD can be similarly modified to accept $|C|$ as an input, by limiting the maximum number of steps of the algorithm to $D_{\max}$ (see algorithm 2).
Setting $D_{\max}=\infty$ in algorithm 2 corresponds to exact GRC, while
setting $D_{\max}<\infty$ corresponds to depth-limited GRC.
### 3.2 Theoretical results
#### Correctness of GRC.
In theorem 1 we show that GRC terminates almost surely and produces unbiased samples from $Q$, under any one of several mild alternative assumptions on $Q,P$ and $Z$.
Assumption 1 is the most general, since it holds for any $Q$ and $P$ over
arbitrary probability spaces, and can be used to apply GRC to arbitrary coding
settings.
###### Assumption 1.
GRC has a finite ratio mode if $dQ/dP(x)<M$ for all $x\in\mathcal{X}$, for
some $M\in\mathbb{R}$.
Assumption 1 holds for GRCG, GRCS and GRCD, so long as $dQ/dP$ is bounded.
While this assumption is very general, in some cases we may want to consider
$Q,P$ with unbounded $dQ/dP$. To this end, we show that it can be replaced by
alternative assumptions, such as assumptions 2 and 3.
###### Assumption 2.
GRC is single-branch if for each $d$, $b_{d}=0$ or $b_{d}=1$ almost surely.
GRC with the global partitioning process (eq. 8) satisfies assumption 2. In
addition, if $Q$ and $P$ are distributions over $\mathbb{R}$ and $dQ/dP$ is
unimodal, GRCS also satisfies assumption 2.
###### Assumption 3.
Suppose $\mathcal{X}\subseteq\mathbb{R}^{N}$. GRC has nicely shrinking $Z$ if,
almost surely, the following holds. For each $x\in\mathcal{X}$ which is in a
nested sequence of partitions $x\in Z_{1}\supseteq\dots\supseteq
Z_{k_{d}}\supseteq\dots$ with $P(Z_{k_{d}})\to 0$, there exist
$\gamma,r_{1},r_{2},...\in\mathbb{R}_{>0}$ such that
$r_{d}\to 0,~{}Z_{k_{d}}\subseteq B_{r_{d}}(x)\text{ and
}P(Z_{k_{d}})\geq\gamma P(B_{r_{d}}(x)).$ (9)
If $Q$ and $P$ are distributions over $\mathbb{R}$, GRCD satisfies assumption
3. Theorem 1 shows that if any of the above assumptions hold, then GRC
terminates almost surely and yields unbiased samples from $Q$. We provide the
proof of the theorem in appendix B.
###### Theorem 1 (Correctness of GRC).
Suppose $Q,P$ and $Z$ satisfy any one of assumptions 1, 2 and 3. Then,
algorithm 2 terminates with probability $1$, and its returned sample $X$ has
law $X\sim Q$.
#### Expected runtime and codelength of GRCS.
Now we turn to the expected runtime and codelength of GRCS. Theorem 2 shows
that the expected codelength of GRCS is optimal, while Theorem 3 establishes
that its runtime is order-optimal. We present the proofs of the theorems in
appendix C.
###### Theorem 2 (GRCS codelength).
Let $Q$ and $P$ be continuous distributions over $\mathbb{R}$ such that $Q\ll
P$ and with unimodal $dQ/dP$. Let $Z$ be the sample-splitting process, and $X$
its returned sample. Then,
$\mathbb{H}[X|Z]\leq
D_{\mathrm{KL}}[Q\|P]+2\log\left(D_{\mathrm{KL}}[Q\|P]+1\right)+\mathcal{O}(1).$
(10)
###### Theorem 3 (GRCS runtime).
Let $Q$ and $P$ be continuous distributions over $\mathbb{R}$ such that $Q\ll
P$ and with unimodal $dQ/dP$. Let $Z$ be the sample-splitting process and $D$
the number of steps the algorithm takes before accepting a sample. Then, for
$\beta=2/\log(4/3)\approx 4.82$ we have
$\mathbb{E}[D]\leq\beta~{}D_{\mathrm{KL}}[Q\|P]+\mathcal{O}(1).$ (11)
#### Improving the codelength of GRCD.
In Theorem 2 we state the bound for the REC setting, where we make no further
assumptions on $Q$ and $P$. However, we can improve the bound if we consider
the reverse channel coding (RCC) setting (Theis & Yosri, 2022). In RCC, we
have a pair of correlated random variables $X,Y\sim P_{X,Y}$. During
one round of communication, the encoder receives $Y\sim P_{Y}$ and needs to
encode a sample $X\sim P_{X|Y}$ from the posterior using $P_{X}$ as the
proposal distribution. Thus, RCC can be thought of as the average-case version
of REC, where the encoder sets $Q\leftarrow P_{X|Y}$ and $P\leftarrow P_{X}$.
In this case, when the conditions of Theorem 2 hold for every
$(P_{X|Y},P_{X})$ pair, in appendix C we show that the bound can be improved
to $\mathbb{I}[X;Y]+2\log(\mathbb{I}[X;Y]+1)+\mathcal{O}(1)$, where $\mathbb{I}[X;Y]=\mathbb{E}_{Y\sim P_{Y}}\left[D_{\mathrm{KL}}[P_{X|Y}\|P_{X}]\right]$ is the mutual information between $X$ and $Y$.
#### GRCS runtime is order-optimal.
Theorem 3 substantially improves upon the runtime of A∗ coding, which is the
current fastest REC algorithm with similar assumptions. In particular, AS∗
coding has $\mathcal{O}(D_{\infty}[Q\|P])$ expected runtime, which can be
arbitrarily larger than that of GRCS. Remarkably, the runtime of GRCS is
optimal up to the multiplicative factor $\beta$. This term arises from the fact that the sample-splitting process may occasionally rule out only a small part of the sample space at a given step.
## 4 Experiments
We conducted two sets of experiments: one on controlled synthetic REC problems
to check the predictions of our theorems numerically, and another using VAEs
trained on MNIST to study how the performance of GRC-based compression
pipelines can be improved in practice. We conducted all our experiments under
fair and reproducible conditions and make our source code public (to be published with the camera-ready version: https://github.com/source-code).
### 4.1 Synthetic Experiments
#### Synthetic REC experiments.
First, we compare GRCS and GRCD, against AS∗ and AD∗ coding, on a range of
synthetic REC problems. We systematically vary distribution parameters to
adjust the difficulty of the REC problems. Figure 4 shows the results of our
synthetic experiments.
Figure 4: Comparison between GRC and A∗ coding on synthetic REC problems with
Gaussian $Q$ and $P$. Left: we fix $D_{\mathrm{KL}}[Q\|P]=3$ and vary
$D_{\infty}[Q\|P]$, measuring the number of steps taken by each algorithm.
Right: we fix $D_{\infty}[Q\|P]=D_{\mathrm{KL}}[Q\|P]+2$ and vary
$D_{\mathrm{KL}}[Q\|P]$, plotting the codelengths produced by each algorithm.
Reported codelengths do not include additional logarithmic overhead terms.
Results are averaged over $4\times 10^{3}$ different random seeds for each
datapoint. We have included error-bars in both plots but these are too small
to see compared to the plot scales.
#### Partitioning processes improve the runtime of GRC.
First, we observe that, assuming that $dQ/dP$ is unimodal, introducing an
appropriate partitioning process such as the sample-splitting or the dyadic
process, dramatically speeds up GRC. In particular, fig. 4 shows that
increasing the infinity divergence $D_{\infty}[Q\|P]$ (for a fixed
$D_{\mathrm{KL}}[Q\|P]$) does not affect the runtimes of GRCS and GRCD, which
remain constant and small. This is a remarkable speed-up over the exponential
expected runtime of GRCG.
#### GRC is faster than A∗ coding.
Further, we observe that GRC significantly improves upon the runtime of A∗ coding, which is the fastest previously known algorithm with similar
assumptions. In particular, Figure 4 shows that increasing the infinity
divergence $D_{\infty}[Q\|P]$, while keeping the KL divergence
$D_{\mathrm{KL}}[Q\|P]$ fixed, increases the runtime of both AS∗ and AD∗
coding, while the runtimes of GRCS and GRCD remain constant. More generally,
for a fixed KL divergence, the infinity divergence can be arbitrarily large or
even infinite. In such cases, A∗ coding would be impractically slow or even
inapplicable, while GRCS and GRCD remain practically fast.
#### GRCD may improve on GRCS.
In our experiments, we observe that the performance of GRCD (green in fig. 4) matches that of GRCS (blue in fig. 4) in terms of runtime and codelength. While in our experiments GRCD does not yield an improvement over GRCS, we note the following behaviour. The sample-splitting process may occasionally rule out only a small part of the sample space, which can slow down convergence. In particular, in appendix C we show that, on average, the sample-splitting process rules out $\nicefrac{1}{2}$ of the active sample space at each step in the best case, and only $\nicefrac{1}{4}$ in the worst case. By contrast, the dyadic process always rules out $\nicefrac{1}{2}$ of the sample space, potentially speeding up termination. We conjecture that GRCD achieves an optimal expected runtime with $\beta=1$.
### 4.2 Compression with Variational Autoencoders
#### Compressing images with VAEs and REC.
One of the most promising applications of REC is in learnt compression. Here,
we implement a proof-of-concept lossless neural compression pipeline using a
VAE with a factorized Gaussian posterior on MNIST and take the architecture
used by Townsend et al. (2018). To compress an image $Y$, we encode a latent
sample $X$ from the VAE posterior $q(X\mid Y)$ by applying GRCD dimensionwise
after which we encode the image $Y$ with entropy coding using the VAE’s
conditional likelihood $p(Y\mid X)$ as the coding distribution. Unfortunately, in addition to the $D_{\mathrm{KL}}[q(X_{d}\mid Y)\|p(X_{d})]$ bits coding cost for latent dimension $d$, this incurs an overhead of ${\log(D_{\mathrm{KL}}[q(X_{d}\mid Y)\|p(X_{d})]+1)+\mathcal{O}(1)}$ bits, analogously to how a symbol code, like Huffman coding, incurs a constant overhead per symbol (MacKay, 2003). However, since $\log(1+x)\approx x$ when $x\approx 0$, for latent dimensions with small KL the logarithmic overhead is of the same order as the KL itself, and can thus become significant relative to the total coding cost. Hence, we now investigate two approaches to mitigate this issue.
Training objective | # latents | Total BPP with $\zeta$ coding | Total BPP with $\delta$ coding | Neg. ELBO per pixel | Overhead BPP with $\delta$ coding
---|---|---|---|---|---
ELBO | 20 | $1.472\pm 0.004$ | $1.482\pm 0.004$ | $1.391\pm 0.004$ | $0.091\pm 0.000$
ELBO | 50 | $1.511\pm 0.003$ | $1.530\pm 0.003$ | $1.357\pm 0.003$ | $0.172\pm 0.000$
ELBO | 100 | $1.523\pm 0.003$ | $1.600\pm 0.003$ | $1.362\pm 0.003$ | $0.238\pm 0.000$
Modified ELBO | 20 | $1.470\pm 0.004$ | $1.478\pm 0.004$ | $1.393\pm 0.004$ | $0.085\pm 0.000$
Modified ELBO | 50 | $1.484\pm 0.003$ | $1.514\pm 0.003$ | $1.373\pm 0.003$ | $0.141\pm 0.000$
Modified ELBO | 100 | $1.485\pm 0.003$ | $1.579\pm 0.003$ | $1.373\pm 0.003$ | $0.205\pm 0.000$
Table 1: Lossless compression performance comparison on the MNIST test set of
a small VAE with different latent space sizes, optimized using either the ELBO
or the modified ELBO in eq. 12. We report the bits per pixel (BPP) attained
using different coding methods, averaged over the 10,000 test images, along
with the standard error, using GRCD. See section 4.2 for further details.
#### Modified ELBO for REC.
A principled approach to optimizing our neural compression pipeline is to
minimize its expected codelength. For bits-back methods (Townsend et al.,
2018, 2019), the negative ELBO indeed expresses their expected codelength, but
in REC’s case, it does not take into account the additional dimensionwise
logarithmic overhead we discussed above. Thus, we propose to minimize a
modified negative ELBO to account for this (assuming that we have $D$ latent
dimensions):
$\displaystyle\underbrace{\mathbb{E}_{X\sim q(X|Y)}[-\log
p(Y|X)]+D_{\mathrm{KL}}[q(X|Y)\|p(X)]}_{\text{Regular
ELBO}}+\sum_{d=1}^{D}\underbrace{\log\left(D_{\mathrm{KL}}[q(X_{d}|Y)\|p(X_{d})]+1\right)}_{\text{Logarithmic
overhead per dimension}}.$ (12)
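As an illustration, the modified objective can be assembled from quantities any factorized-Gaussian VAE already computes. The sketch below assumes the reconstruction term and the per-dimension KLs are given in consistent units (e.g. nats); the function name is ours, not from a library.

```python
import numpy as np

def modified_neg_elbo(recon_nll, kl_per_dim):
    """Modified negative ELBO of eq. 12. `recon_nll` is E_q[-log p(Y|X)] and
    `kl_per_dim` the vector of KL[q(X_d|Y) || p(X_d)] over latent dimensions."""
    kl_per_dim = np.asarray(kl_per_dim, float)
    log_overhead = np.log1p(kl_per_dim).sum()   # sum_d log(KL_d + 1)
    return recon_nll + kl_per_dim.sum() + log_overhead
```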
#### Coding the latent indices.
As the final step during the encoding process, we need a prefix code to encode
the heap indices $I_{d}$ returned by GRCD for each $d$. Without any further
information, the best we can do is use Elias $\delta$ coding (Elias, 1975),
which, assuming our conjecture on the expected runtime of GRCD holds, yields
an expected codelength of
$\mathbb{I}[Y;X]+2\log(\mathbb{I}[Y;X]+1)+\mathcal{O}(1)$. However, we can improve this if we can estimate $\mathbb{E}[\log I_{d}]$ for each $d$: it can be shown that the maximum entropy distribution of a positive integer-valued random variable, under a constraint on the expectation of its logarithm, is $\zeta(n|\lambda)\propto n^{-\lambda}$, with $\lambda=1+1/\mathbb{E}[\log I_{d}]$. In this case, entropy coding $I_{d}$ using this $\zeta$ distribution improves the expected codelength to $\mathbb{I}[Y;X]+\log(\mathbb{I}[Y;X]+1)+\mathcal{O}(1)$.
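As a sketch of the resulting codelengths (assuming $\lambda>1$, which holds for the estimate above, and using SciPy's Riemann zeta function for the normalising constant):

```python
import numpy as np
from scipy.special import zeta

def zeta_codelength_bits(n, lam):
    """Ideal codelength -log2 zeta(n | lam) of a heap index n under the
    maximum-entropy distribution zeta(n | lam) = n^{-lam} / zeta(lam)."""
    assert lam > 1.0, "lam > 1 is required for normalisability"
    return lam * np.log2(n) + np.log2(zeta(lam))

# Hypothetical example: an estimate E[log I_d] = 4 would give lam = 1.25.
print(zeta_codelength_bits(n=100, lam=1.25))
```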
#### Experimental results.
We trained our VAE with $L\in\\{20,50,100\\}$ latent dimensions optimized
using the negative ELBO and its modified version in Equation 12, and
experimented with encoding the heap indices of GRCD with both $\delta$ and
$\zeta$ coding. We report the results of our experiments in Table 1 on the MNIST test set
in bits per pixel. In addition to the total coding cost, we report the
negative ELBO per pixel, which is the fundamental lower bound on the
compression efficiency of REC with each VAE. Finally, we report the
logarithmic overhead due to $\delta$ coding. We find that both the modified
ELBO and $\zeta$ coding prove beneficial, especially as the dimensionality of
the latent space increases. This is expected, since the overhead is most
significant for latent dimensions with small KLs, which becomes more likely as
the dimension of the latent space grows. The improvements yielded by each of
the two methods are significant, with $\zeta$ coding leading to a consistent
$1-7\%$ gain compared to $\delta$ coding and the modified objective resulting
in up to $2\%$ gain in coding performance.
## 5 Conclusion and Future Work
#### Summary.
In this work, we introduced Greedy Rejection Coding (GRC), a REC algorithm
which generalises the rejection algorithm of Harsha et al. to arbitrary
probability spaces and partitioning processes. We proved the correctness of
our algorithm under mild assumptions, and introduced GRCS and GRCD, two
variants of GRC. We showed that the runtimes of GRCS and GRCD significantly
improve upon the runtime of A∗ coding, which can be arbitrarily larger. We
evaluated our algorithms empirically, verifying our theory, and conducted a proof-of-concept learnt compression experiment on MNIST using VAEs. We
demonstrated that a principled modification to the ELBO and entropy coding
GRCD’s indices using a $\zeta$ distribution can further improve compression
efficiency.
#### Limitations and Further work.
One limitation of GRC is that, unlike A∗ coding, it requires us to be able to
evaluate the CDF of $Q$. While in some settings this CDF may be intractable,
this assumption is satisfied by most latent variable generative models, and is
not restrictive in practice. However, one practical limitation of GRCS and GRCD, as well as AS∗ and AD∗ coding, is that they assume target-proposal pairs over $\mathbb{R}$. For multivariate distributions, we can decompose them into univariate conditionals and apply GRC dimensionwise; however, this incurs an additional coding overhead per dimension, resulting in a non-negligible cost.
Thus, an important direction is to investigate whether fast REC algorithms for
multivariate distributions can be devised, to circumvent this challenge.
## References
* Agustsson & Theis (2020) Eirikur Agustsson and Lucas Theis. Universally quantized neural compression. _Advances in Neural Information Processing Systems_ , 2020.
* Ballé et al. (2020) Johannes Ballé, Philip A Chou, David Minnen, Saurabh Singh, Nick Johnston, Eirikur Agustsson, Sung Jin Hwang, and George Toderici. Nonlinear transform coding. _IEEE Journal of Selected Topics in Signal Processing_ , 2020.
* Child (2020) Rewon Child. Very deep vaes generalize autoregressive models and can outperform them on images. _CoRR_ , abs/2011.10650, 2020.
* Dudley (2018) Richard M Dudley. _Real analysis and probability_. CRC Press, 2018.
* Dunford & Schwartz (1988) Nelson Dunford and Jacob T Schwartz. _Linear operators, part 1: general theory_ , volume 10. John Wiley & Sons, 1988.
* Elias (1975) Peter Elias. Universal codeword sets and representations of the integers. _IEEE transactions on information theory_ , 1975.
* Flamich (2023) Gergely Flamich. Greedy Poisson rejection sampling. _Advances in Neural Information Processing Systems_ , 2023.
* Flamich & Theis (2023) Gergely Flamich and Lucas Theis. Adaptive greedy rejection sampling. _arXiv preprint arXiv:2304.10407_ , 2023.
* Flamich et al. (2020) Gergely Flamich, Marton Havasi, and José Miguel Hernández-Lobato. Compressing images by encoding their latent representations with relative entropy coding. _Advances in Neural Information Processing Systems_ , 2020.
* Flamich et al. (2022) Gergely Flamich, Stratis Markou, and Jose Miguel Hernandez-Lobato. Fast relative entropy coding with A* coding. In _Proceedings of the 39th International Conference on Machine Learning_ , Proceedings of Machine Learning Research. PMLR, 2022.
* Harsha et al. (2007) Prahladh Harsha, Rahul Jain, David McAllester, and Jaikumar Radhakrishnan. The communication complexity of correlation. In _Twenty-Second Annual IEEE Conference on Computational Complexity (CCC’07)_. IEEE, 2007.
* Havasi et al. (2018) Marton Havasi, Robert Peharz, and José Miguel Hernández-Lobato. Minimal random code learning: Getting bits back from compressed model parameters. In _International Conference on Learning Representations_ , 2018.
* Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _CoRR_ , 2020.
* Kingma et al. (2016) Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. _Advances in neural information processing systems_ , 2016.
* Kingman (1992) J.F.C. Kingman. _Poisson Processes_. Oxford Studies in Probability. Clarendon Press, 1992.
* Li & El Gamal (2018) Cheuk Ting Li and Abbas El Gamal. Strong functional representation lemma and applications to coding theorems. _IEEE Transactions on Information Theory_ , 2018.
* MacKay (2003) David JC MacKay. _Information theory, inference and learning algorithms_. Cambridge university press, 2003.
* Maddison (2016) CA Maddison. Poisson process model for Monte Carlo. _Perturbation, Optimization, and Statistics_ , 2016.
* Maddison et al. (2014) Chris J Maddison, Daniel Tarlow, and Tom Minka. A* sampling. _Advances in Neural Information Processing Systems_ , 2014.
* Mentzer et al. (2020) Fabian Mentzer, George D Toderici, Michael Tschannen, and Eirikur Agustsson. High-fidelity generative image compression. _Advances in Neural Information Processing Systems_ , 2020.
* Mentzer et al. (2022) Fabian Mentzer, George Toderici, David Minnen, Sergi Caelles, Sung Jin Hwang, Mario Lucic, and Eirikur Agustsson. Vct: A video compression transformer. In _Advances in Neural Information Processing Systems_ , 2022.
* Pérez-Cruz (2008) Fernando Pérez-Cruz. Kullback-Leibler divergence estimation of continuous distributions. In _2008 IEEE international symposium on information theory_, 2008.
* Rudin (1986) Walter Rudin. _Real and Complex Analysis_. McGraw-Hill, 1986.
* Shah et al. (2022) Abhin Shah, Wei-Ning Chen, Johannes Balle, Peter Kairouz, and Lucas Theis. Optimal compression of locally differentially private mechanisms. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2022.
* Theis & Agustsson (2021) L. Theis and E. Agustsson. On the advantages of stochastic encoders. In _Neural Compression Workshop at ICLR_ , 2021.
* Theis & Yosri (2022) Lucas Theis and Noureldin Yosri. Algorithms for the communication of samples. In _International Conference on Machine Learning_ , 2022.
* Theis et al. (2022) Lucas Theis, Tim Salimans, Matthew D Hoffman, and Fabian Mentzer. Lossy compression with gaussian diffusion. _arXiv preprint arXiv:2206.08889_ , 2022.
* Townsend et al. (2018) James Townsend, Thomas Bird, and David Barber. Practical lossless compression with latent variables using bits back coding. In _International Conference on Learning Representations_ , 2018.
* Townsend et al. (2019) James Townsend, Thomas Bird, Julius Kunze, and David Barber. Hilloc: lossless image compression with hierarchical latent variable models. In _International Conference on Learning Representations_ , 2019.
* Vahdat & Kautz (2020) Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. _Advances in neural information processing systems_ , 2020.
* (32) Shifeng Zhang, Ning Kang, Tom Ryder, and Zhenguo Li. iflow: Numerically invertible flows for efficient lossless compression via a uniform coder. _CoRR_.
* Ziv (1985) Jacob Ziv. On universal quantization. _IEEE Transactions on Information Theory_ , 1985.
## Appendix A Formal definition of Greedy Rejection Coding
### A.1 Formal definition
Here we give a formal definition of GRC in terms of measures. We chose to omit
this from the main text for the sake of exposition, and instead formally
define GRC in definition 5 below.
###### Definition 5 (Greedy Rejection Coding).
Let $Z$ be a partitioning process on $\Sigma$, and $I_{0}=1$,
$S_{0}=Z_{I_{0}}$. Let $T_{0}(\cdot,S_{0})$ be the zero-measure on
$(\mathcal{X},\Sigma)$. Then for $d=0,1,\dots$ define
$\displaystyle t_{d}(x,S_{0:d})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{dT_{d}(\cdot,S_{0:d})}{dP(\cdot)}(x),$
(13) $\displaystyle\alpha_{d+1}(x,S_{0:d})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\min\left\\{\frac{dQ}{dP}(x)-t_{d}(x,S_{0:d}),\frac{1-T_{d}(\mathcal{X},S_{0:d})}{P(S_{d})}\right\\}$
(14) $\displaystyle A_{d+1}(S,S_{0:d})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\int_{S}dP(x)~{}\alpha_{d+1}(x,S_{0:d}),$
(15) $\displaystyle\beta_{d+1}(x,S_{0:d})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\alpha_{d+1}(x,S_{0:d})~{}\frac{P(S_{d})}{1-T_{d}(\mathcal{X},S_{0:d})},$
(16) $\displaystyle X_{I_{d}}$ $\displaystyle\sim\frac{P|_{S_{d}}}{P(S_{d})},$
(17) $\displaystyle U_{I_{d}}$ $\displaystyle\sim\text{Uniform}(0,1),$ (18)
$\displaystyle b_{d}$
$\displaystyle\sim\text{Bernoulli}\left(\frac{Q(Z_{2I_{d}+1})-T_{d}(Z_{2I_{d}+1},S_{0:d})-A_{d+1}(Z_{2I_{d}+1},S_{0:d})}{Q(S_{d})-T_{d}(S_{d},S_{0:d})-A_{d+1}(S_{d},S_{0:d})}\right),$
(19) $\displaystyle I_{d+1}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}2I_{d}+b_{d},$ (20)
$\displaystyle S_{d+1}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}Z_{I_{d+1}},$ (21)
$\displaystyle T_{d+1}(S,S_{0:d+1})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}T_{d}(S\cap
S_{d+1},S_{0:d})+A_{d+1}(S\cap S_{d+1},S_{0:d})+Q(S\cap S_{d+1}^{\prime}),$
(22)
where $S\in\Sigma$ and $P|_{S_{d}}$ denotes the restriction of the measure $P$ to the set $S_{d}$. Generalised Greedy Rejection Coding (GRC) amounts to running this recursion, computing
$\displaystyle
D^{*}=\min\\{d\in\mathbb{N}:U_{I_{d}}\leq\beta_{d+1}(X_{I_{d}},S_{0:d})\\},$
(23)
and returning $X=X_{I_{D^{*}}}$ and $C=I_{D^{*}}$.
The functions AcceptProb and RuledOutMass in algorithm 2 correspond to
calculating the quantities in eq. 16 and eq. 22. The function PartitionProb
corresponds to computing the success probability of the Bernoulli coin toss in
eq. 19.
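For intuition, the three subroutines admit simple discrete instantiations. The sketch below (our own illustration, not the paper's implementation) represents pmfs and the ruled-out mass as NumPy vectors and sets as boolean masks; it follows eqs. 14–16, 19 and 22, evaluated only inside the active subset, as the algorithm does.

```python
import numpy as np

# Discrete-case sketches of the subroutines in algorithm 2 (definition 5).
# q, p: pmf vectors; T: per-symbol Q-mass accounted for so far; S, S_right,
# S_next: boolean masks standing for S_d, Z_{2 I_d + 1} and S_{d+1}.

def alpha_mass(q, p, T, S):
    """Per-symbol mass p(x) * alpha_{d+1}(x) of eq. 14, inside S_d."""
    leftover = 1.0 - T.sum()                         # 1 - T_d(X)
    a = np.minimum(q - T, p * leftover / p[S].sum())
    a[~S] = 0.0                                      # only evaluated inside S_d
    return a

def accept_prob(q, p, x, T, S):
    """beta_{d+1}(x) of eq. 16."""
    a = alpha_mass(q, p, T, S)
    return a[x] * p[S].sum() / (p[x] * (1.0 - T.sum()))

def partition_prob(q, p, T, S, S_right):
    """Success probability of the Bernoulli draw in eq. 19."""
    a = alpha_mass(q, p, T, S)
    num = q[S_right].sum() - T[S_right].sum() - a[S_right].sum()
    den = q[S].sum() - T[S].sum() - a[S].sum()
    return num / den

def ruled_out_mass(q, p, T, S, S_next):
    """T_{d+1} of eq. 22: keep accounting inside S_{d+1}, absorb Q outside."""
    a = alpha_mass(q, p, T, S)
    return np.where(S_next, T + a, q)
```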
### A.2 Harsha et al.’s algorithm is a special case of GRC
Here we show that the algorithm of Harsha et al. is a special case of GRC
which assumes discrete $P$ and $Q$ distributions and uses the global
partitioning process, which we refer to as GRCG. Note that the original
algorithm described by Harsha et al. assumes discrete $P$ and $Q$
distributions, whereas GRCG does not make this assumption.
###### Proposition 2 (Harsha et al. (2007) is a special case of GRC).
Let $Z$ be the global partitioning process over $\Sigma$, defined as
$Z_{1}=\mathcal{X},~{}~{}~{}Z_{2n}=Z_{n},~{}~{}~{}Z_{2n+1}=\emptyset,~{}~{}\text{
for all }~{}n=1,2,\dots.$ (24)
Harsha et al. (2007) is equivalent to GRC using this $Z$ and setting $C=D^{*}$
instead of $C=I_{D^{*}}$. We refer to this variant of GRC as Global GRC, or
GRCG for short.
###### Proof.
With $Z$ defined as in eq. 24, we have $b_{d}\sim\text{Bernoulli}(0)$ by eq.
19, so $b_{d}=0$ almost surely. Therefore $S_{d}=\mathcal{X}$ for all
$d\in\mathbb{N}^{+}$. From this, we have
$T_{d+1}(S,S_{0:d})=T_{d}(S,S_{0:d})+A_{d}(S,S_{0:d})$ and also
$P(S_{d})=P(\mathcal{X})=1$ for all $d\in\mathbb{N}^{+}$. Substituting these
in the equations of definition 5, we recover eqs. 2, 3 and 4. Setting
$C=D^{*}$ instead of $C=I_{D^{*}}$ makes the two algorithms identical. ∎
## Appendix B Proof of correctness of GRC: Theorem 1
In this section we give a proof for the correctness of GRC. Before going into
the proof, we outline our approach and the organisation of the proof.
#### Proof outline.
To prove theorem 1, we consider running GRC for a finite number of steps $d$. We consider the measure $\tau_{d}:\Sigma\to[0,1]$, defined such that for any $S\in\Sigma$, the quantity $\tau_{d}(S)$ is equal to the probability that GRC terminates within $d$ steps and returns a sample $X\in S$. We then show that $\tau_{d}\to Q$ in total variation as $d\to\infty$, which proves theorem 1.
proves theorem 1.
#### Organisation of the proof.
First, in section B.1 we introduce some preliminary definitions, assumptions
and notation on partitioning processes, which we will use in later sections.
Then, in B.2 we derive the $\tau_{d}$ measure, and prove some intermediate
results about it. Specifically, proposition 3 shows that the measures $A_{d}$
and $T_{d}$ from the definition of GRC (definition 5) correspond to
probabilities describing the termination of the algorithm, and lemma 1 uses
these facts to derive the form of $\tau_{d}$ in terms of $A_{d}$. Then, lemma
2 shows that the measure $\tau_{d}$ is no larger than the measure $Q$ and
lemma 3 shows that the limit of $\tau_{d}$ as $d\to\infty$ is also a measure.
Lastly lemma 4 shows that $T_{d}$ and $\tau_{d}$ are equal on the active sets
of the partition process followed within a run of GRC, and then lemma 5 uses
that result to derive the subsets of the sample space on which $\tau_{d}$ is
equal to $Q$ and $\tau$ is equal to $Q$.
Then, in section B.3 we break down the proof of theorem 1 into five cases. First, we consider the probability $p_{d}$ that GRC terminates at step $d$, given that it has not terminated up to and including step $d-1$. Lemma 7 shows that if $p_{d}\not\to 0$, then $\tau_{d}\to Q$ in total variation. We then show that $\tau_{d}\to Q$ in total variation in each of the remaining cases, covered by lemmas 8, 10, 11 and 12. Putting these results together proves theorem 1.
### B.1 Preliminary definitions, assumptions and notation
For the sake of completeness, we restate relevant definitions and assumptions.
Definition 6 restates our notation on the target $Q$ and proposal $P$ measures
and assumption 4 emphasises our assumption that $Q\ll P$. Definition 7
restates the definition of partitioning processes.
###### Definition 6 (Target $Q$ and proposal $P$ distributions).
Let $Q$ and $P$ be probability measures on a measurable space
$(\mathcal{X},\Sigma)$. We refer to $Q$ and $P$ as the target and proposal
measures respectively.
###### Assumption 4 ($Q\ll P$).
We assume $Q$ is absolutely continuous w.r.t. $P$, that is $Q\ll P$. Under
this assumption, the Radon-Nikodym derivative of $Q$ w.r.t. $P$ exists and is
denoted as $dQ/dP:\mathcal{X}\to\mathbb{R}^{+}$.
###### Definition 7 (Partitioning process).
A random process $Z:\mathbb{N}^{+}\to\Sigma$ which satisfies
$Z_{1}=\mathcal{X},~{}~{}Z_{2n}\cap Z_{2n+1}=\emptyset,~{}~{}Z_{2n}\cup
Z_{2n+1}=Z_{n}.$ (25)
is called a partitioning process.
That is, a partitioning process $Z$ is a random process indexed by the heap
indices of an infinite binary tree, where the root node is $\mathcal{X}$ and
any two children nodes $Z_{2n}$ and $Z_{2n+1}$ partition their parent node
$Z_{n}$. Note that by definition, a partitioning process takes values which
are measurable sets in $(\mathcal{X},\Sigma)$.
Because GRC operates on an infinite binary tree, we find it useful to define some
appropriate notation. Definition 8 specifies the ancestors of a node in a
binary tree. Notation 1 gives some useful indexing notation for denoting
different elements of the partitioning process $Z$, as well as for denoting
the branch of ancestors of an element in a partitioning process.
###### Definition 8 (Ancestors).
We define the one-step ancestor function $A_{1}:2^{\mathbb{N}^{+}}\to
2^{\mathbb{N}^{+}}$ as
$\displaystyle A_{1}(N)$
$\displaystyle=N\cup\\{n\in\mathbb{N}^{+}:n^{\prime}=2n\text{ or
}n^{\prime}=2n+1,\text{ for some }n^{\prime}\in N\\},$ (26)
and the ancestor function $A:2^{\mathbb{N}^{+}}\to 2^{\mathbb{N}^{+}}$ as
$A(N)=\left\\{n\in\mathbb{N}^{+}:n\in A_{1}^{k}(\\{n^{\prime}\\})\text{ for
some }n^{\prime}\in N,k\in\mathbb{N}^{+}\right\\}.$ (27)
where $A_{1}^{k}$ denotes the composition of $A_{1}$ with itself $k$ times.
Viewing $\mathbb{N}^{+}$ as the set of heap indices of an infinite binary tree, $A$ maps a set $N\subseteq\mathbb{N}^{+}$ of natural numbers (nodes) to the set of all elements of $N$ and their ancestors.
###### Notation 1 (Double indexing for $Z$, ancestor branch).
Given a partitioning process $Z$, we use the notation $Z_{d,k}$, where
$d=1,2,\dots$ and $k=1,\dots,2^{d-1}$ to denote the $k^{th}$ node at depth
$d$, that is
$Z_{d,k}:=Z_{2^{d-1}-1+k}.$ (28)
We use the hat notation $\hat{Z}_{d,k}$ to denote the sequence of nodes
consisting of $Z_{d,k}$ and all its ancestors
$\hat{Z}_{d,k}:=(Z_{n}:n\in A(\\{2^{d-1}-1+k\\})),$ (29)
and call $\hat{Z}_{d,k}$ the ancestor branch of $Z_{d,k}$.
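For example, the double-index and ancestor-branch bookkeeping reduces to simple heap-index arithmetic, where the parent of node $n$ is $\lfloor n/2\rfloor$; a small sketch:

```python
def heap_index(d, k):
    """Heap index of Z_{d,k}, the k-th node at depth d: 2^{d-1} - 1 + k."""
    return 2 ** (d - 1) - 1 + k

def ancestor_branch(d, k):
    """Heap indices of Z_{d,k} and all of its ancestors, root first."""
    n, branch = heap_index(d, k), []
    while n >= 1:
        branch.append(n)
        n //= 2          # parent of node n in the infinite binary tree
    return branch[::-1]

print(heap_index(3, 2))       # 5
print(ancestor_branch(3, 2))  # [1, 2, 5]
```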
###### Notation 2 ($\mathbb{P}$ measure).
In definition 5, we defined $\mathbb{P}$ to be the measure associated with an
infinite sequence of independent fair coin tosses over a measurable space
$(\Omega,\mathcal{S})$. To avoid heavy notation, for the rest of the proof we
will overload this symbol as follows: if $F$ is a random variable from
$\Omega$ to some measurable space, we will abbreviate $\mathbb{P}\circ F^{-1}$
by simply $\mathbb{P}(F)$.
### B.2 Deriving the measure of samples returned by GRC
For the remainder of the proof, we condition on a fixed partitioning process
sample $Z$. For brevity, we omit this conditioning which, from here on is
understood to be implied. Proposition 3 shows that the measures $A_{d}$ and
$T_{d}$ correspond to the probabilities that GRC picks a particular branch of
the binary tree and terminates at step $d$, or does not terminate up to and
including step $d$, respectively.
###### Proposition 3 (Acceptance and rejection probabilities).
Let $V_{d}$ be the event that GRC does not terminate up to and including step
$d$ and $W_{d}$ be the event that it terminates at step $d$. Let $S_{0:d}=B_{0:d}$ denote the event that the sequence of active subsets produced by GRC up to and including step $d$ is $B_{0:d}$. Then
$\displaystyle\mathbb{P}(V_{d},S_{0:d}=B_{0:d})$
$\displaystyle=1-T_{d}(\mathcal{X},B_{0:d}),$ $\displaystyle\text{ for
}d=0,1,\dots,$ (30) $\displaystyle\mathbb{P}(W_{d+1},S_{0:d}=B_{0:d})$
$\displaystyle=A_{d+1}(\mathcal{X},B_{0:d}),$ $\displaystyle\text{ for
}d=0,1,\dots.$ (31)
###### Proof.
First we consider the probability that GRC terminates at step $k+1$ given that
it has not terminated up to and including step $d$, that is the quantity
$\mathbb{P}(W_{k+1}~{}|~{}V_{k},S_{0:k}=B_{0:k})$. By definition 5, this
probability is given by integrating the acceptance probability
$\beta_{k+1}(x,B_{0:k})$ over $x\in\mathcal{X}$, with respect to the measure
$P|_{B_{k}}/P(B_{k})$, that is
$\displaystyle\mathbb{P}(W_{k+1}~{}|~{}V_{k},S_{0:k}=B_{0:k})$
$\displaystyle=\int_{x\in B_{k}}dP(x)\frac{\beta_{k+1}(x,B_{0:k})}{P(B_{k})}$
(32)
$\displaystyle=\int_{x\in\mathcal{X}}dP(x)\frac{\beta_{k+1}(x,B_{0:k})}{P(B_{k})}$
(33)
$\displaystyle=\int_{x\in\mathcal{X}}dP(x)\frac{\alpha_{k+1}(x,B_{0:k})}{1-T_{k}(\mathcal{X},B_{0:k})}$
(34)
$\displaystyle=\frac{A_{k+1}(\mathcal{X},B_{0:k})}{1-T_{k}(\mathcal{X},B_{0:k})}.$ (35)
Now, we show the result by induction on $d$, starting from the base case of
$d=0$.
Base case: For $d=0$, by the definition of GRC (definition 5)
$S_{0}=Z_{I_{0}}=\mathcal{X}$, so
$\displaystyle\mathbb{P}\left(V_{0},S_{0}=B_{0}\right)=1~{}\text{ and
}~{}T_{0}(\mathcal{X},B_{0})=0,$ (36)
which show the base case for eq. 30. Now, plugging in $k=0$ in eq. 35 we
obtain
$\mathbb{P}(W_{1},S_{0}=B_{0})=\mathbb{P}(W_{1}~{}|~{}V_{0},S_{0}=B_{0})=\frac{A_{1}(\mathcal{X},B_{0})}{1-T_{0}(\mathcal{X},B_{0})}=A_{1}(\mathcal{X},B_{0})$
(37)
where we have used the fact that $T_{0}(\mathcal{X},B_{0})=0$, showing the
base case for eq. 31.
Inductive step: Suppose that for all $k=0,1,2,\dots,d$ it holds that
$\mathbb{P}\left(V_{d},S_{0:k}=B_{0:k}\right)=1-T_{d}(\mathcal{X},B_{0:k})~{}~{}\text{
and
}~{}~{}\mathbb{P}\left(W_{k+1},S_{0:k}=B_{0:k}\right)=A_{k+1}(\mathcal{X},B_{0:k}).$
(38)
Setting $k=d$ in eq. 35 and taking complements, we obtain
$\displaystyle\mathbb{P}(W^{\prime}_{d+1}~{}|~{}V_{d},S_{0:d}=B_{0:d})$ $\displaystyle=\frac{1-T_{d}(\mathcal{X},B_{0:d})-A_{d+1}(\mathcal{X},B_{0:d})}{1-T_{d}(\mathcal{X},B_{0:d})},$ (39)
where $W^{\prime}_{d+1}$ denotes the complement of the event $W_{d+1}$. Using this together with the inductive hypothesis from eq. 38, we have
$\displaystyle\mathbb{P}(V_{d+1},S_{0:d}=B_{0:d})$
$\displaystyle=\mathbb{P}(W_{d+1}^{\prime},V_{d},S_{0:d}=B_{0:d})=1-T_{d}(\mathcal{X},B_{0:d})-A_{d+1}(\mathcal{X},B_{0:d}).$
(40)
Now, $B_{d}=Z_{n}$ for some $n\in\mathbb{N}^{+}$. Denote $B_{d}^{L}:=Z_{2n}$
and $B_{d}^{R}:=Z_{2n+1}$. Then, by the product rule
$\displaystyle\mathbb{P}(V_{d+1},S_{0:d}$ $\displaystyle=B_{0:d},S_{d+1}=B^{R}_{d})=$ (41)
$\displaystyle=\mathbb{P}(S_{d+1}=B^{R}_{d}~{}|~{}V_{d+1},S_{0:d}=B_{0:d})\mathbb{P}(V_{d+1},S_{0:d}=B_{0:d})$ (42)
$\displaystyle=\frac{Q(B_{d}^{R})-T_{d}(B_{d}^{R},B_{0:d})-A_{d+1}(B_{d}^{R},B_{0:d})}{Q(B_{d})-T_{d}(B_{d},B_{0:d})-A_{d+1}(B_{d},B_{0:d})}\mathbb{P}(V_{d+1},S_{0:d}=B_{0:d})$ (43)
$\displaystyle=\frac{Q(B_{d}^{R})-T_{d}(B_{d}^{R},B_{0:d})-A_{d+1}(B_{d}^{R},B_{0:d})}{\underbrace{Q(\mathcal{X})}_{=~{}1}-T_{d}(\mathcal{X},B_{0:d})-A_{d+1}(\mathcal{X},B_{0:d})}\mathbb{P}(V_{d+1},S_{0:d}=B_{0:d})$ (44)
$\displaystyle=Q(B_{d}^{R})-T_{d}(B_{d}^{R},B_{0:d})-A_{d+1}(B_{d}^{R},B_{0:d})$ (45)
$\displaystyle=1-T_{d+1}(\mathcal{X},B_{0:d+1})$ (46)
where we have written $B_{0:d+1}=(B_{0},\dots,B_{d},B_{d}^{R})$. Above, to go
from 41 to 42 we used the definition of conditional probability, to go from 42
to 43 we used the definition in 19, to go from 43 to 44 we used the fact that
for $k=0,1,2,\dots,$ it holds that
$\displaystyle
Q(\mathcal{X})-T_{k}(\mathcal{X},B_{0:k})-A_{k+1}(\mathcal{X},B_{0:k})$
$\displaystyle=Q(B_{k})-T_{k}(B_{k},B_{0:k})-A_{k+1}(B_{k},B_{0:k})+$
$\displaystyle~{}~{}~{}~{}~{}~{}+Q(B_{k}^{\prime})-\underbrace{T_{k}(B_{k}^{\prime},B_{0:k})}_{=~{}Q(B_{k}^{\prime})}-\underbrace{A_{k+1}(B_{k}^{\prime},B_{0:k})}_{=~{}0}$
(47) $\displaystyle=Q(B_{k})-T_{k}(B_{k},B_{0:k})-A_{k+1}(B_{k},B_{0:k}),$ (48)
from 44 to 45 we have used eq. 40, and lastly from 45 to 46 we have used the definition of $T_{d+1}$ in eq. 22. Equation 46 similarly holds if $B^{R}_{d}$ is replaced by $B^{L}_{d}$, so we arrive at
$\displaystyle\mathbb{P}(V_{d+1},S_{0:d+1}$ $\displaystyle=B_{0:d+1})=1-T_{d+1}(\mathcal{X},B_{0:d+1}),$ (49)
which shows the inductive step for eq. 30. Further, we have
$\displaystyle\mathbb{P}(W_{d+2},S_{0:d+1}=B_{0:d+1})=\mathbb{P}(W_{d+2}~{}|~{}V_{d+1},S_{0:d+1}=B_{0:d+1})\mathbb{P}(V_{d+1},S_{0:d+1}=B_{0:d+1})$ (50)
and also by setting $k=d+1$ in eq. 35 we have
$\mathbb{P}(W_{d+2}~{}|~{}V_{d+1},S_{0:d+1}=B_{0:d+1})=\frac{A_{d+2}(\mathcal{X},B_{0:d+1})}{1-T_{d+1}(\mathcal{X},B_{0:d+1})}.$ (51)
Combining eq. 49 and eq. 51 we arrive at
$\displaystyle\mathbb{P}(W_{d+2},S_{0:d+1}=B_{0:d+1})=A_{d+2}(\mathcal{X},B_{0:d+1}),$ (52)
which is the inductive step for eq. 31. Putting eqs. 49 and 52 together shows the result. ∎
We now turn to defining and deriving the form of the measure $\tau_{D}$. We
will define $\tau_{D}$ to be the measure such that for any $S\in\Sigma$, the
probability that GRC terminates up to and including step $D$ and returns a
sample within $S$ is given by $\tau_{D}(S)$. We will also show that $\tau_{D}$ is non-decreasing in $D$.
###### Lemma 1 (Density of samples generated by GRC).
The probability that GRC terminates by step $D\geq 1$ and produces a sample in
$S$ is given by the measure
$\tau_{D}(S)=\sum_{d=1}^{D}\sum_{k=1}^{2^{d-1}}A_{d}(S,\hat{Z}_{d,k}),$ (53)
where $\hat{Z}_{d,k}$ is the ancestor branch of $Z_{d,k}$ as defined in eq. 29. Further, $\tau_{D}$ is non-decreasing in $D$, that is if $n\leq m$, then
$\tau_{n}(S)\leq\tau_{m}(S)$ for all $S\in\Sigma$.
###### Proof.
Let $V_{d}$ be the event that GRC does not terminate up to and including step
$d$ and let $W_{d}(S)$ be the event that GRC terminates at step $d$ and
returns a sample in $S$. Then
$\displaystyle\tau_{D}(S)$ $\displaystyle=\sum_{d=1}^{D}\mathbb{P}(W_{d}(S))$
(54) $\displaystyle=\sum_{d=1}^{D}\mathbb{P}(W_{d}(S),V_{d-1})$ (55)
$\displaystyle=\sum_{d=1}^{D}\sum_{k=1}^{2^{d-1}}\mathbb{P}(W_{d}(S),V_{d-1},S_{0:d-1}=\hat{Z}_{d,k})$
(56)
$\displaystyle=\sum_{d=1}^{D}\sum_{k=1}^{2^{d-1}}\mathbb{P}(W_{d}(S)~{}|~{}V_{d-1},S_{0:d-1}=\hat{Z}_{d,k})~{}\mathbb{P}(V_{d-1},S_{0:d-1}=\hat{Z}_{d,k}).$
(57)
Further, the terms in the summand can be expressed as
$\displaystyle\mathbb{P}(V_{d-1},S_{0:d-1}=\hat{Z}_{d,k})$
$\displaystyle=1-T_{d-1}(\mathcal{X},\hat{Z}_{d,k}),$ (58)
$\displaystyle\mathbb{P}(W_{d}(S)~{}|~{}V_{d-1},S_{0:d-1}=\hat{Z}_{d,k})$
$\displaystyle=\int_{x\in
S}dP(x)\frac{\beta_{d}(x,\hat{Z}_{d,k})}{P(Z_{d,k})}$ (59)
$\displaystyle=\int_{x\in
S}dP(x)\frac{\alpha_{d}(x,\hat{Z}_{d,k})}{1-T_{d-1}(\mathcal{X},\hat{Z}_{d,k})}$
(60)
$\displaystyle=\frac{A_{d}(S,\hat{Z}_{d,k})}{1-T_{d-1}(\mathcal{X},\hat{Z}_{d,k})},$
(61)
and substituting eqs. 58 and 61 into the sum in eq. 57, we obtain eq. 53.
Further, since the inner summand is always non-negative, increasing $D$ adds
more non-negative terms to the sum, so $\tau_{D}$ is also non-decreasing in
$D$. ∎
Now we turn to proving a few results about the measure $\tau_{D}$. Lemma 2
shows that $\tau_{D}\leq Q$ for all $D$. This result implies that
$||Q-\tau_{D}||_{TV}=Q(\mathcal{X})-\tau_{D}(\mathcal{X})$, which we will use
later.
###### Lemma 2 ($Q-\tau_{D}$ is non-negative).
Let $D\in\mathbb{N}^{+}$. Then $Q-\tau_{D}$ is a positive measure, that is
$Q(S)-\tau_{D}(S)\geq 0\text{ for any }S\in\Sigma.$ (62)
###### Proof.
Let $S\in\Sigma$ and write
$\displaystyle Q(S)-\tau_{D}(S)$ $\displaystyle=\sum_{k=1}^{2^{D-1}}Q(S\cap
Z_{D,k})-\tau_{D}(S\cap Z_{D,k})$ (63)
$\displaystyle=\sum_{k=1}^{2^{D-1}}\left[Q(S\cap Z_{D,k})-\sum_{d=1}^{D}\sum_{k^{\prime}=1}^{2^{d-1}}A_{d}(S\cap Z_{D,k},\hat{Z}_{d,k^{\prime}})\right]$ (64)
$\displaystyle=\sum_{k=1}^{2^{D-1}}\left[Q(S\cap Z_{D,k})-\sum_{d=1}^{D}A_{d}(S\cap Z_{D,k},(\hat{Z}_{D,k})_{1:d})\right]$ (65)
$\displaystyle=\sum_{k=1}^{2^{D-1}}\left[Q(S\cap Z_{D,k})-T_{D-1}(S\cap
Z_{D,k},\hat{Z}_{D,k})-A_{D}(S\cap Z_{D,k},\hat{Z}_{D,k})\right]$ (66)
We will show that the summand in eq. 66 is non-negative. From the definition
in eq. 14 we have
$\displaystyle\alpha_{D}(x,\hat{Z}_{D,k})$
$\displaystyle=\min\left\\{\frac{dQ}{dP}(x)-t_{D-1}(x,\hat{Z}_{D,k}),\frac{1-T_{D-1}(\mathcal{X},\hat{Z}_{D,k})}{P({Z_{D,k}})}\right\\}$
(67) $\displaystyle\leq\frac{dQ}{dP}(x)-t_{D-1}(x,\hat{Z}_{D,k})$ (68)
and integrating both sides of eq. 68 over $S\cap Z_{D,k}$, we obtain
$\displaystyle A_{D}(S\cap Z_{D,k},\hat{Z}_{D,k})\leq Q(S\cap
Z_{D,k})-T_{D-1}(S\cap Z_{D,k},\hat{Z}_{D,k})$ (69)
Putting this together with eq. 66 we arrive at
$\displaystyle Q(S)-\tau_{D}(S)\geq 0,$ (70)
which is the required result. ∎
Thus far we have derived the form of $\tau_{D}$, shown that it is non-
decreasing in $D$ and that it is no greater than $Q$. As we are interested in
the limiting behaviour of $\tau_{D}$, we next show that its limit,
$\tau=\lim_{D\to\infty}\tau_{D}$, is also a measure. Further, it also holds
that $\tau\leq Q$.
###### Lemma 3 (Measures $\tau_{D}$ converge to a measure $\tau\leq Q$).
For each $S\in\Sigma$, $\tau_{D}(S)$ converges to a limit. Further, the
function $\tau:\Sigma\to[0,1]$ defined as
$\tau(S)=\lim_{D\to\infty}\tau_{D}(S)$ (71)
is a measure on $(\mathcal{X},\Sigma)$ and $\tau(S)\leq Q(S)$ for all
$S\in\Sigma$.
###### Proof.
First, by lemma 1, $\tau_{D}(S)$ is non-decreasing in $D$, and bounded above
by $Q(S)$ for all $S\in\Sigma$. Therefore, for each $S\in\Sigma$,
$\tau_{D}(S)$ converges to some limit as $D\to\infty$. Define
$\tau:\Sigma\to[0,1]$ as
$\tau(S)=\lim_{D\to\infty}\tau_{D}(S),$ (72)
and note that $\tau$ is a non-negative set function for which
$\tau(\emptyset)=0$. By the Vitali-Hahn-Saks theorem (see Corollary 4, p. 160;
Dunford & Schwartz, 1988), $\tau$ is also countably additive, so it is a
measure. Also, by lemma 2, $\tau_{D}(S)\leq Q(S)$ for all $D\in\mathbb{N}^{+}$
and all $S\in\Sigma$, so $\tau(S)\leq Q(S)$ for all $S\in\Sigma$. ∎
###### Definition 9 ($H_{d,k}$, $H_{d}$ and $H$).
For $d=1,2,\dots$ and $k=1,\dots,2^{d-1}$, we define the sets $H_{d,k}$ as
$H_{d,k}=\left\\{x\in
Z_{d,k}~{}\Big{|}~{}\frac{dQ}{dP}(x)-t_{d-1}(x,\hat{Z}_{d,k})\geq\frac{1-T_{d-1}(\mathcal{X},\hat{Z}_{d,k})}{P(Z_{d,k})}\right\\}.$
(73)
Also, define the sets $H_{d}$ and $H$ as
$\displaystyle H_{d}=\bigcup_{k=1}^{2^{d-1}}H_{d,k}~{}\text{ and
}~{}H=\bigcap_{d=1}^{\infty}H_{d}.$ (74)
###### Lemma 4 ($T_{D}(\cdot,\hat{Z}_{D+1,k})$ and $\tau_{D}$ agree in
$Z_{D+1,k}$).
Let $R\in\Sigma$. If $R\subseteq Z_{D+1,k}$, then
$\tau_{D}(R)=T_{D}(R,\hat{Z}_{D+1,k}).$ (75)
###### Proof.
Suppose $R\subseteq Z_{D+1,k}$. First, we have
$\tau_{D}(R)=\sum_{d=1}^{D}\sum_{k^{\prime}=1}^{2^{d-1}}A_{d}(R,\hat{Z}_{d,k^{\prime}})=\sum_{d=1}^{D}A_{d}(R,(\hat{Z}_{D+1,k})_{1:d}).$
(76)
From the definition of $T_{D}$ in eq. 22, we have
$\displaystyle T_{D}(R,\hat{Z}_{D+1,k})$ $\displaystyle=T_{D-1}(R\cap
Z_{D+1,k},(\hat{Z}_{D+1,k})_{1:D})+A_{D}(R\cap
Z_{D+1,k},(\hat{Z}_{D+1,k})_{1:D})+$ (77)
$\displaystyle\quad\quad+\underbrace{Q(R\cap Z_{D+1,k}^{\prime})}_{=~{}0}$
$\displaystyle=T_{D-1}(R\cap Z_{D+1,k},(\hat{Z}_{D+1,k})_{1:D})+A_{D}(R\cap
Z_{D+1,k},(\hat{Z}_{D+1,k})_{1:D})$ (78)
$\displaystyle=T_{D-1}(R,(\hat{Z}_{D+1,k})_{1:D})+A_{D}(R,(\hat{Z}_{D+1,k})_{1:D})$
(79)
where we have used the assumption that $R\subseteq Z_{D+1,k}$. In a similar
manner, applying eq. 79 recursively $D-1$ more times, we obtain
$T_{D}(R,\hat{Z}_{D+1,k})=\sum_{d=1}^{D}A_{d}(R,(\hat{Z}_{D+1,k})_{1:d})=\tau_{D}(R).$
(80)
which is the required result. ∎
###### Lemma 5 (Equalities with $Q,\tau_{D}$ and $\tau$).
The following two equalities hold
$Q(\mathcal{X}\setminus H_{D})=\tau_{D}(\mathcal{X}\setminus H_{D})~{}\text{
and }~{}Q(\mathcal{X}\setminus H)=\tau(\mathcal{X}\setminus H).$ (81)
###### Proof.
Let $R=Z_{D+1,k}\setminus H_{D,k}$. Then, by similar reasoning used to prove
eq. 77, we have
$\displaystyle
T_{D}(R,\hat{Z}_{D+1,k})=T_{D-1}(R,(\hat{Z}_{D+1,k})_{1:D})+A_{D}(R,(\hat{Z}_{D+1,k})_{1:D})$
(82)
Further, we also have
$\displaystyle A_{D}(R,\hat{Z}_{D,k})$
$\displaystyle=\int_{R}dP(x)~{}\alpha_{D}(x,\hat{Z}_{D,k})$ (83)
$\displaystyle=\int_{R}dP(x)~{}\min\left\\{\frac{dQ}{dP}(x)-t_{D-1}(x,\hat{Z}_{D,k}),\frac{1-T_{D-1}(\mathcal{X},\hat{Z}_{D,k})}{P(Z_{D,k})}\right\\}$
(84)
$\displaystyle=\int_{R}dP(x)~{}\left(\frac{dQ}{dP}(x)-t_{D-1}(x,\hat{Z}_{D,k})\right)$
(85) $\displaystyle=Q(R)-T_{D-1}(R,\hat{Z}_{D,k})$ (86)
where from eq. 84 to eq. 85 we have used the definition of $H_{D,k}$. Then,
combining eqs. 82 and 86 and using lemma 4, we arrive at
$Q(Z_{D+1,k}\setminus H_{D,k})=T_{D}(Z_{D+1,k}\setminus
H_{D,k},\hat{Z}_{D+1,k})=\tau_{D}(Z_{D+1,k}\setminus H_{D,k}).$ (87)
Now, using the equation above, we have that
$\tau_{D}(\mathcal{X}\setminus
H_{D})=\sum_{k=1}^{2^{D}}\tau_{D}(Z_{D+1,k}\setminus
H_{D})=\sum_{k=1}^{2^{D}}Q(Z_{D+1,k}\setminus H_{D})=Q(\mathcal{X}\setminus
H_{D}).$ (88)
Now, using $\tau_{D}\leq\tau\leq Q$ and $\tau_{D}(\mathcal{X}\setminus
H_{D})=Q(\mathcal{X}\setminus H_{D})$, we have that $\tau(\mathcal{X}\setminus
H_{D})=Q(\mathcal{X}\setminus H_{D})$, which is the first part of the result
we wanted to show. Taking limits, we obtain
$Q(\mathcal{X}\setminus H)=\lim_{D\to\infty}Q(\mathcal{X}\setminus
H_{D})=\lim_{D\to\infty}\tau(\mathcal{X}\setminus
H_{D})=\tau(\mathcal{X}\setminus H),$ (89)
which is the second part of the required result. ∎
### B.3 Breaking down the proof of Theorem 1 in five cases
In definition 10 we introduce the quantities
$w_{d}=Q(\mathcal{X})-\tau_{d}(\mathcal{X})$ and
$p_{d}=\mathbb{P}(W_{d}~{}|~{}V_{d-1})$. Then we break down the proof of
theorem 1 in five cases. First, in lemma 7 we show that if $p_{d}\not\to 0$,
then $w_{d}\to 0$. Second, in lemma 8 we show that if $P(H_{d})\to 0$, then
$w_{d}\to 0$. In lemma 9 we show an intermediate result, used in the other
three cases, which we consider in lemmas 10, 11 and 12. Specifically, in these
three cases we show that if $p_{d}\to 0$ and $P(H_{d})\not\to 0$, and
assumption 1, 2 or 3 hold respectively, we have $w_{d}\to 0$. Putting these
results together shows theorem 1.
###### Definition 10 ($p_{d}$, $w_{d,k}$ and $w_{d}$).
Define $p_{d}=\mathbb{P}(W_{d}~{}|~{}V_{d-1})$. Also define $w_{d,k}$ and
$w_{d}$ as
$\displaystyle w_{d,k}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}Q(Z_{d,k})-\tau_{d}(Z_{d,k}),$
(90) $\displaystyle w_{d}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\sum_{k=1}^{2^{d-1}}w_{d,k}.$
(91)
###### Lemma 6 ($w_{d}$ non-increasing in $d$).
The sequence $w_{d}$ is non-negative and non-increasing in $d$.
###### Proof.
Since $\tau_{d}$ is non-decreasing in $d$ (from lemma 1) and
$w_{d}=\sum_{k=1}^{2^{d-1}}Q(Z_{d,k})-\tau_{d}(Z_{d,k})=Q(\mathcal{X})-\tau_{d}(\mathcal{X}),$
(92)
it follows that $w_{d}$ is a non-increasing and non-negative sequence. ∎
###### Lemma 7 (Case 1).
If $p_{d}\not\to 0$, then $w_{d}\to 0$.
###### Proof.
Let $p_{d}=\mathbb{P}(W_{d}~{}|~{}V_{d-1})$ and suppose $p_{d}\not\to 0$.
Then, there exists $\epsilon>0$ such that $p_{d}>\epsilon$ occurs infinitely
often. Therefore, there exists an increasing sequence of integers
$a_{d}\in\mathbb{N}$ such that $p_{a_{d}}>\epsilon$ for all $d\in\mathbb{N}$.
Then
$\displaystyle\tau_{a_{d}}(\mathcal{X})$ $\displaystyle=\mathbb{P}\left(\bigcup_{k=1}^{a_{d}}W_{k}\right)$ (93) $\displaystyle=1-\mathbb{P}\left(V_{a_{d}}\right),$ (94) $\displaystyle=1-\prod_{k=1}^{a_{d}}\mathbb{P}\left(V_{k}~{}|~{}V_{k-1}\right),$ (95) $\displaystyle=1-\prod_{k=1}^{a_{d}}(1-p_{k}),$ (96) $\displaystyle\geq 1-(1-\epsilon)^{d}\to 1\text{ as }d\to\infty.$ (97)
Therefore, $\tau_{d}(\mathcal{X})\to 1$ as $d\to\infty$, which implies that
$||Q-\tau_{d}||_{TV}\to 0$. ∎
###### Lemma 8 (Case 2).
If $P(H_{d})\to 0$, then $w_{d}\to 0$.
###### Proof.
Suppose $P(H_{d})\to 0$. Since $H\subseteq H_{d}$ for every $d$, this implies $P(H)=0$, and since $Q\ll P$, we have $Q(H)=0$. Since $Q\geq\tau\geq 0$ (by lemma 3), we also have $\tau(H)=0$. Therefore
$\displaystyle\lim_{d\to\infty}w_{d}$
$\displaystyle=\lim_{d\to\infty}||Q-\tau_{d}||_{TV}$ (98)
$\displaystyle=Q(\mathcal{X})-\tau(\mathcal{X})$ (99)
$\displaystyle=\underbrace{Q(\mathcal{X}\setminus H)-\tau(\mathcal{X}\setminus
H)}_{=~{}0\text{ from lemma
}\ref{lem:Qtau}}+\underbrace{Q(H)}_{=~{}0}-\underbrace{\tau(H)}_{=~{}0}$ (100)
$\displaystyle=0$ (101)
which is the required result. ∎
###### Lemma 9 (An intermediate result).
If $p_{d}\to 0$ and $w_{d}\not\to 0$ as $d\to\infty$, then
$\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}~{}w_{d,k}\to 0\text{ as
}d\to\infty.$ (102)
###### Proof.
Suppose that $p_{d}=\mathbb{P}(W_{d}~{}|~{}V_{d-1})\to 0$ and $w_{d}\not\to
0$. Then
$\displaystyle\mathbb{P}(W_{d}~{}|~{}V_{d-1})$
$\displaystyle\geq\mathbb{P}(W_{d}(H_{d})~{}|~{}V_{d-1})$ (103)
$\displaystyle=\sum_{k=1}^{2^{d-1}}\mathbb{P}\left(W_{d}(H_{d,k})~{}|~{}V_{d-1}\right)$
(104)
$\displaystyle=\sum_{k=1}^{2^{d-1}}\mathbb{P}\left(W_{d}(H_{d,k}),S_{0:d-1}=\hat{Z}_{d,k}~{}|~{}V_{d-1}\right)$
(105)
$\displaystyle=\sum_{k=1}^{2^{d-1}}\mathbb{P}\left(W_{d}(H_{d,k})~{}|~{}V_{d-1},S_{0:d-1}=\hat{Z}_{d,k}\right)\mathbb{P}\left(S_{0:d-1}=\hat{Z}_{d,k}~{}|~{}V_{d-1}\right)$
(106)
$\displaystyle=\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}\mathbb{P}\left(S_{0:d-1}=\hat{Z}_{d,k}~{}|~{}V_{d-1}\right)$
(107)
$\displaystyle=\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}\frac{w_{d,k}}{w_{d}}\to
0.$ (108)
In addition, if $w_{d}\not\to 0$, then since $0\leq w_{d}\leq 1$ we have
$\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}w_{d,k}\to 0.$ (109)
which is the required result. ∎
###### Lemma 10 (Case 3).
Suppose that $p_{d}\to 0$, $P(H_{d})\not\to 0$ and assumption 1 holds. Then
$w_{d}\to 0$.
###### Proof.
Suppose that $p_{d}\to 0$, $P(H_{d})\not\to 0$. Suppose also that assumption 1
holds, meaning there exists $M\in\mathbb{R}$ such that $dQ/dP(x)<M$ for all
$x\in\mathcal{X}$. Then for any $S\in\Sigma$, we have
$\frac{Q(S)-\tau(S)}{P(S)}\leq\frac{Q(S)}{P(S)}=\frac{\int_{S}\frac{dQ}{dP}dP}{P(S)}\leq
M~{}\frac{\int_{S}dP}{P(S)}=M\implies\frac{Q(S)-\tau(S)}{M}\leq P(S).$ (110)
Further, we have
$\displaystyle\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}~{}w_{d,k}$
$\displaystyle\geq\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}~{}(Q(H_{d,k})-\tau(H_{d,k}))$
(111)
$\displaystyle\geq\frac{1}{M}\sum_{k=1}^{2^{d-1}}\frac{(Q(H_{d,k})-\tau(H_{d,k}))^{2}}{P(Z_{d,k})}$
(112) $\displaystyle\geq\frac{1}{M}\sum_{k=1}^{2^{d-1}}\frac{(Q(H\cap
H_{d,k})-\tau(H\cap H_{d,k}))^{2}}{P(Z_{d,k})}$ (113)
$\displaystyle\geq\frac{1}{M}\sum_{k=1}^{2^{d-1}}\frac{\Delta_{d,k}^{2}}{P(Z_{d,k})}$
(114) $\displaystyle=\frac{1}{M}~{}\Phi_{d}$ (115) $\displaystyle\to 0,$ (116)
where in the second inequality we have used eq. 110 and we have defined
$\displaystyle\Delta_{d,k}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}Q(H\cap
H_{d,k})-\tau(H\cap H_{d,k}),$ (117) $\displaystyle\Phi_{d}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\sum_{k=1}^{2^{d-1}}\frac{\Delta_{d,k}^{2}}{P(Z_{d,k})}.$
(118)
Now note that the sets $H\cap H_{d+1,2k}$ and $H\cap H_{d+1,2k+1}$ partition
the set $H\cap H_{d,k}$. Therefore
$\Delta_{d,k}=\Delta_{d+1,2k}+\Delta_{d+1,2k+1}.$ (119)
By the definition of $\Phi_{d}$ in eq. 118, we can write
$\Phi_{d+1}=\sum_{k=1}^{2^{d}}\frac{\Delta_{d+1,k}^{2}}{P(Z_{d+1,k})}=\sum_{k=1}^{2^{d-1}}\left[\frac{\Delta_{d+1,2k}^{2}}{P(Z_{d+1,2k})}+\frac{\Delta_{d+1,2k+1}^{2}}{P(Z_{d+1,2k+1})}\right],$ (120)
where we have written the sum over $2^{d}$ terms as a sum over $2^{d-1}$ pairs
of terms. We can rewrite the summand on the right hand side as
$\displaystyle\frac{\Delta_{d+1,2k}^{2}}{P(Z_{d+1,2k})}+\frac{\Delta_{d+1,2k+1}^{2}}{P(Z_{d+1,2k+1})}$
$\displaystyle=\frac{\Delta_{d+1,2k}^{2}}{P(Z_{d+1,2k})}+\frac{(\Delta_{d,k}-\Delta_{d+1,2k})^{2}}{P(Z_{d+1,2k+1})}$
(121)
$\displaystyle=\Delta_{d,k}^{2}\left[\frac{\rho^{2}}{P(Z_{d+1,2k})}+\frac{(1-\rho)^{2}}{P(Z_{d+1,2k+1})}\right]$ (122) $\displaystyle=\Delta_{d,k}^{2}~{}g(\rho)$ (123)
where in eq. 121 we have used eq. 119, from eq. 121 to eq. 122 we defined the
quantity $\rho=\Delta_{d+1,2k}/\Delta_{d,k}$, and from eq. 122 to eq. 123 we
have defined $g:[0,1]\to\mathbb{R}$ as
$g(r)\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{r^{2}}{P(Z_{d+1,2k})}+\frac{(1-r)^{2}}{P(Z_{d+1,2k+1})}.$
(124)
The first and second derivatives of $g$ are
$\displaystyle\frac{dg}{dr}$
$\displaystyle=\frac{2r}{P(Z_{d+1,2k})}-\frac{2(1-r)}{P(Z_{d+1,2k+1})},$ (125)
$\displaystyle\frac{d^{2}g}{dr^{2}}$
$\displaystyle=\frac{2}{P(Z_{d+1,2k})}+\frac{2}{P(Z_{d+1,2k+1})}>0,$ (126)
so $g$ has a single stationary point that is a minimum, at $r=r_{\min}$, which
is given by
$\displaystyle r_{\min}:=\frac{P(Z_{d+1,2k})}{P(Z_{d+1,2k})+P(Z_{d+1,2k+1})}.$
(127)
Plugging this back in $g$, we obtain
$\displaystyle
g(r_{\min})=\frac{1}{P(Z_{d+1,2k})+P(Z_{d+1,2k+1})}=\frac{1}{P(Z_{d,k})},$
(128)
which implies that
$\displaystyle\frac{\Delta_{d+1,2k}^{2}}{P(Z_{d+1,2k})}+\frac{\Delta_{d+1,2k+1}^{2}}{P(Z_{d+1,2k+1})}\geq\frac{\Delta_{d,k}^{2}}{P(Z_{d,k})}.$
(129)
Therefore
$\Phi_{d+1}=\sum_{k=1}^{2^{d}}\frac{\Delta_{d+1,k}^{2}}{P(Z_{d+1,k})}\geq\sum_{k=1}^{2^{d-1}}\frac{\Delta_{d,k}^{2}}{P(Z_{d,k})}=\Phi_{d},$
(130)
but since $\Phi_{d}$ is a non-negative, non-decreasing sequence with $\Phi_{d}\to 0$, this is only
possible if $\Phi_{d}=0$ for all $d$, including $d=1$, which would imply that
$\Delta_{1,1}=Q(H\cap H_{1,1})-\tau(H\cap H_{1,1})=Q(H)-\tau(H)=0,$ (131)
which, together with lemma 5, implies that
$Q(\mathcal{X})-\tau(\mathcal{X})=Q(H)-\tau(H)=0,$ (132)
and therefore $w_{d}=||Q-\tau_{d}||_{TV}\to 0$. ∎
###### Lemma 11 (Case 4).
Suppose that $p_{d}\to 0$, $P(H_{d})\not\to 0$ and assumption 2 holds. Then
$w_{d}\to 0$.
###### Proof.
Suppose that $p_{d}\to 0$, $P(H_{d})\not\to 0$. Suppose also that assumption 2
holds, meaning that for each $d$, we have $w_{d,k}>0$ for
exactly one value of $k=k_{d}$, and $w_{d,k}=0$ for all other $k\neq k_{d}$.
In this case, it holds that $H_{d,k}=\emptyset$ for all $k\neq k_{d}$ and
$H_{d}=H_{d,k_{d}}$. Since $P(H_{d})\not\to 0$ and $P(H_{d})$ is a decreasing
sequence, it converges to some positive constant. We also have
$p_{d}\geq\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}~{}w_{d,k}=\frac{P(H_{d,k_{d}})}{P(Z_{d,k_{d}})}~{}w_{d,k_{d}}=\frac{P(H_{d,k_{d}})}{P(Z_{d,k_{d}})}~{}w_{d}\geq
P(H_{d})~{}w_{d}\to 0,$ (133)
which can only hold if $w_{d}\to 0$, arriving at the result. ∎
###### Lemma 12 (Case 5).
Suppose that $p_{d}\to 0$, $P(H_{d})\not\to 0$ and assumption 3 holds. Then
$w_{d}\to 0$.
###### Proof.
Suppose that $p_{d}\to 0$, $P(H_{d})\not\to 0$ and assumption 3 holds. Since
each $x\in\mathcal{X}$ belongs to exactly one $Z_{d,k}$ we can define the
function $B_{d}:\mathcal{X}\to\Sigma$ as
$B_{d}(x)=Z_{d,k}\text{ such that }x\in Z_{d,k}.$ (134)
Using this function we can write
$p_{d}\geq\sum_{k=1}^{2^{d-1}}\frac{P(H_{d,k})}{P(Z_{d,k})}~{}w_{d,k}=\sum_{k=1}^{2^{d-1}}P(H_{d,k})~{}\frac{Q(Z_{d,k})-\tau_{d}(Z_{d,k})}{P(Z_{d,k})}=\int_{H_{d}}dP~{}\frac{Q(B_{d}(x))-\tau_{d}(B_{d}(x))}{P(B_{d}(x))}.$
Now, because the sets $H_{d}$ are measurable, their intersection
$H:=\cap_{d=1}^{\infty}H_{d}$ is also measurable. We can therefore lower bound
the integral above as follows
$\displaystyle\int_{H_{d}}dP~{}\frac{Q(B_{d}(x))-\tau_{d}(B_{d}(x))}{P(B_{d}(x))}$
$\displaystyle\geq\int_{H}dP~{}\frac{Q(B_{d}(x))-\tau_{d}(B_{d}(x))}{P(B_{d}(x))}$
(135)
$\displaystyle\geq\int_{H}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))},$
(136)
where the first inequality holds as the integrand is non-negative and we are
constraining the integration domain to $H\subseteq H_{d}$, and the second
inequality holds because $\tau_{d}(S)\leq\tau(S)$ for any $S\in\Sigma$. Define
$\mathcal{C}$ to be the set of all intersections of nested partitions, with
non-zero mass under $P$
$\mathcal{C}=\left\\{\bigcap_{d=0}^{\infty}Z_{d,k_{d}}:P\left(\bigcap_{d=0}^{\infty}Z_{d,k_{d}}\right)>0,k_{0}=1,k_{d+1}=2k_{d}\text{
or }k_{d+1}=2k_{d}+1\right\\},$ (137)
and note that all of its elements are pairwise disjoint. Each of the elements
of $\mathcal{C}$ is a measurable set because it is a countable intersection of
measurable sets. In addition, $\mathcal{C}$ is a countable set, which can be
shown as follows. Define the sets $\mathcal{C}_{n}$ as
$\mathcal{C}_{n}=\left\\{E\in\mathcal{C}:2^{-n-1}<P(E)\leq
2^{-n}\right\\}\text{ for }n=0,1,\dots$ (138)
and note that their union equals $\mathcal{C}$. Further, note that each
$\mathcal{C}_{n}$ must contain a finite number of elements. That is because if
$\mathcal{C}_{n}$ contained an infinite number of elements, say
$E_{1},E_{2},\dots\in\mathcal{C}_{n}$, then
$\displaystyle P(\mathcal{X})\geq
P\left(\bigcup_{k=1}^{\infty}E_{k}\right)=\sum_{k=1}^{\infty}P(E_{k})>\sum_{k=1}^{\infty}2^{-n-1}\to\infty,$
(139)
where the first equality holds because $P$ is a countably additive measure and
the $E_{k}$ are disjoint, and the second inequality follows because
$E_{k}\in\mathcal{C}_{n}$ so $P(E_{k})>2^{-n-1}$. This results in a
contradiction because $P(\mathcal{X})=1$, so each $\mathcal{C}_{n}$ must
contain a finite number of terms. Therefore, $\mathcal{C}$ is a countable
union of finite sets, which is also countable. This implies that the union of
the elements of $\mathcal{C}$, namely
$C=\cup_{C^{\prime}\in\mathcal{C}}C^{\prime}$ is a countable union of
measurable sets and therefore also measurable. Since $C$ is measurable,
$H\setminus C$ is also measurable and we can rewrite the integral in eq. 135
as
$\displaystyle p_{d}$
$\displaystyle\geq\int_{H}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}$
(140) $\displaystyle=\int_{H\cap
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}+\int_{H\setminus
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}$ (141) $\displaystyle\to
0$ (142)
Since both terms above are non-negative and their sum converges to $0$, the
terms must also individually converge to $0$. Therefore, for the first term,
we can write
$\lim_{d\to\infty}\int_{H\cap
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}=\liminf_{d\to\infty}\int_{H\cap
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}=0.$ (143)
Similarly to $B_{d}$ defined in eq. 134, let us define $B:C\to\Sigma$ as
$B(x)=C^{\prime}\in\mathcal{C}\text{ such that }x\in C^{\prime}.$ (144)
Applying Fatou’s lemma (4.3.3, p. 131; Dudley, 2018) to eq. 143, we obtain
$\displaystyle\liminf_{d\to\infty}\int_{H\cap
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}$
$\displaystyle\geq\int_{H\cap
C}dP~{}\liminf_{d\to\infty}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}$
(145) $\displaystyle=\int_{H\cap C}dP~{}\frac{Q(B(x))-\tau(B(x))}{P(B(x))}$
(146) $\displaystyle=0,$ (147)
where from eq. 145 to eq. 146 we have used the fact that $P(B_{d}(x))>0$
whenever $x\in C$ and also that $B_{1}(x)\supseteq B_{2}(x)\supseteq\dots$.
Now we can re-write this integral as a sum, as follows. Let the elements of
$\mathcal{C}$, which we earlier showed is countable, be $C_{1},C_{2},\dots$
and write
$\displaystyle\int_{H\cap C}dP~{}\frac{Q(B(x))-\tau(B(x))}{P(B(x))}$
$\displaystyle=\sum_{n=1}^{\infty}\int_{H\cap
C_{n}}dP~{}\frac{Q(B(x))-\tau(B(x))}{P(B(x))}$ (148)
$\displaystyle=\sum_{n=1}^{\infty}\frac{P(H\cap
C_{n})}{P(C_{n})}\left(Q(C_{n})-\tau(C_{n})\right)$ (149) $\displaystyle=0.$
(150)
Now, from lemma 5, we have
$\displaystyle\sum_{n=1}^{\infty}\frac{P(H\cap
C_{n})}{P(C_{n})}\left(Q(C_{n})-\tau(C_{n})\right)=\sum_{n=1}^{\infty}\frac{P(H\cap
C_{n})}{P(C_{n})}\left(Q(H\cap C_{n})-\tau(H\cap C_{n})\right)=0,$ (151)
which in turn implies that for each $n=1,2,\dots$, we have either $Q(H\cap
C_{n})-\tau(H\cap C_{n})=0$ or $P(H\cap C_{n})=0$. However, the latter case
also implies $Q(H\cap C_{n})-\tau(H\cap C_{n})=0$ because $Q\ll P$, so
$Q(H\cap C_{n})-\tau(H\cap C_{n})=0$ holds for all $n$. Therefore
$\tau(H\cap C)=\sum_{n=1}^{\infty}\tau(H\cap C_{n})=\sum_{n=1}^{\infty}Q(H\cap
C_{n})=Q(H\cap C).$ (152)
Returning to the second term on the right-hand side of eq. 141, and again
applying Fatou’s lemma,
$\displaystyle\liminf_{d\to\infty}\int_{H\setminus
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}$
$\displaystyle\geq\int_{H\setminus
C}dP~{}\liminf_{d\to\infty}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}.$
(153)
Now, since $Z$ has the nice-shrinking property from assumption 3, we can apply
a standard result from measure theory (Rudin, 1986, Theorem 7.10, p. 140) to
show that the following limit exists and that the following equalities hold:
$\displaystyle\lim_{d\to\infty}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}$
$\displaystyle=\lim_{d\to\infty}\frac{1}{P(B_{d}(x))}\int_{B_{d}}dP\left(\frac{dQ}{dP}(x)-\frac{d\tau}{dP}(x)\right)$
(154) $\displaystyle=\frac{dQ}{dP}(x)-\frac{d\tau}{dP}(x).$ (155)
Inserting eq. 155 into eq. 153, we obtain
$\displaystyle\liminf_{d\to\infty}\int_{H\setminus
C}dP~{}\frac{Q(B_{d}(x))-\tau(B_{d}(x))}{P(B_{d}(x))}\geq\int_{H\setminus
C}dP~{}\left(\frac{dQ}{dP}(x)-\frac{d\tau}{dP}(x)\right)=0,$ (156)
which in turn implies that
$\frac{dQ}{dP}(x)-\frac{d\tau}{dP}(x)=0~{}~{}P\text{-almost-everywhere on
}H\setminus C,$ (157)
or equivalently that $Q(H\setminus C)=\tau(H\setminus C)$. Combining this with
the fact that $Q(\mathcal{X}\setminus H)=\tau(\mathcal{X}\setminus H)$ and our
earlier result that $Q(H\cap C)=\tau(H\cap C)$, we have
$||Q-\tau||_{TV}=Q(\mathcal{X}\setminus H)-\tau(\mathcal{X}\setminus
H)+Q(H\setminus C)-\tau(H\setminus C)+Q(H\cap C)-\tau(H\cap C)=0,$
which is equivalent to $w_{d}=||Q-\tau_{d}||_{TV}\to 0$, which is the required
result. ∎
###### Theorem (Correctness of GRC).
If any one of the assumptions 1, 2 or 3 holds, then
$||Q-\tau_{d}||_{TV}\to 0~{}\text{ as }~{}d\to\infty.$ (158)
###### Proof.
If $p_{d}\not\to 0$, then $w_{d}\to 0$ by lemma 7. If $P(H_{d})\to 0$, then
$w_{d}\to 0$ by lemma 8. Therefore suppose that $p_{d}\to 0$ and
$P(H_{d})\not\to 0$. Then if any one of assumptions 1, 2 or 3 holds, we can
conclude from lemma 10, 11 or 12 respectively, that $||Q-\tau_{d}||_{TV}\to
0$. ∎
## Appendix C Optimality of GRCS
Algorithm 3 GRCS with arithmetic coding for the heap index.
1:Target $Q$, proposal $P$ over $\mathbb{R}$ with unimodal density ratio
$r=dQ/dP$ whose mode is $\mu$.
2:$d\leftarrow 0,T_{0}\leftarrow 0,L_{0}\leftarrow 0$
3:$I_{0}\leftarrow 1,S_{1}\leftarrow\mathbb{R}$
4:while $\mathtt{True}$ do
5: $X_{I_{d}}\sim P|_{S_{d}}/P(S_{d})$
6: $U_{I_{d}}\sim\text{Uniform}(0,1)$
7:
$\beta_{I_{d}}\leftarrow\mathtt{clip}\left(P(S_{d})\cdot\frac{r(X_{I_{d}})-L_{d}}{1-T_{d}},0,1\right)$
$\triangleright$
$\mathtt{clip}(y,a,b)\stackrel{{\scriptstyle\mathit{def}}}{{=}}\max\\{\min\\{y,b\\},a\\}$
8: if $U_{I_{d}}\leq\beta_{I_{d}}$ then
9: return $X_{I_{d}},I_{d}$
10: end if
11: if $X_{I_{d}}>\mu$ then
12: $I_{d+1}\leftarrow 2I_{d}$
13: $S_{d+1}\leftarrow S_{d}\cap(-\infty,X_{I_{d}})$
14: else
15: $I_{d+1}\leftarrow 2I_{d}+1$
16: $S_{d+1}\leftarrow S_{d}\cap(X_{I_{d}},\infty)$
17: end if
18: $L_{d+1}\leftarrow L_{d}+(1-T_{d})/P(S_{d})$
19: $T_{d+1}\leftarrow\mathbb{P}_{Y\sim Q}[r(Y)\geq
L_{d+1}]-L_{d+1}\cdot\mathbb{P}_{Y\sim P}[r(Y)\geq L_{d+1}]$
20: $d\leftarrow d+1$
21:end while
In this section, we prove Theorems 2 and 3. We are only interested in
continuous distributions over $\mathbb{R}$ with unimodal density ratio $dQ/dP$
for these theorems. Hence, we begin by specializing Algorithm 2 to this
setting, shown in Algorithm 3. For simplicity, we also dispense with the
abstraction of partitioning processes and show the bound update process
directly. Furthermore, we also provide an explicit form for the AcceptProb and
RuledOutMass functions.
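To make this concrete, the following is a minimal Python sketch of Algorithm 3, assuming a standard normal proposal $P=\mathcal{N}(0,1)$ and target $Q=\mathcal{N}(m,s^{2})$ with $s<1$, so that $r=dQ/dP$ is unimodal; the parameter values and all helper names are our own illustrative choices, not part of the paper.

```python
# Minimal sketch of Algorithm 3 for Gaussian Q and P (our own illustration).
# Assumes P = N(0, 1) and Q = N(m, s^2) with s < 1, so r = dQ/dP is unimodal.
import numpy as np
from scipy.stats import norm

m, s = 1.0, 0.5                      # illustrative target parameters
mode = m / (1.0 - s**2)              # mode of r: solves (log r)'(y) = 0

def log_r(y):                        # log density ratio log dQ/dP(y)
    return norm.logpdf(y, m, s) - norm.logpdf(y)

def superlevel_set(L):
    """Interval {y : r(y) >= L}; log r(y) is a concave quadratic in y."""
    if L <= 0.0:
        return (-np.inf, np.inf)
    a = 0.5 * (1.0 - 1.0 / s**2)     # y^2 coefficient (negative since s < 1)
    b = m / s**2
    c = -m**2 / (2.0 * s**2) - np.log(s)
    disc = b**2 - 4.0 * a * (c - np.log(L))
    if disc <= 0.0:
        return (mode, mode)          # numerically empty
    root = np.sqrt(disc)
    return ((-b + root) / (2.0 * a), (-b - root) / (2.0 * a))

def grcs(rng):
    T, L, I = 0.0, 0.0, 1            # T_0, L_0 and heap index I_0
    lo, hi = -np.inf, np.inf         # bounds S_0 = R
    while True:
        Flo, Fhi = norm.cdf(lo), norm.cdf(hi)
        PS = Fhi - Flo                                 # P(S_d)
        X = norm.ppf(Flo + rng.random() * PS)          # X ~ P restricted to S_d
        beta = np.clip(PS * (np.exp(log_r(X)) - L) / (1.0 - T), 0.0, 1.0)
        if rng.random() <= beta:                       # accept test (line 8)
            return X, I
        if X > mode:                                   # recurse on S_d and (-inf, X)
            I, hi = 2 * I, X
        else:                                          # recurse on S_d and (X, inf)
            I, lo = 2 * I + 1, X
        L += (1.0 - T) / PS                            # line 18
        a_, b_ = superlevel_set(L)                     # line 19:
        T = (norm.cdf(b_, m, s) - norm.cdf(a_, m, s)   # T = Q(r >= L)
             - L * (norm.cdf(b_) - norm.cdf(a_)))      #   - L * P(r >= L)

rng = np.random.default_rng(0)
xs = np.array([grcs(rng)[0] for _ in range(2000)])
print(xs.mean(), xs.std())           # should be close to (m, s) = (1.0, 0.5)
```

The returned heap index $I_{d}$ is what the analysis below encodes, via the search path $S_{0:D}$ and its length $D$.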
Before we move on to proving our proposed theorems, we first prove two useful
results. First, we bound the negative log $P$-mass of the bounds with which
Algorithm 3 terminates.
###### Lemma 13.
Let $Q$ and $P$ be distributions over $\mathbb{R}$ with unimodal density ratio
$r=dQ/dP$, given to Algorithm 3 as the target and proposal distribution as
input, respectively. Let $d\geq 0$ and let
$X_{1:d}\stackrel{{\scriptstyle\mathit{def}}}{{=}}X_{1},\ldots,X_{d}$ denote
the samples simulated by Algorithm 3 up to step $d+1$, where for $d=0$ we
define the empty list as $X_{1:0}=\emptyset$. Let $S_{d}$ denote the bounds at
step $d+1$. Then,
$\displaystyle-\sum_{j=0}^{d}A_{j+1}(\mathbb{R},S_{0:d})\cdot\log P(S_{j})\leq
D_{\mathrm{KL}}[Q\|P]+\log e.$ (159)
###### Proof.
For brevity, we will write $A_{d}=A_{d}(\mathbb{R},S_{0:d})$ and
$T_{d}=T_{d}(\mathbb{R},S_{0:d})$. Furthermore, as in Algorithm 3, we define
$\displaystyle
L_{d}\stackrel{{\scriptstyle\mathit{def}}}{{=}}\sum_{j=0}^{d-1}\frac{1-T_{j}}{P(S_{j})}\quad\text{with}\quad
L_{0}=0.$ (160)
Note that $X_{1:d}$ is well-defined for all $d\geq 0$ since we could remove
the return statement from the algorithm to simulate the bounds it would
produce up to an arbitrary step $d$. By Proposition 3, we have
$\mathbb{P}[D=d\mid X_{1:d}]=A_{d+1}(\mathbb{R},S_{0:d})$. Now fix $d\geq 0$
and bounds $S_{0:d}$, and let $x\in\mathbb{R}$ be such that
$\alpha_{d+1}(x)>0$, which holds whenever $r(x)\geq L_{d}$. From this, for
$d\geq 1$ we find
$\displaystyle r(x)$
$\displaystyle\geq\sum_{j=0}^{d-1}\frac{1-T_{j}}{P(S_{j})}$ (161)
$\displaystyle\geq\frac{1-T_{d-1}}{P(S_{d-1})},$ (162)
where the second inequality follows from the fact that the
$(1-T_{j})/P(S_{j})$ terms are all positive. Taking logs, we get
$\displaystyle\log r(x)-\log(1-T_{d-1})\geq-\log P(S_{d-1}).$ (163)
Now, we consider the expectation of interest:
$\displaystyle\sum_{j=0}^{d}-A_{j+1}\cdot\log P(S_{j})$
$\displaystyle=-\sum_{j=0}^{d}\int_{\mathbb{R}}\alpha_{j+1}(x)\log P(S_{j})\,dx$ (164)
$\displaystyle\stackrel{{\scriptstyle\text{eq. 163}}}{{\leq}}\sum_{j=0}^{d}\int_{\mathbb{R}}\alpha_{j+1}(x)(\log(r(x))-\log(1-T_{j}))\,dx$ (165)
$\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}\int_{\mathbb{R}}\sum_{j=0}^{\infty}\alpha_{j+1}(x)\log r(x)\,dx+\sum_{j=0}^{\infty}A_{j+1}\log\frac{1}{1-T_{j}}$ (166)
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\int_{\mathbb{R}}q(x)\log r(x)\,dx+\sum_{j=0}^{\infty}(T_{j+1}-T_{j})\log\frac{1}{1-T_{j}}$ (167)
$\displaystyle=D_{\mathrm{KL}}[Q\|P]+\sum_{j=0}^{\infty}(T_{j+1}-T_{j})\log\frac{1}{1-T_{j}}$ (168)
$\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}D_{\mathrm{KL}}[Q\|P]+\int_{0}^{1}\log\frac{1}{1-t}\,dt$ (169)
$\displaystyle=D_{\mathrm{KL}}[Q\|P]+\log e.$ (170)
Inequality (a) holds because all terms are positive. This is guaranteed by the
fact that for $d\geq 1$, we have $L_{d}\geq 1$, hence $0\leq\log
L_{d}\leq\log r(x)$ whenever Equation 163 holds. Equality (b) follows by the
correctness of GRC (Theorem 1), which implies that for all $x\in\mathbb{R}$ we
have $\sum_{j=0}^{\infty}\alpha_{j+1}(x)=q(x)$, and inequality (c) follows
from the facts that $0\leq T_{d}\leq 1$ for all $d$ and that the summand in
the second term forms a lower-Riemann sum approximation to $-\log(1-t)$. ∎
Second, we consider the contraction rate of the bounds $S_{0:d}$ maintained by
Algorithm 3.
###### Lemma 14.
Let $Q$ and $P$ be distributions over $\mathbb{R}$ with unimodal density ratio
$r=dQ/dP$, given to Algorithm 3 as the target and proposal distribution as
input, respectively. Assume $P$ has CDF $F_{P}$ and the mode of $r$ is at
$\mu$. Fix $d\geq 0$ and let $X_{1:d}$ be the samples considered by Algorithm
3 and $S_{d}$ the bounds at step $d+1$. Then,
$\displaystyle\mathbb{E}_{X_{1:d}}[P(S_{d})]\leq\left(\frac{3}{4}\right)^{d}$
(171)
###### Proof.
We prove the claim by induction. For $d=0$ the claim holds trivially, since
$S_{0}=\mathbb{R}$, hence $P(S_{0})=1$. Assume now that the claim holds for
$d=k-1$, and we prove the statement for $d=k$. By the law of iterated
expectations, we have
$\displaystyle\mathbb{E}_{X_{1:k}}[P(S_{k})]=\mathbb{E}_{X_{1:k-1}}[\mathbb{E}_{X_{k}\mid
X_{1:k-1}}[P(S_{k})]].$ (172)
Let us now examine the inner expectation. First, assume that $S_{k-1}=(a,b)$
for some real numbers $a<b$ and define $A=F_{P}(a),B=F_{P}(b),M=F_{P}(\mu)$
and $U=F_{P}(X_{k})$. Since $X_{k}\mid X_{1:k-1}\sim P|_{S_{k-1}}$, by the
probability integral transform we have $U\sim\mathrm{Unif}(A,B)$, where
$\mathrm{Unif}(A,B)$ denotes the uniform distribution on the interval $(A,B)$.
The two possible intervals from which Algorithm 3 will choose are $(a,X_{k})$
and $(X_{k},b)$, whose measures are $P((a,X_{k}))=F_{P}(X_{k})-F_{P}(a)=U-A$
and similarly $P((X_{k},b))=B-U$. Then, $P(S_{k})\leq\max\\{U-A,B-U\\}$, from
which we obtain the bound
$\displaystyle\mathbb{E}_{X_{k}\mid
X_{1:k-1}}[P(S_{k})]\leq\mathbb{E}_{U}[\max\\{U-A,B-U\\}]=\frac{3}{4}(B-A)=\frac{3}{4}P(S_{k-1}),$
(173)
where the first equality uses that $W=(U-A)/(B-A)\sim\mathrm{Unif}(0,1)$ and
$\mathbb{E}[\max\\{W,1-W\\}]=\int_{0}^{1}\max\\{w,1-w\\}\,dw=3/4$.
Plugging this into Equation 172, we get
$\displaystyle\mathbb{E}_{X_{1:k}}[P(S_{k})]$
$\displaystyle\leq\frac{3}{4}\mathbb{E}_{X_{1:k-1}}\left[P(S_{k-1})\right]$
(174) $\displaystyle\leq\frac{3}{4}\cdot\left(\frac{3}{4}\right)^{k-1},$ (175)
where the second inequality follows from the inductive hypothesis, which
finishes the proof. ∎
Proof of Theorem 3: We prove our bound on the runtime of Algorithm 3
first, as it will be necessary for the proof of the bound on the codelength.
First, let $D$ be the number of steps Algorithm 3 takes before it terminates
minus $1$. Then, we will show that
$\displaystyle\mathbb{E}[D]\leq\frac{1}{\log(4/3)}D_{\mathrm{KL}}[Q\|P]+4$
(176)
We tackle this directly:
$\displaystyle\mathbb{E}_{D}[D]$
$\displaystyle=\lim_{d\to\infty}\mathbb{E}_{X_{1:d}}\left[\sum_{j=1}^{d}j\cdot A_{j+1}\right]$ (177)
$\displaystyle=\lim_{d\to\infty}\mathbb{E}_{X_{1:d}}\left[\sum_{j=1}^{d}\frac{-j}{\log P(S_{j})}\cdot-A_{j+1}\log P(S_{j})\right]$ (178)
$\displaystyle\leq\lim_{d\to\infty}\mathbb{E}_{X_{1:d}}\left[\max_{j\in[1:d]}\left\\{\frac{-j}{\log P(S_{j})}\right\\}\cdot\sum_{j=1}^{d}-A_{j+1}\log P(S_{j})\right]$ (179)
$\displaystyle\stackrel{{\scriptstyle\text{Lemma 13}}}{{\leq}}\left(D_{\mathrm{KL}}[Q\|P]+\log e\right)\cdot\lim_{d\to\infty}\mathbb{E}_{X_{1:d}}\left[\max_{j\in[1:d]}\left\\{\frac{-j}{\log P(S_{j})}\right\\}\right].$ (180)
To finish the proof, we will now bound the term involving the limit. To do
this, note that for any finite collection of reals $F$, we have $\max_{x\in
F}\\{x\\}=-\min_{x\in F}\\{-x\\}$, and that for a finite collection of real-
valued random variables $\hat{F}$ we have
$\mathbb{E}[\min_{{\mathbf{x}}\in\hat{F}}\\{{\mathbf{x}}\\}]\leq\min_{{\mathbf{x}}\in\hat{F}}\\{\mathbb{E}[{\mathbf{x}}]\\}$.
Now, we have
$\displaystyle\lim_{d\to\infty}\mathbb{E}_{X_{1:d}}\left[\max_{j\in[1:d]}\left\\{\frac{-j}{\log P(S_{j})}\right\\}\right]$
$\displaystyle=\lim_{d\to\infty}-\mathbb{E}_{X_{1:d}}\left[\min_{j\in[1:d]}\left\\{\frac{j}{\log P(S_{j})}\right\\}\right]$ (181)
$\displaystyle\leq\lim_{d\to\infty}\left(-\min_{j\in[1:d]}\left\\{\mathbb{E}_{X_{1:j}}\left[\frac{j}{\log P(S_{j})}\right]\right\\}\right)$ (182)
$\displaystyle\stackrel{{\scriptstyle\text{(a)}}}{{\leq}}\lim_{d\to\infty}\left(-\min_{j\in[1:d]}\left\\{\frac{j}{\log\mathbb{E}_{X_{1:j}}\left[P(S_{j})\right]}\right\\}\right)$ (183)
$\displaystyle\stackrel{{\scriptstyle\text{Lemma 14}}}{{\leq}}\lim_{d\to\infty}\left(-\min_{j\in[1:d]}\left\\{\frac{-j}{j\log(4/3)}\right\\}\right)$ (184)
$\displaystyle=\lim_{d\to\infty}\left(\max_{j\in[1:d]}\left\\{\frac{1}{\log(4/3)}\right\\}\right)$ (185)
$\displaystyle=\frac{1}{\log(4/3)}.$ (186)
Inequality (a) follows from Jensen’s inequality. Finally, plugging this back
into the previous equation, we get
$\displaystyle\mathbb{E}[D]\leq\frac{D_{\mathrm{KL}}[Q\|P]+\log e}{\log(4/3)}\leq\frac{D_{\mathrm{KL}}[Q\|P]}{\log(4/3)}+4.$ (187)
Proof of Theorem 2: For the codelength result, we need to encode the length of
the search path and the search path itself. More formally, since the returned
sample $X$ is a function of the partition process $Z$, the search path length
$D$ and search path $S_{0:D}$, we have
$\displaystyle\mathbb{H}[X\mid
Z]\leq\mathbb{H}[D,S_{0:D}]=\mathbb{H}[D]+\mathbb{H}[S_{0:D}\mid D].$ (188)
We can encode $D$ using Elias $\gamma$-coding, from which we get
$\displaystyle\mathbb{H}[D]$ $\displaystyle\leq\mathbb{E}_{D}[2\log(D+1)]+1$
(189) $\displaystyle\leq 2\log(\mathbb{E}[D]+1)+1$ (190) $\displaystyle\leq
2\log\left(\frac{D_{\mathrm{KL}}[Q\|P]+\log e}{\log(4/3)}+1\right)+1$ (191)
$\displaystyle\leq 2\log\left(D_{\mathrm{KL}}[Q\|P]+\log
e+\log(4/3)\right)+1-2\log\left(\log(4/3)\right)$ (192) $\displaystyle\leq
2\log\left(D_{\mathrm{KL}}[Q\|P]+1\right)+1-2\log\left(\log(4/3)\right)+2\log(\log
e+\log(4/3))$ (193) $\displaystyle\leq
2\log\left(D_{\mathrm{KL}}[Q\|P]+1\right)+6.$ (194)
Given the search path length $D$, we can use arithmetic coding (AC) to encode
the sequence of bounds $S_{0:D}$ using $-\log P(S_{D})+2$ bits (assuming
infinite precision AC). Hence, we have that the average coding cost is upper
bounded by
$\displaystyle\mathbb{H}[S_{0:D}\mid D]\leq\mathbb{E}_{D}[-\log P(S_{D})]+2\stackrel{{\scriptstyle\text{Lemma 13}}}{{\leq}}D_{\mathrm{KL}}[Q\|P]+5.$
(195)
Putting everything together, we find
$\displaystyle\mathbb{H}[D,S_{0:D}]\leq
D_{\mathrm{KL}}[Q\|P]+2\log(D_{\mathrm{KL}}[Q\|P]+1)+11,$ (196)
as required.
## Appendix D Additional experiments with depth-limited GRC
In this section, we show the results of experiments comparing the
approximation bias of depth-limited GRCD to that of depth-limited AD∗,
following the setup of Flamich et al. (2022). Limiting the depth of each
algorithm introduces bias in the resulting samples, as these are no longer
guaranteed to be distributed according to the target distribution $Q$, but
rather according to a different distribution $\smash{\hat{Q}}$. Figure 5
quantifies the effect of limiting the depth on the bias of the resulting
samples.
In our experiment we take $Q$ and $P$ to be Gaussian and we fix
$D_{\mathrm{KL}}[Q\|P]=3$ (bits), and consider three different settings of
$D_{\infty}[Q\|P]=5,7$ or $9$ (bits), corresponding to each of the panes in
fig. 5. For each such setting, we set the depth limit of each of the two
algorithms to $D_{\max}=D_{\mathrm{KL}}[Q\|P]+d$ bits, and refer to $d$ as the
number of additional bits. We then vary the number of additional bits allowed
for each algorithm, and estimate the bias of the resulting samples by
evaluating the KL divergence between the empirical and the exact target
distribution, that is $\smash{D_{\mathrm{KL}}[\hat{Q}\|Q]}$. To estimate this
bias, we follow the method of Pérez-Cruz (2008). For each datapoint shown we
draw 200 samples $X\sim\hat{Q}$ and use these to estimate
$\smash{D_{\mathrm{KL}}[\hat{Q}\|Q]}$. We then repeat this for 10 different
random seeds, reporting the mean bias and standard error in the bias, across
these 10 seeds.
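To make the bias-estimation step concrete, below is a sketch of a 1-nearest-neighbour KL estimator from the family studied by Pérez-Cruz (2008); we show the simple one-dimensional, 1-NN variant, and the exact estimator used for Figure 5 may differ. The Gaussian parameters in the usage example are placeholders of our own.

```python
# Sketch of a sample-based estimate of D_KL[Qhat || Q] via 1-nearest
# neighbours (Perez-Cruz family); 1-D variant, our own illustration.
import numpy as np

def knn_kl_estimate(x, y):
    """Estimate D_KL[P || Q] in nats from x ~ P and y ~ Q (1-D, 1-NN)."""
    x, ys = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    n, m = len(x), len(ys)
    # rho_i: distance from x_i to its nearest other point of x
    rho = np.minimum(np.diff(x, prepend=-np.inf), np.diff(x, append=np.inf))
    # nu_i: distance from x_i to its nearest point of y
    pos = np.searchsorted(ys, x)
    left = np.where(pos > 0, x - ys[np.maximum(pos - 1, 0)], np.inf)
    right = np.where(pos < m, ys[np.minimum(pos, m - 1)] - x, np.inf)
    nu = np.minimum(left, right)
    return float(np.mean(np.log(nu / rho)) + np.log(m / (n - 1)))

rng = np.random.default_rng(0)
exact = rng.normal(1.0, 0.5, size=5000)      # draws from the exact target Q
approx = rng.normal(1.05, 0.55, size=200)    # stand-in for depth-limited draws
print(knn_kl_estimate(approx, exact))        # noisy estimate of the bias
```

Note that this estimator is noisy and can even go negative for small sample sizes, which is why the figure averages over several seeds.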
Generally, we find that the bias of GRCD is higher than that of AD∗. This is
likely because AD∗ is implicitly performing importance sampling over a set of
$2^{D_{\max}}-1$ samples, and returning the one with the highest importance
weight. By contrast, GRCD is running rejection sampling up to a maximum of
$D_{\max}$ steps, returning its last sample if it has not terminated by its
$D_{\max}^{\text{th}}$ step. While it might be possible to improve the
bias of depth-limited GRCD by considering an alternative way of choosing which
sample to return, using for example an importance weighting criterion, we do
not examine this here and leave this possibility for future work.
Figure 5: Bias of depth-limited AD∗ and GRCD, as a function of the additional
bit budget given to each algorithm. See text above for discussion.
# Optimal Moments on Redundancies in Noisy Parallel Computing Setup
Sahasrajit Sarmasarkar<EMAIL_ADDRESS>(Stanford University, USA) and Harish
Pillai<EMAIL_ADDRESS>(Indian Institute of Technology Bombay, India)
###### Abstract.
We consider the problem of job assignment where a master server aims to
compute some tasks and is provided a few child servers to compute them under a
uniform straggling pattern where each server is equally likely to straggle. We
distribute tasks to the servers so that the master is able to receive most of
the tasks even if a significant number of child servers fail to communicate.
We first show that all balanced assignment schemes have the same expectation
on the number of distinct tasks received and then study the variance. The
variance or the second moment is a useful metric to study as there could be a
high variation in the number of distinct tasks received. We show that
constructions using a generalization of “Balanced Incomplete Block Designs”
(Bose, 1939; Sprott, 1955) minimize the variance, and that constructions based
on repetition coding schemes attain the largest variance. Both minimum-variance
and maximum-variance attaining designs have their own use cases depending on whether the
master aims for a heavy-tailed or light-tailed distribution on the number of
distinct jobs. We further show the equivalence between job and server-based
assignment schemes when the number of jobs and child servers are equal.
Block Designs, Repetition Codes, Coding Theory, Straggler Mitigation and
Parallel Computing
## 1\. Introduction
A distributed computing framework where the computation is done on multiple
machines has been widely used for large-scale computations in (Azu, [n. d.];
AWS, [n. d.]; Goo, [n. d.]). This framework allows us to utilize the
computation resources and memory of multiple machines often referred to as
workers. Under a common and simple implementation of distributed computing,
the master server divides the computation tasks to multiple child servers
(workers). After each worker finishes its computation, it shares its results
with the master server. The master aggregates the results received from
different workers to finish its task.
However, in real noisy communication frameworks, a subset of servers (workers)
can be arbitrarily slow (often referred to as the stragglers) compared to the
rest of the workers. Lately, there has been work to mitigate the issue of
stragglers by introducing redundancies (Karakus et al., 2019; Aktas and
Soljanin, 2019; Joshi et al., 2017). Coding theoretic techniques have often
been used to introduce redundancies for straggler mitigation described
extensively in (Li and Avestimehr, 2020) and used to split the data and assign
different data parts to different servers. Distributed coding framework has
been used in gradient computation (Tandon et al., 2017), matrix-matrix
multiplication (Lee et al., 2017; Yu et al., 2017), polynomial computation (Yu
et al., 2019) and convolution coding (Dutta et al., 2017) using techniques
from coding theory, namely MDS codes (Lee et al., 2017; Ferdinand and Draper,
2018), LDPC codes (Maity et al., 2019) and rateless codes (Mallick et al.,
2020). The
master server aggregates the computations transmitted by the non-straggling
workers to compute the desired result.
However, as happens typically in most modern cloud computing systems like
Amazon EC2, some servers can operate with a significantly high throughput
(Ananthanarayanan et al., 2013a; Zhang et al., 2013; Zhao et al., 2014).
Statistical knowledge of communication and computation latency of each server
can be used to design better assignment schemes and allocate the tasks more
efficiently, as studied in (Wang et al., 2019a; Sun et al., 2019; Yang et al.,
2019).
On a side note, there may be several scenarios where it is not possible to
split a computing task into multiple sub-parts, and job cloning is often used
for straggler mitigation in such scenarios, as studied in (Chen et al., 2014;
Ananthanarayanan et al., 2013b; Joshi et al., 2015, 2017; Szajda et al.,
2005). This is also popular in cloud computing (Joshi et al., 2017; Wang et
al., 2015, 2014) where one task is assigned to multiple servers to combat the
stragglers to obtain a low compute time.
In this paper, we broadly study this problem of assigning multiple cloned
jobs to multiple servers, treating each server as having identical
communication latency, where each server has a significant probability of
straggling. Such assignment schemes may be particularly useful in frameworks
where there is high noise in the communication channel between the child
servers and the master server.
## 2\. Related work
Block designs have been widely used in experiment design (Bose and Nair, 1939;
Addelman, 1969; Shieh and Jan, 2004) where experiment designs are grouped into
blocks and random treatments are applied to each block. Recently block designs
and their variants have been used to construct LDPC codes (Ammar et al., 2004),
gradient coding (Kadhe et al., 2019; Sakorikar and Wang, 2021) and error
correcting codes (Smith, 1968). The most common amongst these are 2-designs,
often called balanced incomplete block designs (BIBDs), which are designs
where every pair of points occurs together in the same number of blocks (Bose
and Nair, 1939; Colbourn and Dinitz, 2006). It has been shown that 2-designs
(if they exist) uniquely attain A-, D- and E-optimality for experiment design
(Kiefer, 1975; Kshirsagar, 1958), which was further generalized in (Yeh,
1986). However, our framework is different: we study the variance of the
number of distinct tasks received at the master and show that repetition
coding and a generalization of BIBDs can be utilized to attain the largest and
the least variance respectively.
In another direction, assignment policies of tasks to different servers under
a distributed server system have been studied; see (Semchedine et al., 2011)
for a survey. Typically, tasks arrive at a distributed server system
stochastically in real time, and the system distributes the tasks to various
servers to minimise the response time, as studied in
(Colajanni et al., 1998; Harchol-Balter, 2000; Harchol-Balter et al., 1999).
Recent observations show that the distribution of tasks follows a heavy-tailed
distribution (Crovella and Bestavros, 1997; Arlitt and Williamson, 1996;
Crovella et al., 1998; Williams et al., 2005), and designing assignment
policies suited for load balancing across servers becomes difficult. In many
cases, it may not be possible to recover the entire result but only partially
recover the result as studied in coding theory (Babu and Kumar, 2015; Korhonen
and Frossard, 2009). A similar setting has been studied in distributed
computing, where the aim is not to recover the exact result but only an
approximation of it. This has been studied for matrix computation
(Ozfatura et al., 2021) and for gradient coding (Sarmasarkar et al., 2022;
Wang et al., 2019b). Our problem follows a similar setup where the master only
aims to recover a high fraction of the jobs as it may be difficult for the
server to recover all tasks in a noisy environment where a substantial
fraction of servers may straggle.
## 3\. Our contributions
In this paper, we study a setup where a master aims to compute $n$ jobs and
has $c$ identical servers to do the computations. In our model, the arrival
and assignment of tasks are not in real time; instead, the set of tasks
(jobs) to be computed and the set of servers are given a priori. Jobs are
appropriately replicated, which can be viewed as a way to mitigate the issue of
slow (straggling) workers. We study a homogeneous setup treating all the
servers as having identical computing and stochastic properties, that is,
balanced schemes with each server being assigned $k$ jobs and each job being
assigned to $r$ servers. We further assume that each server is equally likely
to straggle. We consider a setup where a non-straggling worker can
successfully transmit all tasks to the master as studied in (Tandon et al.,
2017; Joshi et al., 2017; Ozfatura et al., 2019). We aim to design assignment
schemes that maximise the number of distinct computed jobs
that the master receives. Towards this aim, we study the mean and variance of the
number of distinct jobs received in Section 5.
Our contributions can be described as follows.
a) We first show that when any set of $x$ servers is equally likely to
straggle, then for every balanced assignment, the expected number of completed
jobs that the master receives is the same (Theorem 1) and study the variance
of the number of jobs received within the class of balanced assignment
schemes.
b) Typically, assignment schemes with the largest variance may be useful for
systems where we aim to increase the likelihood of the master receiving a
large number (or all) of jobs (heavier mass at tails), and schemes with the
least variance may be useful for systems where one aims to reduce the
likelihood of the master receiving a small number of jobs (lighter mass at the
tails). This follows since every balanced assignment scheme has the same
expectation on the jobs received.
c) We show that certain special balanced assignments (called proximally
compact and stretched compact designs) when they exist, are guaranteed to
attain the least variance and largest variance respectively (Theorem 5 and
Theorem 8 in Section 6).
d) We show how our results generalise to the case where $x$ is sampled from a
distribution. This would imply that results on least variance and largest
variance would continue to hold when each worker is independently and equally
likely to straggle with probability $p$ (Theorems 1, 2 and 3 in Section 7).
e) Finally, we show that when the number of jobs equals the number of servers,
proximally compact job assignments and server assignments (replacing the roles
of jobs and servers) are identical and both of them attain the least variance
in Theorem 2 in Appendix A.
## 4\. Preliminaries and Notation
We consider a setup where a master has $n$ jobs to compute and has $c$ servers
to do the computations. We further assume a noisy scenario where only a
fraction of these servers are able to communicate back to the master.
Therefore, for increasing reliability, redundancies are introduced in the
setup by assigning each job to multiple servers. In this paper, we examine the
expectation and variance on the number of distinct completed jobs that the
master receives from the servers that were able to communicate, for various
assignment schemes. In particular, we study assignment schemes that achieve
certain desired variances on the number of received distinct jobs.
Given a set of $n$ jobs and $c$ servers, we study various assignments of jobs
to different servers by the master server. More formally, let us denote the
$n$ jobs by $\mathcal{A}=\\{a_{1},\ldots,a_{n}\\}$ and $c$ servers by
$\mathcal{S}=\\{s_{1},s_{2},\ldots,s_{c}\\}$. Any assignment $D$ of jobs in
$\mathcal{A}$ to servers in $\mathcal{S}$ can be equivalently represented by
a bipartite graph $\mathcal{G}_{D}$ whose nodes denote the jobs and the
servers. An edge exists between the nodes representing job $a_{i}$
and server $s_{j}$ if job $a_{i}$ is assigned to server $s_{j}$. Alternatively,
for a job assignment ${D}$, we can define an assignment matrix
$A_{{D}}\in\\{0,1\\}^{n\times c}$ as given below.
###### Definition 1.
(Construction of $A_{D}$): Given an assignment of jobs in $\mathcal{A}$ to
servers in $\mathcal{S}$, we define matrix $A_{D}\in\\{0,1\\}^{n\times c}$ as
follows:
$\displaystyle{}A_{{D}}[i,j]$ $\displaystyle=1\text{ if job $a_{i}$ is
assigned to server $s_{j}$}$ (1) $\displaystyle=0\text{ otherwise}$
Observe that the matrix $A_{D}$ represents the adjacency matrix for the
bipartite graph $\mathcal{G}_{D}$.
We specifically study balanced assignment schemes where each server is
assigned the same number of jobs and each job is assigned to the same number
of servers. More formally, we define this as follows.
###### Definition 2.
(Balanced $(n,k,r,c)$ assignment): Given a set of $n$ jobs and $c$ servers, we
call an assignment scheme of jobs a balanced $(n,k,r,c)$ assignment if the
following conditions are satisfied.
* •
Each server is assigned precisely $k$ distinct jobs to compute.
* •
Each job is assigned to precisely $r$ distinct servers.
Note that this assignment scheme ensures that $n\times r=k\times c$.
We can equivalently define it in terms of matrix $A_{D}$ as follows.
###### Definition.
(Balanced $(n,k,r,c)$ assignment in terms of $A_{D}$): Given a set of $n$ jobs
and $c$ servers, we call the assignment scheme $D$ of jobs to servers a
balanced $(n,k,r,c)$ assignment if each row of $A_{D}$ sums up to $r$ and each
column sums up to $k$.
Let us look at an example of a balanced $(9,3,2,6)$ assignment scheme.
###### Example 1.
We describe a balanced assignment scheme with 9 jobs
$\\{a_{1},a_{2},\ldots,a_{9}\\}$ and 6 servers
$\\{s_{1},s_{2},\ldots,s_{6}\\}$ in Table 1. Note that each job is assigned to
precisely $2$ servers and each server has exactly 3 jobs to compute. The
assignment scheme is motivated by a cyclic assignment scheme.
Jobs $\setminus$ Servers | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$
---|---|---|---|---|---|---
$a_{1}$ | 1 | 1 | | | |
$a_{2}$ | | 1 | 1 | | |
$a_{3}$ | | | 1 | 1 | |
$a_{4}$ | | | | 1 | 1 |
$a_{5}$ | | | | | 1 | 1
$a_{6}$ | 1 | | | | | 1
$a_{7}$ | 1 | 1 | | | |
$a_{8}$ | | | 1 | 1 | |
$a_{9}$ | | | | | 1 | 1
Table 1. Assignment of jobs to various servers in a balanced $(9,3,2,6)$
assignment scheme
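For concreteness, the following snippet (our own illustration, not from the paper) encodes Table 1 as the assignment matrix $A_{D}$ of Definition 1 and verifies the two balance conditions.

```python
# Encode Table 1 as the assignment matrix A_D and verify that it is a
# balanced (9, 3, 2, 6) assignment (illustrative sketch).
import numpy as np

pairs = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 6),  # a_1 .. a_6
         (1, 2), (3, 4), (5, 6)]                          # a_7 .. a_9
A = np.zeros((9, 6), dtype=int)
for i, (u, v) in enumerate(pairs):
    A[i, u - 1] = A[i, v - 1] = 1
assert (A.sum(axis=1) == 2).all()   # every job is assigned to r = 2 servers
assert (A.sum(axis=0) == 3).all()   # every server is assigned k = 3 jobs
```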
## 5\. The Mean and the Variance
We consider the number of distinct jobs $d$ received at the master when only a
subset of $x$ servers manages to communicate with the master. We consider any
subset of $\mathcal{S}$ with cardinality $x$ to be equally likely to be the set
of servers that communicates with the master. Note that with this definition,
if $\hat{S}\subseteq\mathcal{S}$ (with $|\hat{S}|=x$) is the subset of servers
that communicate with the master, then we can denote the number of distinct
jobs received $d=|\cup_{j\in\hat{S}}\text{supp}(A_{D}[:,j])|$ where
$\text{supp}(v)$ denotes the indices of the non-zero entries of the vector
$v$.
Now, consider the uniform distribution over all subsets of servers of
cardinality $x$, which we denote by $\mathfrak{D}_{\mathcal{S},x}$, i.e. a
sample from this distribution returns any subset of $\mathcal{S}$ of
cardinality $x$ with probability $\frac{1}{{{|\mathcal{S}|\choose x}}}$.
For a given assignment ${D}$ of jobs to servers, we denote the expectation of
the number of distinct completed jobs received by the master when any subset
of $x$ servers is able to communicate with the master uniformly at random by
$\mathbbm{E}_{{D},x}[d]$ and the corresponding variance by
$\sigma_{{D},x}[d]$. The expectation and the variance on the number of
distinct received jobs $d$ may be written as
(2)
${}\mathbbm{E}_{D,x}[d]=\mathbbm{E}_{\hat{S}\sim\mathfrak{D}_{\mathcal{S},x}}\left[\left|\bigcup_{j\in\hat{S}}\text{supp}(A_{D}[:,j])\right|\right]\text{
and
}\sigma_{D,x}[d]=\sigma_{\hat{S}\sim\mathfrak{D}_{\mathcal{S},x}}\left[\left|\bigcup_{j\in\hat{S}}\text{supp}(A_{D}[:,j])\right|\right]$
Note that the randomness in this setup is only in the set of servers that can
communicate with the master. The assignment scheme has no randomness
associated with it.
Theorem 1 states that the expectation on the number of distinct jobs
$\mathbb{E}_{D,x}[d]$ is the same for every balanced $(n,k,r,c)$ assignment.
This expectation is a function of $n,k,r,c$ and $x$ and is independent of the
specific balanced assignment $D$ we choose. Throughout the remainder of this
paper, $\mathfrak{n}^{D}_{i,\hat{S}}$ denotes the number of servers in
$\hat{S}$ to which job $a_{i}$ is assigned under the assignment scheme $D$.
Observe that $\mathfrak{n}^{D}_{i,\hat{S}}$ can take any value from $0$ to
$r$.
###### Theorem 1.
Consider any balanced $(n,k,r,c)$ assignment $D$. The expectation of the
number of distinct completed jobs $d$ received by the master when any subset
of cardinality $x$ of the set of servers $\mathcal{S}$ is able to communicate
with the master with equal probability is the same for every balanced
$(n,k,r,c)$ assignment $D$ and is given by
(3) $\mathbbm{E}_{{D},x}[d]=n\cdot\left(1-\frac{{c-r\choose x}}{{c\choose
x}}\right)$
We present a proof sketch below. A detailed proof is presented in Appendix B.
###### Proof Sketch.
The number of distinct jobs $d$ received by the master when servers in a
subset $\hat{S}$ (with $|\hat{S}|=x$) is able to communicate with the master
is given by
(4)
${}d=\left|\bigcup_{j\in\hat{S}}\text{supp}(A_{D}[:,j])\right|=\left(k\times
x-\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)$
Note that the term
$\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}$
removes the duplicate counts of jobs that are received multiple times from
different servers in $\hat{S}$.
(5)
$\displaystyle{}\mathbb{E}_{D,x}[d]=\frac{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\ |\hat{S}|=x\end{subarray}}\left(k\times x-\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)}{{c\choose x}}=k\times x-\frac{n\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\ |\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}}{{c\choose x}}$
Observe that for every job $a_{i}$ in a balanced $(n,k,r,c)$ assignment, the
quantity
$\sum\limits_{\hat{S}\subset\mathcal{S},|\hat{S}|=x}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}$
is the same, i.e., this summation is independent of $i$. We now show that the
quantity
$\sum\limits_{\hat{S}\subset\mathcal{S},|\hat{S}|=x}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}$
for any specified $x$, is the same for every balanced $(n,k,r,c)$ assignment
${D}$. We compute this sum by counting the number of subsets
$\hat{S}\subset\mathcal{S}$ of cardinality $x$ which additionally satisfies
the constraint on $\mathfrak{n}^{D}_{i,\hat{S}}=t$ (i.e. job $a_{i}$ is
present in exactly $t$ servers from $\hat{S}$) for every $t$ from $2$ to $r$
(as these cases deal with the job $a_{i}$ appearing more than once in the
subset $\hat{S}$). Equation (7) follows from multiplying two binomial
expressions and considering their coefficients (a detailed proof is presented
in Appendix B).
(6)
$\displaystyle{}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\ |\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}=\sum\limits_{t=1}^{r-1}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\ |\hat{S}|=x,\mathfrak{n}^{D}_{i,\hat{S}}=t+1\end{subarray}}t\overset{(a)}{=}\sum\limits_{t=1}^{r-1}t{r\choose(t+1)}{(c-r)\choose(x-t-1)}$
(7)
${}\sum\limits_{t=1}^{r-1}t{r\choose(t+1)}{(c-r)\choose(x-t-1)}=r\times{{c-1\choose
x-1}}+{{c-r\choose x}}-{{c\choose x}}$
Combining equations (5), (6) and (7), we get the desired result.
∎
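The identity in eq. (7) is also easy to spot-check numerically. The snippet below (our own illustration) does so for a few $(c,r,x)$ triples, using a binomial coefficient that vanishes outside $0\leq b\leq a$.

```python
# Spot-check of the binomial identity in eq. (7) (illustrative).
from math import comb

def C(a, b):
    return comb(a, b) if 0 <= b <= a else 0

for c, r, x in [(6, 2, 3), (10, 4, 5), (12, 3, 7)]:
    lhs = sum(t * C(r, t + 1) * C(c - r, x - t - 1) for t in range(1, r))
    rhs = r * C(c - 1, x - 1) + C(c - r, x) - C(c, x)
    assert lhs == rhs
```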
A few comments are in order here. Note that for $x=1$, the expectation (as
expected) is precisely $k$. Observe that for $x>c-r$, the expectation goes to
$n$. In other words, if the number of servers that successfully communicates
with the master is greater than $(c-r)$, then the master obtains at least one
copy of every job $a_{i}\in\mathcal{A}$. This follows since every job is
assigned to exactly $r$ servers and therefore for any job to be missed out,
the $r$ servers to which that specific job was assigned, should fail to
communicate with the master. Thus, if any job is missed out, then the number
$x$ of servers that manage to communicate with the master can be at most
$c-r$.
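Since $c=6$ is small in the example above, eq. (3) can also be verified exactly by enumerating all ${c\choose x}$ subsets. The sketch below (our own illustration) does this for the balanced $(9,3,2,6)$ assignment of Table 1; for instance, at $x=2$ both sides equal $9(1-6/15)=5.4$.

```python
# Exhaustively average d over all server subsets of size x for the
# (9, 3, 2, 6) example and compare with the closed form in eq. (3).
from itertools import combinations
from math import comb
import numpy as np

pairs = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 6), (1, 2), (3, 4), (5, 6)]
A = np.zeros((9, 6), dtype=int)
for i, (u, v) in enumerate(pairs):
    A[i, u - 1] = A[i, v - 1] = 1

n, c, r = 9, 6, 2
for x in range(1, c + 1):
    ds = [int((A[:, list(S)].sum(axis=1) > 0).sum())   # distinct jobs received
          for S in combinations(range(c), x)]
    closed_form = n * (1 - comb(c - r, x) / comb(c, x))
    assert abs(np.mean(ds) - closed_form) < 1e-9
```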
We now calculate the variance for the number of distinct jobs $d$ received at
the master for any balanced $(n,k,r,c)$ job assignment $D$. From the comments
in the previous paragraph, it is clear that $\sigma_{D,1}[d]=0$ for the case
$x=1$, since the master always receives precisely $k$ distinct jobs, if only
one server manages to communicate with the master. Similarly,
$\sigma_{D,x}[d]=0$ for $x>c-r$, as the master would receive all the $n$ jobs
if more than $c-r$ servers communicate (for the sake of completeness, we
formally calculate this in Corollary 1).
For calculating the variance on the number of distinct jobs $d$ received by
the master, observe
$\sigma_{D,x}(d)=\sigma_{D,x}\left(k\times x-\sum\limits_{i}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)=\sigma_{D,x}\left(\sum\limits_{i}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)$
The above follows since $\sigma(t-X)=\sigma(X)$ where $t$ is a constant and
$X$ is a random variable. We now make use of the definition
$\text{var}(X)=\mathbbm{E}[X^{2}]-(\mathbbm{E}[X])^{2}$. Therefore,
$\displaystyle{}\sigma_{D,x}\left(\sum\limits_{i}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)=$
$\displaystyle\frac{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}{\left(\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)}^{2}}{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}1}-\left(\frac{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};|\hat{S}|=x\end{subarray}}{(\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})}}{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}1}\right)^{2}$
$\displaystyle\overset{(a)}{=}\frac{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1})}{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}1}-\left(\frac{\sum\limits_{i=1}^{n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}{((\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})}}{{c\choose
x}}\right)^{2}$
$\displaystyle\overset{(b)}{=}\frac{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}}{{c\choose
x}}-\left(\frac{n\sum\limits_{t=1}^{r-1}t{r\choose(t+1)}{(c-r)\choose(x-t-1)}}{{c\choose
x}}\right)^{2}$ (8)
$\displaystyle\overset{(c)}{=}\frac{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}}{{c\choose
x}}-\left(\frac{n{{c-r\choose x}}}{{{c\choose
x}}}+n\times\left(\frac{rx}{c}-1\right)\right)^{2}$
In the above set of equations, $(a)$ follows from the identity
$(\sum\limits_{i}b_{i})^{2}=\sum\limits_{i}\sum\limits_{j}b_{i}b_{j}$. The
first term in $(b)$ is obtained by interchanging the order of summations,
whereas the second term comes from equation (6). Further, the second term in
$(c)$ follows using equation (7) given in the proof of Theorem 1.
Observe that the second term in the final expression in equation (8) depends
only on $n,r,c$ and $x$ and is independent of the specific balanced
$(n,k,r,c)$ job assignment $D$. On the other hand, the first term in equation
(8) depends on the particular assignment $D$. We now consider the numerator of
the first term of equation (8) in more detail. We can break this expression
into two parts, where one part is dependent on just one index $i$ and the
other part is dependent on two distinct indices $i,j$. Thus,
$\displaystyle\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$
(9) $\displaystyle=$ $\displaystyle 2\sum\limits_{1\leq i<j\leq
n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$
(10) $\displaystyle\hskip 80.00012pt+\sum\limits_{1\leq i=j\leq
n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$
In the decomposition above, the second term (equation (10)) can be rewritten as $\sum\limits_{1\leq i\leq
n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}((\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})^{2}$.
For every job $a_{i}$, this expression calculates
$\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}((\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})^{2}$,
which is independent of the choice of the job $a_{i}$ in any balanced
$(n,k,r,c)$ assignment $D$. In fact, this second term is
independent of the choice of $D$ and it depends only on the values of $n,c,r$
and $x$. We can compute this sum by counting the number of subsets
$\hat{S}\subset\mathcal{S}$ of cardinality $x$ that additionally satisfy the
constraint $\mathfrak{n}^{D}_{i,\hat{S}}=t$ (i.e. job $a_{i}$ is present in
exactly $t$ servers from $\hat{S}$) for every $t$ from $2$ to $r$. Thus
(11) $\sum\limits_{1\leq i\leq
n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}((\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})^{2}=n\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(x-t-1)}$
Note that the number of subsets $\hat{S}\subset\mathcal{S}$ of cardinality $x$
such that a particular job $a_{i}$ appears $t+1$ times in $\hat{S}$ is given
by ${r\choose(t+1)}{(c-r)\choose(x-t-1)}$. Since
$\mathfrak{n}^{D}_{i,\hat{S}}=t+1$ for such $\hat{S}$, we have
$((\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})^{2}=t^{2}$.
This explains the final expression in equation (11). A closed form expression
for the sum $\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(x-t-1)}$
can be obtained by considering the following binomial expressions
(12)
${}r(r-1)y^{2}(1+y)^{r-2}-ry(1+y)^{r-1}-1+{(1+y)^{r}}=\sum_{t=1}^{r-1}t^{2}{{r\choose
t+1}}y^{t+1}$
(13) ${}(1+y)^{c-r}=\sum_{v=0}^{c-r}{{c-r\choose v}}y^{v}$
Multiplying equations (12) and (13), one obtains
$\sum\limits_{t=1}^{r-1}t^{2}{{r\choose t+1}}{{c-r\choose x-t-1}}$ to be the
coefficient of $y^{x}$ in
$r(r-1)y^{2}(1+y)^{c-2}-ry(1+y)^{c-1}-(1+y)^{c-r}+(1+y)^{c}$, thus,
(14) ${}\sum\limits_{t=1}^{r-1}t^{2}{{r\choose t+1}}{{c-r\choose
x-t-1}}=r(r-1){{c-2\choose x-2}}-r{{c-1\choose x-1}}-{{c-r\choose
x}}+{{c\choose x}}$
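Equation (14) can be spot-checked numerically in the same way as eq. (7) (our own illustration):

```python
# Spot-check of the identity in eq. (14) (illustrative).
from math import comb

def C(a, b):
    return comb(a, b) if 0 <= b <= a else 0

for c, r, x in [(6, 2, 3), (10, 4, 5), (12, 3, 7)]:
    lhs = sum(t**2 * C(r, t + 1) * C(c - r, x - t - 1) for t in range(1, r))
    rhs = (r * (r - 1) * C(c - 2, x - 2) - r * C(c - 1, x - 1)
           - C(c - r, x) + C(c, x))
    assert lhs == rhs
```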
Finally, we analyse the first term of the decomposition, equation (9), viz., $\sum\limits_{1\leq
i<j\leq n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$.
If a specific pair of jobs $a_{i},a_{j}$ appear $(\alpha+1)$ and $(\beta+1)$
times respectively in some subset $\hat{S}\subset\mathcal{S}$ of cardinality
$x$, then such a pair of jobs contribute $\alpha\beta$ towards this expression
that we are analysing. One needs to add up such contributions from every
distinct pair of jobs $(a_{i},a_{j})$ and every subset
$\hat{S}\subset\mathcal{S}$ of cardinality $x$ to get the final value of this
expression. The strategy that we adopt to compute this sum is as follows: we
find $\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$
for any given pair of distinct jobs $(a_{i},a_{j})$. Observe that this
expression depends on how the pair of jobs $(a_{i},a_{j})$ are distributed
amongst the $c$ servers, which in turn depends on the particular balanced
$(n,k,r,c)$ job assignment $D$ that is under consideration. Now, given a
particular pair of jobs $(a_{i},a_{j})$, how they are distributed among the
servers can essentially differ only in the number of servers that are assigned both
the jobs $a_{i},a_{j}$ simultaneously. The number of servers that are
simultaneously assigned both the jobs $(a_{i},a_{j})$ can range from $0$ to
$r$. If a pair of jobs $(a_{i},a_{j})$ are assigned together to precisely $m$
servers (with $0\leq m\leq r$), then the sum
$\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$
calculated for this particular pair of jobs is precisely equal to the
corresponding sum for every other pair of jobs that are assigned together to
precisely $m$ servers. We use the notation
(15) ${}g(m,x)=\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}$
to indicate this particular sum that arises from a pair of jobs $a_{i},a_{j}$
that are assigned together to precisely $m$ servers. We now show that the
values of $g(m,x)$ depend only on $c,r,m$ and $x$. We give the expression for
$g(0,x)$ in Lemma 2 and give a recursion to compute $g(m,x)$ in Lemma 3.
###### Lemma 2.
For a balanced $(n,k,r,c)$ assignment, the value of $g(0,x)$ is given by
(16) $\displaystyle{}g(0,x)=$ $\displaystyle r^{2}{{c-2\choose
x-2}}-2r{{c-1\choose x-1}}+{{c\choose x}}-2{{c-r\choose x}}+2r{{c-r-1\choose
x-1}}+{c-2r\choose x}$
###### Lemma 3.
For a balanced $(n,k,r,c)$ assignment, the values for $g(m,x)$ are related in
the following fashion
(17) ${}g(m+1,x)-g(m,x)={c-2\choose x-1}-2{c-r-1\choose x-1}+{c-2r+m\choose
x-1}$
We prove these lemmas in Appendices E and F, respectively, using a careful
application of techniques from combinatorics.
Note that the expressions for $g(m,x)$ depend only on the values of $c,r,m$
and $x$ and are therefore independent of which balanced $(n,k,r,c)$ assignment
$D$ we choose. Observe further that the expression for $g(m+1,x)-g(m,x)$ is an
increasing function of $m$ when $x$ is fixed. This is clear from the fact that
only the last term in (17) depends on $m$. Of course, if $x\leq c-2r$, then we
can further conclude that $g(m+1,x)-g(m,x)$ is a strictly increasing function
of $m$.
We now define $\mathfrak{m}^{D}(m)$ as the number of distinct pairs of jobs
$(a_{i},a_{j})$ with $1\leq i<j\leq n$ that are assigned together to precisely
$m$ servers in the balanced $(n,k,r,c)$ assignment $D$. One can formally
define this number for a specific balanced $(n,k,r,c)$ assignment $D$ using
the assignment matrix $A_{D}$ as
(18) ${}\mathfrak{m}^{D}(m)=\sum_{\begin{subarray}{c}(i_{1},i_{2})\\\ 1\leq
i_{1}<i_{2}\leq
n\end{subarray}}\mathbbm{1}_{\sum_{j=1}^{c}A_{D}[i_{1},j]A_{D}[i_{2},j]=m}$
Given a balanced $(n,k,r,c)$ assignment $D$, the numbers $\mathfrak{m}^{D}(m)$
have some additional properties
(19) $\sum_{m=0}^{r}\mathfrak{m}^{D}(m)={n\choose 2}$
(20) $\sum_{m=0}^{r}m\mathfrak{m}^{D}(m)=c{k\choose 2}$
Equation (19) follows from the fact that there are a total of $n$ jobs and
thus the total number of job pairs is given by ${{n\choose 2}}$. Equation (20)
follows from the fact that the number of pairs of jobs that are assigned
together to a fixed server $s_{i}$ is given by ${{k\choose 2}}$. Summing over
all the servers in $\mathcal{S}$ gives us the RHS in (20). Note that we count
each pair of jobs as many times as they appear together in a server, which
yields $\sum_{m=0}^{r}m\mathfrak{m}^{D}(m)=c{k\choose 2}$.
Observe that the term in equation (9) that we are evaluating can now be
rewritten as
(21) $\sum\limits_{1\leq i<j\leq
n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}=\sum_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,x)$
Putting all of the discussion together, we therefore conclude the following
regarding the evaluation of the variance $\sigma_{D,x}(d)$:
###### Theorem 4.
Consider any assignment $D$ amongst balanced $(n,k,r,c)$ assignments. The
variance $\sigma_{D,x}(d)$ of the number of distinct jobs $d$ received by the
master, when any subset of the set of servers $\mathcal{S}$ of cardinality
$x>1$ is able to communicate with the master with equal probability, is equal to
(22)
${}\sigma_{{D},x}(d)=\frac{2\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,x)+T_{2}(n,r,c,x)}{{c\choose x}}-\left(T_{1}(n,r,c,x)\right)^{2}$
where $T_{1}(n,r,c,x)=\frac{n{{c-r\choose x}}}{{{c\choose
x}}}+n\times(\frac{rx}{c}-1)$ and
(23) $\displaystyle T_{2}(n,r,c,x)=$ $\displaystyle
n\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(x-t-1)}$ (24)
$\displaystyle=$ $\displaystyle n\left(r(r-1){{c-2\choose x-2}}-r{{c-1\choose
x-1}}-{{c-r\choose x}}+{{c\choose x}}\right)$
Thus, we have shown that while the mean of the number of distinct received jobs $d$ is the same for all balanced $(n,k,r,c)$ assignments, the variance of $d$ depends on the frequency distribution of job pairs assigned to the same servers.
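As a sanity check on Theorem 4, the following Python sketch (ours; a brute-force verification on a toy instance, not part of the paper) compares the formula (22) against the variance computed directly over all server subsets of size $x$:

```python
from itertools import combinations
from math import comb

def C2(a, b):
    # Binomial with the convention C(a, b) = 0 outside 0 <= b <= a.
    return comb(a, b) if 0 <= b <= a else 0

def g(m, x, c, r):
    # Closed form (16) for m = 0, plus the recursion (17).
    val = (r * r * C2(c - 2, x - 2) - 2 * r * C2(c - 1, x - 1) + C2(c, x)
           - 2 * C2(c - r, x) + 2 * r * C2(c - r - 1, x - 1) + C2(c - 2 * r, x))
    for j in range(m):
        val += C2(c - 2, x - 1) - 2 * C2(c - r - 1, x - 1) + C2(c - 2 * r + j, x - 1)
    return val

def variance_theorem4(A, n, k, r, c, x):
    # m^D(m) from (18), then sigma_{D,x}(d) from (22).
    mD = [0] * (r + 1)
    for i, j in combinations(range(n), 2):
        mD[sum(a * b for a, b in zip(A[i], A[j]))] += 1
    T1 = n * C2(c - r, x) / C2(c, x) + n * (r * x / c - 1)
    T2 = n * (r * (r - 1) * C2(c - 2, x - 2) - r * C2(c - 1, x - 1)
              - C2(c - r, x) + C2(c, x))
    S = sum(mD[m] * g(m, x, c, r) for m in range(r + 1))
    return (2 * S + T2) / C2(c, x) - T1 ** 2

def variance_bruteforce(A, n, c, x):
    # Variance of d over all equally likely server subsets of size x.
    ds = [len({i for i in range(n) for j in S if A[i][j]})
          for S in combinations(range(c), x)]
    mu = sum(ds) / len(ds)
    return sum((d - mu) ** 2 for d in ds) / len(ds)

# Balanced (4, 2, 2, 4) toy assignment; e.g. both computations give 2/9 at x = 2.
A = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
for x in range(2, 5):
    assert abs(variance_theorem4(A, 4, 2, 2, 4, x) - variance_bruteforce(A, 4, 4, x)) < 1e-9
```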
## 6\. Results on extreme variance based on $[\mathfrak{m}^{D}(m)]_{m=0}^{r}$
It is clear from the previous sections that while all balanced $(n,k,r,c)$ job assignments give the same mean, they display different variances based on the values of $\mathfrak{m}^{D}(m)$ that arise in the respective job assignments. Here $\mathfrak{m}^{D}(m)$ refers to the number of pairs of distinct jobs assigned together to precisely $m$ servers, as defined in (18). We now
explore the range of variances that balanced $(n,k,r,c)$ job assignments can
attain.
###### Definition 2.
Given a balanced $(n,k,r,c)$ job assignment $D$, we define a shape vector
$h_{D}\in\mathbb{N}^{r+1}$ associated to the assignment $D$ as
(25)
${}h_{D}=[\mathfrak{m}^{D}(0),\mathfrak{m}^{D}(1),\ldots\mathfrak{m}^{D}(r)]^{T}$
Thus the shape vector lists the number of pairs of jobs that are assigned
together to $m$ servers, for $m=0,1,\cdots,r$ and is a vector of length $r+1$.
Clearly, the entries of $h_{D}$ are all non-negative integers. Two distinct
balanced $(n,k,r,c)$ job assignments $D_{1},D_{2}$ would have the same mean
and variance if and only if the corresponding shape vectors $h_{D_{1}}$ and
$h_{D_{2}}$ are the same.
We first characterize all possible candidate shape vectors in $\mathbb{N}^{r+1}$. (In our notation, $\mathbb{N}$ is the set of all non-negative integers, including 0.) Since the entries of the shape vectors are
$\mathfrak{m}^{D}(m)$ arising out of balanced $(n,k,r,c)$ job assignment, they
must satisfy equations (19) and (20). These can be rewritten as
(26) ${}\begin{bmatrix}1&1&1&\cdots&1\\\
0&1&2&\cdots&r\end{bmatrix}\begin{pmatrix}\mathfrak{m}^{D}(0)\\\
\mathfrak{m}^{D}(1)\\\ \vdots\\\
\mathfrak{m}^{D}(r)\end{pmatrix}=Hh_{D}=\begin{pmatrix}{n\choose 2}\\\
c{k\choose 2}\end{pmatrix}$
Here the matrix $H=\begin{bmatrix}1&1&1&\cdots&1\\\
0&1&2&\cdots&r\end{bmatrix}$. Thus, two possible shape vectors differ by a
vector in the kernel of the matrix $H$. Therefore, if one has a particular
balanced $(n,k,r,c)$ job assignment $D$ with the corresponding shape vector
$h_{D}$, then all other possible shape vectors can be characterized as vectors
$(h_{D}+v)\in\mathbb{N}^{r+1}$ where $v\in\ker H$. A basis for the kernel of
the matrix $H$ is given by the $r-1$ vectors
(27)
${}\left\\{h_{1},h_{2},h_{3},\cdots,h_{r-2},h_{r-1}\right\\}=\left\\{\begin{pmatrix}1\\\
-2\\\ 1\\\ 0\\\ \vdots\\\ 0\\\ 0\\\ 0\end{pmatrix},\begin{pmatrix}0\\\ 1\\\
-2\\\ 1\\\ \vdots\\\ 0\\\ 0\\\ 0\end{pmatrix},\begin{pmatrix}0\\\ 0\\\ 1\\\
-2\\\ \vdots\\\ 0\\\ 0\\\ 0\end{pmatrix},\cdots,\begin{pmatrix}0\\\ 0\\\ 0\\\
0\\\ \vdots\\\ -2\\\ 1\\\ 0\end{pmatrix},\begin{pmatrix}0\\\ 0\\\ 0\\\ 0\\\
\vdots\\\ 1\\\ -2\\\ 1\end{pmatrix}\right\\}$
Note that each of these basis vectors $h_{i}$ has only three nonzero entries. We make use of these basis vectors in determining the extremal values of the variance that a balanced $(n,k,r,c)$ job assignment can attain.
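A quick Python check (ours) that the vectors listed in (27) indeed lie in the kernel of the matrix $H$ from (26):

```python
# H is the 2 x (r+1) matrix from (26); each basis vector h_i of ker H has
# the pattern (1, -2, 1) starting at position i. A quick check for r = 5:
r = 5
H = [[1] * (r + 1), list(range(r + 1))]
for i in range(r - 1):
    h = [0] * (r + 1)
    h[i], h[i + 1], h[i + 2] = 1, -2, 1
    assert all(sum(row[j] * h[j] for j in range(r + 1)) == 0 for row in H)
```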
Based on the shape vectors, we now define certain special kinds of balanced
$(n,k,r,c)$ job assignments.
###### Definition 3.
A balanced $(n,k,r,c)$ assignment $D$ is compact if the corresponding shape
vector $h_{D}$ has at most two non-zero elements.
Under certain special conditions, the shape vector $h_{D}$ may have only one nonzero entry. In this special case, every possible pair of jobs is assigned together to exactly $m$ servers, where $m=\frac{ck(k-1)}{n(n-1)}=\frac{r(k-1)}{n-1}$; in particular, this quantity must be an integer. Clearly, this dependence of $m$ on the values of $n,k,r$ makes such a possibility rare. So, in general, compact balanced $(n,k,r,c)$ job assignments have exactly two nonzero entries.
###### Definition 4.
A balanced assignment $D$ is proximally compact if the shape vector $h_{D}$
has either exactly one nonzero entry or has exactly two consecutive nonzero
entries.
###### Lemma 1.
For a proximally compact $(n,k,r,c)$ assignment $D$, we have $\ell=\Bigl{\lfloor}\frac{r(k-1)}{n-1}\Bigr{\rfloor}$, where $\ell$ denotes the smallest $m$ for which $\mathfrak{m}^{D}(m)$ is non-zero in the shape vector $h_{D}$.
###### Proof.
For a proximally compact $(n,k,r,c)$ assignment $D$, if the shape vector has
only one nonzero entry, then $\mathfrak{m}^{D}(\ell)={n\choose 2}$. As the
total number of job pairs at the $c$ servers is $c{k\choose 2}$, therefore
using (20), we have $\ell=\frac{c{k\choose 2}}{{n\choose
2}}=\frac{r(k-1)}{n-1}$.
On the other hand, if a proximally compact $(n,k,r,c)$ assignment $D$ has a shape vector with exactly two consecutive nonzero entries, then $\mathfrak{m}^{D}(m)$ is zero for all $m\neq\ell,\ell+1$ for some $\ell$. Using (19), we have $\mathfrak{m}^{D}(\ell+1)={{n\choose 2}}-\mathfrak{m}^{D}(\ell)$ and, by (20), we get $\ell(\mathfrak{m}^{D}(\ell))+(\ell+1)({{n\choose 2}}-\mathfrak{m}^{D}(\ell))=c{{k\choose 2}}$. Thus we can conclude that $\ell{{n\choose 2}}\leq c{{k\choose 2}}$ and $(\ell+1){{n\choose 2}}>c{{k\choose 2}}$ (as $\mathfrak{m}^{D}(\ell)>0$), and therefore $\ell=\Bigl{\lfloor}\frac{c{{k\choose 2}}}{{{n\choose 2}}}\Bigr{\rfloor}=\Bigl{\lfloor}\frac{r(k-1)}{n-1}\Bigr{\rfloor}$. ∎
Given $n,k,r,c$, it is not clear whether a proximally compact balanced $(n,k,r,c)$ job assignment exists. By Lemma 1, there can be only one shape vector in $\mathbb{N}^{r+1}$ that satisfies the condition of proximal compactness. One can therefore conclude that all proximally compact balanced $(n,k,r,c)$ job assignments (if any exist) share this unique shape vector, and hence the same mean and variance.
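The unique candidate shape vector can be computed directly from Lemma 1 together with (19) and (20); the following Python sketch (ours, with illustrative names) does so for the two parameter sets appearing in the examples of this section:

```python
from math import comb

def proximally_compact_shape(n, k, r, c):
    """Candidate shape vector for a proximally compact balanced (n, k, r, c)
    assignment, via Lemma 1 and equations (19)-(20). Whether an assignment
    realizing it exists is a separate question."""
    pairs = comb(n, 2)            # total job pairs, equation (19)
    weighted = c * comb(k, 2)     # weighted pair count, equation (20)
    ell = weighted // pairs       # floor(r(k-1)/(n-1)), Lemma 1
    h = [0] * (r + 1)
    rem = weighted - ell * pairs  # mass forced onto entry ell + 1
    h[ell] = pairs - rem
    if rem:
        h[ell + 1] = rem
    return h

print(proximally_compact_shape(9, 3, 3, 9))    # [9, 27, 0, 0]
print(proximally_compact_shape(10, 5, 4, 8))   # [0, 10, 35, 0, 0]
```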
The special case of the shape vector having only one nonzero entry corresponds to the case of balanced incomplete block designs (BIBDs) (Bose, 1939; Colbourn and Dinitz, 2006). Balanced incomplete block designs are a very well studied subject. For the sake of completeness, we give the definition of a BIBD below.
###### Definition 5.
(BIBD $(v,b,r,k,\lambda)$ scheme as in (Colbourn and Dinitz, 2006)) - A
balanced incomplete block design (BIBD) is a pair $(V,B)$ where V is a $v$-set
and B is a collection of $b$ $k$-sized subsets of $V$ (blocks) such that each
element of $V$ is contained in exactly $r$ blocks and any 2-subset of V is
contained in exactly $\lambda$ blocks.
Note that we can associate the set $V$ to the set of jobs $\mathcal{A}$. Thus
$v$ is the same as $n$ that we have employed so far. Each $k$ sized subset of
$V$ (or $\mathcal{A}$) can be identified to the set of jobs assigned to a
server. The number $r$ has the same interpretation as in our case. Since $B$
is a collection of $b$ $k$-sized sets, we can think of the number of servers
$c$ being equal to $b$. Finally $\lambda=\frac{r(k-1)}{n-1}$. Thus $(n,k,r,c)$
in our case is the same as $(v,b,r,k,\lambda)$ quoted in the definition of
BIBD above.
Even in cases where $n,k,r$ lead to a $\lambda$ which is a positive integer, it is not known in general whether a BIBD exists. However, multiple constructions of BIBDs for various parameter families have been described in (Bose, 1939), using techniques such as vector subspaces over finite fields. The famous Bruck-Ryser-Chowla theorem (see (Sprott, 1955)) gives necessary conditions on $n,k,r$ for the existence of a BIBD.
Proximally compact assignments may be thought of as a generalization of BIBDs that does not insist on a unique number $\lambda$ representing the number of servers shared by every pair of jobs. Instead, proximally compact assignments allow every pair of jobs to be assigned together to either $\ell$ or $\ell+1$ servers. As remarked earlier, Lemma 1 ensures that, given $n,k,r,c$, there is a unique candidate shape vector, so all proximally compact assignments share it. We now provide an example of a proximally compact assignment scheme that is not a BIBD.
###### Example 1.
Consider balanced $(9,3,3,9)$ assignment schemes. In this case,
$\frac{r(k-1)}{n-1}=\frac{3}{4}$ and so there is no BIBD possible. Further,
$\ell=0$ and the corresponding shape vector for a possible proximally compact
assignment should be $h_{D}=[9,27,0,0]^{T}$. We display an assignment scheme
in Table 2 whose shape vector is indeed $h_{D}$. Note that in this scheme, 27 pairs of jobs are assigned together to exactly one server, and 9 pairs of jobs are never assigned together.
Jobs \ Servers | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$ | $s_{7}$ | $s_{8}$ | $s_{9}$
---|---|---|---|---|---|---|---|---|---
$a_{1}$ | 1 | 1 | 1 | | | | | |
$a_{2}$ | | | | 1 | 1 | 1 | | |
$a_{3}$ | | | | | | | 1 | 1 | 1
$a_{4}$ | 1 | | | 1 | | | 1 | |
$a_{5}$ | | 1 | | | 1 | | | 1 |
$a_{6}$ | | | 1 | | | 1 | | | 1
$a_{7}$ | 1 | | | | 1 | | | | 1
$a_{8}$ | | 1 | | | | 1 | 1 | |
$a_{9}$ | | | 1 | 1 | | | | 1 |
Table 2. Assignment of jobs to servers in a proximally compact $(9,3,3,9)$ assignment scheme
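One can verify the claimed shape vector of the scheme in Table 2 mechanically; the following Python sketch (ours) transcribes the table and recomputes $h_{D}$:

```python
from itertools import combinations

# Rows = jobs a_1..a_9, columns = servers s_1..s_9, transcribed from Table 2.
A = [
    [1,1,1,0,0,0,0,0,0],
    [0,0,0,1,1,1,0,0,0],
    [0,0,0,0,0,0,1,1,1],
    [1,0,0,1,0,0,1,0,0],
    [0,1,0,0,1,0,0,1,0],
    [0,0,1,0,0,1,0,0,1],
    [1,0,0,0,1,0,0,0,1],
    [0,1,0,0,0,1,1,0,0],
    [0,0,1,1,0,0,0,1,0],
]
shape = [0] * 4   # r = 3, so the shape vector has r + 1 = 4 entries
for i, j in combinations(range(9), 2):
    shape[sum(a * b for a, b in zip(A[i], A[j]))] += 1
assert shape == [9, 27, 0, 0]
```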
###### Theorem 5.
If a proximally compact balanced $(n,k,r,c)$ job assignment exists, then it
has the least variance amongst all balanced $(n,k,r,c)$ job assignments.
A proof sketch is presented below. A detailed proof is given in Appendix C.
Recall that in our notation $\mathbb{N}$ denotes the set of all non-negative
integers including 0.
###### Proof Sketch.
Let $h_{D}$ be the shape vector corresponding to the proximally compact
balanced $(n,k,r,c)$ job assignment. Thus $h_{D}(i)=0$ for all $i\leq\ell$ and
$i>\ell+2$ for $\ell$ as calculated in Lemma 1. Any balanced $(n,k,r,c)$ job
assignment $D_{1}$ would have a shape vector $h_{D}+v$ where $v\in\ker H$ with
matrix $H$ as defined in (26). Observe from the expression of variance in (22)
that it is only the term $\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,x)$ that
varies amongst the different balanced $(n,k,r,c)$ assignments. Thus it is
enough to show that for every permissible $v\in\ker H$ mentioned above,
$\sum\limits_{m=0}^{r}v(m+1)g(m,x)\geq 0$, in order to conclude that the
proximally compact balanced $(n,k,r,c)$ assignment has the least variance.
We therefore first characterize $v\in\ker H$ that may appear from some
balanced $(n,k,r,c)$ assignment. As both the shape vectors
$h_{D},h_{D}+v\in\mathbb{N}^{r+1}$, therefore $v(i)\geq 0$ for all
$i\neq\ell+1,\ell+2$. Further, as $v\in\ker H$, therefore
$\sum\limits_{i=1}^{r+1}v(i)=0$ and so if $v$ is a nonzero vector, then at
least one of $v(\ell+1),v(\ell+2)$ must be a negative integer. As $v\in\ker
H$, therefore $v$ can be expressed in terms of the basis vectors
$\left\\{h_{i}\right\\}$ listed in (27). Let
$v=\sum\limits_{i=1}^{r-1}\alpha_{i}h_{i}$; one can show that all $\alpha_{i}\in\mathbb{N}$ (a detailed proof of the non-negativity of the $\alpha_{i}$ is given in Appendix C).
As a result, we get
$\displaystyle\sum\limits_{m=0}^{r}v(m+1)g(m,x)=\sum\limits_{i=1}^{r-1}\alpha_{i}\left(g(i-1,x)-2g(i,x)+g(i+1,x)\right)$
By Lemma 3, we know that
$g(m+1,x)-g(m,x)={{c-2}\choose{x-1}}-2{{c-r-1}\choose{x-1}}+{{c-2r+m}\choose{x-1}}$
and therefore
(28) $\displaystyle g(i-1,x)-2g(i,x)+g(i+1,x)$ $\displaystyle=$
$\displaystyle\left(g(i+1,x)-g(i,x)\right)-\left(g(i,x)-g(i-1,x)\right)$
$\displaystyle=$
$\displaystyle{{c-2r+i}\choose{x-1}}-{{c-2r+i-1}\choose{x-1}}$
$\displaystyle=$
$\displaystyle\frac{x-1}{c-2r+i-x+1}{{c-2r+i-1}\choose{x-1}}\geq 0$
Thus we have
(29) $\displaystyle{}\sum\limits_{m=0}^{r}v(m+1)g(m,x)$ $\displaystyle=$
$\displaystyle\sum\limits_{i=1}^{r-1}\alpha_{i}\left(g(i-1,x)-2g(i,x)+g(i+1,x)\right)$
$\displaystyle=$
$\displaystyle\sum\limits_{i=1}^{r-1}\frac{\alpha_{i}(x-1)}{c-2r+i-x+1}{{c-2r+i-1}\choose{x-1}}\geq
0$
As the above is true for every permissible $v\in\ker H$ such that $h_{D}+v$ is
a shape vector, therefore we conclude that proximally compact balanced
$(n,k,r,c)$ assignment has the least variance amongst all balanced $(n,k,r,c)$
assignments. ∎
The above theorem guarantees that if a proximally compact balanced $(n,k,r,c)$ assignment exists, then it has the least variance. Also note from (29) that for the case $x=1$, the contribution due to every permissible $v\in\ker H$ is zero, and so all shape vectors give the same variance, which is zero. That is consistent with what we have shown earlier. Similarly, note that for $x>c-r$, every ${{c-2r+i-1}\choose{x-1}}$ is zero, and therefore from (29) we can conclude that every balanced $(n,k,r,c)$ assignment has the same variance when one considers the cases of $x>c-r$.
We now provide an example of $(n,k,r,c)$ that does not have a proximally
compact assignment.
###### Example 2.
Consider balanced $(10,5,4,8)$ assignments. Here
$\frac{r(k-1)}{n-1}=\frac{16}{9}$ and so the shape vector must have at least
two nonzero entries. Moreover, $\ell=1$ and the corresponding shape vector for
a possible proximally compact assignment should be $h_{D}=[0,10,35,0,0]^{T}$.
We now show that an assignment with the shape vector $h_{D}$ does not exist.
As every job is assigned to $r=4$ servers, every job is involved in $r(k-1)=16$ job-pair incidences, counted with multiplicity over servers. Consider the job $a_{1}$ and let $y$ be the number of other jobs with which $a_{1}$ shares exactly one server, so that $9-y$ jobs share two servers each with $a_{1}$. Clearly $2(9-y)+y=16$, which implies that $y=2$. As $a_{1}$ was an arbitrary choice, we can conclude that every job shares one server with two other jobs and shares two servers with each of the other $7$ jobs.
Let $\mathfrak{G}=\left\\{a_{1},a_{2},a_{3},\cdots a_{i},a_{1}\right\\}$ be a
cycle of jobs such that each job shares only one server with the jobs that are
its predecessor and successor in $\mathfrak{G}$. One can now argue that the only permissible lengths of these cycles are either 5 or 10. (This follows since there cannot exist a cycle of size 3, i.e., there cannot exist jobs $a,b$ and $c$ such that the pairs $(a,b)$, $(b,c)$ and $(c,a)$ each share a server. Some further work then shows that no assignment with $h_{D}=[0,10,35,0,0]^{T}$ is possible.) The balanced $(10,5,4,8)$ assignments whose shape vectors are closest to $h_{D}$ have shape vectors $[1,8,36,0,0]^{T}$ and $[0,12,31,2,0]^{T}$.
We now define another class of compact assignments.
###### Definition 6.
A balanced assignment $D$ is stretched compact if the shape vector $h_{D}$ has
non-zero elements only in the first and the last entries.
If only the first and last entries of the shape vector $h_{D}$ are nonzero,
then by (26) it is clear that the last entry of the shape vector is
$\frac{c}{r}{k\choose 2}=\frac{n(k-1)}{2}$ and therefore the first entry of
the shape vector is $\frac{n(n-k)}{2}$. Of course, if $n$ is odd and $k$ is even, then these quantities are not integers, and in such cases no stretched compact $(n,k,r,c)$ assignment can exist. Even in the other cases, there is no guarantee that a stretched compact $(n,k,r,c)$ assignment exists, even though the shape vector has integer entries.
###### Theorem 8.
If a stretched compact balanced $(n,k,r,c)$ job assignment exists, then it has
the largest variance amongst all balanced $(n,k,r,c)$ job assignments.
The proof goes along very similar lines to that of Theorem 5; we present it in Appendix D.
###### Example 3.
Let us revisit the earlier example of $(10,5,4,8)$ assignments. It is clear
that a shape vector corresponding to a stretched compact assignment is permissible, namely $h_{D}=[25,0,0,0,20]^{T}$. Such an assignment is indeed possible. Divide the $10$ jobs into two sets of $5$ jobs and assign each of these sets to $4$ servers. That results in a total of $20$ pairs sharing $4$ servers each, while the remaining $25$ pairs, consisting of one job from each set, share no servers. This repetition assignment is a stretched compact
assignment and therefore has the largest variance amongst all possible
balanced $(10,5,4,8)$ assignments.
It is clear that this sort of repetition assignment where multiple servers
have the same set of jobs assigned is only possible if $k$ divides $n$. In
such a situation, one can subdivide the jobs into $\frac{n}{k}$ groups of $k$
jobs each. Each of these groups are repeated at $r$ servers, thus accounting
for $\frac{n}{k}r=c$ servers. Thus the number of job pairs that appear
together $r$ times is equal to $\frac{n}{k}{k\choose 2}=\frac{n(k-1)}{2}$. And
in all these cases, these repetition assignments would correspond to a
stretched compact assignment which has the largest variance amongst all
balanced $(n,k,r,c)$ assignments.
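The repetition construction just described is straightforward to implement; the following Python sketch (ours, with illustrative names) builds it for the balanced $(10,5,4,8)$ case and confirms the shape vector $[25,0,0,0,20]^{T}$ from the example above:

```python
from itertools import combinations

def repetition_assignment(n, k, r):
    """Stretched compact (repetition) assignment, assuming k divides n:
    split the n jobs into n/k groups and repeat each group at r servers.
    Returns the n x c assignment matrix with c = (n/k) * r."""
    groups = n // k
    c = groups * r
    A = [[0] * c for _ in range(n)]
    for g in range(groups):
        for job in range(g * k, (g + 1) * k):
            for s in range(g * r, (g + 1) * r):
                A[job][s] = 1
    return A

A = repetition_assignment(10, 5, 4)
shape = [0] * 5
for i, j in combinations(range(10), 2):
    shape[sum(a * b for a, b in zip(A[i], A[j]))] += 1
assert shape == [25, 0, 0, 0, 20]
```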
## 7\. Generalization where the number of servers that return ($x$) is random
We now look at a scenario where each of the $c$ servers independently communicates with the master with probability $p$. Note that under this setup, the number of servers $x$ that communicate follows the binomial distribution $B(c,p)$. Observe that
conditioned on $x$, every subset of $x$ servers is equally likely to
communicate with the master. Under this setup, we can now state our results on
mean and variance on the number of distinct jobs received by the master.
###### Theorem 1.
Consider any balanced $(n,k,r,c)$ assignment $D$, where each server independently communicates with the master with probability $p$. The expectation of the number of distinct completed jobs $d$
received is the same for every assignment $D$ amongst all balanced $(n,k,r,c)$
assignments and is given by
(30) $\mathbbm{E}_{D}[d]=n-n(1-p)^{r}$
and the variance is given by
(31) $\sigma_{D}(d)=\sum_{x=0}^{c}\sigma_{D,x}(d){{c\choose x}}p^{x}(1-p)^{c-x}+\sigma_{x\sim B(c,p)}\left[\mathbbm{E}_{D,x}[d]\right]$
where $\sigma_{D,x}(d)$ is given by the expression in Equation (22), and the second term, which follows from the law of total variance (used again in the proof of Theorem 2 below), is identical for all balanced $(n,k,r,c)$ assignments since the conditional means $\mathbbm{E}_{D,x}[d]$ are.
###### Proof.
Observe that under this setup, the number of servers $x$ that communicate with the master is given by the binomial distribution $B(c,p)$, and that, conditioned on $x$, any set of $x$ servers is equally likely to communicate with the master.
We can thus say that
(32) $\displaystyle\mathbbm{E}_{D}[d]=$ $\displaystyle\mathbbm{E}_{x\sim
B(c,p)}\mathbbm{E}_{D,x}[d]$ (33)
$\displaystyle\overset{(a)}{=}\sum\limits_{x=0}^{c}n\left(1-\frac{{c-r\choose
x}}{{c\choose x}}\right){{c\choose x}}p^{x}(1-p)^{c-x}$ (34)
$\displaystyle=n-n(1-p)^{r}\sum\limits_{x=0}^{c}{{c-r\choose
x}}p^{x}(1-p)^{c-r-x}=n-n(1-p)^{r}$
Note that $(a)$ follows from the expression for the conditional mean $\mathbbm{E}_{D,x}[d]$ in Theorem 1. Using a very similar technique, together with the law of total variance, one can obtain the expression for $\sigma_{D}(d)$ as well. ∎
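The mixing argument in the proof above is easy to check numerically; the following Python sketch (ours, with illustrative names) evaluates the binomial mixture of the conditional means and compares it against the closed form (30):

```python
from math import comb

def mean_binomial_returns(n, r, c, p):
    """E[d] when each server returns independently with probability p,
    via the conditional means from Theorem 1 mixed over x ~ B(c, p).
    Note math.comb(a, b) returns 0 when b > a >= 0."""
    total = 0.0
    for x in range(c + 1):
        cond_mean = n * (1 - comb(c - r, x) / comb(c, x))
        total += cond_mean * comb(c, x) * p**x * (1 - p)**(c - x)
    return total

# Agrees with the closed form n - n(1-p)^r:
n, r, c, p = 9, 3, 9, 0.3
assert abs(mean_binomial_returns(n, r, c, p) - (n - n * (1 - p)**r)) < 1e-9
```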
We can actually generalize the results of Theorems 5 and 8 to a more general setup where the number of servers that return is not fixed but is sampled from some distribution $\mathcal{P}$. However, we require that any two subsets of servers $S_{1}$ and $S_{2}$ with $|S_{1}|=|S_{2}|$ are equally likely to be the set of servers that communicates with the master. Formally, we study the setup where $x$ is sampled from a distribution $\mathcal{P}$ and, conditioned on $x$, any subset of $x$ servers is equally likely to be the set of servers that communicates with the master. This precisely captures the case where every server is independently able to communicate with the master with probability $p$, in which case $\mathcal{P}$ is given by $B(c,p)$.
###### Theorem 2.
Let us consider $x\sim\mathcal{P}$. Conditioned on $x$, we study the setup
where any set of $x$ servers is equally likely to communicate with the master.
Then the proximally compact assignment (if it exists) attains the least variance of the number of distinct jobs received at the master amongst all balanced $(n,k,r,c)$ assignment schemes.
###### Proof.
Let $d$ denote the number of distinct jobs received when a uniformly random set of $x$ servers returns. In our problem, $x$ itself is sampled from a distribution $\mathcal{P}$. Let us denote the variance in this setup, under a given assignment ${D}$ of jobs to servers, by $\sigma_{{D},x\sim\mathcal{P}}(d)$.
Now, using the law of total variance (Eve’s law), we can say that
$\sigma_{{D},x\sim\mathcal{P}}(d)=\mathbb{E}_{x\sim\mathcal{P}}[\sigma_{{D},x}(d)]+\sigma_{x\sim\mathcal{P}}[\mathbb{E}_{{D},x}(d)]$
Now consider assignments $D$ and $D_{1}$ such that $D$ is a proximally compact $(n,k,r,c)$ assignment scheme and $D_{1}$ is any balanced $(n,k,r,c)$ assignment scheme. We know from Theorem 5 that $\sigma_{{D},x}(d)\leq\sigma_{{D}_{1},x}(d)$ for every $x$.
We also know that $\mathbbm{E}_{{D},x}(d)=\mathbbm{E}_{{D}_{1},x}(d)$ from
Theorem 1. Combining the two properties, we get that
$\sigma_{{D},x\sim\mathcal{P}}(d)\leq\sigma_{{D}_{1},x\sim\mathcal{P}}(d)$
thus proving the theorem.
∎
Similarly, we can prove a result corresponding to that of Theorem 8 for this
setup.
###### Theorem 3.
Let us consider $x\sim\mathcal{P}$. Conditioned on $x$, if any set of $x$
servers is equally likely to communicate with the master, then the stretched compact $(n,k,r,c)$ assignment scheme (if it exists) attains the largest variance of the number of distinct jobs received at the master amongst all balanced $(n,k,r,c)$ assignment schemes.
## 8\. Conclusion
In this work, we study the mean and the variance of the number of distinct jobs received at the master under various assignment schemes, and show that assignment schemes based on repetition coding and block designs attain the largest and the least variance, respectively. However, such designs are not always known to exist, and the famous Bruck-Ryser-Chowla theorem in (Sprott, 1955) gives only necessary conditions. In future work, it would be interesting to see whether small modifications of such designs remain close to extremal variance in the cases where the designs do not exist. Another direction of work could be to extend the results beyond the second moment to the $t^{th}$ moment and investigate whether constructions based on $t$-designs attain extremal results.
## 9\. Acknowledgements
Most of this work was performed when SS was an undergraduate at Indian
Institute of Technology, Bombay. SS would like to thank Siddharth Chandak for
insightful discussions on Section 7.
## References
* AWS ([n. d.]) [n. d.]. Distributed Computing AWS. https://aws.amazon.com/what-is/distributed-computing/.
* Goo ([n. d.]) [n. d.]. Distributed Computing Google Cloud. https://cloud2data.com/distributed-computing-on-the-google-cloud-platform/.
* Azu ([n. d.]) [n. d.]. Distributed Computing Microsoft Azure. https://cloud2data.com/distributed-computing-in-microsoft-azure/.
* Addelman (1969) Sidney Addelman. 1969\. The generalized randomized block design. _The American Statistician_ 23, 4 (1969), 35–36.
* Aktas and Soljanin (2019) Mehmet Fatih Aktas and Emina Soljanin. 2019. Straggler Mitigation at Scale. _CoRR_ abs/1906.10664 (2019). arXiv:1906.10664 http://arxiv.org/abs/1906.10664
* Ammar et al. (2004) Bassem Ammar, Bahram Honary, Yu Kou, Jun Xu, and Shu Lin. 2004. Construction of low-density parity-check codes based on balanced incomplete block designs. _IEEE Transactions on information Theory_ 50, 6 (2004), 1257–1269.
* Ananthanarayanan et al. (2013a) Ganesh Ananthanarayanan, Ali Ghodsi, Scott Shenker, and Ion Stoica. 2013a. Effective Straggler Mitigation: Attack of the Clones. In _Proceedings of the 10th USENIX Conference on Networked Systems Design and Implementation_ (Lombard, IL) _(nsdi’13)_. USENIX Association, USA, 185–198.
* Ananthanarayanan et al. (2013b) Ganesh Ananthanarayanan, Ali Ghodsi, Scott Shenker, and Ion Stoica. 2013b. Effective straggler mitigation: Attack of the clones. In _10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13)_. 185–198.
* Arlitt and Williamson (1996) Martin F. Arlitt and Carey L. Williamson. 1996. Web Server Workload Characterization: The Search for Invariants. In _Proceedings of the 1996 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems_ (Philadelphia, Pennsylvania, USA) _(SIGMETRICS ’96)_. Association for Computing Machinery, New York, NY, USA, 126–137. https://doi.org/10.1145/233013.233034
* Babu and Kumar (2015) Balaji Srinivasan Babu and P. Vijay Kumar. 2015. On Partial Maximally-Recoverable and Maximally-Recoverable Codes. _CoRR_ abs/1501.07130 (2015). arXiv:1501.07130 http://arxiv.org/abs/1501.07130
* Bose (1939) R. C. Bose. 1939\. On the construction of balanced incomplete block designs. _Annals of Eugenics_ 9, 4 (1939), 353–399. https://doi.org/10.1111/j.1469-1809.1939.tb02219.x
* Bose and Nair (1939) Raj Chandra Bose and K Raghavan Nair. 1939. Partially balanced incomplete block designs. _Sankhyā: The Indian Journal of Statistics_ (1939), 337–372.
* Chen et al. (2014) Shengbo Chen, Yin Sun, Longbo Huang, Prasun Sinha, Guanfeng Liang, Xin Liu, Ness B Shroff, et al. 2014\. When queueing meets coding: Optimal-latency data retrieving scheme in storage clouds. In _IEEE INFOCOM 2014-IEEE Conference on Computer Communications_. IEEE, 1042–1050.
* Colajanni et al. (1998) Michele Colajanni, Philip S. Yu, and Daniel M Dias. 1998\. Analysis of task assignment policies in scalable distributed Web-server systems. _IEEE transactions on Parallel and Distributed Systems_ 9, 6 (1998), 585–600.
* Colbourn and Dinitz (2006) Charles J. Colbourn and Jeffrey H. Dinitz. 2006. _Handbook of Combinatorial Designs, Second Edition (Discrete Mathematics and Its Applications)_. Chapman & Hall/CRC.
* Crovella and Bestavros (1997) M.E. Crovella and A. Bestavros. 1997. Self-similarity in World Wide Web traffic: evidence and possible causes. _IEEE/ACM Transactions on Networking_ 5, 6 (1997), 835–846. https://doi.org/10.1109/90.650143
* Crovella et al. (1998) Mark E Crovella, Murad S Taqqu, and Azer Bestavros. 1998\. Heavy-tailed probability distributions in the World Wide Web. _A practical guide to heavy tails_ 1 (1998), 3–26.
* Dutta et al. (2017) S. Dutta, V. Cadambe, and P. Grover. 2017. Coded convolution for parallel and distributed computing within a deadline. In _2017 IEEE International Symposium on Information Theory (ISIT)_. 2403–2407. https://doi.org/10.1109/ISIT.2017.8006960
* Ferdinand and Draper (2018) Nuwan Ferdinand and Stark C Draper. 2018. Hierarchical coded computation. In _2018 IEEE International Symposium on Information Theory (ISIT)_. IEEE, 1620–1624.
* Harchol-Balter (2000) Mor Harchol-Balter. 2000\. Task assignment with unknown duration. In _Proceedings 20th IEEE international conference on distributed computing systems_. IEEE, 214–224.
* Harchol-Balter et al. (1999) Mor Harchol-Balter, Mark E Crovella, and Cristina D Murta. 1999\. On choosing a task assignment policy for a distributed server system. _J. Parallel and Distrib. Comput._ 59, 2 (1999), 204–228.
* Joshi et al. (2015) Gauri Joshi, Emina Soljanin, and Gregory Wornell. 2015\. Queues with redundancy: Latency-cost analysis. _ACM SIGMETRICS Performance Evaluation Review_ 43, 2 (2015), 54–56.
* Joshi et al. (2017) Gauri Joshi, Emina Soljanin, and Gregory Wornell. 2017\. Efficient Redundancy Techniques for Latency Reduction in Cloud Systems. _ACM Trans. Model. Perform. Eval. Comput. Syst._ 2, 2, Article 12 (apr 2017), 30 pages. https://doi.org/10.1145/3055281
* Kadhe et al. (2019) Swanand Kadhe, O Ozan Koyluoglu, and Kannan Ramchandran. 2019\. Gradient coding based on block designs for mitigating adversarial stragglers. In _2019 IEEE International Symposium on Information Theory (ISIT)_. IEEE, 2813–2817.
* Karakus et al. (2019) Can Karakus, Yifan Sun, Suhas Diggavi, and Wotao Yin. 2019\. Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning. _Journal of Machine Learning Research_ 20, 72 (2019), 1–47. http://jmlr.org/papers/v20/18-148.html
* Kiefer (1975) J. Kiefer. 1975\. Balanced Block Designs and Generalized Youden Designs, I. Construction (Patchwork). _The Annals of Statistics_ 3, 1 (1975), 109 – 118. https://doi.org/10.1214/aos/1176343002
* Korhonen and Frossard (2009) Jari Korhonen and Pascal Frossard. 2009. Flexible forward error correction codes with application to partial media data recovery. _Signal Processing: Image Communication_ 24, 3 (2009), 229–242. https://doi.org/10.1016/j.image.2008.12.005
* Kshirsagar (1958) A. M. Kshirsagar. 1958\. A Note on Incomplete Block Designs. _The Annals of Mathematical Statistics_ 29, 3 (1958), 907–910. http://www.jstor.org/stable/2237276
* Lee et al. (2017) Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran. 2017\. Speeding up distributed machine learning using codes. _IEEE Transactions on Information Theory_ 64, 3 (2017), 1514–1529.
* Li and Avestimehr (2020) Songze Li and Salman Avestimehr. 2020. Coded computing. _Foundations and Trends® in Communications and Information Theory_ 17, 1 (2020).
* Maity et al. (2019) Raj Kumar Maity, Ankit Singh Rawa, and Arya Mazumdar. 2019\. Robust gradient descent via moment encoding and LDPC codes. In _2019 IEEE International Symposium on Information Theory (ISIT)_. IEEE, 2734–2738.
* Mallick et al. (2020) Ankur Mallick, Malhar Chaudhari, Utsav Sheth, Ganesh Palanikumar, and Gauri Joshi. 2020\. Rateless codes for near-perfect load balancing in distributed matrix-vector multiplication. In _Abstracts of the 2020 SIGMETRICS/Performance Joint International Conference on Measurement and Modeling of Computer Systems_. 95–96.
* Ozfatura et al. (2019) Emre Ozfatura, Deniz Gündüz, and Sennur Ulukus. 2019\. Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers. In _2019 IEEE International Symposium on Information Theory (ISIT)_. 2729–2733. https://doi.org/10.1109/ISIT.2019.8849684
* Ozfatura et al. (2021) Emre Ozfatura, Sennur Ulukus, and Deniz Gündüz. 2021\. Coded distributed computing with partial recovery. _IEEE Transactions on Information Theory_ 68, 3 (2021), 1945–1959.
* Sakorikar and Wang (2021) Animesh Sakorikar and Lele Wang. 2021. Variants on block design based gradient codes for adversarial stragglers. In _2021 11th International Symposium on Topics in Coding (ISTC)_. IEEE, 1–5.
* Sarmasarkar et al. (2022) Sahasrajit Sarmasarkar, V Lalitha, and Nikhil Karamchandani. 2022\. On gradient coding with partial recovery. _IEEE Transactions on Communications_ 71, 2 (2022), 644–657.
* Semchedine et al. (2011) Fouzi Semchedine, Louiza Bouallouche-Medjkoune, and Djamil Aissani. 2011\. Task assignment policies in distributed server systems: A survey. _Journal of network and Computer Applications_ 34, 4 (2011), 1123–1130.
* Shieh and Jan (2004) Gwowen Shieh and Show-Li Jan. 2004. The effectiveness of randomized complete block design. _Statistica Neerlandica_ 58, 1 (2004), 111–124.
* Smith (1968) KJC Smith. 1968\. _An application of incomplete block designs to the construction of error-correcting codes_. Technical Report. North Carolina State University. Dept. of Statistics.
* Sprott (1955) D. A. Sprott. 1955\. Balanced Incomplete Block Designs and Tactical Configurations. _Ann. Math. Statist._ 26, 4 (12 1955), 752–758. https://doi.org/10.1214/aoms/1177728433
* Sun et al. (2019) Yuxuan Sun, Junlin Zhao, Sheng Zhou, and Deniz Gunduz. 2019\. Heterogeneous Coded Computation across Heterogeneous Workers. In _2019 IEEE Global Communications Conference (GLOBECOM)_. 1–6. https://doi.org/10.1109/GLOBECOM38437.2019.9014006
* Szajda et al. (2005) Doug Szajda, Barry Lawson, and Jason Owen. 2005\. Toward an Optimal Redundancy Strategy for Distributed Computations. In _2005 IEEE International Conference on Cluster Computing_. 1–11. https://doi.org/10.1109/CLUSTR.2005.347045
* Tandon et al. (2017) Rashish Tandon, Qi Lei, Alexandros G. Dimakis, and Nikos Karampatziakis. 2017. Gradient Coding: Avoiding Stragglers in Distributed Learning. In _Proceedings of the 34th International Conference on Machine Learning_ _(Proceedings of Machine Learning Research, Vol. 70)_ , Doina Precup and Yee Whye Teh (Eds.). PMLR, 3368–3376. https://proceedings.mlr.press/v70/tandon17a.html
* Wang et al. (2014) Da Wang, Gauri Joshi, and Gregory Wornell. 2014. Efficient Task Replication for Fast Response Times in Parallel Computation. _SIGMETRICS Perform. Eval. Rev._ 42, 1 (jun 2014), 599–600. https://doi.org/10.1145/2637364.2592042
* Wang et al. (2015) Da Wang, Gauri Joshi, and Gregory Wornell. 2015. Using Straggler Replication to Reduce Latency in Large-Scale Parallel Computing. _SIGMETRICS Perform. Eval. Rev._ 43, 3 (nov 2015), 7–11. https://doi.org/10.1145/2847220.2847223
* Wang et al. (2019a) Haozhao Wang, Song Guo, Bin Tang, Ruixuan Li, and Chengjie Li. 2019a. Heterogeneity-aware Gradient Coding for Straggler Tolerance. In _2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS)_. 555–564. https://doi.org/10.1109/ICDCS.2019.00062
* Wang et al. (2019b) Sinong Wang, Jiashang Liu, and Ness Shroff. 2019b. Fundamental Limits of Approximate Gradient Coding. _Proc. ACM Meas. Anal. Comput. Syst._ 3, 3, Article 52 (dec 2019), 22 pages. https://doi.org/10.1145/3366700
* Williams et al. (2005) Adepele Williams, Martin Arlitt, Carey Williamson, and Ken Barker. 2005. Web workload characterization: Ten years later. _Web content delivery_ (2005), 3–21.
* Yang et al. (2019) Chien-Sheng Yang, Ramtin Pedarsani, and A. Salman Avestimehr. 2019\. Timely-Throughput Optimal Coded Computing over Cloud Networks. In _Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing_ (Catania, Italy) _(Mobihoc ’19)_. Association for Computing Machinery, New York, NY, USA, 301–310. https://doi.org/10.1145/3323679.3326528
* Yeh (1986) Ching-Ming Yeh. 1986\. Conditions for Universal Optimality of Block Designs. _Biometrika_ 73, 3 (1986), 701–706. http://www.jstor.org/stable/2336535
* Yu et al. (2019) Qian Yu, Songze Li, Netanel Raviv, Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, and Salman A Avestimehr. 2019. Lagrange coded computing: Optimal design for resiliency, security, and privacy. In _The 22nd International Conference on Artificial Intelligence and Statistics_. PMLR, 1215–1225.
* Yu et al. (2017) Qian Yu, Mohammad Ali Maddah-Ali, and A Salman Avestimehr. 2017\. Polynomial codes: an optimal design for high-dimensional coded matrix multiplication. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_. 4406–4416.
* Zhang et al. (2013) Qi Zhang, Mohamed Faten Zhani, Raouf Boutaba, and Joseph L. Hellerstein. 2013. Harmony: Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud. In _2013 IEEE 33rd International Conference on Distributed Computing Systems_. 510–519. https://doi.org/10.1109/ICDCS.2013.28
* Zhao et al. (2014) Xu Zhao, Ling Liu, Qi Zhang, and Xiaoshe Dong. 2014\. Improving MapReduce Performance in a Heterogeneous Cloud: A Measurement Study. In _2014 IEEE 7th International Conference on Cloud Computing_. 400–407. https://doi.org/10.1109/CLOUD.2014.61
## Appendix A Assignment equivalence when number of servers equals number of
jobs
So far, we have been viewing the $(n,k,r,c)$ balanced assignment as a task of
assigning jobs to servers. Equivalently, one can also view the same balanced
assignment as assigning servers to jobs. Since every job is assigned to $r$
distinct servers, there are a total of ${r\choose 2}$ server pairs that can be
associated to each job. Like the shape vector $h_{D}$ that was built from
information about frequency of job-pairs at the servers in a given assignment,
one may build an equivalent shape vector based on frequency of server-pairs
that are associated to the jobs. Thus every server-pair can possibly be
associated to $m$ jobs, where $0\leq m\leq k$. Thus, the shape vector corresponding to the server-assignment viewpoint is a vector in $\mathbb{N}^{k+1}$. One may now define compact balanced $(n,k,r,c)$ server assignments as those balanced assignments whose server shape vector has at most two nonzero entries. Proximally compact balanced $(n,k,r,c)$ server assignments are defined through the analogous definition below.
###### Definition 4.
Given a balanced $(n,k,r,c)$ assignment, we call it proximally compact
$(n,k,r,c)$ server assignment if for every pair of distinct servers $s_{i}$
and $s_{j}$ in $\mathcal{S}$, the number of jobs assigned to both $s_{i}$ and
$s_{j}$ simultaneously is exactly $\ell$ or $\ell+1$ for some integer $\ell$.
###### Lemma 2.
For any proximally compact $(n,k,r,c)$ server assignment, we must have $\ell=\Bigl{\lfloor}\frac{n{{r\choose 2}}}{{{c\choose 2}}}\Bigr{\rfloor}$.
This lemma can be proved in a way very similar to that of Lemma 1.
We may observe that, in general, proximally compact server assignment schemes need not be proximally compact job assignment schemes. We demonstrate this with Example 1 below.
###### Example 1.
Table 3 describes a proximally compact $(14,6,3,7)$ server assignment which is not a proximally compact job assignment. Note that in this scheme, every pair of servers has exactly $2$ jobs in common, so it is a proximally compact server assignment. However, some pairs of jobs appear together at exactly one server, like $(a_{1},a_{2})$; some pairs appear together at $2$ servers, like $(a_{1},a_{8})$; and some pairs, like $(a_{3},a_{10})$, appear together at $3$ servers. Thus it is not a proximally compact job assignment.
Jobs \ Servers | $s_{1}$ | $s_{2}$ | $s_{3}$ | $s_{4}$ | $s_{5}$ | $s_{6}$ | $s_{7}$
---|---|---|---|---|---|---|---
$a_{1}$ | | 1 | | 1 | | 1 |
$a_{2}$ | 1 | | | 1 | 1 | |
$a_{3}$ | | | 1 | 1 | | | 1
$a_{4}$ | 1 | 1 | 1 | | | |
$a_{5}$ | | 1 | | | 1 | | 1
$a_{6}$ | 1 | | | | | 1 | 1
$a_{7}$ | | | 1 | | 1 | 1 |
$a_{8}$ | 1 | | | 1 | | 1 |
$a_{9}$ | | 1 | | 1 | 1 | |
$a_{10}$ | | | 1 | 1 | | | 1
$a_{11}$ | 1 | 1 | 1 | | | |
$a_{12}$ | 1 | | | | 1 | | 1
$a_{13}$ | | 1 | | | | 1 | 1
$a_{14}$ | | | 1 | | 1 | 1 |
Table 3. Assignment of jobs to various servers in a proximally compact
$(14,6,3,7)$ server assignment scheme
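The two claims about Table 3 can be verified mechanically; the following Python sketch (ours) transcribes the table, checks that every server pair shares exactly $2$ jobs, and confirms that the job-pair intersections are not confined to two consecutive values:

```python
from itertools import combinations

# Rows = jobs a_1..a_14, columns = servers s_1..s_7, transcribed from Table 3.
A = [
    [0,1,0,1,0,1,0], [1,0,0,1,1,0,0], [0,0,1,1,0,0,1], [1,1,1,0,0,0,0],
    [0,1,0,0,1,0,1], [1,0,0,0,0,1,1], [0,0,1,0,1,1,0], [1,0,0,1,0,1,0],
    [0,1,0,1,1,0,0], [0,0,1,1,0,0,1], [1,1,1,0,0,0,0], [1,0,0,0,1,0,1],
    [0,1,0,0,0,1,1], [0,0,1,0,1,1,0],
]
# Balancedness: every job at r = 3 servers, every server has k = 6 jobs.
assert all(sum(row) == 3 for row in A)
assert all(sum(A[i][j] for i in range(14)) == 6 for j in range(7))
# Every pair of servers shares exactly 2 jobs (proximally compact, server side).
for j1, j2 in combinations(range(7), 2):
    assert sum(A[i][j1] * A[i][j2] for i in range(14)) == 2
# Job-pair intersections span more than two consecutive values, so the scheme
# is not a proximally compact job assignment.
shared = {sum(a * b for a, b in zip(A[i], A[j]))
          for i, j in combinations(range(14), 2)}
assert max(shared) - min(shared) > 1
```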
However, when $n=c$, our subsequent result shows that proximally compact server
assignment schemes and proximally compact job assignment schemes are
equivalent.
###### Theorem 2.
Amongst balanced $(n,k,k,n)$ assignments, every proximally compact $(n,k,k,n)$
job assignment is also a proximally compact server assignment and vice-versa.
To prove this theorem, we first prove Lemma 3. For a balanced $(n,k,r,c)$ assignment $D$, define the random variable $Y^{D}$ as the number of servers to which a pair of jobs chosen uniformly at random is assigned together. Formally,
(35) ${}\mathbb{P}[Y^{D}=p]=\frac{\mathfrak{m}^{D}(p)}{{{n\choose 2}}}\quad\text{for any integer }p\in[0,r]$
Observe that this is a valid distribution, as $\sum\limits_{p=0}^{r}\mathfrak{m}^{D}(p)={{n\choose 2}}$ by Equation (19).
###### Lemma 3.
For any balanced $(n,k,r,c)$ assignment $D$, the variance of $Y^{D}$ is related by an affine transformation to the variance of the number of distinct jobs $d$ received at the master when any $2$ servers chosen uniformly at random return, i.e., are able to communicate their results to the master. Precisely,
(36) $\sigma_{D,2}(d)=\frac{{{n\choose 2}}\sigma(Y^{D})+\frac{\left(c{k\choose 2}\right)^{2}}{{n\choose 2}}+n\left({r\choose 2}\right)-c\left({k\choose 2}\right)}{{c\choose 2}}-\left(\frac{n{r\choose 2}}{{c\choose 2}}\right)^{2}$
Further, for $n=c$, we have $\sigma(Y^{D})=\sigma_{D,2}(d)$.
###### Proof.
Observe that $g(0,x)=\sum\limits_{t=2}^{r}\sum\limits_{u=2}^{r}(t-1)(u-1){r\choose t}{r\choose u}{(c-2r)\choose(x-t-u)}$ (see equation (49)), and therefore $g(0,2)=0$. Further, using equation (17), we have $g(m+1,2)-g(m,2)=(c-2)-2(c-r-1)+(c-2r+m)=m$. Therefore, $g(m,2)=1+2+\cdots+(m-1)=\frac{m(m-1)}{2}$.
Consider the numerator of the first term of $\sigma_{D,x}(d)$ in equation (22), which was shown to be $2\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,x)+n\left(\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(x-t-1)}\right)$. Here we consider $x=2$, as only two servers communicate back to the master. So for $x=2$,
$\displaystyle 2\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,2)+n\left(\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(2-t-1)}\right)=$ $\displaystyle\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)m(m-1)+n{r\choose 2}$
$\displaystyle\overset{(a)}{=}$
$\displaystyle\sum\limits_{m=0}^{r}m^{2}\mathfrak{m}^{D}(m)+n{r\choose
2}-c{k\choose 2}$
$(a)$ follows since $\sum\limits_{m=0}^{r}m\mathfrak{m}^{D}(m)=c{k\choose 2}$ by Equation (20).
Therefore,
(37) $\displaystyle{}\sigma_{D,2}(d)=$
$\displaystyle\frac{\sum\limits_{m=0}^{r}m^{2}\mathfrak{m}^{D}(m)+n{r\choose
2}-c{k\choose 2}}{{c\choose 2}}-\left(\frac{n{r\choose 2}}{{c\choose
2}}\right)^{2}$
Observe
$\mathbbm{E}[Y^{D}]=\frac{\sum_{m=0}^{r}m\mathfrak{m}^{D}(m)}{{{n\choose
2}}}=\frac{c{{k\choose 2}}}{{{n\choose 2}}}$ as
$\sum\limits_{m=0}^{r}m\mathfrak{m}^{D}(m)=c{k\choose 2}$ by equation (20) and
using the definition of $Y^{D}$ in Equation (35).
$\displaystyle{}\sigma(Y^{D})=$
$\displaystyle\mathbbm{E}[(Y^{D})^{2}]-(\mathbbm{E}[Y^{D}])^{2}$ (38)
$\displaystyle=$ $\displaystyle\frac{1}{{n\choose
2}}\sum_{m=0}^{r}m^{2}\mathfrak{m}^{D}(m)-\left(\frac{c{{k\choose
2}}}{{{n\choose 2}}}\right)^{2}$
Thus, using equations (38) and (37), we obtain the expression (36) in the statement of the lemma. Further, when $n=c$, we have $r=k$ and the two expressions (38) and (37) become equal. ∎
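The $n=c$ case of Lemma 3 can be checked numerically; the following Python sketch (ours) computes both $\sigma(Y^{D})$ and $\sigma_{D,2}(d)$ for a toy balanced $(4,2,2,4)$ assignment (where $n=c$ and $r=k$) and confirms that they coincide:

```python
from itertools import combinations

A = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
n = c = 4

# Distribution of Y^D: shared-server count of a uniformly random job pair.
ys = [sum(a * b for a, b in zip(A[i], A[j])) for i, j in combinations(range(n), 2)]
var_y = sum(y * y for y in ys) / len(ys) - (sum(ys) / len(ys)) ** 2

# Distribution of d when 2 uniformly random servers return.
ds = [len({i for i in range(n) for j in S if A[i][j]})
      for S in combinations(range(c), 2)]
var_d = sum(d * d for d in ds) / len(ds) - (sum(ds) / len(ds)) ** 2

assert abs(var_y - var_d) < 1e-9
```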
We now prove Theorem 2.
###### Proof.
Consider the set of balanced $(n,k,k,n)$ assignment schemes. Recall that $Y^{D}$ denotes the number of servers at which a pair of jobs chosen uniformly at random occurs together; for $n=c$ and $k=r$, we have
(39) ${}\sigma_{D,2}(d)=\sigma(Y^{D}).$
Further $\mathbb{E}[Y^{D}]=\frac{n{{k\choose 2}}}{{{n\choose 2}}}$. Also, observe from Theorem 1 (with $x=2$, $n=c$ and $k=r$) that
(40)
${}\mathbb{E}_{D,2}[d]=n\left(1-\frac{(n-k)(n-k-1)}{n(n-1)}\right)=\left(2k-n\frac{k(k-1)}{n(n-1)}\right)=\mathbb{E}_{D}[2k-Y^{D}]$
We first show that every proximally compact $(n,k,k,n)$ job assignment is also
a proximally compact server assignment scheme.
Suppose not, and consider a balanced assignment scheme $D$ which is a proximally compact job assignment but not a proximally compact server assignment. Consider the scenario where exactly $x=2$ randomly chosen servers are able to communicate with the master. Since $D$ is not a proximally compact server assignment, $d$ (the number of distinct jobs received) can take at least two distinct non-consecutive integral values, and hence $2k-d$ also has at least two distinct non-consecutive integral values in its support. However, the random variable $Y^{D}$ has a support of at most $2$ consecutive values (since $D$ is a proximally compact job assignment). Now observe that the random variables $2k-d$ and $Y^{D}$ have the same expectation (shown above in Equation (40)); and for a fixed mean, an integer-valued distribution whose support contains two non-consecutive integers has strictly larger variance than one supported on at most two consecutive integers. Therefore the variance of $2k-d$ is strictly larger than that of $Y^{D}$, which contradicts equation (39).
Now consider the reverse situation, where a balanced assignment scheme is a proximally compact server assignment but not a proximally compact job assignment. Again consider the scenario where exactly $x=2$ randomly chosen servers are able to communicate with the master. The number of distinct jobs $d$ received at the master can take at most $2$ consecutive values (as it is a proximally compact server assignment), and therefore the random variable $2k-d$ has a support of at most $2$ consecutive values. Since the assignment scheme is not a proximally compact job assignment, $Y^{D}$ has at least $2$ non-consecutive values in its support. As the random variables $Y^{D}$ and $2k-d$ have the same expectation according to (40), the variance of $Y^{D}$ must, by the same reasoning as above, be strictly larger than that of $2k-d$, which in turn contradicts equation (39).
Thus, the set of proximally compact job assignments is identical to the set of
proximally compact server assignments. ∎
Thus, Theorem 2 ensures that when the number of servers equals the number of jobs, proximally compact server assignments coincide with proximally compact job assignments, and hence, by Theorem 5, they minimise the variance of the number of distinct jobs received at the master.
## Appendix B Proof of Theorem 1
###### Proof.
The number of distinct jobs $d$ received by the master when the servers in a subset $\hat{S}$ (with $|\hat{S}|=x$) are able to communicate with the master is given by
(41)
${}d=\left|\bigcup_{j\in\hat{S}}\text{supp}(A_{D}[:,j])\right|=\left(k\times
x-\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}\right)$
Note that the term $\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}$ subtracts the excess count arising from jobs that are received multiple times from the various servers in $\hat{S}$.
$\displaystyle{}\mathbb{E}_{D,x}[d]=\frac{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(k\times
x-\sum\limits_{i=1}^{n}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1})}{\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}1}{=}$ $\displaystyle k\times
x-\frac{\sum\limits_{i=1}^{n}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}}{{c\choose
x}}$ (42) $\displaystyle{=}$ $\displaystyle k\times
x-\frac{n\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}}{{c\choose
x}}$
Observe that for every job $a_{i}$ in a balanced $(n,k,r,c)$ assignment, the
quantity
$\sum\limits_{\hat{S}\subset\mathcal{S},|\hat{S}|=x}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}$
is the same, i.e., this summation is independent of $i$. We now show that the
quantity
$\sum\limits_{\hat{S}\subset\mathcal{S},|\hat{S}|=x}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}$
for any specified $x$, is the same for every balanced $(n,k,r,c)$ assignment ${D}$. We compute this sum by counting the number of subsets
$\hat{S}\subset\mathcal{S}$ of cardinality $x$ which additionally satisfies
the constraint on $\mathfrak{n}^{D}_{i,\hat{S}}=t$ (i.e. job $a_{i}$ is
present in exactly $t$ servers from $\hat{S}$) for every $t$ from $2$ to $r$
(as these cases deal with the job $a_{i}$ appearing more than once in the
subset $\hat{S}$).
$\displaystyle{}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}{=}\sum\limits_{t=1}^{r-1}\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x,\mathfrak{n}^{D}_{i,\hat{S}}=t+1\end{subarray}}t{=}\text{ }$
$\displaystyle\sum\limits_{t=1}^{r-1}t\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x,\mathfrak{n}^{D}_{i,\hat{S}}=t+1\end{subarray}}1$ (43)
$\displaystyle\overset{(a)}{=}$
$\displaystyle\sum\limits_{t=1}^{r-1}t{r\choose(t+1)}{(c-r)\choose(x-t-1)}$
The last equality $(a)$ comes from counting the number of subsets
$\hat{S}\subset\mathcal{S}$ of cardinality $x$ that contain precisely $t+1$
servers that were assigned the job $a_{i}$. Consider the following binomial
expressions
(44) ${}ry(1+y)^{r-1}+1-{(1+y)^{r}}=\sum\limits_{t=0}^{r-1}t{{r\choose
t+1}}y^{t+1}$
(45) ${}(1+y)^{c-r}=\sum\limits_{u=0}^{c-r}{{c-r\choose u}}y^{u}$
Multiplying equations (44) and (45), one observes that
$\sum\limits_{t=1}^{r-1}t{r\choose(t+1)}{(c-r)\choose(x-t-1)}$ is precisely
the coefficient of $y^{x}$ in $ry(1+y)^{c-1}+{(1+y)^{c-r}}-{(1+y)^{c}}$. Thus,
(46)
${}\sum\limits_{t=1}^{r-1}t{r\choose(t+1)}{(c-r)\choose(x-t-1)}=r\times{{c-1\choose
x-1}}+{{c-r\choose x}}-{{c\choose x}}$
Combining equations (42), (43) and (46), we get $\mathbbm{E}_{D,x}[d]=k\times x-\frac{n\left(r\times{{c-1\choose x-1}}+{{c-r\choose x}}-{{c\choose x}}\right)}{{{c\choose x}}}=n\left(1-\frac{{{c-r\choose x}}}{{{c\choose x}}}\right)$
∎
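The binomial identity (46) obtained from the generating-function argument can be spot-checked numerically; a minimal Python sketch (ours):

```python
from math import comb

def C2(a, b):
    # Binomial with the convention C(a, b) = 0 outside 0 <= b <= a.
    return comb(a, b) if 0 <= b <= a else 0

# Identity (46): sum over t equals r*C(c-1, x-1) + C(c-r, x) - C(c, x).
for (r, c) in [(3, 9), (4, 8), (5, 20)]:
    for x in range(c + 1):
        lhs = sum(t * C2(r, t + 1) * C2(c - r, x - t - 1) for t in range(1, r))
        rhs = r * C2(c - 1, x - 1) + C2(c - r, x) - C2(c, x)
        assert lhs == rhs
```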
## Appendix C Proof of Theorem 5
###### Proof.
Let $h_{D}$ be the shape vector corresponding to the proximally compact
balanced $(n,k,r,c)$ job assignment. Thus $h_{D}(i)=0$ for all $i\leq\ell$ and
$i>\ell+2$ for $\ell$ as calculated in Lemma 1. Any balanced $(n,k,r,c)$ job
assignment $D_{1}$ would have a shape vector $h_{D}+v$ where $v\in\ker H$ with
matrix $H$ as defined in (26). Observe from the expression of variance in (22)
that it is only the term $\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,x)$ that
varies amongst the different balanced $(n,k,r,c)$ assignments. Thus it is
enough to show that for every permissible $v\in\ker H$ mentioned above,
$\sum\limits_{m=0}^{r}v(m+1)g(m,x)\geq 0$, in order to conclude that the
proximally compact balanced $(n,k,r,c)$ assignment has the least variance.
We therefore first characterize $v\in\ker H$ that may appear from some
balanced $(n,k,r,c)$ assignment. As both the shape vectors
$h_{D},h_{D}+v\in\mathbb{N}^{r+1}$, therefore $v(i)\geq 0$ for all
$i\neq\ell+1,\ell+2$. Further, as $v\in\ker H$, therefore
$\sum\limits_{i=1}^{r+1}v(i)=0$ and so if $v$ is a nonzero vector, then at
least one of $v(\ell+1),v(\ell+2)$ must be a negative integer. As $v\in\ker
H$, therefore $v$ can be expressed in terms of the basis vectors
$\left\\{h_{i}\right\\}$ listed in (27). Let
$v=\sum\limits_{i=1}^{r-1}\alpha_{i}h_{i}$. We now show that all
$\alpha_{i}\in\mathbb{N}$.
Consider the components of $v$ for $i\leq\ell$. Let $j$ be the smallest index
where $v(j)>0$ and $j\leq\ell$. Then one can inductively argue that
$\alpha_{i}=0$ for all $i<j$ by starting with $i=1$ as $v(i)=0$ for $i<j$.
Further, $\alpha_{j}=v(j)>0$. Now, $v(j+1)=\alpha_{j+1}-2\alpha_{j}\geq 0$
implies that $\alpha_{j+1}\geq 2\alpha_{j}$. Similarly,
$v(j+2)=\alpha_{j+2}-2\alpha_{j+1}+\alpha_{j}$, which implies
$\alpha_{j+2}\geq 2\alpha_{j+1}-\alpha_{j}\geq 3\alpha_{j}$ as $v(j+2)\geq 0$.
On the same lines, $\alpha_{j+3}\geq 2\alpha_{j+2}-\alpha_{j+1}\geq
2(2\alpha_{j+1}-\alpha_{j})-\alpha_{j+1}=3\alpha_{j+1}-2\alpha_{j}\geq
4\alpha_{j}$. Inductively, one can show that $\alpha_{j+k}\geq(k+1)\alpha_{j}$
for all $k\leq(\ell-j)$. Thus $\alpha_{i}\geq 0$ for all $1\leq i\leq\ell$.
This accounts for all $v(i)\geq 0$ for $i\leq\ell$.
Similarly, one can utilize $v(i)\geq 0$ for $i>\ell+2$ to conclude that
$\alpha_{i}\geq 0$ for $\ell+1\leq i\leq r-1$, by proceeding from the other
end. Let $j$ now be the largest index where $v(j)>0$ and $j>\ell+2$. If
$j<r+1$, then $v(r+1)=0$ forces $\alpha_{r-1}=0$. Once again, one can
inductively argue that $\alpha_{i}=0$ for $j-1\leq i\leq r-1$ as the
corresponding $v(i+2)=0$. Further, $\alpha_{j-2}=v(j)>0$. Now,
$v(j-1)=\alpha_{j-3}-2\alpha_{j-2}\geq 0$ implies that $\alpha_{j-3}\geq
2\alpha_{j-2}$. Using $v(j-2)\geq 0$, one obtains $\alpha_{j-4}\geq
2\alpha_{j-3}-\alpha_{j-2}\geq 3\alpha_{j-2}$. Reflecting the argument used
before, one can conclude that $\alpha_{j-2-k}\geq(k+1)\alpha_{j-2}$ for $0\leq
k\leq(j-\ell-3)$ and so $\alpha_{i}\geq 0$ for $\ell+1\leq i\leq r-1$. Thus
all $\alpha_{i}\in\mathbb{N}$.
As a result, we get
$\displaystyle\sum\limits_{m=0}^{r}v(m+1)g(m,x)=\sum\limits_{i=1}^{r-1}\alpha_{i}\left(g(i-1,x)-2g(i,x)+g(i+1,x)\right)$
By Lemma 3, we know that
$g(m+1,x)-g(m,x)={{c-2}\choose{x-1}}-2{{c-r-1}\choose{x-1}}+{{c-2r+m}\choose{x-1}}$
and therefore
(47) $\displaystyle g(i-1,x)-2g(i,x)+g(i+1,x)$ $\displaystyle=$ $\displaystyle
g(i+1,x)-g(i,x)-\left\\{g(i,x)-g(i-1,x)\right\\}$ $\displaystyle=$
$\displaystyle{{c-2r+i}\choose{x-1}}-{{c-2r+i-1}\choose{x-1}}$
$\displaystyle=$
$\displaystyle\frac{x-1}{c-2r+i-x+1}{{c-2r+i-1}\choose{x-1}}\geq 0$
Thus we have
(48) $\displaystyle{}\sum\limits_{m=0}^{r}v(m+1)g(m,x)$ $\displaystyle=$
$\displaystyle\sum\limits_{i=1}^{r-1}\alpha_{i}\left(g(i-1,x)-2g(i,x)+g(i+1,x)\right)$
$\displaystyle=$
$\displaystyle\sum\limits_{i=1}^{r-1}\frac{\alpha_{i}(x-1)}{c-2r+i-x+1}{{c-2r+i-1}\choose{x-1}}\geq
0$
As the above is true for every permissible $v\in\ker H$ such that $h_{D}+v$ is
a shape vector, therefore we conclude that proximally compact balanced
$(n,k,r,c)$ assignment has the least variance amongst all balanced $(n,k,r,c)$
assignments. ∎
## Appendix D Proof of Theorem 8
###### Proof.
Let $h_{D}$ be the shape vector corresponding to a stretched compact balanced
$(n,k,r,c)$ job assignment. Then $h_{D}(i)=0$ for all $i\neq 1,r+1$. Any
balanced $(n,k,r,c)$ job assignment $D_{1}$ has a shape vector $h_{D}+v$ where
$v\in\ker H$ with $H$ as defined in (26). Similar to the proof of Theorem 5,
we now characterize $v\in\ker H$ that can arise from some balanced $(n,k,r,c)$
assignment. Note that $v(i)\geq 0$ for all $i\neq 1,r+1$, and the kernel conditions give $\sum\limits_{i=1}^{r+1}v(i)=0$ and $\sum\limits_{i=1}^{r+1}(i-1)v(i)=0$. The second condition yields $rv(r+1)=-\sum_{i=2}^{r}(i-1)v(i)\leq 0$, and subtracting it from $r$ times the first yields $rv(1)=-\sum_{i=2}^{r}(r-i+1)v(i)\leq 0$. Hence $v(1)\leq 0$ and $v(r+1)\leq 0$. As
$v\in\ker H$, therefore $v=\sum\limits_{i=1}^{r-1}\alpha_{i}h_{i}$ where
$\left\\{h_{i}\right\\}$ is the basis for $\ker H$ as listed in (27).
Following the proof of Theorem 5, it is now enough to show that all
$\alpha_{i}\leq 0$ for any permissible nonzero vector $v\in\ker H$ since the
expression in (48) would then be rendered negative, thereby signalling a
decrease in the variance for all balanced $(n,k,r,c)$ assignments with shape
vector $h_{D}+v$.
Observe that as $v(1)\leq 0$, therefore $\alpha_{1}=v(1)\leq 0$. If $r=2$,
then $\ker H$ is one dimensional and therefore $v(1)=v(3)\leq 0$, while
$v(2)=-2v(1)\geq 0\geq v(1)$. If $r>2$, then as $v(2)\geq 0$ and
$v(2)=\alpha_{2}-2\alpha_{1}$, therefore $\alpha_{2}\geq 2\alpha_{1}$. Now if
$r=3$, then $\ker H$ is two dimensional and $\alpha_{2}=v(4)\leq 0$. Further,
$v(3)\geq 0$ implies $\alpha_{1}-2\alpha_{2}\geq 0$ and therefore
$\alpha_{1}\geq 2\alpha_{2}$. Thus, for $r=3$, both $\alpha_{1},\alpha_{2}$ are non-positive and their values are mutually bound by
the constraints $\frac{\alpha_{1}}{2}\geq\alpha_{2}\geq 2\alpha_{1}$ and
$\frac{\alpha_{2}}{2}\geq\alpha_{1}\geq 2\alpha_{2}$.
Now, we consider the cases of $r>3$. In this case, $v(3)\geq 0$ translates to
$2\alpha_{2}\leq\alpha_{1}+\alpha_{3}$. Therefore $4\alpha_{2}\leq
2\alpha_{1}+2\alpha_{3}\leq\alpha_{2}+2\alpha_{3}$ (by using the condition
obtained from $v(2)\geq 0$) which in turn translates to $3\alpha_{2}\leq
2\alpha_{3}$. From the condition $v(4)\geq 0$, we get
$2\alpha_{3}\leq\alpha_{2}+\alpha_{4}$ which can now be manipulated to
$6\alpha_{3}\leq 3\alpha_{2}+3\alpha_{4}\leq 2\alpha_{3}+3\alpha_{4}$ which
gives us $4\alpha_{3}\leq 3\alpha_{4}$. Following the steps mentioned above, one can use the subsequent conditions $v(k)\geq 0$ to show that $k\alpha_{k-1}\leq(k-1)\alpha_{k}$ for $2\leq k\leq(r-1)$. Combining all these inequalities with $\alpha_{r-1}=v(r+1)\leq 0$, one gets $\alpha_{1}\leq\frac{\alpha_{2}}{2}\leq\frac{\alpha_{3}}{3}\leq\cdots\leq\frac{\alpha_{k}}{k}\leq\cdots\leq\frac{\alpha_{r-1}}{r-1}\leq 0$. This proves that all the $\alpha_{i}\leq 0$. ∎
Interestingly, in the proof above one could have started from the other end and, as already shown for the case $r=3$, obtained another set of constraints
$\alpha_{r-1}\leq\frac{\alpha_{r-2}}{2}\leq\cdots\leq\frac{\alpha_{r-k}}{k}\leq\cdots\leq\frac{\alpha_{1}}{r-1}\leq
0$. These interwoven constraints restrict the possible values for the
$\alpha_{i}$ where $v=\sum\limits_{i=1}^{r-1}\alpha_{i}h_{i}$.
## Appendix E Proof of Lemma 2
###### Proof.
Consider a pair of jobs $(a_{i},a_{j})$ such that no server has been assigned both $a_{i}$ and $a_{j}$ together. Then there are precisely $r$ servers that have been assigned $a_{i}$ but not $a_{j}$, another $r$ servers that are assigned $a_{j}$ but not $a_{i}$, while the remaining $c-2r$ servers are assigned neither $a_{i}$ nor $a_{j}$. Then
(49) $g(0,x)=\sum\limits_{\begin{subarray}{c}\hat{S}\subset\mathcal{S};\\\
|\hat{S}|=x\end{subarray}}(\mathfrak{n}^{D}_{i,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{i,\hat{S}}>1}(\mathfrak{n}^{D}_{j,\hat{S}}-1)\mathbbm{1}_{\mathfrak{n}^{D}_{j,\hat{S}}>1}=\sum_{t=2}^{r}\sum_{u=2}^{r}(t-1)(u-1){r\choose
t}{r\choose u}{c-2r\choose x-t-u}$
Clearly, any subset of servers $\hat{S}\subset\mathcal{S}$ of cardinality $x$ that contains at most one server assigned the job $a_{i}$ does not contribute to the sum, and likewise for $a_{j}$. Therefore, one
needs to consider only those subsets $\hat{S}$ of servers that contain at
least two servers that are assigned $a_{i}$ and at least two servers that are
assigned $a_{j}$. In the final expression of equation (49), ${r\choose
t}{r\choose u}{c-2r\choose x-t-u}$ counts the number of subsets of servers
$\hat{S}$ of cardinality $x$ that contain $t$ servers assigned $a_{i}$, $u$
servers assigned $a_{j}$ and $x-t-u$ servers that have been assigned neither.
The summation limits ensure that there are at least $2$ servers assigned
$a_{i}$ and at least $2$ servers assigned $a_{j}$. The expression $(t-1)(u-1)$
is the contribution of each subset $\hat{S}$ that contains $t$ copies of
$a_{i}$ and $u$ copies of $a_{j}$ assigned to its members. A closed-form
expression for $g(0,x)$ can be obtained by considering
(50) ${}ry(1+y)^{r-1}+1-(1+y)^{r}=\sum_{t=1}^{r}(t-1){{r\choose t}}y^{t}$
(51) ${}ry(1+y)^{r-1}+1-(1+y)^{r}=\sum_{u=1}^{r}(u-1){{r\choose u}}y^{u}$
(52) ${}(1+y)^{c-2r}=\sum_{v=0}^{c-2r}{{c-2r\choose v}}y^{v}$
Multiplying these three expressions (50), (51) and (52), we get
$\sum_{t=2}^{r}\sum_{u=2}^{r}(t-1)(u-1){{c-2r\choose x-t-u}}{{r\choose
t}}{{r\choose u}}$ to be the coefficient of $y^{x}$ in
$\left(ry(1+y)^{r-1}+1-(1+y)^{r}\right)^{2}(1+y)^{c-2r}$.
$\displaystyle\sum_{t=2}^{r}\sum_{u=2}^{r}(t-1)(u-1){{c-2r\choose
x-t-u}}{{r\choose t}}{{r\choose u}}$ (53) $\displaystyle=$ $\displaystyle
r^{2}{{c-2\choose x-2}}-2r{{c-1\choose x-1}}+{{c\choose x}}-2{{c-r\choose
x}}+2r{{c-r-1\choose x-1}}+{c-2r\choose x}$
∎
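As a sanity check, the identity (53) can be verified numerically. Below is a minimal sketch in Python with illustrative $(r,c,x)$ triples (not taken from the paper); the helper `C` guards the binomial coefficient against out-of-range arguments.

```python
from math import comb

def C(n, k):
    # binomial coefficient, zero outside the valid range
    return comb(n, k) if n >= 0 and 0 <= k <= n else 0

def double_sum(r, c, x):
    # left-hand side of (53): the double sum from (49)
    return sum((t - 1) * (u - 1) * C(r, t) * C(r, u) * C(c - 2 * r, x - t - u)
               for t in range(2, r + 1) for u in range(2, r + 1))

def closed_form(r, c, x):
    # right-hand side of (53)
    return (r * r * C(c - 2, x - 2) - 2 * r * C(c - 1, x - 1) + C(c, x)
            - 2 * C(c - r, x) + 2 * r * C(c - r - 1, x - 1) + C(c - 2 * r, x))

for (r, c, x) in [(2, 8, 5), (3, 10, 6), (4, 12, 7)]:
    assert double_sum(r, c, x) == closed_form(r, c, x)
```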
## Appendix F Proof of Claim 3
###### Proof.
Let us consider a pair of jobs $(a_{i},a_{j})$ that have been assigned
together to precisely $m$ servers. Without loss of generality, let
$s_{1},s_{2},\cdots,s_{m}$ be the servers that are assigned both the jobs
$a_{i},a_{j}$. Let $s_{m+1},s_{m+2},\cdots,s_{r}$ be the servers that have
been assigned $a_{i}$ but not $a_{j}$. Assume servers
$s_{r+1},s_{r+2},\cdots,s_{2r-m}$ are the servers assigned $a_{j}$ but not
$a_{i}$. The last $c-2r+m$ servers $s_{2r-m+1},s_{2r-m+2},\cdots,s_{c}$ are
the ones that have not been assigned $a_{i}$ or $a_{j}$.
Let another pair of jobs $(a_{i_{1}},a_{j_{1}})$ be such that they have been
assigned together to precisely $m+1$ servers. We now consider a bijective map
$f:\mathcal{S}\rightarrow\mathcal{S}$ described in the following fashion. Let
$f(s_{\ell})$ for $1\leq\ell\leq m+1$ be servers that have been assigned both
the jobs $a_{i_{1}}$ and $a_{j_{1}}$. Let $f(s_{\ell})$ for $m+2\leq\ell\leq
r$ be servers that have been assigned $a_{i_{1}}$ but not $a_{j_{1}}$. Further
let $f(s_{\ell})$ for $r+2\leq\ell\leq 2r-m$ be servers that have been
assigned $a_{j_{1}}$ but not $a_{i_{1}}$. The rest of $f(s_{\ell})$ have not
been assigned $a_{i_{1}}$ or $a_{j_{1}}$. Thus there are two special servers,
namely $s_{m+1}$ (which does job $a_{i}$ but not $a_{j}$) and $s_{r+1}$ (which
does job $a_{j}$ but not $a_{i}$), whose images $f(s_{m+1})$ (which does both
the jobs $a_{i_{1}}$ and $a_{j_{1}}$) and $f(s_{r+1})$ (which does neither
$a_{i_{1}}$ nor $a_{j_{1}}$) we shall pay special attention to.
For any $\hat{S}\subset\mathcal{S}$ of cardinality $x$, let us compare its
contribution to the sum $g(m,x)$ with the contribution of $f(\hat{S})$ towards
$g(m+1,x)$. Clearly, if
$\hat{S}\subset\mathcal{S}\setminus\left\\{s_{m+1},s_{r+1}\right\\}$, then the
contribution of $\hat{S}$ towards $g(m,x)$ is exactly the same as the
contribution of $f(\hat{S})$ to $g(m+1,x)$. Similarly, if
$s_{m+1},s_{r+1}\in\hat{S}$, then the contributions of $\hat{S}$ towards
$g(m,x)$ and of $f(\hat{S})$ towards $g(m+1,x)$ are exactly the same. Therefore it
suffices to only consider those subsets $\hat{S}$ of cardinality $x$ that
contain exactly one of the two special servers
$\left\\{s_{m+1},s_{r+1}\right\\}$ to evaluate the difference
$g(m+1,x)-g(m,x)$. Hence we look at subsets $\hat{S}$ that are formed by
taking either $s_{m+1}$ or $s_{r+1}$ along with
$\bar{S}\subset\mathcal{S}\setminus\left\\{s_{m+1},s_{r+1}\right\\}$ of
cardinality $x-1$.
Let $\bar{S}\subset\mathcal{S}\setminus\left\\{s_{m+1},s_{r+1}\right\\}$ of
cardinality $x-1$ contain $\alpha>0$ instances of job $a_{i}$ and $\beta>0$
instances of job $a_{j}$ assigned to its servers. Then
$\bar{S}\cup\\{s_{m+1}\\}$ contributes $\alpha(\beta-1)$ towards $g(m,x)$,
whereas $\bar{S}\cup\\{s_{r+1}\\}$ contributes $(\alpha-1)\beta$ towards
$g(m,x)$. At the same time, $f(\bar{S})\cup\\{f(s_{m+1})\\}$ contributes
$\alpha\beta$ towards $g(m+1,x)$, whereas $f(\bar{S})\cup\\{f(s_{r+1})\\}$
contributes $(\alpha-1)(\beta-1)$ towards $g(m+1,x)$. Thus, one can evaluate
the contribution of $\bar{S}$ towards the difference $g(m+1,x)-g(m,x)$ to be
$\alpha\beta+(\alpha-1)(\beta-1)-\alpha(\beta-1)-(\alpha-1)\beta=1$. So every
subset $\bar{S}\subset\mathcal{S}\setminus\\{s_{m+1},s_{r+1}\\}$ of
cardinality $x-1$, whose servers have at least one instance each of jobs
$a_{i}$ and $a_{j}$ assigned to them, contributes a net change of $1$ towards
the difference $g(m+1,x)-g(m,x)$. One needs to just count the number of
subsets $\bar{S}$ of cardinality $x-1$ that satisfy these conditions to find
$g(m+1,x)-g(m,x)$.
The total number of subsets of cardinality $x-1$ of the set
$\mathcal{S}\setminus\\{s_{m+1},s_{r+1}\\}$ is given by ${c-2\choose x-1}$. If
the subset $\bar{S}$ is one of the ${c-r-1\choose x-1}$ subsets chosen from
the servers $\\{s_{r+2},s_{r+3},\cdots s_{c}\\}$, then the job $a_{i}$ is not
assigned to any of its servers. Similarly, if $\bar{S}$ is one of the
${c-r-1\choose x-1}$ subsets chosen from the servers
$\\{s_{m+2},s_{m+3},\cdots,s_{r}\\}\cup\\{s_{2r-m+1},s_{2r-m+2},\cdots,s_{c}\\}$,
then it does not have any instance of the job $a_{j}$ assigned to its servers.
As these subsets $\bar{S}$ do not contribute to the difference
$g(m+1,x)-g(m,x)$, their numbers have to be subtracted from ${c-2\choose
x-1}$. In the process, the subsets
$\bar{S}\subset\\{s_{2r-m+1},s_{2r-m+2},\cdots,s_{c}\\}$ have been subtracted
twice and therefore ${c-2r+m\choose x-1}$ needs to be added back (inclusion-
exclusion principle), thereby giving
$g(m+1,x)-g(m,x)={c-2\choose x-1}-2{c-r-1\choose x-1}+{c-2r+m\choose x-1}$
∎
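A brute-force sanity check of this difference formula is straightforward: construct the server structure described above ($m$ servers holding both jobs, $r-m$ holding only $a_{i}$, $r-m$ holding only $a_{j}$, and $c-2r+m$ holding neither) and evaluate $g(m,x)$ directly over all subsets. The sketch below uses illustrative parameters.

```python
from itertools import combinations
from math import comb

def g(m, x, r, c):
    # each server is encoded as (holds a_i, holds a_j)
    servers = ([(1, 1)] * m + [(1, 0)] * (r - m)
               + [(0, 1)] * (r - m) + [(0, 0)] * (c - 2 * r + m))
    total = 0
    for S in combinations(servers, x):
        ni = sum(s[0] for s in S)  # instances of a_i in the subset
        nj = sum(s[1] for s in S)  # instances of a_j in the subset
        if ni > 1 and nj > 1:
            total += (ni - 1) * (nj - 1)
    return total

r, c, x = 3, 9, 4
for m in range(r):
    lhs = g(m + 1, x, r, c) - g(m, x, r, c)
    rhs = comb(c - 2, x - 1) - 2 * comb(c - r - 1, x - 1) + comb(c - 2 * r + m, x - 1)
    assert lhs == rhs
```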
## Appendix G Proof of Corollary 1
###### Corollary 1.
For $x>c-r$, the expression for $\sigma_{D,x}(d)$ in Equation (22) in Theorem
4 vanishes.
###### Proof.
Recall the expression of $\sigma_{D,x}(d)$ from Equation (22). Observe that
the expression $g(m+1,x)-g(m,x)$ from Equation (17) reduces to ${{c-2\choose
x-1}}$ for $x>c-r$, since the second and third terms in equation (17) vanish
when $p\leq r$ and $x>c-r$.
(54) ${}g(m+1,x)-g(m,x)={{c-2\choose x-1}}$
Let us now compute $g(0,x)$ using the expression in (49) and (16) for $x>c-r$.
(55) ${}g(0,x)=\sum_{t=2}^{r}\sum_{u=2}^{r}(t-1)(u-1){{r\choose t}}{{r\choose
u}}{{c-2r\choose x-t-u}}=r^{2}{{c-2\choose x-2}}-2r{{c-1\choose x-1}}+{{c\choose x}}$
Thus, from equations (54) and (55), we get
(56) ${}g(m,x)=r^{2}{{c-2\choose x-2}}-2r{{c-1\choose x-1}}+{{c\choose
x}}+m\times{{c-2\choose x-1}}$
Since $x>c-r$, we may claim that the term $T_{2}(n,k,r,c)$ in Equation (22)
in Theorem 4 reduces as follows.
(57)
${}T_{2}(n,k,r,c)=\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(x-t-1)}=\left(r(r-1){{c-2\choose
x-2}}-r{{c-1\choose x-1}}+{{c\choose x}}\right)$
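A quick numerical spot check of (57), which holds in the regime $x>c-r$, can be carried out as follows (the parameter triples are illustrative).

```python
from math import comb

def C(n, k):
    # binomial coefficient, zero outside the valid range
    return comb(n, k) if n >= 0 and 0 <= k <= n else 0

def T2(r, c, x):
    # left-hand side of (57)
    return sum(t * t * C(r, t + 1) * C(c - r, x - t - 1) for t in range(1, r))

for (r, c, x) in [(2, 4, 3), (3, 8, 7), (4, 10, 8)]:
    assert x > c - r
    assert T2(r, c, x) == r * (r - 1) * C(c - 2, x - 2) - r * C(c - 1, x - 1) + C(c, x)
```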
Also observe that since $x>c-r$ the term ${{c-r\choose x}}$ vanishes, hence it
is not written in equation (14). Thus the numerator of the first term in
equation (22) in Theorem 4 is given by (from equations (55), (56), and (57))
$\displaystyle
2\cdot\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)g(m,x)+n\sum\limits_{t=1}^{r-1}t^{2}{r\choose(t+1)}{(c-r)\choose(x-t-1)}$
$\displaystyle=$
$\displaystyle\sum\limits_{m=0}^{r}\Biggl{(}2\mathfrak{m}^{D}(m)\left(r^{2}{{c-2\choose
x-2}}-2r{{c-1\choose x-1}}+{{c\choose
x}}\right)+2m\mathfrak{m}^{D}(m){{c-2\choose x-1}}\Biggr{)}$
$\displaystyle\hskip 180.00027pt+n\left(r(r-1){{c-2\choose x-2}}-r{{c-1\choose
x-1}}+{{c\choose x}}\right)$
$\displaystyle\overset{(a)}{=}\left(r^{2}{{c-2\choose x-2}}-2r{{c-1\choose
x-1}}+{{c\choose x}}\right)n(n-1)+{{c-2\choose x-1}}ck(k-1)$
$\displaystyle\hskip 180.00027pt+n\left(r(r-1){{c-2\choose x-2}}-r{{c-1\choose
x-1}}+{{c\choose x}}\right)$ $\displaystyle\overset{(b)}{=}{{c-2\choose
x-2}}(nr(nr-1))+{{c-2\choose x-1}}ck(k-1)-{{c-1\choose
x-1}}(nr(2n-1))+n^{2}{{c\choose x}}$
$\displaystyle\overset{(c)}{=}{{c-2\choose x-2}}n^{2}r^{2}+{{c-2\choose
x-1}}ck^{2}-{{c-1\choose x-1}}(nr(2n))+n^{2}{{c\choose x}}$
$\displaystyle\overset{(d)}{=}{{c-1\choose x-1}}{nr\times kx}-{{c-1\choose
x-1}}(nr(2n))+n^{2}{{c\choose x}}$ $\displaystyle\overset{(e)}{=}{{c\choose
x}}\left(\left(\frac{nrx}{c}\right)^{2}-2n\left(\frac{nrx}{c}\right)+n^{2}\right)$
(58) $\displaystyle\overset{(f)}{=}{{c\choose
x}}\left(n\times\left(\frac{rx}{c}-1\right)\right)^{2}$
We now justify each of the steps above.
* •
$(a)$ follows since
$\sum\limits_{m=0}^{r}m\times\mathfrak{m}^{D}(m)=c{{k\choose 2}}$ and
$\sum\limits_{m=0}^{r}\mathfrak{m}^{D}(m)={{n\choose 2}}$ by Equations (19)
and (20).
* •
$(b)$ follows by combining the coefficients of ${{c-2\choose x-2}}$,
${c-1\choose x-1}$ and ${c\choose x}$.
* •
$(c)$ follows as $nr{{c-2\choose x-2}}+kc{{c-2\choose x-1}}=nr{{c-1\choose
x-1}}$. This is a consequence of Pascal's rule ${{c-2\choose
x-2}}+{{c-2\choose x-1}}={{c-1\choose x-1}}$ together with the fact that
$n\times r=k\times c$.
* •
$(d)$ follows from the following set of equalities
$\displaystyle{{c-2\choose x-2}}n^{2}r^{2}+{{c-2\choose x-1}}ck^{2}=$
$\displaystyle\frac{ck^{2}(c-2)!}{(x-2)!\,(c-x-1)!}\left(\frac{c}{c-x}+\frac{1}{x-1}\right)$
$\displaystyle=$ $\displaystyle\frac{nr\times kx\times(c-1)!}{(x-1)!\,(c-x)!}=nr\times kx{{c-1\choose x-1}}$
* •
$(e)$ and $(f)$ follow from the fact that $n\times r=k\times c$, together
with the identity ${{c-1\choose x-1}}=\frac{x}{c}{{c\choose x}}$
Now, observing the second term of $\sigma_{D,x}(d)$ in equation (22), we see
that $T_{1}(n,k,r,c)=n\times\left(\frac{rx}{c}-1\right)$ as $x>c-r$. Thus,
using equation (58), we conclude that $\sigma_{D,x}(d)=0$ for $x>c-r$.
∎
|
# Is asymptotically safe inflation eternal?
J. Chojnacki (corresponding author), J. Krajecka, J. H. Kwapisz, O. Slowik, A. Strag
(December 2020)
###### Abstract
Recently, based on swampland considerations in string theory, the (no) eternal
inflation principle has been put forward. The natural question arises whether
similar conditions hold in other approaches to quantum gravity. In this
article, the asymptotic safety hypothesis is considered in the context of
eternal inflation.
As exemplary inflationary models, the SU(N) Yang-Mills theory in the Veneziano
limit and various RG-improvements of the gravitational action are studied. The
existence of a UV fixed point generically flattens the potential, and our
findings suggest no tension between eternal inflation and asymptotic safety,
both in the matter and in the gravitational sector, in contradistinction to
string theory. Moreover, eternal inflation cannot take place within the range
of applicability of the effective field theory of quantum gravity.
We apply the analytical conditions for eternal inflation to some of the models
with a single minimum, such as Starobinsky inflation, alpha-attractors, or the
RG-improved models, and verify them with extensive numerical simulations. The
validity of these constraints is also discussed for a multi-minima model.
## 1 Introduction
Our Universe is governed by four fundamental forces. Three of these forces have
been consistently described on the quantum level and combined into the Standard
Model of particle physics. Only quantum gravity remains elusive and has
not been fully described in terms of a quantum theory. This is not only because
gravity is power-counting non-renormalizable but also because the
direct quantum gravity regime cannot be accessed experimentally (for example,
an accelerator measuring quantum gravity effects would have to be as big as
our Solar System).
In recent years an alternative strategy has been put forward: one
formulates a fundamental quantum gravity theory and then tests which of the
low energy effective theories can be UV completed by this quantum gravity
model. In string theory this goes under the name of swampland conjectures [2,
3]. Widely discussed recently is the so-called de Sitter conjecture [4, 5], which
states that string theory cannot have de Sitter vacua and is in tension with
single field inflation [6, 7]. There also seems to be a tension between the
standard S-matrix formulation of quantum gravity and the existence of stable de
Sitter space [8, 9, 10, 11]. However, it is not established whether asymptotic
safety admits a standard S-matrix formulation [12], due to the fractal spacetime
structure in the deep quantum regime [13, 14]. In line with these swampland
criteria, the no eternal inflation principle has been put forward [1]; see also
the further discussions on the subject of eternal inflation [15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26].
On the other hand, the theory of inflation is a well-established model
providing an answer to problems in classical cosmology, such as the flatness
problem, large-scale structure formation, and the homogeneity and isotropy of
the universe. A handful of models is in agreement with the CMB observations. In
the inflationary models, quantum fluctuations play a crucial role in
primordial cosmology, providing a seed for the large-scale structure formation
after inflation and giving a possibility for the eternally inflating
multiverse. Initial fluctuations in the early universe may cause an
exponential expansion in points scattered throughout space. Such regions
rapidly grow and dominate the volume of the universe, creating ever-inflating,
disconnected pockets. Since so far there is no way to verify the existence of
the other pockets, we treat them as potential autonomous universes, being part
of the multiverse.
In light of this tension between string theory and the inflationary paradigm
[1], one can ask how robust the swampland criteria are across the various
quantum gravity models. In accordance with inflation theory, we anticipate
that the dynamics of the universe is determined by the quantum corrections to
general relativity stemming from a concrete UV model. The effective treatment
led Starobinsky to create a simple inflationary model taking into account the
anomaly contributions to the energy-momentum tensor.
As pointed out by Donoghue [27], below the Planck scale one can safely take
the effective field theory perspective on quantum gravity. Yet quantum
gravity effects can be important below the Planck scale through the inclusion
of higher-dimensional operators. The gravitational constant $G_{N}$ has a
vanishing anomalous dimension below the Planck scale, and various logarithmic
corrections to the $R^{2}$ term have been considered, capturing the main
quantum effects [28, 29, 30, 31]. Yet in order to get the correct 60 e-fold
duration of the inflationary period one has to push the scalar field value in
the Einstein frame beyond the Planck mass [1]. Furthermore, most of these
models do not possess a flat potential limit (they either diverge or have
runaway solutions), suggesting that eternal inflation can be investigated only
if one takes into account the full quantum corrections to the Starobinsky
inflation.
In the effective field theory scheme, the predictive power of the theory is
limited, as the description of gravity at transplanckian scales requires
fixing infinitely many coupling constants from experiments. The idea of
asymptotic safety [32] was introduced by Steven Weinberg in 1978 as a UV
completion of the quantum theory of gravity. The behavior of an asymptotically
safe theory is characterized by scale invariance in the high-momentum regime.
Scale invariance requires the existence of a non-trivial Renormalization Group
fixed point for dimensionless couplings. There are many possible realizations
of such a non-trivial fixed point scenario, such as canonical vs anomalous
scaling (gravitational fixed point [33, 34, 35, 36]), one-loop vs two-loop
contributions, or gauge vs Yukawa contributions; see [37] for further details
and [38] for the current status of asymptotically safe gravity.
The existence of an interacting fixed point and hence the flatness of the
potential in the Einstein frame led Weinberg to discuss [39] cosmological
inflation as a consequence of Asymptotically Safe Gravity; see also [40, 41]
for a discussion of AS cosmology. Following this suggestion, we study two types
of models.
The first type relies on the RG-improvement of the gravitational actions and
is based on the asymptotic safety hypothesis that gravity admits a non-trivial
UV fixed point. Since asymptotically safe gravity flattens the scalar field
potentials [42], one can expect that it will result in eternal inflation
for large enough initial field values. On the other hand, RG-improved actions
can serve as a UV completion of the Starobinsky model. One should also note
that the asymptotically safe swampland has been studied extensively [43, 44, 45, 46,
47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65,
66, 67, 68, 69].
The other model relies on the non-trivial fixed point in the pure matter
sector governed by the Yang-Mills dynamics in the Veneziano limit [70, 71];
see also [72, 73, 74, 75]. In this model, we have uncovered a new type of
eternal inflation scenario relying on tunneling to a false vacuum, in the
direction opposite to that considered in the old inflation proposal [76].
In contradistinction to string theory, the couplings in the asymptotic safety
paradigm are predicted from the RG-flow of the theory and their fixed point
values, rather than as vacuum expectation values (vev's) of certain scalar
fields. Hence, the asymptotically safe eternally inflating multiverse
landscape is much less vast than the one stemming from string theory,
making these models much less schismatic [77]. Finally, let us note that
asymptotic safety can argue for homogeneous and isotropic initial
conditions on its own using the finite action principle [78].
Our work is organized as follows. In Chapter 2 we introduce the idea of
eternal inflation and the multiverse, and discuss the necessary conditions for
eternal inflation to occur based on the Fokker-Planck equation. In Chapter 3
we show how the developed tools work in practice for two popular inflationary
models. Chapter 4 is devoted to the presence of eternal inflation in
Asymptotically Safe models. In Chapter 5 the results are discussed and
conclusions are drawn.
## 2 How does inflation become eternal?
In this section we discuss under what circumstances inflation becomes
eternal. Our discussion follows closely [1].
### 2.1 Fokker-Planck equation
Consider a scalar field in the FLRW metric
$\displaystyle S=\int
d^{4}x\sqrt{-g}\left(\frac{1}{2}M_{Pl}^{2}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi)\right),$
(2.1)
with $\phi(t,\vec{x})=\phi(t)$ one obtains the following equations of motion:
$\displaystyle\ddot{\phi}+3H\dot{\phi}+\frac{\partial V}{\partial\phi}=0,\quad
H^{2}M_{P}^{2}=\frac{1}{3}\left(\frac{1}{2}\dot{\phi}^{2}+V(\phi)\right),$
(2.2)
which in the slow-roll approximation become [79]:
$\displaystyle 3H\dot{\phi}+\frac{\partial V}{\partial\phi}\approx 0,\quad$
$\displaystyle H^{2}M_{Pl}^{2}\approx\frac{1}{3}V\left(\phi\right).$ (2.3)
Inflation ends once one of the so-called slow-roll parameters becomes of order
one
$\displaystyle\epsilon\simeq\frac{M_{P}^{2}}{2}\left(\frac{V_{,\phi}}{V}\right)^{2},\quad\eta\simeq
M_{P}^{2}\frac{V_{,\phi\phi}}{V},$ (2.4)
and the field enters the oscillatory reheating phase. The standard treatment of eternal
inflation relies on the stochastic inflation approach [80]. One splits the
field into a classical background and a short-wavelength quantum field
$\displaystyle\phi\left(t,\vec{x}\right)=\phi_{cl}\left(t,\vec{x}\right)+\delta\phi\left(t,\vec{x}\right).$
(2.5)
Due to the fact that the action is quadratic in the fluctuations, their spatial
average over the Hubble volume is normally distributed. Hence from now on we
shall assume that both the background and the fluctuations are homogeneous,
which is the standard treatment of eternal inflation (if not otherwise
specified). In the large e-fold limit, the equation of motion for the full
field takes the form of the slow-roll equation with an additional classical
noise term [1, 81, 82], known as the Langevin equation:
$\displaystyle 3H\dot{\phi}+\frac{\partial V}{\partial\phi}=N\left(t\right),$
(2.6)
where $N\left(t\right)$ is Gaussian noise with mean equal to 0 and
variance $\sigma^{2}=\frac{H^{3}t}{4\pi^{2}}$ [83]. The probability density
of the inflaton field is then given by the Fokker-Planck equation [1]:
$\displaystyle\dot{P}[\phi,t]=\frac{1}{2}\left(\frac{H^{3}}{4\pi^{2}}\right)\frac{\partial^{2}P[\phi,t]}{\partial\phi^{2}}+\frac{1}{3H}\frac{\partial}{\partial\phi}\left(\frac{\partial V\left(\phi\right)}{\partial\phi}P[\phi,t]\right),$
(2.7)
where $\dot{P}[\phi,t]:=\frac{\partial}{\partial t}P[\phi,t]$.
### 2.2 Analytic solutions
To better understand the Fokker-Planck equation, let us now briefly discuss
its analytical solutions.
#### Case 1. Constant potential
$\displaystyle V\left(\phi\right)=V_{0},$ (2.8)
the Fokker-Planck equation reduces to
$\displaystyle\dot{P}[\phi,t]=\frac{1}{2}\left(\frac{H^{3}}{4\pi^{2}}\right)\frac{\partial^{2}P[\phi,t]}{\partial\phi\partial\phi},$
(2.9)
furthermore $H^{2}=\textrm{const}$ by the Friedmann equations. Then the
Fokker-Planck equation reduces to the standard heat equation, which has a solution
given by a Gaussian distribution:
$\displaystyle
P[\phi,t]=\frac{1}{\sigma\left(t\right)\sqrt{2\pi}}\exp\left[-\frac{\left(\phi-\mu\left(t\right)\right)^{2}}{2\sigma\left(t\right)^{2}}\right],$
(2.10)
with
$\displaystyle\mu\left(t\right)=0,\quad$
$\displaystyle\sigma^{2}\left(t\right)=\frac{H^{3}}{4\pi^{2}}t.$ (2.11)
A delta-function distribution initially centered at $\phi=0$ will remain
centered at $\phi=0$ for all time. It will, however, spread out by the amount
$\sigma\left(t=H^{-1}\right)=H/2\pi$ after a Hubble time. This represents the
standard “Hubble-sized” quantum fluctuations that are well known in the
context of inflation, famously imprinted in the CMB and ultimately seeding the
observed large-scale structure.
#### Case 2. Linear potential
For the linear hilltop model the potential is given by
$\displaystyle V\left(\phi\right)=V_{0}-\alpha\phi.$ (2.12)
The Fokker-Planck equation is analogously solved by the Gaussian distribution
(2.10) with:
$\displaystyle\mu\left(t\right)=\frac{\alpha}{3H}t,\quad$
$\displaystyle\sigma^{2}\left(t\right)=\frac{H^{3}}{4\pi^{2}}t.$ (2.13)
The time-dependence of $\mu\left(t\right)$ is due to the classical rolling of
the field in the linear potential. The time-dependence of
$\sigma^{2}\left(t\right)$ is purely due to Hubble-sized quantum fluctuations,
and it precisely matches the result in the constant case. In general, for
linear and quadratic potentials the equation remains of heat-equation type,
hence the solutions are Gaussian. Furthermore, if the potential is
asymptotically flat, a finite limit at infinity exists. One may employ a
series expansion around this point at infinity, approximating the potential up
to the linear term. It is then expected that the probability density is
approximately Gaussian (2.13). In the next section, we describe in detail how
the Gaussian distribution causes the inflaton to decay exponentially.
### 2.3 Eternal inflation conditions
Given an arbitrary field value $\phi_{c}$, one can ask for the probability
that the quantum field $\phi=\phi(t)$ is above this value:
$\displaystyle\mathrm{Pr}[\phi>\phi_{c},t]=\int^{\infty}_{\phi_{c}}d{\phi}P[\phi,t].$
(2.14)
Since the distribution is Gaussian, for $\phi_{c}$ large enough
$\mathrm{Pr}[\phi>\phi_{c},t]$ can be approximated by an exponential decay:
$\displaystyle\mathrm{Pr}[\phi>\phi_{c},t]\approx C(t)\exp(-\Gamma t),$ (2.15)
where $C(t)$ is polynomial in $t$ and all of the dependence on $\phi_{c}$ is
contained in $C(t)$. Then it seems that inflation cannot last forever since
$\displaystyle\lim_{t\to\infty}\mathrm{Pr}[\phi>\phi_{c},t]=0.$ (2.16)
However, there is an additional effect to be included: the expansion of the
universe during inflation. The size of the universe depends on time according
to:
$\displaystyle U\left(t\right)=U_{0}e^{3Ht},$ (2.17)
where $U_{0}$ is the initial volume of the pre-inflationary universe. One can
interpret the probability $\mathrm{Pr}[\phi>\phi_{c},t]$ as the fraction of the
volume still inflating, $U_{inf}\left(t\right)$, that is:
$\displaystyle U_{inf}\left(t\right)=U_{0}e^{3Ht}Pr[\phi>\phi_{c},t],$ (2.18)
then in order for the Universe to inflate eternally, the positive exponential
factor $3H$ in Eq. (2.18) and the negative exponential factor $-\Gamma$ in
(2.15) must satisfy:
$\displaystyle 3H>\Gamma.$ (2.19)
We shall illustrate this general property with the example of the linear
potential. Evaluating the integral for the probability density in the linear
case gives:
$\displaystyle
Pr[\phi>\phi_{c},t]=\frac{1}{2}\textrm{erfc}\left({\frac{\frac{\alpha}{3H}t-\phi_{c}}{\frac{H}{2\pi}\sqrt{2Ht}}}\right).$
(2.20)
The error function may be approximated by an exponential:
$\displaystyle
Pr[\phi>\phi_{c},t]=C\left(t\right)\textrm{exp}\left(-\frac{4\pi^{2}\alpha^{2}}{18H^{5}}t\right),$
(2.21)
where $C\left(t\right)$ is power-law in $t$, and $\phi_{c}$ has dropped out of
the final approximation of the probability, which is a generic feature. By
comparing the exponents we can check whether $U_{inf}$ will grow or tend to
zero. The condition for eternal inflation to occur becomes:
$\displaystyle 3H>\frac{4\pi^{2}\alpha^{2}}{18H^{5}}.$ (2.22)
For the linear potential, $\alpha=|V^{\prime}\left(\phi\right)|$; using the
slow-roll equations (2.3), the above condition can be rewritten as:
$\displaystyle\frac{|V^{\prime}|}{V^{\frac{3}{2}}}<\frac{\sqrt{2}}{2\pi}\frac{1}{M^{3}_{Pl}}.$
(2.23)
This can be interpreted as quantum fluctuations dominating over classical
field rolling. For the linear potential, this is satisfied for large $\phi$.
Similarly, the second condition for eternal inflation may be derived from
the quadratic hilltop potential:
$-\frac{V^{\prime\prime}}{V}<\frac{3}{M^{2}_{Pl}}.$ (2.24)
Further necessary conditions on the $p$-th derivative with $p>2$ have been
derived in [1] and give:
$\displaystyle[-\textrm{sgn}\left(\partial^{p}V\right)]^{p+1}\frac{|\partial^{p}V|}{V^{(4-p)/2}}<\mathcal{N}_{p}M_{Pl}^{p-4},$
(2.25)
where $\mathcal{N}_{p}\gg 1$ is a numerically determined coefficient. Eternal
inflation can be understood as a random walk of the field, a diffusion
process on top of the classical motion [15]. In order to cross-check the
formulas (2.23, 2.24), a numerical simulation has been developed. To
reconstruct the probability distribution one simulates the discretized version
of equation (2.6):
$\displaystyle\phi_{n}=\phi_{n-1}-\frac{1}{3H}V^{\prime}\left(\phi_{n-1}\right)\delta
t+\delta\phi_{q}\left(\delta t\right),$ (2.26)
with $\delta\phi_{q}\left(\delta t\right)$ being a random number drawn from a
Gaussian distribution with zero mean and variance
$\frac{H^{3}}{4\pi^{2}}\delta t$. We further assume the Hubble parameter to be
constant and respecting the slow-roll regime,
$H=\frac{1}{M_{Pl}}\sqrt{\frac{V(\phi_{0})}{3}}$, where $V(\phi_{0})$ is the
value of the potential at the start of the simulation. We verified that the
change of $H$ caused by the field fluctuation does not affect the conclusions
for eternal inflation. The simulation starts at the user-given value
$\phi_{0}$ and follows the discretized Langevin equation (2.26).
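For illustration, a minimal Python sketch of one realisation of this procedure is given below, assuming Planck units ($M_{Pl}=1$) and using the Starobinsky potential (3.5) as the example; the timestep, timeout and initial value are illustrative choices, and for brevity only the $\epsilon$ condition is monitored.

```python
import numpy as np

V0 = 8.12221e-11                       # from the CMB normalization (3.6)
s = np.sqrt(2.0 / 3.0)

def V(phi):
    return V0 * (1.0 - np.exp(-s * phi))**2

def dV(phi):
    return 2.0 * V0 * s * (1.0 - np.exp(-s * phi)) * np.exp(-s * phi)

def evolve(phi0, dt=1e3, n_steps=200_000, rng=np.random.default_rng(0)):
    """One realisation of the discretized Langevin equation (2.26)."""
    phi, H = phi0, np.sqrt(V(phi0) / 3.0)  # H fixed at its initial value
    for n in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(H**3 * dt / (4.0 * np.pi**2)))
        phi += -dV(phi) / (3.0 * H) * dt + noise
        if 0.5 * (dV(phi) / V(phi))**2 >= 1.0:   # slow roll violated
            return n * dt                        # inflation has ended
    return np.inf                                # timed out: still inflating

durations = [evolve(3.0, rng=np.random.default_rng(i)) for i in range(100)]
```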
If inflation is ongoing, the corresponding timestep $t_{n}$ is added to a
list. This happens while the slow-roll conditions are satisfied, meaning
$\epsilon(t_{n})$ and $\eta(t_{n})$ are smaller than one. Violation of one of
these conditions resets the simulation; however, the list containing
information about the times $t_{n}$ of the ongoing inflation is kept in memory
and appended in the same way in each run. Its size may be estimated by
$N\frac{T_{c}}{\delta t}$, where $N$ is the total number of simulations and
$T_{c}$ is the time of the classical slow-roll inflation starting at
$\phi_{0}$.
It is important to stress that the duration of a particular evolution may be
too long to compute in any practical time. We employ a large timeout ending
the evolution, which is a good approximation for our purposes. Finally, the
list containing information about every timestep at which inflation was
ongoing in $N$ independent simulations is sorted in ascending order. A
normalized histogram with 1000 equal-width bins is created from the list. The
number of counts is related to the probability of ongoing inflation, while the
bins correspond to inflationary time. The field’s evolution supports the
Fokker-Planck result (2.15): in the slow-roll regime, the inflaton decays
exponentially with decay parameter $\Gamma$. This is true for every
numerically investigated potential in this work. In order to recognize the
eternally inflating models we search for an initial value of the field
$\phi_{0}$ such that $\Gamma<3H$. In the numerical analysis we eliminate
$\phi_{c}$ in equation (2.14) and instead check for slow-roll condition
violation at each step.
### 2.4 Tunneling and eternal inflation
Most of the inflationary potentials are of the single-minimum type, such as
Starobinsky inflation and alpha-attractors. There are, however, potentials of
the type depicted in figure 1, which possess various minima. In such models,
inflation can become eternal due to tunneling to the false vacua. When the
vacua are degenerate enough, the tunneling dominates over quantum uphill
rolling. The tunneling goes in the direction opposite to the old inflation
scenario [76], as shown in figure 1. As it turns out, this is the dominant
effect for the model discussed in section 4.3. The eternal inflation
mechanism discussed in the previous sections relies on the local shape of the
potential and cannot provide an accurate description in that case. In order to
quantitatively derive predictions for this new effect, we shall rely on the
first passage formalism [84, 85] instead, and apply it to the eternal
inflation considerations.
Figure 1: Left: field initially placed at the maximum of the potential may
decay towards one of the two vacua: at $\phi_{-}$ with probability $p_{-}$ and
at $\phi_{+}$ with probability $p_{+}$.
Right: field initially placed at $\phi_{0}<\phi_{max}$ may tunnel through the
barrier towards $\phi_{+}$ with probability $p_{+}(\phi_{0})$. Analogous
tunneling from "plus" to "minus" side is also possible.
Given the initial value of the field $\phi_{0}$ between $\phi_{-}$ and
$\phi_{+}$, the probabilities that it reaches $\phi_{+}$ before $\phi_{-}$,
and $\phi_{-}$ before $\phi_{+}$, denoted respectively $p_{+}(\phi_{0})$ and
$p_{-}(\phi_{0})$, obey the following equation:
$\displaystyle
vp^{\prime\prime}_{\pm}(\phi)-\frac{v^{\prime}}{v}p^{\prime}_{\pm}(\phi)=0,$
(2.27)
with boundary conditions $p_{\pm}(\phi_{\pm})=1$, $p_{\pm}(\phi_{\mp})=0$,
where $v=v(\phi)$ is the dimensionless potential:
$\displaystyle v(\phi):=\frac{V(\phi)}{24\pi^{2}M_{Pl}^{4}}.$ (2.28)
The analytical solution is:
$\displaystyle
p_{\pm}(\phi_{0})=\pm\frac{\int^{\phi_{0}}_{\phi_{\mp}}e^{-\frac{1}{v(\phi)}}d\phi}{\int^{\phi_{+}}_{\phi_{-}}e^{-\frac{1}{v(\phi)}}d\phi}.$
(2.29)
One may also define the probability ratio $R$:
$\displaystyle
R(\phi_{0}):=\frac{p_{+}(\phi_{0})}{p_{-}(\phi_{0})}=\frac{\int^{\phi_{0}}_{\phi_{-}}e^{-\frac{1}{v(\phi)}}d\phi}{\int^{\phi_{+}}_{\phi_{0}}e^{-\frac{1}{v(\phi)}}d\phi}.$
(2.30)
The above integrals may be evaluated numerically. However, if the amplitude of
$v(\phi)$ is much smaller than 1, the term $e^{-1/v(\phi)}$ will be extremely
small, possibly underflowing machine precision in the computation.
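A practical workaround, sketched below, is to factor the largest value of $-1/v(\phi)$ out of both integrands before exponentiating; this common factor cancels in the ratio (2.30). The grid-based routine and the toy profile in the last line are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ratio_R(v, phi_minus, phi_0, phi_plus, n=200_001):
    phi = np.linspace(phi_minus, phi_plus, n)
    log_f = -1.0 / v(phi)              # log of the integrand e^{-1/v}
    f = np.exp(log_f - log_f.max())    # shift by the max; cancels in the ratio
    left, right = phi <= phi_0, phi >= phi_0
    num = np.trapz(f[left], phi[left])     # integral from phi_- to phi_0
    den = np.trapz(f[right], phi[right])   # integral from phi_0 to phi_+
    return num / den

# toy double-well-like profile with amplitude ~ 1e-10 (illustrative only)
R = ratio_R(lambda p: 1e-10 * (1.05 + np.cos(p)), 0.0, np.pi, 2.0 * np.pi)
```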
Yet, one can use the steepest descent approximation. Consider a potential with
the field located initially at the local maximum $\phi_{max}$, with a minimum
on each side, as depicted in figure 1. Then the probability ratio $R$ may be
evaluated approximately, where the leading contributions to (2.30) come from
the values of the field in the neighborhood of $\phi_{max}$; $p_{+}$ and
$p_{-}$ give the probabilities of the evolutions realised respectively by the
red and the green ball. We get (for the details of the calculation, consult [85]):
$\displaystyle R(\phi_{max})\approx
1-\frac{2}{3}\frac{\sqrt{2}}{\pi}\frac{v(\phi_{max})v^{\prime\prime\prime}(\phi_{max})}{|v^{\prime\prime}(\phi_{max})|^{3/2}}.$
(2.31)
In this regime, the probability of descending into each of the minima
$\phi_{-}$ and $\phi_{+}$ is similar, giving $|1-R|\ll 1$. It is possible to
start inflation in a subset of $[\phi_{-},\phi_{+}]$ that would lead to the
violation of the slow-roll conditions, and tunnel through the potential
barrier to the sector dominated by eternal inflation, as schematically shown
in figure 1. We further analyze this possibility in Sec. 4.3 for a particular
effective potential with two vacua, stemming from an asymptotically safe
theory. We use equation (2.31) to find the dependence of $R$ on the parameters
of the theory and verify the result with a direct numerical simulation of the
Langevin equation (2.6) for a given set of parameters.
## 3 Exemplary models
In this section we show the basic application of the conditions (2.23, 2.24)
to simple effective potentials stemming from the $\alpha$-attractor models and
the Starobinsky inflation.
### 3.1 Alpha-attractor models
We start our investigation with the $\alpha$-attractor models [86], a general
class of inflationary models originally introduced in the context of
supergravity. They are consistent with the CMB data, and their preheating
phase has been studied on a lattice in [87]. The phenomenological features of
these models are described by the Lagrangian:
$\displaystyle\frac{1}{\sqrt{-g}}\mathcal{L}_{T}=\frac{1}{2}R-\frac{1}{2}\frac{\left(\partial\phi\right)^{2}}{\left(1-\frac{\phi^{2}}{6\alpha}\right)^{2}}-V(\phi).$
(3.1)
Here, $\phi$ is the inflaton and $\alpha$ can take any real, positive value. In
the limit $\alpha\xrightarrow{}\infty$ the scalar field becomes canonically
normalized, and the theory coincides with chaotic inflation. Canonical and
non-canonical fields are related by the transformation:
$\displaystyle\phi=\sqrt{6\alpha}\tanh{\frac{\varphi}{\sqrt{6\alpha}}}.$ (3.2)
We further consider T-models, in which the potential of the canonically
normalised field is given by:
$\displaystyle
V(\varphi)=\alpha\mu^{2}\tanh^{2n}{\frac{\varphi}{\sqrt{6\alpha}}},$ (3.3)
where the parameter $\mu$ is of order $10^{-5}$. The shape of the potential for
$n=1$ is plotted in figure 2. At large field values the potential (3.3) is
asymptotically flat; this creates the possibility for eternal inflation to
occur. Using the first condition (2.23) we have verified that, generally,
space is eternally inflating for all initial values of $\phi$ above a
certain $\phi_{EI}$. The second eternal inflation condition (2.24), as well as
the higher order conditions, are satisfied for almost all values of $\phi_{0}$
above $0$, providing no new information. This is a generic feature for all of
the models we investigate.
For every $\alpha$, the $\phi_{0}$ necessary to produce 60 e-folds is safely
below $\phi_{EI}$. We have found $\phi_{0}$ by solving the slow-roll equation
numerically; it is shown in figure 2. An inflationary period much longer than
60 e-folds is unlikely according to the Planck Collaboration data [88]. The
values of $\phi_{EI}$ change only slightly with $n$. We may therefore
conclude that $\alpha$-attractor models are consistent with the beginning of
"our" pocket universe. However, it is not inconceivable that the field
fluctuations in other parts of the early universe had values
$\phi_{0}>\phi_{EI}$, driving eternal inflation.
Figure 2: Left: the T-model potential is depicted for $n=1$ and various
$\alpha$.
Right: plot of the initial value $\phi_{0}$ necessary for 60 e-folds as a
function of $\alpha$ (blue), as well as the lowest initial value $\phi_{EI}$
of the field at which eternal inflation "kicks in" (yellow).
### 3.2 Starobinsky model
Solutions stemming from the Einstein-Hilbert action predict an initial
singularity. In 1980 Starobinsky proposed a model [89] in which a purely
gravitational modified action can cause a non-singular evolution of the
universe, namely:
$\displaystyle
S=\frac{1}{2}\int\sqrt{|g|}d^{4}x\left(M^{2}_{p}R+\frac{1}{6M^{2}}R^{2}\right),$
(3.4)
This can be rewritten in the effective potential form, with:
$\displaystyle
V\left(\phi\right)=V_{0}\left(1-\exp\left(-\sqrt{\frac{2}{3}}\frac{\phi}{M_{Pl}}\right)\right)^{2}.$
(3.5)
The inflation begins on a plateau at large $\phi$. The field rolls towards the
minimum at $\phi=0$, where the oscillatory reheating phase occurs. It has been
estimated from the CMB data that during inflation the universe expanded by
approximately 60 e-folds. This corresponds to the initial condition
$\phi_{0}=5.5$ $M_{Pl}$, without taking into account quantum gravity effects.
It is possible to perturbatively recover information about the shape of the
potential from the CMB; for details see [90]. The amplitude of the scalar
power spectrum, $A_{s}=2\times 10^{-9}$, fixes the value
$V_{0}=8.12221\times 10^{-11}$ $M^{4}_{Pl}$ via the relation:
$\displaystyle V_{0}=24\pi^{2}\epsilon(\phi_{0})A_{s}M_{Pl}^{4}.$ (3.6)
Applying the analytical eternal inflation conditions (2.23, 2.24) to the
Starobinsky potential, the initial value of the field above which eternal
inflation occurs has been estimated to be $\phi_{0}=16.7$ $M_{Pl}$. It has
been found that the decay rate $\Gamma$ decreases approximately exponentially
with $\phi_{0}$. Our numerical simulation confirms the analytical prediction
discussed in [1] within a sample of 10000 simulations, performed as described
in the previous section. An exemplary numerical evolution of the Langevin
equation (2.6) is shown in figure 3. The linear fit at early times shows the
exponential decay of the inflation in the slow-roll regime.
Nevertheless, for the eternal inflation scenario and realistic phenomenology,
this model requires transplanckian values of the fields in order to reproduce
the correct tensor-to-scalar ratio $r$, amplitudes, and spectral tilt $n_{s}$.
Hence Starobinsky inflation will be affected, and possibly invalidated, by
quantum gravity fluctuations. The leading-log corrections have been studied in
[31], which we also study in the context of eternal inflation. Yet, due to the
large field values required for eternal inflation to occur, one should seek a
theory predictive up to an arbitrarily large energy scale, which we discuss in
the next section.
Figure 3: Left: an exemplary field evolution for the Starobinsky model. The
green curve shows the solution to the classical slow-roll equation, and the
black curve is the Langevin solution. The inflation ends when the slow-roll
parameter reaches 1. Values of the field are given in $M_{Pl}$.
Right: the time dependence of the probability that inflation still occurs. In
the slow-roll regime, the probability decays exponentially. The slope of the
linear fit, i.e. the decay rate, is around $\Gamma=0.15$. The red dashed line
denotes the eternal inflation threshold with slope $3H$ of order $10^{-5}$.
Since $3H<\Gamma$, the initial condition $\phi_{0}=3$ $M_{Pl}$ is an example
of a non-eternally inflating universe.
## 4 Eternal inflation in asymptotically safe models
In this chapter, as a warm-up, we study effective corrections to the
Starobinsky inflation, providing a different behavior at large field values.
Later we show that the RG-improvement of the Starobinsky model proposed in
[91] (see also [92, 93] for a review), closely related to the renormalizable
$R+R^{2}$ Fradkin-Tseytlin gravity [94], produces a branch of the inflationary
potential entirely dominated by eternal inflation. For the remaining branch,
we find the initial values of the inflaton for which inflation becomes
eternal, as a function of the theory parameters. Finally we show that the
possibility of tunneling through the potential barrier present in [95] becomes
a new mechanism for eternal inflation. In all of the asymptotically safe
inflationary theories, eternal inflation is present as a consequence of the
asymptotic flatness of the effective potential.
### 4.1 Quantum corrections to the Starobinsky model
Below the Planck scale the gravitational constant $G_{N}$ has a vanishing
anomalous dimension and the $R^{2}$ term has a coefficient that runs
logarithmically [96] (this comes from the fact that the $R^{2}$ coupling is
dimensionless in $4$ dimensions). Hence, one can motivate various quantum
corrected inflationary models, such as [28, 29, 30, 31]. In particular, the
leading-log corrections to the Starobinsky model are given by [31]:
$\displaystyle\mathcal{L}_{eff}=\frac{M_{Pl}^{2}R}{2}+\frac{\frac{a}{2}R^{2}}{1+b\ln(\frac{R}{\mu^{2}})}+\mathcal{O}(R^{3}).$
(4.1)
In order to find the Einstein frame potential for this model a few steps need
to be taken. First, we use a conformal transformation [97]. Then, applying the
corresponding transformations of the Ricci scalar and the metric determinant,
we get the Einstein frame action:
$\displaystyle S=\int
d^{4}x\sqrt{-g_{E}}\left[\frac{M_{Pl}^{2}}{2}R_{E}-\frac{1}{2}g_{E}^{\mu\nu}\left(\partial_{\mu}\phi_{E}\right)\left(\partial_{\nu}\phi_{E}\right)-V_{E}(\phi_{E})\right],$
(4.2)
which depends on the sought potential $V_{E}$, which can be further
obtained as:222Here, the effective action differs from [31] due to the
introduction of the auxiliary numerical parameter $e\approx 2.72$ (Euler's
number). Nevertheless, the dynamics stemming from each of the potentials are
equivalent.
$\displaystyle
V_{E}(\Phi)=\frac{M_{Pl}^{4}}{2}\frac{a\Phi^{2}\left(1+b\ln\left(\frac{\Phi}{\mu^{2}}\right)\right)^{2}\left(1+b\ln\left(\frac{\Phi}{e\mu^{2}}\right)\right)}{\left\\{M_{Pl}^{2}\left(1+b\ln\left(\frac{\Phi}{\mu^{2}}\right)\right)^{2}+2a\Phi\left(1+b\ln\left(\frac{\Phi}{\sqrt{e}\mu^{2}}\right)\right)\right\\}^{2}},$
(4.3)
with $\phi_{E}$ given by
$F(\phi_{E})=M_{Pl}^{2}\exp\left(\sqrt{\frac{2}{3}}\frac{\phi_{E}}{M_{Pl}}\right)=M_{Pl}^{2}+\frac{a\Phi[2-b+2b\ln(\Phi/\mu^{2})]}{[1+2b\ln(\Phi/\mu^{2})]^{2}},$
(4.4)
yet the transformation between $\Phi$ and $\phi_{E}$ cannot be inverted
analytically. By taking into account the COBE normalization, we can treat $b$
as a free parameter and fix $a(b)$. For $b=0$ one obtains the usual
Starobinsky model, and for $b\ll 1$ one gets the model $R^{2}(1+\beta\ln R)$
discussed in [29], with the potential given by the Lambert W function (the
same as for the model discussed in section 4.3), approximated in the limit
$\beta\ll 1$ as
$V\approx\frac{V_{s}}{1+b/(2\alpha)+\beta/\alpha\ln[(e^{\tilde{\chi}}-1)/2\alpha]},$
(4.5)
where $V_{s}$ is the Starobinsky potential, $\tilde{\chi}$ is the Einstein
frame field, and $\alpha(\beta)$ is a fixed function of $\beta$; we have kept
the original notation.
From the plots in figure 4, one can see that both of the models should give
inflationary observables similar to those of Starobinsky inflation. On the
other hand, eternal inflation in these models will be quite different. For
$\beta<0$ and $b>0$ these models have potentials that are non-flat for large
field values, while for $\beta>0$ the potential depicted in figure 4 has a
runaway behaviour, i.e. a different asymptotic behaviour, which is discussed
in Section 4.3. This makes those potentials quantitatively different from the
Starobinsky model in the context of eternal inflation and suggests that
eternal inflation cannot take place in those models. To be concrete, we have
checked that eternal inflation for the model described by (4.3) takes place at
$\Phi\approx 1000\,M_{Pl}$, which is far beyond the applicability of the
model. We now turn to the inflationary models stemming from asymptotic safety.
Figure 4: Left: plots of the potential (4.3) for various $b$ parameters.
Right: the potential (4.5) for various values of the $\beta$ parameter. The
runaway behaviour towards infinity in the case $\beta>0$ is visible.
### 4.2 RG-improved Starobinsky inflation
Renormalization Group improvement is a procedure of identifying and replacing
the RG scale $k^{2}$ with a physical scale. It incorporates the leading-order
quantum effects in the dynamics of a classical system. In the case of gravity,
the running of the coupling constants in the Einstein-Hilbert action results
in an additional contribution to the field equations from the gravitational
energy-momentum tensor [93]. In the de Sitter-type setting, $k^{2}\sim R$ is
the unique identification of the physical scale dictated by the Bianchi
identities [93]. Such a replacement in the scale-dependent Einstein-Hilbert
action generates an effective $f(R)$ action, whose analytical expression is
determined by the running of the gravitational couplings. RG-improvement
could solve the classical black hole singularity problem [98, 99], gives
finite entanglement entropy [100], and generates an inflationary regime in
quantum gravity [37].
In this section, we study the asymptotically safe inflation based on RG-
improved quadratic gravity Lagrangian, considered in [91, 93]:
$\mathcal{L}_{k}=\frac{1}{16\pi
g_{k}}\left(R-2\lambda_{k}k^{2}\right)-\beta_{k}R^{2},$ (4.6)
with the running dimensionless couplings $g_{k}$, $\lambda_{k}$, $\beta_{k}$
being the three relevant directions of the theory, with running given by
[101]:
$g_{k}=\frac{6\pi
c_{1}k^{2}}{6\pi\mu^{2}+23c_{1}(k^{2}-\mu^{2})},\quad\quad\beta_{k}=\beta_{\ast}+b_{0}\left(\frac{k^{2}}{\mu^{2}}\right)^{-\theta_{3}/2},$
(4.7)
where $\mu$ is the infrared renormalization point such that
$c_{1}=g_{k}(k=\mu)$ and $c_{1}$ and $b_{0}$ are integration constants. We
introduce a parameter $\alpha$ as
$\alpha=-2\mu^{\theta_{3}}b_{0}/M_{P}^{2},$ (4.8)
that measures the departure from the non-Gaussian fixed point (NGFP). One may
find the behavior of the couplings near the NGFP and substitute the
appropriate expressions into the Lagrangian, using the RG-improvement and the
scale identification $k^{2}=\xi R$, where $\xi$ is an arbitrary parameter of
order one.
Figure 5: Left: $V_{+}(\phi)$ for various $\alpha$ and fixed $\Lambda=1$.
Right: the logarithmic dependence on the parameter $\alpha$ of the initial
field value above which eternal inflation occurs. Blue points were evaluated
via (2.23).
Following [91] we shall assume $\theta_{3}=1$; then the transformation from
the Jordan to the Einstein frame yields an effective potential [91, 93]:
$\displaystyle\begin{split}V_{\pm}=&\frac{m^{2}e^{-2\sqrt{\frac{2}{3}}\kappa\phi}}{256\kappa}\Bigg{\\{}\vphantom{6\alpha^{3}\sqrt{\alpha^{2}+16e^{\sqrt{\frac{2}{3}}\kappa\phi}-16}}192(e^{\sqrt{\frac{2}{3}}\kappa\phi}-1)^{2}-3\alpha^{4}+128\Lambda\\\
&-\sqrt{32}\alpha\left[(\alpha^{2}+8e^{\sqrt{\frac{2}{3}}\kappa\phi}-8)\pm\alpha\sqrt{\alpha^{2}+16e^{\sqrt{\frac{2}{3}}\kappa\phi}-16}\right]^{\frac{3}{2}}\\\
&-3\alpha^{2}(\alpha^{2}+16e^{\sqrt{\frac{2}{3}}\kappa\phi}-16)\mp
6\alpha^{3}\sqrt{\alpha^{2}+16e^{\sqrt{\frac{2}{3}}\kappa\phi}-16}\Bigg{\\}}\end{split},$
(4.9)
where, after the CMB normalization we perform below, the only free parameters
are the cosmological constant $\Lambda$ and $\alpha$. The $V_{+}$ branch
predicts the reheating phase; figure 5 shows its plot for various $\alpha$.
As in the case of Starobinsky inflation, we denote by $V_{0}$ the constant
part of the potential at infinity,
$V(\phi\xrightarrow{}\infty)=V_{0}=\frac{3m^{2}}{4\kappa^{2}}$, and fix it
with CMB data via the relation (3.6). For example, given $\alpha=2.8$,
$\Lambda=1$, the plateau value is equal to $V_{0}=1.99\times 10^{-10}$
$M^{4}_{Pl}$, hence one may fix the mass parameter $m=2\times 10^{14}$ GeV.
Now we shall investigate the eternal inflation conditions given by (2.23,
2.24). These conditions restrict the initial value of the field. We search
for the $\phi_{0}$ above which eternal inflation occurs, as a function of the
theory parameters. We have also found that the initial value above which
eternal inflation occurs does not depend on the cosmological constant. This is
due to the fact that $\Lambda$ only shifts the minimum of the potential and
does not affect the large-field behavior of the system. The analytical
conditions for EI have been checked for a set of $\alpha$ values and are
depicted in figure 5. The initial value of the field depends logarithmically
on $\alpha$. The reason for this behaviour is the following. In the large
field expansion:
$\displaystyle V_{\pm}(\phi)=V_{plateau}-128V_{0}\alpha
e^{-\frac{1}{2}\sqrt{\frac{3}{2}}\phi},$ (4.10)
and by the substitution $\tilde{\phi}=e^{-\frac{1}{2}\sqrt{\frac{3}{2}}\phi}$
the potential reduces to the linear hilltop model, which justifies the usage
of the formulae (2.23, 2.24) and the functional form of $\phi_{0}(\alpha)$.
The results were also confirmed by the numerical simulations. For example,
given $\Lambda=1,\,\alpha=1.6$, the analytical considerations predict
$\phi_{EI}=22.6\,M_{Pl}$. The direct numerical simulation for this set of
parameters yields $\Gamma=0.0001$ $M_{Pl}$ and $3H=0.0003$ $M_{Pl}$, meaning
that eternal inflation begins slightly below the expected value $\phi_{EI}$.
The plateau of (4.9) at large field values is a characteristic feature of
effective inflationary potentials stemming from asymptotically safe theories.
It is dominated by eternal inflation and may suggest a deeper relation between
the asymptotic safety of quantum gravity and the multiverse.
### 4.3 Large N-dynamics and (eternal) inflation
In this section we investigate a model in which inflation is driven by an
ultraviolet-safe and interacting scalar sector stemming from a new class of
non-supersymmetric gauge field theories. We consider an $\mathrm{SU}(N_{C})$
gauge theory with $N_{F}$ Dirac fermions, interacting with an $N_{F}$
$\times$ $N_{F}$ complex self-interacting scalar matrix $H_{ij}$, described
in [95]. The Veneziano limit ($N_{F}\to+\infty$, $N_{C}\to+\infty$,
$N_{F}/N_{C}=\mathrm{const}$) is taken such that the ratio $N_{F}/N_{C}$
becomes a continuous parameter [70]. The action in the Jordan frame has the
following form:
following form:
$S_{J}=\int
d^{4}x\sqrt{-g}\left\\{-\frac{M^{2}+\xi\phi^{2}}{2}R+\frac{g^{\mu\nu}}{2}\partial_{\mu}\phi\partial_{\nu}\phi-
V_{\mathrm{iUVFP}}\right\\},$ (4.11)
where the leading logarithmically resummed potential $V_{iUVFP}$ is given by:
$V_{\mathrm{iUVFP}}(\phi)=\frac{\lambda_{*}\phi^{4}}{4N_{f}^{2}\left(1+W(\phi)\right)}\left(\frac{W(\phi)}{W(\mu_{0})}\right)^{\frac{18}{13\delta}},$
(4.12)
where
$\lambda_{*}=\delta\frac{16\pi^{2}}{19}(\sqrt{20+6\sqrt{23}}-\sqrt{23}-1)$ is
the positive quartic coupling at the fixed point, $\phi$ is the real scalar
field along the diagonal of $H_{ij}=\phi\delta_{ij}/\sqrt{2N_{f}}$,
$\delta=N_{F}/N_{C}-11/2$ is the positive control parameter, and $W(\phi)$ is
the Lambert function solving the transcendental equation
$z=W\exp W,$ (4.13)
with
$z(\mu)=\left(\frac{\mu_{0}}{\mu}\right)^{\frac{4}{3}\delta\alpha_{*}}\left(\frac{\alpha_{*}}{\alpha_{0}}-1\right)\exp\left[\frac{\alpha_{*}}{\alpha_{0}}-1\right].$
(4.14)
The parameter $\alpha_{*}=\frac{26}{57}\delta+O(\delta^{2})$ is the gauge
coupling at its UV fixed point value and $\alpha_{0}=\alpha(\mu_{0})$ is the
same coupling at a reference scale $\mu_{0}$.
A conformal transformation allows one to rewrite the action from the Jordan to
the Einstein frame. Assuming single field slow-roll inflation, we examine the
inflationary predictions of the potential and compute the slow-roll
parameters:
$\epsilon=\frac{M_{Pl}^{2}}{2}\left(\frac{dU/d\chi}{U}\right)^{2},\quad\quad\eta=M_{Pl}^{2}\frac{d^{2}U/d\chi^{2}}{U},$
(4.15)
where $U=V_{\mathrm{iUVFP}}/\Omega^{4}$, with
$\Omega^{2}=(M^{2}+\xi\phi^{2})/M_{Pl}^{2}$ being the conformal factor of the
metric, and $\chi$ is the canonically normalized field in the Einstein frame.
We assume that $M=M_{Pl}$. Inflation ends when the slow-roll conditions are
violated, that is when $\epsilon(\phi_{end})=1$ or $|\eta(\phi_{end})|=1$. We
analyze the non-minimal case, where the coupling $\xi$ is non-vanishing. The
potential $U$ is given by:
$U=\frac{V_{\mathrm{iUVFP}}}{\Omega^{4}}\approx\frac{\lambda_{*}\phi^{4}}{4N_{F}^{2}\left(1+\frac{\xi\phi^{2}}{M_{Pl}^{2}}\right)^{2}}\left(\frac{\phi}{\mu_{0}}\right)^{-\frac{16}{19}\delta}\mathrm{.}$
(4.16)
Figure 6: Left: the non-minimally coupled potential as a function of $\phi$
for $\delta=0.1$, $\xi=1/6$ and $\mu_{0}=10^{-3}M_{Pl}$. There is a maximum at
$\phi_{max}=16.7\,M_{Pl}$.
Right: for the same set of parameters, we plot the first eternal inflation
condition as a function of $\phi$ (blue curve) and the eternal inflation bound
(yellow curve). Inflation becomes eternal if the blue curve is below the
yellow one. At $\phi_{max}$ the first derivative of the potential vanishes and
(2.23) predicts a narrow window for eternal inflation.
In the large field limit $\phi$ $\gg$ $M_{Pl}/\sqrt{\xi}$ the $\phi^{4}$ term
in the numerator cancels against the term in the denominator. In this limit,
the quantum corrections dictate the behaviour of the potential, which is found
to decrease as:
$\frac{\lambda_{*}M_{Pl}^{4}}{4N_{F}^{2}\xi^{2}}\left(\frac{\phi}{\mu_{0}}\right)^{-\frac{16}{19}\delta}\mathrm{.}$
(4.17)
The non-minimally coupled potential has one local maximum and two minima. The
region to the left of the maximum is the region where inflation can be brought
to an end and reheating takes place [102]. To the right of the maximum,
inflation becomes classically eternal: for large values of $\phi$ the
potential flattens out and the slow-roll conditions are not violated.
Numerical solutions to the FP equation show that there is no possibility of
eternal inflation sustained at the maximum itself, since it is unstable and
any quantum fluctuation will displace the field from that position.
Furthermore, due to the steepness of the potential around this maximum, there
is no possibility for the field to remain in that region. Let us now
investigate the analytical eternal inflation conditions. As in the Starobinsky
model, the second condition (2.24) is always satisfied. The first condition
(2.23) is illustrated in figure 6. There is a peak at $\phi$ = $\phi_{max}$ =
16.7 $M_{Pl}$, due to the vanishing derivative, and if we "zoom in", the
analytical condition allows for eternal inflation in the close neighbourhood
of $\phi_{max}$. We have verified numerically that this is not a sustainable
attractor of eternal inflation: a field that starts its evolution at
$\phi_{max}$ will leave this region, as it cannot climb further uphill.
Nevertheless, eternal inflation may still occur due to quantum tunneling
through the potential barrier.
#### Tunneling through the potential barrier
As described in section 2.4, if the potential has multiple vacua, quantum
tunneling through the potential barrier is expected. The non-minimally coupled
potential (4.16) belongs to this class. The question is whether tunneling
from the non-eternal inflation region $\phi<\phi_{max}$ to the region of
classical eternal inflation $\phi>\phi_{max}$ is possible.
We start by investigating the fate of the field initially placed at the peak
$\phi_{0}=\phi_{max}$ of the potential depicted in figure 6. By virtue of
the steepest descent approximation at the maximum, equation (2.31) may be
employed. The resulting ratio of probabilities of the right-side descent to
the left-side descent, $R(\delta,\xi)=\frac{p_{+}}{p_{-}}$, as a function of
the control parameter $\delta$ and the non-minimal coupling constant $\xi$,
was calculated directly from the formula (2.31) without the need for a
numerical simulation of the Langevin equation. It is presented in figure 7.
Due to the complexity of the potential (4.16), its maximum was found
numerically and then employed in (2.31). Figure 7 shows how the maximum
changes with the parameters. As expected, the ratio $R(\delta,\xi)$ is close
to 1 and favors the right side (left side) of the potential for large (small)
values of the parameters. The biggest ratio emerges at large values of the
parameters $\xi$ and $\delta$, since the potential is then "step-like" and
highly asymmetric. It is monotonically decreasing with the value of
$\phi_{max}$ at which the potential has its maximum, see figure 8.
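A minimal sketch of this computation is given below: locate the maximum of (4.16) numerically, build the dimensionless potential $v=U/(24\pi^{2})$ in Planck units, and evaluate (2.31) with finite-difference derivatives. The parameters follow (4.18), with $\lambda_{*}$ taken from the expression below (4.12); the optimizer bounds and the step size $h$ are illustrative assumptions. For this parameter set the result should be comparable to the values of $R$ close to one reported below.

```python
import numpy as np
from scipy.optimize import minimize_scalar

delta, xi, NF, mu0 = 0.1, 1.0 / 6.0, 10, 1e-3        # Planck units, cf. (4.18)
lam = delta * (16 * np.pi**2 / 19) * (np.sqrt(20 + 6 * np.sqrt(23)) - np.sqrt(23) - 1)

def U(phi):
    # non-minimally coupled Einstein-frame potential (4.16)
    return (lam * phi**4 / (4 * NF**2 * (1 + xi * phi**2)**2)
            * (phi / mu0)**(-16 * delta / 19))

phi_max = minimize_scalar(lambda p: -U(p), bounds=(1.0, 100.0), method="bounded").x

def v(phi):
    return U(phi) / (24 * np.pi**2)     # dimensionless potential (2.28)

h = 1e-3                                # finite-difference step
v2 = (v(phi_max + h) - 2 * v(phi_max) + v(phi_max - h)) / h**2
v3 = (v(phi_max + 2 * h) - 2 * v(phi_max + h)
      + 2 * v(phi_max - h) - v(phi_max - 2 * h)) / (2 * h**3)
R = 1 - (2.0 / 3.0) * (np.sqrt(2) / np.pi) * v(phi_max) * v3 / abs(v2)**1.5
```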
Figure 7: Left: probability ratio $R=\frac{p_{+}}{p_{-}}$ of descending from
the maximum towards the right minimum ($p_{+}$) and the left minimum
($p_{-}$), as a function of the theory parameters $\xi$ and $\delta$. For
small values of the parameters it is more probable to fall from the maximum
towards $\phi_{-}=0$ with non-eternal inflation, while for large values of the
parameters the minimum at $\phi_{+}=\infty$ is favored, resulting in an
eternally inflating universe.
Right: the value of the field at which the potential is maximal. The above
figures are qualitatively similar because for small values of $\phi_{max}$
the effective potential is highly asymmetric ("step-like"). This breaks the
symmetry between the right and left descent probabilities.
In order to verify the accuracy of the relation (2.31), we have performed a
numerical simulation of the discretized Langevin equation (2.26) with initial
condition $\phi_{0}=\phi_{max}$. For example, it was found that for the set of
parameters $N_{F}=10$, $\mu=10^{-3}M_{Pl}$, $\xi=\frac{1}{6}$,
$\delta=0.1$ the steepest descent approximation yields $R=0.92$, while the
numerical analysis results in $R=0.97$, which demonstrates the good accuracy
of the analytical formula.
One may wonder how the probability $p_{\pm}(\phi_{0})$ depends on the
departure from the maximum, $\phi_{0}\neq\phi_{max}$. The analytical answer is
given by (2.29). As we have checked numerically, inflation becomes eternal
when the tunneling probability is non-zero, as depicted in figure 8.
To bypass the numerical calculation of the integral (2.29) we employ a direct
numerical simulation of the Langevin equation. This time, however, we do not
seek the time evolution of the inflaton. Rather than creating histograms of
the count of inflationary events at a given timestep, we simply track the
probabilities $p_{+}$ and $p_{-}$. We say that the particle tunneled through
the potential barrier, contributing to $p_{+}$, if the evolution starts at
$\phi_{0}<\phi_{max}$ and proceeds to arbitrarily large field values after a
long time. For each point in figure 8 the probability has been calculated on
a sample of 10000 simulations. As expected, choosing values of $\phi_{0}$
smaller than $\phi_{max}$ lowers the probability of tunneling to the right
side of the barrier. Moreover, the probability of tunneling decreases linearly
with the distance from the maximum. The result of the simulation for the set
of parameters
$N_{F}=10,\quad\mu=10^{-3}M_{Pl},\quad\xi=\frac{1}{6},\quad\delta=0.1$ (4.18)
is shown in figure 8. The green line corresponds to the green ball in figure 1
and shows the probability of tunneling through the barrier (as in figure 1)
as a function of the proximity to the maximum, $\phi_{0}\neq\phi_{max}$. The
red line corresponds to the red ball in figure 1 and shows the probability of
rolling towards the minimum at infinity.
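For concreteness, the estimation procedure can be sketched in Python as
follows. This is a minimal illustration, not the code used for the paper: the
actual potential (4.16) and discretization (2.26) are defined earlier in the
text, so a placeholder Gaussian barrier on a flat plateau stands in for the
potential, the absorbing boundaries and all numerical values are illustrative
assumptions, and units are $M_{Pl}=1$ with time measured in e-folds.

```python
import numpy as np

# Monte Carlo estimate of p_+ and p_- from a discretized Langevin equation.
# The Gaussian-barrier potential below is a placeholder for the actual
# non-minimally coupled potential (4.16); all numbers are illustrative.
V0, PHI_MAX, SIGMA = 1e-10, 16.7, 3.0  # plateau scale, barrier location/width

def V(phi):
    return V0 * (1.0 + 4.0 * np.exp(-0.5 * ((phi - PHI_MAX) / SIGMA) ** 2))

def dV(phi, eps=1e-6):
    return (V(phi + eps) - V(phi - eps)) / (2.0 * eps)

def tunneling_fractions(phi0, n_runs=1000, dN=0.01, n_steps=50_000, seed=0):
    """Fractions of trajectories started at phi0 that leave towards the
    eternally inflating side (p_plus) or roll back towards phi_- (p_minus)."""
    lo, hi = PHI_MAX - 3 * SIGMA, PHI_MAX + 3 * SIGMA  # absorbing boundaries
    rng = np.random.default_rng(seed)
    n_plus = n_minus = 0
    for _ in range(n_runs):
        phi = phi0
        for _ in range(n_steps):
            h2 = V(phi) / 3.0                        # H^2 = V / (3 M_Pl^2)
            drift = -dV(phi) / (3.0 * h2)            # classical slow-roll drift
            kick = np.sqrt(h2 * dN) / (2.0 * np.pi)  # quantum noise, H/(2 pi)
            phi += drift * dN + kick * rng.standard_normal()
            if phi >= hi:
                n_plus += 1
                break
            if phi <= lo:
                n_minus += 1
                break
    return n_plus / n_runs, n_minus / n_runs

# e.g. starting exactly at the barrier top (the paper uses 10000 samples):
# p_plus, p_minus = tunneling_fractions(PHI_MAX)
```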
The rolling is also a stochastic process, as tunneling in the opposite
direction is possible. The probability distribution of tunneling is not
symmetric in the two directions. Notice that the initial condition for which
$p_{+}=\frac{1}{2}$ is shifted to the right of $\phi_{max}$. This means that,
starting from the maximum, it is slightly more probable to land in
$\phi_{-}$. There is a point below which the green ball cannot tunnel,
$p_{+}=0$ (for the set of parameters given by (4.18), at $9.2\,M_{Pl}$), and
an opposite limiting case (at $24.8\,M_{Pl}$) beyond which the red ball cannot
tunnel, $p_{-}=0$. Hence, for every initial value of the field above $9.2$
Planck masses, there is a non-zero probability of eternal inflation. On the
other hand, for $\phi_{0}=9.2\,M_{Pl}$ and the parameters given by (4.18),
inflation classically produces roughly 54 e-folds, depending on the reheating
time [102], and is in agreement with CMB data. This shows that the model is on
the verge of being eternally inflating, which may point to interesting
phenomenology.
To sum up, the critical point of our analysis is that the analytical
conditions (2.23, 2.24) did not allow for eternal inflation, even though the
tunneling process may evolve any initial point above 9.2 Planck masses to
$\phi_{+}=\infty$ without violating the slow-roll conditions. This shows that
the conditions (2.23, 2.24) are not well suited for multiple-minima models and
do not capture the full global influence of quantum fluctuations in the early
universe.
Figure 8: Left: linear probability distribution of tunneling (green side) and
rolling (red side) towards the minimum at infinity, as a function of the
initial field value $\phi_{0}$. The data points have been directly simulated.
Right: probability ratio $R$, evaluated with (2.31), which is monotonically
decreasing with the value of the maximum of the potential in the
steepest-descent approximation.
## 5 Conclusions
Eternal inflation remains a conceptual issue of the inflationary paradigm. The
creation of scattered, causally disconnected regions of spacetime (the
multiverse) is not confirmed observationally and raises questions about the
inflationary predictions [77]. Hence, one may impose the no-eternal-inflation
principle [1] to restrict the free parameters and the initial conditions.
We have investigated popular inflationary models and have found that, in
principle, eternal inflation is present in every effective potential that is
asymptotically flat at large field values, assuming the ergodicity of the
system. The finite inflationary time of our pocket universe serves as a
consistency condition on the multiverse predictions. In section 3.1, we
verified that $\alpha$-attractor T-models are consistent from this point of
view.
If the initial value of the scalar field driving inflation is above the Planck
scale, a UV completion of the given model is necessary. Starobinsky inflation
stemming from $R^{2}$ gravity gives around 60 e-folds for
$\phi_{0}=5.5\,M_{Pl}$. We have considered the effective quantum corrections
to Starobinsky inflation based on the qualitative behavior of the running
coupling constants. Next, the RG improvement of the $R+R^{2}$ Lagrangian was
studied, and we have found that the field values required for eternal
inflation are typically higher than those for the Starobinsky case. The
flatness of the potential and the possibility of eternal inflation seem to be
a signature of asymptotically safe UV completions, in contradistinction to the
effective-theory corrections. We have checked that the model of [31] requires
$\Phi\sim 1000\,M_{Pl}$ in order to get eternal inflation, which is far beyond
the applicability of the model.
Furthermore, we have found that for potentials with multiple vacua, tunneling
through potential barriers provides a new mechanism for eternal inflation;
hence, in order to understand the inflationary dynamics, one cannot simply cut
the potential at the maximum. The $\mathrm{SU}(N)$ gauge theory with Dirac
fermions provides an example of such behavior. The probability of tunneling
to the side dominated by eternal inflation becomes negligible a few Planck
masses away from the peak of the potential. Yet the fixed-point values of the
couplings, and possibly the shape of the potential, can be obscured by quantum
gravity effects; this shall be investigated elsewhere.
Our analysis reveals that there is no obstruction to the multiverse scenario
in asymptotically safe models. Yet its occurrence depends on the initial
conditions for the inflationary phase and on the matching to the observational
data, tying these three profound issues together. On the other hand, in AS
models these questions may find intriguing answers through the finite action
principle [78].
## Acknowledgments
We thank J. Reszke and J. Łukasik for participating in the early stages of
this project. We thank G. Dvali, A. Eichhorn, M. Pauli, A. Platania, T.
Rudelius, S. Vagnozzi and Z.W. Wang for fruitful discussions and extensive
comments on the manuscript. Work of J.H.K. was supported by the Polish
National Science Center (NCN) grant 2018/29/N/ST2/01743. J.H.K. would like to
acknowledge the CP3-Origins hospitality during this work. The computational
part of this research has been partially supported by the PL-Grid
Infrastructure.
## References
* [1] T. Rudelius, “Conditions for (No) Eternal Inflation,” JCAP, vol. 08, p. 009, 2019.
* [2] C. Vafa, “The String landscape and the swampland,” September 2005.
* [3] H. Ooguri and C. Vafa, “On the Geometry of the String Landscape and the Swampland,” Nucl. Phys. B, vol. 766, pp. 21–33, 2007.
* [4] G. Obied, H. Ooguri, L. Spodyneiko, and C. Vafa, “De Sitter Space and the Swampland,” June 2018.
* [5] P. Agrawal, G. Obied, P. J. Steinhardt, and C. Vafa, “On the Cosmological Implications of the String Swampland,” Phys. Lett. B, vol. 784, pp. 271–276, 2018.
* [6] A. Achúcarro and G. A. Palma, “The string swampland constraints require multi-field inflation,” JCAP, vol. 02, p. 041, 2019.
* [7] W. H. Kinney, S. Vagnozzi, and L. Visinelli, “The zoo plot meets the swampland: mutual (in)consistency of single-field inflation, string conjectures, and cosmological data,” Class. Quant. Grav., vol. 36, no. 11, p. 117001, 2019.
* [8] G. Dvali and C. Gomez, “Quantum Compositeness of Gravity: Black Holes, AdS and Inflation,” JCAP, vol. 01, p. 023, 2014.
* [9] G. Dvali and C. Gomez, “Quantum Exclusion of Positive Cosmological Constant?,” Annalen Phys., vol. 528, pp. 68–73, 2016.
* [10] G. Dvali, C. Gomez, and S. Zell, “Quantum Break-Time of de Sitter,” JCAP, vol. 06, p. 028, 2017.
* [11] G. Dvali, “$S$-Matrix and Anomaly of de Sitter,” Symmetry, vol. 13, no. 1, p. 3, 2020.
* [12] J. H. Kwapisz and K. A. Meissner, “Asymptotic safety and quantum gravity amplitudes,” May 2020.
* [13] O. Lauscher and M. Reuter, “Fractal spacetime structure in asymptotically safe gravity,” JHEP, vol. 10, p. 050, 2005.
* [14] O. Lauscher and M. Reuter, “Asymptotic safety in quantum Einstein gravity: Nonperturbative renormalizability and fractal spacetime structure,” in 14th Oporto Meeting on Geometry, Topology and Physics: Mathematical Aspects of Quantum Field Theory, pp. 293–313, November 2005.
* [15] A. H. Guth, “Eternal inflation and its implications,” J. Phys. A, vol. 40, pp. 6811–6826, 2007.
* [16] M. C. Johnson and J.-L. Lehners, “Cycles in the Multiverse,” Phys. Rev. D, vol. 85, p. 103509, 2012.
* [17] J.-L. Lehners, “Eternal Inflation With Non-Inflationary Pocket Universes,” Phys. Rev. D, vol. 86, p. 043518, 2012.
* [18] G. León, “Eternal inflation and the quantum birth of cosmic structure,” Eur. Phys. J. C, vol. 77, no. 10, p. 705, 2017.
* [19] H. Matsui and F. Takahashi, “Eternal Inflation and Swampland Conjectures,” Phys. Rev. D, vol. 99, no. 2, p. 023533, 2019.
* [20] Z. Wang, R. Brandenberger, and L. Heisenberg, “Eternal Inflation, Entropy Bounds and the Swampland,” Eur. Phys. J. C, vol. 80, no. 9, p. 864, 2020.
* [21] J. J. Blanco-Pillado, H. Deng, and A. Vilenkin, “Eternal Inflation in Swampy Landscapes,” JCAP, vol. 05, p. 014, 2020.
* [22] C.-M. Lin, “Topological Eternal Hilltop Inflation and the Swampland Criteria,” JCAP, vol. 06, p. 015, 2020.
* [23] O. Hohm and B. Zwiebach, “Non-perturbative de Sitter vacua via $\alpha^{\prime}$ corrections,” Int. J. Mod. Phys. D, vol. 28, no. 14, p. 1943002, 2019.
* [24] O. Hohm and B. Zwiebach, “Duality invariant cosmology to all orders in $\alpha$’,” Phys. Rev. D, vol. 100, no. 12, p. 126011, 2019.
* [25] T. Banks, “On the Limits of Effective Quantum Field Theory: Eternal Inflation, Landscapes, and Other Mythical Beasts,” October 2019.
* [26] M.-S. Seo, “Eternal inflation in light of Wheeler-DeWitt equation,” JCAP, vol. 11, p. 007, 2020.
* [27] J. F. Donoghue, “General relativity as an effective field theory: The leading quantum corrections,” Phys. Rev. D, vol. 50, pp. 3874–3888, 1994.
* [28] A. Codello, J. Joergensen, F. Sannino, and O. Svendsen, “Marginally Deformed Starobinsky Gravity,” JHEP, vol. 02, p. 050, 2015.
* [29] I. Ben-Dayan, S. Jing, M. Torabian, A. Westphal, and L. Zarate, “$R^{2}\log R$ quantum corrections and the inflationary observables,” JCAP, vol. 09, p. 005, 2014.
* [30] K. Bamba, G. Cognola, S. D. Odintsov, and S. Zerbini, “One-loop modified gravity in a de Sitter universe, quantum-corrected inflation, and its confrontation with the Planck result,” Phys. Rev. D, vol. 90, no. 2, p. 023525, 2014.
* [31] L.-H. Liu, T. Prokopec, and A. A. Starobinsky, “Inflation in an effective gravitational model and asymptotic safety,” Phys. Rev. D, vol. 98, no. 4, p. 043505, 2018.
* [32] S. Weinberg, Ultraviolet divergences in quantum theories of gravitation, pp. 790–831, January 1980.
* [33] M. Reuter, “Nonperturbative evolution equation for quantum gravity,” Phys. Rev. D, vol. 57, pp. 971–985, 1998.
* [34] W. Souma, “Nontrivial ultraviolet fixed point in quantum gravity,” Prog. Theor. Phys., vol. 102, pp. 181–195, 1999.
* [35] O. Lauscher and M. Reuter, “Ultraviolet fixed point and generalized flow equation of quantum gravity,” Phys. Rev. D, vol. 65, p. 025013, 2002.
* [36] M. Reuter and F. Saueressig, “Renormalization group flow of quantum gravity in the Einstein-Hilbert truncation,” Phys. Rev. D, vol. 65, p. 065016, 2002.
* [37] A. Eichhorn, “An asymptotically safe guide to quantum gravity and matter,” 2019.
* [38] N. Dupuis, L. Canet, A. Eichhorn, W. Metzner, J. Pawlowski, M. Tissier, and N. Wschebor, “The nonperturbative functional renormalization group and its applications,” June 2020.
* [39] S. Weinberg, “Asymptotically safe inflation,” Physical Review D, vol. 81, Apr 2010.
* [40] M. Reuter and F. Saueressig, “From big bang to asymptotic de Sitter: Complete cosmologies in a quantum gravity framework,” JCAP, vol. 09, p. 012, 2005.
* [41] M. Reuter and F. Saueressig, “Quantum Einstein Gravity,” New J. Phys., vol. 14, p. 055022, 2012.
* [42] A. Eichhorn, Y. Hamada, J. Lumma, and M. Yamada, “Quantum gravity fluctuations flatten the Planck-scale Higgs potential,” Phys. Rev. D, vol. 97, no. 8, p. 086004, 2018.
* [43] M. Shaposhnikov and C. Wetterich, “Asymptotic safety of gravity and the Higgs boson mass,” Phys. Lett. B, vol. 683, pp. 196–200, 2010.
* [44] O. Zanusso, L. Zambelli, G. Vacca, and R. Percacci, “Gravitational corrections to Yukawa systems,” Phys. Lett. B, vol. 689, pp. 90–94, 2010\.
* [45] J.-E. Daum, U. Harst, and M. Reuter, “Running Gauge Coupling in Asymptotically Safe Quantum Gravity,” JHEP, vol. 01, p. 084, 2010.
* [46] S. Folkerts, D. F. Litim, and J. M. Pawlowski, “Asymptotic freedom of Yang-Mills theory with gravity,” Phys. Lett. B, vol. 709, pp. 234–241, 2012.
* [47] N. Christiansen, D. F. Litim, J. M. Pawlowski, and A. Rodigast, “Fixed points and infrared completion of quantum gravity,” Phys. Lett. B, vol. 728, pp. 114–117, 2014.
* [48] Z.-W. Wang, F. S. Sage, T. G. Steele, and R. B. Mann, “Asymptotic Safety in the Conformal Hidden Sector?,” J. Phys. G, vol. 45, no. 9, p. 095002, 2018.
* [49] A. Eichhorn, A. Held, and J. M. Pawlowski, “Quantum-gravity effects on a Higgs-Yukawa model,” Phys. Rev. D, vol. 94, no. 10, p. 104027, 2016.
* [50] F. Grabowski, J. H. Kwapisz, and K. A. Meissner, “Asymptotic safety and Conformal Standard Model,” Phys. Rev. D, vol. 99, no. 11, p. 115029, 2019.
* [51] J. H. Kwapisz, “Asymptotic safety, the Higgs boson mass, and beyond the standard model physics,” Phys. Rev. D, vol. 100, no. 11, p. 115001, 2019.
* [52] A. Eichhorn and A. Held, “Top mass from asymptotic safety,” Phys. Lett. B, vol. 777, pp. 217–221, 2018.
* [53] A. Eichhorn and A. Held, “Mass difference for charged quarks from asymptotically safe quantum gravity,” Phys. Rev. Lett., vol. 121, no. 15, p. 151302, 2018.
* [54] A. Eichhorn, A. Held, and C. Wetterich, “Quantum-gravity predictions for the fine-structure constant,” Phys. Lett. B, vol. 782, pp. 198–201, 2018\.
* [55] A. Eichhorn and S. Lippoldt, “Quantum gravity and Standard-Model-like fermions,” Phys. Lett. B, vol. 767, pp. 142–146, 2017.
* [56] N. Christiansen, D. F. Litim, J. M. Pawlowski, and M. Reichert, “Asymptotic safety of gravity with matter,” Phys. Rev. D, vol. 97, no. 10, p. 106012, 2018.
* [57] A. Eichhorn, S. Lippoldt, and M. Schiffer, “Zooming in on fermions and quantum gravity,” Phys. Rev. D, vol. 99, no. 8, p. 086002, 2019.
* [58] N. Christiansen and A. Eichhorn, “An asymptotically safe solution to the U(1) triviality problem,” Phys. Lett. B, vol. 770, pp. 154–160, 2017.
* [59] A. Eichhorn, A. Held, and C. Wetterich, “Predictive power of grand unification from quantum gravity,” JHEP, vol. 08, p. 111, 2020.
* [60] A. Eichhorn and M. Schiffer, “$d=4$ as the critical dimensionality of asymptotically safe interactions,” Phys. Lett. B, vol. 793, pp. 383–389, 2019.
* [61] R. Alkofer, A. Eichhorn, A. Held, C. M. Nieto, R. Percacci, and M. Schröfl, “Quark masses and mixings in minimally parameterized UV completions of the Standard Model,” Annals Phys., vol. 421, p. 168282, 2020.
* [62] J. Daas, W. Oosters, F. Saueressig, and J. Wang, “Asymptotically safe gravity with fermions,” Phys. Lett. B, vol. 809, p. 135775, 2020.
* [63] A. Held, “Effective asymptotic safety and its predictive power: Gauge-Yukawa theories,” Front. in Phys., vol. 8, p. 341, 2020.
* [64] Y. Hamada, K. Tsumura, and M. Yamada, “Scalegenesis and fermionic dark matters in the flatland scenario,” Eur. Phys. J. C, vol. 80, no. 5, p. 368, 2020.
* [65] M. Reichert and J. Smirnov, “Dark Matter meets Quantum Gravity,” Phys. Rev. D, vol. 101, no. 6, p. 063015, 2020.
* [66] A. Eichhorn and M. Pauly, “Safety in darkness: Higgs portal to simple Yukawa systems,” May 2020.
* [67] A. Eichhorn and M. Pauly, “Constraining power of asymptotic safety for scalar fields,” September 2020.
* [68] Y. Hamada, J. M. Pawlowski, and M. Yamada, “Gravitational instantons and anomalous chiral symmetry breaking,” September 2020.
* [69] G. P. de Brito, A. Eichhorn, and M. Schiffer, “Light charged fermions in quantum gravity,” October 2020.
* [70] D. F. Litim and F. Sannino, “Asymptotic safety guaranteed,” Journal of High Energy Physics, vol. 2014, Dec 2014.
* [71] D. F. Litim, M. Mojaza, and F. Sannino, “Vacuum stability of asymptotically safe gauge-Yukawa theories,” JHEP, vol. 01, p. 081, 2016.
* [72] R. Mann, J. Meffe, F. Sannino, T. Steele, Z.-W. Wang, and C. Zhang, “Asymptotically Safe Standard Model via Vectorlike Fermions,” Phys. Rev. Lett., vol. 119, no. 26, p. 261802, 2017.
* [73] O. Antipin, N. A. Dondi, F. Sannino, A. E. Thomsen, and Z.-W. Wang, “Gauge-Yukawa theories: Beta functions at large $N_{f}$,” Phys. Rev. D, vol. 98, no. 1, p. 016003, 2018.
* [74] E. Molinaro, F. Sannino, and Z. W. Wang, “Asymptotically safe Pati-Salam theory,” Phys. Rev. D, vol. 98, no. 11, p. 115007, 2018.
* [75] Z.-W. Wang, A. Al Balushi, R. Mann, and H.-M. Jiang, “Safe Trinification,” Phys. Rev. D, vol. 99, no. 11, p. 115017, 2019.
* [76] A. H. Guth, “The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Adv. Ser. Astrophys. Cosmol., vol. 3, pp. 139–148, 1987.
* [77] A. Ijjas, P. J. Steinhardt, and A. Loeb, “Inflationary schism,” Phys. Lett. B, vol. 736, pp. 142–146, 2014.
* [78] J.-L. Lehners and K. Stelle, “A Safe Beginning for the Universe?,” Phys. Rev. D, vol. 100, no. 8, p. 083540, 2019.
* [79] J. H. Kwapisz, “Conformal standard model and inflation,” November 2019.
* [80] A. D. Linde, “Hard art of the universe creation (stochastic approach to tunneling and baby universe formation),” Nucl. Phys. B, vol. 372, pp. 421–442, 1992.
* [81] C. Kiefer, D. Polarski, and A. A. Starobinsky, “Quantum to classical transition for fluctuations in the early universe,” Int. J. Mod. Phys. D, vol. 7, pp. 455–462, 1998.
* [82] C. Kiefer and D. Polarski, “Why do cosmological perturbations look classical to us?,” Adv. Sci. Lett., vol. 2, pp. 164–173, 2009.
* [83] A. Linde, “Stochastic approach to tunneling and baby universe formation,” Nuclear Physics B, vol. 372, p. 421–442, Mar 1992.
* [84] V. Vennin and A. A. Starobinsky, “Correlation functions in stochastic inflation,” The European Physical Journal C, vol. 75, Sep 2015.
* [85] M. Noorbala, V. Vennin, H. Assadullahi, H. Firouzjahi, and D. Wands, “Tunneling in stochastic inflation,” Journal of Cosmology and Astroparticle Physics, vol. 2018, p. 032–032, Sep 2018.
* [86] J. J. M. Carrasco, R. Kallosh, and A. Linde, “Cosmological attractors and initial conditions for inflation,” Physical Review D, vol. 92, Sep 2015.
* [87] T. Krajewski, K. Turzyński, and M. Wieczorek, “On preheating in $\alpha$-attractor models of inflation,” The European Physical Journal C, vol. 79, Aug 2019.
* [88] Planck Collaboration, “Planck 2018 results - x. constraints on inflation,” A&A, vol. 641, p. A10, 2020.
* [89] A. A. Starobinsky, “A new type of isotropic cosmological models without singularity,” Physics Letters B, vol. 91, pp. 99–102, Mar. 1980.
* [90] J. E. Lidsey, A. R. Liddle, E. W. Kolb, E. J. Copeland, T. Barreiro, and M. Abney, “Reconstructing the inflaton potential—an overview,” Reviews of Modern Physics, vol. 69, p. 373–410, Apr 1997.
* [91] A. Bonanno and A. Platania, “Asymptotically safe inflation from quadratic gravity,” Phys. Lett. B, vol. 750, pp. 638–642, 2015.
* [92] A. Bonanno and F. Saueressig, “Asymptotically safe cosmology – A status report,” Comptes Rendus Physique, vol. 18, pp. 254–264, 2017.
* [93] A. Platania, “From renormalization group flows to cosmology,” Front. in Phys., vol. 8, p. 188, 2020.
* [94] E. Fradkin and A. A. Tseytlin, “Renormalizable Asymptotically Free Quantum Theory of Gravity,” Phys. Lett. B, vol. 104, pp. 377–381, 1981.
* [95] N. G. Nielsen, F. Sannino, and O. Svendsen, “Inflation from Asymptotically Safe Theories,” Phys. Rev. D, vol. 91, p. 103521, 2015.
* [96] M. Demmel, F. Saueressig, and O. Zanusso, “A proper fixed functional for four-dimensional Quantum Einstein Gravity,” JHEP, vol. 08, p. 113, 2015.
* [97] D. I. Kaiser, “Conformal Transformations with Multiple Scalar Fields,” Phys. Rev. D, vol. 81, p. 084044, 2010.
* [98] A. Bonanno and M. Reuter, “Quantum gravity effects near the null black hole singularity,” Phys. Rev. D, vol. 60, p. 084011, 1999.
* [99] A. Bonanno and M. Reuter, “Spacetime structure of an evaporating black hole in quantum gravity,” Phys. Rev. D, vol. 73, p. 083005, 2006.
* [100] C. Pagani and M. Reuter, “Finite Entanglement Entropy in Asymptotically Safe Quantum Gravity,” JHEP, vol. 07, p. 039, 2018.
* [101] A. Bonanno and M. Reuter, “Proper time flow equation for gravity,” JHEP, vol. 02, p. 035, 2005.
* [102] O. Svendsen, H. Bazrafshan Moghaddam, and R. Brandenberger, “Preheating in an Asymptotically Safe Quantum Field Theory,” Phys. Rev. D, vol. 94, no. 8, p. 083527, 2016.
# Regressive Side Effects of Training
Language Models to Mimic Student Misconceptions
Shashank Sonkar, Naiming Liu, Richard G. Baraniuk
Rice University, Houston, TX
###### Abstract
This paper presents a novel exploration into the regressive side effects of
training Large Language Models (LLMs) to mimic student misconceptions for
personalized education. We highlight the problem that as LLMs are trained to
more accurately mimic student misconceptions, there is a compromise in the
factual integrity and reasoning ability of the models. Our work involved
training an LLM on a student-tutor dialogue dataset to predict student
responses. The results demonstrated a decrease in the model’s performance
across multiple benchmark datasets, including the ARC reasoning challenge and
TruthfulQA, which evaluates the truthfulness of the model’s generated
responses. Furthermore, the HaluEval Dial dataset, used for hallucination
detection, and MemoTrap, a memory-based task dataset, also showed a decline in
model accuracy. To combat these side effects, we introduced a “hallucination token”
technique. This token, appended at the beginning of each student response
during training, instructs the model to switch between mimicking student
misconceptions and providing factually accurate responses. Despite the
significant improvement across all datasets, the technique does not completely
restore the LLM’s baseline performance, indicating the need for further
research in this area. This paper contributes to the ongoing discussion on the
use of LLMs for student modeling, emphasizing the need for a balance between
personalized education and factual accuracy.
## 1 Introduction
Personalized education, an approach that caters to the unique learning needs
of individuals, is increasingly becoming a key aspiration in educational
technology [1, 2, 3]. With the advent of advanced Large Language Models
(LLMs), this aspiration is inching closer to reality. LLMs, such as Llama [4]
and GPT [5, 6] models, are playing a pivotal role in this domain,
demonstrating significant potential in various applications, including the
simulation of student behavior [7] and learning patterns [8]. However, the
road to leveraging LLMs for personalized education is challenging [9, 10, 11].
In this paper, we have identified regressive side effects arising from
training LLMs to mimic student behavior. We find that training LLMs to
replicate student misconceptions accurately is a double-edged sword. On one
hand, it creates a model that can mimic student behavior, making it a
potentially effective tool for personalized learning. On the other hand, it
significantly compromises the model’s factual integrity and reasoning ability.
These regressive side effects are a critical issue, as the primary role of any
educational model is to provide accurate and reliable information.
Figure 1: This figure illustrates a typical student-tutor interaction from the
CLASS [12] dataset. Unlike the CLASS methodology, which focuses on training a
tutor model, our study trains a ‘student model’, with the LLM predicting
student responses. This approach is motivated by the potential of personalized
education, where understanding and mimicking student behavior can lead to more
effective learning interventions. However, this approach, while effectively
mimicking student misconceptions, leads to regressive side effects such as
compromising the model’s factual integrity and diminishing its reasoning
abilities. The conversation shown here exemplifies this issue, where the
student’s response, while partially correct, contains misconceptions. To
mitigate these side effects, we introduce hallucination tokens ([hal] and
[/hal]) appended to student responses during training. These tokens instruct
the model to switch between mimicking student misconceptions and providing
factually accurate responses. Despite significant improvements, the technique
does not fully restore the model’s baseline performance, highlighting the
complexity of the issue and the need for further research.
To investigate this issue further, we conducted a comprehensive exploration
involving training LLMs on a student-tutor dialogue dataset. This dataset,
derived from the CLASS framework [12, 13, 2], comprises dialogues on biology
questions sourced from college-level textbooks. An example of the student-
tutor interaction from the dataset is illustrated in figure 1. It provides a
realistic representation of student learning patterns, featuring student
misconceptions and the tutor’s rectifications.
We used the dataset to train the latest Vicuna models (7B and 13B) [14],
state-of-the-art Llama [4] variants, to mimic student responses. However, the
training process significantly decreased the model’s performance across
various benchmark datasets, including the ARC reasoning challenge [15],
TruthfulQA [16], Hallucination Evaluation Dialogue [17], and MemoTrap [18]. We
present a detailed analysis across nine key benchmarks using the Eleuther AI
Language Model Evaluation Harness [19], a widely used framework [20] which
provides a thorough and fair assessment of generative models across a spectrum
of reasoning and general knowledge tasks.
To further understand the regressive side effects, we conducted a control
experiment to compare the model trained to predict tutor responses versus one
trained to predict student responses. The results showed that training the LLM
on tutor responses did not lead to the performance decline observed when
mimicking student responses. This trend highlights that the regressive side
effects are a unique challenge specific to training LLMs to replicate student
misconceptions.
To counteract the side effects, we propose to incorporate novel start and end
hallucination tokens ([hal] and [/hal]) into the LLM training process. These
tokens, placed at the beginning and end of each student response, serve as
cues to the model, instructing it when to differentiate between providing
accurate responses and replicating student misconceptions. Our results
indicate a substantial improvement in the model’s performance across all
datasets after introducing this token. However, these tokens do not fully
restore the model’s baseline performance, underscoring the complexity of the
issue.
Through our research, we make the following critical contributions
in the realm of personalized education leveraging LLMs:
1. We have uncovered and thoroughly investigated regressive side effects in
the LLMs trained for student modeling. This research highlights the
paradoxical challenge when LLMs are trained to mimic student misconceptions,
potentially compromising their factual integrity and reasoning ability.
2. We have proposed hallucination tokens to mitigate these regressive
effects. These tokens, added to the training process, instruct the LLMs to
differentiate between mimicking student misconceptions and providing factually
accurate responses, substantially improving the model’s performance.
3. Despite the improvements achieved with the hallucination tokens, our
research indicates that they do not fully counteract the regressive side
effects. This points to the complexity of this issue and underscores the need
for further research in this area.
Our research marks a significant step towards understanding the complexities
of using LLMs for student modeling. The findings and contributions of this
study will fuel further exploration and innovation in this domain, ultimately
refining the use of LLMs in personalized learning environments.
## 2 Related Work
The intersection of artificial intelligence and education has been an area of
active research, with a focus on developing systems that can adapt to and
support individual learners. Our work touches upon several research domains,
including student modeling, the design of intelligent tutoring systems, and
the deployment of Large Language Models (LLMs) in educational contexts.
### 2.1 Student Modeling
Student modeling has long been the cornerstone of personalized learning, with
early attempts using rule-based and Bayesian systems to predict student
knowledge and behaviors [21]. Recent advancements have shifted towards
utilizing machine learning to create more sophisticated models that can adapt
to student learning patterns over time [22, 7]. Our work builds upon these
foundations by exploring how LLMs can simulate not only the knowledge but also
the typical errors and misconceptions students have during the learning
process.
### 2.2 Intelligent Tutoring Systems (ITS)
Intelligent tutoring systems have been designed to provide immediate and
personalized instruction or feedback to learners without human intervention
[23]. The application of LLMs in ITS presents a novel opportunity to create
systems that can engage in more natural and meaningful dialogues with students
[3, 12]. Our approach diverges from traditional ITS by focusing on the
intentional generation of errors to mimic a student’s learning trajectory,
rather than solely providing expert-level instructions [24].
### 2.3 Large Language Models in Education
The use of LLMs like GPT [5] in education is a relatively new but rapidly
growing field of study [25]. These models have been employed for various
educational purposes, from generating educational content to serving as
conversational agents [26, 12]. However, the challenge of ensuring the
truthfulness and reliability of the information provided by LLMs is a
recurring concern [27]. Our research contributes to this dialogue by
investigating the impact of training LLMs to produce student-like errors and
proposing a novel ‘hallucination token’ to manage this trade-off.
### 2.4 Truthfulness and Reliability in AI
The TruthfulQA benchmark has been instrumental in highlighting the issues of
truthfulness in AI-generated content [28]. The ARC challenge further
emphasizes the complexity of reasoning required from AI systems beyond simple
fact retrieval [29]. Our work is aligned with these challenges, as we seek to
understand and improve the truthfulness and reasoning capacity of LLMs when
they are trained to replicate student behaviors.
In conclusion, our study intersects with and contributes to the existing body
of work in these areas by addressing the unique challenge of training LLMs to
authentically mimic student learning processes, including the generation of
errors. Our introduction of the “hallucination token” represents a step
forward in this domain, suggesting a new direction for future research and
development.
## 3 Methodology
Our methodology is divided into three main parts: data preparation, model
training, and the incorporation of hallucination tokens.
### 3.1 Data Preparation
The first step in our methodology involves preparing the dataset for training
the LLMs. We denote the conversation dataset as $\mathcal{D}$, which consists
of ordered pairs of tutor-student conversational turns:
$\mathcal{D}=\{(\mathbf{x}_{1},\mathbf{y}_{1}),(\mathbf{x}_{2},\mathbf{y}_{2}),\ldots,(\mathbf{x}_{N},\mathbf{y}_{N})\}$,
where $N$ is the total number of conversational turns. Each $\mathbf{x}$
represents a sequence of tutor utterances, and each corresponding $\mathbf{y}$
represents the student response.
The dataset is derived from the CLASS framework [12], which provides a
realistic representation of student learning patterns, featuring student
misconceptions and the tutor’s rectifications. This dataset provides a rich
source of student-tutor dialogues on biology questions sourced from college
textbooks.
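A minimal sketch of this pairing step is given below; the dialogue field names
("role", "text") are illustrative assumptions rather than the actual CLASS
schema.

```python
# Build the pairs D = {(x_i, y_i)} from CLASS-style dialogues. The field
# names ("role", "text") are illustrative, not the actual CLASS schema.

def build_pairs(conversations):
    """conversations: list of dialogues, each a list of turns like
    {"role": "tutor" | "student", "text": "..."}.
    Returns (context, student_response) pairs: x_i is the dialogue so far,
    y_i the student's reply."""
    pairs = []
    for turns in conversations:
        history = []
        for turn in turns:
            if turn["role"] == "student" and history:
                pairs.append(("\n".join(history), turn["text"]))
            history.append(f"{turn['role']}: {turn['text']}")
    return pairs
```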
### 3.2 Model Training
The second step in our methodology involves training LLMs. The LLMs are
designed to predict the next utterance given the previous conversational
context. Unlike traditional approaches that focus on the correct responses
typically output by a tutoring system, our model centers on student outputs,
which may possess a mix of correctness and misconceptions.
For an input sequence $\mathbf{x}_{i}$, the LLM aims to generate an output
sequence $\hat{\mathbf{y}}_{i}$ that resembles a student’s response. The
language modeling loss for a single data pair is defined by the negative log
likelihood:
$\mathcal{L}(\mathbf{y}_{i},\hat{\mathbf{y}}_{i})=-\sum_{t=1}^{|\mathbf{y}_{i}|}\log p\left(y_{i,t}\middle|\mathbf{x}_{i},\mathbf{y}_{i,<t};\theta\right)$
where $\mathbf{y}_{i,<t}$ indicates the tokens in the true response preceding
the current token $y_{i,t}$, and $\theta$ encapsulates the parameters of the
LLM. The overall training loss is the sum over the entire dataset:
$\mathcal{L}_{\text{total}}=\sum_{i=1}^{N}\mathcal{L}\left(\mathbf{y}_{i},\hat{\mathbf{y}}_{i}\right)$
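This loss corresponds to a standard causal language-modeling objective in
which the tutor context is masked out. A minimal PyTorch sketch, assuming a
Hugging Face-style causal LM and tokenizer (placeholders, not the paper's
exact training code), is:

```python
import torch
import torch.nn.functional as F

# Per-pair loss L(y_i, y_hat_i): a causal-LM negative log-likelihood over the
# student-response tokens only, with the tutor context x_i masked out via the
# ignore label -100. Special-token edge cases are omitted for brevity.

def student_nll(model, tokenizer, context, student_response, device="cpu"):
    ctx = tokenizer(context, return_tensors="pt").input_ids
    resp = tokenizer(student_response, return_tensors="pt",
                     add_special_tokens=False).input_ids
    input_ids = torch.cat([ctx, resp], dim=1).to(device)

    labels = input_ids.clone()
    labels[:, : ctx.shape[1]] = -100        # context tokens carry no loss

    logits = model(input_ids).logits
    shift_logits = logits[:, :-1, :]        # predict token t from tokens < t
    shift_labels = labels[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```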
### 3.3 Incorporation of Hallucination Tokens
The third step in our methodology involves the incorporation of hallucination
tokens. To enhance the LLM’s ability to generate responses that simulate
student behaviors, including providing incorrect or uncertain information, we
introduce hallucination token markers. Each student response in the dataset is
enriched with these markers to indicate the beginning and the end of the
potentially inaccurate content.
Let $\mathbf{y}_{i}$ be an original student response sequence from the
dataset. The augmented student response $\tilde{\mathbf{y}}_{i}$ used for
training is constructed by prepending and appending hallucination tokens [hal]
and [/hal], respectively:
$\tilde{\mathbf{y}}_{i}=\left[\texttt{[hal]},\mathbf{y}_{i,1},\mathbf{y}_{i,2},\ldots,\mathbf{y}_{i,|\mathbf{y}_{i}|},\texttt{[/hal]}\right]$
In the modified training regime, the LLM predicts the sequence
$\hat{\mathbf{y}}_{i}$ such that it learns to include these tokens,
effectively grasping the context of student uncertainty or errors. These
tokens serve as cues to the model, instructing it when to differentiate
between providing accurate responses and replicating student misconceptions.
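A minimal sketch of this augmentation, following common Hugging Face practice
for registering special tokens (an assumption; the paper does not specify its
implementation), is:

```python
# Wrap each student response y_i in [hal] ... [/hal] and register the two
# markers as special tokens so they are learned as single units.

HAL_OPEN, HAL_CLOSE = "[hal]", "[/hal]"

def add_hal_tokens(tokenizer, model):
    tokenizer.add_special_tokens(
        {"additional_special_tokens": [HAL_OPEN, HAL_CLOSE]}
    )
    model.resize_token_embeddings(len(tokenizer))  # embeddings for new tokens

def augment(student_response):
    # y_tilde_i = [hal] y_i [/hal]
    return f"{HAL_OPEN} {student_response} {HAL_CLOSE}"
```

At generation time, prompting with or without the opening marker can then be
used to steer the model between mimicking a student and answering factually,
which is the switching behavior described above.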
## 4 Experiments and Discussion
Table 1: Performance of Vicuna models on TruthfulQA tasks. The table compares
the performance of the original vicuna model, the control model trained to
mimic tutor responses in biology (tutor), the model trained to mimic student
responses (student), and the model trained with hallucination tokens
(student-hal). The results are presented for three different settings: MC1,
MC2, and Generation. MC1 refers to a setting where there is only one correct
answer to a question, while MC2 refers to a setting where there are multiple
correct answers. For these settings, the performance is measured in terms of
accuracy. The generation setting involves the model generating 1-2 sentence
answers, with performance evaluated using BLEU and ROUGE scores. The results
highlight the significant drop in performance when the model is trained to
mimic student responses, demonstrating a regressive side effect in terms of
truthfulness. However, the substantial recovery in performance with the
introduction of hallucination tokens suggests a promising strategy to mitigate
these regressive effects.
| Model | MC1 Accuracy (Single-true) | MC2 Accuracy (Multi-true) | Generation BLEU | Generation ROUGE (unigram) | Generation ROUGE (bigram) | Generation ROUGE (LCS) |
|---|---|---|---|---|---|---|
| vicuna-7b-v1.5 | 32.93 | 50.37 | 49.69 | 51.41 | 45.90 | 50.55 |
| tutor-7b | 34.64 | 52.43 | 42.72 | 47.12 | 37.94 | 45.29 |
| student-7b | 23.75 | 36.14 | 24.60 | 29.74 | 14.32 | 28.89 |
| student-hal-7b | 29.25 | 44.68 | 43.94 | 47.61 | 36.47 | 45.53 |
| vicuna-13b-v1.5 | 35.01 | 50.87 | 47.12 | 50.18 | 44.92 | 49.08 |
| tutor-13b | 34.76 | 52.20 | 42.84 | 48.71 | 38.80 | 46.76 |
| student-13b | 22.15 | 33.93 | 15.18 | 18.12 | 6.12 | 17.75 |
| student-hal-13b | 27.91 | 41.46 | 39.29 | 42.35 | 33.66 | 42.96 |
In this section, we present our experimental methodology and discuss the
findings in detail. The experiments were designed to explore the regressive
side effects of training LLMs to mimic student behavior and to assess the
effectiveness of our proposed hallucination tokens in mitigating these
effects.
### 4.1 Experimental Setup
We trained the Vicuna 7B and 13B models [14], among the best open-source
LLMs, on a student-tutor dialogue dataset derived from the CLASS [12]
framework. This dataset, which provides a realistic representation of student
learning patterns, misconceptions, and the tutor’s rectifications, was used to
fine-tune the models to generate outputs that mimic student dialogue. The
dataset contains 648 conversations, amounting to a total of 20K student-tutor
interactions. The average conversation length is around 400 words, counting
only the student and tutor fields in the conversation template.
The models were evaluated across seven key benchmarks using the Eleuther AI
Language Model Evaluation Harness [19]. These benchmarks include the
TruthfulQA [16], ARC [15], HellaSwag [30], Winogrande [31], MMLU [32],
HaluEval Dialogue [17], and MemoTrap [18]. Each of these benchmarks tests
different aspects of the model’s performance, including its truthfulness,
reasoning abilities, ability to recognize hallucinations, and memory-based
task performance.
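As an illustration, a fine-tuned checkpoint could be scored with the harness
roughly as follows; the `simple_evaluate` entry point corresponds to recent
(v0.4.x) releases of the harness (an assumption about the version used), the
checkpoint path is a placeholder, and the few-shot counts mirror Table 2.

```python
# Hedged sketch of scoring a checkpoint with the Eleuther AI
# lm-evaluation-harness. One call per task, since few-shot counts differ.
import lm_eval

for task, shots in [("arc_challenge", 25), ("hellaswag", 10),
                    ("winogrande", 5), ("truthfulqa_mc2", 6)]:
    out = lm_eval.simple_evaluate(
        model="hf",                                # Hugging Face causal-LM backend
        model_args="pretrained=./student-hal-7b",  # placeholder checkpoint path
        tasks=[task],
        num_fewshot=shots,
    )
    print(task, out["results"][task])
```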
### 4.2 In-depth Analysis: TruthfulQA
In the realm of educational technology, the veracity of information provided
by a model is of paramount importance. Misinformation or misconceptions can
lead to significant learning detriments, making the truthfulness of a model’s
responses a critical factor in its effectiveness as an educational tool.
Therefore, we chose to conduct an in-depth analysis of our models’ performance
on the TruthfulQA benchmark.
TruthfulQA is a benchmark specifically designed to measure the truthfulness of
a language model’s responses across a wide range of categories. It tests the
model’s ability to avoid generating false answers learned from imitating human
texts, a challenge that is particularly relevant to our study. Given the
importance of truthfulness in educational contexts and the unique challenges
posed by training models to mimic student misconceptions, we believe that a
rigorous analysis of our models’ performance on TruthfulQA is warranted.
In this section, we present our findings from the TruthfulQA benchmark,
exploring the impact of training models to mimic student behavior and the
effectiveness of our proposed hallucination tokens in mitigating any negative
effects. We delve into the results from the multiple-choice and generation
tasks within TruthfulQA, providing a comprehensive view of our models’
truthfulness in different contexts.
TruthfulQA Multiple-Choice Setting 1 (MC1) Findings. In the first multiple-
choice setting, where there is a single correct label, the student-7b model’s
accuracy decreased by over 9 points (from 32.93% to 23.75%) compared to the vicuna-7b model. However, the
introduction of hallucination tokens led to a significant recovery in
performance. This finding is particularly relevant in the context of
education, where maintaining the truthfulness of responses is crucial. The
improvement with hallucination tokens suggests that it is possible to train
models that can both simulate student behavior and adhere to factual accuracy,
a key consideration for deploying LLMs in educational settings.
TruthfulQA Multiple-Choice Setting 2 (MC2) Findings. In the second multiple-
choice setting, where multiple correct labels are possible, we observed a
similar trend to the MC1 setting. The student-7b model experienced a
significant drop in accuracy, from 50.37% in the vicuna-7b model to 36.14%
when trained to mimic student responses. However, the introduction of
hallucination tokens led to a notable improvement in performance, with the
student-7b model’s accuracy recovering to 44.68%.
This recovery is particularly relevant in the context of education, where
multiple perspectives or answers might be correct. The ability of the model to
navigate such complexities while maintaining truthfulness is crucial.
TruthfulQA Generation Findings. For the TruthfulQA generation task, where the
model is tasked with generating 1-2 sentence answers, we employed ROUGE scores
to evaluate performance due to the generative nature of the task. The
student-7b model saw a significant decrease in ROUGE scores, from 51.41 in the
vicuna-7b model to 29.74, indicating a substantial loss in the ability to
generate truthful, relevant responses. However, the introduction of
hallucination tokens led to a significant recovery in performance, with ROUGE
scores improving to 47.61.
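For reference, such ROUGE scores (unigram, bigram, and LCS overlap) can be
computed with Google's `rouge-score` package; whether the paper used this
exact package is an assumption, and the strings below are illustrative.

```python
# Compute ROUGE-1/2/L (unigram, bigram, LCS) for a generated answer against a
# reference using the rouge-score package (pip install rouge-score). Whether
# the paper used this package is an assumption; the strings are illustrative.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

reference = "Mitochondria produce ATP through cellular respiration."
prediction = "Mitochondria make ATP via cellular respiration."

scores = scorer.score(reference, prediction)
for name, s in scores.items():
    # each entry carries precision, recall, and F-measure
    print(name, round(s.fmeasure * 100, 2))
```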
This finding is crucial for educational technology as LLMs are increasingly
used as generative agents to create educational content, provide explanations,
and engage in dialogue with students. The ability to generate truthful,
accurate responses is fundamental to their utility in these contexts. The
recovery observed with hallucination tokens highlights their potential to
enable LLMs to simulate student misconceptions for personalized learning
without sacrificing the quality and truthfulness of their output.
Table 2: Comparative performance of Large Language Models (LLMs) on various
benchmarks before and after the introduction of hallucination tokens, with a
control experiment involving tutor models. The table presents the performance
of Vicuna 7B and 13B models across seven key benchmarks: ARC Reasoning, Hallucination
Evaluation Dialogue (HaluDial), Hallucination Memorization Trap (MemoTrap),
TruthfulQA (TQA), HellaSwag (HSwag), MMLU, and Winogrande (WinoG). The numbers
in parentheses (e.g., 25-S in ARC) represent the number of few-shot examples
provided to the model during evaluation. The performance is measured in terms
of accuracy percentage. The table compares the performance of the original
vicuna models, tutor models, student models, and student models trained with
hallucination tokens (student-hal). The results highlight the significant drop
in performance when the model is trained to mimic student responses,
demonstrating regressive side effects across multiple tasks. However, the
introduction of hallucination tokens leads to a substantial recovery in
performance across all benchmarks, underscoring their potential in mitigating
these regressive effects.
| Model | Avg | ARC (25-S) | HaluDial (0-S) | MemoTrap (0-S) | TQA (6-S) | HSwag (10-S) | MMLU (5-S) | WinoG (5-S) |
|---|---|---|---|---|---|---|---|---|
| vicuna-7b-v1.5 | 60.8 | 53.24 | 69.08 | 68.48 | 50.34 | 77.39 | 51.04 | 72.14 |
| tutor-7b | 61.0 | 52.13 | 68.81 | 69.23 | 52.3 | 78.07 | 51.32 | 71.19 |
| student-7b | 55.4 | 40.61 | 65.39 | 65.28 | 36.87 | 76.72 | 50.77 | 71.9 |
| student-hal-7b | 58.0 | 45.48 | 70.73 | 66.88 | 44.83 | 77.21 | 51.54 | 72.03 |
| vicuna-13b-v1.5 | 64.2 | 57.08 | 73.78 | 67.2 | 51.51 | 81.24 | 56.67 | 74.66 |
| tutor-13b | 64.7 | 57.34 | 73.92 | 66.13 | 52.99 | 81.51 | 57.02 | 74.35 |
| student-13b | 58.2 | 46.5 | 66.97 | 65.81 | 35.0 | 80.36 | 57.06 | 72.22 |
| student-hal-13b | 60.3 | 48.63 | 72.98 | 66.13 | 42.75 | 80.28 | 56.4 | 73.16 |
### 4.3 Benchmark Evaluation
Following the exploration of TruthfulQA settings, we delve into the
performance of our models across a broader range of benchmarks as detailed in
Table 2. These benchmarks—ARC, HaluEval Dial, MemoTrap, MMLU, HellaSwag, and
Winogrande—offer a comprehensive view of the models’ capabilities in
reasoning, detecting hallucinations, avoiding memorization traps, and
understanding commonsense, respectively.
AI2 Reasoning Challenge (ARC) Findings. ARC serves as a rigorous benchmark to
evaluate a model’s reasoning capabilities through a set of grade-school
science questions. These questions are designed to test not just the factual
knowledge of the models but also their ability to apply this knowledge in
reasoning through complex, multi-step problems. The ARC dataset is
particularly relevant in educational contexts as it mirrors the type of
critical thinking and problem-solving skills students are expected to develop.
In our experiments, the performance of models trained to mimic student
responses on the ARC benchmark experienced a notable decline. Specifically,
the vicuna-7b model saw its accuracy decrease from 53.24% to 40.61% when
trained on student dialogues. This significant drop in performance highlights
a critical concern: training LLMs to replicate student behavior, including
misconceptions, can severely impair their reasoning abilities.
However, our introduction of hallucination tokens into the training process
presents a silver lining. Our approach led to a partial recovery in the ARC
performance, with accuracy improving to 45.48%. While this does not fully
restore the model’s baseline performance, it represents a significant step
towards mitigating the regressive side effects of training LLMs on student
data.
Hallucination Evaluation (HaluEval) Dialogue Findings. The HaluEval Dial
benchmark is designed to assess a model’s ability to recognize and avoid
hallucinations in generated responses, particularly in the context of
knowledge grounded dialogue tasks. Hallucinations in this context refer to the
model generating information that is not supported by the input data or
general knowledge, a critical issue when models are used in educational
settings where accuracy is paramount. Our findings indicate that training
models to mimic student responses led to a decrease in performance on the
HaluEval Dial benchmark. Specifically, the vicuna-7b model saw its accuracy
drop from 69.08% to 65.39%. However, the introduction of hallucination tokens
demonstrated a remarkable ability to counteract this effect, with the
student-7b model’s accuracy improving to 70.73%.
Memorization Traps (MemoTrap) Findings. MemoTrap is a benchmark designed to
test whether language models can avoid memorization traps by prompting them to
complete well-known proverbs with endings that deviate from the commonly used
ones. This benchmark is particularly relevant for evaluating a model’s ability
to generate creative and contextually appropriate responses rather than
relying on rote memorization.
In our experiments, training models to mimic student responses resulted in a
decrease in performance on the MemoTrap benchmark. The vicuna-7b model’s
accuracy decreased from 68.48% to 65.28%, indicating that training on student
dialogues might encourage the model to rely more on memorization rather than
understanding and applying knowledge flexibly. The introduction of
hallucination tokens led to a slight improvement, with accuracy increasing to
66.88%.
MMLU, HellaSwag, and Winogrande Findings. The performance of models on the
MMLU, HellaSwag, and Winogrande benchmarks remained relatively stable,
regardless of whether they were trained to mimic tutor or student responses.
The nuanced impact observed in other benchmarks underscores the importance of
carefully considering the training data and methodologies used when developing
LLMs for educational purposes. The introduction of hallucination tokens
emerges as a promising strategy for mitigating some of the regressive side
effects associated with training models to mimic student behavior, ensuring
that they can still serve as effective tools for personalized learning without
compromising on factual accuracy or reasoning capabilities.
### 4.4 Control Models: Tutor Models
To further understand the regressive side effects of training LLMs to mimic
student behavior, we conducted a control experiment by training models to
predict tutor responses. This experiment aimed to compare the performance of
models trained to predict tutor responses versus those trained to predict
student responses. The tutor models were trained using the same student-tutor
dialogue dataset derived from the CLASS framework [12]. However, instead of
training the models to mimic student responses, we trained them to predict the
responses of the tutor. Our findings, as shown in Table 2, revealed that
training the LLMs on tutor responses did not lead to the same performance
decline observed when mimicking student responses. This result underscores
that the regressive side effects are a unique challenge specific to training
LLMs to replicate student misconceptions.
## 5 Conclusion
In this study, we have delved into the challenges of training LLMs to mimic
student behavior, with a particular focus on the regressive side effects that
emerge in this process. Our findings reveal a complex paradox: as LLMs become
more adept at replicating student misconceptions, they tend to compromise on
their factual integrity and reasoning ability. Our experiments demonstrated a
notable decrease in the model’s performance across various key benchmark
datasets like ARC Reasoning Challenge and TruthfulQA. To mitigate these
regressive side effects, we introduced a novel technique involving the use of
hallucination tokens during the training process. Our results indicate that
the introduction of these tokens leads to a substantial improvement in the
model’s performance across all datasets. However, it’s important to note that
despite the significant improvements achieved with the hallucination tokens,
they do not fully restore the model’s baseline performance. This outcome
underscores the complexity of the problem and highlights the need for a more
nuanced approach when training LLMs to mimic student behavior. While we have
made some strides in addressing the regressive side effects, our work is just
the beginning. We believe that our findings will pave the way for further
research in this domain, ultimately contributing to the refinement of LLMs in
personalized learning environments.
## 6 Ethics
The ethical implications of training models on incorrect data are profound and
demand conscientious exploration in future work. As our models find
application in real-world educational settings, the delineation between the
effective simulation of student behaviors and the propagation of
misinformation will need to be continually assessed and refined. Our research
has thus laid the groundwork for a new pedagogical paradigm, where AI becomes
a symbiotic partner in the complex choreography of learning and teaching.
## Acknowledgments
This work was supported by NSF grants 1842378, ONR grant N0014-20-1-2534,
AFOSR grant FA9550-22-1-0060, and a Vannevar Bush Faculty Fellowship, ONR
grant N00014-18-1-2047.
## References
* [1] Sankalan Pal Chowdhury, Vilém Zouhar, and Mrinmaya Sachan. Scaling the authoring of autotutors with large language models. arXiv preprint arXiv:2402.09216, 2024.
* [2] Shashank Sonkar, Kangqi Ni, Sapana Chaudhary, and Richard G Baraniuk. Pedagogical alignment of large language models. arXiv preprint arXiv:2402.05000, 2024.
* [3] Robin Schmucker, Meng Xia, Amos Azaria, and Tom Mitchell. Ruffle&riley: Towards the automated induction of conversational tutoring systems. arXiv preprint arXiv:2310.01420, 2023.
* [4] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
* [5] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.
* [6] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
* [7] Naiming Liu, Zichao Wang, Richard Baraniuk, and Andrew Lan. Open-ended knowledge tracing for computer science education. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3849–3862, 2022.
* [8] Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. Chatgpt for good? on opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274, 2023.
* [9] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654, 2023.
* [10] Shashank Sonkar and Richard G. Baraniuk. Deduction under perturbed evidence: Probing student simulation (knowledge tracing) capabilities of large language models. CEUR Workshop Proceedings, 3487:26–33, 2023.
* [11] Allison Wang, Ethan Prihar, and Neil Heffernan. Assessing the quality of large language models in generating mathematics explanations. Learning@Scale (L@S’23), July 20–22, 2023, Copenhagen, Denmark, 2023.
* [12] Shashank Sonkar, Naiming Liu, Debshila Mallick, and Richard Baraniuk. CLASS: A Design Framework for Building Intelligent Tutoring Systems Based on Learning Science principles. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1941–1961, Singapore, December 2023.
* [13] Shashank Sonkar, Xinghe Chen, Myco Le, Naiming Liu, Debshila Basu Mallick, and Richard Baraniuk. Code soliloquies for accurate calculations in large language models. In Proceedings of the 14th Learning Analytics and Knowledge Conference, pages 828–835, 2024.
* [14] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.
* [15] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018.
* [16] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods, 2022.
* [17] Junyi Li, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. HaluEval: A large-scale hallucination evaluation benchmark for large language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6449–6464, Singapore, December 2023. Association for Computational Linguistics.
* [18] Ian McKenzie, Alexander Lyzhov, Alicia Parrish, Ameya Prabhu, Aaron Mueller, Najoung Kim, Sam Bowman, and Ethan Perez. The inverse scaling prize, 2022.
* [19] Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, December 2023.
* [20] Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. Open llm leaderboard, 2023.
* [21] Martha C Polson and J Jeffrey Richardson. Foundations of intelligent tutoring systems. Psychology Press, 2013.
* [22] Ryan SJD Baker, Kalina Yacef, et al. The state of educational data mining in 2009: A review and future visions. Journal of educational data mining, 1(1):3–17, 2009.
* [23] Beverly Park Woolf. Building intelligent interactive tutors: Student-centered strategies for revolutionizing e-learning. Morgan Kaufmann, 2010.
* [24] Kurt VanLehn. The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational psychologist, 46(4):197–221, 2011.
* [25] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
* [26] Neil T Heffernan and Cristina Lindquist Heffernan. The assistments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. International Journal of Artificial Intelligence in Education, 24:470–497, 2014.
* [27] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.
* [28] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
* [29] Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam Mausam. Open information extraction: The second generation. In IJCAI, volume 11, pages 3–10, 2011.
* [30] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence?, 2019.
* [31] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WINOGRANDE: an adversarial winograd schema challenge at scale, 2019.
* [32] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. CoRR, abs/2009.03300, 2020.
|
# Exploration vs Convergence Speed
in Adaptive-bias Enhanced Sampling
Michele Invernizzi, Freie Universität Berlin, 14195 Berlin, Germany
Michele Parrinello, Italian Institute of Technology, 16163 Genova, Italy
###### Abstract
In adaptive-bias enhanced sampling methods, a bias potential is added to the
system to drive transitions between metastable states. The bias potential is a
function of a few collective variables and is gradually modified according to
the underlying free energy surface. We show that when the collective variables
are suboptimal, there is an exploration-convergence tradeoff, and one must
choose between a quickly converging bias that will lead to fewer transitions,
or a slower to converge bias that can explore the phase space more efficiently
but might require a much longer time to produce an accurate free energy
estimate. The recently proposed On-the-fly Probability Enhanced Sampling
(OPES) method focuses on fast convergence, but there are cases where fast
exploration is preferred instead. For this reason, we introduce a new variant
of the OPES method that focuses on quickly escaping metastable states, at the
expense of convergence speed. We illustrate the benefits of this approach on
prototypical systems and show that it outperforms the popular metadynamics
method.
###### keywords:
opes, metadynamics, umbrella sampling, collective variables, free energy
## 1 Introduction
Molecular dynamics has become a valuable tool in the study of a variety of
phenomena in physics, chemistry, biology, and materials science. One of the
long-standing challenges in this important field is the sampling of rare
events, such as chemical reactions or conformational changes in biomolecules.
To simulate effectively such systems, many enhanced sampling methods have been
developed. An important class of such methods is based on an adaptive-bias
approach and includes adaptive umbrella sampling1, metadynamics (MetaD) 2, 3,
and the recently developed on-the-fly probability enhanced sampling (OPES)4,
5, 6. Adaptive-bias methods operate by adding to the system’s energy
$U(\mathbf{R})$ an external bias potential $V=V(\mathbf{s})$, that is a
function of a set of collective variables (CVs), $\mathbf{s}$. The CVs,
$s=s(\mathbf{R})$, depend on the atomic coordinates $\mathbf{R}$ and are meant
to describe the slow modes associated with the rare event under study. They
also define a free energy surface (FES), $F(\mathbf{s})=-\frac{1}{\beta}\log
P(\mathbf{s})$, where $\beta=(k_{B}T)^{-1}$ is the inverse temperature
and $P(\mathbf{s})$ the marginal $\mathbf{s}$ distribution,
$P(\mathbf{s})\propto\int e^{-\beta
U(\mathbf{R})}\delta[\mathbf{s}-\mathbf{s}(\mathbf{R})]\,d\mathbf{R}$. The
bias is periodically updated until it converges to a chosen form. A popular
choice is to have it exactly offset the underlying FES,
$V(\mathbf{s})=-F(\mathbf{s})$, so that the resulting $\mathbf{s}$
distribution is uniform.
The main limitation of adaptive-bias methods is that finding good collective
variables is sometimes difficult and a bad choice of CVs might not promote the
desired transitions in an affordable computer time. In practical applications
one generally has to live with suboptimal CVs7 that still can drive
transitions, but do not include some of the slow modes. In this case, applying
a static bias cannot speed up the slow modes that are not accounted for, and
thus transitions remain quite infrequent. It is sometimes possible to achieve
a faster transition rate by using a rapidly changing bias, which can push the
system out of a metastable state through a high free energy pathway, different
from the energetically favoured one. However, unless one wishes to deal
explicitly with out-of-equilibrium statistics8, 9, 10, it is not possible to
obtain reliable information about the system while the bias changes in a non-
adiabatic fashion. To estimate the FES and other observables one must let the
adaptive-bias method approach convergence, and as the bias becomes quasi-
static, transitions inevitably become less frequent.
We refer to this situation as an exploration-convergence tradeoff that every
adaptive-bias enhanced sampling method has to deal with, when suboptimal CVs
are used. Some methods, like OPES, focus more on quickly converging to a
quasi-static bias potential and thus obtaining an efficiently reweighted FES,
while others, like metadynamics, focus more on escaping metastable states and
exploring the phase space. We will demonstrate this qualitative difference on
some prototypical systems. For simplicity, in the paper we only consider the
well-tempered variant of metadynamics3, but in the Supporting Information (SI)
we provide examples that use the original non-tempered MetaD2 and other
popular variants, such as parallel-bias MetaD11.
We propose here a variant of OPES, named OPES-explore, that focuses on rapid
exploration, rather than on fast convergence. It shares many features with the
original OPES, and is designed to be an easy-to-use tool requiring few input
parameters. To this end, we also introduce an adaptive bandwidth algorithm
that can be used in both OPES variants, and further reduces the number of
input parameters that need to be specified. The detailed description of the
adaptive bandwidth algorithm is left to the SI. All OPES simulations presented
make use of this algorithm.
## 2 The OPES method
The enhanced sampling method OPES works by adding an adaptive-bias potential
to the energy of the system, so as to modify the Boltzmann probability
distribution into a desired target one. Most adaptive-bias methods aim at
sampling uniformly the CV space, but it has been shown that choosing a
different target distribution could be advantageous12, 13. There are two
different classes of target distributions that can be sampled with OPES:
metadynamics-like and replica-exchange-like. We will consider here only the
former type, introduced in Ref. 4, but the interested reader can find in Ref.
5 information about OPES for replica-exchange-like sampling.
To define a metadynamics-like target distribution, one has to choose a set of
collective variables, $s=s(\mathbf{R})$. As stated in the introduction, the
unbiased marginal probability along such CVs is $P(\mathbf{s})\propto\int
e^{-\beta
U(\mathbf{R})}\delta[\mathbf{s}-\mathbf{s}(\mathbf{R})]\,d\mathbf{R}$, where
$U(\mathbf{R})$ is the potential energy. The target distribution is then
defined by requiring a specific marginal probability distribution over the
CVs, $p^{\text{tg}}(\mathbf{s})$. Consequently, the desired bias potential is
written as:
$V(\mathbf{s})=-\frac{1}{\beta}\log\frac{p^{\text{tg}}(\mathbf{s})}{P(\mathbf{s})}\,,$
(1)
so that $\int
e^{-\beta[U(\mathbf{R})+V(\mathbf{s})]}\delta[\mathbf{s}-\mathbf{s}(\mathbf{R})]\,d\mathbf{R}\propto
p^{\text{tg}}(\mathbf{s})$. A typical choice for $p^{\text{tg}}(\mathbf{s})$
is the well-tempered distribution3:
$p^{\text{WT}}(\mathbf{s})\propto[P(\mathbf{s})]^{1/\gamma}\,,$ (2)
where the bias factor $\gamma>1$ controls how much the original distribution
is smoothed out. In the limit of $\gamma=\infty$ one targets a uniform
distribution.
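Combining Eqs. (1) and (2) with the definition $F(\mathbf{s})=-\frac{1}{\beta}\log P(\mathbf{s})$ gives, up to an additive constant, the converged well-tempered bias
$V(\mathbf{s})=-\frac{1}{\beta}\log\frac{[P(\mathbf{s})]^{1/\gamma}}{P(\mathbf{s})}=-\left(1-\frac{1}{\gamma}\right)F(\mathbf{s})\,,$
which is the form used explicitly in Sec. 4.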
The core idea of OPES is to update self-consistently the estimate of the
probability distributions and of the bias potential, in an on-the-fly fashion
similar to self-healing umbrella sampling 14. The estimate of the unbiased
probability is obtained via a weighted kernel density estimation, so that at
step $n$ one has:
$P_{n}(\mathbf{s})=\frac{\sum_{k}^{n}w_{k}G(\mathbf{s},\mathbf{s}_{k})}{\sum_{k}^{n}w_{k}}\,,$
(3)
where the weights $w_{k}$ are given by $w_{k}=e^{\beta
V_{k-1}(\mathbf{s}_{k})}$, and the Gaussian kernels
$G(\mathbf{s},\mathbf{s}^{\prime})=h\exp\left[-\frac{1}{2}(\mathbf{s}-\mathbf{s}^{\prime})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{s}-\mathbf{s}^{\prime})\right]$
have a diagonal covariance matrix $\Sigma_{ij}=\sigma^{2}_{i}\delta_{ij}$ and
fixed height $h=\prod_{i}\left(\sigma_{i}\sqrt{2\pi}\right)^{-1}$. The number
of kernels to represent $P_{n}(\mathbf{s})$ would grow linearly with
simulation time, but this is avoided thanks to an on-the-fly kernel
compression algorithm15, as described in detail in the supporting information
of Ref. 4. The compression algorithm also allows for the bandwidth of the
kernels to shrink over time, as the effective sample size
$N_{\text{eff}}^{(n)}=\left(\sum_{k}^{n}w_{k}\right)^{2}/\sum_{k}^{n}w_{k}^{2}$
grows. The idea is to start with a coarse estimate of $P(\mathbf{s})$ and then
refine it as more data are available. The kernel bandwidth of the $i$-th CV at
step $n$ is:
$\sigma_{i}^{(n)}=\sigma_{i}^{(0)}[N_{\text{eff}}^{(n)}(d+2)/4]^{-1/(d+4)}\,,$
(4)
where $d$ is the total number of CVs.
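For concreteness, here is a minimal Python sketch of Eqs. (3) and (4) for a single CV. It evaluates the estimate directly on a grid and omits the on-the-fly kernel compression; all function and variable names are ours, not those of an actual implementation.

```python
import numpy as np

def opes_prob_estimate(s_grid, s_samples, v_samples, beta, sigma0):
    """Weighted kernel density estimate of Eq. (3) for a single CV,
    with the shrinking bandwidth of Eq. (4).

    v_samples[k] holds the bias V_{k-1}(s_k) that was acting when
    sample s_k was collected.
    """
    w = np.exp(beta * v_samples)             # weights w_k = exp(beta V_{k-1}(s_k))
    n_eff = w.sum()**2 / (w**2).sum()        # effective sample size N_eff
    d = 1                                    # number of CVs in this sketch
    sigma = sigma0 * (n_eff * (d + 2) / 4.0)**(-1.0 / (d + 4))  # Eq. (4)
    # Gaussian kernels with fixed height h = (sigma * sqrt(2*pi))^-1
    diff = (s_grid[:, None] - s_samples[None, :]) / sigma
    g = np.exp(-0.5 * diff**2) / (sigma * np.sqrt(2.0 * np.pi))
    return g @ w / w.sum()                   # P_n(s) evaluated on s_grid
```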
The instantaneous bias is based on the probability estimate
$P_{n}(\mathbf{s})$, following Eq. (1) and using the approximation
$p^{\text{WT}}(\mathbf{s})\propto[P_{n}(\mathbf{s})]^{1/\gamma}$, one has:
$V_{n}(\mathbf{s})=(1-1/\gamma)\frac{1}{\beta}\log\left(\frac{P_{n}(\mathbf{s})}{Z_{n}}+\epsilon\right)\,,$
(5)
where $\epsilon$ is a regularization term that limits the maximum possible
absolute value of the bias potential, and $Z_{n}$ can be understood as a
normalization of $P_{n}(\mathbf{s})$ over the CV space thus far explored,
$\Omega_{n}$:
$Z_{n}=\frac{1}{|\Omega_{n}|}\int_{\Omega_{n}}P_{n}(\mathbf{s})\,d\mathbf{s}\,.$
(6)
This integral is calculated approximately as a sum of $P_{n}$ over the
compressed kernels, as explained in the supplementary information of Ref. 4.
The intuitive idea is that new kernels are added to the compressed
representation only when a new region of CV space is sampled (otherwise they
are merged with existing ones), thus the explored CV-space volume
$|\Omega_{n}|$, is approximately proportional to the total number of
compressed kernels.
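The corresponding bias evaluation, Eqs. (5) and (6), can be sketched as follows. Here $Z_{n}$ is computed by simple quadrature over a grid covering the explored region; the actual implementation instead approximates the integral with a sum over the compressed kernels, so the grid-based shortcut is our simplification.

```python
import numpy as np

def opes_bias(p_n, s_grid, beta, gamma, epsilon):
    """Bias V_n(s) of Eq. (5) on a grid covering the explored region;
    Z_n (Eq. (6)) is the average of P_n over that region."""
    z_n = np.trapz(p_n, s_grid) / (s_grid[-1] - s_grid[0])
    return (1.0 - 1.0 / gamma) / beta * np.log(p_n / z_n + epsilon)
```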
The introduction of the $Z_{n}$ term is one of the key innovations of OPES. In
similar methods, once a new metastable state is found one often sees a
dramatic increase of the exit time, compared to the first one16 (see SI, Fig.
S5). This exit time problem is present also when the CVs are optimal, and
should not be confused with the exploration-convergence tradeoff that is the
primary concern of this paper. Other convergence-focused methods introduce
extra parameters to tackle this problem, for example in transition-tempered
metadynamics17 prior knowledge of the position of all metastable states is
required. Instead, OPES avoids the exit time problem by taking into account
the expansion of the CV space via the $Z_{n}$ term, which allows the bias to
adjust more quickly when a new CV-space region is sampled4.
At the start of an OPES simulation only a handful of parameters need to be
chosen, namely the initial kernel bandwidth, the pace at which the bias is
updated, and the approximate FES barrier height that needs to be overcome.
From this last piece of information a prescription is given to automatically
set the values of $\gamma$ and $\epsilon$. The number of parameters can be reduced even
further if one uses, as we shall do here, the adaptive bandwidth algorithm
discussed in the SI.
## 3 An exploratory OPES variant
We present now a new OPES variant called OPES-explore which, compared to the
original OPES formulation, leads to a faster exploration of the phase space at
the cost of a slower convergence. We have recalled that the $Z_{n}$ term
allows OPES to quickly adapt the bias when a metastable state is found in a
previously unexplored region of CV space. However, if the CVs used are
suboptimal, it may happen that a new metastable state is found in an already
explored $\mathbf{s}$ region18, 19. In such a case, the $Z_{n}$ term remains
constant and is therefore ineffective in accelerating the exit time. Instead,
to encourage a rapid exit, one would need a method that allows the bias to
significantly change shape again. Fortunately, it is possible to achieve this
exploratory behaviour simply by making a minimal change to the OPES protocol,
which gives rise to the OPES-explore variant.
In formulating OPES-explore, we restrict ourselves to the case of using as
target the well-tempered distribution,
$p^{\text{tg}}(\mathbf{s})=p^{\text{WT}}(\mathbf{s})$, Eq. (2). In OPES, the
bias is expressed as a function of $P_{n}(\mathbf{s})$, the on-the-fly
estimate of the unknown equilibrium distribution $P(\mathbf{s})$. At the
beginning of the simulation this estimate is not reliable, but it improves
over time and converges in a self-consistent way. In OPES-explore instead, one
builds the bias starting from the on-the-fly estimate of the distribution that
is being sampled in the biased simulation:
$p^{\text{WT}}_{n}(\mathbf{s})=\frac{1}{n}\sum_{k}^{n}G(\mathbf{s},\mathbf{s}_{k})\,,$
(7)
where $\mathbf{s}_{k}$ is the value of the CVs sampled at step $k$. As the simulation
converges, $p^{\text{WT}}_{n}(\mathbf{s})$ approaches the target well-tempered
distribution $p^{\text{WT}}(\mathbf{s})$. Thus, analogously to Sec. 2, we use
the approximation
$P(\mathbf{s})\propto[p^{\text{WT}}_{n}(\mathbf{s})]^{\gamma}$ and write the
bias according to Eq. (1):
$V_{n}(\mathbf{s})=(\gamma-1)\frac{1}{\beta}\log\left(\frac{p^{\text{WT}}_{n}(\mathbf{s})}{Z_{n}}+\epsilon\right)\,,$
(8)
where $\epsilon$ and $Z_{n}$ have been added for the same reasons as in Eq.
(5). We notice that the expressions in Eqs. (3) and (7), which define the
probability estimates used in the two OPES schemes, converge respectively to
$P(\mathbf{s})$ and $p^{\text{WT}}(\mathbf{s})$ only within the self-
consistent scheme where the simulation runs with a bias that is updated on-
the-fly according to Eqs. (5) and (8) respectively. Both OPES variants are
applications of the general Eq. (1), but OPES estimates on-the-fly
$P(\mathbf{s})$ and uses it to calculate the bias, while OPES-explore does the
same but with $p^{\text{WT}}(\mathbf{s})\propto[P(\mathbf{s})]^{1/\gamma}$.
The free energy surface as a function of the CVs can be estimated in two
distinct ways, either directly from the probability estimate,
$F_{n}(\mathbf{s})=-\gamma\frac{1}{\beta}\log p_{n}^{\text{WT}}(\mathbf{s})$,
or via importance sampling reweighting, e.g. using a weighted kernel density
estimation,
$F_{n}(\mathbf{s})=-\frac{1}{\beta}\log\sum_{k}^{n}e^{\beta
V_{k-1}(\mathbf{s}_{k})}G(\mathbf{s},\mathbf{s}_{k})\,.$ (9)
In standard OPES these two estimates are equivalent, while in OPES-explore
(similarly to MetaD) they can differ significantly in the first part of the
simulation until they eventually converge to the same estimate.
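The following sketch, under the same assumptions and naming conventions as above, collects the OPES-explore bias of Eq. (8) together with the two FES estimators just described.

```python
import numpy as np

def explore_bias(p_wt_n, beta, gamma, z_n, epsilon):
    """OPES-explore bias of Eq. (8): built on the unweighted estimate
    p^WT_n(s) of the *sampled* distribution, Eq. (7)."""
    return (gamma - 1.0) / beta * np.log(p_wt_n / z_n + epsilon)

def fes_direct(p_wt_n, beta, gamma):
    """Direct estimate F_n(s) = -(gamma/beta) log p^WT_n(s)."""
    return -gamma / beta * np.log(p_wt_n)

def fes_reweighted(s_grid, s_samples, v_samples, beta, sigma):
    """Reweighted estimate of Eq. (9): a weighted KDE of the unbiased
    distribution with weights exp(beta V_{k-1}(s_k))."""
    w = np.exp(beta * v_samples)
    diff = (s_grid[:, None] - s_samples[None, :]) / sigma
    g = np.exp(-0.5 * diff**2) / (sigma * np.sqrt(2.0 * np.pi))
    return -np.log(g @ w) / beta
```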
Figure 1: Time evolution of a typical simulation of alanine dipeptide in
vacuum using the two OPES variants with the dihedral angles $\phi$ and $\psi$
as CVs. For each method, the compressed kernels are shown on the left with the
point size indicating the adaptive bandwidth, and the corresponding free
energy estimate $F_{n}(\phi,\psi)$ on the right. (a) In the original OPES,
kernels make up the unbiased distribution estimate $P_{n}(\phi,\psi)$ and
$F_{n}(\phi,\psi)=-\frac{1}{\beta}\log P_{n}(\phi,\psi)$, while (b) in OPES-
explore kernels make up the sampled distribution estimate
$p^{\text{WT}}_{n}(\phi,\psi)$ and
$F_{n}(\phi,\psi)=-\gamma\frac{1}{\beta}\log p^{\text{WT}}_{n}(\phi,\psi)$.
All $F_{n}(\phi,\psi)$ are shifted to have zero minimum. Notice how OPES-
explore requires fewer kernels and visits higher FES regions.
In figure 1 we contrast an OPES and OPES-explore simulation of alanine
dipeptide in vacuum, which has become a standard test for enhanced sampling
methods. Both simulations have the same input parameters and use the adaptive
bandwidth scheme described in the SI. The bias is initially quite coarse, but
the width of the kernels reduces as the simulation proceeds and the details of
the FES are increasingly better described. It can clearly be seen that the
OPES-explore variant employs fewer kernels compared to the original OPES. This
is due to the fact that in OPES-explore the kernel density estimation is used
for $p^{\text{WT}}(\mathbf{s})\propto[P(\mathbf{s})]^{1/\gamma}$, which is a
smoothed version of $P(\mathbf{s})$ and thus requires less detail. This more
compact representation can be useful especially in higher dimensions, where
the number of kernels can greatly increase despite the compression algorithm.
However, as a drawback it can result in a less accurate bias estimate,
especially for large values of $\gamma$.
## 4 Fewer transitions can lead to better convergence
Figure 2: (a) The Müller potential energy surface, $U(x,y)$. (b) The free
energy surface along the $x$ coordinate, $F(x)$, with and without the addition
of the bias potential $V(x)=-(1-1/\gamma)F(x)$, where $\gamma=20$. (c) The
potential energy modified by the bias potential, $U(x,y)+V(x)$. It can be seen
that, despite the almost flat profile along $x$, the transition region between
the states remains at high energy.
The difference in performance between OPES and OPES-explore cannot be judged
from the alanine dipeptide example, because in this case the CVs chosen are
extremely efficient. In order to highlight the difference between the two
methods, we study a simple two-dimensional model potential that is known as
the Müller potential20, see Fig. 2a, using the $x$ coordinate as collective
variable. This is a clear example of suboptimal CV, since it can discriminate
the metastable states, but not the transition state.
For two-dimensional systems the free energy along the CV, $F(x)$, can be
computed precisely with numerical integration, Fig. 2b. From $F(x)$, the free
energy difference between the two metastable states can be calculated as
$\Delta F=-\frac{1}{\beta}\log\frac{\int_{0}^{1}e^{-\beta
F(x)}dx}{\int_{-1.3}^{0}e^{-\beta F(x)}dx}\,.$ (10)
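Since the Müller potential is a sum of four Gaussian terms, $F(x)$ and $\Delta F$ can be obtained by direct quadrature, for instance as in the sketch below. We use the standard Müller-Brown parameters; the $y$ integration range and the value of $\beta$ are our assumptions, as the computational details of the paper are given in the SI.

```python
import numpy as np

# Standard Mueller-Brown parameters (four Gaussian terms)
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def delta_F(beta, nx=400, ny=400):
    """Delta F of Eq. (10): integrate out y to get exp(-beta F(x)),
    then compare the basins x in [0, 1] and x in [-1.3, 0]."""
    xs = np.linspace(-1.3, 1.0, nx)
    ys = np.linspace(-0.5, 2.0, ny)             # assumed y range covering both basins
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    U = sum(A[i] * np.exp(a[i] * (X - x0[i])**2
                          + b[i] * (X - x0[i]) * (Y - y0[i])
                          + c[i] * (Y - y0[i])**2) for i in range(4))
    U -= U.min()                                # shift for numerical stability
    w = np.trapz(np.exp(-beta * U), ys, axis=1) # proportional to exp(-beta F(x))
    num = np.trapz(w[xs >= 0], xs[xs >= 0])
    den = np.trapz(w[xs <= 0], xs[xs <= 0])
    return -np.log(num / den) / beta
```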
While it is possible to better distinguish the two states by also using the
$y$ coordinate, this does not result in a significant difference in the
$\Delta F$ value (see SI, Sec. S3). On the other hand, $x$ does a poor job of
identifying the transition state, which is around $x\approx-0.7$ and $y\approx
0.6$, and not at $x\approx 0$ as it would seem from $F(x)$. As a consequence,
it is not possible to significantly increase the transition rate between the
states using a static bias that is a function of $x$ only.
To show this, we consider the effect of adding to the system the converged
well-tempered bias $V(x)=-(1-1/\gamma)F(x)$, with $\gamma=20$. In Fig. 2b, we
can see the effect of the bias on the FES along $x$, which becomes almost
completely flat. However, when we consider the full 2D landscape, Fig. 2c, we
can see that such bias does not really remove the barrier between the two
states. From the height of the barrier at the transition state, one can
roughly estimate that adding $V(x)$ improves the transition rate by about one
order of magnitude. Nevertheless, transitions remain quite rare, around one
every $10^{6}$ uncorrelated samples (see SI, Sec. S3).
We want to compare the two OPES variants and well-tempered metadynamics in
this challenging setting, where CVs are suboptimal and the total simulation
time is not enough to reach full convergence. This type of situation is not
uncommon in practical applications, and it is thus of great interest. Given
enough time, all the methods considered converge to the same bias potential and
sample the same target distribution, but we shall see that before reaching
this limit they behave very differently.
Figure 3: Typical simulations of the Müller potential using different methods
for biasing the $x$ coordinate. Given more time, the three methods will
converge to the same bias potential and will sample the same target
distribution. In (a) is the trajectory along the CV and in (b) is the
corresponding $\Delta F_{n}$, Eq. 10, calculated using the FES estimate
obtained directly from the applied bias,
$F_{n}(x)=-(1-1/\gamma)^{-1}V_{n}(x)$. The correct $\Delta F$ value is
highlighted by a blue stripe 1 k${}_{\text{B}}$T thick.
Figure 3a shows a typical run of the Müller potential obtained by biasing the
$x$ coordinate with OPES, OPES-explore or MetaD. As a simple way to visualize
the evolution of the bias, we also report in Fig. 3b the $\Delta F_{n}$
estimate obtained directly from the applied bias, by using
$F_{n}(x)=-(1-1/\gamma)^{-1}V_{n}(x)$ in Eq. (10). We can see a qualitative
difference between OPES and the other two methods.
OPES reaches a quasi-static bias that is very close to the converged one, but
samples a distribution that is far from the well-tempered one, where the two
basins would be about equally populated. On the other hand, the $x$
distribution sampled by OPES-explore is closer to the target well-tempered
one, but its bias is far from converged, and makes wide oscillations around
the correct value. Metadynamics behaves similarly to OPES-explore. This is the
exploration-convergence tradeoff described in the introduction. Since the CV
is suboptimal, even when using the converged bias $V(x)$, to see a transition
occur one has to wait for an average number of steps $\tau\approx 10^{6}$,
which is more than the total length of the simulation. However, it is possible
to greatly accelerate transitions by using a time-dependent bias that forces
the system into higher energy pathways, that are not accessible at
equilibrium.
In OPES-explore the bias is based on the estimate of the sampled probability
$p^{\text{WT}}_{n}(\mathbf{s})$, and pushes to make it similar to the almost
flat well-tempered target. This means that in order to have a quasi-static
bias, $p^{\text{WT}}_{n}(\mathbf{s})$ should both be almost flat and not
change significantly as the simulation proceeds. Clearly, this cannot happen
unless the simulation is longer than $\tau$, otherwise most of the time would
be spent in the same basin and $p^{\text{WT}}_{n}(\mathbf{s})$ would be far
from flat. On the contrary, in OPES the bias is based on the reweighted
estimate $P_{n}(\mathbf{s})$, and thus it can reach a quasi-static regime even
before sampling the target distribution.
Figure 4: Estimate of the free energy difference $\Delta F$ for the Müller
potential obtained by averaging 25 independent runs for each biasing method.
The standard deviation is also shown for each estimate. Given more time, all
these estimates will converge to the correct $\Delta F$. All simulations start
from the main basin, $x<0$, but with different initial conditions. In (a) is
the estimate obtained directly from the applied bias, as in Fig. 3b, while in
(b) is the corresponding estimate obtained via reweighting. For metadynamics
two different reweighting schemes are considered, bias-offset 21, 22 and last-
bias reweighting23, 19. The correct $\Delta F$ value is highlighted by a blue
stripe 1 k${}_{\text{B}}$T thick.
In figure 4a we show the $\Delta F_{n}$ estimate averaged over 25 independent
runs, all starting from the main basin $x<0$. We can see that on average OPES
provides the best $\Delta F_{n}$ estimate at any $n$ in spite of the fact that
it induces far fewer transitions. In fact, most of the time only one full back-
and-forth transition is observed (see SI). One should notice that after a
single transition the $\Delta F_{n}$ estimate is far from being accurate (see
Fig. 3b) but, since the bias quickly becomes quasi-static, it is possible to
collect equilibrium samples and reliably reweight them, and the average
estimate becomes more accurate the more simulations are run. Instead in OPES-
explore and MetaD, despite starting from independent initial conditions, the
runs are highly correlated, due to the transitions being mostly driven by the
strong changes in the bias rather than the natural fluctuations of the system.
As a further consequence of this, a systematic error is present in the average
estimate, even if $\Delta F_{n}$ is further averaged over time, to remove the
oscillatory behaviour of OPES-explore and MetaD. Such systematic error depends
on the characteristics of the system and the chosen CVs, and it is hard to
predict whether it will be relevant or small. Nevertheless, one can be sure that it
reduces over time as the bias converges24.
Estimates of $\Delta F_{n}$ using different reweighting schemes are shown in
Fig. 4b. For OPES and OPES-explore the simple Eq. (9) has been used, while for
MetaD we consider two of the most popular reweighting schemes, namely last-
bias reweighting23, 19 and bias-offset reweighting21, 22. As expected, the
reweighting estimate of OPES is virtually identical to the direct estimate
obtained from the bias, while for the other two methods the two estimates
differ. The reweighting of OPES-explore has very small statistical uncertainty,
which further highlights the presence of a systematic error in the free energy
difference estimate. Like others before us22, 25, 26, we observe empirically
that the last-bias reweighting for MetaD tends to always be in agreement with
the direct estimate, even when the simulation is far from converged, while the
bias-offset reweighting provides a very unreliable estimate if the MetaD bias
has not reached a quasi-static regime and the initial part of the simulation
is not discarded. Once again, it must be noted that the simulations considered
here are not fully converged, otherwise all the different estimates of the
various methods would have yielded the correct result, without systematic
errors. However, for most practical purposes they behave very differently,
thus it is important to choose between an exploration-focused or a
convergence-focused enhanced sampling method, depending on the specific aim of
the simulation.
## 5 Sometimes exploration is what matters
In the examples of the previous section, it was shown that OPES converges
to a quasi-static bias faster than OPES-explore and provides more accurate FES
estimates. However, FES estimation is not the only goal of an enhanced
sampling simulation. In complex systems where good CVs are not available,
convergence can remain out of reach; still, one might be interested in
exploring the phase space and finding all the relevant metastable basins. In
such a situation, OPES-explore can be a useful tool.
Figure 5: The eight metastable basins of alanine tetrapeptide in vacuum
sampled via OPES-explore by biasing the three $\psi$ angles, a suboptimal set
of CVs. Each basin is identified by the sign of the three $\phi$ angles, for a
total of $2^{3}$ possible combinations. The most stable basin has
$\phi_{1},\phi_{2},\phi_{3}<0$, while for the least stable
$\phi_{1},\phi_{2},\phi_{3}>0$.
Figure 6: Exploration time of the eight metastable basins of alanine
tetrapeptide over 100 ns. The lines are an average over 10 independent runs
for each method, showing the total number of visited basins. In (a) the bias
is a function of the three $\phi$ angles, $V=V(\phi_{1},\phi_{2},\phi_{3})$,
while in (b) the three $\psi$ angles are used,
$V=V(\psi_{1},\psi_{2},\psi_{3})$. See SI for results with different input
parameters and other MetaD variants, such as parallel-bias MetaD11.
We consider here as test system alanine tetrapeptide in vacuum, as in Ref. 4.
It has three $\phi$ dihedral angles, each of them can change from positive to
negative values and vice versa with a relatively low probability. This leads
to $2^{3}=8$ distinct metastable basins, each corresponding to a different
combination of $\phi$ angles signs, as shown in Fig. 5. Here we are not
interested in estimating the FES, but rather we want to compare the ability of
different methods to explore this space and discover all metastable states.
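For bookkeeping, each sampled configuration can be mapped to one of the $2^{3}$ basins from the signs of its three $\phi$ angles, e.g. with a hypothetical helper like the following (the labeling scheme is ours).

```python
def basin_index(phi1, phi2, phi3):
    """Map the signs of the three phi dihedrals to a basin label 0..7;
    0 is the all-negative (most stable) basin, 7 the all-positive one."""
    return ((phi1 > 0) << 2) | ((phi2 > 0) << 1) | int(phi3 > 0)
```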
Figure 6 shows the number of explored basins averaged over 10 independent
simulations for each enhanced sampling method. The simulations in the top
panel (Fig. 6a) use as CVs the $\phi$ angles,
$V=V(\phi_{1},\phi_{2},\phi_{3})$, which are good CVs, while in the bottom
(Fig. 6b) the suboptimal $\psi$ angles are used,
$V=V(\psi_{1},\psi_{2},\psi_{3})$. In all methods, the exploration time
increases approximately by two orders of magnitude when suboptimal CVs are
used (please note the horizontal logarithmic scale). As expected, OPES and
OPES-explore have similar exploration speed when using good CVs, while with
suboptimal CVs OPES struggles to find all the metastable basins. This is
because the same region of CV space might correspond to two different
metastable basins, or to a basin and a transition state, as for the Müller
potential18, 19. In this situation, the previously estimated bias must change
considerably for the simulation to quickly escape the current metastable
state.
The exploration speed of MetaD depends critically on the input parameters and
requires a trial-and-error tuning. We report here only the outcome of MetaD
simulations in which a standard choice of the input parameters has been made.
As can be seen in Fig. 6, in these simulations the exploration speed is
roughly one order of magnitude slower than that of OPES-explore. However, the
performance of MetaD simulations can be improved by using different settings,
as shown in the SI. In the SI we also report and briefly discuss results
obtained with non-tempered metadynamics2, adaptive-Gaussians metadynamics23
and parallel-bias metadynamics11. None of these MetaD variants significantly
improve the exploration speed, and some make it even worse.
Finally, in the SI we show how a preliminary OPES-explore run can be combined
with a multithermal OPES simulation5, 6 to sample efficiently alanine
tetrapeptide and reach a converged FES, even without explicitly biasing the
$\phi$ angles.
## 6 Conclusion
We have shown with the help of model systems that there is an exploration-
convergence tradeoff in adaptive-bias methods when suboptimal CVs are used.
This tradeoff should not be confused with the exit time problem, that is
present also with optimal CVs, and is discussed in Sec. 2 and Refs. 17, 16.
Contrary to the exit time problem, the exploration-convergence tradeoff cannot
be solved and is an intrinsic limitation of CV-based adaptive-bias methods,
that is a consequence of suboptimal CVs. We believe the best way to handle
this tradeoff is to have separate methods that clearly focus on one or the
other aspect, so that they can be used depending on the application. In a
convergence-focused method the bias soon becomes quasi-static to allow for
accurate reweighting and free energy estimation. However, with suboptimal CVs
this leads to a slow transition rate and a long time is required to sample the
target distribution. As discussed, even if one knows the true $F(\mathbf{s})$
and directly applies the converged bias, one would not obtain a faster
exploration. In an exploration-focused method, it is possible to improve the
exploration speed by letting the bias change substantially even in a CV region
that has already been visited. While this may increase the number of
transitions, it comes at the cost of a less accurate estimate of the free
energy.
The original OPES method focuses on fast convergence to provide an accurate
estimate of the free energy surface and reweighted observables. As a
consequence, it is very sensitive to the quality of the CVs (see e.g. Fig. 3a)
and any improvement in the CVs results in a clear acceleration of the
transition rate. This is a particularly useful property when developing
machine learning-based CVs, and in fact OPES has already been used several
times in this context27, 28, 29, 30, 31, 32.
In other situations, improving the CVs may require first a better exploration
of the phase space33, 34, 35, 28. Furthermore, one may be interested simply in
exploring the metastable states of a system rather than estimating an accurate
FES36, 37, 38, 39. For this reason, we have introduced a variant of the OPES
method, OPES-explore, that focuses on quickly sampling the target distribution
and exploring the phase space.
We have shown that also well-tempered metadynamics is an exploration-focused
method. One of the main advantages of OPES-explore over MetaD is that it is
easier to use, since it requires fewer input parameters and it has a more
straightforward reweighting scheme (but more advanced ones can also be used25,
40). Another important difference between the two methods is that OPES-
explore, similarly to OPES, by default provides a maximum threshold to the
applied bias potential, thus it avoids unreasonably high free energy regions.
To obtain the same effect with MetaD, one typically has to define some ad hoc
static bias walls by trial and error. This last feature of OPES-explore has
been recently leveraged by Raucci et al. to systematically discover reaction
pathways in chemical processes41.
Finally, we should clarify that OPES-explore, just as metadynamics, might not
be able to exit any metastable state if the CVs are too poor42, 19, and its
improved exploration capability can only be harnessed if the CVs are close
enough to the correct ones to make such transitions possible. The speed and
small number of input parameters of OPES-explore are extremely helpful for
quickly testing several candidate CVs, to find out which can drive transitions
and discard the bad ones.
We believe that OPES-explore is an important addition to the OPES family of
methods and will become a useful tool for researchers as it pushes forward the
trend for more robust and reliable enhanced sampling methods.
We thank Valerio Rizzi and Umberto Raucci for useful discussions. M.I.
acknowledges support from the Swiss National Science Foundation through an
Early Postdoc.Mobility fellowship. Calculations were carried out on the Euler
cluster at ETH Zurich and on workstations provided by USI Lugano.
## Data availability
An open-source implementation of the OPES and OPES-explore methods is
available in the enhanced sampling library PLUMED from version 2.8 43. All the
data and input files needed to reproduce the simulations presented in this
paper are available on PLUMED-NEST (www.plumed-nest.org), the public
repository of the PLUMED consortium 44, as plumID:22.003.
## Supporting information
Description of the adaptive bandwidth algorithm, computational details
regarding the Müller potential and further biased trajectories, exploration
speed for alanine tetrapeptide using other methods, and description of a
multithermal-multiumbrella simulation to improve upon the OPES-explore run.
## References
* Mezei 1987 Mezei, M. Adaptive umbrella sampling: Self-consistent determination of the non-Boltzmann bias. _Journal of Computational Physics_ 1987, _68_ , 237–248
* Laio and Parrinello 2002 Laio, A.; Parrinello, M. Escaping free-energy minima. _Proceedings of the National Academy of Sciences_ 2002, _99_ , 12562–12566
* Barducci et al. 2008 Barducci, A.; Bussi, G.; Parrinello, M. Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method. _Physical Review Letters_ 2008, _100_ , 020603
* Invernizzi and Parrinello 2020 Invernizzi, M.; Parrinello, M. Rethinking Metadynamics: From Bias Potentials to Probability Distributions. _The Journal of Physical Chemistry Letters_ 2020, _11_ , 2731–2736
* Invernizzi et al. 2020 Invernizzi, M.; Piaggi, P. M.; Parrinello, M. Unified Approach to Enhanced Sampling. _Physical Review X_ 2020, _10_ , 41034
* Invernizzi 2021 Invernizzi, M. OPES: On-the-fly Probability Enhanced Sampling Method. _Il Nuovo Cimento C_ 2021, _44_ , 112
* Invernizzi and Parrinello 2019 Invernizzi, M.; Parrinello, M. Making the Best of a Bad Situation: A Multiscale Approach to Free Energy Calculation. _Journal of Chemical Theory and Computation_ 2019, _15_ , 2187–2194
* Jarzynski 1997 Jarzynski, C. Nonequilibrium Equality for Free Energy Differences. _Physical Review Letters_ 1997, _78_ , 2690–2693
* Donati and Keller 2018 Donati, L.; Keller, B. G. Girsanov reweighting for metadynamics simulations. _The Journal of Chemical Physics_ 2018, _149_ , 072335
* Bal 2021 Bal, K. M. Reweighted Jarzynski Sampling: Acceleration of Rare Events and Free Energy Calculation with a Bias Potential Learned from Nonequilibrium Work. _Journal of Chemical Theory and Computation_ 2021, _17_ , 6766–6774
* Pfaendtner and Bonomi 2015 Pfaendtner, J.; Bonomi, M. Efficient Sampling of High-Dimensional Free-Energy Landscapes with Parallel Bias Metadynamics. _Journal of Chemical Theory and Computation_ 2015, _11_ , 5062–5067
* Valsson and Parrinello 2014 Valsson, O.; Parrinello, M. Variational Approach to Enhanced Sampling and Free Energy Calculations. _Physical Review Letters_ 2014, _113_ , 090601
* White et al. 2015 White, A. D.; Dama, J. F.; Voth, G. A. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics. _Journal of Chemical Theory and Computation_ 2015, _11_ , 2451–2460
* Marsili et al. 2006 Marsili, S.; Barducci, A.; Chelli, R.; Procacci, P.; Schettino, V. Self-healing Umbrella Sampling: A Non-equilibrium Approach for Quantitative Free Energy Calculations. _The Journal of Physical Chemistry B_ 2006, _110_ , 14011–14013
* Sodkomkham et al. 2016 Sodkomkham, D.; Ciliberti, D.; Wilson, M. A.; Fukui, K.-I.; Moriyama, K.; Numao, M.; Kloosterman, F. Kernel density compression for real-time Bayesian encoding/decoding of unsorted hippocampal spikes. _Knowledge-Based Systems_ 2016, _94_ , 1–12
* Fort et al. 2017 Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G. Self-healing umbrella sampling: convergence and efficiency. _Statistics and Computing_ 2017, _27_ , 147–168
* Dama et al. 2014 Dama, J. F.; Rotskoff, G.; Parrinello, M.; Voth, G. A. Transition-tempered metadynamics: Robust, convergent metadynamics via on-the-fly transition barrier estimation. _Journal of Chemical Theory and Computation_ 2014, _10_ , 3626–3633
* Pietrucci 2017 Pietrucci, F. Strategies for the exploration of free energy landscapes: Unity in diversity and challenges ahead. _Reviews in Physics_ 2017, _2_ , 32–45
* Bussi and Laio 2020 Bussi, G.; Laio, A. Using metadynamics to explore complex free-energy landscapes. _Nature Reviews Physics_ 2020,
* Müller and Brown 1979 Müller, K.; Brown, L. D. Location of saddle points and minimum energy paths by a constrained simplex optimization procedure. _Theoretica Chimica Acta_ 1979, _53_ , 75–93
* Tiwary and Parrinello 2015 Tiwary, P.; Parrinello, M. A time-independent free energy estimator for metadynamics. _Journal of Physical Chemistry B_ 2015, _119_ , 736–742
* Valsson et al. 2016 Valsson, O.; Tiwary, P.; Parrinello, M. Enhancing Important Fluctuations: Rare Events and Metadynamics from a Conceptual Viewpoint. _Annual Review of Physical Chemistry_ 2016, _67_ , 159–184
* Branduardi et al. 2012 Branduardi, D.; Bussi, G.; Parrinello, M. Metadynamics with Adaptive Gaussians. _Journal of Chemical Theory and Computation_ 2012, _8_ , 2247–2254
* Dama et al. 2014 Dama, J. F.; Parrinello, M.; Voth, G. A. Well-Tempered Metadynamics Converges Asymptotically. _Physical Review Letters_ 2014, _112_ , 240602
* Marinova and Salvalaglio 2019 Marinova, V.; Salvalaglio, M. Time-independent free energies from metadynamics via mean force integration. _The Journal of Chemical Physics_ 2019, _151_ , 164115
* Giberti et al. 2020 Giberti, F.; Cheng, B.; Tribello, G. A.; Ceriotti, M. Iterative Unbiasing of Quasi-Equilibrium Sampling. _Journal of Chemical Theory and Computation_ 2020, _16_ , 100–107
* Bonati et al. 2020 Bonati, L.; Rizzi, V.; Parrinello, M. Data-Driven Collective Variables for Enhanced Sampling. _The Journal of Physical Chemistry Letters_ 2020, _11_ , 2998–3004
* Bonati et al. 2021 Bonati, L.; Piccini, G.; Parrinello, M. Deep learning the slow modes for rare events sampling. _Proceedings of the National Academy of Sciences_ 2021, _118_ , e2113533118
* Karmakar et al. 2021 Karmakar, T.; Invernizzi, M.; Rizzi, V.; Parrinello, M. Collective variables for the study of crystallisation. _Molecular Physics_ 2021, _119_ , e1893848
* Trizio and Parrinello 2021 Trizio, E.; Parrinello, M. From Enhanced Sampling to Reaction Profiles. _The Journal of Physical Chemistry Letters_ 2021, _12_ , 8621–8626
* Rizzi et al. 2021 Rizzi, V.; Bonati, L.; Ansari, N.; Parrinello, M. The role of water in host-guest interaction. _Nature Communications_ 2021, _12_ , 93
* Ansari et al. 2021 Ansari, N.; Rizzi, V.; Carloni, P.; Parrinello, M. Water-Triggered, Irreversible Conformational Change of SARS-CoV-2 Main Protease on Passing from the Solid State to Aqueous Solution. _Journal of the American Chemical Society_ 2021, _143_ , 12930–12934
* Branduardi et al. 2007 Branduardi, D.; Gervasio, F. L.; Parrinello, M. From A to B in free energy space. _The Journal of Chemical Physics_ 2007, _126_ , 054103
* McCarty and Parrinello 2017 McCarty, J.; Parrinello, M. A variational conformational dynamics approach to the selection of collective variables in metadynamics. _The Journal of Chemical Physics_ 2017, _147_ , 204109
* Mendels et al. 2018 Mendels, D.; Piccini, G.; Parrinello, M. Collective Variables from Local Fluctuations. _Journal of Physical Chemistry Letters_ 2018, _9_ , 2776–2781
* Piaggi and Parrinello 2018 Piaggi, P. M.; Parrinello, M. Predicting polymorphism in molecular crystals using orientational entropy. _Proceedings of the National Academy of Sciences_ 2018, _115_ , 10251–10256
* Capelli et al. 2019 Capelli, R.; Carloni, P.; Parrinello, M. Exhaustive Search of Ligand Binding Pathways via Volume-Based Metadynamics. _The Journal of Physical Chemistry Letters_ 2019, _10_ , 3495–3499
* Ahlawat et al. 2021 Ahlawat, P.; Hinderhofer, A.; Alharbi, E. A.; Lu, H.; Ummadisingu, A.; Niu, H.; Invernizzi, M.; Zakeeruddin, S. M.; Dar, M. I.; Schreiber, F.; Hagfeldt, A.; Grätzel, M.; Rothlisberger, U.; Parrinello, M. A combined molecular dynamics and experimental study of two-step process enabling low-temperature formation of phase-pure $\alpha$-FAPbI 3. _Science Advances_ 2021, _7_ , eabe3326
* Francia et al. 2021 Francia, N. F.; Price, L. S.; Salvalaglio, M. Reducing crystal structure overprediction of ibuprofen with large scale molecular dynamics simulations. _CrystEngComm_ 2021, _23_ , 5575–5584
* Carli and Laio 2021 Carli, M.; Laio, A. Statistically unbiased free energy estimates from biased simulations. _Molecular Physics_ 2021, _119_
* Raucci et al. 2022 Raucci, U.; Rizzi, V.; Parrinello, M. Discover, Sample, and Refine: Exploring Chemistry with Enhanced Sampling Techniques. _The Journal of Physical Chemistry Letters_ 2022, 1424–1430
* Bussi and Branduardi 2015 Bussi, G.; Branduardi, D. _Free-Energy Calculations with Metadynamics: Theory and Practice_ ; John Wiley & Sons, Inc, 2015; pp 1–49
* Tribello et al. 2014 Tribello, G. A.; Bonomi, M.; Branduardi, D.; Camilloni, C.; Bussi, G. PLUMED 2: New feathers for an old bird. _Computer Physics Communications_ 2014, _185_ , 604–613
* The PLUMED consortium 2019 The PLUMED consortium, Promoting transparency and reproducibility in enhanced molecular simulations. _Nature Methods_ 2019, _16_ , 670–673
|
# The Based Rings of Two-sided cells in an Affine Weyl group of type
$\tilde{B}_{3}$, II
Yannan Qiu∗ and Nanhua Xi†
∗ School of Mathematical Sciences, Zhejiang University, Zhejiang 310058, China
† Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China, and School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Dedicated to George Lusztig with greatest respect.
###### Abstract.
We compute the based rings of two-sided cells corresponding to the unipotent
classes in $Sp_{6}(\mathbb{C})$ with Jordan blocks (33), (411), (222),
respectively. The results for the first two two-sided cells also verify
Lusztig’s conjecture on the structure of the based rings of two-sided cells of
an affine Weyl group. The result for the last two-sided cell partially
suggests a modification of Lusztig’s conjecture on the structure of the based
rings of two-sided cells of an affine Weyl group.
Y. Qiu was partially supported by National Natural Science Foundation of
China, No. 12171030.
N. Xi was partially supported by National Key R&D Program of China, No.
2020YFA0712600, and by National Natural Science Foundation of China, No.
11688101.
We are concerned with the based rings of two-sided cells in an affine Weyl
group of type $\tilde{B}_{3}$. In a previous paper we discussed the based ring
of the two-sided cell corresponding to the nilpotent element in
$Sp_{6}(\mathbb{C})$ with 3 equal Jordan blocks and showed that Lusztig’s
conjecture on the structure of the based rings of the two-sided cells of an
affine Weyl group needs modification (see section 4 in [QX]). In this paper we
compute the based rings of two-sided cells corresponding to the unipotent
classes in $Sp_{6}(\mathbb{C})$ with Jordan blocks (411), (33), (222),
respectively. The results for the first two two-sided cells also verify
Lusztig’s conjecture on the structure of the based rings of two-sided cells of
an affine Weyl group. The result for the last two-sided cell partially
suggests a modification of Lusztig’s conjecture on the structure of the based
rings of two-sided cells of an affine Weyl group. For the first two two-sided
cells, the validity of Lusztig’s conjecture on the based rings is already
included in the main theorem in [BO]. Here we construct the bijection in
Lusztig’s conjecture explicitly so that the results in this paper can be used
for computing certain irreducible representations of affine Hecke algebras of
type $\tilde{B}_{3}$. In section 5 we give a description for the based ring of
the two-sided cell corresponding to the nilpotent element in
$Sp_{6}(\mathbb{C})$ with Jordan blocks (222), which can also be used to
compute certain irreducible representations of affine Hecke algebras of type
$\tilde{B}_{3}$.
The contents of the paper are as follows. Section 1 is devoted to
preliminaries, which include some basic facts on (extended) affine Weyl groups
and their Hecke algebras and formulation of Lusztig’s conjecture on the
structure of the based ring of a two-sided cell in an affine Weyl group. In
section 2 we recall some results on cells of the (extended) affine Weyl group
of type $\tilde{B}_{3}$, which are due to J. Du. Sections 3, 4, 5 are devoted
to discussing based rings of two-sided cells corresponding to the unipotent
classes in $Sp_{6}(\mathbb{C})$ with Jordan blocks (411), (33), (222),
respectively.
## 1. Affine Weyl groups and their Hecke algebras
In this section we fix some notations and refer to [KL, L1, L2, L3, QX] for
more details.
1.1. Extended affine Weyl groups and their Hecke algebras
Let $G$ be a
connected reductive algebraic group over the field $\mathbb{C}$ of complex
numbers. Let $W_{0}$ be the Weyl group of $G$ and $W$ the affine Weyl group
attached to $G$. The set of simple reflections of $W$ is denoted by $S$. We
shall denote the length function of $W$ by $l$ and use $\leq$ for the Bruhat
order on $W$. We also often write $y<w$ or $w>y$ if $y\leq w$ and $y\neq w$.
Let $H$ be the Hecke algebra of $(W,S)$ over
$\mathcal{A}=\mathbb{Z}[q^{\frac{1}{2}},q^{-\frac{1}{2}}]$ $(q$ an
indeterminate) with parameter $q$. Let $\\{T_{w}\\}_{w\in W}$ be its standard
basis. Then we have $(T_{s}-q)(T_{s}+1)=0$ and $T_{w}T_{u}=T_{wu}$ if
$l(wu)=l(w)+l(u)$. Let $C_{w}=q^{-\frac{l(w)}{2}}\sum_{y\leq w}P_{y,w}T_{y},\
w\in W$ be the Kazhdan-Lusztig basis of $H$, where $P_{y,w}$ are the Kazhdan-
Lusztig polynomials. The degree of $P_{y,w}$ is less than or equal to
$\frac{1}{2}(l(w)-l(y)-1)$ if $y<w$ and $P_{w,w}=1$. Convention: set
$P_{y,w}=0$ if $y\not\leq w$.
If $y<w$, we write $P_{y,w}=\mu(y,w)q^{\frac{1}{2}(l(w)-l(y)-1)}+\text{lower
degree terms.}$ We shall write $y\prec w$ if $\mu(y,w)\neq 0$. We have
(a) Let $y\leq w$. Assume that $sw\leq w$ for some $s\in S$. Then
$\displaystyle P_{y,w}$ $\displaystyle=P_{sy,w},\ \text{if}\ sy>y;$
$\displaystyle P_{y,w}$
$\displaystyle=q^{1-c}P_{sy,sw}+q^{c}P_{y,sw}-\sum_{\stackrel{{\scriptstyle\stackrel{{\scriptstyle
z\in W}}{{y\leq z\prec
sw}}}}{{sz<z}}}\mu(z,sw)q^{\frac{l(w)-l(z)}{2}}P_{y,z},$
where $c=1$ if $sy<y$ and $c=0$ if $sy>y$.
(b) Let $y\leq w$. Assume that $ws\leq w$ for some $s\in S$. Then
$\displaystyle P_{y,w}$ $\displaystyle=P_{ys,w},\ \text{if}\ ys>y;$
$\displaystyle\ P_{y,w}$
$\displaystyle=q^{1-c}P_{ys,ws}+q^{c}P_{y,ws}-\sum_{\stackrel{{\scriptstyle\stackrel{{\scriptstyle
z\in W}}{{y\leq z\prec
ws}}}}{{zs<z}}}\mu(z,ws)q^{\frac{l(w)-l(z)}{2}}P_{y,z},$
where $c=1$ if $ys<y$ and $c=0$ if $ys>y$.
From the two formulas above one gets (see [KL])
(c) Let $y,w\in W$ and $s\in S$ be such that $y<w,\ sw<w,$ and $sy>y$. Then
$y\prec w$ if and only if $w=sy$. Moreover this implies that $\mu(y,w)=1$.
(d) Let $y,w\in W$ and $s\in S$ be such that $y<w,\ ws<w$, and $ys>y$. Then
$y\prec w$ if and only if $w=ys$. Moreover this implies that $\mu(y,w)=1$.
The following formulas for computing $C_{w}$ (see [KL]) will be used in
sections 3, 4, 5.
(e) Let $w\in W$ and $s\in S$. Then
(1) $\displaystyle
C_{s}C_{w}=\begin{cases}\displaystyle(q^{\frac{1}{2}}+q^{-\frac{1}{2}})C_{w},\quad&\text{if\
}sw<w,\\\ \displaystyle C_{sw}+\sum_{\stackrel{{\scriptstyle z\prec
w}}{{sz<z}}}\mu(z,w)C_{z},\quad&\text{if\ }sw\geq w.\end{cases}$ (2)
$\displaystyle
C_{w}C_{s}=\begin{cases}\displaystyle(q^{\frac{1}{2}}+q^{-\frac{1}{2}})C_{w},\quad&\text{if\
}ws<w,\\\ \displaystyle C_{ws}+\sum_{\stackrel{{\scriptstyle z\prec
w}}{{zs<z}}}\mu(z,w)C_{z},\quad&\text{if\ }ws\geq w.\end{cases}$
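As a minimal check of formula (1), take $w=s$, so that $C_{s}=q^{-\frac{1}{2}}(T_{e}+T_{s})$, with $e$ the neutral element of $W$. Using $T_{s}^{2}=(q-1)T_{s}+qT_{e}$ one computes
$C_{s}C_{s}=q^{-1}\big(T_{e}+2T_{s}+(q-1)T_{s}+qT_{e}\big)=q^{-1}(q+1)(T_{e}+T_{s})=(q^{\frac{1}{2}}+q^{-\frac{1}{2}})C_{s},$
in agreement with the case $sw<w$.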
1.2. Cells of affine Weyl groups
We refer to [KL] for the definition of left
cells, right cells and two-sided cells of $W$.
For $h,\,h^{\prime}\in H$ and $x\in W$, write
$hC_{x}=\sum_{y\in W}a_{y}C_{y},\quad C_{x}h=\sum_{y\in W}b_{y}C_{y},\quad
hC_{x}h^{\prime}=\sum_{y\in W}c_{y}C_{y},\quad
a_{y},b_{y},c_{y}\in\mathcal{A}.$
Define $y\underset{L}{\leq}x$ if $a_{y}\neq 0$ for some $h\in H$,
$y\underset{R}{\leq}x$ if $b_{y}\neq 0$ for some $h\in H$, and
$y\underset{LR}{\leq}x$ if $c_{y}\neq 0$ for some $h,h^{\prime}\in H$.
We write $x\underset{L}{\sim}y$ if $x\underset{L}{\leq}y\underset{L}{\leq}x$,
$x\underset{R}{\sim}y$ if $x\underset{R}{\leq}y\underset{R}{\leq}x$, and
$x\underset{LR}{\sim}y$ if $x\underset{LR}{\leq}y\underset{LR}{\leq}x$. Then
$\underset{L}{\sim},\ \underset{R}{\sim},\ \underset{LR}{\sim}$ are
equivalence relations on $W$. The equivalence classes are called left cells,
right cells, and two-sided cells of $W$ respectively. Note that if $\Gamma$ is
a left cell of $W$, then $\Gamma^{-1}=\\{w^{-1}\,|\,w\in\Gamma\\}$ is a right
cell.
For $w\in W$, set $R(w)=\\{s\in S\,|\,ws\leq w\\}$ and $L(w)=\\{s\in
S\,|\,sw\leq w\\}.$ Then we have (see [KL])
(a) $R(w)\subset R(u)$ if $u\underset{L}{\leq}w$ and $L(w)\subset L(u)$ if
$u\underset{R}{\leq}w.$ In particular, $R(w)=R(u)$ if $u\underset{L}{\sim}w$
and $L(w)=L(u)$ if $u\underset{R}{\sim}w.$
1.3. $*$-operations
The $*$-operation introduced in [KL] and generalized in
[L1] is a useful tool in the theory of cells of Coxeter groups.
Let $s,t$ be simple reflections in $S$ and assume that $st$ has order $m\geq
3$. Let $w\in W$ be such that $sw\geq w,\ tw\geq w$. The $m-1$ elements $sw,\
tsw,\ stsw,\ ...,$ are called a left string (with respect to $\\{s,t\\})$, and
the $m-1$ elements $tw,\ stw,\ tstw,\ ...,$ are also called a left string (with
respect to $\\{s,t\\})$. Similarly we define right strings (with respect to
$\\{s,t\\}$). Then (see [L1])
(a) A left string in $W$ is contained in a left cell of $W$ and a right string
in $W$ is contained in a right cell of $W$.
Assume that $x$ is in a left (resp. right) string (with respect to
$\\{s,t\\})$ of length $m-1$ and is the $i$th element of the left (resp.
right) string, define ${}^{*}x$ (resp. $x^{*}$) to be the $(m-i)$th element of
the string, where $*=\\{s,t\\}$. The following result is proved in [X2].
(b) Let $x$ be in $W$ such that $x$ is in a left string with respect to
$*=\\{s,t\\}$ and is also in a right string with respect to
$\star=\\{s^{\prime},t^{\prime}\\}$. Then ${}^{*}x$ is in a right string with
respect to $\\{s^{\prime},t^{\prime}\\}$ and $x^{\star}$ is in a left string
with respect to $\\{s,t\\}$. Moreover ${}^{*}(x^{\star})=({}^{*}x)^{\star}$.
We shall write ${}^{*}x^{\star}$ for ${}^{*}(x^{\star})=({}^{*}x)^{\star}$.
The following result is due to Lusztig [L1].
(c) Let $\Gamma$ be a left cell of $W$ and suppose an element $x\in\Gamma$ is in a
right string $\sigma_{x}$ with respect to $*=\\{s,t\\}$. Then any element
$w\in\Gamma$ is in a right string $\sigma_{w}$ with respect to $*=\\{s,t\\}$.
Moreover $\Gamma^{*}=\\{w^{*}\,|\,w\in\Gamma\\}$ is a left cell of $W$ and
$\Omega=\displaystyle\left(\cup_{w\in\Gamma}\sigma_{w}\right)-\Gamma$ is a
union of at most $m-2$ left cells.
Following Lusztig [L1] we set $\tilde{\mu}(y,w)=\mu(y,w)$ if $y<w$ and
$\tilde{\mu}(y,w)=\mu(w,y)$ if $w<y$. For convenience we also set
$\tilde{\mu}(y,w)=0$ if $y\nless w$ and $w\nless y$. Assume that
$x_{1},x_{2},...,x_{m-1}$ and $y_{1},y_{2},...,y_{m-1}$ are two left strings
with respect to $*=\\{s,t\\}$. Define
$a_{ij}=\begin{cases}\tilde{\mu}(x_{i},y_{j}),\quad&\text{if\ }\\{s,t\\}\cap
L(x_{i})=\\{s,t\\}\cap L(y_{j}),\\\ 0,\quad&\text{otherwise}.\end{cases}$
Lusztig proved the following identities (see Subsection 10.4 in [L1]).
(d) If $m=3$, then $a_{11}=a_{22}$ and $a_{12}=a_{21}$.
(e) If $m=4$, then
(3) $\displaystyle a_{11}=a_{33},\ a_{13}=a_{31},\ a_{22}=a_{11}+a_{13},\
a_{12}=a_{21}=a_{23}=a_{32}.$
1.4. Lusztig’s $a$-function
For $x,y\in W$, write
$C_{x}C_{y}=\sum_{z\in W}h_{x,y,z}C_{z},\qquad
h_{x,y,z}\in\mathcal{A}=\mathbb{Z}[q^{\frac{1}{2}},q^{-\frac{1}{2}}].$
Following Lusztig ([L1]), we define
$a(z)={\rm min}\\{i\in\mathbf{N}\ |\
q^{-\frac{i}{2}}h_{x,y,z}\in\mathbb{Z}[q^{-\frac{1}{2}}]{\rm\ for\ all\
}x,y\in W\\}.$
If for any $i$,
$q^{-\frac{i}{2}}h_{x,y,z}\not\in\mathbb{Z}[q^{-\frac{1}{2}}]{\rm\ for\ some\
}x,y\in W$, we set $a(z)=\infty.$ The following properties are proved in [L1].
(a) We have $a(w)\leq l(w_{0})$ for any $w\in W$, where $w_{0}$ is the longest
element in the Weyl group $W_{0}$.
(b) $a(x)\geq a(y)$ if $x\underset{LR}{\leq}y$. In particular, $a(x)=a(y)$ if
$x\underset{LR}{\sim}y$.
(c) $x\underset{L}{\sim}y$ (resp. $x\underset{R}{\sim}y,\
x\underset{LR}{\sim}y$) if $a(x)=a(y)$ and $x\underset{L}{\leq}y$ (resp.
$x\underset{R}{\leq}y,\ x\underset{LR}{\leq}y$).
(d) If $h_{x,y,z}\neq 0$, then $z\underset{R}{\leq}x$ and
$z\underset{L}{\leq}y$. In particular, $a(z)>a(x)$ if
$z\not\underset{R}{\sim}x$, and $a(z)>a(y)$ if $z\not\underset{L}{\sim}y$.
Following Lusztig, we define $\gamma_{x,y,z}$ by the following formula,
$h_{x,y,z}=\gamma_{x,y,z}q^{\frac{a(z)}{2}}+{\rm\ lower\ degree\ terms}.$
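For example, the computation $C_{s}C_{s}=(q^{\frac{1}{2}}+q^{-\frac{1}{2}})C_{s}$ of 1.1(e) gives $h_{s,s,s}=q^{\frac{1}{2}}+q^{-\frac{1}{2}}$; since $a(s)=1$ for any simple reflection $s$, reading off the coefficient of $q^{\frac{1}{2}}$ yields $\gamma_{s,s,s}=1$.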
Springer showed that $l(z)\geq a(z)$ (see [L2]). Let $\delta(z)$ be the degree
of $P_{e,z}$, where $e$ is the neutral element of $W$. Then actually one has
$l(z)-a(z)-2\delta(z)\geq 0$ (see [L2]). Set
$\mathcal{D}=\\{z\in W\ |\ l(z)-a(z)-2\delta(z)=0\\}.$
The elements of $\mathcal{D}$ are involutions, called distinguished
involutions of $(W,S)$ (see [L2]). The following properties are due to Lusztig
[L2], except (j) (which is trivial) and (k) (which is proved in [X2]).
(e) $\gamma_{x,y,z}\neq 0\Longrightarrow x\underset{L}{\sim}y^{-1},\
y\underset{L}{\sim}z,\ x\underset{R}{\sim}z.$
(f) $x\underset{L}{\sim}y^{-1}$ if and only if $\gamma_{x,y,z}\neq 0$ for some
$z\in W$.
(g) $\gamma_{x,y,z}=\gamma_{y,z^{-1},x^{-1}}=\gamma_{z^{-1},x,y^{-1}}$.
(h) $\gamma_{x,d,x}=\gamma_{d,x^{-1},x^{-1}}=\gamma_{x^{-1},x,d}=1$ if
$x\underset{L}{\sim}d$ and $d$ is a distinguished involution.
(i) $\gamma_{x,y,z}=\gamma_{y^{-1},x^{-1},z^{-1}}.$
(j) If $\omega,\tau\in W$ have length 0, then
$\gamma_{\omega x,y\tau,\omega z\tau}=\gamma_{x,y,z},\ \ \gamma_{x\omega,\tau
y,z}=\gamma_{x,\omega\tau y,z}.$
(k) Let $x,y,z\in W$ be such that (1) $x$ is in a left string with respect to
$*=\\{s,t\\}$ and also in a right string with respect to
$\\#=\\{s^{\prime},t^{\prime}\\}$, (2) $y$ is in a left string with respect to
$\\#=\\{s^{\prime},t^{\prime}\\}$ and also in a right string with respect to
$\star=\\{s^{\prime\prime},t^{\prime\prime}\\}$, (3) $z$ is in a left string
with respect to $*=\\{s,t\\}$ and also in a right string with respect to
$\star=\\{s^{\prime\prime},t^{\prime\prime}\\}$. Then
$\gamma_{x,y,z}=\gamma_{{}^{*}x^{\\#},{}^{\\#}y^{\star},{}^{*}z^{\star}}.$
For $w\in W$, set $\tilde{T}_{w}=q^{-l(w)/2}T_{w}$. For $x,y\in W$, write
$\tilde{T}_{x}\tilde{T}_{y}=\sum_{z\in W}f_{x,y,z}\tilde{T}_{z},\qquad
f_{x,y,z}\in\mathcal{A}=\mathbb{Z}[q^{\frac{1}{2}},q^{-\frac{1}{2}}].$
(l) Suppose that $x,y,w$ are in a two-sided cell of $W$, that $f_{x,y,w}=\lambda
q^{\frac{a(w)}{2}}+$ lower degree terms, and that, as Laurent polynomials in
$q^{\frac{1}{2}}$, $\deg f_{x,y,z}\leq a(w)$ for all $z\in W$. Then
$\gamma_{x,y,w}=\lambda.$
(m) Each left cell (resp. each right cell) of $W$ contains a unique
distinguished involution.
(n) Each two-sided cell of $W$ contains only finitely many left cells.
(o) Let $I$ be a subset of $S$ such that the subgroup $W_{I}$ of $W$ generated
by $I$ is finite. Then the longest element $w_{I}$ is a distinguished
involution.
Let $d$ be a distinguished involution in $W$.
(p) For any $\omega\in\Omega$, the element $\omega d\omega^{-1}$ is a
distinguished involution.
(q) Suppose $s,t\in S$ and $st$ has order 3. Then $d\in D_{L}(s,t)$ if and
only if $d\in D_{R}(s,t)$. If $d\in D_{L}(s,t)$, then ${}^{*}d^{*}$ is a
distinguished involution.
1.5. Assume $s,t\in S$ and $st$ has order 4. Let $w,u,v$ be in $W$ such that
$l(ststw)=l(w)+4$ and $l(ststv)=l(v)+4$. We have (see [X2, 1.6.3])
(a) $\gamma_{tsw,u,tv}=\gamma_{sw,u,stv};$
(b) $\gamma_{tsw,u,tsv}=\gamma_{sw,u,sv}+\gamma_{sw,u,stsv};$
(c) $\gamma_{tsw,u,tstv}=\gamma_{sw,u,stv};$
(d) $\gamma_{tstw,u,tv}+\gamma_{tw,u,tv}=\gamma_{stw,u,stv};$
(e) $\gamma_{tstw,u,tsv}=\gamma_{stw,u,stsv};$
(f) $\gamma_{tstw,u,tstv}+\gamma_{tw,u,tstv}=\gamma_{stw,u,stv}.$
Assume $s,t\in S$ and $st$ has order 4. Let $w,u,v$ be in $W$ such that
$l(ustst)=l(u)+4$ and $l(vstst)=l(v)+4$. We have (loc. cit.)
(a’) $\gamma_{w,ut,vst}=\gamma_{w,uts,vs};$
(b’) $\gamma_{w,ust,vst}=\gamma_{w,us,vs}+\gamma_{w,usts,vs};$
(c’) $\gamma_{w,utst,vst}=\gamma_{w,uts,vs};$
(d’) $\gamma_{w,ut,vtst}+\gamma_{w,ut,vt}=\gamma_{w,uts,vts};$
(e’) $\gamma_{w,ust,vtst}=\gamma_{w,usts,vts};$
(f’) $\gamma_{w,utst,vtst}+\gamma_{w,utst,vt}=\gamma_{w,uts,vts}.$
1.6. The based ring of a two-sided cell. For each two-sided cell $c$ of $W$,
let $J_{c}$ be the free $\mathbb{Z}$-module with a basis $t_{w},\ w\in c$.
Define
$t_{x}t_{y}=\sum_{z\in c}\gamma_{x,y,z}t_{z}.$
Then $J_{c}$ is an associative ring with unit $\sum_{d\in\mathcal{D}\cap
c}t_{d}.$
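For orientation we sketch why $\sum_{d\in\mathcal{D}\cap c}t_{d}$ is a unit (a routine verification; it uses the fact, proved in [L2], that for a distinguished involution $d$ one has $\gamma_{x,y,d}\neq 0$ only if $y=x^{-1}$). For $x\in c$ and $d\in\mathcal{D}\cap c$, 1.4 (g) gives $\gamma_{x,d,z}=\gamma_{z^{-1},x,d}$, which by the fact just quoted vanishes unless $z=x$; moreover $\gamma_{x,d,x}=1$ if $x\underset{L}{\sim}d$ by 1.4 (h), while $\gamma_{x,d,x}=0$ if $x\not\underset{L}{\sim}d$ by 1.4 (e). Since by 1.4 (m) the left cell of $x$ contains a unique distinguished involution, we get $t_{x}\sum_{d\in\mathcal{D}\cap c}t_{d}=t_{x}$, and the computation on the other side is similar.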
The ring $J=\bigoplus_{c}J_{c}$ is a ring with unit
$\sum_{d\in\mathcal{D}}t_{d}$. Sometimes $J$ is called asymptotic Hecke
algebra since Lusztig established an injective $\mathcal{A}$-algebra
homomorphism
$\phi:H\to J\otimes\mathcal{A},\quad
C_{x}\mapsto\sum_{\stackrel{{\scriptstyle\stackrel{{\scriptstyle
d\in\mathcal{D}}}{{w\in W}}}}{{w\underset{L}{\sim}d}}}h_{x,d,w}t_{w}.$
1.7. Lusztig’s conjecture on the structure of $J_{c}$. In [L3] Lusztig states a
conjecture on $J_{c}$ using equivariant $K$-groups on finite sets.
Let $G$ be a connected reductive group over $\mathbb{C}$. Lusztig establishes
a bijection between the two-sided cells of the extended affine Weyl group $W$
and the unipotent classes of $G$.
For each two-sided cell $c$ of $W$, let $u$ be a unipotent element in the
unipotent class corresponding to $c$ and let $F_{c}$ be a maximal reductive
subgroup of the centralizer of $u$ in $G$.
Conjecture (Lusztig [L3]): Assume that $G$ is a simply connected simple
algebraic group over $\mathbb{C}$. Then there exists a finite set $Y$ with an
algebraic action of $F_{c}$ and a bijection
$\pi:c\to\text{the set of isomorphism classes of irreducible $F_{c}$-vector
bundles on}\ Y\times Y$
such that
(i) The bijection $\pi$ induces a ring homomorphism
$\pi:J_{c}\to K_{F_{c}}(Y\times Y),\ \ t_{x}\mapsto\pi(x).$
(ii) $\pi(x^{-1})_{(a,b)}=\pi(x)_{(b,a)}^{*}$ is the dual representation of
$\pi(x)_{(b,a)}.$
## 2\. Cells in an extended affine Weyl group of type $\tilde{B}_{3}$
In this section $G=Sp_{6}(\mathbb{C})$, so that the extended affine Weyl group
$W$ attached to $G$ is of type $\tilde{B}_{3}$. The left cells and two-sided
cells are described by J. Du (see [D]). We recall his results.
2.1. The Coxeter graph of $W$. As usual, we number the 4 simple reflections
$s_{0},\ s_{1},\ s_{2},\ s_{3}$ in $W$ so that
$\displaystyle s_{0}s_{1}=s_{1}s_{0},\quad s_{0}s_{3}=s_{3}s_{0},\quad
s_{1}s_{3}=s_{3}s_{1},$
$\displaystyle(s_{0}s_{2})^{3}=(s_{1}s_{2})^{3}=e,\quad(s_{2}s_{3})^{4}=e,$
where $e$ is the neutral element in $W$. The relations among the simple
reflections can be read through the following Coxeter graph:
[Coxeter graph of $\tilde{B}_{3}$: the nodes $0$, $1$, $3$ are each joined to the central node $2$; the edges $0$—$2$ and $1$—$2$ are simple, the edge $2$—$3$ carries the label $4$, and $0,1,3$ are pairwise not joined.]
There is a unique nontrivial element $\tau$ in $W$ with length 0. We have
$\tau^{2}=e,\ \tau s_{0}\tau=s_{1},\ \tau s_{i}\tau=s_{i}$ for $i=2,3.$ Note
that $s_{1},s_{2},s_{3}$ generate the Weyl group $W_{0}$ of type $B_{3}$ and
$s_{0},s_{1},s_{2},s_{3}$ generate an affine Weyl group $W^{\prime}$ of type
$\tilde{B}_{3}$, and $W$ is generated by $\tau,\ s_{0},s_{1},s_{2},s_{3}$.
2.2. Cells in $W$. According to [D], the extended affine Weyl group $W$
attached to $Sp_{6}(\mathbb{C})$ has 8 two-sided cells:
$A,\quad B,\quad C,\quad D,\quad E,\quad F,\quad G,\quad H.$
The following table displays some useful information on these two-sided cells.
$X$ | $a(X)$ | Number of left cells in $X$ | Size of Jordan blocks of the corresponding unipotent class in $Sp_{6}(\mathbb{C})$ | Maximal reductive subgroup of the centralizer of a unipotent element in the corresp. unipotent class
---|---|---|---|---
$A$ | 9 | 48 | (111111) | $Sp_{6}(\mathbb{C})$
$B$ | 6 | 24 | (21111) | $Sp_{4}(\mathbb{C})\times\mathbb{Z}/2\mathbb{Z}$
$C$ | 4 | 18 | (2211) | $SL_{2}(\mathbb{C})\times O_{2}(\mathbb{C})$
$D$ | 3 | 12 | (222) | $O_{3}(\mathbb{C})$
$E$ | 2 | 6 | (411) | $SL_{2}(\mathbb{C})\times\mathbb{Z}/2\mathbb{Z}$
$F$ | 2 | 8 | (33) | $SL_{2}(\mathbb{C})$
$G$ | 1 | 4 | (42) | $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$
$H$ | 0 | 1 | (6) | $\mathbb{Z}/2\mathbb{Z}$
The notation for the two-sided cells in the table follows [D]; it will be
replaced by other notation in subsequent sections, since the symbols $C,F,G$
are already used for other objects and confusion would otherwise arise.
In subsequent sections, for a reduced expression $s_{i_{1}}s_{i_{2}}\cdots
s_{i_{k}}$ of an element in $W$, we often write $i_{1}i_{2}\cdots i_{k}$ instead
of the reduced expression.
In the rest of the paper, $W$ always stands for the extended affine Weyl group
attached to $Sp_{6}(\mathbb{C})$, $\tau,\ s_{i}$ are as in Subsection 2.1, and all
representations in this paper are rational representations of algebraic
groups.
## 3\. The based ring of the two-sided cell containing $s_{0}s_{1}$
3.1. In this section $c$ stands for the two-sided cell of $W$ containing
$s_{0}s_{1}$. According to [D, Figure I, Theorem 6.4], $c$ has six left cells.
We list the six left cells and representative elements in the left cells given
in [D, Figure I]:
$\Gamma_{1},\ 01;\quad\Gamma_{2},\ 012;\quad\Gamma_{3},\
0123;\quad\Gamma_{4},\ 01232;\quad\Gamma_{5},\ 012321;\quad\Gamma_{6},\
012320.$
The value of the $a$-function on $c$ is 2.
The corresponding unipotent class in $Sp_{6}(\mathbb{C})$ has Jordan block
sizes (411). A maximal reductive subgroup of the centralizer of an element in
the unipotent class is $F_{c}=\mathbb{Z}/2\mathbb{Z}\times
SL_{2}(\mathbb{C})$. Let $\epsilon$ be the nontrivial one-dimensional
representation of $\mathbb{Z}/2\mathbb{Z}$ and $V(k)$ be an irreducible
representation of $SL_{2}(\mathbb{C})$ with highest weight $k$. They can be
regarded as irreducible representations of $F_{c}$ naturally. Up to
isomorphism, the irreducible representations of $F_{c}$ are $V(k),\
\epsilon\otimes V(k),\ k=0,1,2,3,...$. We will denote $\epsilon\otimes V(k)$
by $\epsilon V(k)$.
Let $x_{k}=(s_{0}s_{1}s_{2}s_{3}s_{2})^{k}s_{1}s_{0},\ u_{1}=e,\ u_{2}=s_{2},\
u_{3}=s_{3}s_{2},\ u_{4}=s_{2}s_{3}s_{2},\ u_{5}=s_{1}s_{2}s_{3}s_{2},\
u_{6}=s_{0}s_{2}s_{3}s_{2}.$
According to [D, Theorem 6.4], we have
(a) $c=\\{u_{i}x_{k}u_{j}^{-1},\ u_{i}\tau x_{k}u_{j}^{-1}\,|\,1\leq i,j\leq
6,\ k=0,1,2,3,...\\}.$
(b) $\Gamma_{j}=\\{u_{i}x_{k}u_{j}^{-1},\ u_{i}\tau x_{k}u_{j}^{-1}\,|\,1\leq
i\leq 6,\ k=0,1,2,3,...\\},\ j=1,2,3,4,5,6.$
Let $Y=\\{1,\ 2,\ ...,\ 6\\}$ and let $F_{c}$ act on $Y$ trivially. Then
$K_{F_{c}}(Y\times Y)$ is isomorphic to the $6\times 6$ matrix ring
$M_{6}(\text{Rep\,}F_{c})$, where $\text{Rep\,}F_{c}$ is the representation
ring of $F_{c}$. Recall that $F_{c}=\mathbb{Z}/2\mathbb{Z}\times
SL_{2}(\mathbb{C})$ in this section.
The main result in this section is the following theorem.
Theorem 3.2. Let $c$ be the two-sided cell of $W$ (the affine Weyl group $W$
attached to $Sp_{6}(\mathbb{C})$) containing $s_{0}s_{1}$. Then the map
$\pi:c\to M_{6}(\text{Rep\,}F_{c}),\quad u_{i}x_{k}u_{j}^{-1}\mapsto
V(k)_{ij},\ u_{i}\tau x_{k}u_{j}^{-1}\mapsto\epsilon V(k)_{ij}\ $
induces a ring isomorphism
$\pi:J_{c}\to M_{6}(\text{Rep\,}F_{c}),\quad t_{u_{i}x_{k}u_{j}^{-1}}\mapsto
V(k)_{ij},\quad t_{u_{i}\tau x_{k}u_{j}^{-1}}\mapsto\epsilon V(k)_{ij},$
where $V(k)_{ij}$ (resp. $\epsilon V(k)_{ij})$ is the matrix in
$M_{6}(\text{Rep\,}F_{c})$ whose entry at $(p,q)$ is $V(k)$ (resp. $\epsilon
V(k)$) if $(p,q)=(i,j)$ and is 0 otherwise.
Remark: Theorem 4 in [BO] implies that Lusztig’s conjecture on the
structure of $J_{c}$ is true. Since under the isomorphism $K_{F_{c}}(Y\times
Y)\simeq M_{6}(\text{Rep\,}F_{c})$, irreducible ${F_{c}}$-vector bundles on
$Y\times Y$ correspond to the $V(k)_{ij}$ and $\epsilon V(k)_{ij}$,
Theorem 3.2 provides a computable verification of Lusztig’s conjecture on the
structure of $J_{c}$.
We prove Theorem 3.2 by establishing three lemmas.
Lemma 3.3. Let $1\leq i,j,m,n\leq 6$ and $k,l$ be nonnegative integers. For
$z_{k}=x_{k}$ or $\tau x_{k}$, $z_{l}=x_{l}$ or $\tau x_{l}$, and
$z_{p}=x_{p}$ or $\tau x_{p}$, we have
(a) $\gamma_{u_{i}z_{k}u_{j}^{-1},u_{m}z_{l}u_{n}^{-1},z}=0\quad\text{if}\
j\neq m\ \text{or}\ z\neq u_{i}\tau^{a}z_{p}u^{-1}_{n},\ a=0,1,\ \text{for
some }p;$
(b)
$\gamma_{u_{i}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{i}z_{p}u_{n}^{-1}}=\gamma_{z_{k},z_{l},z_{p}},\quad\text{for
any nonnegative integer }p.$
Proof. Note that $z_{l}^{-1}=z_{l}$. If
$\gamma_{u_{i}z_{k}u_{j}^{-1},u_{m}z_{l}u_{n}^{-1},z}\neq 0$, then by 1.4(e)
we get
$u_{i}z_{k}u_{j}^{-1}\underset{L}{\sim}(u_{m}z_{l}u_{n}^{-1})^{-1}=u_{n}z_{l}u_{m}^{-1}$,
$u_{i}z_{k}u_{j}^{-1}\underset{R}{\sim}z,\
u_{m}z_{l}u_{n}^{-1}\underset{L}{\sim}z$. By (b) in Subsection 3.1 we see that
the first assertion is true.
Now we prove the second assertion. Let $*=\\{s_{1},s_{2}\\},\
\\#=\\{s_{2},s_{3}\\}$ and $\star=\\{s_{0},s_{2}\\}$. Then
(c)
$\Gamma_{2}=\Gamma_{1}^{*},\quad\Gamma_{4}=\Gamma_{2}^{\\#},\quad\Gamma_{5}=\Gamma_{4}^{*},\quad\Gamma_{6}=\Gamma_{4}^{\star}.$
Applying 1.4 (k) we see that (b) is true if none of $i,j,n$ is 3.
Now assume that $i=3$. By 1.5 (b) we get
$\gamma_{u_{3}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{3}z_{p}u_{n}^{-1}}=\gamma_{u_{2}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{2}z_{p}u_{n}^{-1}}+\gamma_{u_{2}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{4}z_{p}u_{n}^{-1}}.$
By Part (a) of the lemma, we have
$\gamma_{u_{2}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{4}z_{p}u_{n}^{-1}}=0.$
Then using (c) above and 1.4 (k) we get
$\gamma_{u_{3}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{3}z_{p}u_{n}^{-1}}=\gamma_{u_{2}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{2}z_{p}u_{n}^{-1}}=\gamma_{z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},z_{p}u_{n}^{-1}}.$
Similarly, if $n=3$, we have
$\gamma_{u_{i}z_{k}u_{j}^{-1},u_{j}z_{l}u_{3}^{-1},u_{i}z_{p}u_{3}^{-1}}=\gamma_{u_{i}z_{k}u_{j}^{-1},u_{j}z_{l}u_{2}^{-1},u_{i}z_{p}u_{2}^{-1}}=\gamma_{u_{i}z_{k}u_{j}^{-1},u_{j}z_{l},u_{i}z_{p}}.$
We have shown that for any $1\leq i,n\leq 6$ the following identity holds:
$\gamma_{u_{i}z_{k}u_{j}^{-1},u_{j}z_{l}u_{n}^{-1},u_{i}z_{p}u_{n}^{-1}}=\gamma_{z_{k}u_{j}^{-1},u_{j}z_{l},z_{p}}.$
Note that $z_{k}^{-1}=z_{k}$. By the above identity and 1.4 (g), we get
$\gamma_{z_{k}u_{j}^{-1},u_{j}z_{l},z_{p}}=\gamma_{u_{j}z_{l},z_{p}^{-1},u_{j}z_{k}^{-1}}=\gamma_{z_{l},z_{p},z_{k}}=\gamma_{z_{k},z_{l},z_{p}}.$
This proves assertion (b), and the lemma follows.∎
Lemma 3.4. For nonnegative integers $k,l,p$ and $a,b,c=0,1$, we have
$\gamma_{\tau^{a}x_{k},\tau^{b}x_{l},\tau^{a+b}x_{p}}=\gamma_{x_{k},x_{l},x_{p}},\quad\gamma_{\tau^{a}x_{k},\tau^{b}x_{l},\tau^{c}x_{p}}=0\
\text{if }\tau^{c}\neq\tau^{a+b}.$
Proof. The assertion follows from 1.4 (j).∎
Lemma 3.5. For nonnegative integers $k,l$ we have
$t_{x_{k}}t_{x_{l}}=\sum_{0\leq i\leq\min\\{k,l\\}}t_{x_{k+l-2i}}.$
Proof. If $k=0$ or $l=0$, the identity above is trivial since $x_{0}$ is a
distinguished involution.
Now assume that $k=1$ and $l\geq 1$. Let
$\zeta=q^{\frac{1}{2}}-q^{-\frac{1}{2}}$. By a simple computation we see
$\tilde{T}_{x_{1}}\tilde{T}_{x_{l}}=\zeta^{2}(\tilde{T}_{x_{l+1}}+\tilde{T}_{x_{l-1}}+\tilde{T}_{s_{0}s_{1}s_{3}(s_{2}s_{3}s_{2}s_{0}s_{1})^{l}}+\tilde{T}_{s_{0}s_{2}s_{3}s_{2}s_{1}(s_{2}s_{3}s_{2}s_{0}s_{1})^{l}})+\text{lower
degree terms}.$
Since $a(s_{0}s_{1}s_{3}(s_{2}s_{3}s_{2}s_{0}s_{1})^{l})\geq
a(s_{0}s_{1}s_{3})=3,\
a(s_{0}s_{2}s_{3}s_{2}s_{1}(s_{2}s_{3}s_{2}s_{0}s_{1})^{l})\geq
a(s_{2}s_{1}s_{2})=3$, we see that
$s_{0}s_{1}s_{3}(s_{2}s_{3}s_{2}s_{0}s_{1})^{l}$ and
$s_{0}s_{2}s_{3}s_{2}s_{1}(s_{2}s_{3}s_{2}s_{0}s_{1})^{l}$ are not in the two-
sided cell $c$. By 1.4 (l), we have
$t_{x_{1}}t_{x_{l}}=t_{x_{l+1}}+t_{x_{l-1}}.$
For $k\geq 2$, since $t_{x_{k}}=t_{x_{1}}t_{x_{k-1}}-t_{x_{k-2}}$, we can use
induction on $k$ to prove the lemma. This completes the proof.∎
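We note in passing (an observation for orientation, not needed in the proofs) that the identity in Lemma 3.5 is exactly the Clebsch–Gordan rule
$V(k)\otimes V(l)\simeq\bigoplus_{0\leq i\leq\min\\{k,l\\}}V(k+l-2i)$
in $\text{Rep\,}SL_{2}(\mathbb{C})$; for instance $t_{x_{1}}t_{x_{1}}=t_{x_{2}}+t_{x_{0}}$ matches $V(1)\otimes V(1)\simeq V(2)\oplus V(0)$. This is the multiplicative content behind the isomorphism in Theorem 3.2.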
Proof of Theorem 3.2. Combining Lemmas 3.3, 3.4 and 3.5 we see that Theorem
3.2 is true.
## 4\. The based ring of the two-sided cell containing $s_{1}s_{3}$
4.1. In this section $c$ stands for the two-sided cell of $W$ containing
$s_{1}s_{3}$. According to [D, Figure I, Theorem 6.4], $c$ has eight left
cells. We list the eight left cells and representative elements in the left
cells given in [D, Figure I]:
$\begin{array}[]{lllllllll}&\Gamma_{1},&13;&\Gamma_{2},&132;&\Gamma_{3},&1323;&\Gamma_{4},&1320;\\\
&\Gamma_{5},&03;&\Gamma_{6},&032;&\Gamma_{7},&0323;&\Gamma_{8},&0321.\end{array}$
The value of the $a$-function on $c$ is 2.
The corresponding unipotent class in $Sp_{6}(\mathbb{C})$ has Jordan block
sizes (33). A maximal reductive subgroup of the centralizer of an element in the
unipotent class is ${F_{c}}=SL_{2}(\mathbb{C})$. Let $V(k)$ be an irreducible
representation of ${F_{c}}=SL_{2}(\mathbb{C})$ with highest weight $k$. Up to
isomorphism, the irreducible representations of ${F_{c}}$ are $V(k),\
k=0,1,2,3,...$.
Let $x_{k}=(\tau s_{0}s_{3}s_{2})^{k}s_{1}s_{3},\ u_{1}=e,\ u_{2}=s_{2},\
u_{3}=s_{3}s_{2},\ u_{4}=s_{0}s_{2},\ u_{5}=\tau,\ u_{6}=\tau s_{2},\
u_{7}=\tau s_{3}s_{2},\ u_{8}=\tau s_{0}s_{2}.$
According to [D, Theorem 6.4], we have
(a) $c=\\{u_{i}x_{k}u_{j}^{-1}\,|\,1\leq i,j\leq 8,\ k=0,1,2,3,...\\}.$
(b) $\Gamma_{j}=\\{u_{i}x_{k}u_{j}^{-1}\,|\,1\leq i\leq 8,\ k=0,1,2,3,...\\},\
j=1,2,3,4,5,6,7,8.$
Let $Y=\\{1,\ 2,\ ...,\ 7,\ 8\\}$ and let ${F_{c}}$ act on $Y$ trivially. Then
$K_{F_{c}}(Y\times Y)$ is isomorphic to the $8\times 8$ matrix ring
$M_{8}(\text{Rep\,}{F_{c}})$, where $\text{Rep\,}{F_{c}}$ is the
representation ring of ${F_{c}}=SL_{2}(\mathbb{C})$.
The main result in this section is the following.
Theorem 4.2. Let $c$ be the two-sided cell of $W$ (the extended affine Weyl
group attached to $Sp_{6}(\mathbb{C})$) containing $s_{1}s_{3}$. Then the map
$\pi:c\to M_{8}(\text{Rep\,}F_{c}),\quad u_{i}x_{k}u_{j}^{-1}\mapsto
V(k)_{ij}$
induces a ring isomorphism
$\pi:J_{c}\to M_{8}(\text{Rep\,}F_{c}),\quad t_{u_{i}x_{k}u_{j}^{-1}}\mapsto
V(k)_{ij},$
where $V(k)_{ij}$ is the matrix in $M_{8}(\text{Rep\,}F_{c})$ whose entry at
$(p,q)$ is $V(k)$ if $(p,q)=(i,j)$ and is 0 otherwise.
Remark: Theorem 4 in [BO] implies that Lusztig’s conjecture on the
structure of $J_{c}$ is true. Since under the isomorphism $K_{F_{c}}(Y\times
Y)\simeq M_{8}(\text{Rep\,}F_{c})$, irreducible ${F_{c}}$-vector bundles on
$Y\times Y$ correspond to the $V(k)_{ij}$’s, Theorem 4.2 provides a computable
verification of Lusztig’s conjecture on the structure of $J_{c}$.
We prove Theorem 4.2 by establishing two lemmas.
Lemma 4.3. Let $1\leq i,j,m,n\leq 8$ and $k,l$ be nonnegative integers. Then
(a) $\gamma_{u_{i}x_{k}u_{j}^{-1},u_{m}x_{l}u_{n}^{-1},z}=0\quad\text{if}\
j\neq m\ \text{or}\ z\neq u_{i}x_{p}u^{-1}_{n}\ \text{for some }p;$
(b)
$\gamma_{u_{i}x_{k}u_{j}^{-1},u_{j}x_{l}u_{n}^{-1},u_{i}x_{p}u_{n}^{-1}}=\gamma_{x_{k},x_{l},x_{p}},\quad\text{for
any nonnegative integer }p.$
Proof. Note that $x_{l}^{-1}=x_{l}$. If
$\gamma_{u_{i}x_{k}u_{j}^{-1},u_{m}x_{l}u_{n}^{-1},z}\neq 0$, then by 1.4(e)
we get
$u_{i}x_{k}u_{j}^{-1}\underset{L}{\sim}(u_{m}x_{l}u_{n}^{-1})^{-1}=u_{n}x_{l}u_{m}^{-1}$,
$u_{i}x_{k}u_{j}^{-1}\underset{R}{\sim}z,$ and
$u_{m}x_{l}u_{n}^{-1}\underset{L}{\sim}z$. By (b) in Subsection 4.1 we see
that the first assertion is true.
Now we prove the second assertion. Let $*=\\{s_{1},s_{2}\\},\
\\#=\\{s_{2},s_{3}\\}$ and $\star=\\{s_{0},s_{2}\\}$. Then
(c)
$\Gamma_{2}=\Gamma_{1}^{*},\quad\Gamma_{3}=\Gamma_{1}^{\\#},\quad\Gamma_{4}=\Gamma_{2}^{\star},\quad\Gamma_{5}=\tau\Gamma_{1}\tau,\
\Gamma_{6}=\tau\Gamma_{2}\tau,\ \Gamma_{7}=\tau\Gamma_{3}\tau,\
\Gamma_{8}=\tau\Gamma_{4}\tau.$
Applying 1.4 (j) and 1.4 (k) (repeatedly if necessary) we get the following
identity.
$\gamma_{u_{i}x_{k}u_{j}^{-1},u_{j}x_{l}u_{n}^{-1},u_{i}x_{p}u_{n}^{-1}}=\gamma_{x_{k}u_{j}^{-1},u_{j}x_{l},x_{p}}.$
Note that $\tau^{2}=e$. Again using 1.4 (j) and 1.4 (k) (repeatedly if
necessary) we get the following identity.
$\gamma_{x_{k}u_{j}^{-1},u_{j}x_{l},x_{p}}=\gamma_{x_{k},x_{l},x_{p}}.$
This proves Part (b), and the lemma follows. ∎
Lemma 4.4. For nonnegative integers $k,l$, we have
$t_{x_{k}}t_{x_{l}}=\sum_{0\leq i\leq\min\\{k,l\\}}t_{x_{k+l-2i}}.$
Proof. If $k=0$ or $l=0$, the identity above is trivial since $x_{0}$ is a
distinguished involution.
Now assume that $k=1$ and $l\geq 1$. Put
$\xi=q^{\frac{1}{2}}+q^{-\frac{1}{2}}$. Let $H^{<13}$ be the
$\mathcal{A}$-submodule of $H$ spanned by all $C_{w}$ with $a(w)\geq 3$. By
Subsection 1.2 and 1.4 (b) we know that $H^{<13}$ is a two-sided ideal of $H$.
Before continuing, we make a convention: we shall use the symbol $\Box$ for
any element in the two-sided ideal $H^{<13}$ of $H$. Then $\Box+\Box=\Box$ and
$h\Box=\Box$ for any $h\in H$.
First we have
$C_{x_{1}}=C_{\tau s_{3}s_{0}s_{2}s_{1}s_{3}}=C_{\tau
s_{3}}C_{s_{0}}C_{s_{2}}C_{s_{1}}C_{s_{3}}-C_{\tau s_{0}s_{1}s_{3}},\ \
\text{and}\ C_{\tau s_{0}s_{1}s_{3}}\in H^{<13}.$
Note that ${L}(x_{k})=\\{s_{1},s_{3}\\}$. Hence
(4) $\displaystyle C_{x_{1}}C_{x_{l}}=C_{\tau
s_{3}}C_{s_{0}}C_{s_{2}}C_{s_{1}}C_{s_{3}}C_{x_{l}}+\Box=\xi^{2}C_{\tau
s_{3}}C_{s_{0}}C_{s_{2}}C_{x_{l}}+\Box\in\xi^{2}C_{\tau
s_{3}}C_{s_{0}}C_{s_{2}}C_{x_{l}}+H^{<13}.$
We compute $C_{\tau s_{3}}C_{s_{0}}C_{s_{2}}C_{x_{l}}$ step by step.
Step 1. Compute $C_{s_{2}}C_{x_{l}}.$ We have
$C_{s_{2}}C_{x_{l}}=C_{s_{2}x_{l}}+\sum\limits_{\begin{subarray}{c}z\prec
x_{l}\\\ s_{2}z<z\end{subarray}}\mu(z,x_{l})C_{z}.$
Note that $L(x_{l})=\\{s_{1},s_{3}\\}$. Assume that $z\prec x_{l}$ and
$s_{2}z\leq z$. If $s_{1}z\leq z$, then $\\{s_{1},s_{2}\\}\subset L(z)$ and
$a(z)\geq a(s_{1}s_{2}s_{1})=3$. In this case, we have $C_{z}\in H^{<13}$. If
$s_{1}z\geq z$, by 1.1(c) we must have $s_{1}z=x_{l}$. Then $z=\tau
s_{3}s_{2}x_{l-1}$. This contradicts $s_{2}z\leq z$. Therefore we have
(5) $\displaystyle C_{s_{2}}C_{x_{l}}=C_{s_{2}x_{l}}+\Box\in
C_{s_{2}x_{l}}+H^{<13}.$
Step 2. Compute $C_{s_{0}}C_{s_{2}x_{l}}.$ We have
$C_{s_{0}}C_{s_{2}x_{l}}=C_{s_{0}s_{2}x_{l}}+\sum\limits_{\begin{subarray}{c}z\prec
s_{2}x_{l}\\\ s_{0}z<z\end{subarray}}\mu(z,s_{2}x_{l})C_{z}.$
Note that $L(s_{2}x_{l})=\\{s_{2}\\}$. Assume that $z\prec s_{2}x_{l}$ and
$s_{0}z\leq z$. If $s_{2}z\leq z$, then $\\{s_{0},s_{2}\\}\subset{L}(z)$ and
$a(z)\geq a(s_{0}s_{2}s_{0})=3$. In this case, we have $C_{z}\in H^{<13}$. If
$s_{2}z\geq z$, by 1.1(c) we must have $z=x_{l}$. This contradicts $s_{0}z\leq
z$. Therefore we have
(6) $\displaystyle C_{s_{0}}C_{s_{2}x_{l}}=C_{s_{0}s_{2}x_{l}}+\Box\in
C_{s_{0}s_{2}x_{l}}+H^{<13}.$
Step 3. Compute $C_{\tau s_{3}}C_{s_{0}s_{2}x_{l}}.$ We have
$C_{\tau
s_{3}}C_{s_{0}s_{2}x_{l}}=C_{x_{l+1}}+\sum\limits_{\begin{subarray}{c}z\prec
s_{0}s_{2}x_{l}\\\ s_{3}z<z\end{subarray}}\mu(z,s_{0}s_{2}x_{l})C_{\tau z}.$
Assume that $z\prec s_{0}s_{2}x_{l}$ and $s_{3}z\leq z$. Using 1.4 (b), 1.4
(c) and 1.4 (d), we see that $a(\tau z)\geq 2$, and if $a(\tau z)=2$ then
$x_{l}\underset{L}{\sim}\tau z\underset{R}{\sim}x_{1}$. We are only concerned
with those $C_{\tau z}$ in above summation with $a(\tau z)=2$. Then $\tau
z=x_{m}$ for some $m<l$ and $L(z)=\\{s_{0},s_{3}\\}$. Note that
$L(s_{0}s_{2}x_{l})=\\{s_{0}\\}$. We then have
$\mu(z,s_{0}s_{2}x_{l})={\tilde{\mu}({}^{\star}z,{}^{\star}(s_{0}s_{2}x_{l}))}=\tilde{\mu}(s_{2}z,s_{2}x_{l})=\tilde{\mu}(^{*}(s_{2}z),^{*}(s_{2}x_{l}))=\tilde{\mu}(s_{1}s_{2}z,x_{l})$,
where $\star=\\{s_{0},s_{2}\\},\ *=\\{s_{1},s_{2}\\}$. Since $m<l$, we have
$\tilde{\mu}(s_{1}s_{2}z,x_{l})=\mu(s_{1}s_{2}z,x_{l})$. Noting that
$s_{3}s_{1}s_{2}z=s_{3}s_{1}s_{2}\tau x_{m}\geq s_{1}s_{2}\tau x_{m}$ and
$s_{3}x_{l}\leq x_{l}$, by 1.1(c) we see $s_{3}s_{1}s_{2}z=x_{l}$, which
implies that $\tau z=x_{l-1}$.
In conclusion, if $z\prec s_{0}s_{2}x_{l}$ and $s_{3}z\leq z$, then either
$C_{z}\in H^{<13}$ or $z=\tau x_{l-1}$. Hence we have
(7) $\displaystyle C_{\tau
s_{3}}C_{s_{0}s_{2}x_{l}}=C_{x_{l+1}}+C_{x_{l-1}}+\Box.$
Combining formulas (4)-(7) we get
$C_{x_{1}}C_{x_{l}}=\xi^{2}(C_{x_{l+1}}+C_{x_{l-1}})+\Box\in\xi^{2}(C_{x_{l+1}}+C_{x_{l-1}})+H^{<13}.$
Therefore we have $t_{x_{1}}t_{x_{l}}=t_{x_{l+1}}+t_{x_{l-1}}.$
For $k\geq 2$, since $t_{x_{k}}=t_{x_{1}}t_{x_{k-1}}-t_{x_{k-2}}$, we can use
induction on $k$ to prove the lemma. This completes the proof. ∎
Proof of Theorem 4.2. Combining Lemmas 4.3 and 4.4 we see that Theorem 4.2 is
true.
## 5\. The based ring of the two-sided cell containing $s_{1}s_{2}s_{1}$
5.1. In this section we consider the two-sided cell in $W$ containing
$s_{1}s_{2}s_{1}$. In [QX] we showed that Lusztig’s conjecture on the
structure of the based ring of the two-sided cell needs modification. In this
section we give a description of the based ring. For consistency, we keep the
notations in [QX] for the two-sided cell of $W$ containing $s_{1}s_{2}s_{1}$.
In particular, we write $D$ for the two-sided cell of $W$ containing
$s_{1}s_{2}s_{1}$.
According to [D, Figure I, Theorem 6.4], we have the following result.
(a) There are 12 left cells in the two-sided cell $D$; the left cells and a
representative of each are:
$\begin{array}[]{lllllllll}&D_{013},&013;&D_{2},&0132;&D_{02},&01320;&D_{12},&01321;\\\
&&&&&&&&\\\ &D_{3},&01323;&D_{03},&013203;&D_{01},&013201;&D_{13},&013213;\\\
&&&&&&&&\\\
&D^{\prime}_{2},&0132032;&\widehat{D^{\prime}_{2}},&0132132;&D_{1},&01320321;&D_{0},&01321320.\end{array}$
The value of the $a$-function on $D$ is 3.
Let $\Gamma$ and $\Gamma^{\prime}$ be two left cells of $W$. If
$\Gamma^{\prime}=\Gamma^{*}$ for some $*=\\{s,t\\}$ (see Subsection 1.3 for the
definition of the $*$-operation), then we write $\Gamma\
\overset{\\{s,t\\}}{\text{------}}\ \Gamma^{\prime}$. The following result is
easy to verify.
Lemma 5.2. Keep the notations as above. Then we have
$D_{3}\ \overset{\\{s_{2},s_{3}\\}}{\text{------}}\ D_{013}\
\overset{\\{s_{1},s_{2}\\}}{\text{------}}\ D_{2}.$
$\displaystyle D_{0}\ \overset{\\{s_{0},s_{2}\\}}{\text{------}}\ \widehat{D^{\prime}_{2}}\
\overset{\\{s_{2},s_{3}\\}}{\text{------}}\ D_{12}\
\overset{\\{s_{0},s_{2}\\}}{\text{------}}\ D_{01}\
\overset{\\{s_{1},s_{2}\\}}{\text{------}}\ D_{02}\
\overset{\\{s_{2},s_{3}\\}}{\text{------}}\ D^{\prime}_{2}\
\overset{\\{s_{1},s_{2}\\}}{\text{------}}\ D_{1},$
together with the two further edges
$\widehat{D^{\prime}_{2}}\ \overset{\\{s_{1},s_{2}\\}}{\text{------}}\ D_{13},\qquad D^{\prime}_{2}\ \overset{\\{s_{0},s_{2}\\}}{\text{------}}\ D_{03}.$
5.3. Let
$\begin{array}[]{lllllll}u_{k}=(s_{0}s_{1}s_{3}s_{2})^{k}s_{0}s_{1}s_{3},&&\\\
x_{k}=(s_{1}s_{2}s_{3}s_{0})^{k}s_{1}s_{2}s_{1},&\ x^{\prime}_{0}=\tau
s_{2}s_{0}s_{1}s_{2}s_{1},&\ x^{\prime}_{k+1}=\tau
s_{0}s_{2}s_{3}s_{0}x_{k},\\\ p_{1}=e,&p_{2}=s_{2},&p_{3}=s_{3}s_{2},\\\
p_{4}=s_{1}s_{2},&p_{5}=s_{0}s_{2},&p_{6}=s_{0}s_{1}s_{2},\\\
p_{7}=s_{3}s_{1}s_{2},&p_{8}=s_{3}s_{0}s_{2},&p_{9}=s_{2}s_{3}s_{1}s_{2},\\\
p_{10}=s_{2}s_{3}s_{0}s_{2},&p_{11}=s_{0}s_{2}s_{3}s_{1}s_{2},&p_{12}=s_{1}s_{2}s_{3}s_{0}s_{2};\\\
q_{4}=p_{1}=e,&q_{5}=\tau,&q_{6}=s_{0},\\\
q_{7}=s_{3},&q_{8}=s_{3}\tau,&q_{9}=s_{2}s_{3},\\\
q_{10}=s_{2}s_{3}\tau,&q_{11}=s_{0}s_{2}s_{3},&q_{12}=s_{1}s_{2}s_{3}\tau.\end{array}$
According to [D, Theorem 6.4], we have
(a) The two-sided cell $D$ consists of the following elements:
$p_{i}u_{k}p_{j}^{-1},\ p_{i}\tau u_{k}p_{j}^{-1},\ q_{l}x_{0}q_{m}^{-1},\
q_{l}x_{0}q_{6}^{-1},\ q_{l}x_{0}q_{6}^{-1}\tau,\
q_{l}x^{\prime}_{0}q_{m}^{-1},$
where $1\leq i,j\leq 12$, $4\leq l\leq 12$, $4\leq m\leq 12$ with $m\neq 6$, and $k\geq 0.$
For convenience, we number the left cells in $D$ as follows:
$\begin{array}[]{lllllll}\Gamma_{1}=D_{013},&\ \Gamma_{2}=D_{2},&\
\Gamma_{3}=D_{3},&\ \Gamma_{4}=D_{12},&\ \Gamma_{5}=D_{02},&\
\Gamma_{6}=D_{01},\\\ \Gamma_{7}=D_{13},&\ \Gamma_{8}=D_{03},&\
\Gamma_{9}=\widehat{D^{\prime}_{2}},&\ \Gamma_{10}=D^{\prime}_{2},&\
\Gamma_{11}=D_{0},&\ \Gamma_{12}=D_{1}.\end{array}$
Then (loc. cit.) we have
(b1) For $j=1,\ 2,\ 3$, the left cell $\Gamma_{j}$ consists of the following
elements:
$p_{i}u_{k}p_{j}^{-1},\quad p_{i}\tau u_{k}p_{j}^{-1},\qquad 1\leq i\leq 12,\
k\geq 0.$
(b2) For $j=4,\ 5,\ 7,\ 8,\ ...,\ 12$, the left cell $\Gamma_{j}$ consists of
the following elements:
$p_{i}u_{k}p_{j}^{-1},\quad p_{i}\tau u_{k}p_{j}^{-1},\quad
q_{l}x_{0}q_{j}^{-1},\quad q_{l}x^{\prime}_{0}q_{j}^{-1},\qquad{1\leq i\leq
12},\ 4\leq l\leq 12,\ k\geq 0.$
Note that $p_{4}u_{k}p_{4}^{-1}=x_{k+1},\ p_{4}\tau
u_{k}p_{4}^{-1}=x^{\prime}_{k+1}.$
(b3) The left cell $\Gamma_{6}$ consists of the following elements:
$p_{i}u_{k}p_{6}^{-1},\quad p_{i}\tau u_{k}p_{6}^{-1},\quad
q_{l}x_{0}q_{6}^{-1},\quad q_{l}x_{0}q_{6}^{-1}\tau,\qquad{1\leq i\leq 12},\
4\leq l\leq 12,\ k\geq 0.$
5.4. For the two-sided cell $D$, the corresponding unipotent class in
$Sp_{6}(\mathbb{C})$ has Jordan block sizes (222). A maximal reductive subgroup
of the centralizer of an element in the unipotent class is
${F_{c}}=O_{3}(\mathbb{C})=\mathbb{Z}/2\mathbb{Z}\times SO_{3}(\mathbb{C})$.
Let $\tilde{F}_{c}=\mathbb{Z}/2\mathbb{Z}\times SL_{2}(\mathbb{C})$ be the
simply connected covering of $F_{c}$.
Let $Y$ be a set of 12 elements and let $\tilde{F}_{c}$ act on $Y$ trivially.
Then $K_{\tilde{F}_{c}}(Y\times Y)$ is isomorphic to the $12\times 12$ matrix
ring $M_{12}(\text{Rep\,}\tilde{F}_{c})$, where $\text{Rep\,}\tilde{F}_{c}$ is
the representation ring of $\tilde{F}_{c}$.
For a nonnegative integer $k$, let $V(k)$ be an irreducible representation of
$SL_{2}(\mathbb{C})$ with highest weight $k$. Let $\epsilon$ be the sign
representation of $\mathbb{Z}/2\mathbb{Z}$. Regarding $V(k)$ and $\epsilon$ as
representations of $\tilde{F}_{c}$ naturally, then, up to isomorphism, the
irreducible representations of $\tilde{F}_{c}$ are $V(k),\ \epsilon V(k),\
k=0,\ 1,\ 2,\ ....$ When $k$ is even, $V(k)$ and $\epsilon V(k)$ are also
irreducible representations of $F_{c}$.
Let $V(k)_{ij}\in M_{12}(\text{Rep\,}\tilde{F}_{c})$ be the matrix whose entry
at $(i,j)$ is $V(k)$ and is 0 elsewhere. Similarly we define $\epsilon
V(k)_{ij}$. The main result in this section is the following theorem.
Theorem 5.5. There is a natural injection
$\displaystyle\pi:c\hookrightarrow$ $\displaystyle M_{12}(\text{Rep\,}\
\tilde{F}_{c}),$ $\displaystyle p_{i}u_{k}p_{j}^{-1}\longmapsto$
$\displaystyle\begin{cases}{V(2k)_{ij},}&{1\leq i,j\leq 3,}\\\
{V(2k+2)_{ij},}&{4\leq i,j\leq 12,}\\\
{V(2k+1)_{ij},}&{\text{otherwise};}\end{cases}$ $\displaystyle p_{i}\tau
u_{k}p_{j}^{-1}{\longmapsto}$ $\displaystyle\begin{cases}{\epsilon
V(2k)_{ij},}&{1\leq i,j\leq 3,}\\\ {\epsilon V(2k+2)_{ij},}&{4\leq i,j\leq
12,}\\\ {\epsilon V(2k+1)_{ij},}&{\text{otherwise};}\end{cases}$
$\displaystyle y{\longmapsto}$ $\displaystyle V(0)_{lm},\quad\text{if $y$ can
be obtained from $x_{0}$ by a sequence of left and/or right star
operations,}$ $\displaystyle y{\longmapsto}$ $\displaystyle\epsilon
V(0)_{lm},\quad\text{if $y$ can be obtained from $x^{\prime}_{0}$ by a
sequence of left and/or right star operations,}$
where $y=q_{l}x_{0}q_{m}^{-1}$ or $q_{l}x^{\prime}_{0}q_{m}^{-1}\ (m\neq 6)$
or $q_{l}x_{0}q_{6}^{-1}$ or $q_{l}x_{0}q_{6}^{-1}\tau$, $4\leq l,m\leq 12$.
The injection $\pi$ induces an injective ring homomorphism
$\Pi:J_{c}\rightarrow M_{12}(\text{Rep\,}\tilde{F}_{c})\simeq
K_{\tilde{F}_{c}}(Y\times Y),\quad t_{w}\mapsto\pi(w),$
where $Y$ is a set of 12 elements with trivial $\tilde{F}_{c}$ action.
Proof: We need to prove that
(8) $\Pi(t_{w}t_{u})=\pi(w)\cdot\pi(u),\quad\text{for all }w,u\in D.$
Since $D$ is the union of all $\Gamma_{i},\ 1\leq i\leq 12$ and $D=D^{-1}$, we
know that $D$ is the union of all $\Gamma_{i}^{-1}\cap\Gamma_{j},\ 1\leq
i,j\leq 12.$
Assume that $w\in\Gamma_{i}^{-1}\cap\Gamma_{j}$ and
$u\in\Gamma_{k}^{-1}\cap\Gamma_{l}$. Using 1.4(j), 1.4(k) and Lemma 5.2, we
know that it suffices to prove formula (8) for
$w\in\Gamma_{i}^{-1}\cap\Gamma_{j}$, $u\in\Gamma_{k}^{-1}\cap\Gamma_{l}$,
$i,j,k,l\in\\{1,4\\}$. When $j\neq k$, by 1.4 (e) we see that $t_{w}t_{u}=0$,
hence formula (8) holds in this case. When $i=j=k=l$, according to Theorem 3.1
in [QX], we know that formula (8) holds in this case. To complete the proof of
the theorem we need to prove formula (8) for the following cases:
(i) $i=1,\ j=k=1,\ l=4$;
(ii) $i=1,\ j=k=4,\ l=4;$
(iii) $i=1,\ j=k=4,\ l=1;$
(iv) $i=4,\ j=k=1,\ l=1;$
(v) $i=4,\ j=k=1,\ l=4;$
(vi) $i=4,\ j=k=4,\ l=1.$
Keep the notations in the above paragraph. Applying 1.4 (g) and 1.4 (i) we see
that to prove formula (8) we only need to prove it for the following two
cases: ($\clubsuit$) $w\in\Gamma_{4}^{-1}\cap\Gamma_{1}$ and
$u\in\Gamma_{1}^{-1}\cap\Gamma_{1}$; ($\spadesuit$)
$w\in\Gamma_{4}^{-1}\cap\Gamma_{1}$ and $u\in\Gamma_{1}^{-1}\cap\Gamma_{4}$.
Lemma $\clubsuit$: We have
(a) $\Gamma_{4}^{-1}\cap\Gamma_{1}=\\{s_{1}s_{2}u_{k},\ s_{1}s_{2}\tau
u_{k}\,|\,k\geq 0\\}$ and $\Gamma_{1}^{-1}\cap\Gamma_{1}=\\{u_{k},\ \tau
u_{k}\,|\,k\geq 0\\}$.
(b) $\displaystyle t_{s_{1}s_{2}u_{k}}t_{u_{l}}=t_{s_{1}s_{2}\tau
u_{k}}t_{\tau u_{l}}=\sum_{0\leq
i\leq\min\\{2k+1,2l\\}}t_{s_{1}s_{2}u_{k+l-i}}.$
(c) $\displaystyle t_{s_{1}s_{2}\tau
u_{k}}t_{u_{l}}=t_{s_{1}s_{2}u_{k}}t_{\tau u_{l}}=\sum_{0\leq
i\leq\min\\{2k+1,2l\\}}t_{s_{1}s_{2}\tau u_{k+l-i}}.$
###### Proof.
Part (a) is obtained from 5.3 (b1) and 5.3 (b2).
Now we prove (b). Since $u_{0}$ is a distinguished involution, (b) is true for
$l=0$.
Assume that $l>0$. First we will prove
(9) $\displaystyle
t_{s_{1}s_{2}u_{0}}t_{u_{l}}=t_{s_{1}s_{2}u_{l}}+t_{s_{1}s_{2}u_{l-1}},\quad{\text{for
any}}\ l>0.$
Let $\xi=q^{\frac{1}{2}}+q^{-\frac{1}{2}}$. By a simple computation we get
(10) $\displaystyle
C_{s_{1}s_{2}u_{0}}=(C_{s_{1}}C_{s_{2}}-1)C_{s_{0}s_{1}s_{3}}$ (11)
$\displaystyle C_{s_{0}s_{1}s_{3}}C_{u_{l}}=\xi^{3}C_{u_{l}}.$
Hence
(12) $\displaystyle
C_{s_{1}s_{2}u_{0}}C_{u_{l}}=\xi^{3}(C_{s_{1}}C_{s_{2}}-1)C_{u_{l}}.$
Before continuing, we make a convention: we shall use the symbol $\Box$ for
any element in the two-sided ideal $H^{<013}$ of $H$ spanned by all $C_{w}$
with $a(w)>3$. Then $\Box+\Box=\Box$ and $h\Box=\Box$ for any $h\in H$.
In [QX, subsection 3.3, Step 1], we have shown the following identity:
(13) $\displaystyle C_{s_{2}}C_{u_{l}}=C_{s_{2}u_{l}}+\Box\in
C_{s_{2}u_{l}}+H^{<013}.$
Now we compute $C_{s_{1}}C_{s_{2}u_{l}}$. By formula (1) in 1.1 (e), we have
$C_{s_{1}}C_{s_{2}u_{l}}=C_{s_{1}s_{2}u_{l}}+\sum\limits_{\begin{subarray}{c}y\prec
s_{2}u_{l}\\\ s_{1}y<y\end{subarray}}\mu(y,s_{2}u_{l})C_{y}.$
Note that ${L}(s_{2}u_{l})=\\{s_{2}\\}$. First we have
$\mu(u_{l},s_{2}u_{l})=1$ and $s_{1}u_{l}<u_{l}$. By 1.4 (c) and 1.4 (d), if
$C_{y}$ appears in the above summation with nonzero coefficient, $y\neq u_{l}$
and $C_{y}\not\in H^{<013}$, then $y\in\Gamma_{4}^{-1}\cap\Gamma_{1}$.
Assume $y\prec s_{2}u_{l}$, $s_{1}y<y$ and
$y\in\Gamma_{4}^{-1}\cap\Gamma_{1}$. By (a) we must have
$y{=s_{1}s_{2}u_{k}}=s_{1}s_{2}(s_{0}s_{1}s_{3}s_{2})^{k}s_{0}s_{1}s_{3}$ for
some nonnegative integer $k\leq l-1$. Since $s_{2}s_{0}y\geq s_{0}y\geq y$, by
1.3 (d) we get $\mu(y,s_{2}u_{l})=\mu(s_{0}y,u_{l})$. Now $s_{3}u_{l}\leq
u_{l}$ and $s_{3}s_{0}y\geq s_{0}y$, by 1.1(c) we get $s_{3}s_{0}y=u_{l}$.
Hence $y=s_{1}s_{2}u_{l-1}$.
We have shown
(14) $\displaystyle
C_{s_{1}}C_{s_{2}u_{l}}=C_{s_{1}s_{2}u_{l}}+C_{u_{l}}+C_{s_{1}s_{2}u_{l-1}}+\Box.$
Combining formulas (12) (13) and (14), we get formula (9).
Recall the following formula in [QX, 3.3]:
(15) $\displaystyle t_{u_{k}}t_{u_{l}}=\sum_{0\leq
i\leq\min\\{2k,2l\\}}t_{u_{k+l-i}}.$
Now we employ formulas (9) and (15) to prove the identity in (b). We use
induction on $k$. When $k=0$, it is just formula (9). Assume that the formula
in (b) is true for all nonnegative integers less than $k$. We have
$\displaystyle t_{s_{1}s_{2}u_{k}}t_{u_{l}}=$
$\displaystyle(t_{s_{1}s_{2}u_{0}}t_{u_{k}}-t_{s_{1}s_{2}u_{k-1}})t_{u_{l}}$
$\displaystyle=$ $\displaystyle t_{s_{1}s_{2}u_{0}}\cdot\sum_{0\leq
i\leq\min\\{2k,2l\\}}t_{u_{k+l-i}}-t_{s_{1}s_{2}u_{k-1}}t_{u_{l}}$
$\displaystyle=$ $\displaystyle\sum_{0\leq
i\leq\min\\{2k,2l\\}}t_{s_{1}s_{2}u_{k+l-i}}+\sum_{0\leq
i\leq\min\\{2k,2l\\}}t_{s_{1}s_{2}u_{k+l-i-1}}-\sum_{0\leq
j\leq\min\\{2k-1,2l\\}}t_{s_{1}s_{2}u_{k+l-1-j}}$ $\displaystyle=$
$\displaystyle\sum_{0\leq
i\leq\min\\{2k,2l\\}}t_{s_{1}s_{2}u_{k+l-i}}+\sum_{1\leq
i\leq\min\\{2k+1,2l+1\\}}t_{s_{1}s_{2}u_{k+l-i}}-\sum_{1\leq
j\leq\min\\{2k,2l+1\\}}t_{s_{1}s_{2}u_{k+l-j}}$ $\displaystyle=$
$\displaystyle\sum_{0\leq i\leq\min\\{2k+1,2l\\}}t_{s_{1}s_{2}u_{k+l-i}}.$
Since $\tau u_{k}=u_{k}\tau$, by 1.4 (j) we have $t_{s_{1}s_{2}\tau
u_{k}}t_{\tau u_{l}}=t_{s_{1}s_{2}u_{k}}t_{u_{l}}$. Part (b) is proved.
Since $\tau u_{k}=u_{k}\tau$ and $\tau u_{k+l-i}=u_{k+l-i}\tau$, using 1.4 (j)
we see that Part (c) follows from Part (b).
The proof is completed. ∎
Lemma $\spadesuit$. (a) For $k\geq 0$, we have
$s_{1}s_{2}u_{k}s_{2}s_{1}=x_{k+1}$ and $s_{1}s_{2}\tau
u_{k}s_{2}s_{1}=x^{\prime}_{k+1}$. Moreover,
$\Gamma_{4}^{-1}\cap\Gamma_{4}=\\{x_{k},\ x^{\prime}_{k}\,|\,k\geq 0\\}$.
For nonnegative integers $k,l$ we have
(b) $\displaystyle t_{s_{1}s_{2}u_{k}}t_{u_{l}s_{2}s_{1}}=t_{s_{1}s_{2}\tau
u_{k}}t_{u_{l}\tau s_{2}s_{1}}=\sum_{0\leq
i\leq\min\\{2k+1,2l+1\\}}t_{x_{k+l+1-i}}.$
(c) $\displaystyle t_{s_{1}s_{2}\tau
u_{k}}t_{u_{l}s_{2}s_{1}}=t_{s_{1}s_{2}u_{k}}t_{u_{l}\tau
s_{2}s_{1}}=\sum_{0\leq i\leq\min\\{2k+1,2l+1\\}}t_{x^{\prime}_{k+l+1-i}}.$
###### Proof.
Part (a) follows from the discussion in Subsection 5.3.
Now we prove Part (b). First we prove
(16) $\displaystyle
t_{s_{1}s_{2}u_{0}}t_{u_{l}s_{2}s_{1}}=t_{x_{l+1}}+t_{x_{l}}.$
In [QX, Subsection 4.2], it is shown
$t_{s_{1}s_{2}u_{0}}t_{u_{0}s_{2}s_{1}}=t_{x_{1}}+t_{x_{0}}$. Now assume that
$l\geq 1$. As before, $\xi=q^{\frac{1}{2}}+q^{-\frac{1}{2}}$. Since
$C_{s_{0}s_{1}s_{3}}C_{u_{l}s_{2}s_{1}}=\xi^{3}C_{u_{l}s_{2}s_{1}}$, using
formula (10) we get
(17) $\displaystyle
C_{s_{1}s_{2}u_{0}}C_{u_{l}s_{2}s_{1}}=\xi^{3}(C_{s_{1}}C_{s_{2}}-1)C_{u_{l}s_{2}s_{1}}.$
We compute the right hand side of equality (17) step by step. As in the proof
of Lemma $\clubsuit$, we shall use the symbol $\Box$ for any element in the
two-sided ideal $H^{<013}$ of $H$ spanned by all $C_{w}$ with $a(w)>3$.
Step 1: Compute $C_{s_{2}}C_{u_{l}s_{2}s_{1}}$.
By formula (1) in 1.1 (e), we have
$C_{s_{2}}C_{u_{l}s_{2}s_{1}}=C_{s_{2}u_{l}s_{2}s_{1}}+\sum\limits_{\begin{subarray}{c}y\prec
u_{l}s_{2}s_{1}\\\ s_{2}y<y\end{subarray}}\mu(y,u_{l}s_{2}s_{1})C_{y}.$ Note
that ${L}(u_{l}s_{2}s_{1})=\\{s_{0},s_{1},s_{3}\\}$.
Assume $y\prec u_{l}s_{2}s_{1}$ and $s_{2}y<y$.
* •
If $s_{0}y>y$, then by 1.1(c) we get $s_{0}y=u_{l}s_{2}s_{1}$. This
contradicts the assumption, so $s_{0}y>y$ cannot occur.
* •
If $s_{1}y>y$, then by 1.1(c) we get $s_{1}y=u_{l}s_{2}s_{1}$. This
contradicts the assumption, so $s_{1}y>y$ cannot occur.
* •
If $s_{0}y<y,s_{1}y<y$ and $s_{2}y<y$, then $a(y)\geq a(w_{012})=6$. So
$C_{y}\in H^{<013}.$
Therefore,
(18) $\displaystyle
C_{s_{2}}C_{u_{l}s_{2}s_{1}}=C_{s_{2}u_{l}s_{2}s_{1}}+\Box.$
Step 2: Similar to the proof for formula (14), we have
(19) $\displaystyle
C_{s_{1}}C_{s_{2}u_{l}s_{2}s_{1}}=C_{s_{1}s_{2}u_{l}s_{2}s_{1}}+C_{u_{l}s_{2}s_{1}}+C_{s_{1}s_{2}u_{l-1}s_{2}s_{1}}+\Box.$
Note $s_{1}s_{2}u_{l}s_{2}s_{1}=x_{l+1}$. Combining formulas (17)-(19), we get
(16).
Now we can prove part (b) using induction on $k$. By 1.4 (j), we know
$t_{s_{1}s_{2}u_{k}}t_{u_{l}s_{2}s_{1}}=t_{s_{1}s_{2}\tau u_{k}}t_{u_{l}\tau
s_{2}s_{1}}$. Thus for $k=0$, Part (b) is equivalent to formula (16), which is
true. Now assume that $k\geq 1$ and Part (b) is true for $k-1$. Using Lemma
$\clubsuit$ and 1.4(i), induction hypothesis and formula (16), we get
$\displaystyle t_{s_{1}s_{2}u_{k}}t_{u_{l}s_{2}s_{1}}=$
$\displaystyle(t_{s_{1}s_{2}u_{0}}t_{u_{k}}-t_{s_{1}s_{2}u_{k-1}})t_{u_{l}s_{2}s_{1}}$
$\displaystyle=$ $\displaystyle{t_{s_{1}s_{2}u_{0}}\cdot}\sum_{0\leq
i\leq\min\\{2l+1,2k\\}}t_{u_{k+l-i}s_{2}s_{1}}-t_{s_{1}s_{2}u_{k-1}}t_{u_{l}s_{2}s_{1}}$
$\displaystyle=$ $\displaystyle\sum_{0\leq
i\leq\min\\{2l+1,2k\\}}(t_{x_{k+l+1-i}}+t_{x_{k+l-i}})-\sum_{0\leq
i\leq\min\\{2l+1,2k-1\\}}t_{x_{k+l-i}}$ $\displaystyle=$
$\displaystyle\sum_{0\leq i\leq\min\\{2l+1,2k+1\\}}t_{x_{k+l+1-i}}.$
This completes the proof for Part (b).
The proof of Part (c) is similar. First, it is easy to check that
$C_{s_{0}s_{2}u_{0}}C_{u_{0}s_{2}s_{1}}=\xi^{3}(C_{\tau
x^{\prime}_{1}}+C_{\tau x^{\prime}_{0}}),$
which implies
$t_{s_{1}s_{2}\tau
u_{0}}t_{u_{0}s_{2}s_{1}}=t_{x^{\prime}_{1}}+t_{x^{\prime}_{0}}.$
Further, we prove that
$t_{s_{1}s_{2}\tau
u_{0}}t_{u_{l}s_{2}s_{1}}=t_{x^{\prime}_{l+1}}+t_{x^{\prime}_{l}}.$
Then, using induction on $k$ as in the proof of Part (b), we obtain Part (c).
This completes the proof of Lemma $\spadesuit$. ∎
This completes the proof of Theorem 5.5.
5.6. Motivated by Theorem 5.5, the discussion of the cocenter of $J$ in
[BDD, Section 5], and some other evidence, we suggest a modification of
Lusztig’s conjecture on the structure of $J_{c}$, stated for an arbitrary
connected reductive group over $\mathbb{C}$.
Let $W$ be the extended affine Weyl group attached to a connected reductive
group over $\mathbb{C}$ (see Subsection 1.1) and let $c$ be a two-sided cell
of $W$. Let $F_{c}$ be a maximal reductive subgroup of the centralizer of an
element in the corresponding unipotent class of $G$. Then there should exist a
reductive group $\tilde{F}_{c}$ with the following properties:
(i) The reductive group $\tilde{F}_{c}$ is a simply connected covering of
$F_{c}$. That is, the identity component $\tilde{F}_{c}^{\circ}$ has simply
connected derived group, and there is a natural surjective homomorphism
$\tilde{F}_{c}\to F_{c}$ with finite kernel. In particular, if $F_{c}^{\circ}$
has simply connected derived group, then $\tilde{F}_{c}=F_{c}$.
(ii) There exists a finite set $Y$ with an algebraic action of $\tilde{F}_{c}$
and an injection
$\pi:c\hookrightarrow\text{the set of isomorphism classes of irreducible
$\tilde{F}_{c}$-vector bundles on}\ Y\times Y$
such that
(iii) The injection $\pi$ induces a ring injection
$\Pi:J_{c}\to K_{\tilde{F}_{c}}(Y\times Y),\ \ t_{x}{\mapsto}\pi(x).$
(iv) $\pi(x^{-1})_{(a,b)}=\pi(x)_{(b,a)}^{*}$ is the dual representation of
$\pi(x)_{(b,a)}.$
(v) $K_{\tilde{F}_{c}}(Y\times Y)$ is a finitely generated left (and right as
well) $\Pi(J_{c})$-module.
It seems natural that the $F_{c}$-set $\mathbf{B}_{e}$ defined in a recent
paper (see [L4]) would have an $\tilde{F}_{c}$-action compatible with the
$F_{c}$-action and then $\mathbf{B}_{e}$ should be a good candidate for the
set $Y$ above.
Acknowledgement: Part of the work was done during YQ’s visit to the Academy of
Mathematics and Systems Science, Chinese Academy of Sciences. YQ is very
grateful to the AMSS for hospitality and for financial support.
## References
* [B] R. Bezrukavnikov, On tensor categories attached to cells in affine Weyl groups, In ”Representation Theory of Algebraic Groups and Quantum Groups”, Adv. Stud. Pure Math., 40, Math. Soc. Japan, Tokyo, 2004, pp. 69-90.
* [BDD] R. Bezrukavnikov, S. Dawydiak, G. Dobrovolska, On the structure of the affine asymptotic Hecke algebras, arXiv:2110.15903.
* [BO] R. Bezrukavnikov and V. Ostrik, On tensor categories attached to cells in affine Weyl groups II, In ”Representation Theory of Algebraic Groups and Quantum Groups”, Adv. Stud. Pure Math., 40, Math. Soc. Japan, Tokyo, 2004, pp.101-119.
* [DLP] C. De Concini, G. Lusztig, C. Procesi, Homology of the zero-set of a nilpotent vector field on a flag manifold, J. Amer. Math. Soc. 1 (1988), 15-34.
* [D] J. Du, The decomposition into cells of the affine Weyl group of type $\tilde{B_{3}}$, Communications in Algebra, 16 (1988), no.7, 1383–1409.
* [KL] D. Kazhdan and G. Lusztig, Representations of Coxeter groups and Hecke algebras, Invent. Math. 53 (1979), 165-184.
* [L1] G. Lusztig, Cells in affine Weyl groups, in “Algebraic groups and related topics”, Advanced Studies in Pure Math., vol. 6, Kinokunia and North Holland, 1985, pp. 255-287.
* [L2] G. Lusztig, Cells in affine Weyl groups, II, J. Alg. 109 (1987), 536-548.
* [L3] G. Lusztig, Cells in affine Weyl groups, IV, Journal of The Faculty of Science, 36 (1989), no.2, 297-328.
* [L4] G. Lusztig, Discretization of Springer fibers, arXiv:1712.07530v3, 2021.
* [QX] Yannan Qiu and Nanhua Xi, The based ring of two-sided cells in an affine Weyl group of type $\tilde{B}_{3}$, I, Sci. China Math., to appear; arXiv:2107.08983.
* [X1] N. Xi, Representations of Affine Hecke Algebras, volume 1587, Springer Lecture Notes in Math., 1994.
* [X2] N. Xi, The based ring of two-sided cells of affine Weyl groups of type ${\tilde{A}_{n-1}}$, volume 749, American Mathematical Soc., 2002.
# Classical and Quantum Algorithms for Tensor Principal Component Analysis
Matthew B. Hastings Station Q, Microsoft Research, Santa Barbara, CA
93106-6105, USA Microsoft Quantum and Microsoft Research, Redmond, WA 98052,
USA
###### Abstract
We present classical and quantum algorithms based on spectral methods for a
problem in tensor principal component analysis. The quantum algorithm achieves
a quartic speedup while using exponentially smaller space than the fastest
classical spectral algorithm, and a super-polynomial speedup over classical
algorithms that use only polynomial space. The classical algorithms that we
present are related to, but slightly different from those presented recently
in Ref. [1]. In particular, we have an improved threshold for recovery and the
algorithms we present work for both even and odd order tensors. These results
suggest that large-scale inference problems are a promising future application
for quantum computers.
## 1 Introduction
Principal component analysis is a fundamental technique that finds
applications in reducing the dimensionality of data and denoising. While an
optimal choice of principal components for a matrix can be computed
efficiently using linear algebra, the corresponding problem for tensors is
much less well-understood. Ref. [2] introduced a simple statistical model for
tensor principal component analysis, termed the “spiked tensor” problem, and
this paper has led to a large amount of follow-up research. The model
consists of (see below for more precise definitions) randomly choosing some
unknown “signal vector” $v_{\rm sig}\in{\mathbb{R}}^{N}$; then, the $p$-th
order tensor
$T_{0}=\lambda v_{\rm sig}^{\otimes p}+G$ (1)
is formed, where $G\in({\mathbb{R}^{N}})^{\otimes p}$ is noise chosen from
some random distribution and where $\lambda$ is some scalar representing a
signal-to-noise ratio. One task called recovery is to infer $v_{\rm sig}$ (to
some accuracy) given $T_{0}$. A simpler task called detection is to
distinguish the case $\lambda=0$ from $\lambda=\overline{\lambda}$ for some
$\overline{\lambda}>0$, again just given $T_{0}$.
Ref. [2] presented a variety of algorithms for this problem. Following the
normalization of Ref. [1], the entries of $G$ are chosen independently from a
Gaussian distribution of zero mean and unit variance, with $|v_{\rm
sig}|=\sqrt{N}$. Then, information theoretically, it is possible to recover
for $\lambda$ much larger than $N^{(1-p)/2}$ [2, 3]. However, no polynomial
time algorithm is known that achieves this performance. Rather, the two best
known algorithms are spectral and sum-of-squares. Spectral algorithms were
first suggested in Ref. [2]. There, a matrix is formed from $T_{0}$ (if $p$ is
even, the matrix is $N^{p/2}$-by-$N^{p/2}$, with its entries given by entries
of $T_{0}$) and the leading eigenvector of the matrix is used to determine
$v_{\rm sig}$. For even $p$, this method works for $\lambda$ much larger than
$N^{-p/4}$, and a variant of it is conjectured to perform similarly for odd
$p$. Methods based on the sum-of-squares also perform similarly to the
spectral method. The sum-of-squares method [4, 5] for this problem gives rise
to a sequence of algorithms [6, 7], in which one can recover at $\lambda$
smaller than $N^{-p/4}$ at the cost of runtime and space increasing
exponentially in ${\rm polylog}(N)N^{-p/4}/\lambda$. In Ref. [1], a sequence
of spectral algorithms with similar performance was shown.
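To fix ideas, the following minimal sketch (our illustration only; the function name `recover_even_p` and the fold-and-SVD recovery step are our choices, not code from Ref. [1] or Ref. [2], and it is practical only for small $N$) implements the basic spectral method for even $p$ described above:

```python
import numpy as np

def recover_even_p(T0, N, p):
    """Basic spectral method for even p (sketch).

    Reshape the order-p tensor T0 into an N^(p/2)-by-N^(p/2) matrix,
    take a leading eigenvector of its symmetrization, then fold that
    vector back into an N-by-N^(p/2-1) matrix and use its top left
    singular vector as the estimate of v_sig (up to sign).
    """
    assert p % 2 == 0
    M = T0.reshape(N ** (p // 2), N ** (p // 2))
    M = (M + M.T) / 2  # symmetrize so the spectral decomposition is real
    vals, vecs = np.linalg.eigh(M)
    top = vecs[:, np.argmax(np.abs(vals))]  # eigenvector of largest |eigenvalue|
    # For lambda >> N^{-p/4}, top is close to the flattening of
    # v_sig^{otimes p/2}; extract v_sig from its first tensor factor.
    U = top.reshape(N, N ** (p // 2 - 1))
    u, _, _ = np.linalg.svd(U, full_matrices=False)
    return u[:, 0] * np.sqrt(N)  # normalized so |v| = sqrt(N)
```

The signal term contributes a rank-one piece $\lambda\,(v_{\rm sig}^{\otimes p/2})(v_{\rm sig}^{\otimes p/2})^{T}$ of eigenvalue $\lambda N^{p/2}$, while the noise matrix has operator norm of order $N^{p/4}$, which is the origin of the $\lambda$ much larger than $N^{-p/4}$ threshold quoted above.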
In this paper, we present another spectral algorithm for the problem. Our
spectral algorithm for even $p$ is closely related to that of Ref. [1], of which
we became aware while this paper was in preparation, and we use the
normalization in that paper. However, we make several changes. Our main
technical results are the following. First, we prove an improved threshold for
recovery for even $p$ using our algorithm; the improvement is by a constant
factor and relies on a randomized method of recovery. Second, we provide a
different algorithm for odd $p$ with provable guarantees, while no
guarantees were given in Ref. [1] for odd $p$. For both even and odd $p$, we
have provable bounds on recovery for $\lambda$ of order $N^{-p/4}$ (without
any polylogarithmic factors) and we have a sequence of algorithms similar to
that above for $\lambda$ small compared to $N^{-p/4}$. Third, we give a
quantum algorithm for our spectral method, achieving a quartic speedup and
exponential reduction in space. This quantum algorithm involves two main
ideas. The first uses phase estimation and amplitude amplification to obtain a
quadratic speedup in computing the largest eigenvector. The second idea uses a
chosen input state to obtain a further quadratic speedup, giving the overall
quartic speedup.
We emphasize that the quantum speedup is quartic compared to classical
spectral algorithms presented here and in previous work. We are not able to
make an accurate comparison of the runtime to sum-of-squares methods. In part,
given that the runtime of all of these algorithms increases exponentially in
$\lambda^{-1}$, a change in prefactors in some estimates for threshold can
give rise to polynomial changes in runtime. We expect that many of these
estimates of thresholds are not tight (indeed, we expect that they are off by
a polylogarithmic factor), and so either improved analytic methods or
numerical simulations are needed to give an accurate comparison.
At a more heuristic level, we present a rather different motivation for our
spectral algorithm compared to Ref. [1]. Rather than being motivated by the
so-called Kikuchi free energy, we instead are motivated by mean-field
approximations to quantum many-body systems. We consider a system of some
number ${n_{bos}}$ of qudits, each of dimension $N$, and use the tensor
$T_{0}$ to construct a quantum Hamiltonian on these qudits. Increasing
${n_{bos}}$ gives rise to a similar sequence of algorithms as above, with
increased runtime but improved performance: the required ${n_{bos}}$ increases
polynomially in $\lambda^{-1}$ as ${\rm
polylog}(N)(N^{-p/4}/\lambda)^{4/(p-2)}$, but the runtime increases
exponentially.
Restricting to the symmetric subspace, these ${n_{bos}}$ qudits can be thought
of as a system of bosons. In the case $p=4$, for example, our Hamiltonian has
pairwise interaction terms for all pairs of qudits. It is natural from the
viewpoint of mean-field theory in physics then to expect that the leading
eigenvector of the problem, for large ${n_{bos}}$, can be approximated by a
product state. While the bounds for arbitrary pairwise Hamiltonians would
require rather large ${n_{bos}}$ for given $N$ in order for such a mean-field
approximation to be accurate [8, 9, 10], we will be able to prove that for the
statistical model above the mean-field approximation becomes accurate with
high probability at much smaller ${n_{bos}}$, depending upon the value of
$\lambda$. In this mean-field regime, the product state is an ${n_{bos}}$-fold
tensor product of a single particle state, and this single particle state is
given by $v_{\rm sig}$ in an obvious way, regarding the vector $v_{\rm sig}$
as a vector in the single-particle Hilbert space. While we will not prove that
this state is a good approximation to the leading eigenvector, it will be a
good approximation to some state in an eigenspace with large eigenvalue. Then,
the single particle density matrix allows one to infer $v_{\rm sig}$ (a
similar matrix was used in Ref. [1] where it was termed a voting matrix).
Classically, implementing this spectral algorithm requires high-dimensional
linear algebra, in particular finding the leading eigenvector of a matrix of
dimension $\approx N^{n_{bos}}$. This makes it a natural candidate for a
quantum algorithm. Since the Hamiltonian here is fairly simple, it can be
simulated efficiently using standard techniques in the literature reviewed
later. This allows us to give a simple algorithm based on preparing a random
initial state and then phase estimating in an attempt to project onto the
leading eigenvector. The probability of success in this projection is inverse
in the dimension of the matrix, so this simple algorithm leads to no speedup
over classical. However, we show that it is possible to apply amplitude
amplification to give a quadratic speedup over classical. More surprisingly,
we show that one can use the tensor $T_{0}$ to prepare an input state to the
algorithm with improved overlap with the leading eigenvector, giving the
quantum algorithm a quartic speedup over classical. Here, when comparing to
classical we are considering classical algorithms based on the power method or
similar algorithms such as Lanczos; these algorithms require exponential space
while the quantum algorithm uses only polynomial space. We also consider
classical algorithms based on ideas in Ref. [11] which use polynomial space
but the quantum algorithm is super-polynomially faster than these algorithms.
We also present some minor improvements to the quantum algorithm which may be
useful in practice.
### 1.1 Definitions, Random Ensembles, and Notation
Let us make some formal definitions. A tensor $T$ of order $p$ and dimension
$N$ is a multi-dimensional array. The entries of the tensor are written
$T_{\mu_{1},\mu_{2},\ldots,\mu_{p}}$ where $p\geq 1$ is an integer and each
$\mu_{a}$ ranges over $1,\ldots,N$. Generalizing previous work on this
problem, we consider two possible cases, one in which entries of a tensor are
chosen to be real numbers, and one in which they may be complex numbers, so
that either $T\in({\mathbb{R}^{N}})^{\otimes p}$ or
$T\in({\mathbb{C}}^{N})^{\otimes p}$; we explain later the reason for this
generalization; a tensor with all entries real will be called a real tensor. A
symmetric tensor is one that is invariant under permutation of its indices.
The symmetrization of a tensor is equal to $1/p!$ times the sum of tensors
given by permuting indices.
The spiked tensor model for given $N,p$ is defined as follows. Let $v_{\rm
sig}$ be a vector in ${\mathbb{R}}^{N}$, normalized by $|v_{\rm
sig}|=\sqrt{N}$, chosen from some probability distribution; this is the
“signal vector”. Let $G$ be a real tensor of order $p$ with entries chosen
from a Gaussian distribution with vanishing mean. We let $T_{0}=\lambda v_{\rm
sig}^{\otimes p}+G$ as above, where $v_{\rm sig}^{\otimes p}$ is defined to be
the tensor with entries
$(v_{\rm sig}^{\otimes p})_{\mu_{1},\ldots,\mu_{p}}=\prod_{a=1}^{p}(v_{\rm
sig})_{\mu_{a}}.$
Here we use the notation that a subscript on a vector denotes an entry of that
vector; we use a similar notation for matrices later.
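As a concrete illustration of this model (a minimal sketch; the helper name `spiked_tensor` is ours, and for concreteness $v_{\rm sig}$ is drawn uniformly from the sphere of radius $\sqrt{N}$, one allowed choice of the distribution above), an instance $T_{0}$ may be generated as follows, with the complex ensemble defined a few paragraphs below included as an option:

```python
import numpy as np

def spiked_tensor(N, p, lam, complex_noise=False, seed=None):
    """Sample (v_sig, T0) from the spiked tensor model (sketch).

    v_sig is uniform on the sphere of radius sqrt(N).  In the real
    ensemble G has i.i.d. N(0, 1) entries; in the complex ensemble the
    real and imaginary parts each have variance 1/2.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(N)
    v *= np.sqrt(N) / np.linalg.norm(v)  # enforce |v_sig| = sqrt(N)
    signal = v
    for _ in range(p - 1):  # build v^{otimes p} as an outer-product chain
        signal = np.multiply.outer(signal, v)
    shape = (N,) * p
    if complex_noise:
        G = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    else:
        G = rng.standard_normal(shape)
    return v, lam * signal + G
```

For example, `v, T0 = spiked_tensor(N=8, p=4, lam=0.3)` produces an instance on which the even-$p$ spectral sketch given earlier can be run.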
Remark: some of the best sum-of-squares results are for a different
distribution in which the entries of $T_{0}$ are chosen from a biased
distribution on $\\{-1,+1\\}$, rather than for the Gaussian distribution. We
expect that using that distribution would not affect the results here too
much, but we avoid treating that case also for simplicity.
Since the tensor $v_{\rm sig}^{\otimes p}$ is symmetric, of course it is
natural to replace $T_{0}$ by its symmetrization. Indeed, no information can
be lost by this replacement since given a tensor $T_{0}$ one can symmetrize
the tensor, and then add back in Gaussian noise chosen to vanish under
symmetrization to obtain a tensor drawn from the same distribution as $T_{0}$
was. That is, the cases in which $G$ is symmetrized or not can be reduced to
each other.
A generalization of this problem is the case in which $G$ is chosen to have
complex entries, with each entry having real and imaginary parts chosen from a
Gaussian distribution with vanishing mean and variance $1/2$. We refer to this
as the complex ensemble, while we refer to the case where $G$ has real entries
as the real ensemble; the choice of reducing the variance to $1/2$ is a
convenient normalization for later. It is clear that since $v_{\rm sig}$ is
real, the case of complex $G$ can be reduced to the real case (up to an
overall rescaling for the different variance) simply by taking the real part
of $T_{0}$, and similarly the real case can be reduced to the complex case
(again up to an overall rescaling) by adding Gaussian distributed imaginary
terms to the entries of $T_{0}$. We will see that for odd $p$, at least for
reasons of analyzing the algorithms, it is convenient not to symmetrize
$T_{0}$ and to take complex $G$, while for even $p$ this is not necessary. It
may be possible to avoid doing this for odd $p$ (which may improve the
detection and recovery threshold of the algorithm by constant factors) and we
comment on this later.
We treat $p$ as fixed in the asymptotic notation, but consider the dependence
on $N,{n_{bos}}$. So, throughout this paper, when we refer to a polynomial in
$N$, we mean a polynomial independent of the parameter ${n_{bos}}$. The
polynomial may, however, depend on $p$, such as $N^{p}$.
We make additionally the following assumptions:
###### Assumption 1.
We assume that ${n_{bos}}=O(N^{\theta})$ for some $p$-dependent constant
$\theta>0$ chosen sufficiently small. We will also assume that $\lambda$ is
$\Omega(N^{-\theta^{\prime}})$ for some $p$-dependent constant
$\theta^{\prime}>p/4$.
Finally, we assume that $\lambda=O(N^{-p/4})$. Remark: there is of course no
reason to consider $\lambda$ larger than this since simple spectral methods
succeed if $\lambda$ is $\omega(N^{-p/4})$, but we state this assumption
explicitly as it simplifies some of the big-O notation.
We will explicitly state this Assumption 1 in all theorems where it is needed;
the assumption will be implicit in the statement of the lemmas and will not be
explicitly stated to avoid cluttering the statement of the results.
The first of these assumptions, that ${n_{bos}}=O(N^{\theta})$, is useful to
simplify some of the statements of the results to avoid having to specify the
allowed range of ${n_{bos}}$ in each case. For example, we will say that a
quantity such as ${n_{bos}}^{p}/N$ is $o(1)$, meaning that we must take
$\theta<1/p$. We do not specify the exact value of $\theta$ but it can be
deduced from the proofs if desired.
The second of these assumptions, that $\lambda$ is
$\Omega(N^{-\theta^{\prime}})$, also helps simplify some of the statements of
the results. Since we have assumed that ${n_{bos}}=O(N^{\theta})$ and we will
see that the required ${n_{bos}}$ increases polynomially with $\lambda^{-1}$,
this assumed lower bound on $\lambda$ is not a further restriction on our
results.
We write $\mathbb{E}[\ldots]$ to denote expectation values and ${\rm
Pr}[\ldots]$ to denote a probability. Usually these are expectation values or
probabilities over choices of $G$, though in some cases we consider
expectation values over other random variables. We use $\|\ldots\|$ to denote
the operator norm of an operator, i.e., the largest singular value of that
operator. We use $|\ldots|$ to denote either the $\ell_{2}$ norm of a tensor
or the $\ell_{2}$ norm of a quantum state. All logarithms are natural
logarithms.
We use $\langle\ldots|\ldots\rangle$ to denote inner products and use bra-ket
notation both for vectors and for quantum mechanical states.
We will need to compute expectation values over random choices of $G$ and also
compute expectation values of certain operators in quantum states, such as
$\langle\psi|O|\psi\rangle$ for some state $\psi$ and operator $O$. We refer
to the latter as a quantum expectation value of $O$ in state $\psi$ to
distinguish it from an expectation value over random variables.
### 1.2 Outline
In section 2, we review some results on recovery and boosting from Ref. [1]
and present a randomized recovery procedure that will help in improving the
recovery threshold. In section 3, we give spectral algorithms for the spiked
tensor problem for the case of both even and odd $p$. In that section, we
present algorithms in terms of eigenvalues and eigenvectors of a matrix (more
precisely, vectors in some eigenspace and quantum expectation values of
operators in those vectors) that we call a Hamiltonian. We leave the method to
compute these eigenvalues and expectation values for later in section 5, where
we give classical and quantum algorithms for this and give time bounds for
those algorithms. In section 4, we give some results on the spectra of random
tensors needed for the analysis of these algorithms. A key idea here is
reducing the case of a $p$-th order tensor for odd $p$ to the case of a $q$-th
order tensor for even $q=2(p-1)$. One interesting corollary of this technique,
see corollary 2, is that for odd $p$ and for the minimal value of ${n_{bos}}$
we are able to remove a logarithmic factor in some of the bounds (a similar
logarithmic factor has also been removed in [4] using what they termed an
“unfolding” algorithm). Appendix A gives an introduction to some
techniques used to evaluate expectation values of tensors networks whose
tensors are chosen from a Gaussian distribution; these techniques are used
earlier in the paper. In section 6, we further discuss tensor networks and use
this to consider limitations of certain algorithms and also to explain further
some of the motivation for this algorithm. In section 7, we discuss some
extensions of the results.
The proof of detection is in theorem 2 for the even $p$ case and theorem 4 for the odd
$p$ case. The proof of recovery is in theorem 3 for the even $p$ case and
theorem 5 for the odd $p$ case. The runtime bound for the fastest quantum
algorithm is in theorem 6. This theorem gives a quartic improvement in the
runtime compared to the fastest classical spectral algorithm; more precisely
the log of the runtime with the quantum algorithm divided by the log of the
runtime of the classical algorithm approaches $1/4$ as $N\rightarrow\infty$ at
fixed $N^{-p/4}/\lambda$.
## 2 Recovery
In this section we discuss recovery and define randomized procedures for
recovery that will be useful in boosting the threshold. Following the notation
of Ref. [1], define the correlation between two vectors by a normalized
overlap
${\rm corr}(x,y)=\frac{|\langle x|y\rangle|}{|x|\cdot|y|}.$ (2)
The goal of an algorithm is to produce a vector $x$ with large ${\rm
corr}(x,v_{\rm sig})$. Note that we take the absolute value in the
correlation, ignoring the sign. For even $p$, the sign of $v_{\rm sig}$ is
irrelevant, while for odd $p$ it is easy, given a guess of $v_{\rm sig}$ up to
sign, to try both choices of sign and see which is most likely.
Strong recovery means that ${\rm corr}(x,v_{\rm sig})=1-o(1)$. Proposition 2.6
of Ref. [1], which is noted in that reference as being implicit in Ref. [2],
shows how to “boost” a weaker correlation to strong recovery. It is shown that
given a vector $u$ one can apply a single iteration of the tensor power
algorithm to obtain a new vector $x$ such that, with high probability,
${\rm corr}(x,v_{\rm sig})\geq 1-c\lambda^{-1}{\rm corr}(u,v_{\rm
sig})^{1-p}N^{(1-p)/2},$ (3)
where $c$ is a constant depending on $p$. So, for any
$\lambda=\omega(N^{(1-p)/2})$, if ${\rm corr}(u,v_{\rm sig})=\Omega(1)$, we have
${\rm corr}(x,v_{\rm sig})=1-o(1)$.
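As an illustration, one boosting step of this kind is a single tensor power iteration; here is a minimal numpy sketch (the helper name is ours, and for an unsymmetrized $T_{0}$ the choice of which $p-1$ indices to contract is arbitrary):

```python
import numpy as np

def boost(T0, u):
    """One tensor power iteration: contract T0 with u on all but one index,
    then normalize; this is the boosting step behind Eq. (3)."""
    x = T0
    for _ in range(T0.ndim - 1):
        x = np.tensordot(x, u, axes=([x.ndim - 1], [0]))  # contract last index
    return x / np.linalg.norm(x)
```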
Thus, it suffices to construct an algorithm which outputs some vector $u$ that
has, with high probability, ${\rm corr}(u,v_{\rm sig})=\Omega(1)$. This is
termed weak recovery; indeed, one can be satisfied with even weaker
requirements depending on the value of $\lambda$: for $\lambda$ close to
$N^{-p/4}$, even a polynomially small ${\rm corr}(u,v_{\rm sig})$ suffices.
The spectral algorithms that we construct later will output a matrix that we
will write $\rho_{1}$. This matrix will be a positive semi-definite Hermitian
matrix with trace $1$. In physics terms, such a matrix is termed a density
matrix. For sufficiently large $\lambda$, the leading eigenvector of the
matrix will have a large overlap with $v_{\rm sig}$. However, for smaller
$\lambda$, we will still have some lower bound that, with high probability,
$\langle v_{\rm sig}|\rho_{1}|v_{\rm sig}\rangle=\Omega(1)|v_{\rm sig}|^{2}$.
The randomized algorithm 1 below shows how to use this to obtain weak
recovery. This randomized algorithm allows us to improve the threshold over
the recovery method of Ref. [1], which simply uses the leading eigenvector of
$\rho_{1}$, since it works even in some cases where the leading eigenvector
has small correlation with $v_{\rm sig}$.
Putting these results together we find that
###### Corollary 1.
Given an algorithm that, with high probability, outputs a density matrix
$\rho$ with
$\frac{\langle v_{\rm sig}|\rho|v_{\rm sig}\rangle}{N}=\Omega(1),$
then in polynomial time, with high probability, strong recovery is possible.
We will simply say that such an algorithm achieves recovery.
We present the algorithm for the case that the matrix $\rho_{1}$ may be
complex; if the matrix is real, one can instead sample from a real Gaussian
distribution and the proof of lemma 1 goes through with slightly different
constants.
Algorithm 1 Input: density matrix $\rho$. Output: some vector $w$ obeying the
bounds described above
* 1.
Randomly sample a vector $u$ with entries chosen from a correlated complex
Gaussian distribution with zero mean and with covariance
$\mathbb{E}[\overline{u}_{i}u_{j}]=(\rho)_{ij},$ with
$\mathbb{E}[u_{i}u_{j}]=\mathbb{E}[\overline{u}_{i}\overline{u}_{j}]=0$.
* 2.
Let $w=u/|u|$.
We have
###### Lemma 1.
For algorithm 1, with probability at least $1/2$,
$|\langle w|v_{\rm sig}\rangle|\geq c^{\prime}\sqrt{\langle v_{\rm
sig}|\rho|v_{\rm sig}\rangle},$ (4)
for some scalar $c^{\prime}>0$.
###### Proof.
We have $\mathbb{E}[|u|^{2}]={\rm tr}(\rho)=1$. Hence, with probability at
least $3/4$ we have that $|u|^{2}\leq 4/3$. We have $\mathbb{E}[|\langle
u|v_{\rm sig}\rangle|^{2}]=\langle v_{\rm sig}|\rho|v_{\rm sig}\rangle$ and
since $\langle u|v_{\rm sig}\rangle$ is a Gaussian random variable with mean
$0$, with probability at least $3/4$ its absolute value is at least some
positive constant $c^{\prime\prime}$ (the exact constant can be deduced from
the error function) times its standard deviation. Hence, the lemma follows for
$c^{\prime}=(3/4)c^{\prime\prime}$. ∎
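A minimal numpy sketch of algorithm 1 (our own illustration; the small ridge term added before the Cholesky factorization is a numerical safeguard, not part of the algorithm as stated):

```python
import numpy as np

def randomized_round(rho, rng=np.random.default_rng(0)):
    """Sample u from a complex Gaussian with E[conj(u_i) u_j] = rho_ij and
    E[u_i u_j] = 0, then return w = u / |u| as in algorithm 1."""
    N = rho.shape[0]
    L = np.linalg.cholesky(rho + 1e-12 * np.eye(N))  # rho = L L^dagger
    g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    u = L.conj() @ g  # conjugate so that E[conj(u_i) u_j] = rho_ij exactly
    return u / np.linalg.norm(u)
```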
## 3 Spectral Algorithm
We now give the spectral algorithms for the spiked tensor problem for the case
of both even and odd $p$. In subsection 3.1, we define a Hamiltonian $H(T)$
given an arbitrary tensor $T$. Then, in subsection 3.2, we present the
spectral algorithm in terms of $H(T_{0})$.
For even $p$, the Hamiltonian that we present is very similar to the matrix
$Y$ given in Ref. [1] but it has some minor differences. In our language
(explained below), this matrix $Y$ is obtained by projecting our Hamiltonian
of Eq. (5) into the subspace of the “symmetric subspace” spanned by
$|\mu_{1}\rangle\otimes|\mu_{2}\rangle\otimes\ldots\otimes|\mu_{{n_{bos}}}\rangle$
with $\mu_{1},\ldots,\mu_{{n_{bos}}}$ all distinct. The relative reduction in
the size of the matrix is only $O(1/N)$ in the limit of large $N$.
Also, in our method, we have an $O(N)$ rotational symmetry of the basis which
is very useful in analysis, for example showing that the eigenvalues of
$H(\lambda v_{\rm sig}^{\otimes p})$ are independent of choice of $v_{\rm
sig}$. For the matrix $Y$ of [1], this is not obvious to us and we do not
fully understand the claimed behavior of the largest eigenvalue in that case.
We will use a different notation, using creation and annihilation operators,
which will help make this rotational symmetry more explicit.
For odd $p$, the Hamiltonian that we use is unrelated to that of Ref. [1].
### 3.1 Hamiltonian Definition
For even $p$, given a tensor $T$ we define a linear operator $H(T)$ that we
call a Hamiltonian as follows. This Hamiltonian is a linear operator on a
vector space $({\mathbb{R}}^{N})^{\otimes{n_{bos}}}$ or
$({\mathbb{C}}^{N})^{\otimes{n_{bos}}}$, for integer ${n_{bos}}\geq 1$ chosen
below. We write basis elements of this space as
$|\mu_{1}\rangle\otimes|\mu_{2}\rangle\otimes\ldots\otimes|\mu_{n_{bos}}\rangle$,
and we call this space the full Hilbert space. We define
$H(T)=\frac{1}{2}\sum_{i_{1},\ldots,i_{p/2}}\Bigl{(}\sum_{\mu_{1},\ldots,\mu_{p}}T_{\mu_{1},\mu_{2},\ldots,\mu_{p}}|\mu_{1}\rangle_{i_{1}}\langle\mu_{1+p/2}|\otimes|\mu_{2}\rangle_{i_{2}}\langle\mu_{2+p/2}|\otimes\ldots\otimes|\mu_{p/2}\rangle_{i_{p/2}}\langle\mu_{p}|+{\rm
h.c.}\Bigr{)},$ (5)
where the sum is over distinct $i_{1},i_{2},\ldots,i_{p/2}$ so that there are
$(p/2)!{{n_{bos}}\choose p/2}$ terms in the sum and where ${\rm h.c.}$ means
adding the Hermitian conjugate of the given terms, so that $H(T)$ is Hermitian
and where $|\mu\rangle_{i}\langle\nu|$ denotes the operator
$|\mu\rangle\langle\nu|$ on qudit $i$. We require that ${n_{bos}}\geq p/2$ or
else $H(T)$ is trivially zero.
Note of course that if $T$ is real and symmetric, then the term
$\sum_{\mu_{1},\ldots,\mu_{p}}T_{\mu_{1},\mu_{2},\ldots,\mu_{p}}|\mu_{1}\rangle_{i_{1}}\langle\mu_{1+p/2}|\otimes|\mu_{2}\rangle_{i_{2}}\langle\mu_{2+p/2}|\otimes\ldots\otimes|\mu_{p/2}\rangle_{i_{p/2}}\langle\mu_{p}|$
is already Hermitian. $H(T)$ can be regarded as a Hamiltonian acting on a
space of ${n_{bos}}$ qudits, each of dimension $N$, and with interaction
between sets of $p/2$ particles at a time.
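To make Eq. (5) concrete, here is a dense-matrix sketch of $H(T)$ for small $N,{n_{bos}}$ (our own illustration with our own helper names; it works on the full Hilbert space and is exponentially expensive, so it is intended only for testing the definitions):

```python
import itertools
import numpy as np

def embed(M, sites, N, n_bos):
    """Embed an operator M acting on the ordered qudits `sites` (identity on
    the remaining qudits) into the n_bos-qudit space, as a dense matrix."""
    k = len(sites)
    op = M.reshape((N,) * (2 * k))  # k output indices, then k input indices
    dim = N ** n_bos
    out = np.zeros((dim, dim), dtype=complex)
    for col in range(dim):
        psi = np.zeros(dim)
        psi[col] = 1.0
        psi = psi.reshape((N,) * n_bos)
        res = np.tensordot(op, psi, axes=(list(range(k, 2 * k)), list(sites)))
        res = np.moveaxis(res, list(range(k)), list(sites))  # restore ordering
        out[:, col] = res.reshape(dim)
    return out

def hamiltonian_even_p(T, n_bos):
    """H(T) of Eq. (5) for a tensor T of even order p: sum over ordered tuples
    of distinct qudits, then add the Hermitian conjugate and divide by 2.
    Returns the zero operator if n_bos < p/2, matching the text."""
    p, N = T.ndim, T.shape[0]
    M = T.reshape(N ** (p // 2), N ** (p // 2))
    dim = N ** n_bos
    A = np.zeros((dim, dim), dtype=complex)
    for sites in itertools.permutations(range(n_bos), p // 2):
        A += embed(M, sites, N, n_bos)
    return (A + A.conj().T) / 2
```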
Even if $T$ is not symmetrized, $H(T)$ is unchanged if one applies an
arbitrary permutation to the first $p/2$ indices of $T$ and applies the same
permutation to the last $p/2$ indices of $T$.
We may restrict to the symmetric subspace of this Hilbert space. We write
$D(N,{n_{bos}})$ to indicate the dimension of this subspace. For
$N\gg{n_{bos}}$, we can approximate $D(N,{n_{bos}})\approx
N^{n_{bos}}/{n_{bos}}!$.
Within the symmetric subspace, we can write this Hamiltonian in a so-called
“second-quantized” form:
$H(T)=\frac{1}{2}\Bigl{(}\sum_{\mu_{1},\ldots,\mu_{p}}T_{\mu_{1},\mu_{2},\ldots,\mu_{p}}\Bigl{(}\prod_{i=1}^{p/2}a^{\dagger}_{\mu_{i}}\Bigr{)}\Bigl{(}\prod_{i=p/2+1}^{p}a_{\mu_{i}}\Bigr{)}+{\rm
h.c.}\Bigr{)}.$ (6)
This replacement by a second-quantized Hamiltonian is simply a convenient
notation. The operators $a^{\dagger}_{\mu},a_{\mu}$ are bosonic creation and
annihilation operators, obeying canonical commutation relations
$[a_{\mu},a^{\dagger}_{\nu}]=\delta_{\mu,\nu}$. See appendix B for a brief
review of this formalism. We restrict to the subspace with a total of
${n_{bos}}$ bosons, i.e., we define the number operator $n$ by
$n\equiv\sum_{\mu}a^{\dagger}_{\mu}a_{\mu},$ (7)
and restrict to $n={n_{bos}}.$ An orthonormal basis of states for this
symmetric subspace is given by all states equal to some normalization constant
multiplying $a^{\dagger}_{\mu_{1}}a^{\dagger}_{\mu_{2}}\ldots
a^{\dagger}_{\mu_{n_{bos}}}|0\rangle$, where $|0\rangle$ is the vacuum state
(i.e., the state annihilated by $a_{\mu}$ for all $\mu$), and where
$\mu_{1}\leq\mu_{2}\leq\ldots\leq\mu_{n_{bos}}$ is some sequence.
The second quantized Hamiltonian for the symmetric subspace is unchanged under
arbitrary permutation of the first $p/2$ indices of $T$ and arbitrary (not
necessarily the same) permutation of the last $p/2$ indices of $T$.
For odd $p$, we define the Hamiltonian $H(T)$ as follows. Given a tensor $T$
of odd order $p$, define a new tensor $\tilde{T}$ of even order $q=2(p-1)$
with components
$\tilde{T}_{\mu_{1},\ldots,\mu_{(p-1)/2},\nu_{1},\ldots,\nu_{(p-1)/2},\mu_{(p-1)/2+1},\ldots,\mu_{p-1},\nu_{(p-1)/2+1},\ldots,\nu_{p-1}}=\sum_{\sigma}T_{\mu_{1},\ldots,\mu_{p-1},\sigma}T_{\nu_{1},\ldots,\nu_{p-1},\sigma}.$
(8)
Then define $H(T)=H(\tilde{T})$, using the definition (5) for $H(\tilde{T})$.
Note the order of indices on the left-hand side of Eq. (8). Using the second-
quantized notation, this gives for odd $p$:
$\displaystyle H(T)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\Bigl{(}\sum_{\mu_{1},\ldots,\mu_{p-1}}\sum_{\nu_{1},\ldots,\nu_{p-1}}\sum_{\sigma}T_{\mu_{1},\mu_{2},\ldots,\mu_{p-1},\sigma}T_{\nu_{1},\nu_{2},\ldots,\nu_{p-1},\sigma}\Bigl{(}\prod_{i=1}^{(p-1)/2}a^{\dagger}_{\mu_{i}}a^{\dagger}_{\nu_{i}}\Bigr{)}\Bigl{(}\prod_{i=(p-1)/2+1}^{p-1}a_{\mu_{i}}a_{\nu_{i}}\Bigr{)}+{\rm
h.c.}\Bigr{)}.$ (9)
Now we require that ${n_{bos}}\geq p-1$ as otherwise $H(T)$ is trivially zero.
For this Hamiltonian, it is convenient to take $G$ from the complex ensemble
because, as we explain more below, it makes $\mathbb{E}[H(G)]$ equal to zero,
as well as canceling out certain terms in higher order moments, making the
proof of the spectral properties of $H(G)$ simpler. We discuss later to what
extent we can avoid using the complex ensemble.
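The index bookkeeping in Eq. (8) is easy to get wrong, so the following numpy sketch (our own illustration) builds $\tilde{T}$ from an odd-order tensor $T$; combined with the even-$p$ construction sketched above, this gives $H(T)=H(\tilde{T})$ for odd $p$:

```python
import numpy as np

def tilde_tensor(T):
    """Build the order-2(p-1) tensor of Eq. (8): contract the last index of
    two copies of T, then interleave the mu and nu index blocks."""
    p = T.ndim
    h = (p - 1) // 2
    # A[mu_1..mu_{p-1}, nu_1..nu_{p-1}] = sum_sigma T[mu.., sigma] T[nu.., sigma]
    A = np.tensordot(T, T, axes=([p - 1], [p - 1]))
    mu = list(range(p - 1))
    nu = list(range(p - 1, 2 * (p - 1)))
    # Target order: mu_1..mu_h, nu_1..nu_h, mu_{h+1}..mu_{p-1}, nu_{h+1}..nu_{p-1}.
    return np.transpose(A, mu[:h] + nu[:h] + mu[h:] + nu[h:])
```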
### 3.2 Spectral Algorithms
The spectral algorithm for detection and recovery is algorithm 2. In this
subsection we prove correctness of this algorithm, using statistical
properties of $H(G)$ proven later.
This algorithm uses quantities $E_{0}$ and $E_{max}$ defined later; roughly
$E_{max}$ is an upper bound on the eigenvalues of $H(G)$ and $E_{0}$ is the
largest eigenvalue of $H(\lambda v^{\otimes p})$. The algorithm can then
achieve detection by verifying that the largest eigenvalue is significantly
larger than $E_{max}$, which occurs when $E_{0}$ is large enough. Indeed, we
will see that it suffices to have $E_{0}=(1+c)E_{max}$ for any fixed $c>0$
(some results can be extended to the case that $c$ decays slowly with $N$ but
we omit this for brevity). This fixes the scaling of ${n_{bos}}$ with
$\lambda$ so that we need (up to polylogarithmic factors)
${n_{bos}}\gtrsim(N^{-p/4}/\lambda)^{4/(p-2)}$.
One interesting feature of the algorithm is that in step 3, we compute the
density matrix of the leading eigenvector or of any vector in the eigenspace
of eigenvalue $\geq E_{cut}$, for $E_{cut}=(E_{0}+E_{max})/2$ defined in the
algorithm. This might seem surprising, given that the leading eigenvector is
computed in step 1; one might wonder why some other vector should be taken. We
describe the algorithm in this way since, in later classical and quantum
algorithms that we give to compute the spectral properties of the matrix, we
might not extract the leading eigenvector but instead extract only some vector
in this eigenspace due to use of the power method in a classical algorithm or
due to approximations in phase estimation in a quantum algorithm. Thus, much
of our analysis is given to showing that some eigenvalue larger than $E_{cut}$
exists by lower bounding the leading eigenvalue $\lambda_{1}$, but given that
some such eigenvalue exists, we do not worry too much about exactly what
mixture of eigenvectors in the given eigenspace we compute.
Algorithm 2 Spectral algorithm. This algorithm takes a tensor $T_{0}$ as input
and also a scalar $\overline{\lambda}$ and an integer ${n_{bos}}$. The output
is a decision about whether $\lambda=\overline{\lambda}$ or $\lambda=0$, and,
if the algorithm reports that $\lambda=\overline{\lambda}$, it also returns an
approximation of $v_{\rm sig}$ (up to an overall sign). The quantity
${n_{bos}}$ is chosen depending upon the value of $\overline{\lambda}$;
smaller values of $\lambda$ require larger values of ${n_{bos}}$ in order for
$E_{0}$ to be sufficiently larger than $E_{max}$ for the algorithm to be
accurate. See theorems 2 and 4 for detection and theorems 3 and 5 for recovery. For $E_{0}\geq(1+c)E_{max}$ for any $c>0$, the
algorithm achieves recovery.
* 1.
Compute the leading eigenvector of $H(T_{0})$ and the leading eigenvalue, denoted
$\lambda_{1}$.
* 2.
(Detection) If
$\lambda_{1}>E_{cut}\equiv(E_{0}+E_{max})/2,$
where $E_{0}=\overline{\lambda}(p/2)!{{n_{bos}}\choose p/2}N^{p/2}$ for even
$p$, and $E_{0}=\overline{\lambda}^{2}(p-1)!{{n_{bos}}\choose p-1}N^{p}$ for
odd $p$, and where $E_{max}$ is defined in theorem 1, then report that
$\lambda=\overline{\lambda}$. Otherwise report $\lambda=0$.
* 3.
(Recovery) Compute the single particle density matrix (defined below) of the
leading eigenvector or of any vector in the eigenspace of eigenvalue $\geq
E_{cut}$. Apply algorithm 1 to recover an approximation to $v_{\rm sig}$.
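A compact sketch of the detection and recovery steps (our own illustrative names), assuming a dense matrix for $H(T_{0})$, for example from the construction sketched in subsection 3.1, and given $E_{0}$ and $E_{max}$; the returned density matrix can be fed to the randomized rounding sketched for algorithm 1:

```python
import numpy as np

def single_particle_density_matrix(psi, N, n_bos):
    """rho_1 of Eq. (16): reduce the state psi on the full Hilbert space to a
    single qudit (the first one; any choice works in the symmetric subspace)."""
    t = psi.reshape(N, -1)
    return t @ t.conj().T

def spectral_algorithm(H, E0, Emax, N, n_bos):
    """Sketch of algorithm 2: detect via the leading eigenvalue of H, then
    return rho_1 of a vector in the eigenspace with eigenvalue >= E_cut."""
    E_cut = (E0 + Emax) / 2
    evals, evecs = np.linalg.eigh(H)
    if evals[-1] <= E_cut:
        return None  # report lambda = 0
    psi = evecs[:, -1]  # any vector with eigenvalue >= E_cut would do
    return single_particle_density_matrix(psi, N, n_bos)
```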
In section 4, we consider the largest eigenvalue $\lambda_{1}$ of $H(G)$ and
show the following theorem which summarizes the results of lemmas 5,7.
Roughly, up to prefactors, the result for odd $p$ is given by considering the
result for a $q$-th order tensor for even $q=2(p-1)$ and then multiplying the
largest eigenvalue by a factor of $\sqrt{N}$.
###### Theorem 1.
Let $\lambda_{1}$ be the largest eigenvalue of $H(G)$. For even $p$ let
$E_{max}=\sqrt{2J\log(N)}{n_{bos}}^{p/4+1/2}N^{p/4},$
and for odd $p$ let
$E_{max}=2\sqrt{J\log(N)}{n_{bos}}^{p/2}N^{p/2},$
where $J$ is a scalar that depends implicitly on $p,{n_{bos}},N$ and tends to
some function depending only on $p$ for large ${n_{bos}},N$. More precisely,
for even $p$, $J$ is equal to $(p/2)!{{n_{bos}}\choose
p/2}/{n_{bos}}^{p/2}+o(1)$ for the real ensemble and is twice that for the
complex ensemble, and for odd $p$, $J$ is equal to that for the even case with
$p$ replaced by $2(p-1)$.
Then, for any $x$, assuming Assumption 1,
${\rm Pr}[\lambda_{1}\geq x]\leq\exp\Bigl{(}-\frac{x-E_{max}}{\xi}\Bigr{)},$
(10)
with for even $p$
$\xi=\frac{\sqrt{J}{n_{bos}}^{p/4-1/2}N^{p/4}}{\sqrt{2\log(N)}}$ (11)
and for odd $p$
$\xi=\frac{\sqrt{J}{n_{bos}}^{p/2-1}N^{p/2}}{\sqrt{\log(N)}}.$ (12)
So, for any $E^{\prime}$ which is $\omega(\xi)$, with high probability
$\lambda_{1}\leq E_{max}+E^{\prime}$.
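For reference, a direct transcription of these formulas (a convenience helper of ours, with $J$ supplied per the theorem):

```python
import numpy as np

def e_max_and_xi(N, n_bos, p, J):
    """E_max and xi of theorem 1, for even and odd p respectively."""
    if p % 2 == 0:
        e_max = np.sqrt(2 * J * np.log(N)) * n_bos ** (p / 4 + 0.5) * N ** (p / 4)
        xi = np.sqrt(J) * n_bos ** (p / 4 - 0.5) * N ** (p / 4) / np.sqrt(2 * np.log(N))
    else:
        e_max = 2 * np.sqrt(J * np.log(N)) * n_bos ** (p / 2) * N ** (p / 2)
        xi = np.sqrt(J) * n_bos ** (p / 2 - 1) * N ** (p / 2) / np.sqrt(np.log(N))
    return e_max, xi
```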
Now consider the eigenvectors and eigenvalues of $H(T_{0})$. For any symmetric
tensor $T$ of order ${n_{bos}}$, let $|T\rangle$ be the vector on ${n_{bos}}$
qudits (each of dimension $N$) with amplitudes given by the entries of the
tensor in the obvious way:
$|T\rangle=\sum_{\mu_{1},\ldots,\mu_{{n_{bos}}}}T_{\mu_{1},\ldots,\mu_{{n_{bos}}}}|\mu_{1}\rangle\otimes\ldots\otimes|\mu_{{n_{bos}}}\rangle.$
This vector is only normalized if $|T|=1$. So, $\Psi_{\rm sig}\equiv
N^{-{n_{bos}}/2}|v_{\rm sig}^{\otimes{n_{bos}}}\rangle$ is a normalized
vector.
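In code, the map $T\mapsto|T\rangle$ is just a flattening of the tensor, and $\Psi_{\rm sig}$ can be built as follows (our own helper name):

```python
import numpy as np

def psi_sig(v, n_bos):
    """The normalized vector Psi_sig = N^{-n_bos/2} |v^{otimes n_bos}>,
    where |T> flattens a tensor into a vector on the full Hilbert space."""
    N = v.shape[0]
    T = v
    for _ in range(n_bos - 1):
        T = np.multiply.outer(T, v)
    return T.reshape(-1) / N ** (n_bos / 2)  # |v^{otimes n_bos}| = N^{n_bos/2}
```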
We have the following simple property:
###### Lemma 2.
Let $\lambda_{1}$ be the largest eigenvalue of $H(T_{0})$. Then,
$\lambda_{1}\geq E\equiv\langle\Psi_{\rm sig}|H(T_{0})|\Psi_{\rm sig}\rangle$.
###### Proof.
Immediate from the variational principle. ∎
#### 3.2.1 Even $p$ Case
We now show correctness of the algorithm. All results in this subsubsection
refer to even $p$ even if we do not state it explicitly. First we estimate $E$
of lemma 2 to show detection. Then, we show recovery.
We have
$\displaystyle E$ $\displaystyle=$ $\displaystyle\langle\Psi_{\rm
sig}|H(\lambda v_{\rm sig}^{\otimes p})|\Psi_{\rm sig}\rangle+\langle\Psi_{\rm
sig}|H(G)|\Psi_{\rm sig}\rangle$ (13) $\displaystyle=$ $\displaystyle
E_{0}+\langle\Psi_{\rm sig}|H(G)|\Psi_{\rm sig}\rangle,$ (14)
where
$E_{0}=\lambda(p/2)!{{n_{bos}}\choose p/2}N^{p/2}.$ (15)
To evaluate $\langle\Psi_{\rm sig}|H(G)|\Psi_{\rm sig}\rangle$ it is
convenient to exploit a rotational invariance of this problem. We can apply a
rotation using any matrix $O$ in the orthogonal group $O(N)$, rotating the
creation and annihilation operators by making the replacement
$a^{\dagger}_{\mu}\rightarrow\sum_{\nu}O_{\mu\nu}a^{\dagger}_{\nu}$
and
$a_{\mu}\rightarrow\sum_{\nu}O_{\mu\nu}a_{\nu}.$
This rotation preserves the canonical commutation relations and is equivalent
to rotating the basis states on each qudit by $O$. To preserve the Hamiltonian
$H(T_{0})$, we rotate each leg of the tensor $T_{0}$:
$(T_{0})_{\mu_{1},\ldots,\mu_{p}}\rightarrow\sum_{\nu_{1},\ldots,\nu_{p}}(T_{0})_{\nu_{1},\ldots,\nu_{p}}\prod_{i}O_{\mu_{i},\nu_{i}}.$
This rotation preserves the Gaussian measure on $G$ but changes $v_{\rm sig}$.
So, we can rotate so that $v_{\rm sig}$ is some fixed vector, say
$(\sqrt{N},0,0,\ldots)$ so $(v_{\rm sig})_{1}=\sqrt{N}$. Then
$\langle\Psi_{\rm sig}|H(G)|\Psi_{\rm sig}\rangle$ is equal to
$(p/2)!{{n_{bos}}\choose p/2}$ multiplied by a single entry of $G$, i.e., the
entry with all indices equal to $1$, which is some quantity chosen from a
Gaussian distribution with zero mean and unit variance. So, with high
probability, $E=\lambda(p/2)!{{n_{bos}}\choose
p/2}N^{p/2}+{n_{bos}}^{p/2}O(N^{\eta})$ for any $\eta>0$.
Hence,
###### Theorem 2.
If $\lambda(p/2)!{{n_{bos}}\choose
p/2}N^{p/2}-E_{max}=\omega\Bigl{(}\frac{\sqrt{J}{n_{bos}}^{p/4-1/2}N^{p/4}}{\sqrt{2\log(N)}}\Bigr{)}$,
and if Assumption 1 holds, then with high probability algorithm 2 correctly
determines whether $\lambda=0$ or $\lambda=\overline{\lambda}$.
###### Proof.
This follows from the estimate of $E$ which lower bounds the largest
eigenvalue when $\lambda=\overline{\lambda}$ and from theorem 1 which upper
bounds $\|H(G)\|$. ∎
Given any state, define the single particle density matrix $\rho_{1}$ in the
basis for the full Hilbert space used in Eq. (5) to be the reduced density
matrix on any of the qudits. Equivalently, this single particle density matrix
can be expressed in terms of creation and annihilation operators, as a quantum
expectation value in the given state, by
$(\rho_{1})_{\mu\nu}=\frac{1}{{n_{bos}}}\langle a^{\dagger}_{\mu}a_{\nu}\rangle.$ (16)
Note that $\rho_{1}$ is positive semi-definite, Hermitian, and trace $1$.
We have
###### Theorem 3.
Let Assumption 1 hold. Given any vector $\Psi$ such that
$\langle\Psi|H(T_{0})|\Psi\rangle\geq(1+c^{\prime})E_{max}$ for any scalar
$c^{\prime}>0$, then with high probability the corresponding single particle
density matrix $\rho_{1}$ obeys
$\frac{\langle v_{\rm sig}|\rho_{1}|v_{\rm
sig}\rangle}{N}\geq(c^{\prime}-o(1))\frac{E_{max}}{E_{0}}.$ (17)
In particular, for $\langle\Psi|H(T_{0})|\Psi\rangle\geq E_{cut}$ with
$E_{0}\geq(1+c)E_{max}$ we have $c^{\prime}\geq c/2$ so with high probability
$\frac{\langle v_{\rm sig}|\rho_{1}|v_{\rm
sig}\rangle}{N}\geq\frac{1}{2}\frac{c}{1+c}.$ (18)
Hence, algorithm 2 achieves recovery.
###### Proof.
We have $\langle\Psi|H(\lambda v_{\rm sig}^{\otimes
p})|\Psi\rangle+\langle\Psi|H(G)|\Psi\rangle\geq(1+c^{\prime})E_{max}$. By theorem 1,
with high probability $\langle\Psi|H(G)|\Psi\rangle\leq(1+o(1))E_{max}$.
Hence, with high probability $\langle\Psi|H(\lambda v_{\rm sig}^{\otimes
p})|\Psi\rangle\geq(c^{\prime}-o(1))E_{max}$.
Rotate so that $v_{\rm sig}=(\sqrt{N},0,0,\ldots)$. Then, the Hamiltonian
$H(\lambda v_{\rm sig}^{\otimes p})$ is diagonal in the computational basis
for the full Hilbert space. Let $P(m)$ be the probability, for state $\Psi$,
that exactly $m$ out of ${n_{bos}}$ qudits are in state $|1\rangle$. Then,
$\langle\Psi|H(\lambda v_{\rm sig}^{\otimes
p})|\Psi\rangle=\sum_{m}\lambda(p/2)!{m\choose
p/2}N^{p/2}P(m)\leq\sum_{m}m(E_{0}/{n_{bos}})P(m)=E_{0}\langle v_{\rm
sig}|\rho_{1}|v_{\rm sig}\rangle/N$, where the inequality uses ${m\choose
p/2}\leq(m/{n_{bos}}){{n_{bos}}\choose p/2}$ for $m\leq{n_{bos}}$; combining
with the lower bound above gives Eq. (17). ∎
Remark: more generally, the same result holds for any mixed state: given any
mixed state $\sigma$ such that ${\rm tr}(\sigma H(T_{0}))\geq(1+c^{\prime})E_{max}$ for
any scalar $c^{\prime}>0$, then with high probability the corresponding single particle
density matrix $\rho_{1}$ obeys Eq. (17).
#### 3.2.2 Odd $p$ Case
We now show correctness of the algorithm for odd $p$. All results in this
subsubsection refer to odd $p$ even if we do not state it explicitly.
Let us first estimate $E$ to show detection. We have $E=\langle\Psi_{\rm
sig}|H(T_{0})|\Psi_{\rm sig}\rangle$, but now $H(T_{0})$ is a quadratic
function of $T_{0}$ so there are some cross-terms. We have
$\langle\Psi_{\rm sig}|H(\lambda v_{\rm sig}^{\otimes p})|\Psi_{\rm
sig}\rangle=\lambda^{2}(p-1)!{{n_{bos}}\choose p-1}N^{p}\equiv E_{0}.$ (19)
The cross-term is
$\lambda\langle\Psi_{\rm
sig}|\sum_{\mu_{1},\ldots,\mu_{p-1}}\sum_{\nu_{1},\ldots,\nu_{p-1}}\sum_{\sigma}(v_{\rm
sig}^{\otimes p})_{\mu_{1},\mu_{2},\ldots,\mu_{p-1},\sigma}G_{\nu_{1},\nu_{2},\ldots,\nu_{p-1},\sigma}\Bigl{(}\prod_{i=1}^{(p-1)/2}a^{\dagger}_{\mu_{i}}a^{\dagger}_{\nu_{i}}\Bigr{)}\Bigl{(}\prod_{i=(p-1)/2+1}^{p-1}a_{\mu_{i}}a_{\nu_{i}}\Bigr{)}+{\rm
h.c.}|\Psi_{\rm sig}\rangle.$
Exploiting the same rotational invariance as in the even $p$ case, this is
equal to $2\lambda(p-1)!{{n_{bos}}\choose p-1}N^{p/2}$ multiplied by the real
part of a single entry of $G$, i.e., the entry with all indices equal to $1$.
So, with high probability, this cross-term is bounded by
$\lambda{n_{bos}}^{p-1}O(N^{p/2+\eta})$ for any $\eta>0$.
The term quadratic in $G$ is
$\frac{1}{2}\langle\Psi_{\rm
sig}|\sum_{\mu_{1},\ldots,\mu_{p-1}}\sum_{\nu_{1},\ldots,\nu_{p-1}}\sum_{\sigma}G_{\mu_{1},\mu_{2},\ldots,\mu_{p-1},\sigma}G_{\nu_{1},\nu_{2},\ldots,\nu_{p-1},\sigma}\Bigl{(}\prod_{i=1}^{(p-1)/2}a^{\dagger}_{\mu_{i}}a^{\dagger}_{\nu_{i}}\Bigr{)}\Bigl{(}\prod_{i=(p-1)/2+1}^{p-1}a_{\mu_{i}}a_{\nu_{i}}\Bigr{)}+{\rm
h.c.}|\Psi_{\rm sig}\rangle.$
Exploiting the same rotational invariance as before, fixing $v_{\rm
sig}=(\sqrt{N},0,0,\ldots)$, we must have all $\mu_{i},\nu_{i}=1$. So, this is
a sum of squares of $N$ different entries of $G$, corresponding to $N$
possible choices of $\sigma$; since we use the complex ensemble, this has
vanishing mean. So, with high probability this term is
${n_{bos}}^{p-1}O(N^{1/2+\eta})$ for any $\eta>0$.
So, with high probability,
$E\geq
E_{0}-\lambda{n_{bos}}^{p-1}O(N^{p/2+\eta})-{n_{bos}}^{p-1}O(N^{1/2+\eta})$
(20)
for any $\eta>0$.
Hence,
###### Theorem 4.
Let Assumption 1 hold. If $\lambda^{2}(p-1)!{{n_{bos}}\choose
p-1}N^{p}-E_{max}=\omega\Bigl{(}\frac{\sqrt{J}{n_{bos}}^{p/2-1}N^{p/2}}{\sqrt{\log(N)}}\Bigr{)}$,
then with high probability algorithm 2 correctly determines whether
$\lambda=0$ or $\lambda=\overline{\lambda}$.
###### Proof.
This follows from the estimate of $E$ in Eq. (20) which lower bounds the
largest eigenvalue when $\lambda=\overline{\lambda}$ and from theorem 1 which
upper bounds $\|H(G)\|$. Note that for $\lambda=O(N^{-p/4})$, the terms
$-\lambda{n_{bos}}^{p-1}O(N^{p/2+\eta})-{n_{bos}}^{p-1}O(N^{1/2+\eta})$ on the
right-hand side of this equation are negligible as they are
$o\Bigl(\frac{\sqrt{J}{n_{bos}}^{p/2-1}N^{p/2}}{\sqrt{\log(N)}}\Bigr)$. ∎
We now consider recovery. We will first give a more general bound on cross-
terms that will be useful. We have
$H(T_{0})=H(\lambda v_{\rm sig}^{\otimes p})+H(G)+{\rm cross\,terms}.$
Let us first bound the cross-terms.
###### Lemma 3.
With high probability, the operator norm of the cross-terms is bounded by
$\lambda N^{p/2}O(N^{(p-1)/4}){n_{bos}}^{p-1}$.
###### Proof.
The cross-terms are equal to $2H(T_{cross})$ where $T_{cross}$ is a tensor of
even order $q=2(p-1)$ which has components
$\Bigl{(}T_{cross}\Bigr{)}_{\mu_{1},\ldots,\mu_{(p-1)/2},\nu_{1},\ldots,\nu_{(p-1)/2},\mu_{(p-1)/2+1},\ldots,\mu_{p-1},\nu_{(p-1)/2+1},\ldots,\nu_{p-1}}=\lambda\sum_{\sigma}G_{\mu_{1},\ldots,\mu_{p-1},\sigma}\cdot\Bigl{(}\prod_{i=1}^{p-1}(v_{\rm
sig})_{\nu_{i}}\Bigr{)}\cdot(v_{\rm sig})_{\sigma}.$ (21)
Rotating so that $v_{\rm sig}=(\sqrt{N},0,0,\ldots)$, we have
$\Bigl{(}T_{cross}\Bigr{)}_{\mu_{1},\ldots,\mu_{(p-1)/2},\nu_{1},\ldots,\nu_{(p-1)/2},\mu_{(p-1)/2+1},\ldots,\mu_{p-1},\nu_{(p-1)/2+1},\ldots,\nu_{p-1}}=\lambda
N^{p/2}G_{\mu_{1},\ldots,\mu_{p-1},1}\prod_{i=1}^{p-1}\delta_{\nu_{i},1}.$
(22)
Clearly, $\|H(T_{cross})\|\leq{n_{bos}}^{p-1}\|M_{cross}\|,$ where $M_{cross}$
is an $N^{p-1}$-by-$N^{p-1}$ matrix, whose entries are determined by the
entries of $T_{cross}$ in an obvious way so that the first $p-1$ indices of
$T_{cross}$ determine a column index in $M$ and the last $p-1$ indices of
$T_{cross}$ determine a row index of $M$. Regard
$G_{\mu_{1},\ldots,\mu_{p-1},1}$ as being the entries of some tensor of order
$p-1$ and let $M^{\prime}$ be the $N^{(p-1)/2}$-by-$N^{(p-1)/2}$ matrix whose
entries are determined by the entries of this tensor, again in the obvious way
so that the first $(p-1)/2$ indices of the tensor determine a column index and
the last $(p-1)/2$ indices determine a row index.
Then,
$\|M_{cross}\|=\lambda N^{p/2}\|M^{\prime}\|.$ (23)
However, since the entries of $M^{\prime}$ are independent Gaussian random
variables, it follows from standard random matrix theory results [12] that
with high probability $\|M^{\prime}\|=O(N^{(p-1)/4})$. Remark: the dependence
on ${n_{bos}}$ is not tight here. ∎
So as in the even case, we have
###### Theorem 5.
Let Assumption 1 hold. Given any vector $\Psi$ such that
$\langle\Psi|H(T_{0})|\Psi\rangle\geq(1+c^{\prime})E_{max}$ for any scalar
$c^{\prime}>0$, then with high probability the corresponding single particle
density matrix $\rho_{1}$ obeys
$\frac{\langle v_{\rm sig}|\rho_{1}|v_{\rm
sig}\rangle}{N}\geq(c^{\prime}-o(1))\frac{E_{max}}{E_{0}}.$ (24)
In particular, for $\langle\Psi|H(T_{0})|\Psi\rangle\geq E_{cut}$ with
$E_{0}\geq(1+c)E_{max}$ we have $c^{\prime}\geq c/2$ so with high probability
$\frac{\langle v_{\rm sig}|\rho_{1}|v_{\rm
sig}\rangle}{N}\geq\frac{1}{2}\frac{c}{1+c}.$ (25)
Hence, algorithm 2 achieves recovery.
###### Proof.
We use lemma 3 to bound $\|H(T_{cross})\|$. This bound is asymptotically
negligible compared to $E_{max}$. So, we have $\langle\Psi|H(\lambda v_{\rm
sig}^{\otimes
p})|\Psi\rangle+\langle\Psi|H(G)|\Psi\rangle\geq(1+c^{\prime}-o(1))E_{max}$, where the
$o(1)$ denotes the contribution from the cross-terms.
Then, the rest of the proof is the same as in the even case, except for a
replacement of $p/2$ by $p-1$. In detail: by theorem 1, with high probability
$\langle\Psi|H(G)|\Psi\rangle\leq(1+o(1))E_{max}$. Hence,
$\langle\Psi|H(\lambda v_{\rm sig}^{\otimes
p})|\Psi\rangle\geq(c^{\prime}-o(1))E_{max}$.
Rotate so that $v_{\rm sig}=(\sqrt{N},0,0,\ldots)$. Then, the Hamiltonian
$H(\lambda v_{\rm sig}^{\otimes p})$ is diagonal in the computational basis
for the full Hilbert space. Let $P(m)$ be the probability, for state $\Psi$,
that exactly $m$ out of ${n_{bos}}$ qudits are in state $|1\rangle$. Then,
$\langle\Psi|H(\lambda v_{\rm sig}^{\otimes
p})|\Psi\rangle=\sum_{m}\lambda^{2}(p-1)!{m\choose
p-1}N^{p}P(m)\leq\sum_{m}m(E_{0}/{n_{bos}})P(m)=E_{0}\langle v_{\rm
sig}|\rho_{1}|v_{\rm sig}\rangle/N$, where the inequality uses ${m\choose
p-1}\leq(m/{n_{bos}}){{n_{bos}}\choose p-1}$ for $m\leq{n_{bos}}$. ∎
## 4 Spectrum of Random Hamiltonian
In this section, we will estimate the eigenvalues of $H(G)$. We consider first
the case of even $p$. Here our proof is very similar to that of Ref. [1],
though the method here also suggests some heuristics that may lead to a
tighter bound in the future. Then, we consider the case of odd $p$ by reducing
it to the case of even $p$.
### 4.1 Even $p$
We first consider the case of even $p$. Let $Z(\tau,p,N,{n_{bos}})$ denote
$\mathbb{E}[{\rm tr}(\exp\\{\tau H(G)\\})]$ for a tensor $G$ of order $p$, with
entries of $G$ chosen from the Gaussian distribution, with given
$N,{n_{bos}}$, for some real scalar $\tau$. In this subsection, $G$ may be
symmetrized or not, and may be chosen from either the real or complex
ensemble.
The main result in this section is the following lemma:
###### Lemma 4.
For each ensemble, real or complex, symmetrized or not, we have
$Z(\tau,p,N,{n_{bos}})\leq
D(N,{n_{bos}})\exp(\tau^{2}(J/2){n_{bos}}^{p/2}N^{p/2}),$ (26)
where $J$ is a scalar that depends implicitly on $p,{n_{bos}},N$ and tends to
some function depending only on $p$ for large ${n_{bos}},N$. More precisely,
$J$ is equal to $(p/2)!{{n_{bos}}\choose p/2}/{n_{bos}}^{p/2}+o(1)$ for the
real ensemble and is twice that for the complex ensemble.
###### Proof.
We first give a brief derivation of Eq. (28) below, which is a standard result
using quantum field theory techniques. Note that $\tau H(G)=H(\tau G)$ and
$\tau G$ is a tensor with entries chosen from a Gaussian distribution with
zero mean and variance $\tau^{2}$. Hence for any $\tau^{\prime}>\tau$ we have
$Z(\tau^{\prime},p,N,{n_{bos}})=\mathbb{E}_{G,\eta}[{\rm tr}(\exp\\{H(\tau
G+\eta)\\})],$ (27)
where $\mathbb{E}_{G,\eta}[\ldots]$ denotes the expectation value over $G$ and
$\eta$, with the tensor $\eta$ having Gaussian entries with zero mean and
variance $(\tau^{\prime})^{2}-\tau^{2}$. Taking the expectation value over
$\eta$, for $\tau^{\prime 2}-\tau^{2}$ small, we need to keep only the zeroth
and second order terms on the right-hand side. So, we find that
$\displaystyle\partial_{\tau^{2}}Z(\tau,p,N,{n_{bos}})$ $\displaystyle=$
$\displaystyle\int_{0}^{1}{\rm d}s_{2}\int_{0}^{s_{2}}{\rm
d}s_{1}\,\mathbb{E}_{G,\eta}[{\rm tr}\Bigl{(}\exp\Bigl{\\{}(1-s_{2})\tau
H(G)\Bigr{\\}}H(\eta)\exp\Bigl{\\{}(s_{2}-s_{1})\tau
H(G)\Bigr{\\}}H(\eta)\exp\Bigl{\\{}s_{1}\tau H(G)\Bigr{\\}}\Bigr{)}].$ (28)
Using cyclic properties of the trace, this can be simplified to
$\partial_{\tau}^{2}Z(\tau,p,N,{n_{bos}})=\frac{1}{2}\int_{0}^{1}{\rm
d}s_{1}\,\mathbb{E}_{G,\eta}[{\rm tr}\Bigl{(}\exp\Bigl{\\{}(1-s_{1})\tau
H(G)\Bigr{\\}}H(\eta)\exp\Bigl{\\{}s_{1}\tau H(G)\Bigr{\\}}H(\eta)\Bigr{)}].$
(29)
We now use a general result. Consider any Hermitian $H$ (we will use $H=\tau
H(G)$) and any operator $O$ (we will use $O=H(\eta)=H(\eta)^{\dagger}$) and
any $s_{1}\in[0,1]$. We claim that
${\rm tr}\Bigl{(}\exp(H)O^{\dagger}O\Bigr{)}+O\leftrightarrow
O^{\dagger}\geq{\rm
tr}\Bigl{(}\exp((1-s_{1})H)O^{\dagger}\exp(s_{1}H)O\Bigr{)}+O\leftrightarrow
O^{\dagger},$
where $O\leftrightarrow O^{\dagger}$ indicates the previous term with
$O,O^{\dagger}$ interchanged. Proof of claim: work in an eigenbasis of $H$. It
suffices to consider the case that $O=|b\rangle\langle a|$ where
$|a\rangle,|b\rangle$ are eigenvectors of $H$ with eigenvalues $E_{a},E_{b}$.
Then the right-hand side is equal to
$\exp(s_{1}E_{a}+(1-s_{1})E_{b})+\exp(s_{1}E_{b}+(1-s_{1})E_{a})=2\exp((E_{a}+E_{b})/2)\cosh((s_{1}-1/2)(E_{b}-E_{a}))$.
The cosh function is maximized on the interval $[0,1]$ at $s_{1}=0,1$ when the
right-hand side becomes equal to the left-hand side. So,
$\partial_{\tau^{2}}Z(\tau,p,N,{n_{bos}})\leq\frac{1}{2}\mathbb{E}_{G,\eta}[{\rm
tr}\Bigl{(}\exp(\tau H(G))H(\eta)^{2}\Bigr{)}].$ (30)
For the real ensemble without symmetrization, we have
$\mathbb{E}[H(\eta)^{2}]=\sum_{\mu_{1},\ldots,\mu_{p/2}}\sum_{\nu_{1},\ldots,\nu_{p/2}}\prod_{i=1}^{p/2}\Bigl{(}a^{\dagger}_{\mu_{i}}a_{\nu_{i}}\Bigr{)}\prod_{i=1}^{p/2}\Bigl{(}a^{\dagger}_{\nu_{i}}a_{\mu_{i}}\Bigr{)}.$
To leading order in $N$, we may approximate
$\sum_{\nu}a_{\nu}a^{\dagger}_{\nu}=N$ so on the given Hilbert space,
$\mathbb{E}[H(\eta)^{2}]$ is a scalar equal to $(p/2)!{{n_{bos}}\choose
p/2}N^{p/2}+O(N^{p/2-1})$. In general, for the complex or real ensemble,
symmetrized or not, we find that $\mathbb{E}[H(\eta)^{2}]$ is a scalar equal
to $J{n_{bos}}^{p/2}N^{p/2}$, where $J$ obeys the claims of the
lemma. To verify that $\mathbb{E}[H(\eta)^{2}]$ is a scalar and to compute the
scalar to all orders in $N$, commute the annihilation operators $a_{\nu}$ to
the right. The result is some linear combination of operators with all
annihilation operators to the right of creation operators which can be written
in the form
$\sum_{\mu_{1},\ldots,\mu_{k}}(\prod_{i=1}^{k}a^{\dagger}_{\mu_{i}})(\prod_{i=1}^{k}a_{\mu_{i}})$,
and each such operator is equal to $k!{{n_{bos}}\choose k}$.
Hence, from Eq. (30),
$\partial_{\tau^{2}}\log(Z(\tau,p,N,{n_{bos}}))\leq(J/2){n_{bos}}^{p/2}N^{p/2}$;
integrating over $\tau^{2}$ and using $Z(0,p,N,{n_{bos}})={\rm tr}(1)=D(N,{n_{bos}})$ gives Eq. (26).
∎
Remark: this result is clearly not tight in the regime where random matrix
theory is accurate (${n_{bos}}=p/2$). It is interesting to see what happens
there. The correlation function $\mathbb{E}_{G,\eta}[{\rm
tr}\Bigl{(}\exp\\{(1-s_{1})\tau H(G)\\}H(\eta)\exp\\{s_{1}\tau
H(G)\\}H(\eta)\Bigr{)}]$ is not independent of $s_{1}$, but rather decays as a
function of $s_{1}$ for $s_{1}\leq 1/2$ (of course, it increases again as
$s_{1}$ becomes larger than $1/2$). Considering the regime $s_{1}\ll 1/2$,
using the square-root singularity at the edge of the Wigner semi-circle we can
estimate that it decays as $s_{1}^{-3/2}$. This means that the integral of
this correlation function over $s_{1}$ is dominated by its value for small
$s_{1}$ of order $1/\tau$ so that for $\tau$ large compared to the inverse
width of the semi-circle (though of course $\tau$ not too large) the integral
becomes of order $1/\tau$. This is stronger than the upper bounds here, where
we have bounded the integral by something independent of $\tau$. We may
guess that a tighter analysis will show that a similar effect will happen in
the case of ${n_{bos}}\gg p/2$; however, an important difference occurs. If we
take an eigenstate of $H(G)$ with some eigenvalue $\lambda_{0}$, and apply
$H(\eta)$ for random $\eta$, this only changes $p/2$ out of the ${n_{bos}}$
qudits in the state. So, one might guess that the resulting state will have
expectation value of $H(G)$ that is $\lambda_{0}(1-p/(2{n_{bos}}))$ rather
than an (as in the random matrix case) expectation value of $H(G)$ which is
zero. So, we may guess that the correlation function will be non-negligible
for $s_{1}\lesssim({n_{bos}}/p)\tau^{-1}$. A heuristic estimate in this
fashion suggests that the lemma below for the eigenvalue is tight up to
logarithmic factors.
From lemma 4, the following lemma is an immediate corollary:
###### Lemma 5.
Let $\lambda_{1}$ be the largest eigenvalue of $H(G)$. Let
$E_{max}=\sqrt{2J\log(N)}{n_{bos}}^{p/4+1/2}N^{p/4}.$
Then, for any $x$,
${\rm Pr}[\lambda_{1}\geq x]\leq\exp\Bigl{(}-\frac{x-E_{max}}{\xi}\Bigr{)},$
(31)
with
$\xi=\frac{\sqrt{J}{n_{bos}}^{p/4-1/2}N^{p/4}}{\sqrt{2\log(N)}}$ (32)
So, for any $E^{\prime}$ which is $\omega(\xi)$, with high probability
$\lambda_{1}\leq E_{max}+E^{\prime}$.
###### Proof.
We have ${\rm tr}(\exp\\{\tau H(G)\\})\geq\exp(\tau\lambda_{1})$. Hence, for
any $x$, ${\rm Pr}[\lambda_{1}\geq x]\leq Z(\tau,p,N,{n_{bos}})/\exp(\tau x)$.
Since $D(N,{n_{bos}})\leq N^{{n_{bos}}}$, lemma 4 gives ${\rm
Pr}[\lambda_{1}\geq
x]\leq\exp({n_{bos}}\log(N)+\tau^{2}(J/2){n_{bos}}^{p/2}N^{p/2}-\tau x)$;
minimizing over $\tau$ (the minimum is at
$\tau=x/(J{n_{bos}}^{p/2}N^{p/2})$), we find that
${\rm Pr}[\lambda_{1}\geq
x]\leq\exp\Bigl{(}{n_{bos}}\log(N)-\frac{x^{2}}{2J{n_{bos}}^{p/2}N^{p/2}}\Bigr{)}.$
For $x=E_{max}$, the right-hand side is equal to $1$ and for $x>E_{max}$ the
right-hand side decays exponentially. ∎
### 4.2 Odd $p$
We now consider the case of odd $p$. Let $Z(\tau,p,N,{n_{bos}})$ denote
$\mathbb{E}[{\rm tr}(\exp(\tau H(G)))]$ for a tensor $G$ of order $p$, with
entries of $G$ chosen from the Gaussian distribution, with given
$N,{n_{bos}}$. In this subsection, $G$ is complex and not symmetrized. The
Hamiltonian $H(G)$ is given by Eq. (6) for even $p$ and by Eq. (9) for odd
$p$.
We will reduce the calculation for odd $p$ to the case for even $p$, up to a
bounded error, showing the following
###### Lemma 6.
For odd $p$, for ${n_{bos}}^{p-1}\tau N^{1/3}=o(1)$,
$Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})\leq Z(\tau,p,N,{n_{bos}})\leq
Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})\exp(o(N^{(p-1)/2})).$ (33)
All occurrences of $Z(\cdot)$ in the above equation refer to the complex
ensemble without symmetrizing.
Remark: the assumptions require that $\tau$ is $o(1)$; however, this still
allows $\tau$ large enough to be useful later in bounding the spectrum, since
the largest eigenvalues of $H(G)$ are typically large compared to $1$.
###### Proof.
For arbitrary $H(G)$, the exponential $\exp\\{\tau H(G)\\}$ can be expanded as
a series
$1+\tau H(G)+\frac{\tau^{2}}{2}H(G)^{2}+\ldots.$
We bring the expectation value (over $G$) inside the trace, and compute the
expectation value of this series. This expectation value can be computed using
Wick’s theorem to compute the expectation value of a moment of a Gaussian
distribution.
For odd $p$, each term in $H$ is a sum of two terms, one depending on the
product of two tensors $G$ and one depending on the product of two tensors
$\overline{G}$, where the overline denotes complex conjugation. Hence, the
$m$-th order term $\frac{\tau^{m}}{m!}H(G)^{m}$ is a sum of $2^{m}$ terms,
corresponding to $m$ different choices of $G$ or $\overline{G}$ in each
$H(G)$; we call each such choice a history. Each term is a product of
creation and annihilation operators depending on a product of $2m$ tensors,
some of which are $G$ and some of which are $\overline{G}$. The expectation
value of such a term is non-vanishing only if there are $m$ tensors $G$ and
$m$ tensors $\overline{G}$. In that case, the expectation value is given by
summing over ways of pairing each tensor $G$ with a distinct tensor
$\overline{G}$. There are $m!$ such pairings. Then, for each such pairing, one
computes an operator by taking the operator $\frac{\tau^{m}}{m!}H(G)^{m}$ and
replacing, for every pair, the two tensors
$G_{\mu_{1},\ldots,\mu_{p}}\overline{G}_{\nu_{1},\ldots,\nu_{p}}$ in that pair
with a product of $\delta$-functions
$\prod_{a=1}^{p}\delta_{\mu_{a},\nu_{a}}=\mathbb{E}[G_{\mu_{1},\ldots,\mu_{p}}\overline{G}_{\nu_{1},\ldots,\nu_{p}}].$
Summing this operator over pairings gives the expectation value.
Note also that here we have not symmetrized $G$. If we were to symmetrize $G$,
then
$\mathbb{E}[G_{\mu_{1},\ldots,\mu_{p}}\overline{G}_{\nu_{1},\ldots,\nu_{p}}]$
is given by $1/p!$ times a sum of $p!$ different products of
$\delta$-functions, i.e. $\sum_{\pi}\prod_{a=1}^{p}\delta_{\mu_{a},\nu_{\pi(a)}}$, where
$\pi$ is a permutation. This would lead to additional terms that we need to
compute and would make the analysis more difficult (though in practice may
lead to better performance).
For given $m$, given history and given pairing, let us define a cluster as
follows: define a graph with $2m$ vertices, each vertex corresponding to a
single tensor, either $G$ or $\overline{G}$. Let there be an edge between any
two tensors which both appear in the same term in the Hamiltonian. Let there
also be an edge between any two tensors which are in a pair. Hence, this is a
graph of degree $2$ (we allow the possibility of multiple edges connecting two
vertices if we pair two tensors which both appear in the same term in the
Hamiltonian). A cluster is a connected component of this graph.
We refer to a cluster containing four vertices as a minimal cluster. A cluster
with six or more vertices is called a non-minimal cluster. Note that there are
no clusters containing only two terms because each term in $H(G)$ depends on a
product of two tensors $G$ or two tensors $\overline{G}$; if we had not taken
the complex ensemble and instead taken $G$ to be real, we would instead have
these clusters with two terms. We discuss the case of the real ensemble
further after the proof of the lemma.
The minimal clusters will turn out to give the dominant order contribution in
an expansion in $N$. The non-minimal clusters will be subleading order.
We have expressed $\frac{\tau^{m}}{m!}H(G)^{m}$ as a sum over histories and
pairings and so $\mathbb{E}[\exp(\tau H(G))]$ is a sum over $m$, histories,
and pairings. Each term in the sum over $m$, histories, and pairings for
$\mathbb{E}[\exp(\tau H(G))]$ is an operator. Hence, $Z(\tau,p,N,{n_{bos}})$
is also given by a sum over $m$, histories, and pairings as one may take the
trace of each term in $\mathbb{E}[\exp(\tau H(G))]$. Note that each $m$,
history, and pairing gives a non-negative contribution to
$Z(\tau,p,N,{n_{bos}})$, i.e., it has a non-negative trace.
Lower Bound: Now we prove the first inequality in Eq. (33), namely that
$Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})\leq Z(\tau,p,N,{n_{bos}})$. To do this,
we first define a similar summation over $m$, histories, and pairings to
compute $Z(\tau,q,N,{n_{bos}})$ for even $q=2(p-1)$ and then we compare the
two summations.
For even $q$, each term in $H$ is a sum of two terms, one depending linearly
on a tensor $G$ and one depending on tensor $\overline{G}$. As before, the
$m$-th order term $\frac{\tau^{m}}{m!}H(G)^{m}$ is a sum of $2^{m}$ histories,
with each history corresponding to $m$ different choices of $G$ or
$\overline{G}$ in each $H(G)$. Each term is a product of creation and
annihilation operators depending on a product of $m$ tensors, some of which
are $G$ and some of which are $\overline{G}$; note the difference here from
the odd case as now there are only $m$ tensors, rather than $2m$. The
expectation value of such a term is non-vanishing only if there are $m/2$
tensors $G$ and $m/2$ tensors $\overline{G}$. In that case, the expectation
value is given as before by summing over pairings and replacing the two
tensors in the pair with $\delta$-functions.
We claim that if we consider $Z(\tau,p,N,{n_{bos}})$ for odd $p$ and consider
only the sum of pairings for which all clusters are minimal, this gives
precisely the sum of terms for $Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})$. Since
all terms contribute non-negatively to the trace, this will prove the first
inequality.
To show this claim, let $L_{2(p-1)}$ label a given choice of $m$, history, and
pairing for $Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})$ and let $L_{p}$ label a
given choice of $m$, history, and clusters for $Z(\tau,p,N,{n_{bos}})$ for
which all clusters are minimal. There are $2^{m/2}$ different pairings for the
given choice of clusters labelled by $L_{p}$.
We will construct a one-to-one correspondence between $L_{p}$ and $L_{2(p-1)}$
and show that the sum of the terms labelled by $L_{p}$ is equal to the term
labelled by $L_{2(p-1)}$, i.e., that they give the same operator, and hence
have the same trace. Consider given $L_{p}$. Then, $L_{2(p-1)}$ is as follows.
The value of $m$ is the same. The history is also the same: for a given
sequence of choices of $G$ or $\overline{G}$ for $L_{p}$, we use the same
sequence for $L_{2(p-1)}$. Note that in $L_{p}$, each of the $m$ choices of
$G$ or $\overline{G}$ denotes a choice that both tensors in $H(G)$ are equal
to $G$ or both are equal to $\overline{G}$, while in $L_{2(p-1)}$ each of the
$m$ choices is only a choice about a single tensor in $H(G)$.
The pairing is as follows. We introduce notation to label the tensors in
$H(G)^{m}$. For even $p$, we label the tensors by an integer
$i\in\\{1,2,\ldots,m\\}$ depending which of the $m$ factors of $H(G)$ it is
in. Here, we mean that the tensors appear in the product $H(G)^{m}=H(G)\cdot
H(G)\cdot\ldots\cdot H(G)$; we label each of the different factors in the
product $1,2,\ldots,m$ in sequence. For odd $p$, we label the tensors by a
pair $(i,w)$ for $i\in\\{1,2,\ldots,m\\}$ and $w\in\\{1,2\\}$. The integer $i$
labels which of the $m$ factors of $H(G)$ it is in and $w$ labels whether it
is the first or second of the two tensors in $H(G)$. For a pairing for
$Z(\tau,p,N,{n_{bos}})$ with all clusters minimal, each cluster is of the
form that, for some $i,j$, we pair $(i,1)$ with $(j,1)$ and $(i,2)$ with
$(j,2)$, or of the form that we pair $(i,1)$ with $(j,2)$ and $(i,2)$ with
$(j,1)$. For a given choice of clusters, the corresponding pairing for
$L_{2(p-1)}$ pairs $i$ with $j$.
We sketch the claim about the terms. For a term in $L_{p}$, for each cluster
we have two tensors $G$ and two tensors $\overline{G}$. We replace these
tensors with the expectation value
$\mathbb{E}[\sum_{\mu_{1},\ldots,\mu_{p-1}}\sum_{\nu_{1},\ldots,\nu_{p-1}}\sum_{\sigma}G_{\mu_{1},\mu_{2},\ldots,\mu_{p-1},\sigma}G_{\nu_{1},\nu_{2},\ldots,\nu_{p-1},\sigma}\sum_{\alpha_{1},\ldots,\alpha_{p-1}}\sum_{\beta_{1},\ldots,\beta_{p-1}}\sum_{\overline{\sigma}}\overline{G}_{\alpha_{1},\alpha_{2},\ldots,\alpha_{p-1},\overline{\sigma}}\overline{G}_{\beta_{1},\beta_{2},\ldots,\beta_{p-1},\overline{\sigma}}],$
which is equal to some product of $\delta$-functions. This expectation value
then multiplies the operators
$\Bigl{(}\prod_{i=1}^{(p-1)/2}a^{\dagger}_{\mu_{i}}a^{\dagger}_{\nu_{i}}\Bigr{)}\Bigl{(}\prod_{i=(p-1)/2+1}^{p-1}a_{\mu_{i}}a_{\nu_{i}}\Bigr{)}$
and
$\Bigl{(}\prod_{i=1}^{(p-1)/2}a^{\dagger}_{\alpha_{i}}a^{\dagger}_{\beta_{i}}\Bigr{)}\Bigl{(}\prod_{i=(p-1)/2+1}^{p-1}a_{\alpha_{i}}a_{\beta_{i}}\Bigr{)}$
inserted at the appropriate places into the product $H(G)^{m}$. The
$\delta$-functions constrain $\sigma=\overline{\sigma}$; summing over this
gives a factor of $N$ for each cluster, while there are two pairings for each
cluster giving another factor of $2$ for each cluster. The number of clusters
is equal to $m/2$, giving an overall factor $(2N)^{m/2}$.
This proves the first inequality.
Upper Bound: Now we prove the second inequality in Eq. (33). To do this, we
define the following quantity for $q$ which is a multiple of $4$ (note that
$q=2(p-1)$ is a multiple of $4$ if $p$ is odd):
$Z^{\prime}(\tau,\tau^{\prime},q,N,{n_{bos}})\equiv\mathbb{E}[{\rm
tr}(\exp\\{\tau H(G)+\tau^{\prime}H(G^{\prime})^{2}\\})],$ (34)
where the expectation value is over tensors $G$ of order $q$ chosen from the
complex ensemble and tensors $G^{\prime}$ of order $q/2$ chosen also from the
complex ensemble (note that $q/2$ is even). Note that we square
$H(G^{\prime})$ in the exponent.
We will prove that
$Z(\tau,p,N,{n_{bos}})\leq
Z^{\prime}(\sqrt{2N}\tau,\tau^{\prime},2(p-1),N,{n_{bos}})$ (35)
for $\tau^{\prime}=N^{1/3}\tau$. From this, the second inequality in Eq. (33)
follows. To see this, we have, for any $G^{\prime}$,
$\|H(G^{\prime})^{2}\|\leq O({n_{bos}}^{p-1})\|G^{\prime}\|^{2}$ where the
operator norm $\|H(G^{\prime})^{2}\|$ denotes the largest eigenvalue in
absolute value and $\|G^{\prime}\|$ denotes the largest singular value of
$G^{\prime}$, regarding $G^{\prime}$ as a matrix of size
$N^{q/4}$-by-$N^{q/4}$. So, by the Golden-Thompson inequality,
$\displaystyle\mathbb{E}_{G}[{\rm tr}(\exp\\{\tau
H(G)+\tau^{\prime}H(G^{\prime})^{2}\\})]$ $\displaystyle\leq$
$\displaystyle\mathbb{E}_{G}[{\rm tr}(\exp\\{\tau
H(G)\\})]\mathbb{E}_{G^{\prime}}[\exp(O({n_{bos}}^{p-1})\tau^{\prime}\|G^{\prime}\|^{2})]$
$\displaystyle=$ $\displaystyle
Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})\mathbb{E}_{G^{\prime}}[\exp(O({n_{bos}}^{p-1})\tau^{\prime}\|G^{\prime}\|^{2})].$
where the subscript $G$ or $G^{\prime}$ denotes the expectation over $G$ or
$G^{\prime}$. The matrix $G^{\prime}$ has complex entries; we can bound its
operator norm by the sum of the operator norms of its Hermitian and anti-
Hermitian parts. For ${n_{bos}}^{p-1}\tau^{\prime}=o(1)$, we have
$\mathbb{E}_{G^{\prime}}[\exp(O({n_{bos}}^{p-1})\tau^{\prime}\|G^{\prime}\|^{2})]=\exp[O({n_{bos}}^{p-1})\tau^{\prime}O(N^{q/4})]$
as can be shown using Hermite polynomials [12] to compute the probability
distribution of the eigenvalues of a random matrix and using the decay of a
Hermite polynomial multiplying a Gaussian (the study of the largest eigenvalue
of a random matrix is a rich field if we consider the probability distribution
near the edge of the Wigner semi-circle but here we are satisfied to consider
the probability distribution for eigenvalues which are some constant factor
$>1$ multiplying the edge of the semi-circle).
To show Eq. (35), let $L_{p}$ label the set of terms for
$Z(\tau,p,N,{n_{bos}})$ with a given choice of $m$, history, clusters (the
clusters need not be minimal), and choice of pairing for all tensors in non-
minimal clusters (we do not specify the pairing of the tensors in the minimal
clusters; if there are $n_{min}$ minimal clusters then there are $2^{n_{min}}$
terms in the set). Let $L^{\prime}_{2(p-1)}$ label a term for
$Z^{\prime}(\sqrt{2N}\tau,\tau^{\prime},2(p-1),N,{n_{bos}})$ with given $m$,
history, clusters, and pairing. Here, for $Z^{\prime}$, a history corresponds
to $m$ choices of either $H(G)$ or $H^{\prime}(G)^{2}$ and also to choices of
$G$ or $\overline{G}$, or $G^{\prime}$ or $\overline{G}^{\prime}$, for each
$H(G)$ or $H(G^{\prime})$. We define a map from choice of $L_{p}$ to choice of
$L^{\prime}_{2(p-1)}$ so that the sum of terms labelled by $L_{p}$ is equal to
the term labelled by the corresponding $L^{\prime}_{2(p-1)}$. This map will be
one-to-one but it will not be onto. However, since all terms are non-negative,
this will establish the needed inequality.
The map is as follows. The $m$ will be the same. Using the notation above, if
tensor $(i,1)$ is in a minimal cluster (and hence, so is $(i,2)$) in the set
labelled by $L_{p}$, then in the history for $L^{\prime}_{2(p-1)}$ for the
$i$-th term we choose $\tau H(G)$, while if $(i,1)$ is not in a minimal
cluster, then we choose $\tau^{\prime}H(G^{\prime})^{2}$. Given a history
labelled by $L_{p}$, corresponding to a choice of tensor or its complex
conjugate (i.e., either $G$ or $\overline{G}$) in each of the $m$ different
$H(G)$ in a term in the series for $Z(\tau,p,N,{n_{bos}})$, then we make the
same choice of tensor or its complex conjugate in each of the $m$ different
$H(G)$ or $H(G^{\prime})^{2}$ in the history labelled by
$L^{\prime}_{2(p-1)}$. That is, if we choose $G$ in the $i$-th $H(G)$ in some
terms in the series for $Z(\tau,p,N,{n_{bos}})$, then we choose $G$ in $H(G)$
or $G^{\prime}$ in both terms in $H(G^{\prime})^{2}$ and similarly if we
choose $\overline{G}$ we choose $\overline{G}$ in $H(G)$ and
$\overline{G}^{\prime}$ in both terms in $H(G^{\prime})^{2}$.
We label the tensors in the expansion for
$Z^{\prime}(\sqrt{2N}\tau,\tau^{\prime},2(p-1),N,{n_{bos}})$ either by an
integer $i$, if it appears in $H(G)$, or by a pair $(i,w)$, if it appears in
$H(G^{\prime})^{2}$, in which case the index $w\in\\{1,2\\}$ labels which of
the two $H(G^{\prime})$ it is in. Finally, we define the pairing labelled by
$L^{\prime}_{2(p-1)}$. For a minimal cluster labelled by $L_{p}$, pairing
$(i,1)$ with $(j,1)$ and $(i,2)$ with $(j,2)$, or $(i,1)$ with $(j,2)$ and
$(i,2)$ with $(j,1)$, in the pairing labelled by $L^{\prime}_{2(p-1)}$ we pair
$i$ with $j$.
If a cluster is non-minimal, then we simply use the same pairing for the
corresponding tensors in $L^{\prime}_{2(p-1)}$. That is, suppose a cluster
pairs $(i_{1},w_{1})$ with $(i_{2},w^{\prime}_{2})$, and pairs $(i_{2},w_{2})$
with $(i_{3},w^{\prime}_{3})$, and so on, where $w^{\prime}_{a}=1$ if
$w_{a}=2$ and $w^{\prime}_{a}=2$ if $w_{a}=1$. Then, we also pair
$(i_{1},w_{1})$ with $(i_{2},w^{\prime}_{2})$, pair $(i_{2},w_{2})$ with
$(i_{3},w^{\prime}_{3})$, and so on.
The smallest non-minimal cluster has six vertices. In every cluster, minimal
or not, there is a sum over some index (for example, the sum over the index
$\sigma=\overline{\sigma}$ in the lower bound calculation above) which gives a
factor $N$. Thus, taking $\tau^{\prime}=\tau N^{1/3}$ accounts for this
factor. No factor of $2$ occurs for the non-minimal clusters. ∎
Remark: since we have chosen the entries of $G$ from the complex ensemble, we
have that $\mathbb{E}[H(G)]=0$ for $p$ odd. If instead we had chosen the
entries of $G$ from the real ensemble we would have (considering the specific
case $p=3$ for simplicity and not symmetrizing $G$, again for simplicity) a
non-vanishing expectation value since
$\mathbb{E}[\sum_{\sigma}G_{\mu_{1},\mu_{2},\sigma}G_{\nu_{1},\nu_{2},\sigma}]=N\delta_{\mu_{1},\nu_{1}}\delta_{\mu_{2},\nu_{2}},$
(37)
so that $\mathbb{E}[H(G)]\propto
N\sum_{\mu_{1},\mu_{2}}a^{\dagger}_{\mu_{2}}a^{\dagger}_{\mu_{2}}a_{\mu_{1}}a_{\mu_{1}}$.
Such a term is sometimes called a pairing term or a “cooperon” in the study of
disordered system in physics. In the case ${n_{bos}}=2$ (the smallest possible
for $p=3$), this term has operator norm $N^{2}$. This is much larger than
$N^{1/2}$ times the expected operator norm of $H(G)$ for $q=2(p-1)=4$, i.e.,
that expected operator norm is proportional to $N$ by random matrix theory,
and $N^{2}\gg N^{3/2}$.
There may be other ways to deal with this non-vanishing expectation value
other than using the complex ensemble. One way is to use the real ensemble,
but to consider the Hamiltonian $H(T_{0})-M$, where we define
$M=N\sum_{\mu_{1},\mu_{2}}a^{\dagger}_{\mu_{2}}a^{\dagger}_{\mu_{2}}a_{\mu_{1}}a_{\mu_{1}}$
for $p=3$. In this case, the added term $-M$ cancels the expectation value of
$H(G)$ term-by-term in the perturbation expansion. However, if we do this we
still have some additional terms when we consider clusters of four or more
vertices. We expect that the clusters of six or more vertices are still
negligible, but the structure of the clusters of four vertices becomes more
complicated. We leave the analysis of this case for the future, but we expect
that it would work and may be practically useful.
From lemma 6, the following lemma is an immediate corollary:
###### Lemma 7.
Let $\lambda_{1}$ be the largest eigenvalue of $H(G)$. Let
$E_{max}=2\sqrt{J\log(N)}{n_{bos}}^{p/2}N^{p/2},$
where $J$ is the $J$ of lemma 4 for $Z(\tau,2(p-1),N,{n_{bos}})$. Then, for
any $x$,
${\rm Pr}[\lambda_{1}\geq x]\leq\exp\Bigl{(}-\frac{x-E_{max}}{\xi}\Bigr{)},$
(38)
with
$\xi=\frac{\sqrt{J}{n_{bos}}^{p/2-1}N^{p/2}}{\sqrt{\log(N)}}.$ (39)
So, for any $E^{\prime}$ which is $\omega(\xi)$, with high probability
$\lambda_{1}\leq E_{max}+E^{\prime}$.
###### Proof.
From lemmas 4,6, for ${n_{bos}}^{p-1}\tau N^{1/3}=o(1)$, we have
$Z(\tau,p,N,{n_{bos}})\leq\exp(\tau^{2}J{n_{bos}}^{p-1}N^{p})\exp(o(N^{(p-1)/2})).$
Let $\tau={n_{bos}}^{1-p/2}N^{-p/2}\sqrt{\log(N)}/\sqrt{J}$. For
${n_{bos}}=o(N^{(p/2-1/3)/(1+p/2)}\log(N)^{1/(2+p)})$, the condition
${n_{bos}}^{p-1}\tau N^{1/3}=o(1)$ holds. So, after some algebra, for any $x$,
the result follows. ∎
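For concreteness, the omitted algebra in the proof can be carried out as follows (a sketch; the $\exp(o(N^{(p-1)/2}))$ factor is treated as absorbed, as in the statement of the lemma). By Markov's inequality, ${\rm Pr}[\lambda_{1}\geq x]\leq Z(\tau,p,N,{n_{bos}})\exp(-\tau x)$. With the stated choice of $\tau$ one checks that $\tau^{2}J{n_{bos}}^{p-1}N^{p}={n_{bos}}\log(N)=\tau E_{max}/2$ and that $1/\tau=\xi$, so
${\rm Pr}[\lambda_{1}\geq x]\leq\exp\Bigl{(}\frac{\tau E_{max}}{2}-\tau x\Bigr{)}\leq\exp\Bigl{(}-\frac{x-E_{max}}{\xi}\Bigr{)}.$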
It is worth noting that lemma 6 has the following corollary:
###### Corollary 2.
For odd $p$, for ${n_{bos}}=p-1$, with high probability
$\lambda_{1}=O(N^{p/2})$.
###### Proof.
To prove this we use the existence of tighter bounds for even $q=2(p-1)$ when
${n_{bos}}=q/2$. By Eq. (33), for ${n_{bos}}^{p}\tau N^{1/3}=o(1)$, we have
$Z(\tau,p,N,{n_{bos}})\leq
Z(\sqrt{2N}\tau,2(p-1),N,{n_{bos}})\exp(o(N^{(p-1)/2}))$. Since we are
considering fixed ${n_{bos}}$, this holds for $\tau N^{1/3}=o(1)$. We have
that $Z(\sqrt{2N}\tau,2(p-1),N,p-1)=\mathbb{E}_{G}[\exp(\sqrt{2N}\tau H(G))]$,
but the Hamiltonian $H(G)$ is a random matrix of size $N^{p-1}$-by-$N^{p-1}$
chosen from the so-called Gaussian unitary ensemble [12]. With high
probability, the largest eigenvalue of this matrix is $\Theta(N^{(p-1)/2})$.
Further, we can choose $\tau=\omega(\log(N)N^{-p/2})$ so that
$Z(\sqrt{2N}\tau,2(p-1),N,p-1)=N\exp(O(\tau N^{p/2}))$; for example, this can
be shown by using orthogonal polynomials to bound the probability distribution
of the largest eigenvalue as discussed previously. We have ${\rm
Pr}[\lambda_{1}>x]\leq Z(\tau,p,N,{n_{bos}})\exp(-\tau x)$. For $x$
sufficiently large compared to $N^{p/2}$, for $\tau=\omega(\log(N)N^{-p/2})$,
the right-hand side is $o(1)$. ∎
## 5 Quantum and Classical Algorithms
We now discuss the complexity of classical and quantum algorithms to implement
the needed linear algebra. In particular, we need to determine whether the
largest eigenvalue is larger than $E_{cut}$, and we need to find some vector in the
eigenspace of eigenvalue $\geq E_{cut}$. We emphasize that it is not necessary
to find the leading eigenvector itself.
We will use $\psi_{\rm target}$ to denote this leading eigenvector. Note that
if we lower bound the squared overlap of some vector with $\psi_{\rm target}$,
this will lower bound the probability of success of phase estimation.
We have defined algorithm 2 so that in the detection step it uses a hard
cutoff on eigenvalue: if the leading eigenvalue is $\geq E_{cut}$ it reports
that $\lambda=\overline{\lambda}$ while otherwise it reports that $\lambda=0$.
However, no algorithm, classical or quantum, will be able to compute the
leading eigenvalue exactly; there will always be some limit on the precision.
Fortunately, the proofs of theorems 2,4 show that if $E_{0}\geq(1+c)E_{max}$
for any $c>0$, then if $\lambda=\overline{\lambda}$ then with high probability
$\lambda_{1}\geq(1-\eta)E_{0}+\eta E_{max}$ for any $0<\eta<1$ while if
$\lambda=0$ then with high probability
$\lambda_{1}\leq(1-\eta^{\prime})E_{0}+\eta^{\prime}E_{max}$ for any
$0<\eta^{\prime}<1$. For example, we might take $\eta=1/8$ and
$\eta^{\prime}=1/2$. So, it suffices instead to implement some “soft” estimate
of the leading eigenvalue which will be very likely to give one result (i.e.,
reporting that $\lambda=\overline{\lambda}$) if the leading eigenvalue is
larger than $(7/8)E_{0}+(1/8)E_{max}$ but very unlikely to give that result if
the leading eigenvalue is $\leq E_{cut}$.
In the quantum algorithms, to obtain some vector in the eigenspace of
eigenvalue $\geq E_{cut}$ and to do this soft estimate, we will implement an
approximate projector by phase estimation onto the
eigenspace of eigenvalue $\geq(E_{0}+E_{cut})/2=(3/4)E_{0}+(1/4)E_{max}$. By
doing this, the phase estimation error will become negligible when considering
the projection of the resulting vector onto the eigenspace with eigenvalue
$<E_{cut}$. Similarly, in the classical power method we will take the number
of iterations sufficiently large that the vector has negligible projection on
the eigenspace with eigenvalue $<E_{cut}$ and further so that it has
expectation value for $H(T_{0})$ greater than $E_{cut}$.
We begin with a description of some classical algorithms. The time and space
requirements for the classical algorithms are of course not intended to
represent a lower bound; rather, they represent times that can be achieved
using standard algorithms in the literature. We then give quantum algorithms.
Finally, we give a further improvement to the quantum algorithm that may be
useful in practice.
When we refer to “space” in a classical algorithm, if we store a
$D(N,{n_{bos}})$-dimensional vector, the space requirement is equal to
$D(N,{n_{bos}})$ multiplied by the number of bits to store a single entry of
the vector. In the classical algorithms, we will not discuss issues with
finite precision arithmetic in detail. Since we will be applying operators of
the form $H(T_{0})^{m}$ to vectors in the “path integral” methods, we might
need to approximate each entry of the vector to accuracy
$D(N,{n_{bos}})^{-1}\|H(T_{0})\|^{-m}=O({\rm poly}(N^{{n_{bos}}m}))^{-1}$.
However, the required number of bits is then only $O(m{n_{bos}}\log(N))$ and
$m$ will be logarithmic in $N$ so the required number of bits will be only
polylogarithmic in $N$.
### 5.1 Classical Algorithms
Classically, the most obvious algorithm is to perform an eigendecomposition on
$H(T_{0})$. This requires storing matrices of size
$D(N,{n_{bos}})$-by-$D(N,{n_{bos}})$ so that the space required is
$\tilde{O}(D(N,{n_{bos}})^{2})$ and the time is $D(N,{n_{bos}})^{\omega}$
where $\omega$ is the matrix multiplication exponent [13], though of course in
practice the time is closer to $D(N,{n_{bos}})^{3}$.
However, there is no need to perform a full eigendecomposition. One can
instead initialize a random vector and then apply the power method to extract
some eigenvector of $H(T_{0})$ in the eigenspace with eigenvalue $\geq
E_{cut}$. The space required is then only $\tilde{O}(D(N,{n_{bos}}))$. The
time required for a single iteration of the power method is
$\tilde{O}(D(N,{n_{bos}}))$. If $\lambda=\overline{\lambda}$, then in
$O(\log(D(N,{n_{bos}}))/\log(E_{0}/E_{max}))$ iterations, the resulting vector
will have a $1-o(1)$ projection onto the eigenspace with eigenvalue
$\geq(E_{0}+E_{cut})/2$ and a negligible projection onto the eigenspace with
eigenvalue $<E_{cut}$. So, after this many iterations, one can compute the
expectation value of $H(T_{0})$ on that vector to perform detection, i.e., if
$\lambda=\overline{\lambda}$ the expectation will be larger than $E_{cut}$ but
if $\lambda=0$ the expectation will be close to $E_{max}$, and one can compute
the single particle density matrix of the vector after those iterations. So,
the time is $\tilde{O}(D(N,{n_{bos}}))O(1/\log(E_{0}/E_{max}))$.
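As a concrete reference point, here is a minimal sketch of this power-method detection loop, assuming $H(T_{0})$ is available as a dense Hermitian matrix on the symmetric subspace (the names `H`, `E_cut`, and `num_iters` are illustrative, not from the text):

```python
import numpy as np

def power_method_detect(H, E_cut, num_iters, rng=None):
    """Power-method sketch: returns (detected, vector).

    H         -- Hermitian matrix standing in for H(T_0) on the symmetric subspace
    E_cut     -- energy threshold partway between E_max and E_0
    num_iters -- O(log(dim)/log(E_0/E_max)) iterations suffice per the text
    """
    rng = np.random.default_rng() if rng is None else rng
    D = H.shape[0]
    v = rng.standard_normal(D) + 1j * rng.standard_normal(D)
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = H @ v
        v /= np.linalg.norm(v)
    # Detection step: expectation value of H(T_0) in the iterated vector.
    energy = np.real(np.vdot(v, H @ v))
    return energy >= E_cut, v
```

In practice one would form $H(T_{0})$ sparsely and apply it matrix-free, but the control flow is the same.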
This power method still requires exponential (in ${n_{bos}}$) space. In fact,
one can use only polynomial space. One obvious choice is to perform a “path
integral”. That is, given a random initial state $\Psi_{\rm random}$, the
power method computes the state $H(T_{0})^{m}|\Psi_{\rm random}\rangle$, for
some exponent $m$ which gives the number of iterations in the power method.
So, we wish to compute
$\frac{\langle\Psi_{\rm
random}|H(T_{0})^{m}a^{\dagger}_{\mu}a_{\nu}H(T_{0})^{m}|\Psi_{\rm
random}\rangle}{\langle\Psi_{\rm random}|H(T_{0})^{2m}|\Psi_{\rm
random}\rangle},$
where the denominator is to correctly normalize the state. Let us choose the
initial state to be a (normalized) random state from the basis for the
symmetric basis given before. We make this choice to make the “path integral”
simpler and since this is a complete orthonormal basis of states, a random
state from this basis has expected overlap with the largest eigenvector of
$H(T_{0})$ equal to $1/N^{{n_{bos}}}$. Then, both the numerator and
denominator above can be expressed by a summation over intermediate states
from this basis, requiring only space $\tilde{O}(\log(D(N,{n_{bos}}))m)$; this
summation is a “path integral”. The time required however is now
$\tilde{O}(D(N,{n_{bos}})^{m})$ and so becomes significantly worse if the
number of iterations is much more than $1$.
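A sketch of the trade-off just described, with an entry oracle standing in for $H(T_{0})$ (illustrative names; this is the naive sum over intermediate basis states, not an optimized routine):

```python
import itertools

def bracket_power(H_entry, D, x, m):
    """Compute <x|H^m|x> by summing over all sequences of intermediate
    basis states: space is O(m) indices, time is O(D^(m-1)) entry lookups.

    H_entry(a, b) -- oracle returning the matrix element <a|H|b>
    D             -- dimension D(N, n_bos)
    x             -- index of the initial/final basis state
    """
    total = 0.0
    for path in itertools.product(range(D), repeat=m - 1):
        seq = (x, *path, x)
        amp = 1.0
        for a, b in zip(seq, seq[1:]):
            amp *= H_entry(a, b)
        total += amp
    return total
```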
An improvement to this time can be obtained by using the algorithm of theorem
4.1 of Ref. [11]. This algorithm is expressed in terms of qubits, but we can
translate the algorithm to get an algorithm for a single qudit of dimension
$D(N,{n_{bos}})$, or, using the tradeoffs discussed there, ${n_{bos}}$ qudits
each of dimension $N$.
Let us explain this in detail, following the discussion there. We wish to
compute $\langle y|C|x\rangle$, for some fixed basis states $x,y$ from the
basis above (in our application $x=y$, where $C$ is a “circuit” of some depth
$d$ (for example $C=2m+1$ for the numerator above and $C=m$ for the
denominator above). While in Ref. [11], a circuit was assumed to be unitary,
there in fact is no need to assume that; for non-unitary circuits, the issues
with finite precision arithmetic do become more important but as remarked
above this can be handled with only polylogarithmic overhead. For us, a
circuit is not necessarily unitary; rather it is built out of a sequence of
operations, each of which is either $H(T_{0})$ or $a^{\dagger}_{\mu}a_{\nu}$;
more generally a circuit might include any operations built out of some tensor
of order $O(1)$ multiplying some number of creation and annihilation
operators.
For $d=1$, $\langle y|C|x\rangle$ can be computed in time ${\rm
poly}({n_{bos}})$. For $d>1$, we have
$\displaystyle\langle y|C|x\rangle=\sum_{z}\langle y|C_{[d\leftarrow
d/2+1]}|z\rangle\cdot\langle z|C_{[d/2\leftarrow 1]}|x\rangle,$ (40)
where the summation is over states $z$ in the basis and where $C_{[d\leftarrow
d/2+1]}$ and $C_{[d/2\leftarrow 1]}$ are subcircuits from the second and first
half of $C$. If $F(d)$ is the runtime at depth $d$, we have $F(d)\leq 2\cdot
D(N,{n_{bos}})\cdot F(\lceil d/2\rceil)$. So, $F(d)\leq{\rm
poly}({n_{bos}})(2D(N,{n_{bos}}))^{\lceil\log(d)\rceil}.$ So, the runtime is
$\tilde{O}((2D(N,{n_{bos}}))^{\lceil\log(2m+1)\rceil})$. This is much faster
than the path integral method but potentially much slower than the power
method, depending on the required $m$, i.e., for $E_{0}/E_{max}=\Theta(1)$, we
need $m$ proportional to $\log(D(N,{n_{bos}}))$ and so the time required is
superpolynomially worse than the time required if one stores the full vector.
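The divide-and-conquer evaluation behind the recurrence $F(d)\leq 2\cdot D(N,{n_{bos}})\cdot F(\lceil d/2\rceil)$ can be sketched as follows (illustrative; each layer is an entry oracle as above, and the circuit need not be unitary):

```python
def amplitude(circuit, D, y, x):
    """Compute <y|C|x> for a circuit given as a list of layers, each an
    oracle layer(a, b) -> <a|layer|b>, with circuit[0] applied first.
    Runtime F(d) satisfies F(d) <= 2*D*F(ceil(d/2)),
    i.e. roughly (2*D)^ceil(log2(d)) base-case evaluations.
    """
    d = len(circuit)
    if d == 1:
        return circuit[0](y, x)
    half = d // 2
    first, second = circuit[:half], circuit[half:]
    # <y|C|x> = sum_z <y|C_second|z> * <z|C_first|x>
    return sum(amplitude(second, D, y, z) * amplitude(first, D, z, x)
               for z in range(D))
```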
### 5.2 Quantum Algorithms
We now discuss quantum algorithms for the same problem. In contrast to the
classical algorithms above, all these algorithms take only polynomial space.
First let us describe a simple algorithm, given as algorithm 3. We then
describe a sequence of improvements.
For algorithm 3 and all subsequent algorithms, we analyze assuming that
$\lambda=\overline{\lambda}$, i.e., for purposes of analysis we consider the
problem of recovery rather than detection. All these algorithms report success
or failure and, if they fail, they are rerun until they succeed. We give
bounds on the expected runtime under the assumption that
$\lambda=\overline{\lambda}$. If we consider the problem of detection, and if
$\lambda=0$, then the algorithm will not report success in the given runtime
(since it will be unable to succeed in a certain phase estimation procedure)
and so all these algorithms can also be used for detection by running them for
some multiple of the given runtime and reporting that $\lambda=0$ if success
is not reported in that time.
#### 5.2.1 Maximally Entangled (or Maximally Mixed) Input State
Algorithms 3,4 in this subsubsection work on a Hilbert space which is the
tensor product of two Hilbert spaces, each of which have dimension
$D(N,{n_{bos}})$. (In some versions of the Hamiltonian simulation used in the
algorithms, it is convenient to embed the symmetric subspace of dimension
$D(N,{n_{bos}})$ within the full Hilbert space.)
We use a tensor product notation $A\otimes B$ to denote an operator that is a
tensor product of two operators $A,B$ on the two different tensor factors.
Algorithm 3 Quantum Algorithm (simplest, unamplified version). This and all
other quantum algorithms have the same inputs, outputs, and parameter choices
as algorithm 2.
* 1.
Prepare a maximally entangled state between the two qudits.
* 2.
Apply phase estimation using $H(T_{0})\otimes I$. Let $\psi$ be the resulting
state. If the resulting eigenvalue is larger than $(E_{0}+E_{cut})/2$, report
“success”. Otherwise, report “failure”.
* 3.
If success is reported, measure and return
$\langle\psi|a^{\dagger}_{\mu}a_{\nu}\otimes I|\psi\rangle.$
Steps $1-2$ are designed to prepare a state whose density matrix on the first
qudit has large projection onto the eigenspace of eigenvalue $\geq E_{cut}$.
For purposes of analysis, we trace out the second qudit, so that the input
state on the first qudit is a maximally mixed state. If success is reported
then (ignoring phase estimation error) we have indeed projected onto this
eigenspace. Ignoring phase estimation error, the probability of success is
$\geq 1/D(N,{n_{bos}})$.
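A small numerical check of the premise behind steps $1-2$ (a sketch with an illustrative dimension `D` standing in for $D(N,{n_{bos}})$): tracing out the second qudit of a maximally entangled state leaves the maximally mixed state, so any fixed eigenprojector is hit with probability $1/D$.

```python
import numpy as np

D = 8  # illustrative qudit dimension standing in for D(N, n_bos)

# Maximally entangled state |Phi> = (1/sqrt(D)) sum_i |i>|i>.
phi = np.eye(D).reshape(D * D) / np.sqrt(D)

# Reduced density matrix on the first qudit: rho = Tr_2 |Phi><Phi|.
rho = (np.outer(phi, phi.conj())
       .reshape(D, D, D, D)
       .trace(axis1=1, axis2=3))

assert np.allclose(rho, np.eye(D) / D)  # maximally mixed, as claimed
```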
Remark: we prepare a maximally entangled state between the two qudits so that
the density matrix on the first qudit is maximally mixed; we could equally
well modify the algorithm to use only a single qudit (reducing the space by a
factor of $2$) and prepare a random state on the first qudit. This
modification however requires some care when we discuss a version of the
algorithm that uses amplitude amplification below. In the unamplified version,
we can choose a new random state (for example, choosing it uniformly from any
orthogonal basis) each time we perform phase estimation; however, in the
amplified version, we must choose a fixed random initial state on the first
qudit and amplify the algorithm with that choice of initial state. We claim
(we omit the proof) that if one picks a tensor product of random single qudit
states $|v_{1}\otimes v_{2}\otimes\ldots\otimes v_{{n_{bos}}}\rangle$ where
$v_{a}$ are independently chosen from a Haar uniform distribution on the
sphere $|v_{a}|=1$, with high probability the state has squared overlap with
the leading eigenvector close to $N^{-{n_{bos}}}$; more precisely, “close”
means that with high probability the logarithm of the squared overlap is at
least $-{n_{bos}}\log(N)\cdot(1+o(1))$. Note that we might not have this
property of the overlap if we had chosen the initial state from the
computational basis, for example. Note also that we can efficiently prepare
states from this distribution. Sketch of proof of claim: consider the
logarithm of the probability that the sequence of measurements
$|v_{i}\rangle_{i}\langle v_{i}|$ succeeds for $i=1,\ldots,{n_{bos}}$ in
sequence. The probability that the $i$-th measurement succeeds, conditioned on
the previous measurements succeeding, can be computed from the trace of
$|v_{i}\rangle\langle v_{i}|$ with the reduced density matrix on the $i$-th
qudit of some state (i.e., the leading eigenvector projected by the previous
measurements), and for any such reduced density matrix with high probability
the logarithm of the trace is at least $-\log(N)\cdot(1+o(1))$.
Step $3$ of the algorithm measures some property of a state in the eigenspace
with eigenvalue $\geq E_{cut}$. It is possible that each time the algorithm is
run, one obtains a different energy measurement and hence a different state,
so that measuring this property of the state gives some expectation value of
$a^{\dagger}_{\mu}a_{\nu}$ in a mixed state. This does not matter since
theorems 3 or 5 also hold for mixed states.
We explain the measurement in step 3 in more detail below. The simplest
possibility is to simply measure one matrix element of
$a^{\dagger}_{\mu}a_{\nu}$. Since there are $N^{2}$ matrix elements, we then
need to repeat the algorithm ${\rm poly}(N,1/\epsilon)$ times to measure
each matrix element to accuracy $\epsilon$. We explain improvements to this in
subsection 5.3.
We explain the phase estimation in more detail below. First, let us analyze
the algorithm in a rough outline. Let the phase estimation be carried out to a
precision sufficiently smaller than $E_{0}-E_{max}$. To define this, we work
in an eigenbasis of $H(T_{0})$. Let $\tilde{\epsilon}$ be a bound on the error
probability of the phase estimation. More precisely, we will say that phase
estimation implements an operator $E_{PE}$ which is diagonal in the eigenbasis
such that on a normalized eigenvector $v_{i}$ with eigenvalue $\lambda_{i}$ we
have
$\displaystyle\lambda_{i}<E_{cut}\;\rightarrow\;\langle
v_{i}|E_{PE}|v_{i}\rangle\leq\tilde{\epsilon},$ (41)
$\displaystyle\lambda_{i}>(7/8)E_{0}+(1/8)E_{max}\;\rightarrow\;\langle
v_{i}|E_{PE}|v_{i}\rangle\geq 1-\tilde{\epsilon}.$
The reader should appreciate that there are a few energy scales here, chosen
rather arbitrarily. The exact value of the energies are not too important. We
have picked $E_{cut}$ to be partway between $E_{0}$ and $E_{max}$ so that it
is very unlikely that the largest eigenvalue of $H(G)$ is above $E_{cut}$ and
also very unlikely that $\lambda_{1}<E_{cut}$. We have picked the energy
cutoff in step 2 to be $(E_{0}+E_{cut})/2$ simply to pick some energy partway
between $E_{0}$ and $E_{cut}$ so that it is very unlikely that phase
estimation reports success on an eigenstate with energy $<E_{cut}$; see first
line of Eq. (41). In the last line of Eq. (41) we wrote
$(7/8)E_{0}+(1/8)E_{max}$ simply to pick some energy scale slightly below
$E_{0}$ above which it is very likely that phase estimation reports success;
for us later, it would suffice to instead have a bound for
$\lambda_{i}\geq(1-\eta)E_{0}+\eta E_{max}$ for any $\eta>0$. Of course, the
two lines in Eq. (41) are not completely symmetric about $(E_{0}+E_{cut})/2$.
Then, choose $\tilde{\epsilon}=\epsilon/D(N,{n_{bos}})$, so that the algorithm
reports success with probability at least
$(1-\tilde{\epsilon})/D(N,{n_{bos}})$ and, given that the algorithm reports
success, the resulting state has projection onto the eigenspace with
eigenvalue $\geq E_{cut}$ which is greater than or equal to
$\frac{(1-\tilde{\epsilon})}{(1-\tilde{\epsilon})+(D(N,{n_{bos}})-1)\tilde{\epsilon}}=1-O(\epsilon),$
where the first term in the denominator is the probability that it reports
success on $\psi_{\rm target}$ as input and the second term is the probability
of reporting success on a state in the eigenspace with eigenvalue $<E_{cut}$,
multiplied by $D(N,{n_{bos}})-1$, i.e., multiplied by an upper bound on the
dimensionality of that eigenspace. Taking $\epsilon\ll 1$, we can obtain a
large projection onto the eigenspace with eigenvalue $\geq E_{cut}$, so that
the cost of phase estimation increases logarithmically with
$D(N,{n_{bos}})/\epsilon$.
The success probability for $\epsilon\ll 1$ is greater than or equal to
$(1/D(N,{n_{bos}}))(1-\epsilon/D(N,{n_{bos}}))$, so for small $\epsilon$ it is
very close to $1/D(N,{n_{bos}})$. Hence, repeating the algorithm until it
succeeds, the expected runtime to obtain a single measurement of one matrix
element of $a^{\dagger}_{\mu}a_{\nu}$ is bounded by the time for phase estimation
multiplied by $O(D(N,{n_{bos}}))$.
To perform phase estimation, we use controlled simulation of the Hamiltonian
$H(T_{0})$. There are a large number of quantum simulation algorithms which
would work here, such as Refs. [14, 15, 16, 17, 18] to name just a few. There
are two broad possibilities. The first possibility is to work in the symmetric
subspace of dimension $D(N,{n_{bos}})$. In this case, $H(T_{0})$ is a sparse
Hamiltonian, and sparse simulation algorithms apply. The second possibility is
to use the Hamiltonian of Eq. (5) and embed the symmetric subspace into the
full Hilbert space; in this case, $H(T_{0})$ is a local Hamiltonian, in that
each term acts on a small number of qudits, each of dimension $N$, and local
simulation algorithms apply. The cost for these algorithms to simulate for a
time $t$ to error $\tilde{\epsilon}$ is ${\rm
poly}(t\|H(T_{0})\|,{n_{bos}},N,\log(\tilde{\epsilon}))$.
Using the simplest phase estimation algorithm of Ref. [19], the number of bits
that we need to phase estimate is $s=O(\log(\|H(T_{0})\|/(E_{0}-E_{max})))$.
The most expensive bit to obtain is the least significant bit, since obtaining
the $j$-th least significant bit requires simulating for a time proportional
to $2^{s-j}(E_{0}-E_{max})^{-1}$. So, we can obtain the least significant bit
to error $\tilde{\epsilon}/2$, then obtain the next least significant bit to
error $\tilde{\epsilon}/4$, and so on, making the total error
$\tilde{\epsilon}$. Of course, a large number of variations of the Kitaev
phase estimation algorithm exist in the literature, and any could be used
here.
With high probability, $\|H(T_{0})\|$ is ${\rm poly}(N)$. Thus, with high
probability the time for phase estimation is ${\rm
poly}(N,{n_{bos}},1/(E_{0}-E_{max}),\log(D(N,{n_{bos}})/\epsilon))$, giving an
algorithm runtime
$D(N,{n_{bos}}){\rm
poly}(N,{n_{bos}},1/(E_{0}-E_{max}),\log(D(N,{n_{bos}})/\epsilon)).$
We can speed this algorithm up quadratically by applying amplitude
amplification [20]. Modify the phase estimation step $2$ of algorithm 3 so
that the algorithm phase estimates the eigenvalue, determines if the
eigenvalue is larger than $E_{cut}$, then uncomputes the eigenvalue, returning
just a single bit of success or failure. See algorithm 4. Then, applying
amplitude amplification, with high probability the algorithm succeeds in
expected time $D(N,{n_{bos}})^{1/2}{\rm
poly}(N,{n_{bos}},1/(E_{0}-E_{max}),\log(D(N,{n_{bos}})/\epsilon)).$
Multiplying by ${\rm poly}(N,1/\epsilon)$ to measure
$a^{\dagger}_{\mu}a_{\nu}$ to accuracy $\epsilon$, the expected time is
still
$D(N,{n_{bos}})^{1/2}{\rm
poly}(N,{n_{bos}},1/(E_{0}-E_{max}),\log(D(N,{n_{bos}})/\epsilon)),$
giving a quadratic time improvement, up to ${\rm poly}(N)$ factors, and an
exponential space improvement, over the fastest classical algorithm described
above.
Algorithm 4 Quantum Algorithm (amplified version)
* 1.
Apply amplitude amplification to steps $1-2$ of algorithm 3, modifying step
$2$ to uncompute the eigenvalue and return only success or failure.
* 2.
If success is reported, measure and return
$\langle\psi|a^{\dagger}_{\mu}a_{\nu}\otimes I|\psi\rangle.$
#### 5.2.2 Chosen Input State: Simple Version
We can obtain a further quadratic speedup by modifying the initial state that
we phase estimate. In this subsubsection, let us first explain an algorithm
that gives the basic idea of the initial state preparation; we will only be
able to prove some slightly weaker results for this algorithm (in particular
we will prove a lower bound on the average inverse runtime, rather than an
upper bound on the average runtime). We expect that this is primarily a
technical issue and that some concentration of measure argument should allow
us to prove an upper bound on the average runtime. We will then describe in
the next subsubsection a modification to the algorithm which avoids this
technical difficulty and for which we can prove a quadratic improvement
without further assumption.
Instead of a maximally entangled state, we can use the tensor $T_{0}$ to
prepare a state with a larger projection onto $\psi_{\rm target}$. In this
subsubsection, we will work in the $N^{{n_{bos}}}$-dimensional Hilbert space of
Eq. (5), so we will use ${n_{bos}}$ qudits each of dimension $N$. The
unamplified version is algorithm 5 and a version with amplitude amplification
is algorithm 6.
The most important new step that must be explained is the initial state
preparation (we discuss some other details at the end of this subsubsection).
We use the fact that, given a classical list of amplitudes for some
$M$-dimensional vector, with the vector having unit norm, we can prepare a
quantum state on an $M$-dimensional qudit with the given amplitudes (up to an
ill-defined overall phase, of course) using a quantum circuit of depth $O(M)$
and using $O(M)$ classical computation. For example, labelling the basis
states $|0\rangle,|1\rangle,\ldots,|M-1\rangle$, one can start with initial
state $|0\rangle$ and apply a sequence of $M-1$ rotations in the two
dimensional subspaces spanned by $|i\rangle,|i+1\rangle$ for $i=0,\ldots,M-2$.
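A classical sketch of this rotation schedule (illustrative; it computes the $M-1$ angles for real, unit-norm amplitude vectors, assuming the last amplitude is nonnegative; general signs and phases need one extra phase rotation per step):

```python
import numpy as np

def two_level_rotations(amps):
    """Given a unit-norm real amplitude vector of length M, return the
    M-1 Givens rotation angles that prepare it from |0>, rotating in the
    subspaces span{|i>, |i+1>} for i = 0, ..., M-2.
    """
    amps = np.asarray(amps, dtype=float)
    angles, residual = [], 1.0  # norm remaining on states >= i
    for a in amps[:-1]:
        # Put amplitude a on |i>; rotate the remaining norm onto |i+1>.
        c = np.clip(a / residual, -1.0, 1.0) if residual > 0 else 1.0
        theta = np.arccos(c)
        angles.append(theta)
        residual *= np.sin(theta)
    return angles  # the final amplitude on |M-1> is the leftover residual
```

For example, `two_level_rotations([0.6, 0.8])` returns a single angle $\arccos(0.6)$, leaving amplitude $0.8$ on $|1\rangle$.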
Algorithm 5 Quantum Algorithm (improved input state, unamplified version)
* 1.
Use $T_{0}$ to prepare the initial state $\Psi_{\rm input}$ of Eq. (42).
* 2.
If the initial state is not in the symmetric subspace, report “failure”. If
the state is in the symmetric subspace, apply phase estimation using
$H(T_{0})$. Let $\psi$ be the resulting state. If the resulting eigenvalue is
larger than $(E_{0}+E_{cut})/2$, report “success”. Otherwise, report
“failure”.
* 3.
If success is reported, measure and return
$\langle\psi|a^{\dagger}_{\mu}a_{\nu}|\psi\rangle.$
Algorithm 6 Quantum Algorithm (amplified version)
* 1.
Apply amplitude amplification to steps $1-2$ of algorithm 5, modifying step
$2$ to uncompute the eigenvalue and uncompute the determination of whether the
state is in the symmetric subspace and to return only success or failure.
* 2.
If success is reported, measure and return
$\langle\psi|a^{\dagger}_{\mu}a_{\nu}|\psi\rangle.$
In this subsubsection, for simplicity in analysis, we assume that the error
$\epsilon$ in phase estimation is small enough to be negligible.
We use the same method to produce the input state for both even and odd $p$.
First consider the case that ${n_{bos}}$ is an integer multiple of $p$. For
any tensor $T$ of order $p$, let $|T\rangle$ denote the vector on $p$ qudits
(each of dimension $N$) with amplitudes given by the entries of the tensor.
This vector is correctly normalized if $|T|=1$. We prepare the input state
$\Psi_{\rm
input}=\frac{1}{|T_{0}|^{{n_{bos}}/p}}|T_{0}\rangle^{\otimes{n_{bos}}/p}.$
(42)
Preparing this state takes circuit depth $O(N^{p})$ since we can prepare
${n_{bos}}/p$ copies of the state $\frac{1}{|T_{0}|}|T_{0}\rangle$ in
parallel.
We want to know the expectation value $\langle\Psi_{\rm
input}|E_{PE}|\Psi_{\rm input}\rangle$, but to get oriented, let us estimate
the overlap $\langle\Psi_{\rm input}|\Psi_{\rm sig}\rangle$.
We have
$\langle v_{\rm sig}^{\otimes p}|T_{0}\rangle=\lambda N^{p}+\langle v_{\rm
sig}^{\otimes p}|G\rangle.$ (43)
The probability distribution of $\langle v_{\rm sig}^{\otimes p}|G\rangle$ is
a Gaussian with zero mean and unit variance, so with high probability,
$\langle v_{\rm sig}^{\otimes p}|G\rangle$ is $o(\lambda N^{p})$; indeed, for
any increasing function of $N$ which diverges as $N\rightarrow\infty$, with
high probability it is bounded by that function.
Hence, with high probability,
$\langle v_{\rm
sig}^{\otimes{n_{bos}}}|T_{0}^{\otimes{n_{bos}}/p}\rangle=(1-o(1))\cdot\lambda^{{n_{bos}}/p}N^{n_{bos}}.$
(44)
At the same time,
$\langle
T_{0}^{\otimes{n_{bos}}/p}|T_{0}^{\otimes{n_{bos}}/p}\rangle=|T_{0}|^{2{n_{bos}}/p}.$
(45)
We have $\mathbb{E}[|G|^{2}]=O(N^{p}),$ where the precise constant in the
big-O notation depends on whether we symmetrize $G$ or not and whether we use
complex entries or not. Further, $|G|^{2}$ is a sum of squares of independent
random variables (the entries of $G$). So, by central limit, with high
probability $|G|^{2}$ is bounded by $O(N^{p})$. So, with high probability,
$|T_{0}|^{2{n_{bos}}/p}=O(N^{{n_{bos}}})$.
So, with high probability,
$\frac{\Bigl{|}\langle v_{\rm
sig}^{\otimes{n_{bos}}}|T_{0}^{\otimes{n_{bos}}/p}\rangle\Bigr{|}^{2}}{\langle
v_{\rm sig}^{\otimes{n_{bos}}}|v_{\rm
sig}^{\otimes{n_{bos}}}\rangle\cdot\langle
T_{0}^{\otimes{n_{bos}}/p}|T_{0}^{\otimes{n_{bos}}/p}\rangle}\geq(1-o(1))\cdot\lambda^{2{n_{bos}}/p}.$
(46)
For $\lambda=CN^{-p/4}$, this is $(1-o(1))C^{2{n_{bos}}/p}N^{-{n_{bos}}/2}$.
If $\psi_{\rm target}$ were equal to $\Psi_{\rm sig}=N^{-{n_{bos}}/2}|v_{\rm
sig}^{\otimes{n_{bos}}}\rangle$, then for fixed $N^{-p/4}/\lambda$, Eq. (46)
would give a lower bound to the squared overlap of the initial state with
$\psi_{\rm target}$ which would be quadratically better (in terms of its
scaling with $N$) than the squared overlap for the maximally entangled input
state. So, after applying amplitude amplification, this would give a quartic
improvement over the fastest classical algorithm.
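A quick Monte Carlo sanity check of the scaling in Eq. (46) (a sketch; it assumes the conventions that $v_{\rm sig}$ has entries $\pm 1$ so that $|v_{\rm sig}|^{2}=N$, that $G$ has i.i.d. standard Gaussian entries, and it takes ${n_{bos}}=p=3$ so that a single copy of $T_{0}$ is used):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 24, 3
C = 3.0
lam = C * N ** (-p / 4)

v = rng.choice([-1.0, 1.0], size=N)          # |v_sig|^2 = N
v_tensor = np.einsum('i,j,k->ijk', v, v, v)  # v_sig^{tensor p}
G = rng.standard_normal((N,) * p)
T0 = lam * v_tensor + G

overlap = np.tensordot(v_tensor, T0, axes=p)
ratio = overlap ** 2 / (N ** p * np.sum(T0 ** 2))
print(ratio, lam ** 2)  # ratio should be close to lam^2 = C^2 N^{-p/2}
```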
However, $\psi_{\rm target}$ is not equal to $\Psi_{\rm
sig}=N^{-{n_{bos}}/2}|v_{\rm sig}^{\otimes{n_{bos}}}\rangle$ and so this does
not give a lower bound on $\langle\psi_{\rm target}|\Psi_{\rm input}\rangle$.
Nevertheless, we have:
###### Lemma 8.
Suppose that $E_{0}\geq(1+c)E_{max}$. Then, the projection of $\Psi_{\rm sig}$
onto the eigenspace with eigenvalue $\geq(7/8)E_{0}+(1/8)E_{max}$ is greater
than or equal to $\Omega(1)\cdot c/(1+c)$.
###### Proof.
The expectation value $\langle\Psi_{\rm sig}|H(T_{0})|\Psi_{\rm sig}\rangle$
was estimated in subsections 3.2.1,3.2.2. We have that with high probability,
this expectation value is $\geq(1-o(1))E_{0}$. With high probability, the
largest eigenvalue of $H(T_{0})$ in absolute value is bounded by
$(1+o(1))(E_{0}+E_{max})$; for the even case this is just the triangle
inequality, while for the odd case this uses lemma 3. Hence, by Markov’s
inequality applied to $\lambda_{1}-H(T_{0})$, the projection of $\Psi_{\rm
sig}$ onto the eigenspace with eigenvalue $\geq(7/8)E_{0}+(1/8)E_{max}$ is
greater than or equal to
$\bigl((1-o(1))E_{0}-((7/8)E_{0}+(1/8)E_{max})\bigr)/\bigl((1+o(1))(E_{0}+E_{max})\bigr)$, which is
$\geq\Omega(1)\cdot c/(1+c)$. ∎
So, $\Psi_{\rm sig}$ has some non-negligible projection onto the desired
eigenspace. This does not however yet give us a lower bound on
$\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm input}\rangle$: we can expand
$\Psi_{\rm input}$ as a linear combination of $\Psi_{\rm sig}$ and some
orthogonal state but we have not bounded the cross-terms in the expectation
value.
However, we now give heuristic evidence (not a proof) for a lower bound on
$\mathbb{E}[\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm input}\rangle]$. The main
assumption we will make is that $\lambda_{1}=E_{0}\cdot(1+o(1/\log(N)))$; we
expect that that assumption can be proven to hold with high probability. We
consider just the case of even $p$ (we expect that odd $p$ can be handled
similarly).
The main reason that we do not give a full proof is that a lower bound on
$\mathbb{E}[\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm input}\rangle]$ will only
imply a lower bound on the expected inverse squared runtime, rather than an
upper bound on the expected runtime, which is what we really want. Instead in
the next subsubsection we give a modified algorithm with an upper bound on the
expected runtime. Let us note that we conjecture that algorithm 6 does have a
quartic speedup for the expected runtime with high probability. One might guess
that this could be proven using the bound on expectation value of the inverse
squared runtime and some concentration of measure argument. However, we have
not been able to make this precise.
Let us clarify some terminology to distinguish two meanings of the word
“expected”, corresponding to averages over $G$ or averages over outcomes of a
quantum algorithm, i.e., to the “expected runtime”. From here on, when we
refer to the “runtime” of a phase estimation algorithm, this is a short way of
saying the expected runtime for a given choice of $G$. When we refer to the
“expectation value of the runtime”, we mean the expectation value over $G$ of
this expected runtime. Applying amplitude amplification, the runtime is
bounded by the time for the state preparation and phase estimation multiplied
by the inverse square-root of $\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm
input}\rangle$. So, lower bounding $\mathbb{E}_{G}[\langle\Psi_{\rm
input}|E_{PE}|\Psi_{\rm input}\rangle]$ will lower bound the expectation value
over $G$ of the inverse squared runtime, and we find that the expectation
value of the inverse squared runtime is at least
$\Bigl{(}N^{{n_{bos}}/4}{\rm
poly}(N,{n_{bos}},1/(E_{0}-E_{max}),\log(D(N,{n_{bos}})/\epsilon))\Bigr{)}^{-2}\Bigl{(}N^{-p/4}/\lambda\Bigr{)}^{-2{n_{bos}}/p}$
in the case that $E_{0}\geq E_{max}\cdot(1+c)$ for any $c>0$. For fixed
$N^{-p/4}/\lambda$, this gives a further quadratic improvement, in terms of
the scaling of the runtime with $N$, over algorithm 4.
Given the existence of that modified algorithm of subsubsection 5.2.3, we will
just sketch an outline of a possible proof of the lower bound on
$\mathbb{E}[\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm input}\rangle]$, leaving
several details out. The basic idea is to lower bound this expectation value
by a tensor network, working in some approximation which amounts to ignoring
fluctuations in $\lambda_{1}$, then average the tensor network by a sum of
pairings, and use this sum to lower bound the expectation value. Roughly the
physical idea is that the terms in $\Psi_{\rm input}$ proportional to $G$ will
tend on average to increase the overlap $\langle\Psi_{\rm input}|\psi_{\rm
target}\rangle$, rather than decrease it.
Consider the operator $\lambda_{1}^{-m}H(T_{0})^{m}$ for large $m$. If we take
$m$ sufficiently large compared to ${n_{bos}}\log(N)/\log(E_{0}/E_{max})$,
this operator will lower bound $E_{PE}$ up to some negligible error. That
is, we take $m$ large enough that $\lambda_{1}^{-m}H(T_{0})^{m}$ is negligibly
small acting on any eigenvector $v_{i}$ with eigenvalue
$\lambda_{i}\leq(7/8)E_{0}+(1/8)E_{max}$, and for
$\lambda_{i}\geq(7/8)E_{0}+(1/8)E_{max}$, Eq. (41) gives a lower bound on
$E_{PE}$ that is equal to $1$ up to some negligible phase estimation error
$\tilde{\epsilon}$ while clearly $\lambda_{1}^{-m}H(T_{0})^{m}\leq 1$. For
fixed $E_{0}/E_{max}$, it suffices to take $m=O({n_{bos}}\log(N))$.
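To see the sufficiency of $m=O({n_{bos}}\log(N))$ concretely (a one-line check, using $\lambda_{1}\geq(1-o(1))E_{0}$): on an eigenvector with $|\lambda_{i}|\leq(7/8)E_{0}+(1/8)E_{max}$ we have
$\Bigl{|}\frac{\lambda_{i}}{\lambda_{1}}\Bigr{|}^{m}\leq\Bigl{(}(1+o(1))\frac{(7/8)E_{0}+(1/8)E_{max}}{E_{0}}\Bigr{)}^{m}=e^{-\Omega(m)}$
for fixed $E_{0}/E_{max}>1$, which is smaller than any fixed power of $D(N,{n_{bos}})^{-1}$ once $m\geq c\,{n_{bos}}\log(N)$ for a suitable constant $c$.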
For the range of $m$ that we consider here, by assumption we can ignore the
fluctuations in $\lambda_{1}$, i.e., we approximate
$\mathbb{E}[\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm input}\rangle]\gtrsim(\mathbb{E}[\lambda_{1}])^{-m}\mathbb{E}[\langle\Psi_{\rm input}|H(T_{0})^{m}|\Psi_{\rm input}\rangle]$ (47)
$\approx(\mathbb{E}[\lambda_{1}])^{-m}\frac{1}{\mathbb{E}[\langle T_{0}^{\otimes{n_{bos}}/p}|T_{0}^{\otimes{n_{bos}}/p}\rangle]}\mathbb{E}[\langle T_{0}^{\otimes{n_{bos}}/p}|H(T_{0})^{m}|T_{0}^{\otimes{n_{bos}}/p}\rangle],$ (48)
where in the second line we further approximate that we can ignore
fluctuations in the norm $\langle
T_{0}^{\otimes{n_{bos}}/p}|T_{0}^{\otimes{n_{bos}}/p}\rangle$ and treat it as
a constant (the proof is a standard large deviation argument on the norm of
the tensor).
The quantity $\langle
T_{0}^{\otimes{n_{bos}}/p}|H(T_{0})^{m}|T_{0}^{\otimes{n_{bos}}/p}\rangle$ can
be evaluated by a sum of tensor networks, using the full Hilbert space, i.e.,
for each choice of $i_{1},\ldots,i_{p/2}$ in each of the $m$ factors
of $H(T_{0})$ we have a tensor network. We can then write each tensor network
as a sum of tensor networks, inserting either $\lambda v_{\rm sig}^{\otimes
n}$ or $G$ for each tensor, and we can average these tensor networks over $G$
using the methods of appendix A by summing over pairings. Since every term in
this sum over networks and pairings is positive, if we restrict to some subset
of terms, we get a lower bound. Let us restrict to the terms in which for
$\Psi_{\rm input}$, we choose $\lambda v_{\rm sig}^{\otimes p}$ for every
tensor. For this set of terms, the tensor network computes precisely
$\mathbb{E}[\langle\lambda^{{n_{bos}}/p}v_{\rm
sig}^{\otimes{n_{bos}}}|H(T_{0})^{m}|\lambda^{{n_{bos}}/p}v_{\rm
sig}^{\otimes{n_{bos}}}\rangle]$. This in turn is
$\geq(E_{0}\cdot(1-o(1/m)))^{m}|\lambda^{{n_{bos}}/p}v_{\rm
sig}^{\otimes{n_{bos}}}|^{2}$, since $\langle\Psi_{\rm sig}|H(T_{0})|\Psi_{\rm
sig}\rangle\geq E_{0}\cdot(1-o(1/m))$ for $m=O({n_{bos}}\log(N))$. So the
tensor network is lower bounded by $(1-o(1))E_{0}^{m}|\lambda^{{n_{bos}}/p}v_{\rm
sig}^{\otimes{n_{bos}}}|^{2}$.
So, with these approximations we have lower bounded
$\mathbb{E}[\langle\Psi_{\rm input}|E_{PE}|\Psi_{\rm input}\rangle]$.
We make some implementation remarks on the algorithm. The algorithm as
described requires measuring whether we are in the symmetric subspace. Note
that the input state $\Psi_{\rm input}$ need not be in the symmetric subspace.
Such a projection can be done for example by phase estimating a Hamiltonian
which is a sum of permutation operators. One can also omit this projection
onto the symmetric subspace since our upper bounds on $H(G)$ holds both in the
full Hilbert space and in the symmetric subspace.
We have considered the case that ${n_{bos}}$ is an integer multiple of $p$. If
${n_{bos}}=kp+l$ for some integers $k,l$ with $0<l<p$, then one can use $l$
ancilla qudits, and prepare an input state which is equal to
$\frac{1}{|T_{0}|^{k}}|T_{0}\rangle^{\otimes k},$
on $kp$ qudits, tensored with a maximally entangled state between the
remaining $l$ qudits and the remaining $l$ ancillas. The idea is that we get
the additional quadratic improvement in overlap on $kp$ of the qudits, and the
remaining $l$ ancilla only cost $1/{\rm poly}(N)$ overlap since $l=O(1)$.
#### 5.2.3 Chosen Input State: Modified Version
We now modify the algorithm 5 (and its amplified version) to obtain an
algorithm for which we can prove the quadratic improvement over algorithm 4
without any assumption. Consider given $T_{0}$. Let $\Delta$ be a $p$-th order
tensor, chosen from the same distribution as $G$. Consider the tensor
$T_{0}^{\prime}=T_{0}+x\Delta\equiv\lambda v_{\rm sig}^{\otimes p}+G^{\prime}$
for some real scalar $x$, where the tensor $G^{\prime}\equiv G+x\Delta$ has
Gaussian entries with variance of the entries equal to $1+x^{2}$. We will
assume $x=O(1)$; indeed later we will choose $x=o(1)$. Let us write $\Psi_{\rm
input}(T)\equiv|T|^{-{n_{bos}}/p}|T^{\otimes{n_{bos}}/p}\rangle$ and
$E_{PE}(T)$ to denote the phase estimation operator $E_{PE}$ for Hamiltonian
$H(T)$.
We have
$G=\frac{1}{1+x^{2}}(G+x\Delta)+\frac{x}{1+x^{2}}(xG-\Delta)=\frac{1}{1+x^{2}}G^{\prime}+\frac{x}{\sqrt{1+x^{2}}}\frac{(xG-\Delta)}{\sqrt{1+x^{2}}}=\frac{1}{1+x^{2}}G^{\prime}+\frac{x}{\sqrt{1+x^{2}}}\delta,$ (5.2.3)
where $\delta=(1+x^{2})^{-1/2}(xG-\Delta)$. The two random variables
$G^{\prime}$ and $\delta$ have vanishing covariance, so Eq. (5.2.3) expresses
$G$ as a scalar multiple of $G^{\prime}$ plus an additional Gaussian random
variable $\delta$ which is independent of $G^{\prime}$. The variable $\delta$
also has variance $1$.
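A tiny numerical check of this decomposition (a sketch, with scalar stand-ins for the tensor entries): $G^{\prime}$ and $\delta$ are uncorrelated, $\delta$ has unit variance, and the reconstruction of $G$ is exact.

```python
import numpy as np

rng = np.random.default_rng(1)
x = 0.3
G = rng.standard_normal(200_000)
Delta = rng.standard_normal(200_000)

G_prime = G + x * Delta
delta = (x * G - Delta) / np.sqrt(1 + x ** 2)

print(np.mean(G_prime * delta))  # ~0: vanishing covariance
print(np.var(delta))             # ~1: unit variance
# Reconstruction G = G'/(1+x^2) + x*delta/sqrt(1+x^2) holds exactly:
print(np.max(np.abs(G - (G_prime / (1 + x ** 2)
                         + x * delta / np.sqrt(1 + x ** 2)))))
```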
Given $G$ and $G^{\prime}$, let
$G(y)=yG^{\prime}+(1-y)G=(y+\frac{1-y}{1+x^{2}})G^{\prime}+\frac{x(1-y)}{\sqrt{1+x^{2}}}\delta.$
(51)
so that $G(y)$ linearly interpolates between $G$ and $G^{\prime}$.
The idea behind the algorithm is to take a given $G$ as input, randomly
perturb to produce $G^{\prime}$, and then consider several input states
$\Psi_{\rm input}(\lambda v_{\rm sig}^{\otimes p}+G(y))$ with different
choices of $y\in[0,1]$.
Let us first recall a property of normalization. We have $\Psi_{\rm
input}=\frac{1}{|T_{0}|^{{n_{bos}}/p}}|T_{0}\rangle^{\otimes{n_{bos}}/p}.$ Let
us write
$Z=|T_{0}|^{2{n_{bos}}/p},$
so $\Psi_{\rm input}=Z^{-1/2}|T_{0}\rangle^{\otimes{n_{bos}}/p}.$ As shown
before, with high probability
$Z^{1/2}=|T_{0}|^{{n_{bos}}/p}=O(N^{{n_{bos}}/2})$. Further, with high
probability the fluctuations of $Z^{1/2}$ are $o(1)$ compared to its
expectation value. So, from here on we will treat this normalization factor
$Z$ as a constant, i.e., of course the normalization depends on $N,{n_{bos}}$
but we will ignore its dependence on $G$. We emphasize that we are not making
any additional assumption here as with high probability the fluctuations are
asymptotically negligible; we are simply choosing not to write the
normalization explicitly. (Remark: indeed, all we really need is an upper
bound on $|T_{0}|^{{n_{bos}}/p}$ that holds with high probability, since the
normalization constant $|T_{0}|^{{n_{bos}}/p}$ always appears in the
denominator.) Further, we will, without remarking on it further, treat other
normalization factors such as $|\Psi_{\rm input}(\lambda v_{\rm sig}^{\otimes
p}+G(y))|$ as constants, and we will introduce other notation for those
constants. Indeed, because we treat the normalization factors such as $Z$ as
constants, we will mostly work with un-normalized states which simplifies some
of the calculations.
In an abuse of notation, let us define $\Psi_{\rm input}(y)=\Psi_{\rm
input}(\lambda v_{\rm sig}^{\otimes p}+G(y))$, and write $Z(y)=|\lambda v_{\rm
sig}^{\otimes p}+G(y)|^{2{n_{bos}}/p}$. Let us write $T_{0}(y)=\lambda v_{\rm
sig}^{\otimes p}+G(y)$.
Let $\Psi(y)$ denote the un-normalized state
$|T_{0}(y)\rangle^{\otimes{n_{bos}}/p}$ so that $\Psi_{\rm
input}(y)=Z(y)^{-1/2}\Psi(y)$. Let us expand $\Psi(y)$ as a series in $\delta$
and define $\Psi^{0}(y)$ to denote the zeroth order term in $\delta$.
As a warmup, let us consider
$\langle\Psi^{0}(y)|E_{PE}(T_{0}^{\prime})|\Psi^{0}(y)\rangle$. We consider
the higher order terms in $\delta$ later.
We expand the state $\Psi^{0}(y)$ as a series in $(y+\frac{1-y}{1+x^{2}})$.
Doing this means that we express
$\langle\Psi^{0}(y)|E_{PE}(T_{0}^{\prime})|\Psi^{0}(y)\rangle$ as a polynomial
of degree $2{n_{bos}}$ which we write as $\sum_{i\geq
0}a_{i}(y+\frac{1-y}{1+x^{2}})^{i}.$ The zero-th order term $a_{0}$ is simply
equal to $\langle(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}\rangle$, which is lower bounded in lemma 8. Hence,
$a_{0}\geq\lambda^{2{n_{bos}}/p}N^{{n_{bos}}}\Omega(1)\cdot c/(1+c)$ (52)
for $E_{0}\geq(1+c)E_{max}$.
Now we use a result about polynomials:
###### Lemma 9.
Let $p(z)$ be a polynomial of degree $2{n_{bos}}$. Let $[a,b]$ be an interval
with $0\leq a<b$. Then,
${\rm
max}_{z\in[a,b]}|p(z)|\geq\Bigl{|}\frac{a+b}{b-a}\Bigr{|}^{-2{n_{bos}}}\exp(-O({n_{bos}}))|p(0)|.$
(53)
###### Proof.
We minimize ${\rm max}_{z\in[a,b]}|p(z)|$ over polynomials of degree
$2{n_{bos}}$ with given value $p(0)$. Applying an affine transformation
$z\rightarrow(2/(b-a))(z-(a+b)/2)$, this is equivalent to minimizing ${\rm
max}_{z\in[-1,1]}|p(z)|$ over polynomials of degree $2{n_{bos}}$ with given value
$p(z_{0})$ for $z_{0}=(a+b)/(a-b)$. We claim that this is minimized by
$\frac{p(z_{0})}{T_{2{n_{bos}}}(z_{0})}T_{2{n_{bos}}}(z),$
where $T_{2{n_{bos}}}$ is a Chebyshev polynomial. Proof of claim: suppose some
other polynomial $q(z)$ has a smaller maximum absolute value on $[-1,+1]$ with
$q(z_{0})=p(z_{0})$. Then the polynomial $p(z)-q(z)$ has a zero at $z_{0}$ but
also has at least $2{n_{bos}}$ zeros on the interval $[-1,+1]$; this follows
from the intermediate value theorem because $T_{2{n_{bos}}}$ has
$2{n_{bos}}+1$ extreme points on the interval which alternate signs. This
gives a contradiction since $p(z)-q(z)$ is degree at most $2{n_{bos}}$.
So, ${\rm max}_{z\in[a,b]}|p(z)|\geq|p(z_{0})|/|T_{2{n_{bos}}}(z_{0})|$. If
$0\not\in[a,b]$ then $|z_{0}|>1$. We can bound $T_{2{n_{bos}}}(z_{0})$ for
$|z_{0}|>1$ by $|z_{0}|^{2{n_{bos}}}$ times the sum of absolute values of
coefficients of $T_{2{n_{bos}}}$. This sum of coefficients is bounded by
$\exp(O({n_{bos}}))$. ∎
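A numerical illustration of lemma 9 (a sketch with arbitrary parameters; the slack factor `np.e ** deg` stands in for the $\exp(O({n_{bos}}))$ term, with `deg` playing the role of $2{n_{bos}}$):

```python
import numpy as np

rng = np.random.default_rng(2)
deg, a, b = 6, 0.5, 1.0               # interval [a,b] with 0 < a < b
factor = ((a + b) / (b - a)) ** deg   # extremal (Chebyshev) growth scale

zs = np.linspace(a, b, 2001)
for _ in range(5):
    coeffs = rng.standard_normal(deg + 1)
    p = np.polynomial.Polynomial(coeffs)
    max_ab = np.max(np.abs(p(zs)))
    # Lemma 9-style comparison: max on [a,b] vs |p(0)| over the growth factor
    print(max_ab >= abs(p(0)) / (factor * np.e ** deg))
```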
We apply this lemma to lower bound ${\rm max}_{y\in[0,1]}|\sum_{i\geq
0}a_{i}(y+\frac{1-y}{1+x^{2}})^{i}|$. For $z=y+\frac{1-y}{1+x^{2}}$, with
$b=1$ and $a=(1+x^{2})^{-1}$, this is ${\rm max}_{z\in[a,b]}|p(z)|$ with
$p(z)=\sum_{i}a_{i}z^{i}$. So, using that $(a+b)/(b-a)=(2+x^{2})/x^{2}=O(1/x^{2})$, we have
${\rm max}_{y\in[0,1]}|\sum_{i\geq 0}a_{i}(y+\frac{1-y}{1+x^{2}})^{i}|\geq
x^{4{n_{bos}}}\exp(-O({n_{bos}}))|a_{0}|.$ (54)
However, for purposes of an algorithm, we need to consider not just the
maximum of this polynomial $p(z)$ over an interval, but also consider whether
we can come close to this maximum by sampling it at some small number of
discrete points. Fortunately we have the following lemma:
###### Lemma 10.
Let $p(z)$ be a polynomial of degree $2{n_{bos}}$. Let $[a,b]$ be an interval
which does not contain $0$. Then, with probability at least $1/2$, if one
selects a random point $z$ in the interval $[a,b]$ from the uniform
distribution, we have
$|p(z)|\geq\Bigl{|}\frac{a+b}{b-a}\Bigr{|}^{-2{n_{bos}}}\exp(-O({n_{bos}}))|p(0)|$.
###### Proof.
Apply an affine transformation $z\rightarrow(2/(b-a))(z-(a+b)/2)$, which maps
$a$ to $-1$, $b$ to $+1$ and $0$ to $(a+b)/(a-b)$. Write
$p(z)=\prod_{i}(z-z_{i})$ where $z_{i}$ are zeros of the polynomial; we
multiply $p(z)$ by a normalization constant so that the highest order
coefficient is equal to $1$. Then $\log(|p(z)|)=\sum_{i}{\rm
Re}(\log(z-z_{i}))$. Let $A$ denote the average of this logarithm over the
interval $[-1,+1]$; by calculus this is
$A=\frac{1}{2}\sum_{i}{\rm
Re}\Bigl{(}(z-z_{i})\log(z-z_{i})-(z-z_{i})\Bigr{)}\Bigl{|}_{z=-1}^{z=+1}.$
We claim that
$\log(p((a+b)/(a-b)))-A\leq
2{n_{bos}}\cdot\Bigl{(}\log((a+b)/(a-b))+O(1)\Bigr{)}.$ (55)
To show this let $T_{i}$ denote a given term in the sum for $A$, i.e.
$T_{i}=(1/2){\rm Re}((z-z_{i})\log(z-z_{i})-(z-z_{i}))\Bigl{|}_{z=-1}^{z=+1}$.
Let $D_{i}\equiv{\rm Re}(\log((a+b)/(a-b)-z_{i}))-T_{i}$. By considering
various cases, we will show that $D_{i}\leq\log((a+b)/(a-b))+O(1)$ which
implies Eq. (55). Proof of claim: for $|z_{i}|$ less than or equal to some
fixed constant (for example, $|z_{i}|\leq 10$), $T_{i}$ is lower bounded by
some absolute constant, so $D_{i}\leq\log((a+b)/(a-b))+O(1)$. For $|z_{i}|$
larger than this fixed constant (for example, $|z_{i}|>10$), $T_{i}$ is at
least $\log(|z_{i}|)$ minus some other absolute constant, so $D_{i}\leq{\rm
Re}(\log((a+b)/(a-b)-z_{i})-\log(z_{i}))+O(1)={\rm
Re}(\log((a+b)/(a-b)))+O(1)$.
Further, for each $i$, we see that ${\rm max}_{z\in[-1,+1]}{\rm
Re}(\log(z-z_{i}))$ is upper bounded by $T_{i}+O(1)$. Hence, ${\rm
max}_{z\in[-1,+1]}\log(|p(z)|)$ is upper bounded by $A+O({n_{bos}})$. Hence,
with probability at least $1/2$, for $z$ chosen uniformly in $[-1,+1]$, we
have that $\log(|p(z)|)\geq A-O({n_{bos}})$.
Hence, with probability at least $1/2$, we have that $\log(|p(z)|)\geq\log(|p((a+b)/(a-b))|)-2{n_{bos}}\cdot\log\bigl{(}(a+b)/(a-b)\bigr{)}-O({n_{bos}})$, as claimed. ∎
So, noting that for $z=y+\frac{1-y}{1+x^{2}}$, a uniform choice of $y$ on
$[0,1]$ is the same as a uniform choice of $z$ on $[1/(1+x^{2}),1]$, we have
###### Lemma 11.
For $y$ chosen randomly from the uniform distribution on $[0,1]$, the quantity
$\langle\Psi^{0}(y)|E_{PE}(T_{0}^{\prime})|\Psi^{0}(y)\rangle$ is greater than
or equal to $\langle(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}\rangle\cdot\exp(-O({n_{bos}}))x^{4{n_{bos}}}$.
Using Eq. (52),
$\langle\Psi^{0}(y)|E_{PE}(T_{0}^{\prime})|\Psi^{0}(y)\rangle\geq\lambda^{2{n_{bos}}/p}N^{{n_{bos}}}\exp(-O({n_{bos}}))x^{4{n_{bos}}}\cdot
c/(1+c).$ (56)
We later will worry about optimizing over $x$. Increasing $x$ will increase
the right-hand side of Eq. (56), but it will also change the normalization
constant $Z(y)$ that we must multiply by to correctly normalize the input
state and also will worsen the threshold for recovery (since $E_{max}$ will
increase). For now, let us note for orientation that if we choose, for
example, $x=0.01$, then the tensor $T_{0}^{\prime}$ is chosen from the same
distribution as $T_{0}$ up to a multiplication by a factor of
$(1+0.01^{2})^{1/2}$ which is very close to $1$. This leads to only a very
small reduction in the threshold $\lambda$ at which recovery is possible. At
the same time, the change in normalization $(1+0.01^{2})^{{n_{bos}}}$ is
asymptotically negligible (for fixed ${n_{bos}}$) compared to the improvement
which is polynomial in $N^{n_{bos}}$.
We now consider the additional terms depending on $\delta$, i.e., due to the
difference $\Psi(y)-\Psi^{0}(y)$. At first glance, we have not gained much by
the above trick introducing $T^{\prime}$ since we still have this Gaussian
random variable $\delta$ to consider. However, we have the advantage now that
in $\langle\Psi(y)|E_{PE}(T_{0}^{\prime})|\Psi(y)\rangle$, the operator
$E_{PE}(T_{0}^{\prime})$ does not depend on $\delta$ and the states in the bra
and ket depend on $\delta$ only polynomially (treating the overall
normalization as a constant).
Let us outline our approach. The main worry that we have is that this quantity
$\langle\Psi(y)|E_{PE}(T_{0}^{\prime})|\Psi(y)\rangle$ will have a probability
distribution that is peaked near some “small” value, i.e., much less than the
value of the overlap at $\delta=0$, rather than some “large” value, i.e.,
roughly comparable to the value of the overlap at $\delta=0$. To deal with
this, we will use a trick of adding additional noise to make the expectation
value of the overlap large, i.e., roughly comparable to the value of the
overlap at $\delta=0$. Now we still need to worry about whether the
probability function might have some large probability of having a small
value. So, we will then appeal to a theorem of Carbery-Wright on
“anti-concentration” of polynomials of Gaussian random variables, bounding the
probability that the probability lies in some small interval. This theorem
gives us a useful bound unless the variance of the polynomial is small;
however, in that case we can show that the polynomial is likely to be close to
its expectation value.
The trick of adding additional noise is as follows. We perturb the input state
by adding additional Gaussian random noise. Let $\Psi(y,x^{\prime})$ denote
the unnormalized state
$|\lambda v_{\rm sig}^{\otimes
p}+(y+\frac{1-y}{1+x^{2}})G^{\prime}+\frac{x(1-y)}{\sqrt{1+x^{2}}}(\delta+\delta^{\prime})\rangle^{\otimes{n_{bos}}/p},$
where $\delta^{\prime}$ is an additional tensor chosen from a Gaussian
distribution with some standard deviation $x^{\prime}$. So, the tensor
$\delta+\delta^{\prime}$ is sampled from some distribution with variance
$1+x^{\prime 2}$. Let $Z(y,x^{\prime})=|\Psi(y,x^{\prime})|^{2}$. Let
$\Psi_{\rm input}(y,x^{\prime})=Z(y,x^{\prime})^{-1/2}\Psi(y,x^{\prime})$ be
the “perturbed input state”.
Then, we consider the expectation value over $\delta^{\prime}$ of
$\langle\Psi(y,x^{\prime})|E_{PE}(T_{0}^{\prime})|\Psi(y,x^{\prime})\rangle$;
we consider this expectation value as a series in the variable $1+x^{\prime
2}$. Using the same treatment as above of this expectation value as a
polynomial, we find that if $x^{\prime 2}$ is chosen uniformly from the
interval $[0,x^{2}]$ then, with probability at least $1/2$, the quantum
expectation value
$\langle\Psi(y,x^{\prime})|E_{PE}(T_{0}^{\prime})|\Psi(y,x^{\prime})\rangle$
is at least equal to the quantum expectation value with $\delta^{\prime}=0$
multiplied by $\exp(-O({n_{bos}}))x^{8{n_{bos}}}$.
Hence,
###### Lemma 12.
With probability at least $1/4$, for uniform random choices of $y$ and
$x^{\prime 2}$ on $[0,1]$ and $[0,x^{2}]$, the expectation value over
$\delta,\delta^{\prime}$ of
$\langle\Psi(y,x^{\prime})|E_{PE}(T_{0}^{\prime})|\Psi(y,x^{\prime})\rangle$
is at least $\langle(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}\rangle\cdot\exp(-O({n_{bos}}))x^{8{n_{bos}}}$.
We emphasize that throughout we are treating $G^{\prime}$ as fixed and
considering $\delta,\delta^{\prime}$ as random. Now we use anti-concentration
in the Gaussian random variables $\delta,\delta^{\prime}$. The following lemma
is a consequence of the Carbery-Wright theorem [21]:
###### Lemma 13.
Let $p(z_{1},\ldots,z_{n})$ be a polynomial of degree at most $k$ in $n$
independent normally distributed random variables. Then, for any
$t\in{\mathbb{R}}$ and any $\delta>0$ with $|t-\mathbb{E}[p]|>\delta$,
${\rm Pr}[|p(z_{1},\ldots,z_{n})-t|\leq\delta]\leq\Bigl{(}\frac{\delta}{|t-\mathbb{E}[p]|-\delta}\Bigr{)}^{2/(2k+1)}O(k)^{2k/(2k+1)}\leq\Bigl{(}\frac{\delta}{|t-\mathbb{E}[p]|-\delta}\Bigr{)}^{2/(2k+1)}O(k),$ (57)
where the big-O notation here hides a universal constant.
As a corollary, choosing $t=0$,
${\rm
Pr}[|p|\leq\delta|\mathbb{E}[p]|]\leq\Bigl{(}\frac{\delta}{1-\delta}\Bigr{)}^{2/(2k+1)}O(k)=O(\delta^{2/(2k+1)})O(k).$
(58)
###### Proof.
Let ${\rm Var}(p)$ denote the variance of $p(\cdot)$. As a trivial bound,
${\rm Pr}[|p(z_{1},\ldots,z_{n})-t|\leq\delta]\leq{\rm
Var}(p)/\Bigl{(}|t-\mathbb{E}[p]|-\delta\Bigr{)}^{2}.$ (59)
By Carbery-Wright,
${\rm Pr}[|p(z_{1},\ldots,z_{n})-t|\leq\delta]\leq O(k)\cdot(\delta/\sqrt{{\rm
Var}(p)})^{1/k}.$ (60)
Maximizing the minimum of the bounds from Eqs. (59),(60) over ${\rm Var}(p)$, Eq. (57) follows.
∎
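For completeness, the optimization in the last step can be made explicit (a sketch): write $A=|t-\mathbb{E}[p]|-\delta$ and $\sigma^{2}={\rm Var}(p)$. The bound of Eq. (59) increases in $\sigma$ while the bound of Eq. (60) decreases in $\sigma$, so the maximum of their minimum is attained where they cross, $\sigma^{2}/A^{2}=O(k)(\delta/\sigma)^{1/k}$, i.e., $\sigma^{2+1/k}=O(k)\,\delta^{1/k}A^{2}$. Substituting back into $\sigma^{2}/A^{2}$ gives $O(k)^{2k/(2k+1)}(\delta/A)^{2/(2k+1)}$, which is the first bound of Eq. (57).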
Hence, from Eq. (58),
###### Lemma 14.
Consider a choice of $y,x^{\prime}$ such that the expectation value over
$\delta,\delta^{\prime}$ of the quantum expectation value
$\langle\Psi(y,x^{\prime})|E_{PE}(T_{0}^{\prime})|\Psi(y,x^{\prime})\rangle$
is at least $\langle(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}\rangle\cdot\exp(-O({n_{bos}}))x^{8{n_{bos}}}$. Then,
for random choice of $\delta,\delta^{\prime}$, the quantum expectation value
$\langle\Psi(y,x^{\prime})|E_{PE}(T_{0}^{\prime})|\Psi(y,x^{\prime})\rangle$
is at least
$\langle(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}\rangle\cdot\exp(-O({n_{bos}}))x^{8{n_{bos}}}\delta,$
with probability at least $1-O({n_{bos}})O(\delta)^{2/(2{n_{bos}}+1)}$.
Lemma 14 considers unnormalized input states. Dividing by normalization
constant $Z(y,x^{\prime})$ we can get a lower bound on the expectation value
for the normalized input state $\Psi_{\rm input}(y,x^{\prime})$ by (for
$E_{0}\geq(1+c)E_{max}$):
$\frac{1}{Z(y,x^{\prime})}\langle(\lambda v_{\rm sig}^{\otimes p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes p})^{\otimes{n_{bos}}/p}\rangle\cdot\exp(-O({n_{bos}}))x^{8{n_{bos}}}\delta=\lambda^{2{n_{bos}}/p}\exp(-O({n_{bos}}))x^{8{n_{bos}}}\delta\cdot\Omega(1)=\Bigl{(}\frac{\lambda}{N^{-p/4}}\Bigr{)}^{2{n_{bos}}/p}N^{-{n_{bos}}/2}x^{8{n_{bos}}}\delta\cdot\Omega(1)=\Bigl{(}\frac{\lambda}{N^{-p/4}}\Bigr{)}^{2{n_{bos}}/p}D(N,{n_{bos}})^{-1/2}x^{8{n_{bos}}}\delta\cdot\Omega(1),$
where the $\exp(-O({n_{bos}}))$ factor is absorbed into the $\Omega(1)$ for fixed ${n_{bos}}$ in the last two expressions.
Finally, let us pick $x=1/\log(N)$. Then, we consider the following algorithm
7.
Algorithm 7 Quantum Algorithm (modified improved input state, unamplified
version)
* 1.
Choose random $y,x^{\prime}$. Sample $T_{0}^{\prime},\delta^{\prime}$
randomly. Prepare input state $\Psi_{\rm input}(y,x^{\prime})$.
* 2.
If the initial state is not in the symmetric subspace, report “failure”. If
the state is in the symmetric subspace, apply phase estimation using
$H(T_{0}^{\prime})$. Let $\psi$ be the resulting state. If the resulting eigenvalue is
larger than $(E_{0}+E_{cut})/2$, report “success”. Otherwise, report
“failure”.
* 3.
If success is reported, measure and return
$\langle\psi|a^{\dagger}_{\mu}a_{\nu}|\psi\rangle.$
We apply amplitude amplification to algorithm 7, under the assumption that
indeed
$\langle\Psi(y,x^{\prime})|E_{PE}(T_{0}^{\prime})|\Psi(y,x^{\prime})\rangle$
is at least $\langle(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}|E_{PE}(T_{0}^{\prime})|(\lambda v_{\rm sig}^{\otimes
p})^{\otimes{n_{bos}}/p}\rangle\cdot\exp(-O({n_{bos}}))x^{8{n_{bos}}}\delta.$
From the theorems above, with high probability in $\delta^{\prime}$, this
happens at least $1/4$ of the time. If the assumption does not hold, we re-
sample $y,x^{\prime}$ and try again. Then, we find that for $E_{0}\geq
# Deterministic Algorithms
for Low Degree Factors
of Constant Depth Circuits
Mrinal Kumar, Varun Ramanathan, and Ramprasad Saptharishi
Tata Institute of Fundamental Research, Mumbai, India. Email: {mrinal, varun.ramanathan<EMAIL_ADDRESS>Research supported by the Department of Atomic Energy, Government of India, under project 12-R&D-TFR-5.01-0500.
###### Abstract
For every constant $d$, we design a subexponential time deterministic
algorithm that takes as input a multivariate polynomial $f$ given as a
constant depth algebraic circuit over the field of rational numbers, and
outputs all irreducible factors of $f$ of degree at most $d$ together with
their respective multiplicities. Moreover, if $f$ is a sparse polynomial, then
the algorithm runs in quasipolynomial time.
Our results are based on a more fine-grained connection between polynomial
identity testing (PIT) and polynomial factorization in the context of constant
degree factors and rely on a clean connection between divisibility testing of
polynomials and PIT due to Forbes [For15] and on subexponential time
deterministic PIT algorithms for constant depth algebraic circuits from the
recent work of Limaye, Srinivasan and Tavenas [LST21].
## 1 Introduction
A long line of research (cf. [vzG83, Kal85, Kal92, Kal03]) on the question of
designing efficient algorithms for multivariate polynomial factorization
concluded with the influential works of Kaltofen [Kal89] and Kaltofen & Trager
[KT88] which gave efficient randomized algorithms for this problem in the
whitebox and blackbox settings respectively. (Throughout this paper, we use
_efficient_ to mean an algorithm whose time complexity is polynomially bounded
in the size, bit-complexity and the degree of the input algebraic circuit.)
These results and the technical insights discovered in the course of their
proofs have since found numerous direct and indirect applications in various
areas of complexity theory. This includes applications to the construction of
pseudorandom generators for low degree polynomials [Bog05], algebraic
algorithms [KY08], hardness-randomness tradeoffs in algebraic complexity
[DSY09, GKSS19], algebraic property testing [PS94, AS03, BSCI+20], error
correcting codes [BHKS20], deterministic polynomial identity tests for
constant depth circuits [LST21, CKS18] among others.
Given the fundamental nature of the problem and its many applications, the
question of designing efficient deterministic algorithms for multivariate
polynomial factorization is of great interest and importance. Shpilka &
Volkovich [SV10] observed that this question is at least as hard as PIT in the
sense that a deterministic factoring algorithm (in fact, an algorithm to check
irreducibility suffices for this) for polynomials given by algebraic circuits
implies a deterministic algorithm for PIT for algebraic circuits, a long
standing open problem in computer science. In a later work, Kopparty, Saraf &
Shpilka [KSS15] showed a connection in the other direction as well. They
showed that an efficient deterministic algorithm for PIT for algebraic
circuits implies an efficient deterministic algorithm for polynomial
factorization for algebraic circuits. Thus, the questions are essentially
equivalent to each other.
An intriguing aspect of the aforementioned equivalence is that while
deterministic algorithms for factoring any rich enough class of circuits (for
instance, constant depth circuits) lead to deterministic PIT for the same
class (see Observation 1 in [SV10] for a precise statement), the connection in
the other direction due to Kopparty, Saraf & Shpilka [KSS15] does not appear
to be so fine-grained. In particular, even if we only wish to factor an
otherwise simple class of polynomials, e.g. sparse polynomials (polynomials
with a small number of non-zero monomials), the PIT required as per the proof
in [KSS15] seems to be for significantly more powerful models of algebraic
computation like algebraic branching programs.
As a consequence, while there has been steady progress on the state of the art
of deterministic PIT algorithms in recent years for various interesting sub-
classes of algebraic circuits like sparse polynomials [KS01], depth-$3$
circuits with constant top fan-in [SS09, SS10, KS09], read-once algebraic
branching programs [FS13, FSS14, For14, GKST15, GKS16] and constant depth
circuits [LST21], this progress hasn’t translated to progress on the question
of deterministic factoring algorithms for these circuit classes. In
particular, deterministic factorization algorithms have remained elusive even
for seemingly simple classes of polynomials like sparse polynomials where the
corresponding PIT problem is very well understood. There are only a handful of
results that make progress towards this and related problems to the best of
our knowledge. Shpilka & Volkovich [SV10] showed a close connection between
the problems of polynomial identity testing and that of decomposing a
polynomial given by a circuit into variable disjoint factors, and built on
these ideas to give an efficient deterministic algorithm for factoring sparse
multilinear polynomials. In subsequent works, Volkovich [Vol15, Vol17] gave an
efficient deterministic algorithm to factor sparse polynomials that split into
multilinear factors and sparse polynomials with individual degree at most $2$.
More recently, a work of Bhargava, Saraf and Volkovich [BSV18] gives a
quasipolynomial time deterministic algorithm for factoring sparse polynomials
with small individual degree based on some beautiful geometric insights.
In general, when the individual degree of a sparse polynomial is not small, no
non-trivial deterministic factoring algorithms appear to be known, even when
we have the flexibility of describing the output as algebraic circuits. As
Forbes & Shpilka note in their recent survey [FS15] on polynomial
factorization, we do not even have structural guarantees on the complexity of
factors of sparse polynomials even for seemingly coarse measures of complexity
like formula complexity. In fact, questions that might be potentially easier
than factorization like checking if a given sparse polynomial is a product of
constant degree polynomials or checking if a given sparse polynomial is
irreducible are not known to have non-trivial deterministic algorithms.
Perhaps a little surprisingly, till a recent work of Forbes [For15], we did
not even have a non-trivial deterministic algorithm for checking if a given
sparse polynomial is divisible by a given constant degree polynomial! Forbes
gave a quasipolynomial time deterministic algorithm for this problem by
reducing this question to a very structured instance of PIT for depth-$4$
algebraic circuits and then giving a quasipolynomial time deterministic
algorithm for these resulting PIT instances.
This work is motivated by some of these problems, most notably by the question
of designing efficient deterministic algorithms for factoring sparse
polynomials. While we do not manage to solve this problem in this generality,
we make modest progress towards this: we design a deterministic
quasipolynomial time algorithm that outputs all the low degree factors of a
sparse polynomial. More generally, we show that constant degree factors of a
polynomial given by a constant depth circuit can be computed deterministically
in subexponential time.
### 1.1 Our Results
###### Theorem 1.1 (Low degree factors of constant depth circuits).
Let $\mathbb{Q}$ be the field of rational numbers and $\varepsilon>0$,
$d,k\in\mathbb{N}$ be arbitrary constants.
Then, there is a deterministic algorithm that takes as input an algebraic
circuit $C$ of size $s$, bit-complexity $t$, degree $D$ and depth $k$ and
outputs all the irreducible factors of $C$ of degree at most $d$, along with
their respective multiplicities in time $(sDt)^{O((sDt)^{\varepsilon})}$.
We note that the bit-complexity of an algebraic circuit/formula is a measure
of the bit-complexities of the rational numbers appearing in the circuit. See
2.1 for a formal definition.
When the input polynomial is sparse, i.e. has a small depth-$2$ circuit,
the time complexity of the algorithm in Theorem 1.1 can be improved to be
quasipolynomially bounded in the input size. This gives us the following
theorem.
###### Theorem 1.2 (Low degree factors of sparse polynomials).
Let $d\in\mathbb{N}$ be an arbitrary constant.
Then, there is a deterministic algorithm that takes as input a polynomial
$f\in\mathbb{Q}[\mathbf{x}]$ of sparsity $s$, bit-complexity $t$, degree $D$,
and outputs all the irreducible factors of $f$ of degree at most $d$, along
with their respective multiplicities in time $(sDt)^{\operatorname{poly}(\log
sDt)}$.
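As a point of reference for the input/output behaviour, here is a small sympy sketch of the task that Theorem 1.2 solves; sympy's general-purpose factorizer is only a stand-in for the theorem's deterministic algorithm, and the example polynomial is ours.

```python
# Illustrating the task in Theorem 1.2: given a sparse polynomial f over Q,
# list its irreducible factors of degree at most d with multiplicities.
# sympy's factor_list is only a stand-in for the paper's deterministic algorithm.
from sympy import symbols, expand, factor_list, Poly

x1, x2, x3 = symbols('x1 x2 x3')
d = 2
f = expand((x1 + x2)**2 * (x1*x2 + 1) * (x1**3 + x2*x3 + x1 + 5))

_, factors = factor_list(f)
low_degree = [(g, m) for g, m in factors
              if Poly(g, x1, x2, x3).total_degree() <= d]
# degree-<=2 factors: (x1 + x2) with multiplicity 2, and (x1*x2 + 1) with multiplicity 1
print(low_degree)
```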
These results immediately yield an algorithm (with comparable time complexity)
to check if the polynomial computed by a given low depth circuit is a product
of polynomials of degree at most $d$. More concretely, we have the following
corollary that follows by comparison of degrees of the input polynomial and
the low-degree factors (with multiplicities) listed by the algorithms in the
above theorems.
###### Corollary 1.3.
Let $\mathbb{Q}$ be the field of rational numbers and $\varepsilon>0$,
$d,k\in\mathbb{N}$ be arbitrary constants.
Then, there is a deterministic algorithm that takes as input an algebraic
circuit $C$ of size $s$, bit-complexity $t$, degree $D$ and depth $k$ and
decides if $C$ is a product of irreducibles of degree at most $d$ in time
$(sDt)^{O((sDt)^{\varepsilon})}$.
Moreover, when $f$ is a sparse polynomial with sparsity $s$, then the
algorithm runs in $(sDt)^{\operatorname{poly}(\log sDt)}$ time.
Note that in the constant depth regime, circuits and formulas are equivalent
up to a polynomial blow-up in size. Thus, we will use the terms circuits and
formulas interchangeably without any loss in our final bounds, and most of our
presentation will be for formulas.
### Field dependence of our results
We end this section with a remark about the field dependence of our results.
The field dependence in our results stems from two reasons. We need an
efficient deterministic algorithm for factorization of univariate polynomials
over the underlying field $\mathbb{F}$. In addition to this, our proofs also
need non-trivial deterministic algorithms for polynomial identity testing
(PIT) for constant depth circuits (or very special depth-$4$ circuits for
Theorem 1.2) over the underlying field.
The field of rational numbers satisfies both these requirements: a classical
algorithm of Lenstra, Lenstra and Lovász[LLL82] solves the problem of
deterministic univariate factorization efficiently over $\mathbb{Q}$ and a
recent work of Limaye, Srinivasan and Tavenas [LST21] gives a subexponential
time deterministic algorithm for PIT for constant depth circuits over
$\mathbb{Q}$. For Theorem 1.2, the relevant PIT is for special depth-$4$
circuits and was given in a work of Forbes [For15]. In fact, Forbes’ result
holds even over finite fields.
We restrict our attention to just the field of rational numbers in the
presentation although our results work over any large characteristic field
that supports the above requirements.
### 1.2 Proof Overview
We now give an overview of some of the main ideas in our proofs. In a
nutshell, our proofs are based on relatively simple structural observations on
top of the existing factoring algorithms. The key is to understand the
structure of circuits for which we need a PIT algorithm at every step a little
better, and when looking for low degree factors, we observe that these PIT
instances are relatively simple and their circuit complexity is comparable to
the circuit complexity of the input polynomials themselves. We also crucially
use the divisibility testing idea of Forbes [For15] in our algorithm at two
stages; this helps us handle factors of large multiplicities and also lets us
obtain true factors from the output of Hensel Lifting step of the
factorization algorithms. This idea again helps in reducing the complexity of
the PIT instance we face in these steps, and in particular, we completely
avoid the linear systems solving step in a typical factorization algorithm
that naively (e.g. see [KSS15]) seems to require PIT for algebraic branching
programs. Once the PIT instances are shown to be relatively simple, we invoke
the PIT algorithms of Forbes [For15] and Limaye, Srinivasan & Tavenas [LST21]
to solve these deterministically.
##### Typical steps in a polynomial factorisation algorithm:
Most factorisation algorithms (and ours, modulo minor deviations) follow this
template:
1. 1.
Making $f$ monic: Apply a suitable transformation of the form $x_{i}\mapsto
x_{i}+\alpha_{i}y$ to ensure that $f$ is monic in $y$. We may now assume that
$f\in\mathbb{Q}[\mathbf{x},y]$.
2. 2.
Preparing for Hensel lift: Ensure that $f(\mathbf{x},y)$ is square-free, and
further that $f({\bm{0}},y)$ is also square-free.
3. 3.
Univariate factorisation: Factorise the univariate polynomial $f({\bm{0}},y)$
as a product $g_{0}(y)\cdot h_{0}(y)$ where $\gcd(g_{0},h_{0})=1$. This can be
interpreted as a factorisation $f(\mathbf{x},y)=g_{0}\cdot
h_{0}\bmod{\mathcal{I}}$ where
$\mathcal{I}=\left\langle\mathbf{x}\right\rangle$.
4. 4.
Hensel lifting: Compute an iterated lift to obtain $f=g_{\ell}\cdot
h_{\ell}\bmod{\mathcal{I}^{2^{\ell}}}$ for a suitably large $\ell$.
5. 5.
Reconstruction: From $g_{\ell}$, obtain an honest-to-god factor $g$ of $f$
(unless $f$ is irreducible).
The first two steps typically involve the use of randomness for suitable
polynomial identity tests. In the first step, we would like ${\bm{\alpha}}$ to
be a point that keeps the highest degree homogeneous component of $f$ non-
zero, and the second step is handled by translating $f$ by a point
${\bm{\delta}}$ that keeps the “discriminant” of $f$ non-zero. The Hensel lift
is a deterministic subroutine that eventually yields small circuits for the
lifted factors and the reconstruction step typically involves solving a linear
system. It is mostly due to the “discriminant” that we do not have an efficient
deterministic factorisation algorithm even for constant-depth circuits, as the
best upper bound for the discriminant we have is an algebraic branching
program and we do not have efficient hitting sets for them. (Yet!)
For our case, it is instructive to focus on a specific factor $g$ of $f$ and
understand what would be required to make the above template yield this
factor. The first observation is that the base case of Hensel Lifting does not
require $f$ to be square-free but rather that the factor $g$ we intend to
reconstruct satisfies $g|f$ and $g^{2}\nmid f$. For now, let us assume this
and also that $f$ (and hence $g$ and $h=f/g$ also) is monic in $y$. We have
that $\gcd(g,h)=1$ but for the Hensel lift, we also need to find a
${\bm{\delta}}$ that ensures that $\gcd(g_{0},h_{0})=1$ where
$g_{0}=g({\bm{\delta}},y)$ and $h_{0}=h({\bm{\delta}},y)$. The set of “good”
${\bm{\delta}}$’s is precisely the points that do not make the resultant
$\operatorname{Res}_{y}(g,h)$ zero and thus we want to understand the circuit
complexity of this resultant.
The resultant $\operatorname{Res}_{y}(g,h)$ is the determinant of a matrix of
dimension $\deg_{y}(g)+\deg_{y}(h)$ and its entries are coefficients of $g,h$
when viewed as univariates in $y$. However, we are only given that $f$ is
computable by a constant-depth formula and we do not have any good bound on
the complexity of $h$. We circumvent this by working with a _pseudo-quotient_
(introduced by Forbes [For15] in the context of divisibility testing)
$\tilde{h}$ of $f$ and $g$; we work with $\operatorname{Res}_{y}(g,\tilde{h})$
and show that it is also computable by constant-depth circuits of not-too-
large size. Fortunately, the result of Limaye, Srinivasan and Tavenas [LST21]
yields sub-exponential sized hitting sets for constant depth formulas and that
enables us to avoid the use of randomness to prepare for the Hensel Lifting
step.
We can then factorise the univariate polynomial $f({\bm{\delta}},y)$ and
attempt all possible factors $g_{0}$ of degree at most $d$ to begin the
lifting process from $g_{0}\cdot h_{0}$ (where
$h_{0}=f({\bm{\delta}},y)/g_{0}$). After an appropriately large lift, we have
small circuits (of possibly unbounded depth) computing $g_{\ell}$ and
$h_{\ell}$ such that $\tilde{f}=f(\mathbf{x}+{\bm{\delta}},y)=g_{\ell}\cdot
h_{\ell}\bmod{\mathcal{I}^{2^{\ell}}}$. If $g_{\ell}$ is guaranteed to be
monic, and the initial choice of $g_{0}$ was indeed $g({\bm{\delta}},y)$, the
uniqueness of Hensel lifting would ensure that $g_{\ell}$ is indeed equal to
$g$ (after truncating higher order terms). We can then use standard
interpolation to obtain $g_{\ell}$ explicitly written as a sum of monomials.
Finally, to ensure that $g_{\ell}$ is indeed a legitimate factor of
$\tilde{f}$, we perform divisibility testing to check if
$g_{\ell}\mid\tilde{f}$.
##### Handling factors of large multiplicity:
The above overview is all we need to obtain any factor $g$ of degree $O(1)$
that divides $f$ with $g^{2}\nmid f$. In order to handle factors with higher
“factor-multiplicity”, we use a simple observation that $g^{a-1}\mid f$ but
$g^{a}\nmid f$ if and only if $g$ divides
$f,\partial_{y}f,\ldots,\partial_{y^{a-1}}f$ but not $\partial_{y^{a}}f$. We
run our algorithm for each of the partial derivatives to collect the list of
candidate factors, and eventually prune them via appropriate divisibility
tests.
##### The specific case of $\Sigma\Pi$-formulas (or sparse polynomials):
The above sketch yields a sub-exponential time algorithm for obtaining
$O(1)$-degree factors of constant depth formulas. However, with some
additional care, we obtain a quasipolynomial time algorithm in the case when
$f$ is a sparse polynomial. The key observation for this is that we do not
really need $f$ to be made monic for the above approach, but we only need $g$
to be monic to exploit the uniqueness of Hensel lifts. Since $g$ is a
polynomial of degree at most $d=O(1)$, we can find a _low Hamming weight_
vector ${\bm{\alpha}}$ such that $g(\mathbf{x}+y{\bm{\alpha}})$ is monic in
$y$. This allows us to control the sparsity increase of $f$ in the process and
we show that the relevant resultant is a polynomial of the form
$\sum_{i}\text{monomial}_{i}\cdot(\text{$O(1)$-degree})^{e_{i}}.$
Forbes [For15] shows that there are quasipolynomial size hitting sets for such
expressions and we use this instead of the more general hitting set of Limaye,
Srinivasan and Tavenas [LST21].
### Organization of the paper
The rest of the paper is organized as follows.
In the next section, we start with a discussion of some of the preliminaries
and known results from algebraic complexity and previous works on polynomial
factorization that we use for the design and analysis of our algorithms. In
Section 3, we describe and analyze the algorithm for computing low degree
factors of multiplicity one of a given constant depth formula. In Section 4,
we build upon this algorithm to compute arbitrary constant degree factors and
complete the proofs of Theorem 1.1 and Theorem 1.2. Finally, we conclude with
some open problems in Section 5.
## 2 Notation and preliminaries
This section consists of all the necessary building blocks to describe and
analyse (in Section 3) the main algorithm.
##### Fair warning:
A large part of this (slightly lengthy) section is standard techniques in
algebraic complexity that are relevant to this specific context, and is
intended to keep the main analysis as self-contained as possible. A reader
with some familiarity with standard algorithmic and structural results in
algebraic complexity might be in a position to directly proceed to Section 3
and revisit this section for relevant results as required.
#### Notation
1. 1.
Throughout this paper, we work over the field $\mathbb{Q}$ of rational
numbers. For some of the statements that are used more generally, we use
$\mathbb{F}$ to denote an underlying field.
2. 2.
We use boldface lower case letters like $\mathbf{x},\mathbf{y},\mathbf{a}$ to
denote tuples, e.g. $\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})$. The arity of the
tuple is either stated or will be clear from the context.
3. 3.
For a polynomial $f$ and a non-negative integer $k$,
$\operatorname{Hom}_{k}[f]$ denotes the homogeneous component of $f$ of degree
_equal_ to $k$. $\operatorname{Hom}_{\leq k}[f]$ denotes the sum of
homogeneous components of $f$ of degree at most $k$, i.e.,
$\operatorname{Hom}_{\leq k}[f]:=\sum_{i=0}^{k}\operatorname{Hom}_{i}[f].$
4. 4.
The _sparsity_ of a polynomial $f$ is the number of monomials with a non-zero
coefficient in $f$.
5. 5.
For a parameter $k\in\mathbb{Z}_{\geq 0}$, we will use $(\Sigma\Pi)^{(k)}$ to refer to product-depth $k$ circuits with the root gate being $+$ and the deepest layer of gates being $\times$. (We emphasize that this notation does _not_ refer to the $k^{\text{th}}$ power of a polynomial computed by a $\Sigma\Pi$ circuit.) Since any constant depth algebraic circuit of depth $k$ and size $s$ can be converted to a formula of depth $k$ and size $s^{k+1}$, i.e. $\operatorname{poly}(s)$, we will use the terms circuits and formulas interchangeably, without any loss in the final bounds we prove.
6. 6.
Let $f$ and $g$ be multivariate polynomials such that $g\mid f$. Then, the
_multiplicity_ or _factor multiplicity_ of $g$ in $f$ is defined to be the
greatest integer $a$ such that $g^{a}$ divides $f$.
### 2.1 Circuit/formula bit-complexity
###### Definition 2.1 (Bit-complexity of a circuit/formula).
The _bit-complexity_ of a circuit/formula $C$, denoted by
$\operatorname{bit}(C)$, is defined as the sum of $\operatorname{size}(C)$ and
the bit-complexities of all the scalars present on edges or leaves. (For a rational number $r=p/q$, its bit-complexity $\operatorname{bit}(r)$ is defined as $\log(\max(\left|p\right|,\left|q\right|))$.) By default, any edge that does not have a scalar on it will be assigned the scalar 1.
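For concreteness, a tiny sketch of the scalar measure in this definition (the helper name `bit` is ours):

```python
# bit(r) = log(max(|p|, |q|)) for a rational scalar r = p/q, following Definition 2.1.
from fractions import Fraction
from math import log2

def bit(r: Fraction) -> float:
    return log2(max(abs(r.numerator), abs(r.denominator)))

print(bit(Fraction(3, 8)))   # 3.0, since max(3, 8) = 8
```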
###### Lemma 2.2 (Bit-complexity of evaluations of formulas).
Let $C$ be a formula of bit-complexity $s$ computing a polynomial
$f(\mathbf{x})$. If $\mathbf{a}\in\mathbb{Q}^{n}$ with each entry of
$\mathbf{a}$ having bit-complexity $b$, then the bit-complexity of
$f(\mathbf{a})$ is at most $s\cdot b$.
(Proof deferred to Appendix A)
### 2.2 Relevant subclasses of algebraic circuits
We briefly define subclasses of algebraic circuits that we would use often in
this paper.
###### Definition 2.3 (Power of low-degree polynomials).
For a parameter $d\in\mathbb{Z}_{\geq 0}$, let $\operatorname{Deg}_{d}$ refer
to the class of polynomials of degree at most $d$. We use
$(\operatorname{Deg}_{d})^{\ast}$ to denote the class of polynomials that are
powers of polynomials of degree at most $d$.
###### Definition 2.4
($\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$-formulas).
We will use
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$ to
denote the subclass of algebraic formulas that compute expressions of the form
$\sum_{i}f_{i}\cdot g_{i}^{e_{i}}$
where each $f_{i}$ is a $(\Sigma\Pi)^{(k)}$ formula and each $g_{i}$ is a
polynomial of degree at most $d$ and $e_{i}$’s are arbitrary positive
integers. The size and bit-complexity of the above expression is defined as
its size and bit-complexity when viewed as a general algebraic formula.
###### Observation 2.5.
Let $\mathcal{C}$ be the class of
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas for fixed parameters $k$ and $d$. Suppose $P_{1},\ldots,P_{t}$ are
polynomials computed by
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas of size $s$ and bit-complexity $b$ each. Then,
* •
$\sum_{i}P_{i}$ is computable by an
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formula of size at most $t\cdot s$ and bit-complexity at most $O(t\cdot b)$.
* •
$\prod_{i}P_{i}$ is computable by an
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formula of size at most $s^{O(t)}$ and bit-complexity at most $b^{O(t)}$.
(Proof deferred to Appendix A.)
### 2.3 Standard preliminaries using interpolation
###### Lemma 2.6 (Univariate interpolation (Lemma 5.3 [Sap15])).
Let $f(x)=f_{0}+f_{1}x+\cdots+f_{d}x^{d}$ be a univariate polynomial of degree
at most $d$. Then, for any $0\leq r\leq d$, there are field constants $\alpha_{0},\ldots,\alpha_{d}$ and $\beta_{r0},\ldots,\beta_{rd}$ such that
$f_{r}=\beta_{r0}f(\alpha_{0})+\cdots+\beta_{rd}f(\alpha_{d}).$
Furthermore, the bit-complexity of all field constants is bounded by $\operatorname{poly}(d)$. (In fact, for any choice of distinct $\alpha_{0},\ldots,\alpha_{d}$, there are appropriate $\beta_{r0},\ldots,\beta_{rd}$ satisfying the equation; if the $\alpha_{i}$'s are chosen to have small bit-complexity, we can obtain a $\operatorname{poly}(d)$ bound on the bit-complexity of the associated $\beta_{ri}$'s.)
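A minimal sympy sketch of the lemma, with the illustrative choice $\alpha_{i}=i$; the $\beta_{ri}$'s are then the entries of a row of the inverse Vandermonde matrix:

```python
# Recovering the coefficient f_r of a degree-d univariate polynomial from its
# evaluations at d+1 distinct points, as in Lemma 2.6.
from sympy import symbols, Poly, Rational, Matrix

x = symbols('x')
f = Poly(3*x**3 - 2*x + 7, x)
d = f.degree()
alphas = [Rational(i) for i in range(d + 1)]

V = Matrix([[a**j for j in range(d + 1)] for a in alphas])  # Vandermonde
evals = Matrix([f.eval(a) for a in alphas])
coeffs = V.solve(evals)        # the vector (f_0, ..., f_d)

r = 1
print(coeffs[r])               # -2, the coefficient of x**1
```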
###### Lemma 2.7 (Computing homogeneous components (Lemma 5.4 [Sap15])).
Let $f\in\mathbb{Q}[\mathbf{x}]$ be an $n$-variate degree $d$ polynomial.
Then, for any $0\leq i\leq d$, there are field constants
$\alpha_{0},\ldots,\alpha_{d}$ and $\beta_{i0},\ldots,\beta_{id}$ of bit-complexity
$\operatorname{poly}(d)$ such that
$\operatorname{Hom}_{i}(f)=\beta_{i0}f(\alpha_{0}\cdot\mathbf{x})+\cdots+\beta_{id}f(\alpha_{d}\cdot\mathbf{x}).$
In particular for $\mathcal{C}=(\Sigma\Pi)^{(k)}$ or
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$, if
$f$ is computable by $\mathcal{C}$-formulas of size / bit-complexity at most
$s$ then $\operatorname{Hom}_{i}(f)$ is computable by $\mathcal{C}$-formulas
of size / bit-complexity at most $\operatorname{poly}(s,d)$.
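The same interpolation applied to the scalings $f(\alpha\cdot\mathbf{x})$ recovers homogeneous components; a small sketch with an example polynomial of ours:

```python
# Hom_i(f) via interpolation over f(alpha * x), as in Lemma 2.7: since
# f(a*x) = sum_i a**i * Hom_i(f), the components solve a Vandermonde system.
from sympy import symbols, Rational, Matrix, expand

x1, x2 = symbols('x1 x2')
f = x1**2*x2 + 5*x1*x2 + 3      # homogeneous pieces of degrees 3, 2, 0
d = 3
alphas = [Rational(i + 1) for i in range(d + 1)]

V = Matrix([[a**j for j in range(d + 1)] for a in alphas])
evals = Matrix([expand(f.subs({x1: a*x1, x2: a*x2}, simultaneous=True))
                for a in alphas])
homs = [expand(h) for h in V.solve(evals)]
print(homs[3], '|', homs[2])    # x1**2*x2 | 5*x1*x2
```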
###### Lemma 2.8 (Computing partial derivatives in one variable).
Let $f\in\mathbb{Q}[\mathbf{x}]$ be an $n$-variate degree $d$ polynomial.
Then, for any $0\leq r\leq d$, there are field elements $\alpha_{i}$’s and
$\beta_{ij}$’s in $\mathbb{Q}$ of bit-complexity $\operatorname{poly}(d)$ such
that
$\frac{\partial^{r}f}{\partial
x_{1}^{r}}=\sum_{i=0}^{d}x_{1}^{i}\cdot\left(\beta_{i0}f(\alpha_{0},x_{2},\ldots,x_{n})+\cdots+\beta_{id}f(\alpha_{d},x_{2},\ldots,x_{n})\right)$
In particular for $\mathcal{C}=(\Sigma\Pi)^{(k)}$ or
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$, if
$f$ is computable by $\mathcal{C}$-formulas of size / bit-complexity at most
$s$, then $\frac{\partial^{r}f}{\partial x_{1}^{r}}$ is computable by
$\mathcal{C}$-formulas of size / bit-complexity at most $O(s\cdot d^{3})$.
###### Proof.
We may consider the polynomial $f$ as a univariate in $x_{1}$, and extract
each coefficient of $x_{1}^{i}$ using 2.6 and recombine them to get the
appropriate partial derivative. That justifies the claimed expression.
As for the size, note that if $\mathcal{C}$ is $(\Sigma\Pi)^{(k)}$ or
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$,
multiplying a size $s$ formula by $x_{1}^{i}$, by using distributivity of the
top addition gate, results in a $\mathcal{C}$-formula of size at most $s\cdot
d$. Thus, the overall size of the above expression for the partial derivative
is at most $O(s\cdot d^{3})$. ∎
We will be making use of the following identity, which can be proved via
appropriate interpolation or by the inclusion-exclusion principle (along the
lines of Lemma 2.2 [Shp02]).
###### Lemma 2.9 (Fischer’s identity [Fis94, Ell69, Shp02]).
If $\mathbb{F}$ is a field of characteristic zero or larger than $D$, then for
any positive integers $e_{1},\ldots,e_{n}$ with $\sum e_{i}=D$, there are some
$r\leq\prod_{i=1}^{n}\left(e_{i}+1\right)$, homogeneous linear forms
$L_{1},\ldots,L_{r}$, and field constants $\alpha_{1},\ldots,\alpha_{r}$ of
bit-complexity $\operatorname{poly}(D,n)$ such that
$x_{1}^{e_{1}}\cdots x_{n}^{e_{n}}=\sum_{i=1}^{r}\alpha_{i}L_{i}^{D}.$
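In the smallest non-trivial case ($n=2$, $e_{1}=e_{2}=1$), the identity reads $x_{1}x_{2}=\frac{1}{4}\left((x_{1}+x_{2})^{2}-(x_{1}-x_{2})^{2}\right)$ with $r=2\leq(e_{1}+1)(e_{2}+1)=4$; a one-line sympy check:

```python
# Checking the degree-2 instance of Fischer's identity (Lemma 2.9).
from sympy import symbols, expand, Rational

x1, x2 = symbols('x1 x2')
assert expand(x1*x2 - Rational(1, 4)*((x1 + x2)**2 - (x1 - x2)**2)) == 0
print("identity holds")
```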
### 2.4 Polynomial identity testing
###### Lemma 2.10 (Polynomial Identity Lemma [Ore22, DL78, Sch80, Zip79]).
Let $f\in\mathbb{Q}[\mathbf{x}]$ be a non-zero $n$ variate polynomial of
degree at most $d$. Then, for every set $S\subseteq\mathbb{Q}$, the number of
zeroes of $f$ in the set $S^{n}=S\times S\times\cdots\times S$ is at most
$d|S|^{n-1}$.
###### Definition 2.11 (Low Hamming weight set).
Let $n\geq d\geq 0$ be integer parameters. Fix a set
$T_{d}\subseteq\mathbb{Q}$ of size $(d+1)$. The set $\mathcal{H}(d,n)$ is
defined as
$\mathcal{H}(d,n)=\left\{(a_{1},\ldots,a_{n})\ :\ \exists\,S\in\binom{[n]}{\leq d}\text{ such that }a_{i}\in T_{d}\text{ for all }i\in S\text{ and }a_{j}=0\text{ for all }j\notin S\right\}.$
The size of the above set is at most $\binom{n}{\leq
d}\cdot(d+1)^{d}=n^{O(d)}$. Furthermore, choosing $T_{d}$ to consist of
elements of $\mathbb{Q}$ of bit-complexity $\operatorname{poly}(d)$, the bit-
complexity of the set $\mathcal{H}(d,n)$ is bounded by $n^{O(d)}$ as well.
The following lemma is an easy consequence of Lemma 2.10 and will be crucial
for parts of our proof. We also include a short proof sketch.
###### Lemma 2.12 (Hitting set for low degree polynomials).
Let $f\in\mathbb{Q}[\mathbf{x}]$ be a non-zero $n$ variate polynomial of
degree at most $d$. Then, there exists a vector
$\mathbf{a}\in\mathcal{H}(d,n)\subseteq\mathbb{Q}^{n}$ such that
$f(\mathbf{a})\neq 0$.
(Proof deferred to Appendix A.)
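A small sketch that enumerates $\mathcal{H}(d,n)$ and checks Lemma 2.12 on an example; $T_{d}=\{1,\ldots,d+1\}$ is one concrete choice of the base set:

```python
# Enumerating H(d, n) from Definition 2.11 and checking Lemma 2.12 on an example:
# every non-zero polynomial of degree <= d is non-zero somewhere on H(d, n).
from itertools import combinations, product
from sympy import symbols

n, d = 4, 2
X = symbols(f'x0:{n}')
T = list(range(1, d + 2))            # T_d = {1, ..., d+1}

H = []
for size in range(d + 1):            # supports S of size at most d
    for S in combinations(range(n), size):
        for vals in product(T, repeat=size):
            a = [0] * n
            for i, v in zip(S, vals):
                a[i] = v
            H.append(tuple(a))

f = (X[0] - 1) * (X[1] - 2) + X[2]   # non-zero, degree 2
assert any(f.subs(dict(zip(X, a))) != 0 for a in H)
print(len(H))                        # n^{O(d)} points; here 67
```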
###### Theorem 2.13 (PIT for constant depth formulas (modification of
Corollary 6 [LST21])).
Let $\varepsilon>0$ be a real number and $\mathbb{F}$ be a field of
characteristic 0. Let $C$ be an algebraic formula of size and bit-complexity
$s\leq\operatorname{poly}(n)$ and depth $k=o(\log\log\log n)$, computing a
polynomial of degree $D$ on $n$ variables. Then, there is a deterministic
algorithm that can check whether the polynomial computed by $C$ is identically
zero or not in time $(s^{O(k)}\cdot n)^{O_{\varepsilon}((sD)^{\varepsilon})}$.
The original statement of Corollary 6 in [LST21] deals specifically with
circuits of size $s=\operatorname{poly}(n)$. The above statement can be
readily inferred from their proof.
###### Theorem 2.14 (PIT for
$\Sigma\left((\Sigma\Pi)^{(1)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
(Corollary 6.7, [For15])).
Let $t\geq 1$. Then, the class
$\mathcal{C}=\Sigma\left((\Sigma\Pi)^{(1)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
that computes polynomials of the form $\sum_{i=1}^{s}f_{i}\cdot g_{i}^{d_{i}}$
with each $f_{i}$ being $s$-sparse and each $\deg(g_{i})\leq d$ has a
$\operatorname{poly}(n,s,d\log s)$-explicit hitting set of size
$\operatorname{poly}(s)^{O(d\log s)}$.
We will also crucially use the following lemma that gives an algorithm to
obtain the coefficient vector of a polynomial from an algebraic formula
computing it. In our setting, we invoke this algorithm only for low degree
polynomials, and in that case, we can tolerate the runtime of this algorithm
within our budget.
###### Lemma 2.15 (Interpolating a low degree multivariate polynomial).
There is a deterministic algorithm that, when given a parameter $d$ and an
$n$-variate algebraic formula $C$ computing a polynomial in $\mathbb{Q}[\mathbf{x}]$, of size at most $s$,
bit-complexity at most $b$ and degree at most $d$, outputs the coefficient
vector of the polynomial computed by $C$.
The algorithm runs in time $\operatorname{poly}(s,b,n^{d})$.
(Proof deferred to Appendix A.)
### 2.5 Deterministic divisibility testing and PIT
###### Definition 2.16 (Pseudo-quotients).
Let $f,g\in\mathbb{Q}[\mathbf{x}]$ be non-zero polynomials with
$g({\bm{0}})=\beta\neq 0$. The _pseudo-quotient of $f$ and $g$_ is defined as
$\operatorname{Hom}_{\leq
d_{f}-d_{g}}\left(\left(\frac{f(\mathbf{x})}{\beta}\right)\cdot(1+\tilde{g}+\tilde{g}^{2}+\cdots+\tilde{g}^{d_{f}-d_{g}})\right)$
where $d_{f}=\deg(f)$, $d_{g}=\deg(g)$ and $\tilde{g}=1-\frac{g}{\beta}$.
More generally, if ${\bm{\alpha}}\in\mathbb{Q}^{n}$ is such that
$g({\bm{\alpha}})\neq 0$, the _pseudo-quotient of $f$ and $g$ translated by
${\bm{\alpha}}$_ is defined as the pseudo-quotient of
$f(\mathbf{x}+{\bm{\alpha}})$ and $g(\mathbf{x}+{\bm{\alpha}})$.
The following lemma immediately follows from the above definition and Lemma
2.7.
###### Lemma 2.17 (Complexity of pseudo-quotients).
Suppose $k\geq 1$ and $f(\mathbf{x})\in(\Sigma\Pi)^{(k)}$ and
$g(\mathbf{x})\in\operatorname{Deg}_{d}$ of sizes at most $s_{1},s_{2}$
respectively, and suppose $g({\bm{0}})\neq 0$. Then, the pseudo-quotient of
$f,g$ is computable by the $\mathcal{C}$-formulas of size at most
$\operatorname{poly}(s_{1},s_{2})$, where
$\mathcal{C}=\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$.
###### Theorem 2.18 (Divisibility testing to PIT [For15]).
Let $f(\mathbf{x})$ and $g(\mathbf{x})$ be non-zero $n$-variate polynomials
over the field $\mathbb{Q}$ such that $g(\mathbf{0})=\beta\neq 0$. Then, $g$
divides $f$ if and only if the polynomial $R(\mathbf{x})$ defined as
$R(\mathbf{x}):=f(\mathbf{x})-g(\mathbf{x})Q(\mathbf{x})$
is identically zero, where $Q(\mathbf{x})$ is the pseudo-quotient of $f$ and
$g$.
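The following sympy sketch puts Definition 2.16 and Theorem 2.18 together; the helpers `hom_leq` and `pseudo_quotient` are our names, and the example polynomials are ours:

```python
# Divisibility testing via pseudo-quotients (Definition 2.16 / Theorem 2.18):
# g | f iff R = f - g*Q is identically zero, where Q is the pseudo-quotient.
from sympy import symbols, expand, Poly

x1, x2 = symbols('x1 x2')
gens = (x1, x2)

def hom_leq(expr, k):
    # Hom_{<=k}: keep only monomials of total degree at most k
    p = Poly(expand(expr), *gens)
    return sum(c * x1**i * x2**j for (i, j), c in p.terms() if i + j <= k)

def pseudo_quotient(f, g):
    beta = g.subs({x1: 0, x2: 0})
    assert beta != 0
    df = Poly(f, *gens).total_degree()
    dg = Poly(g, *gens).total_degree()
    g_tilde = 1 - g / beta                      # vanishes at the origin
    geom = sum(g_tilde**i for i in range(df - dg + 1))
    return hom_leq(expand(f / beta * geom), df - dg)

g = x1 + x2 + 2
f = expand(g * (x1**2 + 3*x2 + 2))
print(expand(f - g * pseudo_quotient(f, g)))                 # 0, so g | f
print(expand((f + 1) - g * pseudo_quotient(f + 1, g)) == 0)  # False: g does not divide f + 1
```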
An immediate consequence of this theorem is the following corollary that takes
into account the depth of an algebraic formula computing the polynomial
$R(\mathbf{x})$ given above, assuming that $f$ and $g$ themselves can be
computed by a low depth formula.
###### Corollary 2.19 (Divisibility testing to PIT for constant depth
formulas [For15]).
Suppose $f(\mathbf{x})$ is a non-zero $n$-variate polynomial computed by a
$(\Sigma\Pi)^{(k)}$ formula of size $s$, and suppose $g(\mathbf{x})$ is a
polynomial of degree at most $d$ with $g({\bm{0}})=\beta\neq 0$. Then, we can
test if $g$ divides $f$ in time $T(k,d,s^{\prime})$ where
$s^{\prime}=\operatorname{poly}(s,d)$ and $T(k,d,s)$ is the time required to
test polynomial identities of the size $s$ expressions of the form
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right).$
(Proof deferred to Appendix A.)
###### Theorem 2.20 ([For15]).
Let $\mathbb{F}$ be any sufficiently large field. Then, there is a
deterministic algorithm that takes as input two polynomials $f$ and $g$ and
parameters $d,D,n,s$, where $f$ is an $n$-variate polynomial of degree at most
$D$ and sparsity $s$; $g$ is an $n$ variate polynomial of degree $d$, and
outputs whether $g$ divides $f$ or not in time $\exp(O(d\log^{2}snDd))$.
### 2.6 Resultants
###### Definition 2.21 (The Resultant).
Let $\mathcal{R}$ be a commutative ring. Given polynomials $g$ and $h$ in
$\mathcal{R}[y]$, where:
$\displaystyle g(y)=g_{0}+y\cdot g_{1}+\cdots+y^{d}\cdot g_{d},\qquad h(y)=h_{0}+y\cdot h_{1}+\cdots+y^{D}\cdot h_{D},$
with $g_{d},h_{D}\neq 0$, the _Resultant_ of $g$ and $h$, denoted by $\operatorname{Res}_{y}(g,h)$, is the determinant of the $(D+d)\times(D+d)$ Sylvester matrix $\Gamma$ of $g$ and $h$, given by:
$\Gamma=\begin{bmatrix}h_{0}&h_{1}&\dots&&h_{D}&&\\ &\ddots&\ddots&&\ddots&\ddots&\\ &&h_{0}&h_{1}&&\dots&h_{D}\\ g_{0}&\dots&&g_{d}&&&\\ &g_{0}&\dots&&g_{d}&&\\ &&\ddots&\ddots&&\ddots&\\ &&&g_{0}&\dots&&g_{d}\end{bmatrix}$
###### Lemma 2.22 (Resultant and $\gcd$ (Corollary 6.20 [vzGG13])).
Let $\mathcal{R}$ be a unique factorization domain and $g,h\in\mathcal{R}[y]$
be non-zero polynomials. Then:
$\deg_{y}(\gcd(g,h))>0\iff\operatorname{Res}_{y}(g,h)=0$
where $\gcd(g,h)\in\mathcal{R}[y]$ and
$\operatorname{Res}_{y}(g,h)\in\mathcal{R}$.
In this paper, $\mathcal{R}$ will be $\mathbb{Q}[\mathbf{x}]$ (which is a
unique factorization domain), and $\operatorname{Res}_{y}(g,h)$ will denote
the resultant of $g,h\in\mathbb{Q}[\mathbf{x}][y]$ when considered as
polynomials in $\mathcal{R}[y]$. We might also occasionally refer to it as the
_y-resultant_ of $g$ and $h$. For more details about the resultant as well as
a proof of the above lemma, we refer the reader to von zur Gathen and
Gerhard’s book on computer algebra (Chapter 6, [vzGG13]). We mention a simple
observation from the above definition that would be useful for this paper.
###### Observation 2.23 (Resultant under substitutions).
Suppose $g(\mathbf{x},y)=g_{0}(\mathbf{x})+g_{1}(\mathbf{x})y+\cdots+g_{d}(\mathbf{x})y^{d}$ and
$h(\mathbf{x},y)=h_{0}(\mathbf{x})+h_{1}(\mathbf{x})y+\cdots+h_{D}(\mathbf{x})y^{D}$
with $g_{d},h_{D}\neq 0$. Then, for any
$\mathbf{a}\in\mathbb{Q}^{\left|\mathbf{x}\right|}$ that ensures
$g_{d}(\mathbf{a}),h_{D}(\mathbf{a})\neq 0$, we have
$(\operatorname{Res}_{y}(g,h))(\mathbf{a})=\operatorname{Res}_{y}(g(\mathbf{a},y),h(\mathbf{a},y)).$
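A quick sympy check of Lemma 2.22 and 2.23 on example polynomials of ours:

```python
# Res_y(g, h) detects common factors of positive y-degree (Lemma 2.22), and
# commutes with substitution when leading coefficients survive (Observation 2.23).
from sympy import symbols, resultant, expand

x, y = symbols('x y')
g = y**2 + x
h = y + x + 1

R = resultant(g, h, y)                 # an element of Q[x]
print(R)                               # x**2 + 3*x + 1: non-zero, so gcd_y(g, h) is trivial
print(R.subs(x, 0) == resultant(g.subs(x, 0), h.subs(x, 0), y))  # True (both are 1)

g2 = expand((y + x) * (y - 1))
h2 = expand((y + x) * (y + 2))
print(resultant(g2, h2, y))            # 0: g2 and h2 share the factor y + x
```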
### 2.7 Hensel Lifting
Now we will state the definition of a _lift_ and the main lemma for Hensel
lifting. For more details, one can look up some of the cited papers or the
standard references in computational algebra [KSS15, ST20, vzGG13, Sud98].
###### Definition 2.24 (Hensel lifts).
Let $\mathcal{I}\subseteq\mathbb{Q}[\mathbf{x},y]$ be an ideal. Let
$f,g,h,u,v\in\mathbb{Q}[\mathbf{x},y]$ such that $f\equiv
gh\bmod{\mathcal{I}}$ and $ug+vh\equiv 1\bmod{\mathcal{I}}$. Then, we call
$g^{\prime},h^{\prime}\in\mathbb{Q}[\mathbf{x},y]$ a _lift_ of $g$ and $h$ if:
1. 1.
$f\equiv g^{\prime}h^{\prime}\bmod{\mathcal{I}^{2}}$,
2. 2.
$g^{\prime}\equiv g\bmod{\mathcal{I}}$ and $h^{\prime}\equiv
h\bmod{\mathcal{I}}$, and
3. 3.
$\exists u^{\prime},v^{\prime}\in\mathbb{Q}[\mathbf{x},y]$ s.t
$u^{\prime}g^{\prime}+v^{\prime}h^{\prime}\equiv 1\bmod{\mathcal{I}^{2}}$.
For the rest of the section, we define $\mathcal{I}$ to be the ideal
$\left\langle x_{1},\dots,x_{n}\right\rangle$ and
$\mathcal{I}_{k}:=\mathcal{I}^{2^{k}}$.
###### Lemma 2.25 (Iterated monic Hensel lifting (Lemma 3.4 [KSS15])).
Suppose we’re given $f\in\mathbb{Q}[\mathbf{x},y]$ such that $f=gh$, $g$ is
monic in $y$ and $\gcd(g,h)=1$. We are also given
$g_{0},h_{0},u_{0},v_{0}\in\mathbb{Q}[\mathbf{x},y]$ such that $g_{0}\equiv
g\bmod\mathcal{I}$, $h_{0}\equiv h\bmod\mathcal{I}$ and
$u_{0}g_{0}+v_{0}h_{0}\equiv 1\bmod\mathcal{I}$. Then, for all
$k\in\mathbb{N},k\geq 1$, there exist
$g_{k},h_{k},u_{k},v_{k}\in\mathbb{Q}[\mathbf{x},y]$ , with each $g_{k}$ being
monic, such that the following conditions hold:
1. 1.
The pair $g_{k},h_{k}$ is a lift of $g_{k-1},h_{k-1}$, with
$u_{k}g_{k}+v_{k}h_{k}\equiv 1\bmod\mathcal{I}_{k}$; in particular, $f\equiv
g_{k}h_{k}\bmod{\mathcal{I}_{k}}$
2. 2.
$g_{k}\equiv g\bmod{\mathcal{I}_{k}}$ and $h_{k}\equiv
h\bmod{\mathcal{I}_{k}}$
Moreover, for each $k$, $g_{k}$ and $h_{k}$ are unique polynomials modulo
$\mathcal{I}_{k}$ satisfying the above conditions when the $g_{k}$s are monic.
For each $k$, we will call $g_{k},h_{k}$ the _$k$ -th iterated lift of
$g_{0}$, $h_{0}$_.
If $\deg_{\mathbf{x}}(g)=d$, we can choose an integer $k^{*}$ such that
$d<2^{k^{*}}\leq 2d$ and use the above Lemma to get $g_{k^{*}}\equiv
g\bmod{\mathcal{I}_{k^{*}}}$, which means we can truncate $g_{k^{*}}$ to
degree $d$ and retrieve $g$. The next lemma tells us that this can be done
with reasonable bounds on the parameters of the underlying circuits.
###### Lemma 2.26 (Small circuit for Hensel lifting (Lemma 3.6 [KSS15])).
Let $f$ be a degree $D$ polynomial in $\mathbb{Q}[\mathbf{x},y]$, computable
by a $(\Sigma\Pi)^{(k)}$ formula of size and bit-complexity $s$, with a
factorization $f=gh$ such that $\gcd(g,h)=1$ and $g$ is monic. Let
$g_{0}=g\bmod{\mathcal{I}}$ and $h_{0}=h\bmod{\mathcal{I}}$ be univariates in
$\mathbb{Q}[y]$ with $\gcd(g_{0},h_{0})=1$.
Then, there are formulas $C_{g},C_{h}$ of size and bit complexity
$(sDk)^{O(k\log D)}$ that compute the $k^{\text{th}}$ iterated lift
$g_{k}$,$h_{k}$ of $g_{0}$,$h_{0}$, where $g_{k}$ is monic. More generally, if
the total degree of $g_{k}$ is at most $d$, then the size and bit complexity
of the formula for $g_{k}$ is at most $(sDk)^{O(\log d)}$.
Moreover, there is a deterministic algorithm, that when given the formulas for
$f$ and $g_{0},h_{0}$ and integer $k$ as input, outputs the formulas for
$g_{k}$ and $h_{k}$ in time $(sDk)^{O(k\log D)}$ ( resp. $(sDk)^{O(\log d)}$
if $g_{k}$ has total degree $d$).
(Proof sketch deferred to Appendix A.)
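To make a single lifting step concrete, here is a minimal sympy sketch of one monic lift (from modulo $\langle x\rangle$ to modulo $\langle x\rangle^{2}$) on a toy bivariate example. It follows the standard textbook update rule, not the specific circuit construction of Lemma 2.26:

```python
# One monic Hensel-lifting step for f in Q[x, y] with I = <x> (cf. Lemma 2.25).
# Update rule: e = f - g*h;  g <- g + (v*e mod g);  h <- h + u*e + h*quot(v*e, g),
# where u*g + v*h = 1 mod I. The example polynomial and names are illustrative.
from sympy import symbols, expand, div, gcdex, Poly

x, y = symbols('x y')

f = expand((y**2 + x) * (y + 1 + x))   # polynomial whose factorization we lift
g, h = y**2, y + 1                     # factorization of f(0, y)
u, v, one = gcdex(g, h)                # Bezout certificate in Q[y]
assert one == 1

def trunc(expr, k):
    """Reduce modulo <x>**k by dropping monomials of x-degree >= k."""
    p = Poly(expand(expr), x)          # coefficients live in Q[y]
    return sum(c * x**i for (i,), c in p.terms() if i < k)

prec = 2                               # lift from mod <x> to mod <x>**2
e = trunc(f - g * h, prec)             # current error
q, r = div(trunc(v * e, prec), g, y)   # v*e = q*g + r with deg_y(r) < deg_y(g)
g = expand(g + r)                      # stays monic in y
h = trunc(h + u * e + h * q, prec)

print(g)                   # y**2 + x
print(expand(f - g * h))   # 0: one step already recovers the true factors here
```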
### 2.8 Results on polynomial factorization
We rely on the following two fundamental results on polynomial factorization
for our results. The first theorem is a classical algorithm of Lenstra,
Lenstra and Lovász for factoring univariate polynomials over the field of
rational numbers.
###### Theorem 2.27 (Factorizing polynomials with rational coefficients
[LLL82, vzGG13]).
Let $f\in\mathbb{Q}[x]$ be a monic polynomial of degree $d$. Then there is a
deterministic algorithm computing all the irreducible factors of $f$ that runs
in time $\operatorname{poly}(d,t)$, where $t$ is the maximum bit-complexity of
the coefficients of $f$.
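For instance, this primitive can be exercised through sympy, whose `factor_list` serves here only as a stand-in for the deterministic algorithm of [LLL82]:

```python
# Factoring a univariate over Q, the primitive used in Theorem 2.27.
from sympy import symbols, factor_list

y = symbols('y')
f = y**4 - y**2 - 2
print(factor_list(f))   # (1, [(y**2 - 2, 1), (y**2 + 1, 1)])
```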
The second result we need is an easy consequence of the results of Kopparty,
Saraf and Shpilka [KSS15]. They showed that an efficient deterministic
algorithm for PIT for algebraic circuits implies an efficient deterministic
algorithm for polynomial factorization. The formal statement below essentially
invokes this for constant degree polynomials. In this case, the PIT instances
also happen to be of constant degree and hence can be easily solved in time
that is polynomial in the length of the coefficient vector of these
polynomials.
###### Theorem 2.28 ([KSS15]).
There is a deterministic algorithm that when given as input the coefficient
vector of an $n$ variate polynomial $f(\mathbf{x})\in\mathbb{Q}[\mathbf{x}]$
of total degree $d$, runs in time $n^{O(d^{2})}$ and decides if $f$ is
irreducible or not.
## 3 Computing candidate low-degree factors of multiplicity one
We first present the algorithm for computing candidate low-degree factors of
multiplicity one in Algorithm 1 below. In the next section, we use this as a
subroutine in Algorithm 2 to compute factors of all multiplicity and also
eliminate those candidates that were not actual factors.
Input: A $(\Sigma\Pi)^{(k)}$-formula of size $s$, bit-complexity $t$, degree $D$ computing a polynomial $f(\mathbf{x})$.
Output: A list of polynomials of degree at most $d$ that includes all factors of $f$ with degree at most $d$ and multiplicity $1$.
3. Set the output list $L=\emptyset$.
4. Compute the hitting set $H_{1}=\mathcal{H}(d,n)$ (as defined in Definition 2.11).
5. Compute a hitting set $H_{2}$ for the class of $\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$-formulas that have size $s^{\prime}\leq(sD)^{O(d)}$. (Lemma 3.2, Theorem 2.14)
6. for ${\bm{\alpha}},{\bm{\beta}}\in H_{1}$ and ${\bm{\delta}}\in H_{2}$ do
8.   Define $F(\mathbf{x},y)=f(\mathbf{x}+{\bm{\alpha}}\cdot y+{\bm{\beta}}+{\bm{\delta}})=f(x_{1}+\alpha_{1}y+\beta_{1}+\delta_{1},\ldots,x_{n}+\alpha_{n}y+\beta_{n}+\delta_{n})$.
9.   Using interpolation on the formula for $F(\mathbf{x},y)$ (via 2.6), compute $F({\bm{0}},y)$ as a sum of monomials.
10.  Factorise the polynomial $F({\bm{0}},y)$ into irreducible factors as $F({\bm{0}},y)=\sigma\cdot F_{1}^{e_{1}}\cdots F_{r}^{e_{r}}$, where $0\neq\sigma\in\mathbb{Q}$ and each $F_{i}$ is monic in $y$.
11.  for $T\subseteq[r]$ of size at most $d$ do
13.    Define $g_{0}=\prod_{i\in T}F_{i}^{e_{i}}$ and $h_{0}=\sigma\cdot\prod_{i\notin T}F_{i}^{e_{i}}$, interpreted as polynomials in $\mathbb{Q}[\mathbf{x},y]$ for Lemma 2.25.
14.    if $\deg(g_{0})>d$ then
15.      continue to the next choice of $T$ in the current loop.
17.    Compute polynomials $u_{0},v_{0}$ such that $u_{0}g_{0}+v_{0}h_{0}=1$.
18.    Use Hensel lifting (Lemma 2.26) to lift the factorisation $F(\mathbf{x},y)=g_{0}(\mathbf{x},y)\cdot h_{0}(\mathbf{x},y)\bmod{I}$, where $I=\left\langle\mathbf{x}\right\rangle$, to obtain algebraic circuits for $g_{\ell},h_{\ell}$ satisfying $F(\mathbf{x},y)=g_{\ell}(\mathbf{x},y)\cdot h_{\ell}(\mathbf{x},y)\bmod{I^{2^{\ell}}}$, with $g_{\ell}$ being monic and $d<2^{\ell}\leq 2d$.
19.    Using interpolation on the circuit for $g_{\ell}$ (via Lemma 2.15), compute $g_{\ell}$ as a sum of monomials.
20.    Add $\tilde{g}=g_{\ell}(\mathbf{x}-{\bm{\delta}}-{\bm{\beta}},0)$ to $L$.
return $L$
Algorithm 1 Computing candidate degree $d$ factors of factor-multiplicity one
Before we discuss the proof of correctness and running time of Algorithm 1, we
state two simple observations that we use in the analysis. We defer the proofs
of these observations to the end of the section.
###### Observation 3.1 (Size growth under a translation of low Hamming
weight).
Let $k>0$ be a parameter. Let $f(\mathbf{x})$ be an $n$-variate polynomial of
degree at most $D$ with $(\Sigma\Pi)^{(k)}$-size at most $s$. If
${\bm{\alpha}},{\bm{\beta}}\in\mathcal{H}(d,n)$, the polynomial
$\tilde{f}(\mathbf{x},y)=f(\mathbf{x}+y{\bm{\alpha}}+{\bm{\beta}})$ has
$(\Sigma\Pi)^{(k)}$-size at most $s\cdot D^{O(d)}$.
###### Lemma 3.2.
Let $f(\mathbf{x})$ be an $n$-variate polynomial computed by a
$(\Sigma\Pi)^{(k)}$ formula of size $s$, and let $g(\mathbf{x})$ be an
$n$-variate degree $d$ polynomial with $g({\bm{0}})\neq 0$. Let
$Q(\mathbf{x})$ be the pseudo-quotient of $f$ and $g$. Then, for any variable
$y\in\mathbf{x}$, the polynomial $\operatorname{Res}_{y}(Q,g)$ is computable
by a
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formula of size at most $s^{O(d)}$.
### 3.1 Proof of correctness of the Algorithm 1
###### Lemma 3.3 (Correctness of Algorithm 1).
For every input polynomial $f$ computed by $(\Sigma\Pi)^{(k)}$ formulas of
size $s$, bit-complexity $t$, degree $D$ and any factor $g$ of degree at most
$d$ with $g\mid f$ and $g^{2}\nmid f$, the polynomial $g$ is included in the
output list of Algorithm 1 on input $f$.
###### Proof.
Algorithm 1 outputs a list of candidate factors; we would like to prove that
every factor of $f$ with degree $\leq d$ and factor-multiplicity one will be
contained in this list. Fix any specific factor $g$ of $f$, with
$\deg(g)=d^{\prime}\leq d$ and factor-multiplicity one, which ensures that
$\gcd(g,f/g)=1$.
1. Make $g$ monic with $g({\bm{0}})\neq 0$:
The coefficient of $y^{d^{\prime}}$ in
$g^{\prime}(\mathbf{x},y):=g(\mathbf{x}+y{\bm{\alpha}}+{\bm{\beta}})$ is the
evaluation of $\operatorname{Hom}_{d^{\prime}}(g)$ at $\bm{\alpha}$ and the
constant term of $g^{\prime}(\mathbf{x},y)$ is
$g^{\prime}({\bm{0}},0)=g({\bm{\beta}})$. Thus by Lemma 2.12, there is some
${\bm{\alpha}},{\bm{\beta}}\in H_{1}$ such that
$\operatorname{Hom}_{d^{\prime}}(g)({\bm{\alpha}})\neq 0$ and
$g({\bm{\beta}})\neq 0$. Fix this choice of ${\bm{\alpha}},{\bm{\beta}}$. We
then have that $g^{\prime}(\mathbf{x},y)$ is monic in $y$, has
$\deg_{y}(g^{\prime})=\deg(g)=d^{\prime}$, and has non-zero constant term.
2. Bound the size of the $\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$ formula for the resultant:
With the above properties, the pseudo-quotient $h^{\prime}$ of
$f^{\prime}(\mathbf{x},y):=f(\mathbf{x}+y{\bm{\alpha}}+{\bm{\beta}})$ and
$g^{\prime}(\mathbf{x},y)$ is well-defined and is a polynomial in
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$ (by
Lemma 2.17) of size $\operatorname{poly}(s,D,d)\leq\operatorname{poly}(sD)$.
By Lemma 3.2,
$\operatorname{Res}_{y}(g^{\prime},h^{\prime})\in\mathbb{Q}[\mathbf{x}]$ is a
non-zero polynomial computable by
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas of size $(sD)^{O(d)}$.
3. Maintain the $\gcd(g,h)=1$ condition in the univariate setting by hitting the resultant:
Let $\deg_{y}(h^{\prime})=r$ and
$h^{\prime}(\mathbf{x},y)=h_{0}^{\prime}(\mathbf{x})+\cdots+h_{r}^{\prime}(\mathbf{x})y^{r}$.
Since $h^{\prime}$ is computable by size $(sD)^{O(d)}$ formula from
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$, so
is the leading term $h_{r}^{\prime}(\mathbf{x})$ by Lemma 2.7. Therefore by
2.5, the polynomial
$\Gamma(\mathbf{x})=\operatorname{Res}_{y}(g^{\prime},h^{\prime})\cdot
h^{\prime}_{r}(\mathbf{x})$ is also computable by
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas of size $s^{\prime}=(sD)^{O(d)}$. Since $H_{2}$ is a hitting set for
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas of size $s^{\prime}$, fix a ${\bm{\delta}}\in H_{2}$ such that
$\Gamma({\bm{\delta}})\neq 0$ and in particular, the conditions required in
2.23 are true (note that the leading coefficient of $g^{\prime}$ is just 1 by
monicness). By Lemma 2.22 and 2.23, we have that $g^{\prime}({\bm{\delta}},y)$
and $h^{\prime}({\bm{\delta}},y)$ are coprime polynomials. Thus, if
$g^{\prime\prime}(\mathbf{x},y)=g^{\prime}(\mathbf{x}+{\bm{\delta}},y)$ and
$h^{\prime\prime}(\mathbf{x},y)=h^{\prime}(\mathbf{x}+{\bm{\delta}},y)$
($h^{\prime}$ being the pseudo-quotient), Theorem 2.18 implies that
$\displaystyle f(\mathbf{x}+{\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})=g^{\prime\prime}(\mathbf{x},y)\cdot h^{\prime\prime}(\mathbf{x},y)\implies f({\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})=g^{\prime\prime}({\bm{0}},y)\cdot h^{\prime\prime}({\bm{0}},y),\quad\text{with }\gcd(g^{\prime\prime}({\bm{0}},y),h^{\prime\prime}({\bm{0}},y))=1.$
4. Univariate factorization and Hensel lifting:
Line 10 of Algorithm 1 thus factorises the univariate polynomial
$f({\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})$, and one of the sets $T$ in
line 11 must correspond to a choice of $g_{0}$ in line 13 satisfying
$g_{0}(y)=g^{\prime\prime}({\bm{0}},y)$ and
$h_{0}(y)=h^{\prime\prime}({\bm{0}},y)$. Thus, we have a factorisation of the
form
$\displaystyle f({\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})=g^{\prime\prime}({\bm{0}},y)\cdot h^{\prime\prime}({\bm{0}},y)=g_{0}\cdot h_{0}\implies f(\mathbf{x}+{\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})=g_{0}\cdot h_{0}\bmod{\mathcal{I}},\quad\text{where }\mathcal{I}=\left\langle\mathbf{x}\right\rangle.$
We are therefore set-up to apply Hensel Lifting (Lemma 2.25) to obtain
$g_{\ell},h_{\ell}$ such that $g_{\ell}$ is monic in $y$ and
$f(\mathbf{x}+{\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})=g_{\ell}(\mathbf{x},y)\cdot
h_{\ell}(\mathbf{x},y)\bmod{\mathcal{I}^{2^{\ell}}}.$
From the uniqueness of Hensel Lifting (which is guaranteed by Lemma 2.25), we
must have that
$g_{\ell}(\mathbf{x},y)=g^{\prime\prime}(\mathbf{x},y)=g(\mathbf{x}+{\bm{\alpha}}y+{\bm{\beta}}+{\bm{\delta}})$.
Thus, for this choice of ${\bm{\alpha}},{\bm{\beta}},{\bm{\delta}}$ and $T$,
we would include
$g(\mathbf{x})=g^{\prime\prime}(\mathbf{x}-{\bm{\beta}}-{\bm{\delta}},0)$ in
the set of candidate factors in line 20 of Algorithm 1.
Finally, since the lift also ensures that there exist $u_{\ell}$ and
$v_{\ell}$ such that
$u_{\ell}g_{\ell}+v_{\ell}h_{\ell}=1\bmod{\mathcal{I}^{2^{\ell}}}$, we also
have that $g_{\ell}^{2}\nmid f$.
∎
### 3.2 Running time analysis
We now bound the time complexity of the algorithm.
###### Lemma 3.4 (Running time of Algorithm 1).
Let $\varepsilon>0,d,k\in\mathbb{N}$ be arbitrary constants and let
$f\in\mathbb{Q}[\mathbf{x}]$ be a polynomial computable by a
$(\Sigma\Pi)^{(k)}$ formula $C$ of size $s$, degree at most $D$ and bit-
complexity $t$. Then, on input $C$, Algorithm 1 terminates in time at most
$(sD)^{O_{\varepsilon}(kd(sD)^{\varepsilon d})}\cdot t^{O(d\log d)}$.
Moreover, if $k=1$, i.e. $f$ has sparsity at most $s$, then Algorithm 1
terminates in time at most $(sDt)^{(\operatorname{poly}(d)\log sDt)}$.
###### Proof.
Let $T_{k}^{(1)}(s,d)$ be the time-complexity to output the hitting set
$H_{1}$ in line 4 of Algorithm 1 and $T_{k}^{(2)}(s,D,d)$ be the time-complexity to
output the hitting set $H_{2}$ in line 5.
From Definition 2.11, we immediately have that $T_{k}^{(1)}(s,d)\leq
s^{O(d)}$. As for $T_{k}^{(2)}(s,D,d)$, in the case of $k=1$, Theorem 2.14
shows that $T_{k}^{(2)}(s,D,d)\leq(sD)^{(\operatorname{poly}(d)\log sD)}$. For
$k$ satisfying $2\leq k=o(\log\log\log s)$, then Theorem 2.13 shows that
$T_{k}^{(2)}(s,D,d)\leq(sD)^{O_{\varepsilon}(kd(sD)^{\varepsilon d})}$ for any
constant $\varepsilon>0$.
Using 2.6, we get that line 9 of Algorithm 1 takes $\operatorname{poly}(s,D,t)$ time.
Now, each of the coefficients of $F({\bm{0}},y)$ has bit-complexity at most
$\operatorname{poly}(s,D,t)$. Thus, from Theorem 2.27, we get that
$F({\bm{0}},y)$ can be factorized into its irreducible factors in time at most
$\operatorname{poly}(s,D,t)$.
There are at most $D^{d}$ choices for the set $T$ in line 11 of Algorithm 1. For each
such choice, lines 13 to 17 compute formulas of size
$\operatorname{poly}(s,D,t)$ for $g_{0}$, $h_{0}$, $u_{0}$, $v_{0}$ in time
$\operatorname{poly}(s,D,t)$. By Lemma 2.26, we have that line 18 takes time
$(sDt)^{O(\log d)}$ to compute a formula of the same size and bit-complexity
for $g_{\ell}$. From 2.15, we get that we can obtain the coefficient vector of
$g_{\ell}$ in time at most $(sDt)^{O(d\log d)}$.
Therefore, the overall running time of Algorithm 1 is at most
$T_{k}^{(1)}(s,d)\cdot T_{k}^{(2)}(s,D,d)\cdot
D^{d}\cdot\operatorname{poly}(s,D,t)\cdot(sDt)^{O(d\log d)}\,.$
Plugging in the estimates for $T_{k}^{(1)}(s,d)$, $T_{k}^{(2)}(s,D,d)$, we get
the overall bound of $(sD)^{O_{\varepsilon}(kd(sD)^{\varepsilon d})}\cdot
t^{O(d\log d)}$ for $k>1$, which is essentially dominated by
$T_{k}^{(2)}(s,D,d)$.
When $f$ has sparsity $s$, then as discussed in the proof, $T_{k}^{(2)}(s,D,d)$
is at most $(sD)^{(\operatorname{poly}(d)\log sD)}$. Plugging this back in the
above expression, we get that the running time is at most
$(sDt)^{(\operatorname{poly}(d)\log sDt)}$. ∎
### 3.3 Proof of structural lemmas
In this subsection, we include the proofs of 3.1 and Lemma 3.2. This completes
the analysis of Algorithm 1.
###### Proof of 3.1.
By definition of $\mathcal{H}(d,n)$ (Definition 2.11), the transformation
$\mathbf{x}\mapsto\mathbf{x}+y{\bm{\alpha}}+{\bm{\beta}}$ takes a monomial
$\prod_{i\in[n]}{x_{i}^{e_{i}}}$ to $\left(\prod_{i\in
T}{\left(x_{i}+\alpha_{i}y+\beta_{i}\right)^{e_{i}}}\right)\cdot\left(\prod_{i\in[n]\setminus
T}x_{i}^{e_{i}}\right)$, for some $T\subseteq[n]$ with $|T|\leq 2d$ (the union of the supports of ${\bm{\alpha}}$ and ${\bm{\beta}}$). If we expand
$\prod_{i\in T}{\left(x_{i}+\alpha_{i}y+\beta_{i}\right)^{e_{i}}}$ into a sum
of monomials, we will get at most $D^{O(d)}$ monomials (when
$\sum_{i}e_{i}\leq D$). Expanding each
$\prod_{i\in[n]}{\left(x_{i}+\alpha_{i}y+\beta_{i}\right)^{e_{i}}}$ at the
bottom layer into a sum of monomials this way, we get the required
$(\Sigma\Pi)^{(k)}$ formula with size at most $s\cdot D^{O(d)}$. ∎
###### Proof of Lemma 3.2.
Let $\mathbf{x}^{\prime}=\mathbf{x}\setminus\left\\{y\right\\}$ and let
$\mathcal{C}$ be the class
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$.
Let us assume that $\deg_{y}(Q)=D\leq s$ and $\deg_{y}(g)=d$. By Lemma 2.17,
we have that $Q(\mathbf{x})$ is computable by a $\mathcal{C}$-formula of size
at most $\operatorname{poly}(s,d)$. Let us consider the $(D+d)\times(D+d)$
Sylvester matrix $\Gamma$ of $Q$ and $g$ with respect to the variable $y$
whose determinant is $\operatorname{Res}_{y}(Q,g)$.
$\displaystyle Q(\mathbf{x})=Q_{0}(\mathbf{x}^{\prime})+y\cdot Q_{1}(\mathbf{x}^{\prime})+\cdots+y^{D}\cdot Q_{D}(\mathbf{x}^{\prime}),\qquad g(\mathbf{x})=g_{0}(\mathbf{x}^{\prime})+\cdots+y^{d}\cdot g_{d}(\mathbf{x}^{\prime}),$
$\Gamma=\begin{bmatrix}Q_{0}&Q_{1}&\dots&&Q_{D}&&\\ &\ddots&\ddots&&\ddots&\ddots&\\ &&Q_{0}&Q_{1}&&\dots&Q_{D}\\ g_{0}&\dots&&g_{d}&&&\\ &g_{0}&\dots&&g_{d}&&\\ &&\ddots&\ddots&&\ddots&\\ &&&g_{0}&\dots&&g_{d}\end{bmatrix}$
Note that, by 2.6, each of the $Q_{i}$’s are computed by a
$\mathcal{C}$-formula of size $\operatorname{poly}(s,D)$ and each $g_{i}$ is a
polynomial of degree at most $d$.
For a subset $S$ of rows and $T$ of columns, we will use $\Gamma(S,T)$ to
refer to the submatrix restricted to the rows in $S$ and columns in $T$, and
let $\operatorname{Top}=\left\\{1,\ldots,d\right\\}$ and
$\operatorname{Bot}=\left\\{d+1,\ldots,d+D\right\\}$. The determinant of
$\Gamma$ can then be expressed as
$\det(\Gamma)=\operatorname{Res}_{y}(Q,g)=\sum_{T\in\binom{[D+d]}{d}}\det(\Gamma(\operatorname{Top},T))\cdot\det(\Gamma(\operatorname{Bot},\overline{T}))$
For every choice of $T$, the polynomial $\det(\Gamma(\operatorname{Top},T))$
is the determinant of a $d\times d$ matrix each of whose entries are
computable by $s^{\prime}=\operatorname{poly}(s,D)$ sized
$\mathcal{C}$-formulas. Therefore, using 2.5, the polynomial
$\det(\Gamma(\operatorname{Top},T))$ is computable by $\mathcal{C}$-formulas
of size at most $(sD)^{O(d)}$.
The polynomial $\det(\Gamma(\operatorname{Bot},\overline{T}))$ is a degree $D$
polynomial combination of $g_{0},\ldots,g_{d}$ and can therefore be expressed
as
$\displaystyle\det(\Gamma(\operatorname{Bot},\overline{T}))=\sum_{i=1}^{D^{d+1}}a_{i}\cdot g_{0}^{e_{i,0}}\cdots g_{d}^{e_{i,d}}=\sum_{i=1}^{D^{d+1}}a_{i}\cdot\left(\sum_{j=1}^{D^{O(d)}}b_{ij}\cdot f_{ij}^{e_{ij}}\right)\quad\text{(using Lemma 2.9)},$
for some polynomials $f_{ij}$ of degree at most $d$. Thus, using 2.5 again, we
have that $\operatorname{Res}_{y}(Q,g)$ is computable by
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas of size at most $D^{O(d)}\cdot(sD)^{O(d)}=(sD)^{O(d)}$. ∎
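As a sanity check on the matrix structure above, the following sympy sketch (on toy polynomials of our own choosing) assembles the Sylvester matrix of $Q$ and $g$ with respect to $y$ and compares its determinant against sympy's built-in resultant; the sketch uses the standard descending-coefficient convention, so it matches the displayed matrix up to a reordering of columns (and hence possibly a sign).

```python
# Sylvester matrix / resultant check with sympy, on toy inputs of our own.
import sympy as sp

x, y = sp.symbols("x y")
Q = sp.Poly(y**2 + x * y + 1, y)   # D = 2, coefficients in Q[x]
g = sp.Poly(y + x**2, y)           # d = 1

D, d = Q.degree(), g.degree()
qc = Q.all_coeffs()                # [Q_D, ..., Q_0], descending in y
gc = g.all_coeffs()                # [g_d, ..., g_0]

rows = []
for i in range(d):                 # d shifted rows of Q's coefficients
    rows.append([0] * i + qc + [0] * (d - 1 - i))
for i in range(D):                 # D shifted rows of g's coefficients
    rows.append([0] * i + gc + [0] * (D - 1 - i))
Gamma = sp.Matrix(rows)            # the (D+d) x (D+d) Sylvester matrix

print(sp.expand(Gamma.det()))                      # x**4 - x**3 + 1
print(sp.resultant(Q.as_expr(), g.as_expr(), y))   # the same polynomial here
```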
## 4 Computing factors of all multiplicity
The following lemma essentially shows that the multiplicity of any factor $g$
of a given polynomial $f$ can be reduced by working with appropriate partial
derivatives of $f$, with respect to variables that are present in $g$. This
naturally yields an algorithm that uses Algorithm 1 as a subroutine, and
computes all irreducible factors of $f$.
###### Lemma 4.1 (Reducing factor multiplicity).
Let $f(\mathbf{x}),g(\mathbf{x})\in\mathbb{Q}[\mathbf{x}]$ be non-zero
polynomials and let $x\in\mathbf{x}$ be such that $\partial_{x}(g)\neq 0$ and
$g$ is square-free. Then, the factor-multiplicity of $g$ in $f$ (i.e. the
integer $a$ satisfying $g^{a}\mid f$ and $g^{a+1}\nmid f$) is also the
smallest non-negative integer $a$ such that
$g\nmid\frac{\partial^{a}f}{\partial x^{a}}$.
###### Proof.
If the factor-multiplicity of $g$ in $f$ is zero, i.e. $g\nmid f$, then the claim
is clearly true. Thus let us assume that the factor-multiplicity of $g$ in $f$
is $a\geq 1$. It suffices to show that the factor-multiplicity of $g$ in
$\partial_{x}(f)$ is exactly $a-1$.
Suppose $f=g^{a}\cdot h$ where $\gcd(g,h)=1$. Then,
$\partial_{x}f=\partial_{x}(g^{a})\cdot
h+g^{a}\cdot\partial_{x}(h)=g^{a-1}\cdot(a\cdot\partial_{x}(g)\cdot
h+g\cdot\partial_{x}(h)).$
Hence, we have that the factor-multiplicity of $g$ in $\partial_{x}(f)$ is at
least $(a-1)$.
On the other hand, we have that $\partial_{x}(g)\neq 0$ and $g$ is square-free,
and hence $\gcd(g,\partial_{x}(g))=1$. Therefore
$\gcd(g,a\cdot\partial_{x}(g)\cdot
h+g\cdot\partial_{x}(h))=\gcd(g,h\cdot\partial_{x}(g))=\gcd(g,h)=1$
and hence $g^{a}\nmid\partial_{x}(f)$. Therefore, the factor-multiplicity of
$g$ in $\partial_{x}(f)$ is exactly $a-1$, as required. ∎
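The following sympy sketch runs Lemma 4.1 on a toy example of our own: it finds the multiplicity of $g$ in $f$ as the smallest $a$ with $g\nmid\partial^{a}f/\partial x^{a}$.

```python
# Lemma 4.1 on a toy instance: multiplicity via repeated x-derivatives.
import sympy as sp

x, z = sp.symbols("x z")
g = x + z                    # square-free, and depends on x
f = (x + z)**3 * (x - z)     # g has factor-multiplicity 3 in f

def divides(p, q):
    """True iff p divides q (division by a single multivariate polynomial)."""
    _, r = sp.div(sp.expand(q), p, x, z)
    return sp.expand(r) == 0

a, h = 0, f
while divides(g, h):
    h = sp.diff(h, x)
    a += 1
print(a)   # 3: the smallest a with g not dividing the a-th x-derivative
```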
We are now ready to describe the algorithm.
Input: A $(\Sigma\Pi)^{(k)}$-formula of size $s$, bit-complexity $t$ and degree
$D$ computing a polynomial $f(\mathbf{x})$.
Output: A list of all irreducible factors of $f$ of degree at most $d$, along
with their multiplicities.
1. Set the output list $L=\emptyset$.
2. Set the intermediate candidates list $L^{\prime}=\emptyset$.
3. Compute the hitting set $H_{1}=\mathcal{H}(d,n)$ (as defined in Definition 2.11).
4. for ${\bm{\alpha}}\in H_{1}$ do
5.  Define $F(\mathbf{x},y)=f(\mathbf{x}+{\bm{\alpha}}\cdot y)=f(x_{1}+\alpha_{1}y,\ldots,x_{n}+\alpha_{n}y)$.
6.  for $i=0,1,\ldots,\deg(F)$ do
7.   Define $\tilde{F}(\mathbf{x},y)=\frac{\partial^{i}F}{\partial y^{i}}$.
8.   Compute the list $\tilde{L}$ of all candidate degree-$d$ multiplicity-one factors of $\tilde{F}(\mathbf{x},y)$ using Algorithm 1.
9.   foreach $\tilde{g}(\mathbf{x},y)\in\tilde{L}$ do
10.    Add $g(\mathbf{x}):=\tilde{g}(\mathbf{x},0)$ to $L^{\prime}$.
11. for $g\in L^{\prime}$ do
12.  if $g$ is not irreducible then skip to the next iteration.
13.  Let $x$ be a variable that $g$ depends on, so that $\partial_{x}(g)\neq 0$.
14.  Find the smallest non-negative integer $e$ such that $g\nmid\frac{\partial^{e}f}{\partial x^{e}}$.
15.  if $e\geq 1$ then add $(g,e)$ to the list $L$.
16. return $L$
Algorithm 2 Computing the list of all degree-$d$ irreducible factors and their
multiplicities
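For concreteness, here is a schematic Python/sympy rendering of Algorithm 2. The subroutines `candidate_factors` (standing in for Algorithm 1), `is_irreducible` (for the irreducibility test of Theorem 2.28), and the hitting set `H1` (Definition 2.11) are assumed inputs of this sketch, not implemented here.

```python
# Schematic sketch of Algorithm 2; candidate_factors, is_irreducible and H1
# are assumed stand-ins for Algorithm 1, Theorem 2.28 and Definition 2.11.
import sympy as sp

def all_low_degree_factors(f, xs, d, H1, candidate_factors, is_irreducible):
    y = sp.Symbol("y")
    L, Lp = [], set()
    for alpha in H1:                                             # lines 3-5
        Ft = sp.expand(f.subs({xi: xi + ai * y
                               for xi, ai in zip(xs, alpha)},
                              simultaneous=True))
        for _ in range(sp.Poly(Ft, *xs, y).total_degree() + 1):  # lines 6-10
            for gt in candidate_factors(Ft, d):                  # Algorithm 1
                Lp.add(sp.expand(gt.subs(y, 0)))
            Ft = sp.diff(Ft, y)
    for g in Lp:                                                 # lines 11-15
        if not is_irreducible(g):
            continue
        xv = next(v for v in xs if sp.diff(g, v) != 0)
        e, h = 0, f                    # smallest e with g not dividing d^e f
        while sp.div(sp.expand(h), g, *xs)[1] == 0:
            h, e = sp.diff(h, xv), e + 1
        if e >= 1:
            L.append((g, e))
    return L
```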
###### Lemma 4.2 (Correctness of Algorithm 2).
For every input polynomial $f$ computed by a $(\Sigma\Pi)^{(k)}$ formula of
size $s$, degree $D$, bit-complexity $t$ and $d\in\mathbb{N}$, the list $L$
output by Algorithm 2 is precisely the list of all irreducible factors of $f$
of degree at most $d$ (up to scalar multiplication) along with their
multiplicities in $f$.
###### Proof.
From lines 12–15 of Algorithm 2 and Lemma 4.1, it is clear that any $(g,e)$ in the
output list ensures that $g$ is an irreducible polynomial, $g^{e}\mid f$ and
$g^{e+1}\nmid f$. Thus, it suffices to show that for every irreducible
polynomial $g$ such that $\deg(g)\leq d$ and $g\mid f$, some non-zero scalar
multiple of $g$ is under consideration in the list $L^{\prime}$. Fix any such
irreducible factor $g$ of degree $r\leq d$ and let its factor-multiplicity in
$f$ be $e$.
By Lemma 2.12, there is some ${\bm{\alpha}}\in H_{1}$ such that
$\operatorname{Hom}_{r}(g)({\bm{\alpha}})\neq 0$, where $r$ is the total
degree of $g$. Thus, for this choice of ${\bm{\alpha}}$, we have that
$g^{\prime}(\mathbf{x},y)=g(\mathbf{x}+y{\bm{\alpha}})$ is a factor of
$F(\mathbf{x},y)=f(\mathbf{x}+y{\bm{\alpha}})$ and $g^{\prime}$ is monic in
$y$ and has factor-multiplicity $e$. By Lemma 4.1, we have that $g^{\prime}$
has factor-multiplicity one in
$\tilde{F}(\mathbf{x},y):=\frac{\partial^{e-1}F}{\partial y^{e-1}}$. Thus, by
the correctness of Algorithm 1 (Lemma 3.3), a non-zero multiple of the
polynomial $g^{\prime}(\mathbf{x},y)$ must be included in the list $\tilde{L}$
computed in line 8 of Algorithm 2. Therefore, a non-zero multiple of
$g(\mathbf{x})=g^{\prime}(\mathbf{x},0)$ will be added to $L^{\prime}$ in
line 10 of Algorithm 2. ∎
###### Lemma 4.3 (Running time of Algorithm 2).
Let $\varepsilon>0,k,d\in\mathbb{N}$ be arbitrary constants. Let
$f\in\mathbb{Q}[\mathbf{x}]$ be a polynomial computable by a
$(\Sigma\Pi)^{(k)}$ formula $C$ of size $s$, degree at most $D$ and bit-
complexity $t$. Then, on input $C$ and $d\in\mathbb{N}$, Algorithm 2
terminates in time at most $(sDt)^{O(kd(sDt)^{\varepsilon d})}$.
Moreover, if $k=1$, i.e. $f$ has sparsity at most $s$, then Algorithm 2
terminates in time at most $(snDt)^{O(\operatorname{poly}(d)\cdot\log snDt)}$.
###### Proof.
From Definition 2.11, we have that the size of the set $H_{1}$ is $n^{O(d)}$. The
time complexity of computing a formula for $F$ from the given formula for $f$
is at most $O(sD)$. From Lemma 2.8, we have that $(\Sigma\Pi)^{(k+1)}$ formulas
for all the $y$ derivatives of $F$ can be computed in time at most
$\operatorname{poly}(s,D,t)$, which is also a bound on the bit-complexity and
the size of these formulas. For each ${\bm{\alpha}}\in H_{1}$, Algorithm 1 is invoked $O(D)$ times.
The total time taken to construct the list $L^{\prime}$ is at most $D\cdot
T_{1}$, where $T_{1}$ is the time taken by Algorithm 1 on inputs with formula
size and bit-complexity $\operatorname{poly}(s,D,t)$, and degree parameter
$d$. $D\cdot T_{1}$ is also an upper bound on the size of the list of
candidate factors $L^{\prime}$.
Now, for each $g\in L^{\prime}$, from Theorem 2.28, we have that the
irreducibility test in line 12 of Algorithm 2 takes at most $(sDt)^{O(d^{2})}$ time.
There are at most $D$ instances of divisibility test performed to determine
the exact multiplicity in $f$ of each $g\in L^{\prime}$. This requires
computing the corresponding derivatives, which as discussed in the previous
paragraph, takes time $\operatorname{poly}(s,D,t)$ and outputs a formula of
size and bit-complexity $\operatorname{poly}(s,D,t)$ for the derivatives, and
then doing a divisibility test, the time complexity of which we denote by
$T_{2}$.
Therefore, the total time taken by the algorithm is at most
$(n^{O(d)}\cdot\operatorname{poly}(s,D,t)\cdot D\cdot T_{1})+(D\cdot
T_{1}\cdot(sDt)^{O(d^{2})}\cdot\operatorname{poly}(s,D,t)\cdot T_{2})$.
Now, if $f$ is $s$-sparse, i.e. $k=1$, then from 2.11, we have that every
vector in $H_{1}$ has at most $d$ non-zero coordinates. Thus, from 3.1, for
every ${\bm{\alpha}}\in H_{1}$,
$F(\mathbf{x},y)=f(\mathbf{x}+{\bm{\alpha}}\cdot y)$ has sparsity and bit-
complexity at most $s^{\prime}\leq s\cdot D^{d}$. Note that the derivatives of
arbitrary order of $F$ with respect to any variable also have the same bound
on their sparsity and bit-complexity of coefficients. Thus, in this case, from
3.4, $T_{1}\leq(sDt)^{\operatorname{poly}(d)\log sDt}$. From Theorem 2.20, we
have that $T_{2}\leq(snD)^{O(d\log^{2}snD)}$. Therefore, the overall running
time of the algorithm is at most $(snDt)^{O(\operatorname{poly}(d)\cdot\log
snDt)}$.
On the other hand, if $k>1$, then from 3.4,
$T_{1}\leq(sD)^{O_{\varepsilon}(kd(sD)^{\varepsilon d})}\cdot t^{O(d\log d)}$.
To bound $T_{2}$ in this case, we note from 2.19 that these divisibility-testing
instances reduce to PIT instances for $(\Sigma\Pi)^{(k+1)}$ formulas of size
and bit-complexity at most $\operatorname{poly}(s,D,t)$, and from Theorem 2.13,
this can be done in at most $(sDt)^{O(k(sDt)^{\varepsilon})}$ time for the
arbitrary constant $\varepsilon$ chosen at the beginning. Thus, the total time
taken is at most $(sDt)^{O_{\varepsilon}(kd(sDt)^{\varepsilon d})}$. ∎
Lemma 4.2 and Lemma 4.3 together imply our main theorems Theorem 1.1 and
Theorem 1.2.
## 5 Open problems
We conclude with some open problems.
* •
Perhaps the most natural open problem here is to obtain efficient
deterministic algorithms that completely factor sparse polynomials or more
generally, polynomials with constant depth formulas (and not just obtain low
degree factors). In the absence of better structural guarantees for the
factors (for instance, if they are sparse or have small constant depth
formulas), we can seek algorithms that output general algebraic circuits for
these factors.
* •
Obtaining improved structural guarantees on the factors of polynomials that
are sparse or have small constant depth formulas as mentioned in the first
open problem is another very interesting open problem.
* •
A first step towards obtaining deterministic algorithms for general
factorization of polynomials with small constant depth formulas could be to
design deterministic algorithms for computing _simple_ factors of such
polynomials. While the notion of simplicity discussed in this paper is that of
low degree factors, there are other natural notions that seem very
interesting. For instance, can we design an efficient deterministic algorithm
that outputs all the sparse irreducible factors of a constant depth formula?
* •
As alluded to in the introduction, polynomial factorization algorithms have
found numerous applications in computer science. It would be interesting to
understand if there are applications of deterministic factorization algorithms
in general, and in particular the algorithms for computing low degree factors
described in this paper.
## Acknowledgements
A part of this work was done while the first two authors were at the Workshop
on Algebraic Complexity organised at the University of Warwick in March 2023
by Christian Ikenmeyer. We thank Christian for the invitation and the
delightful and stimulating atmosphere at the workshop.
## References
* [AS03] Sanjeev Arora and Madhu Sudan. Improved Low-Degree Testing and its Applications. Comb., 23(3):365–426, 2003.
* [BHKS20] Siddharth Bhandari, Prahladh Harsha, Mrinal Kumar, and Madhu Sudan. Decoding Multivariate Multiplicity Codes on Product Sets. Electron. Colloquium Comput. Complex., TR20-179, 2020. Pre-print available at arXiv:TR20-179.
* [Bog05] Andrej Bogdanov. Pseudorandom generators for low degree polynomials. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, May 22-24, 2005, pages 21–30. ACM, 2005.
* [BSCI+20] Eli Ben-Sasson, Dan Carmon, Yuval Ishai, Swastik Kopparty, and Shubhangi Saraf. Proximity Gaps for Reed–Solomon Codes. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 900–909, 2020.
* [BSV18] Vishwas Bhargava, Shubhangi Saraf, and Ilya Volkovich. Deterministic Factorization of Sparse Polynomials with Bounded Individual Degree. In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, pages 485–496. IEEE Computer Society, 2018.
* [CKS18] Chi-Ning Chou, Mrinal Kumar, and Noam Solomon. Hardness vs Randomness for Bounded Depth Arithmetic Circuits. In 33rd Computational Complexity Conference, CCC 2018, June 22-24, 2018, San Diego, CA, USA, volume 102 of LIPIcs, pages 13:1–13:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018.
* [DL78] Richard A. DeMillo and Richard J. Lipton. A Probabilistic Remark on Algebraic Program Testing. Information Processing Letters, 7(4):193–195, 1978.
* [DSY09] Zeev Dvir, Amir Shpilka, and Amir Yehudayoff. Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits. SIAM J. Comput., 39(4):1279–1293, 2009.
* [Ell69] W. J. Ellison. A ‘Waring’s Problem’ for homogeneous forms. Proceedings of the Cambridge Philosophical Society, 65:663–672, 1969.
* [Fis94] Ismor Fischer. Sums of like powers of multivariate linear forms. Mathematics Magazine, 67(1):59–61, 1994.
* [For14] Michael Forbes. Polynomial Identity Testing of Read-Once Oblivious Algebraic Branching Programs. PhD thesis, Massachusetts Institute of Technology, 2014.
* [For15] Michael A. Forbes. Deterministic Divisibility Testing via Shifted Partial Derivatives. In Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), FOCS ’15, page 451–465, USA, 2015. IEEE Computer Society.
* [FS13] Michael A. Forbes and Amir Shpilka. Quasipolynomial-Time Identity Testing of Non-commutative and Read-Once Oblivious Algebraic Branching Programs. In Proceedings of the 54 Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013), pages 243–252, 2013. Full version at arXiv:1209.2408.
* [FS15] Michael A. Forbes and Amir Shpilka. Complexity Theory Column 88: Challenges in Polynomial Factorization. SIGACT News, 46(4):32–49, Dec 2015.
* [FSS14] Michael A. Forbes, Ramprasad Saptharishi, and Amir Shpilka. Hitting sets for multilinear read-once algebraic branching programs, in any order. In Proceedings of the 46 Annual ACM Symposium on Theory of Computing (STOC 2014), pages 867–875, 2014.
* [GKS16] Rohit Gurjar, Arpita Korwar, and Nitin Saxena. Identity Testing for Constant-Width, and Commutative, Read-Once Oblivious ABPs. In Proceedings of the 31 Annual Computational Complexity Conference (CCC 2016), pages 29:1–29:16, 2016. arXiv:1601.08031.
* [GKSS19] Zeyu Guo, Mrinal Kumar, Ramprasad Saptharishi, and Noam Solomon. Derandomization from Algebraic Hardness: Treading the Borders. In 60th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2019, Baltimore, Maryland, USA, November 9-12, 2019, pages 147–157. IEEE Computer Society, 2019.
* [GKST15] Rohit Gurjar, Arpita Korwar, Nitin Saxena, and Thomas Thierauf. Deterministic Identity Testing for Sum of Read-once Oblivious Arithmetic Branching Programs. In Proceedings of the 30 Annual Computational Complexity Conference (CCC 2015), pages 323–346, 2015. arXiv:1411.7341.
* [Kal85] Erich Kaltofen. Polynomial-Time Reductions from Multivariate to Bi- and Univariate Integral Polynomial Factorization. SIAM Journal of Computing, 14(2):469–489, 1985.
* [Kal89] Erich Kaltofen. Factorization of Polynomials Given by Straight-Line Programs. In Randomness and Computation, pages 375–412. JAI Press, 1989.
* [Kal92] Erich L. Kaltofen. Polynomial Factorization 1987-1991. In LATIN ’92, 1st Latin American Symposium on Theoretical Informatics, São Paulo, Brazil, April 6-10, 1992, Proceedings, volume 583 of Lecture Notes in Computer Science, pages 294–313. Springer, 1992.
* [Kal03] Erich L. Kaltofen. Polynomial factorization: a success story. In Symbolic and Algebraic Computation, International Symposium ISSAC 2003, Drexel University, Philadelphia, Pennsylvania, USA, August 3-6, 2003, Proceedings, pages 3–4. ACM, 2003.
* [KS01] Adam Klivans and Daniel A. Spielman. Randomness efficient identity testing of multivariate polynomials. In Proceedings of the 33 Annual ACM Symposium on Theory of Computing (STOC 2001), pages 216–223, 2001.
* [KS09] Neeraj Kayal and Shubhangi Saraf. Blackbox polynomial identity testing for depth-$3$ circuits. In Proceedings of the 50 Annual IEEE Symposium on Foundations of Computer Science (FOCS 2009), 2009.
* [KSS15] Swastik Kopparty, Shubhangi Saraf, and Amir Shpilka. Equivalence of Polynomial Identity Testing and Polynomial Factorization. Computational Complexity, 24(2):295–331, 2015. Preliminary version in the _29 Annual IEEE Conference on Computational Complexity (CCC 2014)_.
* [KT88] Erich L. Kaltofen and Barry M. Trager. Computing with Polynomials Given By Black Boxes for Their Evaluation: Greatest Common Divisors, Factorization, Separation of Numerators and Denominators. In 29th Annual Symposium on Foundations of Computer Science, White Plains, New York, USA, 24-26 October 1988, pages 296–305. IEEE Computer Society, 1988.
* [KY08] Swastik Kopparty and Sergey Yekhanin. Detecting Rational Points on Hypersurfaces over Finite Fields. In Proceedings of the 23 Annual IEEE Conference on Computational Complexity (CCC 2008), pages 311–320, 2008.
* [LLL82] Arjen K. Lenstra, Hendrik W. Lenstra Jr., and László Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261(4):515–534, 1982.
* [LST21] Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits. In Proceedings of the 62 Annual IEEE Symposium on Foundations of Computer Science (FOCS 2021), pages 804–814. IEEE, 2021. Preliminary version in the Electronic Colloquium on Computational Complexity (ECCC), Technical Report TR21-081.
* [Ore22] Øystein Ore. Über höhere Kongruenzen. Norsk Mat. Forenings Skrifter, 1(7):15, 1922.
* [PS94] Alexander Polishchuk and Daniel A. Spielman. Nearly-Linear Size Holographic Proofs. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, STOC ’94, page 194–203, New York, NY, USA, 1994. Association for Computing Machinery.
* [Sap15] Ramprasad Saptharishi. A survey of lower bounds in arithmetic circuit complexity. Github survey, 2015.
* [Sch80] Jacob T. Schwartz. Fast Probabilistic Algorithms for Verification of Polynomial Identities. Journal of the ACM, 27(4):701–717, 1980.
* [Shp02] Amir Shpilka. Affine projections of symmetric polynomials. Journal of Computer and System Sciences, 65(4):639–659, 2002. Special Issue on Complexity 2001.
* [SS09] Nitin Saxena and C. Seshadhri. An Almost Optimal Rank Bound for Depth-3 Identities. In Proceedings of the 24 Annual IEEE Conference on Computational Complexity (CCC 2009), pages 137–148, 2009.
* [SS10] Nitin Saxena and C. Seshadhri. From Sylvester-Gallai Configurations to Rank Bounds: Improved Black-Box Identity Test for Depth-3 Circuits. In Proceedings of the 51 Annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), pages 21–29, 2010.
* [ST20] Amit Sinhababu and Thomas Thierauf. Factorization of Polynomials Given By Arithmetic Branching Programs. In 35th Computational Complexity Conference (CCC 2020), volume 169 of Leibniz International Proceedings in Informatics (LIPIcs), pages 33:1–33:19, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
* [Sud98] Madhu Sudan. Lecture notes for the course ‘Algebra and Computation’, 1998. Available from http://people.csail.mit.edu/madhu/FT98/.
* [SV10] Amir Shpilka and Ilya Volkovich. On the Relation between Polynomial Identity Testing and Finding Variable Disjoint Factors. In Automata, Languages and Programming, 37th International Colloquium, ICALP 2010, Bordeaux, France, July 6-10, 2010, Proceedings, Part I, volume 6198 of Lecture Notes in Computer Science, pages 408–419. Springer, 2010.
* [Vol15] Ilya Volkovich. Deterministically Factoring Sparse Polynomials into Multilinear Factors and Sums of Univariate Polynomials. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2015, August 24-26, 2015, Princeton, NJ, USA, volume 40 of LIPIcs, pages 943–958. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2015.
* [Vol17] Ilya Volkovich. On Some Computations on Sparse Polynomials. volume 81 of LIPIcs, pages 48:1–48:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
* [VSBR83] Leslie G. Valiant, Sven Skyum, S. Berkowitz, and Charles Rackoff. Fast Parallel Computation of Polynomials Using Few Processors. SIAM Journal of Computing, 12(4):641–644, 1983. Preliminary version in the _6 Internationl Symposium on the Mathematical Foundations of Computer Science (MFCS 1981)_.
* [vzG83] Joachim von zur Gathen. Factoring Sparse Multivariate Polynomials. In Proceedings of the 24 Annual IEEE Symposium on Foundations of Computer Science (FOCS 1983), pages 172–179, 1983.
* [vzGG13] Joachim von zur Gathen and Jürgen Gerhard. Modern Computer Algebra. Cambridge University Press, 3 edition, 2013.
* [Zip79] Richard Zippel. Probabilistic algorithms for sparse polynomials. In Symbolic and Algebraic Computation, EUROSAM ’79, An International Symposium on Symbolic and Algebraic Computation, volume 72 of Lecture Notes in Computer Science, pages 216–226. Springer, 1979.
## Appendix A Deferred proofs
#### Circuit/formula bit-complexity
###### Proof of Lemma 2.2.
We will prove an equivalent statement: the numerator and denominator of
$f(\mathbf{a})$ have absolute value at most $2^{s\cdot b}$. We prove this by
induction on the size of the formula. We will use $N(\cdot)$ and $D(\cdot)$ to
denote the numerator and denominator of some rational number.
Base case: when $\operatorname{size}(C)=1=s$, there is a single leaf node in
the formula that reads and outputs a single rational number of bit-complexity
$b$, thus $\operatorname{bit}(f(\mathbf{a}))=b\leq s\cdot b$. The induction
hypothesis is that for all formulas $C$ with $\operatorname{size}(C)\leq S$
(for some $S\geq 1$),
$\operatorname{bit}(f(\mathbf{a}))\leq\operatorname{bit}(C)\cdot b$. For the
induction step, we look at formulas $C$ with $\operatorname{size}(C)=S+1$, and
we consider two cases:
1. 1.
When the top gate is a sum gate:
$f=\sum_{i=1}^{k}{\alpha_{i}g_{i}(\mathbf{x})}$, with $C_{i}$ being the
formula computing $g_{i}$ and $\alpha_{i}$s being scalars from $\mathbb{Q}$.
$\displaystyle f(\mathbf{a})$
$\displaystyle=\sum_{i=1}^{k}{\alpha_{i}g_{i}(\mathbf{a})}$
$\displaystyle\left|D(f(\mathbf{a}))\right|$
$\displaystyle=\left|\prod_{i=1}^{k}{D(\alpha_{i})D(g_{i}(\mathbf{a}))}\right|$
$\displaystyle\leq\prod_{i=1}^{k}{2^{\operatorname{bit}(\alpha_{i})}2^{\operatorname{bit}(g_{i}(\mathbf{a}))}}$
$\displaystyle\leq\prod_{i=1}^{k}{2^{\operatorname{bit}(\alpha_{i})}2^{\operatorname{bit}(C_{i})\cdot
b}}$ (induction hypothesis)
$\displaystyle=2^{\sum_{i=1}^{k}{\left(\operatorname{bit}(\alpha_{i})+\operatorname{bit}(C_{i})\cdot
b\right)}}\leq 2^{\operatorname{bit}(C)\cdot b}$
$\displaystyle\left|N(f(\mathbf{a}))\right|$
$\displaystyle\leq\sum_{i=1}^{k}{\left|N(\alpha_{i})\right|\left|N(g_{i}(\mathbf{a}))\right|\prod_{j\neq
i}{\left|D(\alpha_{j})\right|\left|D(g_{j}(\mathbf{a}))\right|}}$
$\displaystyle\leq\sum_{i=1}^{k}{2^{\operatorname{bit}(\alpha_{i})+\operatorname{bit}(g_{i}(\mathbf{a}))}2^{\sum_{j\neq
i}{\operatorname{bit}(\alpha_{j})+\operatorname{bit}(g_{j}(\mathbf{a}))}}}$
$\displaystyle\leq\sum_{i=1}^{k}{2^{\operatorname{bit}(\alpha_{i})+\operatorname{bit}(C_{i})\cdot
b}2^{\sum_{j\neq
i}{\operatorname{bit}(\alpha_{j})+\operatorname{bit}(C_{j})\cdot b}}}$
(induction hypothesis)
$\displaystyle\leq\sum_{i=1}^{k}{2^{\sum_{j=1}^{k}{\operatorname{bit}(\alpha_{j})+\operatorname{bit}(C_{j})\cdot
b}}}\leq
2^{k+\sum_{j=1}^{k}{\operatorname{bit}(\alpha_{j})+\operatorname{bit}(C_{j})\cdot
b}}\leq 2^{\operatorname{bit}(C)\cdot b}$
Thus,
$\operatorname{bit}(f(\mathbf{a}))=\max\\{\operatorname{bit}(N(f(\mathbf{a}))),\operatorname{bit}(D(f(\mathbf{a})))\\}\leq\operatorname{bit}(C)\cdot
b$.
2. 2.
When the top gate is a product gate:
$f=\prod_{i=1}^{k}{\alpha_{i}g_{i}(\mathbf{x})}$. The argument used for the
denominator in the sum-gate case applies here to both the numerator and
the denominator. The required bound follows.
∎
#### Relevant subclasses of algebraic circuits
###### Proof of 2.5.
We prove the size upper bounds here; the bit-complexity upper bounds proceed
along exactly the same lines. The size upper bound for the sum is immediate
and hence we only need to focus on the product. Let the expression for each
$P_{r}$ be
$\displaystyle P_{r}$ $\displaystyle=\sum_{i}P_{r,i}\cdot
g_{r,i}^{a_{r,i}}$ $\displaystyle\implies\prod P_{r}$
$\displaystyle=\sum_{r_{1},\ldots,r_{t}}\left(P_{1,r_{1}}\cdots
P_{t,r_{t}}\right)\cdot\left(g_{1,r_{1}}^{a_{1,r_{1}}}\cdots
g_{t,r_{t}}^{a_{t,r_{t}}}\right)$
where each $P_{i,j}$ is computed by $(\Sigma\Pi)^{(k)}$ formulas of size at
most $s$, and each $g_{i,j}$ is a polynomial of degree at most $d$.
Each $\left(P_{1,r_{1}}\cdots P_{t,r_{t}}\right)$ is computed by a
$(\Sigma\Pi)^{(k)}$ formula of size at most $s^{t}$. By Lemma 2.9,
$g_{1,r_{1}}^{a_{1,r_{1}}}\cdots g_{t,r_{t}}^{a_{t,r_{t}}}$ can be expressed
as a sum $\sum_{\ell=1}^{s^{t}}f_{\ell}^{D}$ where $D=\sum_{j}a_{j,r_{j}}$ and
each $f_{\ell}$ is a polynomial of degree at most $d$. Thus,
$\prod_{r}P_{r}$ is computable by a
$\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formula of size at most $s^{O(t)}$. ∎
#### Polynomial identity testing
###### Proof of Lemma 2.12.
Since $f$ is a non-zero polynomial of degree at most $d$, there is a monomial
$\mathbf{x}^{\mathbf{e}}$ of degree at most $d$ with a non-zero coefficient in
$f$. Let $S$ be the support of the monomial $\mathbf{x}^{\mathbf{e}}$, i.e.,
$S=\\{x_{i}:e_{i}\neq 0\\}$. Clearly, $|S|\leq d$. We now consider the
polynomial $\tilde{f}$ obtained from $f$ by setting all the variables $x_{j}$
not in the set $S$ to zero. Since $f$ has a non-zero monomial with support
contained in the set $S$, $\tilde{f}$ continues to be a non-zero polynomial of
degree at most $d$. Moreover, it is a $d$ variate polynomial since it only
depends on the variables in $S$. From Lemma 2.10, we get that for any subset
$T_{d}$ of $\mathbb{Q}$ of cardinality at least $d+1$, there exists a vector
$\mathbf{b}\in{T_{d}}^{d}$ such that $\tilde{f}(\mathbf{b})\neq 0$. Let
$\mathbf{a}\in\mathbb{Q}^{n}$ to be such that for every $i\in S$,
$a_{i}=b_{i}$ and for every $i\notin S$, $a_{i}=0$. Then,
$f(\mathbf{a})=\tilde{f}(\mathbf{b})\neq 0$. Moreover, $\mathbf{a}$ is in
$\mathcal{H}(d,n)$. ∎
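The hitting set $\mathcal{H}(d,n)$ is simple enough to enumerate directly; the Python sketch below does so, choosing a support of size $d$ and values from $T_{d}=\{0,1,\ldots,d\}$ (supports of size less than $d$ are covered by letting some chosen coordinates equal zero).

```python
# Direct enumeration of the hitting set H(d, n) of Definition 2.11.
from itertools import combinations, product

def hitting_set(d, n):
    for support in combinations(range(n), d):
        for values in product(range(d + 1), repeat=d):   # values in T_d
            point = [0] * n
            for idx, val in zip(support, values):
                point[idx] = val
            yield tuple(point)

# |H(d, n)| <= C(n, d) * (d + 1)^d = n^{O(d)} for constant d.
print(len(set(hitting_set(2, 4))))
```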
###### Proof of Lemma 2.15.
Let $T_{d}$ be the set $\\{0,1,2,3,\ldots,d\\}$ and let $\mathcal{H}(d,n)$ be
the set of points defined in Definition 2.11, i.e.,
$\mathcal{H}(d,n)=\left\\{(a_{1},\ldots,a_{n})\ :\ S\in\binom{[n]}{\leq
d}\;,\;a_{i}\in T_{d}\text{ for all }i\in S\text{ and }a_{j}=0\text{ for all
}j\notin S\right\\}.$
From Lemma 2.12, we know that every non-zero polynomial $f$ of degree at most
$d$ must evaluate to a non-zero value on some point of $\mathcal{H}(d,n)$. In other words,
two distinct degree $d$ polynomials $f$ and $g$ cannot agree on every point of
$\mathcal{H}(d,n)$. An immediate consequence of this is that if we are given
the evaluations of an unknown polynomial $f$ on all points of
$\mathcal{H}(d,n)$, and we view each of these evaluations as a linear
constraint on the unknown coefficients of $f$, then this linear system has a
unique solution.
Based on this observation, a natural algorithm for computing the coefficient
vector of $C$ is the following: we evaluate the given formula on every input
in $\mathcal{H}(d,n)$, set up the linear system on the coefficients of $C$
obtained from these evaluations, and use any standard linear system solver
over $\mathbb{Q}$ to solve this system.
Note that the size of this linear system is at most $n^{O(d)}$, and from 2.2,
the bit-complexity of the constants in this linear system is at most
$\operatorname{poly}(s,b,d)$. Thus, this linear system can be solved in time
$\operatorname{poly}(s,b,d,n^{d})\leq\operatorname{poly}(s,b,n^{d})$, as
claimed. ∎
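A minimal sympy sketch of this interpolation step is given below; `evaluate` is an assumed black-box callable standing in for evaluating the formula $C$, and the (overdetermined) system is solved exactly over $\mathbb{Q}$ via the normal equations, the solution being unique by the discussion above.

```python
# Sketch of the coefficient-interpolation step in the proof of Lemma 2.15;
# `evaluate` is an assumed black box for evaluating the given formula.
import sympy as sp
from itertools import combinations_with_replacement

def interpolate(evaluate, n, d, points):
    xs = sp.symbols(f"x1:{n + 1}")
    # All monomials of total degree at most d (n^{O(d)} of them).
    monos = [sp.Mul(*c) for k in range(d + 1)
             for c in combinations_with_replacement(xs, k)]
    A = sp.Matrix([[m.subs(dict(zip(xs, p))) for m in monos] for p in points])
    b = sp.Matrix([evaluate(p) for p in points])
    coeffs = (A.T * A).LUsolve(A.T * b)   # exact; unique by Lemma 2.12
    return sp.expand(sum(c * m for c, m in zip(coeffs, monos)))
```

Calling `interpolate` with the points produced by the `hitting_set` enumeration above recovers the coefficient vector exactly.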
#### Deterministic divisibility testing and PIT
###### Proof of Corollary 2.19.
The proof follows almost immediately from Theorem 2.18, which gives that
$g$ divides $f$ if and only if
$R(\mathbf{x}):=f(\mathbf{x})-g(\mathbf{x})Q(\mathbf{x})\equiv 0$, where $Q$
is the pseudo-quotient of $f$ and $g$. It suffices to show that
$R(\mathbf{x})$ has
$\mathcal{C}=\Sigma\left((\Sigma\Pi)^{(k)}\cdot(\operatorname{Deg}_{d})^{\ast}\right)$
formulas of size $\operatorname{poly}(s,d)$, and since
$f(\mathbf{x})\in(\Sigma\Pi)^{(k)}$, it suffices to bound the size of
$\mathcal{C}$-formulas computing $g(\mathbf{x})\cdot Q(\mathbf{x})$.
By Lemma 2.17, the pseudo-quotient $Q(\mathbf{x})$ is computable by
$\mathcal{C}$-formulas of size $\operatorname{poly}(s,d)$. Let one such
computation be of the form
$\displaystyle Q(\mathbf{x})$ $\displaystyle=\sum_{i}f_{i}\cdot
g_{i}^{e_{i}}\quad\text{where each $f_{i}\in(\Sigma\Pi)^{(k)}$ and
$\deg(g_{i})\leq d$ and each $e_{i}\leq s$}$ $\displaystyle\implies
g(\mathbf{x})Q(\mathbf{x})$ $\displaystyle=\sum_{i}f_{i}\cdot(g\cdot
g_{i}^{e_{i}})$
From Lemma 2.9, note that any term of the form $(g\cdot h^{e})$ can be
expressed as
$g\cdot
h^{e}=\sum_{i=1}^{\operatorname{poly}(e)}\beta_{i}\cdot(g+\alpha_{i}h)^{e+1}$
for field constants $\alpha_{i}$’s and $\beta_{i}$’s. Thus, feeding this in
the above expression for $g\cdot Q$, we have
$g(\mathbf{x})\cdot
Q(\mathbf{x})=\sum_{i}\sum_{j}f_{i}\cdot\tilde{g}_{ij}^{e_{ij}}$
for polynomials $\tilde{g}_{ij}$ of degree at most $d$; thus $g\cdot Q$ is also
computable by a $\mathcal{C}$-formula of size at most $\operatorname{poly}(s,d)$. Therefore,
$R(\mathbf{x})=f(\mathbf{x})-g(\mathbf{x})Q(\mathbf{x})$ is also computable by
$\mathcal{C}$-formulas of size $s^{\prime}=\operatorname{poly}(s,d)$. Thus, we
can check if $g$ divides $f$ by checking if $R(\mathbf{x})\equiv 0$ (by
Theorem 2.18) which can be done in $T(k,d,s^{\prime})$ time as claimed. ∎
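On explicitly given polynomials, the identity $R=f-g\cdot Q$ behind this corollary can be checked directly. The toy sympy sketch below uses a true quotient where the corollary uses the pseudo-quotient of Theorem 2.18, and sympy's exact arithmetic plays the role of the PIT oracle of Theorem 2.13.

```python
# Divisibility via a single identity test R = f - g*Q, on a toy instance.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = sp.expand((x1 + x2)**2 * (x1 - 3))
g = x1 + x2

Q, _ = sp.div(f, g, x1, x2)     # a true quotient; the paper instead uses a
R = sp.expand(f - g * Q)        # pseudo-quotient with the same property
print(R == 0)                   # True iff g divides f
```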
#### Hensel lifting
###### Proof sketch of Lemma 2.26.
As indicated earlier, the lemma is almost an immediate consequence of Lemma
3.6 in [KSS15]. The precise statement there gives a circuit $\tilde{C_{k}}$ of
size and bit-complexity $\operatorname{poly}(s,D,2^{k})$ for $g_{k},h_{k}$. We
notice that without loss of generality, the degree of $g_{k},h_{k}$ and hence
of $\tilde{C_{k}}$ can be assumed to be at most $(D+2^{k})$ since the $y$
degree is at most $D$ and the $x$ degree is at most $2^{k}$. This incurs at
most a polynomial blow up in the circuit size.
Now, to go from circuits for $g_{k},h_{k}$ to formulas computing these
polynomials, we just invoke the classic depth reduction result of Valiant,
Skyum, Berkowitz and Rackoff [VSBR83], which states that given an $n$-variate
degree-$\Delta$ polynomial $f$ with an arithmetic circuit $\Phi$ of size $s$,
there is an arithmetic circuit $\Phi^{\prime}$ that computes $f$, has size
$\operatorname{poly}(s,n,\Delta)$ and depth $O(\log\Delta)$.
Thus we have a formula of size (and bit-complexity) at most
$\operatorname{poly}(s,D,2^{k})^{\log(D+2^{k})}\leq(sDk)^{k\log D}$. Note that
a better bound of $d$ on the total degree of $g_{k}$ implies that the size and
bit-complexity of the formula for $g_{k}$ is at most $(sDk)^{O(\log d)}$. ∎
|
# UFed-GAN: A Secure Federated Learning Framework with Constrained Computation
and Unlabeled Data
Achintha Wijesinghe, Songyang Zhang, Siyu Qi, and Zhi Ding A. Wijesinghe, S.
Qi, and Z. Ding are with the Department of Electrical and Computer
Engineering, University of California, Davis, CA, 95616. (E-mail:
<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, and [email protected]). S. Zhang
was with the University of California at Davis, Davis, CA, 95616 and now is
with the Department of Electrical and Computer Engineering, University of
Louisiana at Lafayette, Lafayette, LA 70504 (E-mail:
[email protected]).
###### Abstract
To support low-latency multimedia data classification and data privacy in a
cloud-based setting, federated learning (FL) has emerged as an important
learning paradigm. Motivated by the practical cases in many wireless
communications applications that involve limited computational power and only
unlabeled data, this work investigates the FL paradigm in a
resource-constrained and label-missing environment. Specifically, we propose a
novel framework of UFed-GAN:
Unsupervised Federated Generative Adversarial Network, which can capture user-
side data distribution without local classification training. We also analyze
the convergence and privacy of the proposed UFed-GAN. Our experimental results
demonstrate the strong potential of UFed-GAN in addressing limited
computational resources and unlabeled data while preserving privacy.
###### Index Terms:
Federated learning, unlabeled data, data privacy, generative adversarial
networks.
## I Introduction
The burgeoning rise of deep learning has produced remarkable achievements,
typically through centralized training on voluminous amounts of data. However,
in many collaborative learning scenarios over wireless connections,
decentralized learning is vital to handle the heterogeneous
data distribution among nodes (users). Importantly, privacy concerns and
resource limitations also prevent direct data sharing. To ensure data privacy
and communication efficiency, federated learning (FL) [1] has emerged as an
important framework to disengage data collection and model training via local
computation and global model aggregation.
Despite reported successes, existing FL frameworks have certain limitations.
One major obstacle of FL in practice is the heterogeneity of data distribution
among participating FL users. It is known [2] that the accuracy of classic FL
frameworks such as FedAvg [1] could drop by 55% for some datasets with non-IID,
i.e., not independent and identically distributed, data distributions.
To combat performance loss against non-IID datasets, the more general approach
of FedProx [3] may depend on certain unrealistic dissimilarity assumptions of
local functions [4]. Alternatively, generative adversarial network (GAN) [5]
provides another approach to address data heterogeneity. In GAN-based FL, GAN
models are used as a proxy to share user updates without training the global
model on the user side. For example, in [6], a global classifier is trained
using a user-shared generator of the user-end-trained conditional GANs
(cGANs). In another example [7], the authors propose sharing the full user-end
GAN for generating a synthetic dataset. However, the high communication
cost and the potential privacy leakage hinder the performance gain of these
known GAN-based FL frameworks as discussed in [8]. Moreover, the training of
the entire GAN and other learning models sometimes can be impractical at the
user end, especially for nodes with limited computation resources, such as
computation-constrained sensors and devices [9]. How to develop a more
efficient GAN-sharing strategy to preserve privacy and handle limited
computation remains an open question in generative FL.
In addition to data heterogeneity and limited computation resources, most of
the existing works focus on supervised FL, where the performance depends
heavily on the availability of labeled training data. In practical
applications such as user clustering and video segmentation [10], the
computational and privacy limitations may prevent user-side data labeling.
Therefore, learning from unlabeled data is equally important for FL to reach
its full potential. Presently, only limited research works have specifically
addressed FL with unlabeled data, primarily due to inherent challenges. A
typical category of FL dealing with unlabeled data [11, 12] focuses on
clustering tasks, which may limit its generalization to other deep learning
tasks. Another line of FL focuses on unsupervised representation learning
[13], where knowledge distillation and contrast learning are applied to
address heterogeneous user data distributions. Other FL works on unlabeled
data [14, 15, 16] leverage the efficiency of latent space and may lack a
general description of the original data. As aforementioned, GAN-based
approaches can be intuitive solutions to capture the data distributions and
assist further applications, even without annotated labels.
In this work, we develop a novel FL framework, the Unsupervised Federated
Generative Adversarial Network (UFed-GAN), for resource-limited distributed
users without labeled data. The novelty is an innovative GAN-based FL and
data-sharing strategy to significantly reduce the computational cost at the
user end and to preserve privacy. Note that, instead of focusing on one
specific unsupervised learning task, we provide a general FL scheme to learn
the data distributions without labels. Our framework can be easily adapted to
handle specific learning tasks, including unsupervised representation
learning, user clustering, and semi-supervised classification.
Our contributions can be summarized as follows:
* •
We propose a novel UFed-GAN as an FL framework to learn and characterize the
non-IID user data distributions from unlabeled user data. Our UFed-GAN
captures the underlying user data distributions without explicitly training a
local GAN model for each user, thereby significantly lowering the
computational cost on the user side. To our best knowledge, this is the first
work to address such constrained computation in GAN-based FL.
* •
We analyze the convergence of UFed-GAN and prove that privacy leakage can be
prevented by our UFed-GAN, in comparison to traditional GAN-based FL.
* •
Our experimental results in several benchmark datasets demonstrate the
performance of UFed-GAN in a semi-supervised classification setup.
In terms of organization, we first introduce the architecture of UFed-GAN and
its training strategy in Section II. Section III then presents the study of
model convergence and the privacy analysis. We present the experimental
results of UFed-GAN on several well-known datasets in Section IV and provide
concluding remarks in Section V.
## II Method and Architecture
### II-A Problem Setup
Figure 1: UFed-GAN in a distributed learning setup in an untrustworthy
communication scenario, where users are assumed with less computational power.
An attacker may eavesdrop to understand the user data.
As a typical example of a resource-limited FL setup in Fig. 1, a server aims
to learn from user nodes, each with limited computational resources whereas
attackers may attempt to eavesdrop on the network links. Users may not be able
to annotate their raw data. Moreover, since the target tasks may be different
among users and the data may also be skewed, a non-IID data distribution shall
be considered in this scenario. Different from local users, the server has
sufficient computational resources to obtain a model that learns a global data
distribution from all the local data, without initial training data. Such a
setup is applicable in many distributed learning scenarios. For example, a
distributed camera/sensor system placed for object detection can benefit from
the collaboration of different cameras for better feature extraction, where
each digital camera or sensor may have limited computation power.
To demonstrate the privacy protection offered by UFed-GAN, we consider an
attacker that has access to the vulnerable communication links between
distributed users and the central server. Such attacks try to gain users’ data
features based on the information shared through the channels.
Note that, we aim to develop a novel data/model-sharing strategy for FL. The
strategy could handle unlabeled data and capture user data distributions in
the scenario with limited local computation resources. The proposed framework
should offer flexible integration with various unsupervised and semi-
supervised learning tasks, such as latent representation learning, user
clustering, and semi-supervised classification [13].
### II-B UFed-GAN
Figure 2: Communication rounds till the convergence of UFed-GAN. We use the
inception score (IS) as the measure of convergence.
We now explain the framework and training process of UFed-GAN, which allows
private and secure learning from unlabeled data in a distributed learning
setup. Our proposed UFed-GAN aims to train a GAN model on the server to
capture the underlying user data distributions without implementing complex
GAN training on the user side.
First, for each user $u$, we initiate a GAN model on the server, including a
generator ($G_{u}$) and a discriminator ($D_{u}$). Due to label
inaccessibility, we select deep convolutional GANs (DCGANs) [17] as the
backbone. The details of choices for GAN architecture will be elaborated in
Section II-D.
The GAN training consists of three steps: two steps of $D$ training and one
step of $G$ training. We split the dual-step $D$ training
into the server side and the user side. On one side, the server initiates a
discriminator $D_{u}$ and then shares the initiated model with the
corresponding user $u$. Subsequently, on the other end, the user performs a
forward pass (FP) on $D_{u}$ using a single batch of real local data $I_{u}$.
The gradients and loss are calculated and then shared with the server.
Compared with training a full GAN on the user side, this step can
significantly reduce the computational cost and be easily deployed in
computation-constrained devices. Upon receiving the updated information of
$D_{u}$ from the user $u$, the server completes the training of $D_{u}$. It
generates a noise vector $\textbf{z}$ to pass through $G_{u}$. Finally, synthetic data
$G_{u}(\textbf{z})$ is generated from the generator. $G_{u}(\textbf{z})$ is
then sent to $D_{u}$ to calculate the corresponding gradients and loss. These
received gradient updates are combined with the gradient updates of the
previous step and then back-propagated through $D_{u}$ to update the
parameters.
Similar to conventional GAN training, starting with a noise vector $\textbf{z}^{\prime}$, we
train $G_{u}$ with forward- and backward-propagation to update its parameters.
In each communication round, the above process repeats until GAN convergence
to a favorable point. We illustrate this training process further in Fig. 2.
For model convergence monitoring and the stopping criterion, we use inception
score (IS) [18], which measures the characteristics of the generated images.
After convergence, we create a synthetic dataset using the trained $G_{u}$.
With the generated synthetic dataset, we can design corresponding unsupervised
or semi-supervised algorithms to implement specific learning tasks. For
example, in a semi-supervised classification task, we could apply the MoCo
[19], which uses dictionary lookups in contrastive learning to obtain the
global classifier. The pseudocode of the proposed training strategy is
presented in Algorithm 1.
Algorithm 1 UFed-GAN: Training Algorithm
for each user $u$ do
Server initialization of $G_{u}$ and $D_{u}$
end for
for each communication round $r$, until the GAN convergence do
for each step $t$ do
Share $D_{u}$ with user $u$.
Perform FP in $D_{u}$ with user data $\mathcal{I}_{u}$ and share the gradient
updates $\nabla W_{u}$.
Perform FP in $D_{u}$ with fake data $G(z_{t})$ where $z_{t}$ is a random
noise vector and get the gradient updates $\nabla W_{z}$.
Update $D_{u}$ with ($\nabla W_{u}+\nabla W_{z}$).
Train $G_{u}$ with trained $D_{u}$ and random noise vectors.
end for
Generate an unlabeled dataset using each $G_{u}$.
end for
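To make the split concrete, the following PyTorch sketch implements one communication round of Algorithm 1, separating the user-side forward pass on real data from the server-side completion of the $D_{u}$ update and the $G_{u}$ step. All module, optimizer, and function names are our own illustrative choices, not from the paper's code.

```python
# One communication round of Algorithm 1 in PyTorch; module, optimizer and
# function names are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def user_step(D_u, real_batch):
    """User side: a single forward/backward pass of D_u on real local data."""
    D_u.zero_grad()
    out = D_u(real_batch).view(-1)
    loss_real = F.binary_cross_entropy(out, torch.ones_like(out))
    loss_real.backward()
    # Only gradients and the loss value are shared with the server.
    return [p.grad.detach().clone() for p in D_u.parameters()], loss_real.item()

def server_step(G_u, D_u, opt_D, opt_G, user_grads, batch_size, z_dim=100):
    # Complete D_u's update: fake-data gradients plus the user's real-data ones.
    z = torch.randn(batch_size, z_dim, 1, 1)
    fake = G_u(z).detach()
    out = D_u(fake).view(-1)
    loss_fake = F.binary_cross_entropy(out, torch.zeros_like(out))
    D_u.zero_grad()
    loss_fake.backward()
    with torch.no_grad():
        for p, g_real in zip(D_u.parameters(), user_grads):
            p.grad += g_real              # combine as in Algorithm 1
    opt_D.step()
    # Train G_u against the updated D_u, as in conventional GAN training.
    z2 = torch.randn(batch_size, z_dim, 1, 1)
    out = D_u(G_u(z2)).view(-1)
    loss_G = F.binary_cross_entropy(out, torch.ones_like(out))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```

Under this split, the user's per-round cost is one discriminator pass on a single batch; all generator training remains on the server, matching the computation split described above.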
### II-C Attacker Model
We now introduce the attacker model to quantify the privacy leakage of UFed-
GAN. It is highly challenging, if not impossible, to obtain access to the
global generator $G$ given a secure server or to guess the exact architecture
of $G$, regardless of the computational prowess of the attacker. Therefore, as
suggested in [8], we focus on a reconstruction attack [20] in this work, where
an attacker attempts to reconstruct the training data. Let $\theta$ be the
parameters released to the communication channel by the user. We denote the
releasing mechanism by $\mathcal{M}$ and the information that an attacker
$\mathcal{A}$ could obtain by $\mathcal{M}(\theta)$. According to the UFed-GAN
framework, $\mathcal{M}(\theta)$ is the discriminator gradients and the loss
values. Let $\mathcal{I}$ represent the reconstructed data, we have
$\mathcal{A}:\mathcal{M}(\theta)\mapsto\mathcal{I}.$ (1)
Suppose that the generator $G_{\mathcal{A}}$ of attacker $\mathcal{A}$ has the
same architecture as $G$ but with initial weights $W_{\mathcal{A}}$ different
from those of $G$, represented by $W_{G}$. In parallel to the server training,
we train $G_{\mathcal{A}}$ at the attacker’s end.
### II-D GAN Architecture
Due to label inaccessibility and resource constraints, we adopt the
unsupervised DCGANs [17] over cGANs [8], which avoids the extra computation
needed for pseudo-labeling. The generator architecture follows five transposed
convolutional layers. The first layer takes an input with 100 channels and
maps it to 1024 channels. Every subsequent layer reduces the number of
channels by half. Every layer uses ReLU activation except for the Tanh
activation for the final layer. All the layers in both the generator and the
discriminator use $4\times 4$ kernels and batch normalization for each layer
before the final layer. For the discriminator model, we use four convolutional
layers. The first layer accepts similar channel sizes of the data samples and
maps to 256 channels. Every subsequent layer doubles the number of channels
except the final layer, which outputs a single channel. The final layer uses a
Sigmoid activation whereas all other layers use LeakyReLU activation.
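A PyTorch rendering of this DCGAN backbone is sketched below. The text fixes the kernel size, channel widths, activations, and batch-norm placement; the strides and paddings (standard DCGAN choices for $64\times 64$ outputs) and the handling of the discriminator's final spatial map are assumptions of this sketch.

```python
# Sketch of the DCGAN backbone of Section II-D. Kernel sizes, channel widths,
# activations and batch-norm placement follow the text; strides/paddings are
# standard DCGAN choices (our assumption) mapping a 100-dim latent to 64x64.
import torch.nn as nn

def make_generator(z_dim=100, img_channels=3):
    chans = [z_dim, 1024, 512, 256, 128, img_channels]
    layers = []
    for i in range(5):                       # five transposed-conv layers
        stride, pad = (1, 0) if i == 0 else (2, 1)
        layers.append(nn.ConvTranspose2d(chans[i], chans[i + 1], kernel_size=4,
                                         stride=stride, padding=pad, bias=False))
        if i < 4:                            # BN + ReLU before the final layer
            layers += [nn.BatchNorm2d(chans[i + 1]), nn.ReLU(inplace=True)]
        else:
            layers.append(nn.Tanh())         # Tanh on the final layer
    return nn.Sequential(*layers)

def make_discriminator(img_channels=3):
    chans = [img_channels, 256, 512, 1024, 1]
    layers = []
    for i in range(4):                       # four conv layers
        layers.append(nn.Conv2d(chans[i], chans[i + 1], kernel_size=4,
                                stride=2, padding=1, bias=False))
        if i < 3:
            layers += [nn.BatchNorm2d(chans[i + 1]),
                       nn.LeakyReLU(0.2, inplace=True)]
        else:
            layers.append(nn.Sigmoid())      # Sigmoid on the final layer
    return nn.Sequential(*layers)
# For 64x64 inputs the discriminator emits a 4x4 score map, which the training
# code flattens; collapsing it to a single score is an assumption here.
```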
## III Convergence and Privacy Analysis of UFed-GAN
In this section, we present the convergence and privacy analysis of the
proposed UFed-GAN. We introduce the major proof steps and refer those
interested to our corresponding references for more details.
### III-A Convergence of the discriminator
Let $\mathcal{G}$ and $\mathcal{D}$ be the generator and the discriminator of
a GAN, respectively. Assume $\mathcal{G}$ is capable of capturing a
distribution $p_{GS}$ on the server and we are interested in learning a user
distribution $p_{data}(x)$.
Proposition 1. Any $\mathcal{D}$ initiated on a server and trained in
accordance with Algorithm 1 using $\mathcal{G}$ and $p_{data}(x)$ converges to
the unique $\mathcal{D}^{*}$ for the given $\mathcal{G}$ as presented in [5],
i.e.,
$\mathcal{D}^{*}=\frac{p_{data}(x)}{p_{data}(x)+p_{GS}(x)}$ (2)
Since UFed-GAN merely splits the GAN training, the proof of Proposition 1
shall directly follow that in [5]. This proposition serves as a guarantee of
the convergence of the server-side discriminator. It shows that the
discriminator is still capable of capturing the user-side data distribution.
This helps the generator on the server side to generate user-like data, as
illustrated in the following proposition.
### III-B Convergence of the generator
Proposition 2. Any $\mathcal{G}$ initiated on a server and trained in
accordance with Algorithm 1 using $\mathcal{D}$ and $p_{data}(x)$ converges to
the unique $\mathcal{G}^{*}$ which captures $p_{data}(x)$, i.e.,
$p_{GS}=p_{data}(x)$.
Since UFed-GAN does not alter the training of $\mathcal{G}$, we can imitate
and adopt the proof steps in [5]. Proposition 2 suggests that the server-side
generator converges to the same generator that could have been trained
locally. Therefore, on the server, we are able to regenerate synthetic samples
which resemble the user data in terms of data distribution.
### III-C Divergence of any generator other than $\mathcal{G}$
Proposition 3. Any generator $G$ other than $\mathcal{G}$, trained in
accordance with Algorithm 1 using $\mathcal{D}$, diverges from the unique
$\mathcal{G}^{*}$, i.e., its output distribution fails to match $p_{data}(x)$.
To prove Proposition 3, we adopt a proof process similar to that in [8].
By Proposition 1 and Proposition 2, the pair of $\mathcal{G}$ and
$\mathcal{D}$ is unique. Therefore, at each training step we must have
$\mathcal{G}^{\prime}=G^{\prime}$. Hence, any $G$ different from $\mathcal{G}$
at any step fails to capture $p_{data}(x)$.
## IV Results and Discussion
In this section, we present the experimental results on both utility and
privacy.
### IV-A Evaluation of the Utility
In a common semi-supervised setting as [13], let $N$ be the number of users,
$\beta$ be the concentration parameter of the Dirichlet distribution
($Dir_{N}(\beta)$), and $s_{ij}$ be a sample taken from $Dir_{N}(\beta)$. We
assign $s_{ij}$ in proportion to the $i$-th class size of the user $j$. We
pick $N=10$ and $\beta=0.5$ in accordance with [13]. All model comparisons are
based on the linear evaluation protocol, which trains a linear classifier on
top of representations or regenerated fake data [21].
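A numpy sketch of this Dirichlet-based non-IID split might look as follows; the function name and seed handling are our own illustrative choices.

```python
# Dirichlet-based non-IID partition: for each class i, a draw from Dir_N(beta)
# fixes the proportion of that class's samples assigned to each of N users.
import numpy as np

def dirichlet_partition(labels, num_users=10, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    user_indices = [[] for _ in range(num_users)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        props = rng.dirichlet(beta * np.ones(num_users))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for u, part in enumerate(np.split(idx, cuts)):
            user_indices[u].extend(part.tolist())
    return user_indices

labels = np.random.randint(0, 10, size=50_000)   # stand-in for CIFAR-10 labels
parts = dirichlet_partition(labels)
print([len(p) for p in parts])                   # skewed, non-IID user shards
```

With $\beta=0.5$, the per-user class proportions are heavily skewed, reproducing the non-IID setting used in the comparison below.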
TABLE I: Classification accuracy comparison of different FL approaches over three datasets. Part of the results in this table are reported from [13].
Method | CIFAR 10 | SVHN | FashionMNIST
---|---|---|---
FedSimCLR | 52.88 | 76.50 | 79.44
\+ FedX | 57.95 | 77.70 | 82.47
FedMoCo | 57.82 | 70.99 | 83.58
\+ FedX | 59.43 | 73.92 | 84.65
FedBYOL | 53.14 | 67.32 | 82.37
\+ FedX | 57.79 | 69.05 | 84.30
FedProtoCL | 52.12 | 50.19 | 83.57
\+ FedX | 56.76 | 69.75 | 83.34
FedU | 50.79 | 66.22 | 82.03
\+ FedX | 57.26 | 68.39 | 84.12
Full GAN | 68.77 | 80.17 | 86.25
UFed-GAN | 67.0 | 80.109 | 86.33
We consider three well-known datasets: CIFAR10 [22], SVHN [23] and
FashionMNIST [24]. Comparative results against five other FL algorithms are
presented in Table I. These algorithms are: FedSimCLR [25], FedMoco [19],
FedBYOL [26], FedProtoCL [27] and FedU [28]. For each method, we present their
accuracy on the respective dataset, together with their accuracy when further
applying FedX [13]. We also compare the results with “full GAN” sharing.
From the results, UFed-GAN outperforms all other FL methods in the benchmark
group, with an improvement of around 8% in CIFAR10, 3% in SVHN, and 2% in
FashionMNIST. The accuracy gain arises from the power of GANs to understand
the underlying data distribution of users and to generate synthetic data by
preserving essential features. Another observation is that UFed-GAN is
dataset-agnostic in terms of performance, delivering the best outputs in all
tested datasets. This observation promotes the generalizability of the
proposed method. In fact, UFed-GAN achieves performance similar to full GAN
sharing. However, full GAN sharing is prone to severe privacy leakage as
shown in [8] and additionally requires heavy computation at each user node,
which is in conflict with the FL objective of privacy preservation and
conserving computation resources for users.
### IV-B Evaluation of Privacy
We now evaluate the privacy leakage of the proposed UFed-GAN with respect to
the attacker $\mathcal{A}$ as described in Section II-C. Suppose that
$\mathcal{A}$ has access to each communication round. We initialize the
attacker's generator $G_{\mathcal{A}}$ with random weights; $\mathcal{A}$
eavesdrops on the user uplink to access $\mathcal{M}(\theta)$ and trains
$G_{\mathcal{A}}$ in the same manner as the server-side training. The design
of $G_{\mathcal{A}}$ is constrained by two parameters: the exact generator
architecture and the initial weights $W_{G}$ of the generator at the server.
It is therefore practically unachievable for an attacker to select both the
accurate generator architecture and the initial weights. However, in our experiments,
we assume $\mathcal{A}$ knows the exact architecture of $G$, but with
different random initial weights $W_{\mathcal{A}}\neq W_{G}$. We compare the
generated images of the $G$ and $G_{\mathcal{A}}$ trained on the FashionMNIST
dataset in Fig. 3. It can be clearly seen that the generator $G$ on the cloud
server captures the user’s underlying data distribution, whereas
$G_{\mathcal{A}}$ converges to a trivial point. Moreover, almost no useful
visualization information is gained by the attacker as shown in Fig. 3. The
main reason is the uniqueness of the generator and the discriminator pair as
presented in Proposition 1.
(a)
(b)
Figure 3: Generated images from (a) the cloud server's model and (b) the attacker's model.
TABLE II: FID score, IS score, and SSIM of $\mathcal{A}$ and the cloud server after 100 communication rounds on the FashionMNIST dataset.
Metric | Attacker | Cloud Server
---|---|---
FID | 566.83 | 172.06
IS | 1.01 | 3.15
SSIM | 0.0067 | 0.8191
To examine privacy leakage quantitatively, we use the Frechet Inception
Distance (FID) score [29], IS, and structural similarity index measure (SSIM)
[30]. As discussed in [31], the similarity between the generated
images and real data quantifies the privacy leakage. In Table II, we record
average FID, IS, and SSIM scores for the data generated by $\mathcal{A}$ and
the cloud server. Lower FID values, higher SSIM, and larger IS values
represent better reconstruction quality. The FID score for $\mathcal{A}$ is
higher than the cloud server by a great margin. This shows that, compared to
the cloud server, $\mathcal{A}$ generated data carries less information about
the training data. We further corroborate this observation by comparing the
SSIM and IS values as well. The experimental results demonstrate the privacy
preservation of UFed-GAN against full-GAN sharing.
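For reproducibility, the SSIM part of this evaluation can be computed with scikit-image as sketched below; FID and IS additionally require a pretrained Inception network (available through common GAN-metric libraries) and are omitted. The helper function is our own illustrative assumption.

```python
# SSIM between paired real and generated image batches with scikit-image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(real_batch, fake_batch):
    """Average SSIM over paired (H, W, C) float images scaled to [0, 1]."""
    scores = [ssim(r, f, channel_axis=-1, data_range=1.0)
              for r, f in zip(real_batch, fake_batch)]
    return float(np.mean(scores))
```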
## V Conclusion
In this work, we develop a novel framework of UFed-GAN to address the
challenges imposed by the lack of labeled data and by limited local
computation resources in federated learning. Moreover, we propose a separate
training strategy and a sharing scheme based on DCGANs. We provide an analysis
of the convergence and privacy leakage of the proposed framework. Our
empirical results demonstrate the superior performance of UFed-GAN in
comparison against benchmark FL methods. We plan to investigate the
communication overhead reduction for GAN-based FL and the application of
semantic learning in distributed learning in future works.
## References
Choice Disjunctive Queries in Logic Programming
Keehang Kwon
Dept. of Computer Engineering, DongA University
Busan 604-714, Korea
<EMAIL_ADDRESS>
Abstract: One of the long-standing research problems in logic programming is
to treat the cut predicate in a logical, high-level way. We argue that this
problem can be solved by adopting linear logic and choice-disjunctive goal
formulas of the form $G_{0}\oplus G_{1}$ where $G_{0},G_{1}$ are goals. These
goals have the following intended semantics: $choose$ the true disjunct $G_{i}$ ($i=0$ or $1$) and execute it, while $discarding$ the unchosen disjunct. Note that only one disjunct remains alive during execution.
These goals thus allow us to specify mutually exclusive tasks in a high-level
way.
keywords: Prolog, mutual exclusion, cut, linear logic, computability logic
## 1 Introduction
One of the long-standing research problems in logic programming is to treat extra-logical primitives in a high-level way. The advances of logic programming have enriched Horn clauses with additional programming primitives (higher-order programming, modules, local constants, etc). Nevertheless, some key constructs could not be dealt with in a high-level way, in particular those concerned with mutual exclusion (and the cut predicate).
Consequently, much attention [10, 11, 6] has been given to finding a semantics that captures the cut predicate. However, these proposals are not $logical$ in that a well-defined yet simple declarative meaning, as well as a proof theory, is still missing; they expose low-level operational details.
In this paper, inspired by the work in [4], we propose a purely logical
solution to this problem. It involves the direct employment of linear logic
[2] to allow for choice-disjunctive goals. A choice-disjunctive goal is of the
form $G_{0}\oplus G_{1}$ where $G_{0},G_{1}$ are goals. (A more intuitive name
would be $choose(G_{0},G_{1})$.) Executing this goal with respect to a program
$D$ – $ex(D,G_{0}\oplus G_{1})$ – has the following intended semantics:
$\mbox{\rm choose a true one between}\ ex(D,G_{0}),ex(D,G_{1}).$
An illustration of this aspect is provided by the following definition of the relation $son(X,Y)$, which holds if $Y$ is a son of $X$:
$son(X,Y)\ {\rm:-}\ (male(X)\land father(Y,X))\ \oplus\ (female(X)\land mother(Y,X)).$
The body of the definition above contains a mutually exclusive goal, denoted
by $\oplus$. As a particular example, solving the query $son(tom,Y)$ would
result in selecting and executing the first goal $male(tom)\land father(Y,tom)$, while discarding the second one. The given goal will succeed, producing solutions for $Y$. Of course, we can specify mutually exclusive goals using cut in Prolog, but it is well-known that the cut complicates the declarative meaning of the program [1]. Our language makes it possible to
formulate mutually exclusive goals in a high-level way. The class of choice
disjunctive goals is, in a sense, a high-level abstraction for the cut
predicate.
As seen from the example above, choice-disjunctive goals can be used to
perform mutually exclusive tasks. There are several linear logic languages [3,
12] in which goals of the form $G_{0}\oplus G_{1}$ are present. A common yet
problematic aspect of these works is their treatment of the $\oplus$-goals:
these goals are treated as inclusive-OR (or classical disjunctive) goals
rather than exclusive-OR ones:
$ex(D,G_{0}\oplus G_{1})\ {\rm if}\ ex(D,G_{0})\ \lor ex(D,G_{1})$
where $\lor$ represents classical disjunction. Hence, the declarative reading
of $\oplus$ – known as the machine’s choice – is $violated$ in these
languages.
A satisfactory solution can be obtained by adding the choice action, as
discussed above, to their execution model of $\oplus$ so that the execution
respects the declarative reading of $\oplus$, while maintaining provability.
Hence, the main difference is that, once a goal is chosen, the unchosen goal
will be discarded in our language, while it will remain alive (typically
through a creation of a choicepoint) in those languages.
This paper proposes Prolog⊕, an extension of Prolog with choice-disjunctive
operators in goal formulas. The remainder of this paper is structured as
follows. We describe Prolog⊕ in the next section. In Section 3, we present
some examples of Prolog⊕. Section 4 concludes the paper.
## 2 The Language
The language is a version of Horn clauses with choice-disjunctive goals. Note
that we disallow linear clauses here, thus allowing only reusable clauses. It
is described by $G$\- and $D$-formulas given by the syntax rules below:
$G::=A\;|\;t=s\;|\;G\land G\;|\;\exists x\ G\;|\;G\oplus G$
$D::=A\;|\;G\supset A\;|\;\forall x\ D\;|\;D\land D$
In the rules above, $t,s$ represent terms, and $A$ represents an atomic
formula. A $D$-formula is called a Horn clause with choice-disjunctive goals.
Logic programming languages such as Prolog were originally founded on the resolution method. But this approach was difficult to extend to richer logics. The use of the sequent calculus allows us to overcome this limit. Furthermore, uniform proofs [9] allow us to execute logic programs in an efficient way by
integrating two separate phases – the proof phase and the execution phase –
into a single phase. We adopt this approach below.
We will present the machine's strategy for this language as a set of rules. These rules in fact depend on the top-level constructor in the expression, a property known as uniform provability [8, 9, 7]. Note that execution alternates between two phases: the goal-reduction phase and the backchaining phase. In the goal-reduction phase (denoted by $ex(D,G)$), the machine tries to solve a goal $G$ from a clause $D$ by simplifying $G$. Rules (6)–(9) are related to this phase. If $G$ becomes an atom, the machine switches to the backchaining mode. This is encoded in rule (5). In the backchaining mode (denoted by $bc(D_{1},D,A)$), the machine tries to solve an atomic goal $A$ by first reducing a Horn clause $D_{1}$ to simpler forms (via rules (3) and (4)) and then backchaining on the resulting clause (via rules (1) and (2)).
Definition 1. Let $G$ be a goal and let $D$ be a program. Then the notion of
executing $\langle D,G\rangle$ – $ex(D,G)$ – is defined as follows:
* (1)
$bc(A,D,A)$. % This is a success.
* (2)
$bc((G_{0}\supset A),D,A)$ if $ex(D,G_{0})$.
* (3)
$bc(D_{1}\land D_{2},D,A)$ if $bc(D_{1},D,A)$ or $bc(D_{2},D,A)$.
* (4)
$bc(\forall xD_{1},D,A)$ if $bc([t/x]D_{1},D,A)$.
* (5)
$ex(D,A)$ if $bc(D,D,A)$.
* (6)
$ex(D,t=s)$ if $unify(t,s)$. % $t,s$ are terms.
* (7)
$ex(D,G_{0}\land G_{1})$ if $ex(D,G_{0})$ $and$ $ex(D,G_{1})$.
* (8)
$ex(D,\exists xG_{0})$ if $ex(D,[t/x]G_{0})$.
* (9)
$ex(D,G_{0}\oplus G_{1})$ if a successful disjunct is selected between $ex(D,G_{0})$ and $ex(D,G_{1})$. % This goal behaves as exclusive-OR.
In the above rules, only rule (9) is novel. To be specific, this goal first attempts to execute $G_{0}$, discarding $G_{1}$. If it succeeds, the machine does nothing further (and leaves no choice point for $G_{1}$). If it fails, then $G_{1}$ is attempted.
Implementing $G_{0}\oplus G_{1}$ poses no difficulties. For example, it can be
done by translating it to a Prolog disjunctive goal of the form
$(G_{0},!);G_{1}$ where ; denotes a Prolog disjunction. The cut then destroys
the choice point created for $G_{1}$ if $G_{0}$ succeeds. On the contrary, the
same goal $G_{0}\oplus G_{1}$ will be translated to $G_{0};G_{1}$ in other
linear logic languages.
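The contrast can be made concrete with a small Python sketch (our illustration of the intended success semantics; the encoding of a goal as a function returning its list of answers is ours, and the cut's effect on enclosing choice points is not modelled):

def classical_or(g0, g1):
    # inclusive-OR: backtracking can reach the answers of both disjuncts
    return g0() + g1()

def choice_oplus(g0, g1):
    # committed choice: keep the first answer of g0 if it has any;
    # otherwise fall through to g1 (cf. the translation (G0,!);G1)
    answers = g0()
    return answers[:1] if answers else g1()

g0 = lambda: ['a', 'b']        # a goal with two answers
g1 = lambda: ['c']
print(classical_or(g0, g1))    # ['a', 'b', 'c'] -- choice points survive
print(choice_oplus(g0, g1))    # ['a']           -- deterministic, G1 discarded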
The following theorem connects our language to linear logic. Its proof is
obtained from [3] and from the simple observation that our modified execution
rule preserves provability.
###### Theorem 1
Let $D$ be a program and let $G$ be a goal. Then, $ex(D,G)$ terminates with a
success if and only if $G$ follows from $D$ in intuitionistic linear logic.
Furthermore, it respects the declarative reading of the operator $\oplus$.
## 3 Examples
Let us first consider the relation $f(X,Y)$ specified by two rules:
* (1)
if $X<2$, then $Y=0$.
* (2)
if $X\geq 2$, then $Y=3$.
The two conditions are mutually exclusive which is expressed by using the cut
in traditional logic programming as shown below:
$f(X,0):-\ X<2,!.$ $f(X,3):-\ X\geq 2.$
Using cut, we can specify mutually exclusive goals, but cuts affect the
declarative meaning of the program. Our language makes it possible to
formulate mutually exclusive goals through the choice-disjunctive goals as
shown below:
$f(X,Y)\ {\rm:-}\ (X\geq 2\land Y=3)\ \oplus\ (X<2\land Y=0)$
The new program, equipped with $\oplus$-goals, is more readable than the
original version with cuts, while preserving the same efficiency. A similar
example is provided by the following “max” program that finds the larger of
two numbers.
$max(X,Y,Max)\ {\rm:-}\ (X\geq Y\land Max=X)\ \oplus\ (X<Y\land Max=Y)$
These two goals in the body of the above clause are mutually exclusive. Hence,
only one of these two goals can succeed. For example, consider a goal
$max(3,9,Max)$. Solving this goal has the effect of choosing and executing the
second goal $(3<9)\land Max=9$, producing the result $Max=9$.
As another example, we consider the relation $member(X,L)$ for establishing
whether $X$ is in the list $L$. A typical Prolog definition of $member(X,L)$
is shown below:
$member(X,[Y|L])\ {\rm:-}\ (Y=X)\ \lor\ member(X,L)$
This definition is nondeterministic in the sense that it can find any
occurrence of $X$. Our language in Section 2 makes it possible to change
$member$ to be deterministic and more efficient: only one occurrence can be
found. An example of this is provided by the following program.
$member(X,[Y|L])\ {\rm:-}\ (Y=X)\ \oplus\ member(X,L)$
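The committed behaviour of this clause can be sketched in Python (our illustration, not part of the language definition); short-circuit evaluation plays the role of the commit:

def member_det(x, lst):
    # (Y = X) (+) member(X, L): try the head, and only on failure recurse.
    # Once the head matches, the recursive disjunct is never attempted.
    return bool(lst) and (lst[0] == x or member_det(x, lst[1:]))

print(member_det(2, [1, 2, 3, 2]))   # True -- found at the first occurrence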
As a final example, we consider the relation $rprime$ for establishing whether
the keyboard input data $X$ is prime or not. An example of this is provided by
the following program.
$rprime\ {\rm:-}\ read(X)\ \land\ ((prime(X)\land write(`prime'))\ \oplus\ (composite(X)\land write(`composite')))$
## 4 Conclusion
In this paper, we have considered an extension to Prolog with choice-
disjunctive goals. This extension allows goals of the form $G_{0}\oplus G_{1}$
where $G_{0},G_{1}$ are goals. These goals are particularly useful for
replacing the cut in Prolog, making Prolog more concise and more readable.
In the near future, we plan to investigate the connection between Prolog⊕ and Japaridze's Computability Logic (CL) [4, 5]. CL is a new semantic platform for
reinterpreting logic as a theory of tasks. Formulas in CL stand for
instructions that can carry out some tasks. We plan to investigate whether our
operational semantics is sound and complete with respect to the semantics of
CL.
## References
* [1] I. Bratko, “Prolog Programming for Artificial Intelligence”, Addison Wesley, 2001 (3rd edition).
* [2] J.-Y. Girard, “Linear logic”, Theoretical Computer Science, vol.50, pp.1–102, 1987.
* [3] J. Hodas and D. Miller, “Logic Programming in a Fragment of Intuitionistic Linear Logic”, Information and Computation, vol.110, pp.327–365, 1994.
* [4] G. Japaridze, “Introduction to computability logic”, Annals of Pure and Applied Logic, vol.123, pp.1–99, 2003.
* [5] G. Japaridze, “Sequential operators in computability logic”, Information and Computation, vol.206, No.12, pp.1443-1475, 2008.
* [6] J. Kriener and A. King, “RedAlert: determinacy inference for Prolog”, Theory and Practice of Logic Programming, vol.11, no.4-5, pp.182–196, 2011.
* [7] E. Komendantskaya and V. Komendantsky, “On uniform proof-theoretical operational semantics for logic programming”, in J.-Y. Beziau and A. Costa-Leite, editors, Perspectives on Universal Logic, pp.379–394, Polimetrica Publisher, 2007.
* [8] D. Miller, “A logical analysis of modules in logic programming”, Journal of Logic Programming, vol.6, pp.79–108, 1989.
* [9] D. Miller, G. Nadathur, F. Pfenning, and A. Scedrov, “Uniform proofs as a foundation for logic programming”, Annals of Pure and Applied Logic, vol.51, pp.125–157, 1991.
* [10] A. Porto, “A structured alternative to Prolog with simple compositional semantics”, Theory and Practice of Logic Programming, vol.11, No.4-5, pp.611-627, 2011.
* [11] A. Saurin, “Towards Ludics Programming: Interactive Proof Search”, International Conference on Logic Programming, pages 253–268. 2008.
* [12] M. D. Winikoff, “Logic Programming with Linear Logic”, PhD thesis, University of Melbourne, 1997.
# Spectral approach to Korteweg-de Vries equations on the compactified real
line
Christian Klein Institut de Mathématiques de Bourgogne, UMR 5584
Université de Bourgogne-Franche-Comté, 9 avenue Alain Savary, 21078 Dijon
Cedex, France
E-mail: <EMAIL_ADDRESS>; and Nikola Stoilov, Institut de
Mathématiques de Bourgogne, UMR 5584
Université de Bourgogne-Franche-Comté, 9 avenue Alain Savary, 21078 Dijon
Cedex, France
E-mail: <EMAIL_ADDRESS>
###### Abstract.
We present a numerical approach for generalised Korteweg-de Vries (KdV)
equations on the real line. In the spatial dimension we compactify the real
line and apply a Chebyshev collocation method. The time integration is
performed with an implicit Runge-Kutta method of fourth order. Several
examples are discussed: initial data bounded but not vanishing at infinity as
well as data not satisfying the Faddeev condition, i.e. with a slow decay
towards infinity.
This work is partially supported by the ANR-FWF project ANuI -
ANR-17-CE40-0035, the isite BFC project NAANoD, the ANR-17-EURE-0002 EIPHI and
by the European Union Horizon 2020 research and innovation program under the
Marie Sklodowska-Curie RISE 2017 grant agreement no. 778010 IPaDEGAN
## 1\. Introduction
Generalised Korteweg-de Vries (gKdV) equations,
(1) $u_{t}(x,t)+u_{xxx}(x,t)+u(x,t)^{p-1}u_{x}(x,t)=0,$
where $p\in\mathbb{N}$, $u:\mathbb{R}\times\mathbb{R}^{+}\mapsto\mathbb{R}$,
appear as asymptotic models in hydrodynamics, nonlinear optics, plasma
physics, Bose-Einstein condensates, and essentially in most situations where
predominantly one-dimensional phenomena are discussed and where dispersion
dominates dissipation. Whereas this applies in particular to the case of the
classical Korteweg-de Vries (KdV) equation ($p=2$), there are applications for
instance in electrodynamics for the modified KdV equation ($p=3$), see [21].
Because of their importance in applications, there has been considerable activity in developing numerical approaches for the gKdV equations. For
initial data which are periodic or rapidly decreasing, numerical approaches
based on the approximation of $u$ in (1) via truncated Fourier series, i.e.,
trigonometric polynomials, are very efficient, see for instance [14, 15] and
references therein.
The Fourier approach, that is restricting the data to an interval
$L[-\pi,\pi]$ ($L=const$, $L\in\mathbb{R}^{+}$) and continuing them
periodically with period $2\pi L$ on the whole real line works very well for
periodic functions and exhibits _spectral convergence_ , namely an exponential
decrease of the numerical error with the number of Fourier modes. Schwartz
class functions can be treated as periodic as one works with a finite
precision and since $L$ can be chosen large enough such that all necessary
derivatives of $u$ vanish at the domain boundaries with numerical precision.
However, the same approach for initial data which do not tend to zero or are
only slowly decreasing to zero for $x\to\infty$, would in general imply a
Gibbs phenomenon at the domain boundaries. The resulting method would
therefore be only of first order in the number of Fourier modes, making it
impossible to reach the high resolution necessary to treat e.g. rapid
oscillations in the solution. Therefore, such situations are typically treated
on a finite interval. This leads to the problem of how to impose boundary
conditions, so that inside the computational domain the solution is the same
as if the computation was done on the whole real line. Such boundary
conditions are called _transparent_. Bérenger [2] introduced _perfectly
matched layers_ (PML) in electrodynamics to address this problem by extending
the computational domain to layers glued to the domain boundaries. Inside the
layers, the equation under consideration is deformed to a dissipative one,
which is chosen to efficiently dissipate the solution. Whereas this works well
for linear equations, examples for the nonlinear Schrödinger equation, see
[26, 3], showed that in a nonlinear setting there will be back reflections
from the layers to the computational domain. For integrable equations exact
transparent boundary conditions (TBC) can be given, e.g. for the case of
modified KdV see [27] based on [9]. The problem with both PML and TBC is that
they in general require initial data with compact support within the
computational domain, thus limiting the class of solutions that can be
studied. The goal of the present paper is to establish a spectral numerical
approach for generalised KdV equations (1) for initial data that are analytic
on the whole real line, that are slowly decreasing towards infinity, or are
bounded there. This numerical approach exhibits spectral convergence on the
whole real line with a technique similar to [12].
Both the classical KdV equation and the modified KdV equation are completely
integrable, which means they have an infinite number of conserved quantities
(for a comprehensive review on integrability see [25]). In all other cases,
the generalised KdV equations have only three conserved quantities: $\int_{\mathbb{R}}u\,dx$, the $L^{2}$ norm of $u$, and the energy
(2)
$E[u]=\int_{\mathbb{R}}\left(\frac{u^{p+1}}{p(p+1)}-\frac{1}{2}u_{x}^{2}\right)dx.$
The complete integrability of the classical KdV equation made it one of the
best studied non-linear dispersive equations with a rather complete
understanding of its solutions. However, even in this case there are open
questions which motivate us to provide numerical tools to complement
analytical studies in this context. The standard inverse scattering approach
to KdV is only applicable if the Faddeev decay condition [8],
(3) $\int_{\mathbb{R}}(1+|x|)|u_{0}(x)|dx<\infty,$
holds for the initial data $u(x,0)=u_{0}(x)$. The direct scattering approach
to KdV involves the determination of the spectrum of the Schrödinger equation
for the potential $u_{0}(x)$,
$\psi_{xx}+u_{0}(x)\psi=E\psi,$
see [25]. It is known that this discrete spectrum is finite if the Faddeev
condition (3) is satisfied, see [17]. But except for the periodic case, see
for instance [1], there is no complete understanding of the case of initial
data not satisfying (3). The goal of the present paper is to provide numerical
tools to study such cases. Of course, numerically one will only be able to study finite times and thus will not be able to address the question whether
the time evolution of such data can lead to an infinite number of solitons.
The paper is organized as follows: In section 2 we summarize a few theoretical
facts on generalised KdV equations. In section 3 we choose a compactification
of the real line and describe the numerical approach for the generalised KdV
equations. In section 4 we discuss several examples. Concluding remarks are
added in section 5.
## 2\. Theoretical preliminaries
In this section we summarize basic facts about generalised KdV equations
needed in the following.
Though the KdV equations (1) are only completely integrable for $p=2$ and
$p=3$, they have for all integer values of $p\geq 2$ a solitary travelling
wave solution which is explicitly given by $u=Q_{c}(x-x_{0}-ct)$ with
$x_{0},c=const$ and with
(4)
$Q_{c}(z)=\left(\frac{p(p+1)c}{2}\,\mbox{sech}^{2}\frac{\sqrt{c}(p-1)}{2}z\right)^{1/(p-1)}.$
Thus, we have $Q_{c}(z)=c^{1/(p-1)}Q(\sqrt{c}z)$, where we have put $Q:=Q_{1}$. This simple scaling property of the _solitons_ allows one to
concentrate on the case $c=1$. If we refer in the following to the generalised
KdV soliton, it is always implied that $c=1$. It was shown in [4] that these
solitons are linearly unstable for $p>4$. Note that the energy of the soliton
vanishes for $p=5$.
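As a quick numerical consistency check, the following Python snippet (ours, purely illustrative) evaluates (4) and verifies the scaling relation:

import numpy as np

def Q(z, p=2, c=1.0):
    # solitary wave (4) of the generalised KdV equation
    sech = 1.0 / np.cosh(np.sqrt(c) * (p - 1) * z / 2.0)
    return (p * (p + 1) * c / 2.0 * sech ** 2) ** (1.0 / (p - 1))

z = np.linspace(-10.0, 10.0, 201)
c = 2.0
# Q_c(z) = c^{1/(p-1)} Q(sqrt(c) z), checked here for p = 4
assert np.allclose(Q(z, p=4, c=c), c ** (1.0 / 3.0) * Q(np.sqrt(c) * z, p=4))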
The generalised KdV equation has the following scaling invariance: $x\mapsto
x/\lambda$, $t\mapsto t/\lambda^{3}$ and $u\mapsto\lambda^{2/(p-1)}u$ with
$\lambda=const$. For $p=5$, the $L^{2}$ norm of $u$ is invariant under this
rescaling, and this case is consequently called $L^{2}$-critical. It is shown
in [18] that solutions to the $L^{2}$ critical generalised KdV equation can
have a blow-up in finite time for smooth initial data. The mechanism of the
blow-up for initial data close to the soliton is discussed in [20]. In the
present paper, we will only study the sub-critical cases, $p\leq 4$.
A convenient way to treat solutions varying on length scales of order
$1/\epsilon$ for times of order $1/\epsilon$ for $0<\epsilon\ll 1$ is to
consider the map $x\mapsto\epsilon x$, $t\mapsto\epsilon t$. This leads for
(1) to (once more we use the same symbol $u$ for the transformed and the
original solution)
(5) $u_{t}+\epsilon^{2}u_{xxx}+u^{p-1}u_{x}=0.$
The formal limit $\epsilon\to 0$ of this equation yields a generalised Hopf
equation, $u_{t}+u^{p-1}u_{x}=0$. It is known that such equations can have a
_hyperbolic blow-up_ , i.e., shocks for general smooth initial data, for
instance for data with a single hump. The generic _break-up_ of such solutions
at a point $(x_{c},t_{c},u_{c})$, see for instance the discussion in [7] and
references therein, is characterized by the equations
(6) $a(u_{c})t_{c}+\Phi(u_{c})=x_{c},\quad a^{\prime}(u_{c})t_{c}+\Phi^{\prime}(u_{c})=0,\quad a^{\prime\prime}(u_{c})t_{c}+\Phi^{\prime\prime}(u_{c})=0,$
where $a(u)=u^{p-1}$. It is known that dispersive regularizations of
dispersionless equations as (5) in our case will lead to _dispersive shock
waves_ (DSWs), i.e., rapid modulated oscillations near the shock of the
solution of the corresponding dispersionless equation for the same initial
data. Dubrovin [5, 6] presented a conjecture that the onset of a DSW is
_universal_ for a large class of dispersive equations and of initial data, and
that it is given by a special solution of the so-called Painlevé $P_{I}^{2}$
equation, see for instance [11]. This conjecture was numerically shown to
apply to the generalised KdV equations in [7]. In this paper we provide the
numerical tools to do so for larger classes of initial data than in [7],
however we leave addressing the universality conjecture in the present context
for a future work.
## 3\. Numerical approach
In this section we present the numerical approach to treat generalised KdV
equations on the compactified real line. We first introduce the
compactification which allows to study the equation on the interval $[-1,1]$
where we use a Chebyshev collocation method. The resulting system of ordinary
differential equations is then integrated in time with an implicit Runge-Kutta
scheme.
### 3.1. Compactification
To numerically treat the KdV equation, we first map the real line via the well
known map (this is the standard compactification used for Minkowski spacetime)
(7) $x=c\tan\frac{\pi l}{2},\quad l\in[-1,1],\quad c=const,$
to the interval $[-1,1]$. This implies
(8) $\partial_{x}=\frac{2}{\pi c}\cos^{2}\frac{l\pi}{2}\partial_{l}.$
The role of the constant $c$ is to control the numerical resolution in various
parts of the real line within certain limits. This is illustrated in Fig. 1
where we have introduced _Chebyshev collocation points_ for $l\in[-1,1]$,
i.e.,
(9) $l_{n}=\cos(n\pi/N),\quad n=0,1,\ldots,N,\quad N\in\mathbb{N}.$
It can be seen in Fig. 1 that the density of the points near $x=0$ is higher
for smaller $c$ (numerically the function $\tilde{v}$ is studied as a function
of $l$, but since in applications the dependence on $x$ is more important, we
plot its $x$-dependence which can be obtained via (7)). Note, however, that
the spectral methods we apply in this paper are global. This means that a high
resolution in part of the studied domain is not necessarily beneficial; only the overall resolution on the whole interval is important. Therefore we
generally apply values of $c$ close to 1.
Figure 1. Distribution of the Chebyshev collocation points (9) on a Lorentzian
profile under the map (7) for $N=800$; on the left for $c=0.1$, in the middle
for $c=1$, on the right for $c=10$.
### 3.2. Boundary conditions
The map (7) transforms the KdV equation (1) on the real line to an equation on
the interval $[-1,1]$, which is singular at the points $l=\pm 1$. Because of
this singular behavior, no boundary conditions need to be imposed there.
However, in practice it is useful to give boundary conditions at these points
to stabilize the numerical approaches. We apply a vanishing condition for $u$
at these points, and a _clamped boundary condition_ for $l=-1$, i.e., the
three conditions (in an abuse of notation, we denote $u(x,t)$ and $u(l,t)$
with the same letter)
(10) $u(l,t)\big|_{l=1}=0,\quad u(l,t)\big|_{l=-1}=0,\quad u_{l}(l,t)\big|_{l=-1}=0.$
The approach we present here can be generalised to functions $u$ which do not
tend to 0 for $|x|\to\infty$, but which are bounded there. In this case we
write
(11) $u=v+V,\quad V=A\frac{1+l}{2}+B\frac{(1+l)^{2}}{4}+C\frac{1-l}{2},$
where $A$, $B$, $C$ are constants such that $v(l=\pm 1)=v_{l}(l=-1)=0$. To
treat the clamped boundary condition for $l=-1$, we use as in [22] the ansatz
(12) $v=(1+l)\tilde{v}.$
This leads for (1) to the equation
(13)
$\tilde{v}_{t}+\frac{1}{1+l}[(1+l)\tilde{v}+V]_{xxx}+\frac{1}{1+l}[(1+l)\tilde{v}+V]^{p-1}[(1+l)\tilde{v}+V]_{x}=0$
which has to be solved for all $t$ with the condition $\tilde{v}(\pm 1)=0$.
### 3.3. Chebyshev differentiation matrices
The dependence of $\tilde{v}$ in (13) on $l$ will be treated in standard way
via Lagrange interpolation of $\tilde{v}$ on Chebyshev collocation points (9)
as discussed in [22]. A derivative of $\tilde{v}$ with respect to $l$ is then
approximated via the derivative of the Lagrange polynomial. This leads to the
action of a matrix on the vector
$\tilde{v}=(\tilde{v}(l_{0}),\ldots,\tilde{v}(l_{N}))$ (again we use the same
symbol for the function $\tilde{v}$ and the vector $\tilde{v}$), the well
known _Chebyshev differentiation matrices_ $D$, see e.g., [23, 24]. This means
with (8) that the derivatives $\partial_{x}$ are approximated by
(14) $\partial_{x}\approx\frac{2}{\pi
c}\mbox{diag}\left(\cos^{2}\frac{l\pi}{2}\right)D,$
where the diagonal matrix has the components
$(\cos^{2}\frac{l_{0}\pi}{2},\ldots,\cos^{2}\frac{l_{N}\pi}{2})$.
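For illustration, the approximation (14) can be assembled in a few lines of Python following Trefethen's cheb.m (a sketch of the standard construction, not the authors' code; the values $N=600$ and $c=2$ are those used in Section 4.1):

import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on l_n = cos(n*pi/N), n = 0,...,N
    l = np.cos(np.pi * np.arange(N + 1) / N)
    w = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    L = np.tile(l, (N + 1, 1)).T
    D = np.outer(w, 1.0 / w) / (L - L.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal from negative row sums
    return D, l

N, c = 600, 2.0
D, l = cheb(N)
Dx = (2.0 / (np.pi * c)) * np.diag(np.cos(np.pi * l / 2.0) ** 2) @ D   # (14)
Dx3 = Dx @ Dx @ Dx                       # third derivative as the cubic power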
The Lagrange interpolation of a function on Chebyshev collocation points is
closely related to an expansion of the function in terms of Chebyshev
polynomials $T_{n}$, $n=0,1,\ldots$
(15) $\tilde{v}(l)\approx\sum_{n=0}^{N}v_{n}T_{n}(l),$
see the discussion in [22]. As shown again in [22], the Chebyshev coefficients
$v_{n}$, $n=0,1,\ldots,N$ can be computed efficiently via a fast cosine
transform which is closely related to the fast Fourier transform. It is also
known that the Chebyshev coefficients for a function analytic on $[-1,1]$
decrease exponentially, and that the numerical error in approximating a
function $\tilde{v}$ via (15) is of the order of the first omitted coefficient in the Chebyshev series.
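In Python this amounts to a single fast cosine transform call (a sketch of the standard technique; the normalization conventions are ours):

import numpy as np
from scipy.fft import dct

def cheb_coeffs(f_vals):
    # coefficients v_n in (15) from samples on the collocation points (9)
    N = len(f_vals) - 1
    a = dct(f_vals, type=1) / N          # DCT-I of the samples
    a[0] /= 2.0                          # endpoint terms are halved
    a[-1] /= 2.0
    return a

N = 32
l = np.cos(np.pi * np.arange(N + 1) / N)
print(cheb_coeffs(np.exp(l))[:5])        # exponentially decaying coefficients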
### 3.4. Time integration
After the discretisation in space, equation (13) becomes an $(N+1)$-dimensional
system of ordinary differential equations (ODEs) in $t$ of the form
$\tilde{v}_{t}=f(\tilde{v})$ which can be numerically integrated in time with
standard techniques. The discussion in [22] shows that Chebyshev
differentiation matrices have a conditioning of the order
$\mbox{cond}(D^{3})=O(N^{6})$. Thus explicit time integration schemes are
problematic since stability conditions would necessitate prohibitively small
time steps. Therefore, here we apply an implicit method, and since we are
interested in capturing rapid oscillations in the expected DSWs, we use a
fourth order method.
Concretely, we apply an implicit fourth order Runge-Kutta (IRK4) scheme, also called the Hammer-Hollingsworth method, a 2-stage Gauss scheme. The general formulation
of an $s$-stage Runge–Kutta method for the initial value problem
$\tilde{v}^{\prime}=f(\tilde{v},t),\,\,\,\,\tilde{v}(t_{0})=\tilde{v}_{0}$
reads:
(16) $\tilde{v}_{n+1}=\tilde{v}_{n}+h\sum_{i=1}^{s}b_{i}K_{i},$
(17) $K_{i}=f\left(t_{n}+c_{i}h,\ \tilde{v}_{n}+h\sum_{j=1}^{s}a_{ij}K_{j}\right),$
where $b_{i},a_{ij}$, $i,j=1,\ldots,s$, are real numbers and $c_{i}=\sum_{j=1}^{s}a_{ij}$. For the IRK4 method used here, one has $c_{1}=\frac{1}{2}-\frac{\sqrt{3}}{6}$, $c_{2}=\frac{1}{2}+\frac{\sqrt{3}}{6}$, $a_{11}=a_{22}=1/4$, $a_{12}=\frac{1}{4}-\frac{\sqrt{3}}{6}$, $a_{21}=\frac{1}{4}+\frac{\sqrt{3}}{6}$ and $b_{1}=b_{2}=1/2$.
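A compact illustration of one IRK4 step for a generic scalar ODE is the following (our sketch; the stage equations are solved here by simple fixed-point iteration, whereas for (13) the paper uses the simplified Newton iteration described after (18)):

import numpy as np

s3 = np.sqrt(3.0) / 6.0
A = np.array([[0.25, 0.25 - s3],
              [0.25 + s3, 0.25]])        # 2-stage Gauss tableau
b = np.array([0.5, 0.5])
c = A.sum(axis=1)                        # c_i = sum_j a_ij

def irk4_step(f, t, v, h, tol=1e-13, maxit=100):
    K = np.zeros(2)
    for _ in range(maxit):               # fixed-point iteration for (17)
        Knew = np.array([f(t + c[i] * h, v + h * (A[i] @ K)) for i in range(2)])
        if np.max(np.abs(Knew - K)) < tol:
            K = Knew
            break
        K = Knew
    return v + h * (b @ K)               # update (16)

v, h = 1.0, 0.01                         # test problem v' = -v, v(0) = 1
for n in range(100):
    v = irk4_step(lambda t, y: -y, n * h, v, h)
print(abs(v - np.exp(-1.0)))             # tiny error: the method is fourth order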
Applying IRK4 to (13) we get the following system,
(18)
$\mathcal{L}K_{1}=-\frac{1}{1+l}[(1+l)(\tilde{v}+ha_{12}K_{2})+V]_{xxx}-\frac{1}{1+l}[(1+l)(\tilde{v}+ha_{11}K_{1}+ha_{12}K_{2})+V]^{p-1}[(1+l)(\tilde{v}+ha_{11}K_{1}+ha_{12}K_{2})+V]_{x},$
$\mathcal{L}K_{2}=-\frac{1}{1+l}[(1+l)(\tilde{v}+ha_{21}K_{1})+V]_{xxx}-\frac{1}{1+l}[(1+l)(\tilde{v}+ha_{21}K_{1}+ha_{22}K_{2})+V]^{p-1}[(1+l)(\tilde{v}+ha_{21}K_{1}+ha_{22}K_{2})+V]_{x},$
where
$\mathcal{L}=\hat{1}+ha_{11}\frac{1}{1+l}\partial_{xxx}(1+l).$
The system (18) will be solved iteratively with a simplified Newton iteration.
This means that in each step of the iteration the new $K_{1}$ and $K_{2}$ are
obtained by inverting the operator $\mathcal{L}$ only instead of the full
Jacobian. The vanishing boundary conditions for $\tilde{v}$ and thus for
$K_{1}$, $K_{2}$ are imposed as in [22]: the equations
$\mathcal{L}K_{i}=F_{i}$, $i=1,2$, are solved by considering only the
components $1,\ldots,N-1$ of the $K_{i}$; this means that we consider the
reduced equations
$\sum_{m=1}^{N-1}\mathcal{L}_{nm}K_{i,m}=F_{i,n},\quad n=1,\ldots,N-1,\quad i=1,2,$
and solve for $K_{i,1},\ldots,K_{i,N-1}$ only since the values
$K_{i,0}=K_{i,N}=0$ are imposed.
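In code, the reduction is a plain submatrix solve; a minimal sketch (with a random stand-in for the discretised operator $\mathcal{L}$ and right-hand side, which are not constructed here):

import numpy as np

N = 8
Lmat = np.eye(N + 1) + 0.1 * np.random.rand(N + 1, N + 1)   # stand-in for L
F = np.random.rand(N + 1)
K = np.zeros(N + 1)                  # K_0 = K_N = 0 are built in
K[1:N] = np.linalg.solve(Lmat[1:N, 1:N], F[1:N])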
The resolution in time can be controlled via the conserved quantities of the
generalised KdV equation. We consider in general the energy for functions
vanishing at infinity. For functions which are just bounded at infinity, we
consider a linear combination of the energy and the $L^{2}$ norm of the
solution,
(19) $\tilde{E}=\int_{\mathbb{R}}\left(\frac{u^{p+1}-\lambda
u^{2}}{p(p+1)}-\frac{1}{2}u_{x}^{2}\right)dx,$
where the constant $\lambda$ is chosen such that the integrand is bounded at
infinity. The conserved quantities will in actual computations depend on time
due to numerical errors. But as discussed for instance in [14], the
conservation of these quantities controls the numerical error which is in
general overestimated by up to two orders of magnitude.
## 4\. Examples
In this section we study two types of examples which illustrate the potential
of the presented numerical approach. First we consider initial data in the
form of a mollified (smoothed out) step function. Then we study initial data
not satisfying the Faddeev condition
$\int^{+\infty}_{-\infty}(1+|x|)|u(x)|dx<\infty$, see [25], but nevertheless
vanishing for $|x|\to\infty$. These examples are studied for two types of
nonlinearity, for $p=2$, the completely integrable KdV equation, and for
$p=4$, a non-integrable generalised KdV equation discussed for instance in
[19]. The latter is still sub-critical which means that there is no blow-up
for sufficiently regular initial data.
### 4.1. Mollified step initial data
We consider initial data of the form
(20) $u(x,0)=\begin{cases}1&x<0\\\ \exp(-x^{2n})&x\geq 0\end{cases},$
where $n\in\mathbb{N}$. In Fig. 2 we show these data for $n=4$ on the left,
and the corresponding function $\tilde{v}$ on the right (one has $A=-B=C=1$).
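For reference, (20) is a one-line vectorized Python function (a direct transcription):

import numpy as np

def u0(x, n=4):
    # mollified step (20): 1 for x < 0, exp(-x^(2n)) for x >= 0
    return np.where(x < 0.0, 1.0, np.exp(-x ** (2 * n)))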
###### Remark 4.1.
The function (20) is analytic everywhere except at zero, where it is
$C^{2n-1}$, thus the convergence rate of a spectral method is expected to be
algebraic. Nevertheless, in practice this does not have a detrimental effect
on our approach and, as we can see below in Fig. 4 on the left, the behavior of the Chebyshev coefficients is, due to the finite precision used, virtually the same as in the analytic case.
Figure 2. Initial data (20) for $n=4$ on the left, and the corresponding
$\tilde{v}=\tilde{v}\left(l(x)\right)$ on the right.
For the time evolution of these data, we use $c=2$, $N=600$ and $N_{t}=1000$
time steps for $t\in[0,0.01]$. The resulting solution to the KdV equation can
be seen in Fig. 3. The formation of a dispersive shock wave is clearly
visible.
Figure 3. Solution to the KdV equation (1) with $p=2$ for the initial data
(20) for $n=4$ in dependence of time.
The slow decrease of the amplitude of the oscillations towards infinity,
similar to the behaviour of the Airy function, is challenging for any
numerical method. The Chebyshev coefficients $v_{n}$ (15) of the solution are
shown in Fig. 4, on the left for $t=0$, on the right for $t=0.01$. They
decrease exponentially to the order of the rounding error for the initial data
indicating that an analytic (within numerical precision, see remark 4.1)
function is numerically well resolved. The algebraic decay of the spectral
coefficients for $t=0.01$ indicates an oscillatory singularity at infinity as
for the Airy function. The spatial resolution is thus of the order of
$10^{-4}$. The relative conservation of the modified energy (19) is of the order of $10^{-4}$ during the whole computation. This means the solution is
obtained to better than plotting accuracy.
Figure 4. The Chebyshev coefficients (15), on the left for the mollified step initial data (20), on the right for the solution shown in Fig. 3 for $t=0.01$.
Note that the DSW is not the same as in the case of an exact step, the
classical Gurevich-Pitaevskii problem [13]. But one can for instance verify
that this is the correct solution by considering a finite step smoothed out at
both sides,
(21) $u(x,0)=\begin{cases}1&x_{0}<x<0\\\ \exp(-x^{2n})&x\geq 0\\\
\exp(-(x-x_{0})^{2n})&x\leq x_{0}\end{cases}$
which can be conveniently treated with Fourier methods as in [14] to which the
reader is referred for details and references. We use $N=2^{12}$ Fourier modes
for $x\in 10[-\pi,\pi]$ and $N_{t}=1000$ time steps for a fourth order
exponential time differencing method. In Fig. 5 we show on the left the solution of Fig. 3 for $t=0.01$, and on the right the solution for the initial data (21) with $n=4$ and $x_{0}=-5\pi$ at the same time.
Figure 5. Solution to the KdV equation (1) with $p=2$ for the initial data
(20) on the left, and for the initial data (21) on the right, both for $n=4$
and $t=0.01$.
The solution to the generalised KdV equation with $p=4$ for the same initial
data as in Fig. 2 can be seen in Fig. 6. We have used the same numerical
parameters as for the case $p=2$, and we obtain the same numerical resolution.
The form of the DSW is very similar to the one for the standard KdV equation.
Figure 6. Solution to the generalised KdV equation (1) with $p=4$ for the
initial data (20) for $n=4$ in dependence of time.
### 4.2. Slowly decaying initial data
In this subsection we consider initial data not satisfying the Faddeev
condition, and we are interested in the long time behavior of the
corresponding KdV solutions which is done by introducing a small parameter
$\epsilon$ in (5). Concretely, we study initial data of the form
(22) $u(x,0)=\frac{1}{(1+x^{2})^{a}},\quad a=\frac{1}{2},1.$
We use $c=2$, $N=800$, and $N_{t}=10^{4}$ time steps for $t\in[0,10]$. In Fig.
7 we show the KdV solution ($p=2$) in (5) for the initial data (22) for
$\epsilon=10^{-1}$ and $a=1$ in dependence of time. It can be seen that
several solitons appear.
Figure 7. Solution to the KdV equation (1) with $p=2$ for the initial data
(22) for $a=1$ in dependence of time.
The corresponding KdV solution for the initial data (22) with $a=1/2$ and
$\epsilon=10^{-1}$ can be seen in Fig. 8. Note that in contrast to the case
$a=1$, the initial data do not satisfy the clamped boundary conditions for
$x\to-\infty$; one has $A=-B=\pi/2$ and $C=0$ in (11).
Figure 8. Solution to the KdV equation (1) with $p=2$ for the initial data
(22) for $a=1/2$ in dependence of time.
The solutions at the final time of Fig. 7 and 8 can be seen in Fig. 9, on the
left for $a=1/2$, on the right for $a=1$. The slower decay towards infinity of
the initial data with $a=1/2$ can be recognized. But at the time $t=10$, one
observes the same number of solitons in both cases. The peaks in the solutions
have been fitted to the solitons (4) which are shown in green in the same
figure. It can be seen that the solitons are not yet fully separated from the
background, but that they can be already clearly identified at this early
stage.
Figure 9. Solution to the KdV equation (1) with $p=2$ for the initial data
(22) for $t=10$, on the left for $a=1/2$, on the right for $a=1$; in green
fitted solitons (4).
The relative computed energy is in both cases conserved to the order of
$10^{-10}$. The Chebyshev coefficients for $t=10$ are shown in Fig. 10, on the
left for $a=1/2$, on the right for $a=1$. It can be seen that the coefficients
decrease as expected exponentially, and that the solutions are well resolved
in space as well.
Figure 10. Chebyshev coefficients (15) of the solutions to the KdV equation (1) with $p=2$ for the initial data (22) for $t=10$, on the left for $a=1/2$, on the right for $a=1$.
If the same initial data as in Fig. 9 are considered for the generalised KdV
equation (5) with $p=4$, one obtains for $t=10$ the solutions shown in Fig.
11. The solitons are here much more peaked than in the KdV case of Fig. 9
which is also illustrated by the fit to the solitons. Consequently, the same numerical parameters as before lead to a lower resolution for the generalised KdV solution: the relative conservation of the energy is of the order of $10^{-4}$, and the Chebyshev coefficients still decrease exponentially, but only
to the order of $10^{-6}$ in this case.
Figure 11. Solution to the KdV equation (1) with $p=4$ for the initial data
(22) for $t=10$, on the left for $a=1/2$, on the right for $a=1$.
## 5\. Outlook
In this paper we have presented a numerical approach for generalised KdV
equations on the compactified real line which allows us to approximate functions which are smooth on $\mathbb{R}\cup\{\infty\}$ with spectral accuracy, i.e.,
with a numerical error decreasing exponentially with the number of collocation
points. The time integration is performed with an implicit fourth order
method. One direction of further research will be to improve the efficiency of
the time integration, ideally with an explicit approach, for instance similar to
the approach of [16] in the context of the Schrödinger equation.
Of special interest is, however, the application of the techniques of the
present paper to the numerical study of blow-up in the context of generalised
KdV equations, i.e., for (1) with $p\geq 5$. The current approach would allow one to study the dynamically rescaled generalised KdV equations, see for instance
[15] without the problems there related to the use of Fourier methods. In this
context it would be beneficial to apply a multidomain spectral method as in
[3] for Schrödinger equations where the compactified real axis is divided into
several domains each of which is mapped to the interval $[-1,1]$. This allows
for a more efficient allocation of numerical resolution than via the choice of
the parameter $c$ in (7), in particular a higher resolution near the expected
blow-up. To this end, matching conditions at the domain boundaries (the
solution $u$ has to be $C^{2}$ there) have to be imposed, which is numerically
problematic. This will be a subject of further research.
## References
* [1] Belokolos E, Bobenko A, Enolskii V, Its A and Matveev V 1994 Algebro-Geometric Approach to Nonlinear Integrable Equations, Springer Series in Nonlinear Dynamics (Berlin: Springer)
* [2] J. Bérenger. A perfectly matched layer for the absorption of electromagnetic waves. J. Comput. Phys. 114, 185-200, 1994.
* [3] M. Birem and C. Klein, Multidomain spectral method for Schrödinger equations, Adv. Comp. Math., 42(2), 395-423 DOI 10.1007/s10444-015-9429-9 (2016)
* [4] J. L. Bona, P. E. Souganidis, W. A. Strauss, Stability and instability of solitary waves of Korteweg-de Vries type, Proc. R. Soc. Lond. A 411 (1987) 395 - 412.
* [5] B. Dubrovin, On Hamiltonian perturbations of hyperbolic systems of conservation laws, II: universality of critical behaviour, Comm. Math. Phys. 267 (2006) 117 - 139.
* [6] B. Dubrovin, On universality of critical behaviour in Hamiltonian PDEs, Amer. Math. Soc. Transl. 224 (2008) 59-109.
* [7] B. Dubrovin, T. Grava and C. Klein, Numerical Study of breakup in generalised Korteweg-de Vries and Kawahara equations, SIAM J. Appl. Math., Vol 71, 983-1008 (2011).
* [8] L.D. Faddeev, On the relation between the S-matrix and potential for the one dimensional Schrödinger operator, Dokl. Akad. Nauk SSSR 121 (1958), 63-66; see also Amer. Math. Soc. Transl. (2) 65 (1967), 139-166.
* [9] A.S. Fokas: The generalised Dirichlet-to-Neumann map for certain nonlinear evolution PDEs. Comm. Pure Appl. Math. 58(5), 639-670 (2005)
* [10] T. Grava and C. Klein, Numerical study of the small dispersion limit of the Korteweg-de Vries equation and asymptotic solutions, Physica D, 10.1016/j.physd.2012.04.001 (2012).
* [11] T. Grava, A. Kapaev, and C. Klein, On the tritronquée solutions of $P^{2}_{I}$ , Constr. Approx. 41, 425 -466, DOI 10.1007/s00365-015-9285-3 (2015).
* [12] C.E. Grosch and S.A. Orszag, Numerical solution of problems in unbounded regions: coordinate transforms, J. Comput. Phys. 25, 273-296 (1977).
* [13] A. V. Gurevich, L. P. Pitaevskii, Nonstationary structure of a collisionless shock wave. Zh. Eksper. Teoret. Fiz. 65 (1973), no. 8, 590-604; translation in Soviet Physics JETP 38 (1974), no. 2, 291-297.
* [14] C. Klein, Fourth order time-stepping for low dispersion Korteweg-de Vries and nonlinear Schrödinger equation, ETNA Vol. 29 116-135 (2008).
* [15] C. Klein and R. Peter, Numerical study of blow-up in solutions to generalised Korteweg-de Vries equations, Physica D 304-305 (2015), 52-78 DOI 10.1016/j.physd.2015.04.003.
* [16] C. Klein, N. Stoilov, Numerical study of the transverse stability of the Peregrine solution, Stud Appl Math. 145 (2020) 36-51. https://doi.org/10.1111/sapm.12306
* [17] V.A. Marchenko, Sturm-Liouville operators and applications. Revised edition, AMS Chelsea Publishing vol. 373 (2001).
* [18] Y. Martel and F. Merle, Stability of Blow-Up Profile and Lower Bounds for Blow-Up Rate for the Critical generalised KdV Equation, Ann. Math., 155, (2002), 235-280.
* [19] Y. Martel, F. Merle, On the nonexistence of pure multi-solitons for the quartic gKdV equation. Int Math Res Notices (2015) (3): 688-739.
* [20] Y. Martel, F. Merle, P. Raphaël, Blow up for the critical gKdV equation I: dynamics near the soliton. Acta Math. 212 (2014), no. 1, 59-140.
* [21] T. L. Perel'man, A. Kh. Fridman, and M. M. El'yashevich, A modified Korteweg-de Vries equation in electrohydrodynamics, Sov. Phys. JETP, Vol. 39, No. 4, October 1974.
* [22] L.N. Trefethen, Spectral Methods in Matlab. SIAM, Philadelphia, PA (2000)
* [23] www.comlab.ox.ac.uk/oucl/work/nick.trefethen
* [24] J.A.C. Weideman, S.C. Reddy: A MATLAB differentiation matrix suite. ACM TOMS 26, 465-519 (2000)
* [25] V.E. Zakharov (ed.), What is integrability? Berlin: Springer-Verlag, (1991).
* [26] C. Zheng, A perfectly matched layer approach to the nonlinear Schrödinger wave equations. J. Comput. Phys. 227, 537-556 (2007)
* [27] C. Zheng, Numerical simulation of a modified KdV equation on the whole real axis, Numer. Math. (2006) 105:315-335 DOI 10.1007/s00211-006-0044-z
# Role of electronic correlations in the Kagome lattice superconductor LaRh3B2
Savita Chaudhary Department of Physical Sciences, Indian Institute of Science
Education and Research (IISER) Mohali, Knowledge City, Sector 81, Mohali
140306, India. Shama Department of Physical Sciences, Indian Institute of
Science Education and Research (IISER) Mohali, Knowledge City, Sector 81,
Mohali 140306, India. Jaskaran Singh Department of Physical Sciences, Indian
Institute of Science Education and Research (IISER) Mohali, Knowledge City,
Sector 81, Mohali 140306, India. Department of Physics, Punjabi University,
Patiala, 147002, India. Armando Consiglio Institut für Theoretische Physik
und Astrophysik and Würzburg-Dresden Cluster of Excellence ct.qmat,
Universität Würzburg, 97074 Würzburg, Germany Domenico Di Sante Department
of Physics and Astronomy, Alma Mater Studiorum, University of Bologna, 40127
Bologna, Italy Center for Computational Quantum Physics, Flatiron Institute,
162 5th Avenue, New York, New York 10010, USA Ronny Thomale Institut für
Theoretische Physik und Astrophysik and Würzburg-Dresden Cluster of Excellence
ct.qmat, Universität Würzburg, 97074 Würzburg, Germany Yogesh Singh
Department of Physical Sciences, Indian Institute of Science Education and
Research (IISER) Mohali, Knowledge City, Sector 81, Mohali 140306, India.
###### Abstract
LaRh3B2 crystallizes in a layered structure where Rh atoms form a perfect
Kagome lattice. The material shows superconductivity at $T_{c}\approx 2.6$ K
and no signature for density wave instabilities. We report our measurements of
electronic transport, magnetization, and heat capacity in the normal and
superconducting state, and derive normal and superconducting parameters. From
first principles calculations of the electronic band structure, we identify
all features of Kagome bands predominantly formed by the Rh $d$ orbitals: a
flat band, Dirac cones, and van Hove singularities. The calculation of the
phonon dispersions and electron-phonon coupling suggests a strong similarity
between LaRh3B2 and AV3Sb5 (A=K,Cs,Rb). For LaRh3B2, it matches quantitatively
with the observed $T_{c}$, supporting a conventional phonon mediated pairing
mechanism. By comparison to the $A$V3Sb5 family, we conjecture a reduced
importance of electron correlations in LaRh3B2.
## I Introduction
The kagome lattice has long been a playground for novel physics in condensed
matter. Insulating kagome lattice realizations with localized magnetic moments
are platforms to explore the effects of geometric magnetic frustration. The
quantum spin liquid (QSL) ground state in the mineral Herbertsmithite
ZnCu3(OH)6Cl2 is a prime example of this behaviour Fu2015 ; Han2012 .
Insulating quantum magnets with a kagome network in higher dimensions have
also shown novel frustrated magnetism and QSL behaviour Balz ; Okamoto2007 ;
Singh2013 . More recently, metallic kagome lattice materials have been brought into focus due to the prediction that the electronic structure of electrons on a kagome lattice might allow access to correlated Dirac cones or van Hove singularities near the Fermi energy Kiesel2013 ; Wang2013 ; Mazin2014 . The two-dimensional kagome lattice has features in its band structure which provide, even in the itinerant limit, the opportunity of marrying non-trivial topology and strong electron correlations. The search for realizations of a material with an ideal isolated kagome lattice is therefore a fundamentally important quest. In recent years a few families of metallic materials
possessing a kagome lattice have indeed been reported or theoretically
predicted. These include the Herbertsmithite related material Ga/ScCu3(OH)6Cl2
Mazin2014 , the magnetic kagome metals CoSn and FeSn Mingu2020 ; Ming2020 ,
and the ferromagnetic kagome metal YMn6Sn6 Li2021 . Most recently, the
$A$V3Sb5 ($A=$ K, Rb, Cs) family of materials have been discovered and shown
to host a perfect kagome network of V ions Ortiz2019 ; Neupert2022 . Evidence
for electron correlations and non trivial topology in these materials emerges
from the discovery of charge density waves, superconductivity, anomalous Hall
effect, and multiple van Hove singularities near the Fermi energy Ortiz2019
; Ortiz2020 ; Ortiz2021 ; Yang2020 .
Another family of materials possessing the kagome lattice is $R$T${}_{3}X_{2}$
($R=$ Lanthanide, $T=$ $4$d or $5$d transition metal, $X=$ Si, B). These
materials were discovered in the 1980s Ku1980 ; Barz ; Vandenberg and several
of them were reported to show superconductivity with $T_{\rm c}$s between $1$
K to $\sim 7$ K Ku1980 ; Barz ; Vandenberg ; Malik ; Athreya ; Rauchschwalbe .
However, most of these studies were not made in the context of the connection
of properties with the underlying kagome lattice. Only recently LaRu3Si2,
which has the highest $T_{\rm c}=7$ K in this family of materials, has been
studied in relation to the kagome lattice, and several unconventional
properties have been reported possibly arising from electron correlations from
the flat bands Li2011 ; Li2012 ; Li2016 ; Mielke2021 . Materials in the
$R$T${}_{3}X_{2}$ family thus form another promising platform to study the
kagome related features in the band structure, and their interplay with
superconductivity.
We report on the electronic structure, phonon profile, and superconducting
properties of LaRh3B2 which has previously been reported to show
superconductivity at low temperatures, where the reported superconducting
$T_{\rm c}$ ranges from $<1.2$ to $2.8$ K Ku1980 ; Malik . Our electronic band
structure calculations reveal a flat band above the Fermi energy, and van Hove
singularities and Dirac cones at several locations in the Brillouin zone
including close to the Fermi energy $E_{\rm F}$. We find that the $E_{\rm F}$
is located at the top of a sharp peak in the density of states (DOS). We use
this to address the extreme sample dependence of the superconducting $T_{\rm
c}$. The superconductivity is found to be of conventional weak coupling type.
This is supported by estimations of the $T_{c}$ from phonon calculations and
the estimate of electron-phonon coupling. The van Hove singularities in the
band structure are found to be located a few eVs away from $E_{\rm F}$, which
is large against the characteristic ordering scales and thus explains why
these materials do not show signals of correlation-induced phenomena such as
charge density waves or other instabilities. This is also supported by the
phonon calculations which stress the absence of any imaginary frequency mode.
In addition, we observe anomalous temperature dependencies of the magnetic
susceptibility and heat capacity, and a slightly enhanced Sommerfeld
coefficient, which we argue arise from the narrow band in the DOS near $E_{\rm F}$. In comparison with the $A$V3Sb5 materials, our results point to a reduced importance of electronic correlations in the LaRh3B2 material.
## II Methods
Polycrystalline samples of LaRh3B2 were synthesized by arc-melting
stoichiometric ratios of La (3N, Alfa Aesar), Rh (5N, Alfa Aesar) and B (6N,
Alfa Aesar). The melted buttons were flipped over and melted $5$–$10$ times to
promote homogeneity. Powder X-ray diffraction (PXRD) on a Bruker D8 Advance
diffractometer system with Cu-K$\alpha$ radiation was used to determine the
phase purity of the arc-melted LaRh3B2 sample. The relative stoichiometry of
La and Rh was confirmed using energy dispersive spectroscopy using a scanning
electron microscope. The dc magnetic susceptibility $\chi$, heat capacity $C$,
and electrical transport were measured using a Quantum Design Physical
Property Measurement System equipped with a He3 insert. To theoretically
simulate the electronic structure of LaRh3B2, we performed first-principles
density functional theory (DFT) calculations using the Vienna Ab initio
simulation package (VASP) Kresse93 ; Kresse94 ; Kresse96a ; Kresse96b . We
considered the projector-augmented wave (PAW) pseudo potential with exchange-
correlation functional of generalized gradient approximation (GGA) of Perdew-
Burke-Ernzerhof Kresse ; Perdew . Starting with the experimental structure,
the lattice relaxation was performed to optimize the crystal structure by
using variable cell relaxation. We adopted a $12\times 12\times 12$ k mesh for
the first Brillouin zone. We have used an energy cut-off of $450$ eV for the
plane wave basis. The convergence criteria for energy and force are set to
$10^{-6}$ eV and $0.02$ eV/Å, respectively. In the DFT calculation, spin-orbit
coupling was not included. However, we have used a scalar relativistic potential, which takes scalar relativistic effects into account. Phonon calculations have
been performed using density functional perturbation theory, as implemented in
Quantum Espresso Giannozzi2020 ; Giannozzi2009 ; Giannozzi . Exchange and
correlation effects were included using the generalized gradient approximation
(GGA) with the Perdew-Burke-Ernzerhof (PBE) functional Perdew ; the
pseudopotentials are norm-conserving, with core correction, and scalar
relativistic Hamann .
Self-consistent calculations of the previously relaxed unit cell have been performed with an 8$\times$8$\times$12 $k$-grid. The kinetic energy cutoff for the wavefunctions is 100 Ry, while the cutoff for the charge density is 400 Ry. The convergence thresholds for ionic minimization and electronic self-consistency are set to $10^{-15}$, as is the self-consistency threshold for the phonon calculations, which use a $q$-grid of 4$\times$4$\times$2.
Non-self consistent calculations for the density of states have been performed
with a 60$\times$60$\times$48 $k$-grid.
Finally, the electron-phonon interaction is computed via an interpolation over
the Brillouin Zone Wierzbowska .
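The text does not spell out here which expression is used to convert the computed coupling into a $T_{c}$ estimate; a common route for conventional superconductors (our assumption, shown for illustration) is the Allen-Dynes-modified McMillan formula:

import math

def mcmillan_tc(lam, omega_log, mu_star=0.13):
    # Allen-Dynes / McMillan estimate: lam is the electron-phonon coupling,
    # omega_log the logarithmically averaged phonon frequency in kelvin,
    # mu_star the Coulomb pseudopotential
    return (omega_log / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

# illustrative weak-coupling values (not the paper's computed numbers):
print(mcmillan_tc(lam=0.55, omega_log=150.0))   # a few kelvin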
Figure 1: (Color online) Powder x-ray diffraction and results of refinement.
Figure 2: (Color online) (a) A schematic of the crystal structure of LaRh3B2
viewed perpendicular to the crystallographic $c$-axis showing the layered
nature of the structure with Rh atomic planes separated along the $c$-axis by
planes made up of La and B atoms. (b) Viewed along the $c$-axis, the Rh atoms
form an undistorted kagome lattice.
## III Structure
LaRh3B2 crystallizes in a hexagonal structure with space group $P6/mmm$. All
atomic positions are fixed by symmetry, so the only variable structural
parameters are the lattice constants. The
PXRD pattern is shown in Fig. 1. It confirmed that the synthesized material is
single phase, and a refinement of the powder pattern, shown in Fig. 1, gave
lattice parameters $a=5.486$ Å and $c=3.136$ Å. A range of values for the
lattice parameters has been reported in the literature, and our values fall
within this range Ku1980 . We will connect the unit cell parameters with the
electronic properties later. A
schematic of the crystal structure of LaRh3B2 is shown in Fig. 2. The
structure is made up of layers of Rh planes separated by planes of La and B
stacked along the $c$-axis, as shown in Fig. 2(a). The arrangement of the Rh
atoms within the Rh-planes is a perfect kagome lattice as shown in Fig. 2(b).
These materials therefore have the structural ingredients to show electronic
structure features expected for a kagome metal. It must be noted, however,
that the short $c$-axis means that coupling between the kagome planes may
be significant.
## IV Results
### IV.1 Electronic Band Structure
Figure 3: (Color online) (a) The electronic structure of LaRh3B2 along high
symmetry directions in $k$-space. The boxes highlight some interesting
features discussed in the main text. (b) The total and partial density of
states as a function of energy measured from the Fermi energy.
Figure 3(a) shows the electronic band structure for LaRh3B2 along some high
symmetry directions in the Brillouin zone. It is evident that several bands
cross the Fermi level, confirming that LaRh3B2 is a metal. The total and
partial density of states (DOS) are shown in Fig. 3(b). The Fermi level
($E_{F}$) is situated near the top of a very narrow band, resulting in a fairly
large DOS at $E_{F}$ of $6.6$ states/eV. From the partial DOS it is clear that
the majority contribution to the total DOS comes from the Rh $4d$ orbitals,
while La and B contribute very little to the total DOS at $E_{F}$. The
narrow band at $E_{F}$ leads to a strong sensitivity of the superconducting
$T_{c}$ on the unit cell size and to other anomalous physical properties as we
will discuss later.
We now turn to the novel features of the band structure arising from the
kagome Rh planes. As can be seen in Fig. 3(a), we observe a flat band (FB) in
the $\Gamma-M-K-\Gamma$ direction about $0.4$ eV above $E_{F}$. This flat band
is separate from any other bands. Another series of disconnected flat bands
is observed along the $\Gamma-A$ direction about $0.75$ eV above $E_{F}$. In
addition to these flat portions of the electronic dispersion, Dirac cones (DC)
are observed at several locations in the band structure. There are Dirac bands
$140$ meV below and $2.75$ eV above $E_{F}$ at $H$ and a Dirac cone about $1$
eV below $E_{F}$ along the $M-K$ direction in the BZ. We also identify van
Hove (VH) singularities located symmetrically above and below the Dirac cone
at 2.75 eV. Thus the band structure of LaRh3B2 possesses the predicted
features of the kagome lattice band structure near $E_{F}$ with modifications
arising most likely from the three-dimensional nature of the material.
### IV.2 Physical Properties
Figure 4 shows the electrical, magnetic, and thermal properties of LaRh3B2 in
the normal and superconducting states. Figure 4 (a) shows the magnetic
susceptibility $\chi$ versus temperature $T$ between $2$ K and $300$ K in an
applied magnetic field of $H=2$ T. At low temperatures, small amounts of
magnetic impurities lead to a Curie-like upturn. The $\chi$ is temperature
dependent over the whole temperature range, in contrast to the $T$-independent
$\chi$ expected for a Pauli paramagnetic metal. This $T$ dependence arises
because $E_{F}$ is situated on a narrow peak in the DOS: a change in
temperature changes the DOS at $E_{F}$, leading to a $T$-dependent Pauli
paramagnetic susceptibility. To support this idea, the $\chi(T)$ over the full temperature
range was fit with the expression
$\chi(T)=\chi_{o}[1-(T/T_{E})^{2}]+C/(T-\theta)$, where the first term
represents the $T$ dependent Pauli paramagnetic susceptibility and the second
term represents the contribution from the small amounts of magnetic impurities
which give rise to the Curie like upturn in $\chi(T)$ at the lowest
temperatures. The fitting parameters are $\chi_{o}$ the temperature
independent average Pauli paramagnetic susceptibility, $T_{E}$ which is a
phenomenological parameter related to the Fermi energy, $C$ which is the Curie
constant of the impurities, and $\theta$ which is the Weiss temperature
representing any interactions between the magnetic impurities. A very good fit
with the above expression was obtained and is shown as the solid curve through
the data in Fig. 4 (a). The fit parameters obtained were
$\chi_{o}=11.8(2)\times 10^{-5}$ G cm$^{3}$/mol, $T_{E}=860(7)$ K, $C=0.0010(4)$ G
cm$^{3}$ K/mol, and $\theta=-5.5(1)$ K. This value of $C$ is equivalent to $0.25\%$
of $S=1/2$ impurities, which is quite small.
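Such a two-component fit is straightforward to reproduce with a standard nonlinear least-squares routine. The following minimal Python sketch uses scipy.optimize.curve_fit; the data arrays are synthetic placeholders standing in for the measured $\chi(T)$, and the initial guesses are ours, not taken from the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-component model: T-dependent Pauli term plus Curie-Weiss impurity term.
def chi_model(T, chi0, TE, C, theta):
    return chi0 * (1.0 - (T / TE) ** 2) + C / (T - theta)

# T (K) and chi (G cm^3/mol) would be the measured arrays; synthetic
# placeholders generated from the reported parameters are used here.
T = np.linspace(2.0, 300.0, 150)
chi = chi_model(T, 11.8e-5, 860.0, 0.0010, -5.5)

p0 = [1e-4, 800.0, 1e-3, -5.0]            # initial guesses (ours)
popt, pcov = curve_fit(chi_model, T, chi, p0=p0)
perr = np.sqrt(np.diag(pcov))             # one-sigma parameter uncertainties
print(dict(zip(["chi0", "TE", "C", "theta"], popt)))
```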
Figure 4: (Color online) (a) The normal state magnetic susceptibility at $H=2$
T, (b) resistivity versus temperature at zero field, with the inset showing the
superconducting transition at various fields, (c) dimensionless magnetic
susceptibility ($4\pi\chi$) in the superconducting state at various fields,
(d) magnetization data in the superconducting state, (e) the electronic
contribution to the zero field specific heat $C_{\rm el}/T$ versus $T$, and
(f) the $H$-$T$ phase diagram showing the lower and upper critical fields.
Figure 4 (b) shows the electrical resistivity $\rho$ in zero field between $2$
K and $300$ K. We observe metallic behaviour with a residual resistivity ratio
RRR $=\rho(300{\rm K})/\rho(2{\rm K})\approx 5$. The inset in Fig. 4 (b) shows
the $\rho(T)$ data below $T=4$ K measured in various applied fields. The sharp
drop to zero resistance below $T_{c}\sim 2.6$ K in zero field signals the
onset of superconductivity in LaRh3B2. Further evidence of a superconducting
state is obtained from the diamagnetism observed in magnetic measurements
shown in Fig. 4 (c) and (d). From Fig. 4 (c) we can see that the value of
$4\pi\chi$ in the superconducting state is greater than $-1$, suggesting
demagnetization effects due to the irregular shape of the sample. The
magnetization data in Fig. 4 (d) show behaviour typical of a Type-II
superconductor. The bulk nature of superconductivity is confirmed from heat
capacity measurements. Figure 4 (e) shows the electronic heat capacity $C_{\rm
el}$ divided by $T$ versus $T$. The $C_{\rm el}$ is obtained by subtracting a
lattice term ($\sim T^{3}$) from the total heat capacity. It is of interest to
note that $C_{\rm el}/T$ in the normal state would be expected to be $T$
independent. However, this holds only for $T\leq 4.5$ K, while a strong $T$
dependence of $C_{\rm el}/T$ is observed above these temperatures. This
suggests that the lattice contribution is not purely $T^{3}$ and that
anharmonic terms may also contribute to the lattice heat capacity. A sharp
anomaly at the onset of the superconducting transition can clearly be seen in
Fig. 4 (e). To evaluate the magnitude of the jump in heat capacity at the
transition and to obtain an alternate estimate of the bulk superconducting
$T_{c}$, we use an equal entropy construction. The result is shown as the
solid curve through the data near $T_{c}$. This gives a value $T_{c}\approx
2.5$ K. The normal state data above $T_{c}$ can be extrapolated to $T=0$ to
give an estimate of the Sommerfeld coefficient $\gamma_{\rm n}=11.8$ mJ/mol
K$^{2}$. With this we obtain the jump height at $T_{c}$ to be $\Delta C_{\rm
el}/\gamma_{\rm n}T_{c}\approx 1.2$, which is smaller than the value $1.43$
expected for a weak coupling single gap superconductor. The $C_{\rm el}$ data
at the lowest temperatures seem to extrapolate to a finite value suggesting
some residual contribution from normal electrons. The $C_{\rm el}$ data below
$T=0.8$ K were fit by the expression
$C_{\rm el}=\gamma_{\rm res}T+A\exp(-\Delta/k_{B}T)$,
assuming a fully gapped superconducting state. The
$\gamma_{\rm res}T$ term represents the contribution from any non-
superconducting fraction of electrons. An excellent fit, shown in Fig. 4 (e),
was obtained with the following values for the parameters $\gamma_{\rm
res}=1.7$ mJ/mol K$^{2}$ and $\Delta=6$ K. This value of $\gamma_{\rm res}$
suggests that $\approx 14\%$ electrons do not participate in the
superconductivity. So we must revise our estimate of $\Delta C_{\rm el}/\gamma
T_{c}$ using $\gamma=\gamma_{n}-\gamma_{\rm res}$. This gives the value
$\Delta C_{\rm el}/\gamma T_{c}\approx 1.44$ which is close to the value
expected from a weak coupling single gap superconductor. These results suggest
that LaRh3B2 is a weak coupling single-gap Type-II superconductor.
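The fully gapped low-temperature fit can be reproduced in the same spirit as the susceptibility fit above; in the sketch below the prefactor $A$ and the synthetic data are our placeholders, with only $\gamma_{\rm res}$ and $\Delta$ anchored to the values quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fully gapped model: residual normal-electron term plus activated term,
# with Delta expressed in kelvin so that k_B = 1.
def c_el(T, gamma_res, A, Delta):
    return gamma_res * T + A * np.exp(-Delta / T)

T = np.linspace(0.1, 0.8, 40)             # fit window below 0.8 K
C = c_el(T, 1.7, 50.0, 6.0)               # placeholder for the measured data
popt, _ = curve_fit(c_el, T, C, p0=[1.5, 40.0, 5.0])
print("gamma_res = %.2f mJ/mol K^2, Delta = %.1f K" % (popt[0], popt[2]))
```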
From various measurements in finite magnetic field we can track the $T_{c}$ as
a function of the field $H$. The $H$-$T$ diagram obtained from the various
measurements is shown in Fig. 4 (f), where both the lower critical field
$H_{c1}$ and the upper critical field $H_{c2}$ are shown. We observe that the
$H_{c2}$ data from all measurements except resistivity agree with each other,
while the critical fields measured from $\rho$ are consistently higher than
the values measured from other bulk probes like magnetization and heat
capacity. Such observations have been reported previously in some materials
and have been linked to surface superconductivity Zeinali ; Pradip . It has
been shown that the critical field for surface superconductivity is $\approx
1.69H_{c2}$, where $H_{c2}$ is the bulk critical field. We therefore also plot
in Fig. 4 (f) the critical field obtained from resistivity measurements
divided by $1.69$; it matches the critical field values obtained from the
other bulk measurements. We will thus treat the scaled critical field from
resistivity measurements as the true bulk critical field $H_{c2}$. The
$H_{c2}$ vs $T$ plot shows an upward curvature over the whole temperature
range. This is unusual and inconsistent with observations for most
conventional superconductors.
To learn about the strength of the electron-phonon coupling we make an
estimate of the electron-phonon coupling constant $\lambda_{ep}$ using
McMillan’s formula, which relates the superconducting transition temperature
$T_{c}$ to $\lambda_{ep}$, the Debye temperature $\theta_{D}$, and the Coulomb
repulsion constant $\mu^{*}$:
$T_{c}=\frac{\Theta_{D}}{1.45}\exp\left[-\frac{1.04\left(1+\lambda_{\mathrm{ep}}\right)}{\lambda_{\mathrm{ep}}-\mu^{*}\left(1+0.62\lambda_{\mathrm{ep}}\right)}\right],$
which can be inverted to give $\lambda_{ep}$ in terms of $T_{c}$, $\theta_{D}$
and $\mu^{*}$ as
$\lambda_{\mathrm{ep}}=\frac{1.04+\mu^{*}\ln\left(\frac{\Theta_{D}}{1.45T_{c}}\right)}{\left(1-0.62\mu^{*}\right)\ln\left(\frac{\Theta_{D}}{1.45T_{c}}\right)-1.04}.$
From the heat capacity measurements, we had obtained $\theta_{D}$ = 518 K and
using $T_{c}$ = 2.5 K, we get $\lambda_{ep}=0.43$ and 0.52 for $\mu^{*}=0.10$
and 0.15, respectively. These values of $\lambda_{ep}$ suggest moderate
electron-phonon coupling in LaRh3B2. This is supported by the estimates of the
$\lambda_{ep}$ made from our phonon calculations which will be discussed
later.
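For reference, the inverted McMillan formula is simple to evaluate numerically; the short Python sketch below is our illustration and reproduces values close to the $\lambda_{ep}$ estimates quoted above.

```python
import numpy as np

def lambda_ep(Tc, theta_D, mu_star):
    """Invert McMillan's formula for the electron-phonon coupling constant."""
    L = np.log(theta_D / (1.45 * Tc))
    return (1.04 + mu_star * L) / ((1.0 - 0.62 * mu_star) * L - 1.04)

for mu in (0.10, 0.15):
    # prints values close to the 0.43 and 0.52 quoted above
    print("mu* = %.2f: lambda_ep = %.3f" % (mu, lambda_ep(2.5, 518.0, mu)))
```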
We now present our estimation of various superconducting parameters using
expressions previously collected in Refs. Singh2007, ; Singh2010, . An
estimate of the $T=0$ upper critical field $H_{c2}(0)$ was made by first
making a linear extrapolation of the data near $T_{c}$ to give the slope
${dH_{c2}\over dT}|_{T_{c}}=-511$ Oe/K. This linear slope can then be used to
get an estimate of $H_{c2}(0)$ using the Werthamer-Helfand-Hohenberg (WHH)
formula for the clean limit, $H_{c2}(0)=-0.693T_{c}{dH_{c2}\over
dT}|_{T_{c}}=920$ Oe. From the value of $H_{c2}$ we can now estimate the
coherence length $\xi$ through the expression $H_{c2}=\phi_{0}/2\pi\xi^{2}$,
where $\phi_{0}=hc/2e=2.068\times 10^{-7}$ G cm$^{2}$ is the flux quantum. Using
$H_{c2}(0)=920$ Oe obtained above, we estimate $\xi(0)=60$ nm. At $T=2.3$ K
near $T_{c}$, where $H_{c2}=250$ Oe, we get $\xi=114$ nm. We have collected
the various normal and superconducting state parameters in Table 1.
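The WHH and coherence-length estimates follow from a few lines of arithmetic; the sketch below is our illustration (using the resistive $T_{c}=2.6$ K) and reproduces the numbers quoted above.

```python
import math

phi0 = 2.068e-7               # flux quantum, G cm^2
Tc = 2.6                      # K, resistive onset
slope = -511.0                # dHc2/dT at Tc, Oe/K

Hc2_0 = -0.693 * Tc * slope                            # WHH estimate, ~920 Oe
xi_0 = math.sqrt(phi0 / (2 * math.pi * Hc2_0)) * 1e7   # cm -> nm, ~60 nm
xi_2K = math.sqrt(phi0 / (2 * math.pi * 250.0)) * 1e7  # Hc2 = 250 Oe near Tc, ~114 nm
print(Hc2_0, xi_0, xi_2K)
```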
Table 1: Normal and superconducting state parameters for LaRh3B2. Here $\gamma$ is the Sommerfeld coefficient, $\beta$ is the coefficient of the $T^{3}$ term in the low temperature heat capacity, $\theta_{\rm D}$ is the Debye temperature, $n$ is the electron density, $\xi$ is the superconducting coherence length, $\lambda$ is the penetration depth, $l$ is the electron mean free path, and $v_{\rm F}$ is the Fermi velocity.

Parameter | Value
---|---
RRR | $\approx 5$
$\gamma$ (mJ/mol K$^{2}$) | 11.8
$\beta$ (mJ/mol K$^{4}$) | 0.084
$\theta_{\rm D}$ (K) | 520
$n$ (cm$^{-3}$) | $0.97\times 10^{23}$
$T_{\rm c}$ (K) | 2.6
$\xi_{2{\rm K}}$ (nm) | 114
$\xi_{0{\rm K}}$ (nm) | 60
$\lambda_{0{\rm K}}$ (nm) | 5.4
$l_{4{\rm K}}$ (nm) | 43
$v_{\rm F}$ (cm/s) | $3.5\times 10^{8}$
We now address the large variation in the $T_{\rm c}$ of LaRh3B2 which has been
reported in the literature Ku1980 ; Malik . Figure 5 shows the resistivity of
two samples of LaRh3B2 prepared with the same nominal ratios of starting
materials. A refinement of their powder x-ray pattern gave lattice parameters
that are slightly different. The lattice parameters for sample 1 (S1) are
$a=5.484$ Å and $c=3.139$ Å while those for sample 2 (S2) are $a=5.486$ Å and
$c=3.136$ Å. Thus S1 has slightly smaller in-plane lattice parameters
while its $c$-axis is longer, indicating that in this sample the kagome planes
are contracted while the separation between the kagome planes increases. S2 on
the other hand has a larger kagome plane but the planes are separated by a
smaller distance along the $c$-axis. From Fig. 5 we see that the electrical
transport properties are sensitive to these small changes. S1 has a larger
residual resistivity ratio RRR $=10$ but also a larger residual resistivity
$\rho_{0}=26~\mu\Omega$ cm. S2, on the other hand, has a smaller RRR $=5$ but
also a smaller residual resistivity $\rho_{0}=15~\mu\Omega$ cm. From the inset it
can be seen that while S2 has a superconductivity onset at $T_{c}=2.6$ K, for
S1 the onset is $T_{\rm c}=2.05$ K, more than $0.5$ K lower. This large
variation in $T_{\rm c}$ with the unit cell size of LaRh3B2 is also found in
previous reports Ku1980 ; Malik . For example, superconductivity with a
$T_{\rm c}=2.6$ K was reported for a LaRh3B2 sample with lattice parameters
$a=5.480$ Å and $c=3.137$ Å Ku1980 , a $T_{\rm c}=2.2$ K was reported for a
sample with lattice parameters $a=5.483$ Å and $c=3.142$ Å, while no
superconductivity down to $1.2$ K was found for a sample with lattice
parameters $a=5.512$ Å and $c=3.115$ Å Malik .
We can address this variation in $T_{\rm c}$ for different samples using our
DFT calculations. Our calculations have shown that the $E_{\rm F}$ lies near
the top of a narrow band in the DOS. The sensitivity of $T_{\rm c}$ most
likely originates from changes in the DOS at $E_{F}$ caused by small shifts of
$E_{F}$, either through pressure effects (as evidenced by the differences in
unit cell sizes) or through differences in the electron densities (from minute
variations in stoichiometry) between the samples. Because $E_{F}$ sits on top
of a narrow band, even a slight shift of $E_{F}$ leads to large changes in the
DOS at $E_{F}$.
Figure 5: (Color online) Resistivity versus temperature at zero field for two
LaRh3B2 samples. Inset shows the variation in the superconducting transition
temperature.
Our phonon calculations support the picture of weak-coupling phonon-mediated
superconductivity, and also indicate the reason for the absence of a
correlation-induced CDW instability in LaRh3B2. The phonon dispersion of
LaRh3B2 is shown in Figs. 6(a) and (b). It does not exhibit any
imaginary frequency mode; this is a sign of dynamical stability, consistent
with the absence of experimental signatures of charge density wave states. Low-
frequency phonon modes mostly originate from La atoms, while intermediate
frequencies are essentially due to Rh atoms; the manifold of weakly dispersing
bands around 120 cm$^{-1}$ is then due to the kagome network. Finally, B atoms
contribute to the high-frequency modes. We computed the electron-phonon
coupling to be $\lambda_{\mathrm{e-ph}}\approx 0.55$–$0.65$. This suggests that
LaRh3B2 is a weak-to-moderate coupling superconductor. The McMillan formula
was then used to estimate the superconducting critical temperature $T_{c}$
McMillan ; Carbotte :
$T_{c}=\frac{\omega_{\log}}{1.2}\exp\left[\frac{-1.04(1+\lambda)}{\lambda(1-0.62\mu^{*})-\mu^{*}}\right]$
(1)
with $\omega_{\log}$ being related to the Eliashberg function $\alpha^{2}F(\omega)$:
$\omega_{\log}=\exp\left[\frac{2}{\lambda}\int\frac{d\omega}{\omega}\,\alpha^{2}F(\omega)\log\omega\right]$
(2)
For values of the Coulomb pseudopotential $\mu^{*}$ in the typical range
$0.1$–$0.2$, we obtain values of $T_{c}$ in fair agreement with the
experimental results. Specifically, $T_{c}\approx 2.6$ K is obtained for
$\mu^{*}=0.17$ (Fig. 6(d)).
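Equation (1) can be evaluated as a function of $\mu^{*}$ once $\lambda$ and $\omega_{\log}$ are fixed. In the sketch below, $\lambda=0.6$ is the midpoint of the computed range, and the value of $\omega_{\log}$ is illustrative, chosen by us so that $\mu^{*}=0.17$ reproduces $T_{c}\approx 2.6$ K as in Fig. 6(d).

```python
import numpy as np

def tc_mcmillan(omega_log, lam, mu_star):
    """McMillan Tc, Eq. (1), with omega_log expressed in kelvin."""
    return (omega_log / 1.2) * np.exp(-1.04 * (1.0 + lam)
                                      / (lam * (1.0 - 0.62 * mu_star) - mu_star))

lam = 0.6          # midpoint of the computed 0.55-0.65 range
omega_log = 292.0  # K; illustrative, chosen so mu* = 0.17 gives Tc ~ 2.6 K
for mu in np.arange(0.10, 0.21, 0.01):
    print("mu* = %.2f -> Tc = %.2f K" % (mu, tc_mcmillan(omega_log, lam, mu)))
```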
Figure 6: a) Phonon dispersion along high-symmetry lines; b) Corresponding
density of states; c) Eliashberg function; d) Superconducting critical
temperature $T_{c}$ as a function of the Coulomb pseudo-potential $\mu^{*}$.
## V Summary and Discussion
We report on the electronic structure, phonon spectrum and physical properties
of a kagome lattice superconductor LaRh3B2. The structure of LaRh3B2 is built
up of kagome planes of Rh stacked along the $c$-axis with La-B planes
separating the kagome planes. The electronic structure contains all the
features expected for a 2D kagome lattice, including a flat band, Dirac bands,
and van Hove singularities at various positions in the Brillouin zone, in
particular near $E_{F}$. This is qualitatively consistent with the band
structure observed for the $A$V3Sb5 kagome metals. In contrast with the
$A$V3Sb5 materials, however, we did not observe signatures of strong
electronic correlations in LaRh3B2. The van Hove singularities in the
electronic band structure are situated further away from $E_{F}$ than in
$A$V3Sb5, which is a fermiological reason why one would not expect charge
ordering or density-wave-like instabilities of the kind reported for $A$V3Sb5.
The superconductivity in LaRh3B2 seems conventional, and there is no
experimental evidence for charge density wave instabilities as reported for
the $A$V3Sb5 materials. The majority contribution to the DOS at $E_{F}$ in
LaRh3B2 derives from Rh $4$d bands. This suggests that the role of electronic
correlations is weakened in LaRh3B2 compared to the family of $A$V3Sb5 kagome
metals, since the Rh $4d$ orbitals are less strongly-correlated than the V
$3d$ orbitals. Interestingly, the computed $\lambda_{\mathrm{e-ph}}$ for
LaRh3B2 is in good agreement with experimentally reported values of
$\lambda_{\mathrm{e-ph}}$ for the CsV3Sb5 compound Zhong which, together with
the phonon dispersions, suggests a similarity in the principal phonon sector
between LaRh3B2 and $A$V3Sb5. This suggests that a central difference between
LaRh3B2 and the by now more established kagome metals lies in the strength of
the electronic correlations, which is lower in LaRh3B2. This may explain the
absence of a CDW in the LaRh3B2 kagome metal, and points towards
phonon-mediated $s$-wave superconductivity. Given the large number of
materials in the $RT_{3}$B2 and $RT_{3}$Si2 families, the possibility of
tuning the Fermiology around $E_{F}$ presents an exciting direction for future
work.
_Acknowledgments.–_ We thank the X-ray facility at IISER Mohali. JS
acknowledges UGC-CSIR India for a fellowship. The phonon-DFT work was
supported by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) through Project-ID 258499086-SFB 1170 and by the Würzburg-Dresden
Cluster of Excellence on Complexity and Topology in Quantum Matter-ct.qmat
Project-ID 390858490-EXC 2147. The research leading to these results has
received funding from the European Union’s Horizon 2020 research and
innovation programme under the Marie Skłodowska-Curie Grant Agreement No.
897276. The authors acknowledge the Gauss Centre for Supercomputing e.V. for
providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz
Supercomputing Centre.
## References
* (1) M. Fu, T. Imai, T.-H. Han, and Y. S. Lee, Science 350, 655 (2015).
* (2) T.-H. Han, J. S. Helton, S. Chu, D. G. Nocera, J. A. Rodriguez-Rivera, C. Broholm, and Y. S. Lee, Nature (London) 492, 406 (2012).
* (3) C. Balz, B. Lake, J. Reuther, H. Luetkens, R. Schönemann, T. Herrmannsdörfer, Y. Singh, A. T. M. Nazmul Islam, E. M. Wheeler, J. A. Rodriguez-Rivera, T. Guidi, G. G. Simeoni, C. Baines, and H. Ryll, Nature Physics 12, 942 (2015).
* (4) Y. Okamoto, M. Nohara, H. Aruga-Katori, and H. Takagi, Phys. Rev. Lett. 99, 137207 (2007).
* (5) Y. Singh, Y. Tokiwa, J. Dong, and P. Gegenwart, Phys. Rev. B 88, 220413(R) (2013).
* (6) M. Kiesel, C. Platt, and R. Thomale, Phys. Rev. Lett. 110, 126405 (2013).
* (7) W. Wang, Z. Li, Y. Xiang, and Q. Wang, Phys. Rev. B 87, 115135 (2013).
* (8) I. I. Mazin, H. O. Jeschke, F. Lechermann, H. Lee, M. Fink, R. Thomale, and R. Valenti, Nature Communications 5, 4261 (2014).
* (9) M. Kang, S. Fang, L. Ye, H. C. Po, J. Denlinger, C. Jozwiak, A. Bostwick, E. Rotenberg, E. Kaxiras, J. G. Checkelsky, and R. Comin, Nat. Comm. 11, 4004 (2020).
* (10) M. Kang, et al. Nat. Mater. 19, 163 (2020).
* (11) M. Li, Q. Wang, G. Wang, Z. Yuan, W. Song, R. Lou, Z. Liu, Y. Huang, Z. Liu, H. Lei, Z. Yin, and S. Wang, Nat. Comm. 12, 3129 (2021).
* (12) B. R. Ortiz, L. C. Gomes, J. R. Morey, M. Winiarski, M. Bordelon, J. S. Mangum, I. W. H. Oswald, J. A. RodriguezRivera, J. R. Neilson, S. D. Wilson, E. Ertekin, T. M. McQueen, and E. S. Toberer, Phys. Rev. Materials 3, 094407 (2019).
* (13) T. Neupert, M. Denner, J. Yin, R. Thomale, and M. Z. Hasan, Nat. Phys. 18 137 (2022).
* (14) B. R. Ortiz, S. M. L. Teicher, Y. Hu, J. L. Zuo, P. M. Sarte, E. C. Schueller, A. M. M. Abeykoon, M. J. Krogstad, S. Rosenkranz, R. Osborn, R. Seshadri, L. Balents, J. He, and S. D. Wilson, Phys. Rev. Lett. 125, 247002 (2020).
* (15) B. R. Ortiz, P. M. Sarte, E. M. Kenney, M. J. Graf, S. M. L. Teicher , R. Seshadri, and S. D. Wilson, Phys. Rev. Materials 5, 034801 (2021).
* (16) S.-Y. Yang, Y. Wang, B. R. Ortiz, D. Liu, J. Gayles, E. Derunova, R. Gonzalez-Hernandez, L. Smejkal, Y. Chen, S. S. Parkin, S. D. Wilson, E. S. Toberer, T. Mcqueen, and M. N. Ali , Sci. Adv. 6, eabb6003 (2020).
* (17) H. C. Ku, C. P. Meisner, F. Acker, and D. C. Johnston, Solid State Communications, 35, 91 (1980).
* (18) H. Barz, Mater. Res. Bull. 15, 1489 (1980).
* (19) J. M. Vandenberg and H. Barz, Mater. Res. Bull. 15, 1493 (1980).
* (20) S. K. Malik, A. M. Umarji, G. K. Shenoy, A. T. Aldred, and D. G. Niarchos, Phys. Rev. B 32, 4742 (1985).
* (21) K. S. Athreya, L. S. Hausermann-Berg, R. N. Shelton, S. K. Malik, A. M. Umarji, and G. K. Shenoy, Phys. Lett. A 113, 330 (1985).
* (22) U. Rauchschwalbe, W. Lieke, F. Steglich, C. Godart, L. C. Gupta, and R. D. Parks, Phys. Rev. B 30, 444(R) (1984).
* (23) S. Li, B. Zeng, X. G. Wan, J. Tao, F. Han, H. Yang, Z. H. Wang, and H.-H. Wen, Phys. Rev. B 84, 214527 (2011).
* (24) S. Li, J. Tao, X. G. Wan, X. Ding, H. Yang, and H.-H. Wen Phys. Rev. B 86, 024513 (2012).
* (25) B. Li, S. Li, and H.-H. Wen, Phys. Rev. B 94, 094523 (2016).
* (26) C. Mielke, Y. Qin, J.-X. Yin, H. Nakamura, D. Das, K. Guo, R. Khasanov, J. Chang, Z. Q. Wang, S. Jia, S. Nakatsuji, A. Amato, H. Luetkens, G. Xu, M. Z. Hasan, and Z. Guguchia, Phys. Rev. Mat. 5, 034803 (2021).
* (27) G. Kresse and J. Hafner, Phys. Rev. B 47, 558(R) (1993).
* (28) G. Kresse and J. Hafner, Phys. Rev. B 49, 14251 (1994).
* (29) G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
* (30) G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
* (31) G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
* (32) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865–3868 (1996).
* (33) P. Giannozzi, O. Baseggio, P. Bonfà, D. Brunato, R. Car, I. Carnimeo, C. Cavazzoni, S. de Gironcoli, P. Delugas, F. F. Ruffino, A. Ferretti, N. Marzari, I. Timrov, A. Urru, and S. Baroni, The Journal of Chemical Physics 152, 154105 (2020),
* (34) P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, Journal of Physics: Condensed Matter 21, 395502 (2009).
* (35) P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. D. Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H-Y. Ko, A. Kokalj, E. Küçükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H-V. Nguyen, A. O. de-la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, Journal of Physics: Condensed Matter 29, 465901 (2017).
* (36) D. R. Hamann, Phys. Rev. B 88, 085117 (2013).
* (37) M. Wierzbowska, S. de Gironcoli, and P. Giannozzi, “Origins of low- and high-pressure discontinuities of tc in niobium,” arXiv:cond-mat/0504077 (2005).
* (38) A. Zeinali, T. Golod, and V. M. Krasnov, Phys. Rev. B 94, 214506 (2016).
* (39) Pradip Das, C. V. Tomy, S. S. Banerjee, H. Takeya, S. Ramakrishnan, and A. K. Grover, Phys. Rev. B 78, 214504 (2008).
* (40) Yogesh Singh, A. Niazi, M. D. Vannette, R. Prozorov, and D. C. Johnston, Phys. Rev. B 76, 214510 (2007).
* (41) Yogesh Singh, C. Martin, S. L. Bud’ko, A. Ellern, R. Prozorov, and D. C. Johnston, Phys. Rev. B 82, 144532 (2010).
* (42) W. L. McMillan, Phys. Rev. 167, 331 (1968).
* (43) J. P. Carbotte, Rev. Mod. Phys. 62, 1027 (1990).
* (44) Y. Zhong, S. Li, H. Liu, Y. Dong, Y. Arai, H. Li, Y. Shi, Z. Wang, S. Shin, H. N. Lee, H. Miao, T. Kondo, and K. Okazaki, arXiv:2207.02407 (2022).
# Pretraining the Noisy Channel Model for Task-Oriented Dialogue
Qi Liu2, Lei Yu1, Laura Rimell1, and Phil Blunsom1,2
1DeepMind, 2University of Oxford
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Work completed during an internship at DeepMind.
###### Abstract
Direct decoding for task-oriented dialogue is known to suffer from the
explaining-away effect, manifested in models that prefer short and generic
responses. Here we argue for the use of Bayes’ theorem to factorize the
dialogue task into two models, the distribution of the context given the
response, and the prior for the response itself. This approach, an
instantiation of the noisy channel model, both mitigates the explaining-away
effect and allows the principled incorporation of large pretrained models for
the response prior. We present extensive experiments showing that a noisy
channel model decodes better responses compared to direct decoding and that a
two-stage pretraining strategy, employing both open-domain and task-oriented
dialogue data, improves over randomly initialized models.
## 1 Introduction
Task-oriented dialogue agents provide a conversational interface to assist
users in accomplishing specific goals, such as finding a restaurant or booking
a hotel Seneff and Polifroni (2000); Raux et al. (2005); Budzianowski et al.
(2018); Peng et al. (2020a). Increasing demand from industry for natural
language assistants and scalable customer service solutions has recently been
driving a renaissance in the development of task-oriented dialogue models. In
addition, the specification of explicit dialogue agent goals, afforded by the
task-oriented paradigm, makes such research easier to ground and evaluate than
open-domain chatbots.
Current research on task-oriented dialogue is dominated by monolithic
sequence-to-sequence models that directly parameterize the conditional
distribution of the response given the prior dialogue context. However, this
monolithic approach conflates the task-specific and language-general aspects
of dialogue, and adversely favors short and generic responses Bao et al.
(2020) due to the explaining-away effect Klein and Manning (2002).
Here we pursue an alternative to the direct model. Employing Bayes’ rule
allows us to factorize the probability of the response given the context
$p(\mathcal{R}|\mathcal{C})$ into a language model $p(\mathcal{R})$ and a
context model $p(\mathcal{C}|\mathcal{R})$. (Here we abstract away from the
prediction of belief states and dialogue acts, which also form part of our
generative model; see Section 3 for details.) Within natural language
processing (NLP), this approach is traditionally known as the noisy channel
model Shannon (1948), and has recently seen renewed interest with its
successful application to neural machine translation Yu et al. (2017, 2020);
Yee et al. (2019).
We hypothesize that the noisy channel reformulation is advantageous for
dialogue because the factorization enables each sub-module to specialize in a
dialogue sub-task. In particular, the context conditional model can help to
discount short and generic responses and mitigate the explaining-away effect,
while the language model helps ensure that responses are natural. We find that
a noisy channel model with the same number of parameters as a direct model
achieves better accuracy on three task-oriented dialogue datasets. Moreover, a
larger noisy channel model can be trained with the same hardware, by training
the sub-modules separately, yielding additional improvements.
It has become common in recent years to pretrain dialogue models on large text
data, either general text Peng et al. (2020b); Budzianowski and Vulić (2019);
Wu et al. (2020a) or dialogue-structured data Roller et al. (2020); Adiwardana
et al. (2020), such as tweets and Reddit posts. We utilise a similar strategy
with Reddit data and find that the benefits of pretraining to the noisy
channel model are similar to those for the direct model. Further, we evaluate
transfer across task-oriented dialogue datasets by implementing a second
pretraining stage using Taskmaster Byrne et al. (2019) and Schema-Guided
Dialogue Rastogi et al. (2020) as training data, before fine-tuning on our
final tasks.
We evaluate the algorithm on three datasets, MultiWOZ 2.0 Budzianowski et al.
(2018), CamRest676 Wen et al. (2017a) and SMCalFlow Andreas et al. (2020),
demonstrating that the noisy channel approach is robust to different dialogue
schema annotations used across datasets. Further analysis demonstrates that
the noisy channel models can decode responses with similar lengths and Zipf
scores compared to ground-truth responses and reduce the likelihood of falling
into repetition loops Holtzman et al. (2019).
## 2 A Seq-to-Seq Dialogue Model
Figure 1: The data flow of one turn in a task-oriented dialogue for train
booking from MultiWOZ.
In this section, we introduce a discriminative sequence-to-sequence model for
task-oriented dialogue. The traditional sequence of steps needed to produce a
system turn in a task-directed dialogue is shown in Figure 1, with an example
from MultiWOZ 2.0 Budzianowski et al. (2018). Given a dialogue context
containing previous user and system utterances, the dialogue system first
predicts a belief state, consisting of a set of slot-value pairs (e.g.
destination: Cambridge), to capture user intent. To ground the system with
external information, the belief state can be converted into a database query
in order to retrieve relevant information, such as the number of matches and
booking information. Next, the system predicts a set of dialogue acts,
representing the abstract meaning of the proposed dialogue response Austin
(1975). Finally, a delexicalized dialogue response is generated, where slot
values are replaced by generic placeholders, such as value_time for a train
departure time, in order to reduce lexical variation. The delexicalized
response can be converted to a lexicalized response in post-processing by
filling in the slot values based on belief states and database information.
We use the MultiWOZ schema for illustration in Section 2 and 3, but our models
easily generalize to different schema annotations (e.g. datasets without
annotated dialogue acts Andreas et al. (2020)).
Since it is well known that pipelined models tend to suffer from error
propagation, many NLP tasks have been reformulated in recent years as end-to-
end text-to-text transformations Raffel et al. (2020); Brown et al. (2020).
State-of-the-art task-oriented dialogue systems have followed this approach
Hosseini-Asl et al. (2020); Peng et al. (2020b). We represent the example from
Figure 1 as follows, serializing turns and using special start and end tokens
to encapsulate each data field:

Context: [c] I am looking to … [/u] What is your … [/r] I’ll be leaving … [/u] [/c]
Belief: [b] [train] destination Cambridge, day Tuesday, arrive 12:30, departure London [/b]
Database: [db] [train] match 1, status not booked [/db]
Act: [a] [train] inform arrive, inform leave, offer reservation [/a]
Response: [r] There is a train that leaves at [value_time] and arrives at [value_time]. Should I book it? [/r]
Given this text representation, the direct discriminative approach models
$p(\mathcal{B},\mathcal{A},\mathcal{R}|\mathcal{C})$, where $\mathcal{C}$,
$\mathcal{B}$, $\mathcal{A}$, and $\mathcal{R}$ represent dialogue context,
belief state, dialogue act, and delexicalized response, respectively. (We do
not model the probabilities of the database state or the lexicalized response,
as these are deterministic given the belief state and delexicalized response,
respectively.) We use the serialized text of the dialogue context as input, and
the concatenation of belief state, dialogue act, and response as target
output, making the task amenable to the application of an autoregressive
sequence-to-sequence model. $\mathcal{B}$, $\mathcal{A}$ and $\mathcal{R}$ can
be generated sequentially with direct decoding methods, such as greedy
decoding and beam search. We use a sequence-to-sequence Transformer Vaswani et
al. (2017) to implement $p(\mathcal{B},\mathcal{A},\mathcal{R}|\mathcal{C})$.
This distribution will also be used to build the noisy channel model in
Section 3.
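To make the serialization concrete, the following minimal Python sketch builds the source and target strings from the individual fields. This is our illustration rather than the authors' code; the helper name serialize_turn and the example field contents are hypothetical.

```python
# Minimal sketch of the serialization above; not the authors' code.
def serialize_turn(context_utts, belief, database, act, response):
    # Context utterances alternate user/system, most recent last; user turns
    # close with [/u] and system turns with [/r], as in the format above.
    ctx = "[c] " + " ".join(
        utt + (" [/u]" if i % 2 == 0 else " [/r]")
        for i, utt in enumerate(context_utts)
    ) + " [/c]"
    target = (f"[b] {belief} [/b] [db] {database} [/db] "
              f"[a] {act} [/a] [r] {response} [/r]")
    return ctx, target  # (source sequence, target sequence)

src, tgt = serialize_turn(
    ["I am looking to book a train to Cambridge."],
    "[train] destination Cambridge, day Tuesday",
    "[train] match 1, status not booked",
    "[train] inform arrive, offer reservation",
    "There is a train that arrives at [value_time]. Should I book it?",
)
```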
## 3 Noisy Channel Model for Dialogue
While direct decoding is an effective approach for decoding belief states
Hosseini-Asl et al. (2020), it may be sub-optimal for generating responses.
First, it favors short and generic responses Bao et al. (2020). As a result,
the decoded responses are bland and lack diversity Li et al. (2016). Second,
it suffers from the explaining-away effect Klein and Manning (2002), where
inputs are “explained-away” by highly predictive output prefixes. For example,
if there is one hotel matching the user’s intent as encoded in the belief
state, the model is nevertheless prone to decoding “no” given the output
prefix “there is”, ignoring the input information.
In this work, we propose using the neural noisy channel model Yu et al. (2017)
to mitigate the above problems for response generation. Given an input
sequence $x$ and output sequence $y$, the noisy channel formulation Shannon
(1948) uses Bayes’ rule to rewrite the model $p(y|x)$ as
$\frac{p(x|y)p(y)}{p(x)}\ \propto\ p(x|y)p(y)$. It was originally applied to
speech recognition, where $p(y|x)$ is a conditional model of the source text
given a noisy observation. The channel model $p(x|y)$ estimates the
probability of the observation given the source, while $p(y)$ is an
unconditional language model (or source model), which can be trained on
unpaired data. More recently it has been applied to machine translation, where
$y$ is a translation of input text $x$.
Abstracting away from belief states and dialogue acts, for task-oriented
dialogue we want to estimate $p(\mathcal{R}|\mathcal{C})$, the probability of
a response given a context. The channel model $p(\mathcal{C}|\mathcal{R})$,
given a response, predicts a distribution over contexts which might have
elicited that response. The source model $p(\mathcal{R})$ is an unconditional
language model. In this extension of the noisy channel approach to task-
oriented dialogue, the “channel” can be understood as connecting dialogue
contexts with suitable responses.
For the full task, we develop a noisy channel model for
$p(\mathcal{B},\mathcal{A},\mathcal{R}|\mathcal{C})$. Using the chain rule,
$p(\mathcal{B},\mathcal{A},\mathcal{R}|\mathcal{C})=p(\mathcal{B}|\mathcal{C})\cdot
p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$. Following Hosseini-Asl et
al. Hosseini-Asl et al. (2020), we use the direct model described in Section 2
to parameterize $p(\mathcal{B}|\mathcal{C})$ and decode $\mathcal{B}$, which
our preliminary experiments confirmed to be advantageous.
We use the noisy channel formulation to parameterize
$p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$. Using Bayes’ Rule,
$p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})\ \propto\
p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})\cdot
p(\mathcal{A},\mathcal{R})$. The channel model
$p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})$ and source model
$p(\mathcal{A},\mathcal{R})$ are implemented as Transformers.
We choose to use the noisy channel formulation for decoding $\mathcal{A}$
based on preliminary experiments which showed improved overall accuracy over
direct decoding, possibly because poor dialogue act prediction by the direct
model led to worse quality responses. The serialized text of $\mathcal{A}$ and
$\mathcal{R}$ are concatenated during training, and the decoded sequence is
split into $\mathcal{A}$ and $\mathcal{R}$ with the special start/end tokens
during decoding.
We suggest that the noisy channel model has three advantages over the direct
model for response generation: (1) The channel model can penalize short and
generic responses. Such responses can be mapped to a large number of contexts,
resulting in a flat distribution over contexts. This leads to a lower channel
model score for short and generic responses Zhang et al. (2020b). (2) The
channel model ensures that $(\mathcal{A},\mathcal{R})$ must explain the
corresponding $(\mathcal{C},\mathcal{B})$, alleviating the explaining-away
effect Yu et al. (2017). (3) The source model, an unconditional distribution
over $\mathcal{A}$ and $\mathcal{R}$, can make use of abundant non-dialogue
textual data for pretraining, further improving the fluency of generated
sequences Brants et al. (2007). We leave exploration of this last advantage
for future work, as we pretrain all sub-modules with the same data.
### 3.1 Decoding
Since exact decoding from the noisy channel model,
$\operatorname*{arg\,max}_{\mathcal{A},\mathcal{R}}p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})\cdot
p(\mathcal{A},\mathcal{R})$, is computationally intractable, we experiment with
two approximation methods, noisy channel reranking and noisy channel online
decoding. (Although exact decoding is also computationally intractable for the
direct model, approximating
$\operatorname*{arg\,max}_{\mathcal{B}}p(\mathcal{B}|\mathcal{C})$ is
well-studied, e.g. with beam search; the decoding for $\mathcal{B}$ is
therefore omitted here.) Since
these methods rely on $p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$ as
a proposal distribution for approximation, and both
$p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$ and
$p(\mathcal{B}|\mathcal{C})$ are parameterized with the direct model
introduced in Section 2, our noisy channel model therefore has three sub-
modules: a direct model $p(\mathcal{B},\mathcal{A},\mathcal{R}|\mathcal{C})$,
a channel model $p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})$, and a
source model $p(\mathcal{A},\mathcal{R})$.
Input: Context $\mathcal{C}$
Output: Belief, act and response $(\mathcal{B},\mathcal{A},\mathcal{R})$

Decode $\mathcal{B}$ given $\mathcal{C}$ with $p(\mathcal{B}|\mathcal{C})$
Initialize the beam: $\mathcal{S}=\{([a])\}$
while end($\mathcal{S}$) is False do
    $\mathcal{S^{\prime}}=\varnothing$
    for $\mathcal{O}$ in $\mathcal{S}$ do
        if $\mathcal{O}.\texttt{last}()$ is [/r] or $|\mathcal{O}|>l$ then
            $\mathcal{S^{\prime}}.\texttt{add}(\mathcal{O})$; continue
        end if
        Get $k_{1}$ tokens $o^{1},\ldots,o^{k_{1}}$ from the direct model $p(O_{|\mathcal{O}|+1}|\mathcal{C},\mathcal{B},\mathcal{O})$
        for $o^{i}$ in $(o^{1},\ldots,o^{k_{1}})$ do
            $\mathcal{S^{\prime}}.\texttt{add}((\mathcal{O},o^{i}))$
        end for
    end for
    $\mathcal{S}=\operatorname*{top\_k_{2}}_{\mathcal{O}\in\mathcal{S^{\prime}}}\ \log p(\mathcal{O}|\mathcal{C},\mathcal{B})+\lambda_{1}\cdot\log p(\mathcal{C},\mathcal{B}|\mathcal{O})+\lambda_{2}\cdot\log p(\mathcal{O})+\lambda_{3}\cdot|\mathcal{O}|$
end while
Select $\mathcal{O}\in\mathcal{S}$ with the largest score using Eq. 1 and return $(\mathcal{B},\mathcal{A},\mathcal{R})$

Algorithm 1: Online decoding for the noisy channel model.
Noisy channel reranking: Noisy channel reranking first decodes $\mathcal{B}$
and then continues decoding a list $\mathcal{S}$ of
$(\mathcal{A},\mathcal{R})$ pairs by beam search with the direct model, prior
to utilizing the noisy channel model to rerank $(\mathcal{A},\mathcal{R})$
pairs. In particular, during beam search, partial sequences are expanded and
pruned with $p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$ (from the
direct model in Section 2). The pairs after decoding are reranked using the
following model combination:
$(\mathcal{A}^{\prime},\mathcal{R}^{\prime})=\operatorname*{arg\,max}\limits_{(\mathcal{A},\mathcal{R})\in\mathcal{S}}\log p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})+\lambda_{1}\cdot\log p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})+\lambda_{2}\cdot\log p(\mathcal{A},\mathcal{R})+\lambda_{3}\cdot|\mathcal{A},\mathcal{R}|,$ (1)
where $|\mathcal{A},\mathcal{R}|$ denotes the length of
$(\mathcal{A},\mathcal{R})$, and $\lambda_{1}$, $\lambda_{2}$ and
$\lambda_{3}$ are hyperparameters. Besides the channel model
$p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})$ and the source model
$p(\mathcal{A},\mathcal{R})$, we additionally use the direct model
$p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$ and a length bias
$|\mathcal{A},\mathcal{R}|$ to encourage responses with high direct model
likelihood and discourage short responses, respectively.
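The reranking step itself reduces to a weighted log-linear combination over the beam. The Python sketch below is our illustration: the candidate entries, log-probabilities, and $\lambda$ values are placeholders, not tuned values from the paper.

```python
# Sketch of the Eq. (1) combination used for noisy channel reranking.
def rerank(candidates, lam1, lam2, lam3):
    def score(c):
        return (c["lp_direct"]            # log p(A,R|C,B)
                + lam1 * c["lp_channel"]  # log p(C,B|A,R)
                + lam2 * c["lp_source"]   # log p(A,R)
                + lam3 * c["length"])     # length bias
    return max(candidates, key=score)

beam = [
    {"text": "Yes.", "lp_direct": -2.1,
     "lp_channel": -9.4, "lp_source": -3.0, "length": 2},
    {"text": "There is one train arriving at [value_time].",
     "lp_direct": -3.5, "lp_channel": -6.2, "lp_source": -5.1, "length": 9},
]
best = rerank(beam, lam1=1.0, lam2=0.5, lam3=0.1)
```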
Noisy channel online decoding: In contrast to reranking, online decoding
applies the noisy channel model during beam search for pruning partial
sequences, thus exploring a larger search space.
As shown in Algorithm 1, we first decode the belief state with
$p(\mathcal{B}|\mathcal{C})$, which comes from the direct model in Section 2.
Then, starting with a beam $\mathcal{S}$ containing a single sequence [a] (the
dialogue act start token), we continuously expand the sequences in
$\mathcal{S}$ until end($\mathcal{S}$) is met, i.e. all sequences in
$\mathcal{S}$ either end with [/r] or have lengths larger than $l$. In each
iteration, we first expand the sequences in the beam, then prune the expanded
beam. To expand a partial act and response sequence (denoted as $\mathcal{O}$
in Algorithm 1), a naive way is to use the noisy channel model to score $|V|$
(the vocabulary size) possible expansions, which is computationally expensive.
Instead, we use the probability of the next token
$p(O_{|\mathcal{O}|+1}|\mathcal{C},\mathcal{B},\mathcal{O})$ (where
$|\mathcal{O}|$ denotes the length of $\mathcal{O}$) to select $k_{1}$
candidates to be scored by the noisy channel model. This next token
probability is from the direct model introduced in Section 2. One
straightforward way to select $k_{1}$ expansions from
$p(O_{|\mathcal{O}|+1}|\mathcal{C},\mathcal{B},\mathcal{O})$ is using the
top-k maximization, but we can also take advantage of the advances in sampling
from a categorical distribution for text generation (e.g. top-k sampling Fan
et al. (2018) and nucleus sampling Holtzman et al. (2019)). After the
expansion, we prune the expanded beam $\mathcal{S}^{\prime}$ to obtain a
smaller beam with $k_{2}$ partial sequences based on the model combination in
Eq. 1. Compared to noisy channel reranking, online decoding applies the noisy
channel model during beam search, which is potentially less biased towards the
direct model.
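A single expand-and-prune step of Algorithm 1 can be sketched as follows. The model calls (direct_next, direct_lp, channel_lp, source_lp) are stand-ins for the actual Transformer sub-modules and are our naming, not the authors' API.

```python
# One expand-and-prune step of Algorithm 1 (our sketch). direct_next returns
# ranked next-token proposals; direct_lp, channel_lp, source_lp return
# log p(O|C,B), log p(C,B|O), and log p(O) for a partial sequence.
def online_decode_step(beam, direct_next, direct_lp, channel_lp, source_lp,
                       k1, k2, lam1, lam2, lam3, max_len, end_tok="[/r]"):
    expanded = []
    for seq in beam:
        if seq[-1] == end_tok or len(seq) > max_len:
            expanded.append(seq)            # finished hypotheses pass through
            continue
        for tok in direct_next(seq)[:k1]:   # k1 proposals (top-k or sampled)
            expanded.append(seq + [tok])
    def score(seq):                         # the Eq. (1) combination for pruning
        return (direct_lp(seq) + lam1 * channel_lp(seq)
                + lam2 * source_lp(seq) + lam3 * len(seq))
    return sorted(expanded, key=score, reverse=True)[:k2]
```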
In summary, we note that beam search for both the direct model and the online
decoding for our noisy channel model decodes
($\mathcal{B},\mathcal{A},\mathcal{R}$) autoregressively. Thus both approaches
are end-to-end models for task-oriented dialogue. The key difference is that
noisy channel online decoding uses Eq. 1 for pruning, while the direct model
uses $p(\mathcal{A},\mathcal{R}|\mathcal{C},\mathcal{B})$.
## 4 Model and Pretraining
We use three Transformer Vaswani et al. (2017) networks to parameterize the
direct model $p(\mathcal{B},\mathcal{A},\mathcal{R}|\mathcal{C})$, the channel
model $p(\mathcal{C},\mathcal{B}|\mathcal{A},\mathcal{R})$ and the source
model $p(\mathcal{A},\mathcal{R})$, respectively. The input to each
Transformer is the sum of four embeddings: word embeddings, position
embeddings, role embeddings (user/system), and turn embeddings (each word
corresponds to a turn number). Cross entropy is used as the loss function.
Given training samples $(\mathcal{C},\mathcal{B},\mathcal{A},\mathcal{R})$, if
we train the channel model using complete $(\mathcal{A},\mathcal{R})$ pairs as
input, a significant discrepancy arises between training and decoding for
noisy channel online decoding. Since the channel model is used to score
partial act and response pairs, i.e. $p(\mathcal{C},\mathcal{B}|\mathcal{O})$
in Algorithm 1, the channel model trained with complete
$(\mathcal{A},\mathcal{R})$ pairs is unsuited to scoring partial sequences. In
order to manually create partial sequences during training that are better
matched for online decoding, we truncate the $(\mathcal{A},\mathcal{R})$ pairs
with a truncation length uniformly sampled from 1 to the sequence length
(inclusive). The direct model and the source model are trained with complete
sequences, as partial sequences occur naturally in their standard
autoregressive training procedure.
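The truncation itself is a one-liner; the sketch below is our rendering of the sampling described above, with the helper name being hypothetical.

```python
import random

# Our rendering of the truncation described above: the channel model's
# (A, R) training inputs are cut at a uniformly sampled length so that it
# learns to score the partial sequences seen during online decoding.
def truncate_pair(ar_tokens):
    cut = random.randint(1, len(ar_tokens))  # uniform over 1..len, inclusive
    return ar_tokens[:cut]
```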
As in-domain dialogue data are usually scarce, we use a two-stage pretraining
strategy to enhance the noisy channel model. Although the effectiveness of
pretraining with Reddit data has been validated for open-domain dialogue Zhang
et al. (2020b); Bao et al. (2019); Adiwardana et al. (2020), relatively little
work has applied such data to task-oriented dialogue. (One exception is
Henderson et al. (2019), who use Reddit data to improve response retrieval and
selection; we focus on response generation in this work.) In the first stage,
we explore Reddit pretraining (where the Reddit data
is pre-processed into $(\mathcal{C},\mathcal{R})$, i.e. context-response,
pairs as described below). In the second stage, we use two task-oriented
dialogue datasets, Taskmaster (https://cutt.ly/xkuUHUa) Byrne et al. (2019)
and Schema-Guided Dialogue (https://cutt.ly/QkuUZUu) Rastogi et al. (2020), to
specialize the Reddit-pretrained models. Since the Reddit data consists of
open-domain-style dialogues (where belief states and dialogue acts are
missing), pretraining on these datasets can familiarize the models with the
sequence-to-sequence representation of task-oriented dialogue. Three models, a
context-to-response model, a response-to-context model and a response language
model, are pretrained to initialize the direct model, the channel model and
the source model, respectively.
Dataset | # Dialog | # Turn | Avg. Turn/Dialog | Avg. Token/Turn | # Domain | Multi-Task | # Unique Slot | # Unique Value
---|---|---|---|---|---|---|---|---
Taskmaster | 17,304 | 341,801 | 19.75 | 7.87 | 7 | ✗ | 281 | 66,659
Schema | 22,825 | 463,284 | 20.3 | 9.86 | 17 | ✓ | 123 | 23,889
CamRest676 | 676 | 5,488 | 8.12 | 10.71 | 1 | ✗ | 4 | 89
MultiWOZ | 10,438 | 143,048 | 13.7 | 15.03 | 7 | ✓ | 46 | 11,828
SMCalFlow | 41,517 | 170,590 | 4.11 | 8.77 | 4 | ✓ | - | -
Table 1: Statistics of task-oriented dialogue datasets. We define a multi-task
dialogue as a dialogue involving multiple tasks, e.g. hotel and restaurant
booking, while its counterpart handles a single task, e.g. hotel booking.
Taskmaster and CamRest676 do not contain any multi-task dialogues.
### 4.1 Implementation Details
Models: All models are implemented with JAX Bradbury et al. (2018) and Haiku
Hennigan et al. (2020). For the direct model introduced in Section 2, we use a
Transformer model with hidden size 512, 12 encoder-decoder layers, and 16
self-attention heads. The model has 114M parameters. For the noisy channel
model, we use a base setting and a large setting. The base setting reduces the
number of layers to 5, hidden size to 384 and self-attention heads to 12. Its
sub-modules, a direct model, a reverse model and a language model, have 43M,
43M and 30M parameters, respectively. We employ the base setting for a fair
comparison with a single direct model using roughly the same number of
parameters (116M vs. 114M). For the large setting, we use the same
hyperparameters as the direct model (114M), so that its sub-modules, a direct
model, a reverse model and a language model, have 114M, 114M and 64M
parameters, respectively. We use this large setting to explore the limits of
the noisy channel model. The large noisy channel model (292M) is 2.56 times
larger compared to the direct model (114M). This illustrates another advantage
of the noisy channel model during training. While training a direct model with
292M parameters will overflow the memory of 16GB TPUs (v3) without using model
parallelism, training the sub-modules of the large noisy channel model easily
fits into 16GB TPUs, as the sub-modules are trained independently and never
need to be loaded together. This enables us to train a noisy channel model
with more parameters than a direct model using
the same hardware. For inference, we still need to load the sub-modules into a
TPU. Since gradients are not required during inference, we are able to load
the three sub-modules of the large noisy channel model (292M) into a single
TPU with 16GB memory for decoding. The large noisy channel model (292M) still
consumes more memory than the direct model (114M) during inference.
Pretraining settings: The maximum sequence length $l$ is set to 1024, and
sequences with longer lengths are truncated. We reuse the vocabulary from
GPT-2 Radford et al. (2019), which contains 50,257 BPE tokens. We use PreNorm
Nguyen and Salazar (2019) for faster convergence. GELU Hendrycks and Gimpel
(2016) is applied as the activation function. Following ALBERT Lan et al.
(2020), dropout is disabled during pretraining. We use the normal distribution
truncated to the range $[-0.01,0.01]$ to initialize the input embeddings,
while other parameters are initialized using the normal distribution with zero
mean and standard deviation 0.1. The batch size is set to 256. The LAMB
optimizer You et al. (2020) ($b_{1}=0.9$ and $b_{2}=0.999$) is employed for
optimization. The initial learning rate is 1e-7, and we apply 4000 warmup
steps to increase the learning rate to 1e-3, before utilizing cosine annealing
to decay the learning rate. Gradient clipping with clipping value 1 is applied
to avoid gradient explosion. We use gradient accumulation with accumulation
step 20.
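For concreteness, the warmup-plus-cosine schedule can be sketched as below; the peak (1e-3), initial rate (1e-7), and 4000 warmup steps come from the text, while annealing to zero over the full 400,000 pretraining steps is our assumption.

```python
import math

# Warmup-then-cosine learning-rate schedule (our sketch; decay-to-zero
# endpoint and total step count are assumptions, not stated in the text).
def lr_at(step, peak=1e-3, init=1e-7, warmup=4000, total=400_000):
    if step < warmup:
        return init + (peak - init) * step / warmup  # linear warmup
    progress = (step - warmup) / max(1, total - warmup)
    return 0.5 * peak * (1.0 + math.cos(math.pi * progress))
```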
Pretraining: For Reddit pretraining, we download a Reddit dump (with Reddit
posts ranging from 2005-12 to 2019-09) from PushShift (https://pushshift.io/).
Since the comments of a Reddit post are organized into a tree, we extract
paths from a tree as dialogue turns. The last comment of each comment path is
regarded as the response, while the others are used as the dialogue context.
We pretrain each model for 400,000 steps, consuming 102,400,000 (400,000
$\times$ 256) comment paths in total. For the task-oriented pretraining, we
combine the two datasets, Taskmaster and Schema-Guided Dialogue, and pretrain
for 100,000 steps. The statistics of the task-oriented dialogue datasets are shown
in Table 1.
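The path extraction can be sketched as a simple tree traversal; the dictionary-based tree format below is an assumption made for illustration, not the actual PushShift schema.

```python
# Sketch of turning a Reddit comment tree into (context, response) pairs:
# each root-to-leaf path becomes one example, with the final comment as
# the response. The dict-based tree format is assumed for illustration.
def paths_to_examples(node, prefix=()):
    prefix = prefix + (node["text"],)
    if not node.get("children"):
        return [(list(prefix[:-1]), prefix[-1])]  # (context turns, response)
    examples = []
    for child in node["children"]:
        examples.extend(paths_to_examples(child, prefix))
    return examples
```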
We train each model using 64 TPU chips with 16GB memory each. The pretraining
takes around 4 days to complete.
## 5 Experiments
We fine-tune and evaluate the pretrained models on three dialogue datasets:
MultiWOZ 2.0, CamRest676 and SMCalFlow Andreas et al. (2020). In this section
we describe the datasets (Section 5.1), fine-tuning (Section 5.2), decoding
(Section 5.3) and evaluation metrics (Section 5.4). Results are presented in
Section 6, and analysis and ablation studies in Section 7.
Model | Inform $\uparrow$ | Success $\uparrow$ | BLEU $\uparrow$ | Combined $\uparrow$
---|---|---|---|---
Sequicity Lei et al. (2018) | 66.4 | 45.3 | 15.54 | 71.39
HRED-TS Peng et al. (2019) | 70.0 | 58.0 | 17.50 | 81.50
DSTC8 Track 1 Winner Ham et al. (2020) | 73.0 | 62.4 | 16.00 | 83.50
DAMD Zhang et al. (2020a) | 76.4 | 60.4 | 16.60 | 85.00
SimpleTOD Hosseini-Asl et al. (2020) | 84.4 | 70.1 | 15.01 | 92.26
SOLOIST Peng et al. (2020a) | 85.5 | 72.9 | 16.54 | 95.74
UBAR Yang et al. (2021)† | 88.2 | 79.5 | 16.43 | 100.28
Randomly Initialized
Direct decoding (114M) | 81.0 | 54.7 | 15.12 | 82.97
Noisy channel reranking (116M) | 82.7 | 57.1 | 15.29 | 85.19
Noisy channel online decoding (116M) | 82.9 | 58.9 | 15.33 | 86.23
Noisy channel reranking (292M) | 82.1 | 58.1 | 15.37 | 85.47
Noisy channel online decoding (292M) | 83.9 | 60.9 | 15.57 | 87.97
Reddit Pretraining
Direct decoding (114M) | 81.0 | 69.2 | 17.06 | 92.16
Noisy channel reranking (116M) | 81.3 | 70.1 | 19.01 | 94.71
Noisy channel online decoding (116M) | 81.6 | 71.1 | 19.31 | 95.66
Noisy channel reranking (292M) | 82.2 | 70.9 | 19.89 | 96.44
Noisy channel online decoding (292M) | 82.4 | 71.7 | 20.49 | 97.54
Task-Oriented Pretraining
Direct decoding (114M) | 85.2 | 72.9 | 17.00 | 96.05
Noisy channel reranking (116M) | 85.6 | 73.8 | 19.38 | 99.08
Noisy channel online decoding (116M) | 85.9 | 74.8 | 19.76 | 100.11
Noisy channel reranking (292M) | 86.5 | 74.9 | 20.31 | 101.01
Noisy channel online decoding (292M) | 86.9 | 76.2 | 20.58 | 102.13
Table 2: MultiWOZ test results (end-to-end modeling with generated beliefs)
with seq2seq approaches. Results are significant (p < 0.01) comparing noisy
channel decoding and direct decoding. $\dagger$ Yang et al. (2021) also report
a combined score of 105.1 with an alternative context and evaluation setting,
contributions orthogonal to our work and the other benchmarks reported here.
### 5.1 Datasets
MultiWOZ (https://cutt.ly/0kuUCRS) is a multi-domain dataset consisting of
dialogues annotated with $\mathcal{C},\mathcal{B},\mathcal{A},\mathcal{R}$ in
the following seven domains: attraction, hotel, hospital, police, restaurant,
train, and taxi. Since its release, MultiWOZ has been one of the most commonly
used task-oriented dialogue datasets.
CamRest676 (https://cutt.ly/SkuUNfE) is annotated similarly to MultiWOZ and
consists of dialogues in a single domain: restaurant reservations. Though
CamRest676 is smaller than MultiWOZ and predates it, it still provides a
widely used benchmark for evaluating task-oriented dialogue models.
SMCalFlow consists of dialogues in four domains: calendar, weather, places,
and people. Unlike MultiWOZ and CamRest676, SMCalFlow uses dataflow graphs
instead of slot-value pairs to represent belief states and does not annotate
dialogue acts. We refer readers to Andreas et al. Andreas et al. (2020) for a
detailed description of the dataflow representation. We follow Andreas et al.
Andreas et al. (2020) to convert dataflow graphs into sequences to apply
seq2seq models. This dataset is newer and offers fewer prior models to compare
with, but we use this dataset to study the robustness of the noisy channel
model under different annotation schemas.
We use the public splits for these datasets, where MultiWOZ, CamRest676 and
SMCalFlow are split into 8438/1000/1000, 404/136/136 and 32647/3649/5211
dialogues for training, development and testing, respectively. However, since
SMCalFlow’s test set has not been publicly released, we randomly select 500
dialogues from its training set to tune hyperparameters and use its
development set for testing.
Preprocessing: We use the standard preprocessing procedures for each dataset in order to facilitate fair comparison with previous methods (preprocessing scripts: https://cutt.ly/TkuU1oM, https://cutt.ly/zkuU0Ht, https://cutt.ly/vkuU9bT). In particular, for MultiWOZ and CamRest676,
delexicalization is used to reduce lexical variation, while SMCalFlow does not
use delexicalization. During delexicalization, slot values are replaced by
generic placeholders based on a pre-defined dictionary. During decoding,
following prior work, our dialogue models generate delexicalized responses.
These delexicalized responses are re-lexicalized in post-processing by
replacing placeholders with their corresponding slot values based on belief
states and database information. Since there is no public code for lexicalization (we confirmed this with the dataset authors by email), we implement our own lexicalization functions with regular expressions, for the purpose of displaying example responses. However, this does not affect
reported results, as the standard metrics for MultiWOZ and CamRest676 which we
adopt here are calculated using delexicalized responses.
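For illustration only, the following sketch shows how such regex-based re-lexicalization might look; the placeholder names follow the MultiWOZ convention, but the lookup dictionary and function are our own simplification, not the datasets' exact implementation.

```python
import re

# Hypothetical slot lookup assembled from belief states and database records.
slot_values = {
    "restaurant_name": "Curry Garden",
    "restaurant_postcode": "CB21DP",
}

def lexicalize(delex_response: str, slots: dict) -> str:
    """Replace [slot] placeholders with concrete values where available."""
    def fill(match: re.Match) -> str:
        return slots.get(match.group(1), match.group(0))  # keep unknown slots
    return re.sub(r"\[([a-z_]+)\]", fill, delex_response)

print(lexicalize("[restaurant_name] is at postcode [restaurant_postcode].",
                 slot_values))
# -> Curry Garden is at postcode CB21DP.
```

A real lexicalizer must additionally handle repeated placeholders (e.g. two [value_time] slots in one response) by consuming candidate values in order.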
Model | Inform $\uparrow$ | Success $\uparrow$ | BLEU $\uparrow$ | Combined $\uparrow$
---|---|---|---|---
Sequicity Lei et al. (2018) | 92.3 | 85.3 | 21.40 | 110.20
GPT-2 fine-tuned Wu et al. (2019b) | - | 86.2 | 19.20 | -
ARDM Wu et al. (2019b) | - | 87.1 | 25.20 | -
SOLOIST Peng et al. (2020a) | 94.7 | 87.1 | 25.50 | 116.40
Randomly Initialized
Direct decoding (114M) | 78.1 | 83.5 | 21.58 | 102.38
Noisy channel online decoding (116M) | 79.8 | 84.1 | 22.83 | 104.78
Noisy channel online decoding (292M) | 80.9 | 84.9 | 23.19 | 106.09
Reddit Pretraining
Direct decoding (114M) | 93.3 | 83.9 | 23.41 | 112.01
Noisy channel online decoding (116M) | 93.7 | 84.5 | 25.14 | 114.24
Noisy channel online decoding (292M) | 93.9 | 84.7 | 25.38 | 114.68
Task-Oriented Pretraining
Direct decoding (114M) | 93.4 | 84.3 | 24.92 | 113.77
Noisy channel online decoding (116M) | 94.3 | 85.2 | 25.98 | 115.73
Noisy channel online decoding (292M) | 95.4 | 85.3 | 26.89 | 117.24
Table 3: CamRest676 test results (end-to-end modeling with generated beliefs) with seq2seq approaches. Noisy channel reranking performs comparably with noisy channel online decoding, so its results are not shown. Results are significant (p < 0.01) comparing noisy channel decoding and direct decoding.
Model | SacreBLEU $\uparrow$ | TER $\downarrow$
---|---|---
Randomly Initialized
Direct decoding (114M) | 51.30 | 89.13
Online decoding (116M) | 53.66 | 74.18
Online decoding (292M) | 54.39 | 73.18
Reddit Pretraining
Direct decoding (114M) | 60.68 | 61.99
Online decoding (116M) | 63.29 | 47.16
Online decoding (292M) | 63.91 | 46.43
Task-Oriented Pretraining
Direct decoding (114M) | 61.02 | 59.84
Online decoding (116M) | 63.72 | 46.27
Online decoding (292M) | 64.29 | 45.81
Table 4: SMCalFlow results. Reranking performs worse than online decoding, and
the results are not shown. Results are significant (p < 0.01) comparing noisy
channel decoding and direct decoding.
### 5.2 Fine-Tuning
We apply label smoothing with parameter 0.1. Dropout is applied to input embeddings and hidden representations, with a dropout rate of 0.1. The Adam optimizer Kingma and Ba (2015) ($\beta_{1}=0.9$ and $\beta_{2}=0.999$) is adopted. We use a fixed learning rate of 1e-4 with gradient clipping for fine-tuning.
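A minimal sketch of these settings, written with optax (JAX and Haiku appear in our references, but the exact training code and the clipping threshold below are assumptions):

```python
import jax
import optax

# Adam with beta1=0.9, beta2=0.999, a fixed learning rate of 1e-4, plus
# gradient clipping; the global-norm threshold of 1.0 is an assumption.
optimizer = optax.chain(
    optax.clip_by_global_norm(1.0),
    optax.adam(learning_rate=1e-4, b1=0.9, b2=0.999),
)

def loss_fn(logits, labels, vocab_size):
    # Label smoothing with parameter 0.1, as described above.
    targets = optax.smooth_labels(jax.nn.one_hot(labels, vocab_size), alpha=0.1)
    return optax.softmax_cross_entropy(logits, targets).mean()
```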
### 5.3 Decoding
We use direct decoding for belief states. For dialogue acts and responses, we study three decoding methods: direct decoding, noisy channel reranking and noisy channel online decoding. Since all of these decoding methods require choosing $k_{1}$ tokens from a categorical distribution during expansion, we compare four methods: top-k maximization, sampling without replacement, top-k sampling, and nucleus sampling. Nucleus sampling with cumulative probability 0.98 performs marginally better and is adopted. We perform a range search with
the range $[1,20]$ on development sets for the beam sizes $k_{1}$ and $k_{2}$,
and we set $k_{1},k_{2}=4$, $k_{1},k_{2}=15$ and $k_{1},k_{2}=4$ for MultiWOZ,
CamRest676 and SMCalFlow, respectively. For noisy channel reranking and noisy
channel online decoding, a grid search with range $[0,2]$ is performed for
$\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$. We set ($\lambda_{1}=0.8$,
$\lambda_{2}=1$, $\lambda_{3}=0.8$), ($\lambda_{1}=1.2$, $\lambda_{2}=1.2$,
$\lambda_{3}=0.8$) and ($\lambda_{1}=0.4$, $\lambda_{2}=1$, $\lambda_{3}=0.2$)
for MultiWOZ, CamRest676 and SMCalFlow, respectively.
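To make these hyperparameters concrete, the sketch below scores one candidate with the four-way combination of Eq. 1 (direct model, channel model, language model, length bias, cf. Section 7.1); the callables and the exact weighting are our stand-ins for the trained models, not the released implementation.

```python
def noisy_channel_score(context, candidate,
                        log_p_direct, log_p_channel, log_p_lm,
                        lam1, lam2, lam3):
    """Combine the four sub-modules of Eq. 1 for one candidate sequence:
    direct term + lam1*channel term + lam2*language-model term + lam3*length."""
    return (log_p_direct(candidate, context)
            + lam1 * log_p_channel(context, candidate)
            + lam2 * log_p_lm(candidate)
            + lam3 * len(candidate))

# Reranking applies this score once to the k2 completed beam-search candidates;
# online decoding applies it to the k1 partial expansions at every step.
```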
### 5.4 Evaluation Metrics
For MultiWOZ and CamRest676, following previous work, we adopt three automatic
evaluation metrics: inform, success and BLEU score. Peng et al. (2020a) showed
that these metrics are well correlated with human evaluation. The evaluators provided with the datasets (https://cutt.ly/VkuU3FA, https://cutt.ly/MkuU88u) are used for calculating these metrics. To calculate the
inform score for a dialogue, the evaluator first checks whether certain
placeholders (e.g. [restaurant_name]) appear in decoded responses. If so,
decoded belief states are converted to database queries to retrieve database
records. These database records are compared with the records retrieved with
ground-truth belief states. The inform score is one if these two sets of
database records match. The success score takes all the requestable slots
(e.g. postcode, phone number and address) from a decoded response and compares
these requestable slots with the ones in the ground-truth response. The
success score is one if generated requestable slots coincide with the ground-
truth ones. BLEU score (BLEU-4) compares the n-grams of generated responses
and human responses, and is a widely used metric in NLP for evaluating text
quality. Following Budzianowski et al. (2018), we also calculate a combined
score, which is (Inform + Success) / 2 + BLEU. For SMCalFlow, inform and
success scores are not applicable since calculation of these scores relies on
delexicalization placeholders, and this dataset does not use delexicalization.
We use SacreBLEU (https://cutt.ly/BkuU7dL) and TER (https://pypi.org/project/pyter/) to directly measure the quality of
responses. As prior work on this dataset has focused on belief tracking rather
than end-to-end response generation, we are the first to use these metrics on
this dataset.
We perform significance tests, using the t-test for inform, success and TER scores and the permutation test for BLEU.
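The combined score itself reduces to one line; a sketch with the numbers of the best MultiWOZ model from Table 2 as a sanity check:

```python
def combined_score(inform: float, success: float, bleu: float) -> float:
    """Combined score following Budzianowski et al. (2018)."""
    return (inform + success) / 2.0 + bleu

print(combined_score(86.9, 76.2, 20.58))  # -> 102.13, matching Table 2
```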
## 6 Results
MultiWOZ: Results on the MultiWOZ test set are shown in Table 2. We observe
several trends. First, the base noisy channel model (116M) performs better
than direct decoding (114M), despite having a similar number of parameters,
showing that the noisy channel factorization is beneficial for task-oriented
dialogue. The large noisy channel setting improves further over the base
setting. Second, Reddit pretraining provides benefits over random
initialization, validating the use of large open-domain dialogue-genre
pretraining for task-oriented dialogue, while the models with a second stage
of task-oriented pretraining obtain further improvements. This effect is
consistent across both direct and noisy channel decoding. Finally, we observe
that online decoding consistently outperforms reranking, indicating the
benefits of tighter model integration during decoding.
Our model performs better on combined score than SOLOIST Peng et al. (2020a),
a closely related baseline which pretrains a GPT2-initialized Transformer with
Taskmaster and Schema-Guided Dialogue and decodes with nucleus sampling.
CamRest676: Results on the CamRest676 test set are shown in Table 3. We
observe that the base noisy channel model (116M) obtains better results
compared to direct decoding (114M), again demonstrating the effectiveness of
the noisy channel model. Reddit pretraining again provides a large benefit
over random initialization for both direct decoding and noisy channel
decoding, while task-oriented pretraining provides a further boost. Our model
again performs better than SOLOIST.
SMCalFlow: Results on the SMCalFlow development set are shown in Table 4. As
end-to-end models have not previously been tested on this dataset, we use it
to demonstrate that the noisy channel model, which we developed primarily on
MultiWOZ, continues to be effective on task-oriented dialogue datasets with
different annotation schemas. The results are consistent with MultiWOZ and
CamRest676. The noisy channel model outperforms the direct model by a large
margin, demonstrating that dialogue act annotations are not essential for the
noisy channel model, and that it remains effective across diverse dialogue
representations.
Reddit pretraining confers a similar large benefit on SMCalFlow as on the
other datasets, but we observe that task-oriented pretraining brings only
marginal further improvements. This may be due to differences in domain or
format between our pretraining datasets and SMCalFlow. Alternatively, task-
oriented pretraining may help more on task-specific metrics, such as inform
and success scores, than on text quality metrics such as BLEU and TER scores.
This hypothesis is further supported by the MultiWOZ results in Table 2.
## 7 Analysis
In this section, we use MultiWOZ and CamRest676 to perform ablation studies on
the effects of model combination, large-scale pretraining, and sample
efficiency; as well as analyzing the runtime requirements of our model and the
reasons for its success.
Model | CamRest676 | MultiWOZ
---|---|---
Direct decoding | 115.17 | 96.73
Noisy Channel Online Decoding
Direct + Channel | 115.63 | 98.54
Direct + Source | 115.91 | 99.12
Direct + Length | 115.56 | 97.57
Channel + Source | 115.82 | 99.18
Channel + Length | 115.60 | 98.71
Source + Length | 115.62 | 99.19
All - Direct | 115.96 | 100.89
All - Channel | 116.56 | 100.93
All - Source | 116.38 | 99.92
All - Length | 116.52 | 101.11
All | 116.91 | 102.62
Table 5: Ablation results for model combination on development sets (combined
score). Results for reranking are similar and are not shown. ‘All’, ‘Direct’, ‘Source’, ‘Channel’, and ‘Length’ denote no ablation, the direct model, the source model, the channel model, and the length bias, respectively. Rows with ‘+’ are combinations of two sub-modules, while rows with ‘-’ are combinations of three sub-modules.
Figure 2: Results showing the effect of pretraining scale. Panels: (a) Reddit pretraining, CamRest676; (b) task-oriented pretraining, CamRest676; (c) Reddit pretraining, MultiWOZ; (d) task-oriented pretraining, MultiWOZ.
Figure 3: Pretraining improves sample efficiency during fine-tuning. Panels: (a) direct decoding, CamRest676; (b) reranking, CamRest676; (c) online decoding, CamRest676; (d) direct decoding, MultiWOZ; (e) reranking, MultiWOZ; (f) online decoding, MultiWOZ.
Model | CamRest676 | MultiWOZ
---|---|---
Direct decoding | 4.89 | 6.48
Reranking | 5.43 | 6.92
Online decoding | 8.73 | 10.97
Table 6: Average decoding time (in seconds) per turn for different decoding methods.
Model | CamRest676 | MultiWOZ
---|---|---
Ground truth | 14.50 | 16.91
Direct decoding | 12.07 | 12.85
Direct decoding + Length | 15.98 | 17.73
Reranking | 15.09 | 17.47
Online decoding | 15.14 | 17.32
Table 7: The average length of responses with different decoding methods (on
test set). The value closest to the ground truth is bold.
Model | CamRest676 | MultiWOZ
---|---|---
Ground truth | 1.07 | 1.22
Direct decoding | 0.84 | 0.91
Reranking | 0.87 | 0.99
Online decoding | 0.89 | 1.03
Table 8: The Zipf scores of responses with different decoding methods (on test
set). The value closest to the ground truth is bold.
Model | CamRest676 | MultiWOZ
---|---|---
Direct decoding | 0.24 | 0.31
Reranking | 0.12 | 0.14
Online decoding | 0.08 | 0.11
Table 9: The likelihood (%) of falling into repetition loops for different
decoding methods (on test set).
### 7.1 Ablation on Model Combination
Noisy channel decoding involves a combination of four sub-modules, as in Eq.
1: the direct model, channel model, language model, and length bias. We
perform an ablation study to determine whether all model components are
important to the result, using the large model. Results on the development
sets of CamRest676 and MultiWOZ are presented in Table 5. Note that the
ablation is performed after applying the direct model to obtain $k_{1}$
expansions at each beam search step for noisy channel online decoding. We find
that the combination of all four sub-modules performs the best, followed by
combinations of three and then two sub-modules. The results are significant
when comparing ‘All’ and the baselines ($p<0.01$). This result demonstrates
the effectiveness of the noisy channel factorization, and the importance of
each model component.
### 7.2 Effect of Pretraining Scale
We investigate the importance of scale for both our pretraining stages. We
select different checkpoints for Reddit pretraining, and truncate the two
task-oriented dialogue datasets for task-oriented pretraining. We fine-tune
these models using the full training data of CamRest676 or MultiWOZ. The
results of three decoding methods (with the large noisy channel model) on the
development sets are shown in Figure 2. In Figure 2 (a) and (c), the combined
scores of all three decoding methods improve with more Reddit pretraining
steps, demonstrating the advantage of increasing amounts of data in the open-
domain dialogue pretraining stage. In Figure 2 (b) and (d), the combined
scores further increase with more task-oriented data, confirming that
additional task-oriented pretraining data is useful.
Model | Delexicalized Response | SacreBLEU $\uparrow$
---|---|---
Dialogue: MUL1624, Turn Number: 4 |
Ground truth | Sure. The trainid is $[$train_id$]$. It leaves at $[$value_time$]$ and arrives at $[$value_time$]$. Would you like to make a reservation? | -
Direct decoding | The travel time is $[$value_count$]$ minutes. | 2.03
Reranking | $[$train_id$]$ leaves $[$value_place$]$ at $[$value_time$]$ and arrives in $[$value_place$]$ at $[$value_time$]$. Would you like me to book that for you? | 46.33
Online decoding | $[$train_id$]$ leaves at $[$value_time$]$ and arrives at $[$value_time$]$. Would you like me to book that? | 60.90
Dialogue: MUL1276, Turn Number: 6 |
Ground truth | The $[$restaurant_name$]$ has a postal code of $[$restaurant_postcode$]$. Will this work for you? I can book it if you please. | -
Direct decoding | There are $[$value_count$]$ places. What type of food would you like? | 1.63
Reranking | How about $[$restaurant_name$]$ located at $[$restaurant_address$]$, postcode $[$restaurant_postcode$]$. | 24.90
Online decoding | How about $[$restaurant_name$]$ located at $[$restaurant_address$]$, postcode $[$restaurant_postcode$]$? | 22.00
Dialogue: MUL1898, Turn Number: 2 |
Ground truth | Yes, $[$attraction_name$]$ is on $[$attraction_address$]$ and is in the $[$value_area$]$ side of town. Is there anything else you need to know? | -
Direct decoding | $[$attraction_name$]$ is located in the $[$value_area$]$ part of town and has free admission. | 27.53
Reranking | $[$attraction_name$]$ is located in the $[$value_area$]$ of town at $[$attraction_address$]$. The entrance fee is free. Can I help you with anything else? | 41.66
Online decoding | $[$attraction_name$]$ is located in the $[$value_area$]$ part of town at $[$attraction_address$]$. Can I help you with anything else? | 42.38
Table 10: Case study on the responses decoded by direct decoding, noisy
channel reranking and noisy channel online decoding. The large noisy channel
model is used.
### 7.3 Sample Efficiency of Fine-Tuning
We investigate whether pretraining can improve sample efficiency during fine-
tuning. We gradually increase the amount of fine-tuning data and evaluate the
randomly-initialized, Reddit pretrained and task-oriented pretrained models.
The results on the development sets are shown in Figure 3. Combined scores
increase with more training data under all conditions. Crucially, Reddit
pretrained models show better performance with a smaller amount of fine-tuning
data than randomly initialized models, and task-oriented pretrained models
better still. We conclude that both our pretraining stages can improve sample
efficiency, which is especially important when the target task has little
training data.
### 7.4 Decoding Runtime
In Table 6, we report the average wall-clock time for decoding one turn (including
its belief state, dialogue act and response). Noisy channel reranking is
slightly slower compared to direct decoding, with overhead due to the
reranking step in Eq. 1. Noisy channel online decoding is significantly
slower, since it needs to apply Eq. 1 at each beam search step. In future work
we will investigate ways to improve the efficiency of online decoding.
### 7.5 Decoding Properties
In this section we analyze why the noisy channel model performed better than
direct decoding.
Length: In Table 7 we show the average length of generated responses. Direct
decoding produces shorter responses than the ground truth, confirming that the
direct model prefers short and generic responses. Adding a length bias to
direct decoding (with lambda tuned on the development sets) produces responses
longer than the ground truth, which may be a disadvantage. The noisy channel
models produce responses with average length closest to the ground truth.
Zipf: Table 8 shows the Zipf scores of responses. We find that the word
distributions of responses generated by the noisy channel models are closer to
the word distribution of ground-truth responses.
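One common way to obtain such a Zipf score, and the one sketched here (our own implementation, which may differ in detail from the exact script used), is to fit the slope of the log-log rank-frequency curve of the generated text:

```python
import numpy as np
from collections import Counter

def zipf_score(texts):
    """Slope of log(frequency) vs. log(rank) over all word types,
    fitted by least squares and reported as a positive exponent."""
    counts = Counter(word for text in texts for word in text.split())
    freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
    return -slope

print(zipf_score(["the train leaves at noon", "the restaurant is in the centre"]))
```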
Repetition: In Table 9 we examine the likelihood of falling into repetition
loops Holtzman et al. (2019) for different decoding methods. Repetition loops
are rare for all decoding methods, but noisy channel decoding can further
decrease their likelihood. The channel model can discount a sequence with a
repetition loop, since it conveys less information than a natural sequence of
the same length, making it harder to “explain” the context.
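A simple heuristic for flagging such loops, approximating the phenomenon described by Holtzman et al. (2019) (the cycle-length and repeat thresholds below are our own choices):

```python
def in_repetition_loop(tokens, max_cycle=8, min_repeats=3):
    """True if the sequence ends with one cycle of length <= max_cycle
    repeated at least min_repeats times."""
    for cycle in range(1, max_cycle + 1):
        if len(tokens) < cycle * min_repeats:
            continue
        tail = tokens[-cycle * min_repeats:]
        if all(tail[i] == tail[i % cycle] for i in range(len(tail))):
            return True
    return False

print(in_repetition_loop("i can help you . i can help you . i can help you .".split()))
# -> True
```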
Examples: Some examples of responses are shown in Table 10. We observe that
noisy channel models decode longer responses compared to direct decoding, and
that the responses can explain their dialogue contexts well to meet users’
requirements.
## 8 Related Work
Task-oriented dialogue models: Most task-oriented dialogue systems break down
the task into three components: belief tracking Henderson et al. (2013);
Mrkšić et al. (2016); Rastogi et al. (2017); Nouri and Hosseini-Asl (2018); Wu
et al. (2019a); Zhang et al. (2019); Zhou and Small (2019); Heck et al.
(2020), dialogue act prediction Wen et al. (2017a); Tanaka et al. (2019) and
response generation Chen et al. (2019); Budzianowski et al. (2018); Lippe et
al. (2020). Traditionally, a modular approach is adopted, where these
components are optimized independently (i.e. a pipeline design) or learned via
multi-task learning (i.e. some parameters are shared among the components) Wen
et al. (2017b); Neelakantan et al. (2019); Zhao et al. (2019); Mehri et al.
(2019); Tseng et al. (2020); Lee et al. (2020). However, it is known that
improvements in one component do not necessarily lead to overall performance
improvements Ham et al. (2020), and the modular approach suffers from error
propagation in practice Liu and Lane (2018). These observations gave rise to
the sequence-to-sequence approach Lei et al. (2018); Pei et al. (2019);
Budzianowski and Vulić (2019); Wu et al. (2019b); Zhang et al. (2020a); Ham et
al. (2020); Hosseini-Asl et al. (2020); Peng et al. (2020a); Yang et al.
(2021), where dialogue beliefs and acts are represented as text spans, and a
sequence-to-sequence model is applied to subsume the three components. Our
work is situated within this general approach. In contrast to previous work,
however, which uses a direct model for decoding, we introduce the noisy
channel model to improve task-oriented dialogue.
Pretraining models for dialogue: Recent work has applied pretraining Peters et
al. (2018); Devlin et al. (2019); Radford et al. (2019) to dialogue. For open-
domain dialogue, DialoGPT Zhang et al. (2020b) and CGRG Wu et al. (2020b)
extend GPT-2 Radford et al. (2019) for response generation. PLATO Bao et al.
(2019) and PLATO-2 Bao et al. (2020) pretrain a latent variable model with
social media data for diversified response generation. Meena Adiwardana et al. (2020) is pretrained on a large-scale social media corpus, and its authors propose a metric named sensibleness and specificity average for evaluation. Roller et al. (2020) study various strategies for building an open-domain
chatbot with Reddit for pretraining. For task-oriented dialogue, ToD-BERT Wu
et al. (2020a) fine-tunes BERT Devlin et al. (2019) for four tasks, including
intention detection, belief tracking, dialogue act prediction, and response
selection. SC-GPT Peng et al. (2020b) fine-tunes GPT-2 for few-shot response
generation with given dialogue acts. Ham et al. (2020) fine-tune
GPT-2 for belief tracking and context-to-response generation. SimpleTOD
Hosseini-Asl et al. (2020) proposes a method to serialize dialogue beliefs and
acts into text spans and fine-tunes GPT-2 for end-to-end dialogue modeling.
SOLOIST Peng et al. (2020a) uses a series of task-oriented dialogue datasets
to further pretrain GPT-2 before fine-tuning it on final tasks for evaluation.
Unlike these BERT- or GPT-initialized task-oriented dialogue models, which are
essentially pretrained with general text, such as Wikipedia and BookCorpus, we
use a Reddit dump to pretrain the models to learn from open-domain dialogues.
## 9 Conclusion
We introduced two noisy channel models, noisy channel reranking and noisy
channel online decoding, for task-oriented dialogue. Large-scale pretraining
was further adopted to tackle data scarcity in downstream tasks. Extensive
experiments on MultiWOZ, CamRest676 and SMCalFlow demonstrated that (1) the
noisy channel models significantly outperform direct decoding; (2) models with
pretraining improve over randomly-initialized models; (3) the models are
robust to different dialogue annotation schemas; (4) the noisy channel models
can decode responses closer to ground-truth responses than direct decoding.
## Acknowledgements
We would like to thank the action editors (Maggie, Wenjie Li and Eneko Agirre)
and three anonymous reviewers for their insightful comments. We also thank
Angeliki Lazaridou, Gábor Melis, Nando de Freitas, Chris Dyer and the DeepMind
language team for their helpful discussions.
## References
* Adiwardana et al. (2020) Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. _arXiv preprint arXiv:2001.09977_.
* Andreas et al. (2020) Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, et al. 2020. Task-oriented dialogue as dataflow synthesis. _Transactions of the Association for Computational Linguistics_ , 8:556–571.
* Austin (1975) John Langshaw Austin. 1975. _How to do things with words_ , volume 88. Oxford university press.
* Bao et al. (2019) Siqi Bao, Huang He, Fan Wang, and Hua Wu. 2019. Plato: Pre-trained dialogue generation model with discrete latent variable. _arXiv preprint arXiv:1910.07931_.
* Bao et al. (2020) Siqi Bao, Huang He, Fan Wang, Hua Wu, and Haifeng Wang. 2020. PLATO: pre-trained dialogue generation model with discrete latent variable. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , pages 85–96. Association for Computational Linguistics.
* Bradbury et al. (2018) James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, and Skye Wanderman-Milne. 2018. JAX: composable transformations of Python+NumPy programs.
* Brants et al. (2007) Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In _Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)_ , pages 858–867, Prague, Czech Republic. Association for Computational Linguistics.
* Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_.
* Budzianowski and Vulić (2019) Paweł Budzianowski and Ivan Vulić. 2019. Hello, it’s GPT-2 - how can I help you? towards the use of pretrained language models for task-oriented dialogue systems. In _Proceedings of the 3rd Workshop on Neural Generation and Translation_ , pages 15–22, Hong Kong. Association for Computational Linguistics.
* Budzianowski et al. (2018) Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - A large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018_ , pages 5016–5026. Association for Computational Linguistics.
* Byrne et al. (2019) Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019_ , pages 4515–4524. Association for Computational Linguistics.
* Chen et al. (2019) Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019. Semantically conditioned dialog response generation via hierarchical disentangled self-attention. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3696–3709, Florence, Italy. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , pages 4171–4186. Association for Computational Linguistics.
* Fan et al. (2018) Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018. Hierarchical neural story generation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers_ , pages 889–898. Association for Computational Linguistics.
* Ham et al. (2020) Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 583–592, Online. Association for Computational Linguistics.
* Heck et al. (2020) Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. Trippy: A triple copy strategy for value independent neural dialog state tracking. In _Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGdial 2020, 1st virtual meeting, July 1-3, 2020_ , pages 35–44. Association for Computational Linguistics.
* Henderson et al. (2013) Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep neural network approach for the dialog state tracking challenge. In _Proceedings of the SIGDIAL 2013 Conference_ , pages 467–471.
* Henderson et al. (2019) Matthew Henderson, Ivan Vulic, Daniela Gerz, Iñigo Casanueva, Pawel Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrksic, and Pei-Hao Su. 2019. Training neural response selection for task-oriented dialogue systems. In _Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers_ , pages 5392–5404. Association for Computational Linguistics.
* Hendrycks and Gimpel (2016) Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_.
* Hennigan et al. (2020) Tom Hennigan, Trevor Cai, Tamara Norman, and Igor Babuschkin. 2020. Haiku: Sonnet for JAX.
* Holtzman et al. (2019) Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. _CoRR_ , abs/1904.09751.
* Hosseini-Asl et al. (2020) Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_.
* Klein and Manning (2002) Dan Klein and Christopher D Manning. 2002. Conditional structure versus conditional estimation in nlp models. In _Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002)_ , pages 9–16.
* Lan et al. (2020) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_.
* Lee et al. (2020) Hwaran Lee, Seokhwan Jo, HyungJun Kim, Sangkeun Jung, and Tae-Yoon Kim. 2020. Sumbt+ larl: End-to-end neural task-oriented dialog system with reinforcement learning. _arXiv preprint arXiv:2009.10447_.
* Lei et al. (2018) Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1437–1447.
* Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In _NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016_ , pages 110–119. The Association for Computational Linguistics.
* Lippe et al. (2020) Phillip Lippe, Pengjie Ren, Hinda Haned, Bart Voorn, and Maarten de Rijke. 2020\. Diversifying task-oriented dialogue response generation with prototype guided paraphrasing. _CoRR_ , abs/2008.03391.
* Liu and Lane (2018) Bing Liu and Ian Lane. 2018. End-to-end learning of task-oriented dialogs. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop_ , pages 67–73.
* Mehri et al. (2019) Shikib Mehri, Tejas Srinivasan, and Maxine Eskenazi. 2019. Structured fusion networks for dialog. _arXiv preprint arXiv:1907.10016_.
* Mrkšić et al. (2016) Nikola Mrkšić, Diarmuid O Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2016. Neural belief tracker: Data-driven dialogue state tracking. _arXiv preprint arXiv:1606.03777_.
* Neelakantan et al. (2019) Arvind Neelakantan, Semih Yavuz, Sharan Narang, Vishaal Prasad, Ben Goodrich, Daniel Duckworth, Chinnadhurai Sankar, and Xifeng Yan. 2019. Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning. _arXiv preprint arXiv:1910.14613_.
* Nguyen and Salazar (2019) Toan Q Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of self-attention. _arXiv preprint arXiv:1910.05895_.
* Nouri and Hosseini-Asl (2018) Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. _arXiv preprint arXiv:1812.00899_.
* Pei et al. (2019) Jiahuan Pei, Pengjie Ren, and Maarten de Rijke. 2019. A modular task-oriented dialogue system using a neural mixture-of-experts. _arXiv preprint arXiv:1907.05346_.
* Peng et al. (2020a) Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020a. Soloist: Few-shot task-oriented dialog with a single pre-trained auto-regressive model. _arXiv preprint arXiv:2005.05298_.
* Peng et al. (2020b) Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020b. Few-shot natural language generation for task-oriented dialog. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020_ , pages 172–182. Association for Computational Linguistics.
* Peng et al. (2019) Shuke Peng, Xinjing Huang, Zehao Lin, Feng Ji, Haiqing Chen, and Yin Zhang. 2019. Teacher-student framework enhanced multi-domain dialogue generation. _arXiv preprint arXiv:1908.07137_.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT_ , pages 2227–2237.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. _OpenAI Blog_ , 1(8):9.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _J. Mach. Learn. Res._ , 21:140:1–140:67.
* Rastogi et al. (2017) Abhinav Rastogi, Dilek Hakkani-Tür, and Larry Heck. 2017. Scalable multi-domain dialogue state tracking. In _2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)_ , pages 561–568. IEEE.
* Rastogi et al. (2020) Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, February 7-12, 2020_ , pages 8689–8696. AAAI Press.
* Raux et al. (2005) Antoine Raux, Brian Langner, Dan Bohus, Alan W Black, and Maxine Eskenazi. 2005. Let’s go public! taking a spoken dialog system to the real world. In _Ninth European conference on speech communication and technology_.
* Roller et al. (2020) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. _arXiv preprint arXiv:2004.13637_.
* Seneff and Polifroni (2000) Stephanie Seneff and Joseph Polifroni. 2000. Dialogue management in the mercury flight reservation system. In _ANLP-NAACL 2000 Workshop: Conversational Systems_.
* Shannon (1948) Claude Shannon. 1948. A mathematical theory of communication. _Bell System Technical Journal_ , 27:379–423.
* Tanaka et al. (2019) Koji Tanaka, Junya Takayama, and Yuki Arase. 2019. Dialogue-act prediction of future responses based on conversation history. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop_ , pages 197–202.
* Tseng et al. (2020) Bo-Hsiang Tseng, Jianpeng Cheng, Yimai Fang, and David Vandyke. 2020. A generative model for joint natural language understanding and generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , pages 1795–1807. Association for Computational Linguistics.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008.
* Wen et al. (2017a) Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve J. Young. 2017a. Latent intention dialogue models. _CoRR_ , abs/1705.10229.
* Wen et al. (2017b) Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina Maria Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve J. Young. 2017b. A network-based end-to-end trainable task-oriented dialogue system. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers_ , pages 438–449. Association for Computational Linguistics.
* Wu et al. (2020a) Chien-Sheng Wu, Steven C. H. Hoi, Richard Socher, and Caiming Xiong. 2020a. TOD-BERT: pre-trained natural language understanding for task-oriented dialogue. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 917–929. Association for Computational Linguistics.
* Wu et al. (2019a) Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019a. Transferable multi-domain state generator for task-oriented dialogue systems. In _Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers_ , pages 808–819. Association for Computational Linguistics.
* Wu et al. (2019b) Qingyang Wu, Yichi Zhang, Yu Li, and Zhou Yu. 2019b. Alternating recurrent dialog model with large-scale pre-trained language models. _arXiv preprint arXiv:1910.03756_.
* Wu et al. (2020b) Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, et al. 2020b. A controllable model of grounded response generation. _arXiv preprint arXiv:2005.00613_.
* Yang et al. (2021) Yunyi Yang, Yunhao Li, and Xiaojun Quan. 2021. Ubar: Towards fully end-to-end task-oriented dialog systems with gpt-2. _The Thirty-Fifth AAAI Conference on Artificial Intelligence_.
* Yee et al. (2019) Kyra Yee, Yann N. Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019_ , pages 5695–5700. Association for Computational Linguistics.
* You et al. (2020) Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2020. Large batch optimization for deep learning: Training BERT in 76 minutes. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_.
* Yu et al. (2017) Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomás Kociský. 2017. The neural noisy channel. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_.
* Yu et al. (2020) Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. 2020. Better document-level machine translation with bayes’ rule. _Transactions of the Association for Computational Linguistics_ , 8:346–360.
* Zhang et al. (2019) Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. _arXiv preprint arXiv:1910.03544_.
* Zhang et al. (2020a) Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020a. Task-oriented dialog systems that consider multiple appropriate responses under the same context. In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 9604–9611. AAAI Press.
* Zhang et al. (2020b) Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DIALOGPT : Large-scale generative pre-training for conversational response generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020_ , pages 270–278. Association for Computational Linguistics.
* Zhao et al. (2019) Tiancheng Zhao, Kaige Xie, and Maxine Eskénazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , pages 1208–1218. Association for Computational Linguistics.
* Zhou and Small (2019) Li Zhou and Kevin Small. 2019. Multi-domain dialogue state tracking as dynamic knowledge graph enhanced question answering. _arXiv preprint arXiv:1911.06192_.
Bochner-Riesz Means Convergence of Prolate Spheroidal Series and Their
Extensions
Mourad Boulsane$^{a}$ (corresponding author, Email: <EMAIL_ADDRESS>) and Ahmed Souabni$^{a}$
$^{a}$ Carthage University, Faculty of Sciences of Bizerte, Department of Mathematics, Jarzouna, 7021, Tunisia.
###### Abstract
In this paper, we study the $L^{p}$-Bochner-Riesz mean summability problem
related to the spectrum of some particular Sturm-Liouville operators in the
weighted $L^{p}([a,b],\omega).$ Our purpose is to establish suitable
conditions under which the Bochner-Riesz expansion of a function $f\in
L^{p}([a,b],\omega)$, $1<p<\infty$, in two generalisations of Slepian’s basis,
converges to $f$ in $L^{p}([a,b],\omega)$.
Keywords: Bochner-Riesz mean convergence, eigenfunctions and eigenvalues,
prolate spheroidal wave functions.
2010 Mathematics Subject Classification. 42C10, 41A60.
## 1 Introduction
The $L^{p}$-Bochner-Riesz mean convergence of orthogonal series has attracted special attention for several decades. This kind of convergence is briefly described as follows. Let $1\leq p<\infty$, $a,b\in\mathbb{R}$ and
$\\{\varphi_{n}\\}$ an orthonormal set of eigenfunctions of a positive self-
adjoint differential operator $\mathcal{L}$ associated with eigenvalues
$\chi_{n}$ on a weighted Hilbert space $L^{2}(I,\omega)$, where $\omega$ is a
positive bounded weight function. We define the expansion coefficients of
$f\in L^{p}([a,b],\omega)$ by
$a_{n}(f)=\int_{a}^{b}f(x)\varphi_{n}(x)\omega(x)dx.$ The orthonormal set
$\\{\varphi_{n}\\}$ is said to have the Bochner-Riesz mean convergence of
order $p$ over the Banach space $L^{p}(I,\omega)$ if for some suitable
$\delta>0$ and for all $f\in L^{p}(I,\omega),$ we have
$\lim_{R\to\infty}\int_{a}^{b}|f(x)-\Psi_{R}^{\delta}f(x)|^{p}\omega(x)dx=0,\mbox{
where
}\displaystyle\Psi_{R}^{\delta}f=\sum_{n=0}^{\infty}\Big{(}1-\frac{\chi_{n}}{R}\Big{)}^{\delta}_{+}a_{n}(f)\varphi_{n}.$
(1)
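To fix ideas, consider the classical example behind this definition. For the trigonometric system $\varphi_{n}(x)=\frac{1}{\sqrt{2\pi}}e^{inx}$, $n\in\mathbb{Z}$, on $I=(-\pi,\pi)$ with $\omega\equiv 1$, the eigenvalues of the operator $-\frac{d^{2}}{dx^{2}}$ are $\chi_{n}=n^{2}$, so that (1) reduces to the classical Bochner-Riesz means
$\Psi_{R}^{\delta}f(x)=\sum_{|n|<\sqrt{R}}\Big{(}1-\frac{n^{2}}{R}\Big{)}^{\delta}\widehat{f}(n)e^{inx},\mbox{ where }\widehat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(y)e^{-iny}dy.$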
To the best of our knowledge, M. Riesz was the first, in 1911, to investigate this problem in some special cases. Our problem concerns the modified Riesz summability method introduced by Salomon Bochner and given by (1). In [7], S. Bochner studied this problem for the trigonometric exponential case in higher dimensions. Furthermore, in [13], the authors proved Bochner-Riesz mean convergence for the orthonormal system of eigenfunctions of a second-order elliptic differential operator on a compact $N$-dimensional manifold $M$ for $1\leq p\leq 2\frac{N+1}{N+3}$ and $\delta>N\left|\frac{1}{p}-\frac{1}{2}\right|-\frac{1}{2}$. Mauceri and Müller have also studied this problem in [20] and [22] in the framework of the Heisenberg group. The problem has been analysed for Fourier-Bessel expansion series in [10] and [11]. Moreover, in [8], the authors solved this question for the sublaplacian on the sphere $S^{2n-1}$ in the complex $n$-dimensional space $\mathbb{C}^{n}$, where it has been shown that convergence holds for $\delta>(2n-1)\Big{|}\frac{1}{2}-\frac{1}{p}\Big{|}$. Weak-type convergence has also been investigated for this problem. Indeed, we say that an orthonormal family $\{\varphi_{n}\}$ of $L^{p}(I,\omega)$ has weak Bochner-Riesz mean convergence if $\Psi_{R}^{\delta}f$ converges to $f$ almost everywhere for every $f\in L^{p}(I,\omega)$. This problem has been solved for some special orthonormal systems, such as Jacobi and Laguerre polynomials in [21], and for the eigenfunctions of the Hermite operator in higher dimensions in [9].
In this work, we extend the $L^{p}$-Bochner-Riesz mean convergence to the circular and the generalized (or weighted) prolate spheroidal wave functions, denoted by CPSWFs and GPSWFs, respectively. These two families are defined, respectively, as the eigenfunctions of the operators
$\mathcal{H}_{c}^{\alpha}f(x)=\int_{0}^{1}\sqrt{cxy}J_{\alpha}(cxy)f(y)dy,\quad\mathcal{F}_{c}^{(\alpha)}f(x)=\int_{-1}^{1}e^{icxy}f(y)(1-y^{2})^{\alpha}\,dy,$
where $\alpha>-1/2,\,c>0$ are two real numbers. These two sets of orthonormal functions are characterized as solutions of some Sturm-Liouville problems. The second family, the weighted (sometimes called generalized) prolate spheroidal wave functions (GPSWFs), was introduced by Wang-Zhang [30]. Note that the classical PSWFs correspond to the special case of the GPSWFs with $\alpha=0.$
Our aim in this paper is to prove the $L^{p}$ convergence of the Bochner-Riesz mean expansion in the GPSWF and CPSWF bases.
This work is organised as follows. In Section 2, we give some mathematical preliminaries on Sturm-Liouville theory and some properties of the CPSWFs and GPSWFs. Note that these functions can be considered as generalizations of the spherical Bessel functions $j_{n}^{(\alpha)}$ and Gegenbauer’s polynomials $\widetilde{P}_{n}^{(\alpha)}$, respectively. In Section 3, we state our two main theorems, and Sections 4 and 5 are devoted to the proofs of the sufficient and necessary conditions of the main results, respectively.
## 2 Mathematical preliminaries
In this section, we give some mathematical preliminaries that will be frequently used in the proofs of the different results of this work.
### 2.1 Some facts about Sturm-Liouville theory
The Sturm-Liouville differential operator is defined as follows (see for example [1]):
$\mathcal{L}y(x)=\frac{d}{dx}[p(x)y^{\prime}(x)]+q(x)y(x),\quad x\in I=(a,b),$
(2)
with $\frac{1}{p},q\in L^{1}(I,\mathbb{R}).$ The Sturm-Liouville eigenvalue problem is given by the following differential equation:
$\mathcal{L}u(x)=-\chi\omega(x)u(x),\quad\omega\in L^{1}(I,\mathbb{R}).$ (3)
That is,
$\frac{d}{dx}\Big{[}p(x)\frac{du}{dx}\Big{]}+q(x)u(x)+\chi\omega(x)u(x)=0,\quad
x\in I.$ (4)
Note that a Sturm-Liouville operator satisfies the following properties,
1. 1.
$u\mathcal{L}v-v\mathcal{L}u=\Big{[}p(uv^{\prime}-vu^{\prime})\Big{]}^{\prime}$
( Lagrange’s identity )
2. 2.
The eigenvalues of $\mathcal{L}$ are real and form an infinite countable set
$\chi_{0}<\chi_{1}<\cdots<\chi_{n}<\cdots$ with
$\lim_{n\rightarrow+\infty}\chi_{n}=+\infty.$
3. 3.
For each eigenvalue $\chi_{n}$ there exists an eigenfunction $\phi_{n}$ having
n zeros on $[a,b].$
4. 4.
Eigenfunctions corresponding to different eigenvalues are orthogonal with
respect to the following inner product
${\left\langle{f,g}\right\rangle}_{\omega}=\int_{a}^{b}f(x)g(x)\omega(x)dx,\quad
f,g\in L^{2}(I,\omega).$
In the sequel, we assume that $\omega(x)\geq 0$, for $x\in(a,b).$
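For the reader's convenience, we recall that Lagrange's identity in property 1 follows from a direct expansion: by the product rule,
$\Big{[}p(uv^{\prime}-vu^{\prime})\Big{]}^{\prime}=u(pv^{\prime})^{\prime}+pu^{\prime}v^{\prime}-v(pu^{\prime})^{\prime}-pv^{\prime}u^{\prime}=u(pv^{\prime})^{\prime}-v(pu^{\prime})^{\prime}=u\mathcal{L}v-v\mathcal{L}u,$
since the zero-order terms $quv$ cancel in the last difference.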
### 2.2 Some facts about GPSWFs and CPSWFs
We first recall that, for $c>0$, the prolate spheroidal wave functions (PSWFs), denoted $\psi_{n,c}$, were introduced by D. Slepian as solutions of the following energy maximization problem:
$\mbox{ Find }f=\arg\max_{f\in
B_{c}}\frac{\int_{-1}^{1}|f(t)|^{2}dt}{\int_{\mathbb{R}}|f(t)|^{2}dt},$
where $B_{c}$ is the classical Paley-Wiener space, defined by
$B_{c}=\left\\{f\in L^{2}(\mathbb{R}),\,\,\mbox{Support
}\widehat{f}\subseteq[-c,c]\right\\}.$ (5)
Here, $\widehat{f}$ is the Fourier transform of $f\in L^{2}(\mathbb{R}).$ It has been shown that the PSWFs are also eigenfunctions of the integral operator with sinc kernel. A breakthrough in the theory of Slepian functions is due to Slepian, Pollak and Landau, who proved that the PSWFs are also eigenfunctions of a Sturm-Liouville operator by establishing a commutativity property. For more details about Slepian’s functions we refer the reader to [25, 26, 27]. In this work we are interested in two generalizations of the PSWFs.
The first basis is the circular prolate spheroidal wave functions (CPSWFs), or radial part of the 2d-Slepian functions, introduced by D. Slepian [27] as solutions of the following problem:
$\mbox{ Find }f=\arg\max_{f\in
HB^{\alpha}_{c}}\frac{\int_{0}^{1}|f(t)|^{2}dt}{\int_{0}^{\infty}|f(t)|^{2}dt},$
where $HB^{\alpha}_{c}$ is the Hankel Paley-Wiener space, defined by
$HB^{\alpha}_{c}=\left\\{f\in L^{2}(\mathbb{R}),\,\,\mbox{Support
}\mathcal{H}^{\alpha}f\subseteq[-c,c]\right\\}.$ (6)
Here the Hankel transform $\mathcal{H}^{\alpha}$ is defined, for $f\in
L^{1}(0,\infty)$, by
$\mathcal{H}^{\alpha}f(x)=\int_{0}^{\infty}\sqrt{xy}J_{\alpha}(xy)f(y)dy.$
Here $J_{\alpha}(\cdot)$ is the Bessel function of the first kind of order $\alpha>-1/2$. Like the Fourier transform, $\mathcal{H}^{\alpha}$ can be extended to a unitary operator on $L^{2}(0,\infty)$. The CPSWFs are also the band-limited eigenfunctions of the finite Hankel transform $\mathcal{H}_{c}^{\alpha}$ defined on $L^{2}(0,1)$ with kernel $H_{c}^{\alpha}(x,y)=\sqrt{cxy}J_{\alpha}(cxy)$ (see for example [27]). That is,
$\mathcal{H}_{c}^{\alpha}(\varphi^{\alpha}_{n,c})=\mu_{n,\alpha}(c)\varphi^{\alpha}_{n,c}.$
(7)
In his pioneering work [27], D. Slepian showed that the compact integral
operator $\mathcal{H}_{c}^{\alpha}$ commutes with the following Sturm-
Liouville differential operator $\mathcal{L}^{\alpha}_{c}$ defined on
$C^{2}([0,1])$ by
$\mathcal{L}_{c}^{\alpha}(\phi)=-\dfrac{d}{dx}\left[(1-x^{2})\dfrac{d}{dx}\phi\right]+\left(c^{2}x^{2}-\dfrac{\dfrac{1}{4}-\alpha^{2}}{x^{2}}\right)\phi.$
(8)
Hence, $\varphi^{\alpha}_{n,c}$ is the $n-$th bounded eigenfunction of the
positive self-adjoint operator $\mathcal{L}_{c}^{\alpha}$ associated with the
eigenvalue $\chi_{n,\alpha}(c),$ that is
$-\dfrac{d}{dx}\left[(1-x^{2})\dfrac{d}{dx}\varphi^{\alpha}_{n,c}(x)\right]+\left(c^{2}x^{2}-\dfrac{\dfrac{1}{4}-\alpha^{2}}{x^{2}}\right)\varphi^{\alpha}_{n,c}(x)=\chi_{n,\alpha}(c)\varphi^{\alpha}_{n,c}(x),\quad
x\in[0,1].$ (9)
The family $(\varphi_{n,c}^{\alpha})_{n\geq 0}$ forms an orthonormal basis of $L^{2}(0,1)$, and the associated eigenvalues $\chi_{n,\alpha}(c)$ satisfy the following inequalities (see [27]):
$(2n+\alpha+1/2)(2n+\alpha+3/2)\leq\chi_{n,\alpha}(c)\leq(2n+\alpha+1/2)(2n+\alpha+3/2)+c^{2}.$
(10)
The second family we consider in this work is the weighted (sometimes called generalized) prolate spheroidal wave functions, introduced by Wang-Zhang [30] as solutions of a Sturm-Liouville problem or, equivalently, as eigenfunctions of an integral operator. The GPSWFs are also solutions of the following problem, as given in [18]:
$\mbox{Find }f={\displaystyle arg\max_{f\in
B^{\alpha}_{c}}\frac{\|f\|^{2}_{L^{2}_{\omega_{\alpha}}(I)}}{\|\widehat{f}\|^{2}_{L^{2}(\omega_{-\alpha}(\frac{\cdot}{c}))}}},$
where $\omega_{\alpha}(x)=(1-x^{2})^{\alpha}$ and $B^{(\alpha)}_{c}$ is the restricted Paley-Wiener space, defined by
$B_{c}^{(\alpha)}=\\{f\in L^{2}(\mathbb{R}),\,\,\mbox{Support
}\widehat{f}\subseteq[-c,c],\,\,\widehat{f}\in
L^{2}\big{(}(-c,c),\omega_{-\alpha}(\frac{\cdot}{c})\big{)}\\}.$
More precisely, the GPSWFs are the eigenfunctions of the weighted finite
Fourier transform operator $\mathcal{F}_{c}^{(\alpha)}$ defined by
$\mathcal{F}_{c}^{(\alpha)}f(x)=\int_{-1}^{1}e^{icxy}f(y)\,\omega_{\alpha}(y)\,\mathrm{d}y.$
(11)
It is well known, (see [18, 30]) that they are also eigenfunctions of the
compact and positive operator
$\mathcal{Q}_{c}^{(\alpha)}=\frac{c}{2\pi}\mathcal{F}_{c}^{({\alpha})^{*}}\circ\mathcal{F}_{c}^{(\alpha)}$
which is defined on $L^{2}(I,\omega_{\alpha})$ by
$\mathcal{Q}_{c}^{(\alpha)}g(x)=\int_{-1}^{1}\frac{c}{2\pi}\mathcal{K}_{\alpha}(c(x-y))g(y)\omega_{\alpha}(y)\,dy.$
(12)
Here,
$\mathcal{K}_{\alpha}(x)=\sqrt{\pi}2^{\alpha+1/2}\Gamma(\alpha+1)\frac{J_{\alpha+1/2}(x)}{x^{\alpha+1/2}}.$
It has been shown in [18, 30] that the last two integral operators commute
with the following Sturm-Liouville operator $\mathcal{L}_{c}^{(\alpha)}$
defined on $C^{2}[-1,1]$ by
$\mathcal{L}_{c}^{(\alpha)}(f)(x)=-\frac{1}{\omega_{\alpha}(x)}\frac{d}{dx}\left[\omega_{\alpha}(x)(1-x^{2})f^{\prime}(x)\right]+c^{2}x^{2}f(x).$
(13)
Also, note that the $(n+1)-$th eigenvalue $\chi_{n,\alpha}(c)$ of
$\mathcal{L}_{c}^{(\alpha)}$ satisfies the following classical inequalities,
$n(n+2\alpha+1)\leq\chi_{n,\alpha}(c)\leq n(n+2\alpha+1)+c^{2},\quad\forall
n\geq 0.$ (14)
## 3 Statement of results
In this section, we state the main results of this paper, which will be proved in the following sections. As mentioned before, the main issue studied in this paper is to obtain necessary and sufficient conditions for the convergence of the Bochner-Riesz expansion of a function $f$ in the GPSWF and CPSWF bases. We start with the case of the GPSWFs in the following theorem.
###### Theorem 1.
Let $0\leq\alpha<3/2$, let $\delta$ and $c$ be two positive numbers, and let
$(\psi_{n,c}^{(\alpha)})_{n\geq 0}$ be the family of weighted prolate
spheroidal wave functions. For a smooth function $f$ on $I=(-1,1)$, we define
$\Psi_{R}^{\delta}f=\sum_{n=0}^{\infty}\left(1-\frac{\chi_{n,\alpha}(c)}{R}\right)^{\delta}_{+}{\left\langle{f,\psi_{n,c}^{(\alpha)}}\right\rangle}_{L^{2}(I,\omega_{\alpha})}\psi_{n,c}^{(\alpha)}.$
Then, for every $1\leq p<\infty$, $\Psi^{\delta}_{R}$ can be extended to a
bounded operator $L^{p}(I,\omega_{\alpha})\to L^{p}(I,\omega_{\alpha})$.
Further, $\Psi^{\delta}_{R}f$ is uniformly bounded if, and only if, $\delta>\max\{\frac{\gamma_{\alpha}(p^{\prime})}{2},0\}$ and $p\not=p_{0}=2-\frac{1}{\alpha+3/2}$, where
$\gamma_{\alpha}(p)=\begin{cases}0&\mbox{ if }1<p<p^{\prime}_{0}\\\
\epsilon&\mbox{ if }p=p^{\prime}_{0}\\\
2(\alpha+1)\left[\frac{1}{2}-\frac{1}{p}\right]-\frac{1}{2}&\mbox{ if
}p>p^{\prime}_{0}\\\ \alpha+1&\mbox{ if }p=1\end{cases}.$
and $\epsilon$ is an arbitrary real number. Note that $p^{\prime}$ denotes the dual exponent of $p$.
###### Remark 1.
The sufficient condition in both the GPSWF and CPSWF cases remains valid for all $\alpha>-1/2$.
###### Remark 2 (Two special cases).
Recall that $\psi^{(\alpha)}_{n,0}=\widetilde{P}^{(\alpha,\alpha)}_{n}$, so we recover the same result for normalized Gegenbauer polynomials. Note that both conditions (A) and (B), defined in the proof of the last theorem, remain valid for $\widetilde{P}_{n}^{(\alpha,\beta)}$ with exactly the same proof; since the transference theorem, which is the key step of the necessary condition, has been proven in [16], the last result also holds for Jacobi polynomials for all $\alpha,\beta>-1/2$. For $\alpha=0$ and $c>0$, $\psi^{0}_{n,c}=\psi_{n,c}$ is the classical prolate spheroidal wave function (PSWF), and the PSWFs satisfy the Bochner-Riesz mean convergence if and only if
$\delta>\max\{0,\frac{\gamma_{0}(p^{\prime})}{2}\}.$
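As a quick numerical illustration of this special case, the following sketch (our own, with arbitrary choices of $\alpha$, $\delta$, $R$ and test function) assembles the Bochner-Riesz mean (1) in the normalized Gegenbauer basis, using the value $\chi_{n,\alpha}(0)=n(n+2\alpha+1)$ from (14):

```python
import numpy as np
from scipy.special import eval_gegenbauer

alpha = 0.5          # weight exponent in (1 - x^2)^alpha  (arbitrary choice)
lam = alpha + 0.5    # C_n^(lam) is orthogonal w.r.t. (1 - x^2)^(lam - 1/2)
delta, R = 1.0, 400.0

# Gauss-Legendre quadrature for integrals over (-1, 1) against the weight.
x, w = np.polynomial.legendre.leggauss(400)
wgt = (1.0 - x**2) ** alpha

def phi(n):
    """n-th Gegenbauer polynomial, normalized numerically in L^2(omega_alpha)."""
    p = eval_gegenbauer(n, lam, x)
    return p / np.sqrt(np.sum(w * wgt * p**2))

f = np.abs(x)                        # test function on (-1, 1)
riesz_mean = np.zeros_like(x)
n = 0
while n * (n + 2 * alpha + 1) < R:   # chi_{n,alpha}(0), cf. (14)
    chi, pn = n * (n + 2 * alpha + 1), phi(n)
    a_n = np.sum(w * wgt * f * pn)   # expansion coefficient a_n(f)
    riesz_mean += (1.0 - chi / R) ** delta * a_n * pn
    n += 1

err = np.sqrt(np.sum(w * wgt * (f - riesz_mean) ** 2))
print(f"weighted L2 error of the Bochner-Riesz mean: {err:.3e}")
```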
Let us now focus on the circular case.
###### Theorem 2.
Let $\alpha\geq 1/2$, $c>0$ and $(\varphi_{n,c}^{(\alpha)})_{n\geq 0}$ be the
family of Hankel prolate spheroidal wave functions. For a smooth function $f$
on $I=(0,1)$, we define
$\Psi_{R}^{\delta}f=\sum_{n=0}^{\infty}\left(1-\frac{\chi_{n,\alpha}(c)}{R}\right)^{\delta}_{+}{\left\langle{f,\varphi_{n,c}^{(\alpha)}}\right\rangle}_{L^{2}(0,1)}\varphi_{n,c}^{(\alpha)}.$
Then, for every $1\leq p<\infty$, $\Psi^{\delta}_{R}$ can be extended to a
bounded operator $L^{p}(0,1)\to L^{p}(0,1)$. Further, $\Psi^{\delta}_{R}f$ is
uniformly bounded if, and only if,
$\delta>\max\\{\frac{\gamma(p^{\prime})}{2},0\\}$, where
$\gamma(p)=\begin{cases}\frac{1}{p}-\frac{1}{2}&\mbox{ if }1<p<4\\\
\epsilon-\frac{1}{4}&\mbox{ if }p=4\\\
\frac{1}{3}\left[\frac{1}{p}-1\right]&\mbox{ if }p>4\\\ 1&\mbox{ if
}p=1\end{cases}.$
## 4 Proof of sufficient condition
Let $(I,\omega)$ be a measure space such that $\omega$ is a bounded weight
function. We denote by $p^{\prime}=\frac{p}{p-1}$ the dual index of $p$.
Throughout this section, $\mathcal{L}$ denotes a Sturm-Liouville operator and
$\varphi_{n}$ (respectively $\lambda_{n}$) the sequence of the associated
eigenfunctions (respectively eigenvalues). The Riesz means of index $\delta>0$
associated with $\mathcal{L}$ of a function
$f\in\mathcal{C}^{\infty}(I,\mathbb{R})$ are consequently defined as
$\Psi^{\delta}_{R}f=\sum_{n=0}^{\infty}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}a_{n}(f)\varphi_{n}\quad\mbox{with}\quad
a_{n}(f)=\int_{I}f(y)\varphi_{n}(y)d\mu(y).$ (15)
$\Psi^{\delta}_{R}f$ can also be written as
$\Psi^{\delta}_{R}f(x)=\int_{I}K_{R}^{\delta}(x,y)f(y)d\mu(y)\quad\mbox{where}\quad
K_{R}^{\delta}(x,y)=\sum_{n=0}^{\infty}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}\varphi_{n}(x)\varphi_{n}(y)$
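As a concrete instance of (15), the following sketch (a numerical illustration assuming the special case of Remark 2 with $\alpha=0$ and $c=0$, where $\varphi_{n}$ are the normalized Legendre polynomials and $\lambda_{n}=n(n+1)$) evaluates $\Psi_{R}^{\delta}f$ on a grid by Gauss–Legendre quadrature.

```python
import numpy as np
from numpy.polynomial import legendre

# Riesz means (15) in the Gegenbauer/Legendre case alpha = 0, c = 0 of
# Remark 2: phi_n = normalized Legendre polynomial, lambda_n = n(n+1).
nodes, weights = legendre.leggauss(400)          # Gauss-Legendre rule on (-1, 1)

def normalized_legendre(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return np.sqrt((2 * n + 1) / 2) * legendre.legval(x, coeffs)

def riesz_mean(f, R, delta, x):
    """Psi_R^delta f(x) = sum_n (1 - n(n+1)/R)_+^delta a_n(f) phi_n(x)."""
    out = np.zeros_like(x, dtype=float)
    n = 0
    while n * (n + 1) < R:                       # only lambda_n < R contribute
        phi = normalized_legendre(n, nodes)
        a_n = np.sum(weights * f(nodes) * phi)   # a_n(f) = <f, phi_n>
        out += (1 - n * (n + 1) / R) ** delta * a_n * normalized_legendre(n, x)
        n += 1
    return out

f = np.abs                                       # continuous but not smooth
x = np.linspace(-0.99, 0.99, 5)
for R in (1e2, 1e3, 1e4):
    err = np.max(np.abs(riesz_mean(f, R, delta=1.0, x=x) - f(x)))
    print(f"R = {R:.0e}: max error on grid = {err:.3e}")
```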
Our aim in this section is to prove simultaneously the sufficient conditions
of our two main theorems. More precisely, we will state several conditions on
$\varphi_{n}$ that ensure the convergence of $\Psi_{R}^{\delta}f$ to $f$
in the $L^{p}$ norm as $R\to\infty$, and verify that both families satisfy
these conditions. Assume that $\varphi_{n}$ satisfies the following
conditions:
* $(A)$
For every $1\leq p\leq\infty$ and every $n$, $\varphi_{n}\in L^{p}(I,\omega)$.
Further, we assume that there are an exponent $\gamma(p)\geq 0$ and a constant
$C>0$ such that ${\left\|{\varphi_{n}}\right\|}_{L^{p}(\mu)}\leq Cn^{\gamma(p)}$.
* $(B)$
The sequence $(\lambda_{n})$ of the eigenvalues of the operator $\mathcal{L}$
satisfies the following properties
1. 1.
$\displaystyle\sum_{\lambda_{n}\in(m,M)}1\leq C(M-m)$ for all $0\leq m<M$.
2. 2.
There exists $\varepsilon>0$ such that
$\lambda_{n}\geq Cn^{\varepsilon}.$
First of all, we make sense of $\Psi^{\delta}_{R}f$ for every
$f\in L^{p}(\mu)$. Indeed,
${\left\|{K^{\delta}_{R}}\right\|}_{L^{p}(\mu)\otimes
L^{p^{\prime}}(\mu)}\leq\sum_{\lambda_{n}<R}{\left\|{\varphi_{n}}\right\|}_{p}{\left\|{\varphi_{n}}\right\|}_{p^{\prime}}\leq\sum_{\lambda_{n}<R}n^{\gamma(p)+\gamma(p^{\prime})}\leq
CR^{\frac{\left(\gamma(p)+\gamma(p^{\prime})\right)}{\varepsilon}+1},$
so that the integral operator $\Psi_{R}^{\delta}$ can be extended to a
continuous operator $L^{p}(\mu)\to L^{p}(\mu)$ with
${\left\|{\Psi_{R}^{\delta}}\right\|}_{L^{p}\to
L^{p}}\leq{\left\|{K^{\delta}_{R}}\right\|}_{L^{p}(\mu)\otimes
L^{p^{\prime}}(\mu)}.$
The following theorem is one of the main results of this paper.
###### Theorem 3.
With the above notation and under conditions $(A)$ and $(B)$ with
$\delta>\delta(p)=\max\\{\frac{\gamma(p^{\prime})}{\varepsilon},0\\}$, there
exists a constant $C>0$ satisfying the following inequality
${\left\|{\Psi_{R}^{\delta}}\right\|}_{(L^{p}(I,w),L^{p}(I,w))}\leq C.$ (16)
The following lemma will be used in the proof of the previous theorem.
###### Lemma 1.
Let $1\leq p\leq 2$. Then, for every $f\in L^{p}(I,\omega),$ we have
${\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}_{L^{2}(I,\omega)}\leq
C(p)M^{\frac{\gamma(p^{\prime})}{\varepsilon}}(M-m)^{\frac{1}{2}}{\left\|{f}\right\|}_{L^{p}(I,\omega)}.$
(17)
###### Proof.
By orthogonality and Hölder’s inequality, we have
$\displaystyle{\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,\omega)}$
$\displaystyle=$
$\displaystyle\sum_{\lambda_{n}\in(m,M)}a_{n}^{2}(f)\leq\sum_{\lambda_{n}\in(m,M)}{\left\|{\varphi_{n}}\right\|}^{2}_{L^{p^{\prime}}(I,\omega)}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
From condition $(A)$, we have
${\left\|{\varphi_{n}}\right\|}_{L^{p^{\prime}}(I,\omega)}\leq
Cn^{\gamma(p^{\prime})}$. Since condition ($B2$) gives
$n^{2\gamma(p^{\prime})}\leq C\lambda_{n}^{\frac{2\gamma(p^{\prime})}{\varepsilon}}$,
we also obtain, by using condition ($B1$),
$\displaystyle{\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,\omega)}$
$\displaystyle\leq$
$\displaystyle\sum_{\lambda_{n}\in(m,M)}n^{2\gamma(p^{\prime})}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}\leq
C\sum_{\lambda_{n}\in(m,M)}\lambda_{n}^{\frac{2\gamma(p^{\prime})}{\varepsilon}}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
CM^{\frac{2\gamma(p^{\prime})}{\varepsilon}}\left(\sum_{\lambda_{n}\in(m,M)}1\right){\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
CM^{(\frac{2\gamma(p^{\prime})}{\varepsilon})}(M-m){\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}.$
Then one gets
${\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}_{L^{2}(I,\omega)}\leq
CM^{(\frac{\gamma(p^{\prime})}{\varepsilon})}(M-m)^{\frac{1}{2}}{\left\|{f}\right\|}_{L^{p}(I,\omega)}.$
∎
###### Proof of Theorem 3.
We should mention here that some parts of the proof of this theorem are
inspired by [8]. Without loss of generality, we may consider $1\leq p<2$ and
conclude by duality. To prove (16), we decompose the multiplier
$\Psi_{R}^{\delta}$. In order to do so, let
$\phi\in\mathcal{C}^{\infty}_{0}(\mathbb{R})$ with support on $(1/2,2)$ such
that $\displaystyle\sum_{k\in\mathbb{Z}}\phi(2^{k}t)=1$ and
$\displaystyle\phi_{0}(t)=1-\sum_{k=1}^{+\infty}\phi(2^{k}t)$ for all $t>0$.
We define
$\phi_{R,k}^{\delta}(t)=\left(1-\frac{t}{R}\right)_{+}^{\delta}\phi\left(2^{k}(1-\frac{t}{R})\right).$
We recall from [8] that this last function has the following properties:
1. 1.
$\mbox{supp}\left(\phi_{R,k}^{\delta}\right)\subseteq(R(1-2^{-k+1}),R(1-2^{-k-1}))$,
2. 2.
$\sup_{t\in\mathbb{R}}|\phi_{R,k}^{\delta}(t)|\leq C2^{-k\delta}$,
3. 3.
$\forall N\geq 0,$ there exists $C_{N}>0$ such that
$|\partial_{t}^{N}\phi_{R,k}^{\delta}(t)|\leq
C_{N}\Big{(}\frac{2^{k}}{R}\Big{)}^{N}.$
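A concrete choice of the dyadic function $\phi$ is easily built from a mollified step. The sketch below (one standard construction, offered only as an illustration and not claimed to be the choice made in [8]) takes $\phi(t)=h(t)-h(2t)$ for a smooth cutoff $h$ equal to $1$ on $(0,1]$ and to $0$ on $[2,\infty)$, so that the dyadic sum telescopes to $1$ for $t>0$.

```python
import numpy as np

# A smooth step s: s = 0 on (-inf, 0], s = 1 on [1, inf).
def bump(x):
    return np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-300)), 0.0)

def step(x):
    return bump(x) / (bump(x) + bump(1.0 - x))

# Smooth cutoff h with h = 1 on (0, 1] and h = 0 on [2, inf).
def h(t):
    return 1.0 - step(t - 1.0)

# phi = h(t) - h(2t) is supported in (1/2, 2), and the dyadic sum telescopes:
#   sum_{|k| <= K} phi(2^k t) = h(2^{-K} t) - h(2^{K+1} t) -> 1   (t > 0).
def phi(t):
    return h(t) - h(2.0 * t)

t = np.linspace(0.01, 10.0, 1000)
total = sum(phi(2.0**k * t) for k in range(-12, 13))
print("max |sum_k phi(2^k t) - 1| =", np.max(np.abs(total - 1.0)))
```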
Furthermore, we define
$\Psi_{R,k}^{\delta}.f=\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}\qquad
k=1,2,\cdots$ (18)
Then, we have
$\displaystyle\Psi_{R}^{\delta}f$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\phi_{0}(1-\frac{\lambda_{n}}{R})\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}+\sum_{n=0}^{\infty}\left(\sum_{k=1}^{+\infty}\phi(2^{k}(1-\frac{\lambda_{n}}{R}))\right)\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\phi_{0}(1-\frac{\lambda_{n}}{R})\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}+\sum_{k=1}^{\left[\frac{\log(R)}{\log(2)}\right]}\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}+\sum_{k=\left[\frac{\log(R)}{\log(2)}\right]+1}^{\infty}\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}$
$\displaystyle=$
$\displaystyle\psi_{R,0}^{\delta}f+\sum_{k=1}^{\left[\frac{\log(R)}{\log(2)}\right]}\Psi_{R,k}^{\delta}f+\mathcal{R}_{R}^{\delta}f.$
It is clear that the main term is the second one. With the same approach as
in [8], we will prove the following proposition:
###### Proposition 1.
Let $1\leq p<2$ and $\delta>\delta(p)=\frac{\gamma(p^{\prime})}{\varepsilon}$.
There exists $\beta>0$ such that for every $f\in L^{p}(I,w)$, we have
${\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{p}(I,w)}\leq
C2^{-k\beta}{\left\|{f}\right\|}_{L^{p}(I,w)},$ (19)
where $C$ is a constant independent of $R$ and $f$.
###### Proof.
Let $x_{0}=\frac{a+b}{2}\in(a,b)$ and $r=\frac{b-a}{4}>0$ such that
$(x_{0}-r,x_{0}+r)\subseteq(a,b).$ Note that, for every $1\leq
k\leq\left[\frac{\log(R)}{\log(2)}\right]=k_{R}$, we have
$r_{k}^{\alpha}=\left(\frac{2^{k}}{R}\right)^{\mu(p)}r<r$, where
$\mu(p)=\frac{\frac{\gamma(p^{\prime})}{\varepsilon}+\frac{1}{2}}{\frac{1}{p}-\frac{1}{2}}$.
So we notice that
$I=(a,b)=(x_{0}-r_{k}^{\alpha},x_{0}+r_{k}^{\alpha})\cup\\{y\in(a,b),|y-x_{0}|>r_{k}^{\alpha}\\}=I_{k,1}^{\alpha}\cup
I_{k,2}^{\alpha}$.
We start by providing an $L^{p}$ bound for
${\left\|{\Psi_{R,k}^{\delta}}\right\|}_{L^{p}(I^{\alpha}_{k,1},\omega)}$. To
do so, we reduce the $L^{p}$ inequality (19) to a certain
$(L^{p},L^{2})$ inequality using the previous lemma.
Using Parseval's formula and the fact that
$\mbox{supp}\left(\phi_{R,k}^{\delta}\right)\subseteq(R_{k,1},R_{k,2})$, where
$R_{k,1}=R(1-2^{-k+1})$ and $R_{k,2}=R(1-2^{-k-1}),$ we have
$\displaystyle{\left\|{\Psi_{R,k}^{\delta}f}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle=$
$\displaystyle{\left\|{\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle=$ $\displaystyle{\left\|{\sum_{R_{k,1}\leq\lambda_{n}\leq
R_{k,2}}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)},$
Using the previous lemma with $m=R_{k,1}$, $M=R_{k,2}$ and the fact that
$\displaystyle\sup_{t\in\mathbb{R}}|\phi_{R,k}^{\delta}(t)|\leq
C2^{-k\delta}$, one gets
$\displaystyle{\left\|{\Psi_{R,k}^{\delta}f}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{-2k\delta}{\left\|{\sum_{R_{k,1}\leq\lambda_{n}\leq
R_{k,2}}a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$ $\displaystyle\leq$
$\displaystyle
C2^{-2k\delta}R^{(2\frac{\gamma(p^{\prime})}{\varepsilon})}\left(\frac{3R}{2^{k+1}}\right){\left\|{f}\right\|}^{2}_{L^{p}(I,w)}.$
Hence, we have
${\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{2}(I,w)}\leq
C2^{-k(\delta+\frac{1}{2})}R^{\frac{\gamma(p^{\prime})}{\varepsilon}+\frac{1}{2}}{\left\|{f}\right\|}_{L^{p}(I,w)}.$
(20)
By combining Hölder's inequality and (20), we obtain
$\displaystyle{\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{p}(I_{k,1}^{\alpha},w)}$
$\displaystyle\leq$
$\displaystyle(\mu(I_{k,1}))^{\frac{1}{p}-\frac{1}{2}}{\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{2}(I_{k,1}^{\alpha},w)}$
(21) $\displaystyle\leq$
$\displaystyle(2r_{k}^{\alpha})^{\frac{1}{p}-\frac{1}{2}}{\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{-k(\delta-\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}_{L^{p}(I,w)}.$
Let $\displaystyle
s_{R,k}^{\delta}(u,v)=\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})\varphi_{n}(u)\varphi_{n}(v)$
be the kernel of $\Psi_{R,k}^{\delta}$. It remains to find an estimate of
$||\Psi_{R,k}^{\delta}f||_{L^{p}(I_{k,2}^{\alpha},w)}$; to this end we use the
Schur test together with the symmetry of $s_{R,k}^{\delta}$, so it suffices
to prove the inequality
$\sup_{u\in
I_{k,2}^{\alpha}}{\left\|{s_{R,k}^{\delta}(u,.)}\right\|}_{L^{1}(I_{k,2}^{\alpha})}\leq
C2^{-k\varepsilon_{0}}$
for some $\varepsilon_{0}>0$ and some $C>0$ depending only on $p.$
We consider
$g_{R,k}^{\delta}(\lambda)=\left(1-\frac{\lambda^{2}}{R}\right)_{+}^{\delta}e^{\lambda^{2}/R}\phi(2^{k}(1-\frac{\lambda^{2}}{R}))$
which satisfies the following properties (see [8]):
1. 1.
For every non-negative integer $i$ there exists a constant $C_{i}$ such that
for all $s>0$
$\int_{|t|\geq s}|\hat{g}_{R,k}^{\delta}(t)|dt\leq
C_{i}s^{-i}R^{-i/2}2^{(i-\delta)k}$ (22)
2. 2.
${\left\|{g_{R,k}^{\delta}(\sqrt{\mathcal{L}})}\right\|}_{(L^{2},L^{2})}\leq
C2^{-k\delta}.$ (23)
For our purpose, we consider a positive self-adjoint operator
$\mathcal{L}$ on $L^{2}(\mathbb{R})$ such that the semigroup
$e^{-t\mathcal{L}}$, generated by $-\mathcal{L}$, has a kernel $p_{t}(u,v)$
obeying the Gaussian upper bound
$|p_{t}(u,v)|\leq\frac{C}{\sqrt{t}}\exp{\left(-\frac{|u-v|^{2}}{Ct}\right)}$
(24)
for a constant $C>0$ (see [14]).
For all $u\in\mathbb{R}$ and $t>0$, one gets the following estimate
${\left\|{p_{t}(u,.)}\right\|}_{L^{2}(\mathbb{R})}\leq C.$ (25)
On the other hand, there exists $i_{0}\in\mathbb{N}$ such that
$2^{i_{0}-1}<R^{\mu(p)}<2^{i_{0}}$, and we can see that
$I_{k,2}^{\alpha}\subseteq\displaystyle\cup_{\mu(p)k-i_{0}\leq j\leq 0}D_{j}$
where $D_{j}=\\{y,2^{j}r\leq|y-x_{0}|<2^{j+1}r\\}.$ Since $\mathcal{L}$ is a
positive self-adjoint operator, it is clear that
$\phi_{R,k}^{\delta}(\mathcal{L})=g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\exp{\left(-\mathcal{L}/R\right)}.$
(26)
Hence one gets
$\displaystyle s_{R,k}^{\delta}(u,v)$ $\displaystyle=$ $\displaystyle
g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\right)(v)$
$\displaystyle=$ $\displaystyle
g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)(v)+g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|\geq
2^{j-1}r\\}}\right)(v)$ $\displaystyle=$ $\displaystyle
s_{R,k}^{\delta,1}(u,v)+s_{R,k}^{\delta,2}(u,v).$
Using the fact that $g_{R,k}^{\delta}$ is an even function, together with the
Fourier inversion formula, we have
$g_{R,k}^{\delta}(\sqrt{\lambda})=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\lambda})}dt.$
Hence, we obtain
$\displaystyle s_{R,k}^{\delta,1}(u,v)$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\mathcal{L}})}\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)(v)dt.$
Moreover, the operator $\cos{(t\sqrt{\mathcal{L}})}$ is bounded on $L^{2}$,
with a kernel $\mathcal{K}_{t}$ whose support satisfies (see [14, 28])
$\mbox{Supp}\left(\mathcal{K}_{t}\right)=\\{(u,v)\in\mathbb{R}^{2},|u-v|\leq
c_{0}|t|\\}$
From (23), (25) and the previous analysis, one gets
$\displaystyle{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}{\left\|{\int_{\mathbb{R}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\mathcal{L}})}\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)dt}\right\|}_{L^{1}(D_{j})}$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}{\left\|{\int_{|t|>\frac{2^{j-1}r}{c_{0}}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\mathcal{L}})}\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)dt}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$
$\displaystyle\frac{\mu^{1/2}(D_{j})}{\sqrt{2\pi}}\int_{|t|>\frac{2^{j-1}r}{c_{0}}}|\hat{g}_{R,k}^{\delta}(t)|{\left\|{p_{1/R}(u,.)}\right\|}_{L^{2}(D_{j})}dt$
$\displaystyle\leq$
$\displaystyle\frac{C}{\sqrt{2\pi}}2^{\frac{j+1}{2}}\int_{|t|>\frac{2^{j-1}r}{c_{0}}}|\hat{g}_{R,k}^{\delta}(t)|dt$
Let $i>\frac{\mu+\frac{1}{2}}{2(\mu+1-\frac{1}{p})}$ where
$\mu=\frac{\gamma(p^{\prime})}{\varepsilon}>0.$ Then by (22), there exists a
constant $C_{i}>0$ such that
$\displaystyle{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}2^{j/2}(\frac{2^{j}}{c_{0}})^{-i}R^{-i/2}2^{(i-\delta)k}$
$\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}c_{0}^{i}2^{(i-\delta)k}2^{j(1/2-i)}.$
Then, we obtain
$\displaystyle{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(I_{k,2})}$
$\displaystyle\leq$ $\displaystyle\sum_{\mu(p)k-i_{0}\leq j\leq
0}{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}c_{0}^{i}2^{(i-\delta)k}\sum_{\mu(p)k-i_{0}\leq
j\leq 0}2^{j(1/2-i)}$ $\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}c_{0}^{i}2^{(i-\delta)k}2^{(i-1/2)(i_{0}-\mu(p)k+1)}$
$\displaystyle\leq$ $\displaystyle C^{\prime}_{i}2^{-k\varepsilon_{1}}.$
From our assumption on $i$,
$\varepsilon_{1}=\delta-i+(i-\tfrac{1}{2})\frac{\mu+\frac{1}{2}}{\frac{1}{p}-\frac{1}{2}}>0.$
Having estimated the kernel $s_{R,k}^{\delta,1}$ on
$L^{1}(I_{k,2}^{\alpha})$, it remains to estimate the kernel
$s_{R,k}^{\delta,2}$ on $L^{1}(I_{k,2}^{\alpha})$.
From (23), (24) and using the fact that $R\leq R^{\mu(p)}$, one gets the
following inequality
$\displaystyle{\left\|{s_{R,k}^{\delta,2}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle=$
$\displaystyle\int_{D_{j}}|g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\chi_{\\{w,|w-x_{0}|>2^{j-1}r\\}}\right)(v)|dv$
$\displaystyle\leq$
$\displaystyle{\left\|{g_{R,k}^{\delta}(\sqrt{\mathcal{L}})}\right\|}_{(L^{2},L^{2})}{\left\|{p_{1/R}(u,.)\chi_{\\{w,|w-x_{0}|>2^{j-1}r\\}}}\right\|}_{L^{2}(D_{j})}$
$\displaystyle\leq$ $\displaystyle
C2^{-k\delta}\sqrt{R}e^{(-CR2^{2j-2})}\left(\mu(D_{j})\right)^{1/2}$
$\displaystyle\leq$ $\displaystyle
C2^{-k\delta}2^{\frac{i_{0}+j}{2}}e^{-C2^{2(i_{0}+j)}}.$
Hence, we conclude that
$\displaystyle{\left\|{s_{R,k}^{\delta,2}(u,.)}\right\|}_{L^{1}(I_{k,2})}$
$\displaystyle\leq$ $\displaystyle\sum_{\mu(p)k-i_{0}\leq j\leq
0}{\left\|{s_{R,k}^{\delta,2}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$ $\displaystyle
C2^{-k\delta}\sum_{i=i_{0}+j\geq\mu(p)k}2^{\frac{i}{2}}e^{-C2^{2i}}$
$\displaystyle\leq$ $\displaystyle C^{\prime}2^{-k\delta}.$
∎
###### Proposition 2.
Let $1\leq p\leq 2$ and
$\delta>\delta(p)=\frac{\gamma(p^{\prime})}{\varepsilon}$, then for all $f\in
L^{p}(I,w)$, we have
${\left\|{\psi_{R,0}^{\delta}f}\right\|}_{L^{p}(I,w)}\leq
C{\left\|{f}\right\|}_{L^{p}(I,w)}.$ (27)
where $C$ is a constant independent of $f$ and $R.$
###### Proof.
It suffices to use the same techniques as those used in the previous proof to
get an estimate of ${\left\|{\psi_{R,0}^{\delta}f}\right\|}_{L^{p}(I_{1},w)}$
and ${\left\|{\psi_{R,0}^{\delta}f}\right\|}_{L^{p}(I_{2},w)}$ for all $f\in
L^{p}(I,w),$ where $I=(a,b)=I_{1}\cup I_{2}$ with
$I_{1}=(x_{0}-r^{\alpha}_{0},x_{0}+r^{\alpha}_{0})$ and
$I_{2}=\\{y,|y-x_{0}|>r^{\alpha}_{0}\\}$ where
$r^{\alpha}_{0}=\frac{r}{R^{\mu(p)}}.$ ∎
To conclude the proof of the theorem, it suffices to find a uniform bound for
$\mathcal{R}_{R}^{\delta}$.
###### Proposition 3.
Let $1\leq p\leq 2$ and
$\delta>\delta(p)=\frac{\gamma(p^{\prime})}{\varepsilon}$, then for all $f\in
L^{p}(I,w)$, we have
${\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}_{L^{p}(I,w)}\leq
C{\left\|{f}\right\|}_{L^{p}(I,w)}.$ (28)
where $C$ depends only on $p$.
###### Proof.
From Hölder's inequality and the previous lemma, we have
$\displaystyle{\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}^{2}_{L^{p}(I,w)}$
$\displaystyle\leq$ $\displaystyle
2^{2(\frac{1}{p}-\frac{1}{2})}{\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
2^{2(\frac{1}{p}-\frac{1}{2})}\sum_{k=K_{R}+1}^{\infty}\sum_{n=0}^{\infty}{\left\|{\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}\sum_{k=K_{R}+1}^{\infty}2^{-2k\delta}\sum_{R_{k,1}\leq\lambda_{n}\leq
R_{k,2}}{\left\|{a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}\sum_{k=K_{R}+1}^{\infty}2^{-2k(\delta+\frac{1}{2})}R^{2(\frac{1}{2}+\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}2^{-2(\delta+\frac{1}{2})\big{(}\left[\frac{\log(R)}{\log(2)}\right]+1\big{)}}R^{2(\frac{1}{2}+\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}R^{-2(\delta-(\frac{\gamma(p^{\prime})}{\varepsilon}))}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
Finally we obtain
$\displaystyle{\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
C2^{(\frac{1}{p}-\frac{1}{2})}R^{-(\delta-\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
C(p){\left\|{f}\right\|}_{L^{p}(I,\omega)}.$
∎
∎
###### Corollary 1.
Under the notation and conditions of the previous Theorem, we have for all
$f\in L^{p}(I,w)$
$\Psi_{R}^{\delta}f\to f~{}~{}\mbox{as}~{}~{}R\to\infty.$ (29)
###### Proof.
Step 1: We prove that, for every $f\in\mathcal{C}^{\infty}(I,\mathbb{R})$,
$\Psi^{\delta}_{R}f\to f$ in $L^{p}(I,\omega)$. Note that
$\displaystyle\Big{|}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}{\left\langle{f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}$
$\displaystyle\leq$
$\displaystyle\Big{|}{\left\langle{f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}=\frac{1}{\lambda_{n}}\Big{|}{\left\langle{f,\mathcal{L}\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}$
(30) $\displaystyle=$
$\displaystyle\frac{1}{\lambda_{n}}\Big{|}{\left\langle{\mathcal{L}f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}=\cdots=\frac{1}{\lambda_{n}^{k}}\Big{|}{\left\langle{\mathcal{L}^{k}.f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}$
$\displaystyle\leq$ $\displaystyle
n^{-k\varepsilon}{\left\|{\mathcal{L}^{k}.f}\right\|}_{L^{2}(I,\omega)}.$
Since ${\left\|{\varphi_{n}}\right\|}_{L^{p}(I,\omega)}\leq Cn^{\gamma(p)}$, it
suffices to take $k$ big enough to have $\gamma(p)-k\varepsilon<-1$ and obtain
the convergence of the series in $L^{p}(I,\omega)$.
Since
$\displaystyle{\left\|{\Psi^{\delta}_{R}f-f}\right\|}_{2}^{2}=\sum_{n=0}^{\infty}\Big{(}(1-\frac{\lambda_{n}}{R})^{\delta}_{+}-1\Big{)}^{2}|a_{n}(f)|^{2}\to
0$ as $R\to\infty$, the result remains true for $1\leq p<\infty$.
Step 2: Let $\varepsilon>0$. By density of
$\mathcal{C}^{\infty}_{0}(I,\mathbb{R})$ in $L^{p}(I,\omega)$, there exists
$g\in\mathcal{C}^{\infty}_{0}(I,\mathbb{R})$ such that
${\left\|{f-g}\right\|}_{L^{p}(I,\omega)}<\varepsilon$, and by the uniform
boundedness of $\Psi^{\delta}_{R}$ we have
${\left\|{\Psi^{\delta}_{R}f-\Psi^{\delta}_{R}g}\right\|}_{L^{p}(I,\omega)}\leq C\varepsilon$ for every $R$.
By writing,
${\left\|{\Psi^{\delta}_{R}.f-f}\right\|}_{L^{p}(I,\omega)}\leq{\left\|{\Psi^{\delta}_{R}.f-\Psi^{\delta}_{R}.g}\right\|}_{L^{p}(I,\omega)}+{\left\|{\Psi^{\delta}_{R}.g-g}\right\|}_{L^{p}(I,\omega)}+{\left\|{f-g}\right\|}_{L^{p}(I,\omega)},$
one gets the desired result. ∎
To conclude the proof of the sufficient conditions of both Theorems 1 and 2,
it suffices to verify that the two considered bases satisfy conditions (A) and
(B). We will prove this result only for the case of GPSWFs; the other case is
almost identical.
We first recall that from (13), the GPSWFs are the eigenfunctions of the
Sturm-Liouville operator $\mathcal{L}_{c}^{(\alpha)}.$ Also, note that the
$(n+1)$-th eigenvalue $\chi_{n,\alpha}(c)$ of $\mathcal{L}_{c}^{(\alpha)}$
satisfies the following classical inequalities,
$n^{2}\leq n(n+2\alpha+1)\leq\chi_{n,\alpha}(c)\leq
n(n+2\alpha+1)+c^{2},\quad\forall n\geq 0.$
Moreover, for every $0\leq m<M$ such that $M-m>1$, we have
$\displaystyle\sum_{\chi_{n,\alpha}(c)\in(m,M)}1$ $\displaystyle\leq$
$\displaystyle\sum_{n(n+2\alpha+1)\in(\max(0,m-c^{2}),M)}1$
$\displaystyle\leq$
$\displaystyle\sum_{(n+\alpha+1/2)^{2}-(\alpha+1/2)^{2}\in(\max(0,m-c^{2}),M)}1$
$\displaystyle\leq$
$\displaystyle\sum_{n\in\left((\max(0,m-c^{2})+(\alpha+1/2)^{2})^{\frac{1}{2}}-(1/2+\alpha),(M+(\alpha+1/2)^{2})^{\frac{1}{2}}-(1/2+\alpha)\right)}1$
$\displaystyle\leq$ $\displaystyle C(M-m).$
It follows that condition (B) is satisfied.
From [6, Lemma 2.6], one can conclude that condition (A) is satisfied for
weighted prolate spheroidal wave functions for $1<p<\infty$. Moreover, it has
been shown in [18] that ${\left\|{\psi^{(\alpha)}_{n,c}}\right\|}_{\infty}\leq
C\Big{(}\chi_{n,\alpha}(c)\Big{)}^{\frac{\alpha+1}{2}}.$ Then, by using (14),
we obtain ${\left\|{\psi^{(\alpha)}_{n,c}}\right\|}_{1}\leq
C\Big{(}\chi_{n,\alpha}(c)\Big{)}^{\frac{\alpha+1}{2}}\leq Cn^{\alpha+1}.$
###### Remark 3.
The uniform norm of the CPSWFs has been given in [5].
## 5 Proof of necessary condition
The transferring theorem from the uniform boundedness of $\Psi_{R}^{\delta}$
to the uniform boundedness of the Hankel multiplier transform operator
$\mathcal{M}_{\alpha}$ defined by
$\mathcal{M}_{\alpha}(f)=\mathcal{H}_{\alpha}\left(\phi(.)\mathcal{H}_{\alpha}(f)\right)$
can be used to derive the necessary condition. Note here that $\phi$ is a bounded
function on $\mathbb{R}$, continuous except on a set of Lebesgue measure zero
and $\mathcal{H}_{\alpha}$ is the modified Hankel operator defined by
$\mathcal{H}_{\alpha}(f)(x)=\int_{0}^{\infty}\frac{J_{\alpha}(xy)}{(xy)^{\alpha}}f(y)y^{2\alpha+1}dy.$
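For numerical experimentation, $\mathcal{H}_{\alpha}$ can be approximated by truncating the integral when $f$ decays rapidly. The sketch below (a naive quadrature illustration; the cutoff $M$, the grid size and the test function are arbitrary choices) handles the small-argument limit $J_{\alpha}(z)/z^{\alpha}\to 1/(2^{\alpha}\Gamma(\alpha+1))$ and checks the result against the classical Weber integral $\mathcal{H}_{\alpha}(e^{-y^{2}})(x)=e^{-x^{2}/4}/2^{\alpha+1}$.

```python
import numpy as np
from scipy.special import jv, gamma

def modified_hankel(f, alpha, x, M=20.0, n_quad=2000):
    """H_alpha(f)(x) = int_0^inf J_alpha(xy)/(xy)^alpha f(y) y^{2*alpha+1} dy,
    truncated to (0, M) for f that is negligible beyond M."""
    y = (np.arange(n_quad) + 0.5) * (M / n_quad)   # midpoint rule, avoids y = 0
    dy = M / n_quad
    z = np.outer(x, y)
    # J_alpha(z)/z^alpha, using the limit 1/(2^alpha Gamma(alpha+1)) at z = 0
    kernel = np.where(z > 1e-12,
                      jv(alpha, z) / np.maximum(z, 1e-12)**alpha,
                      1.0 / (2.0**alpha * gamma(alpha + 1)))
    return kernel @ (f(y) * y**(2 * alpha + 1) * dy)

# Check against the Weber integral: H_alpha(e^{-y^2})(x) = e^{-x^2/4}/2^{alpha+1}
alpha = 0.5
x = np.linspace(0.0, 5.0, 6)
numeric = modified_hankel(lambda y: np.exp(-y**2), alpha, x)
exact = np.exp(-x**2 / 4.0) / 2.0**(alpha + 1)
print("max error:", np.max(np.abs(numeric - exact)))
```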
From [12] and the transferring theorem, the uniform boundedness of
$\Psi_{R}^{\delta}$ holds true if and only if
$\delta>\max\\{2(\alpha+1)|\frac{1}{p}-\frac{1}{2}|-\frac{1}{2},0\\}.$ It is
easy to check that
$\max\\{2(\alpha+1)|\frac{1}{p}-\frac{1}{2}|-\frac{1}{2},0\\}\geq\max\\{\frac{\gamma_{\alpha}(p^{\prime})}{2},0\\}$
for every $p\not=2-\frac{1}{\alpha+3/2}$, so one gets our necessary
condition. To be more precise, let us study each transferring theorem
separately.
### 5.1 The GPSWFs case
Let us recall that the family of weighted prolate spheroidal wave functions
$\\{\psi_{n,c}^{(\alpha)}(\cos\theta)\\}_{n}$ forms an orthonormal system on
$(0,\pi)$ with respect to the measure $(\sin\theta)^{2\alpha+1}d\theta$.
For a function $f(\theta)$ integrable on $(0,\pi)$ with respect to the measure
defined above, we have formally
$f(\theta)=\sum_{n=0}^{\infty}a_{n}(f)\psi_{n,c}^{(\alpha)}(\cos\theta)\qquad
a_{n}(f)=\int_{0}^{\pi}f(\theta)\psi_{n,c}^{(\alpha)}(\cos\theta)(\sin\theta)^{2\alpha+1}d\theta$
For $p\geq 1$ and a function $f$ on $(0,\pi)$ we define the norm
${\left\|{f}\right\|}_{p}=\Bigg{(}\int_{0}^{\pi}|f(\theta)|^{p}(\sin\theta)^{2\alpha+1}d\theta\Bigg{)}^{1/p}.$
Before stating an adequate transferring theorem, let us define the notion of a
weighted prolate multiplier.
###### Definition 1.
Let $\lambda>0$ be a sufficiently large real number. The bounded sequence
$\\{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is called a
weighted prolate multiplier if there exists a constant $C>0$ such that for
every $f\in L^{p}(I,\omega_{\alpha})$, we have
${\left\|{\sum_{n=0}^{\infty}\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})a_{n}(f)\psi_{n,c}^{(\alpha)}}\right\|}_{L^{p}(I,\omega_{\alpha})}\leq
C{\left\|{f}\right\|}_{L^{p}(I,\omega_{\alpha})}.$
The smallest constant $C$ verifying this last inequality is denoted
${\left\|{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$. In
the same context, the function $\phi$ is called an $L^{p}$-Hankel transform
multiplier if
$\mathcal{M}_{\alpha}(f)=\mathcal{H}_{\alpha}(\phi(.)\mathcal{H}_{\alpha}(f))$
is uniformly bounded on
$L^{p}\left((0,\infty),\theta^{2\alpha+1}d\theta\right)$.
###### Theorem 4 (Transferring theorem).
Let $1<p<\infty$, $0\leq\alpha<3/2$ and let $\phi$ be a bounded function on
$(0,\infty)$, continuous except on a set of Lebesgue measure zero, such that
$\\{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is a weighted
prolate multiplier for all large $\lambda>0$ and
$\displaystyle\liminf_{\lambda\to\infty}{\left\|{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$
is finite. Then $\phi$ is an $L^{p}$-Hankel transform multiplier and we have
${\left\|{\mathcal{M}_{\alpha}}\right\|}_{p}\leq\liminf_{\lambda\to\infty}{\left\|{\phi\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)}\right\|}_{p}.$
###### Proof.
Let $g$ be an infinitely differentiable function with compact support in
$[0,M]$ and put $g_{\lambda}(\theta)=g(\lambda\theta)$, where $\lambda$ is a
positive real number such that $supp(g_{\lambda})\subset[0,\pi]$.
Recall that we have, by assumption,
${\left\|{\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos(.))}\right\|}_{p}\leq{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}_{p}{\left\|{g_{\lambda}}\right\|}_{p}.$
(31)
Via a simple change of variable, one can write
$\lim_{\lambda\to\infty}\lambda^{2\alpha+2}{\left\|{g_{\lambda}}\right\|}_{p}^{p}=\lim_{\lambda\to\infty}\int_{0}^{M}|g(\tau)|^{p}\Big{(}\lambda\sin(\tau/\lambda)\Big{)}^{2\alpha+1}d\tau=\int_{0}^{\infty}|g(\tau)|^{p}\tau^{2\alpha+1}d\tau.$
By using (31) together with Fatou’s lemma, one gets
$\displaystyle\displaystyle\int_{0}^{\infty}$
$\displaystyle\liminf_{\lambda\to\infty}\Big{|}\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)\Big{|}^{p}\tau^{2\alpha+1}d\tau$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}\liminf_{\lambda\to\infty}\Big{|}\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)\Big{|}^{p}\lambda^{2\alpha+1}\sin(\tau/\lambda)^{2\alpha+1}d\tau$
$\displaystyle\leq$
$\displaystyle\liminf_{\lambda\to\infty}\lambda^{2\alpha+1}\int_{0}^{\infty}\Big{|}\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)\Big{|}^{p}\sin(\tau/\lambda)^{2\alpha+1}d\tau$
$\displaystyle\leq$
$\displaystyle\liminf_{\lambda\to\infty}\lambda^{2\alpha+2}{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}^{p}_{p}{\left\|{g_{\lambda}}\right\|}^{p}_{p}=\liminf_{\lambda\to\infty}{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}^{p}_{p}\Bigg{[}\int_{0}^{\infty}|g(\tau)|^{p}\tau^{2\alpha+1}d\tau\Bigg{]}.$
Then there exists a sequence
$\lambda_{1}<\lambda_{2}<\cdots\to\infty$ such that
$G(\tau,\lambda)=\displaystyle\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})$
converges weakly to a function $G(\tau)$. Furthermore, $G$ satisfies
$\Bigg{[}\int_{0}^{\infty}|G(\tau)|^{p}\tau^{2\alpha+1}d\tau\Bigg{]}^{1/p}\leq\liminf_{\lambda\to\infty}{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}_{p}\Bigg{[}\int_{0}^{\infty}|g(\tau)|^{p}\tau^{2\alpha+1}d\tau\Bigg{]}^{1/p}.$
Let us now prove that $G=\mathcal{H}_{\alpha}(\phi.\mathcal{H}_{\alpha}(g))$.
Let
$G(\tau,\lambda)=\chi_{(0,\pi\lambda)}(\tau)\Big{[}\sum_{n=0}^{N[\lambda]}+\sum_{N[\lambda]+1}^{\infty}\Big{]}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)=G^{N}(\tau,\lambda)+H^{N}(\tau,\lambda)$
We start with the following lemma, which will be proved later.
###### Lemma 2.
We have
$\int_{0}^{\infty}\Big{|}H^{N}(\tau,\lambda)\Big{|}^{2}\tau^{2\alpha+1}d\tau=O(\frac{1}{N})\mbox{
uniformly in }\lambda$
Therefore, by the diagonal argument, there exists a subsequence, also denoted
$\\{\lambda_{j}\\}$ for the sake of clarity, such that $H^{N}(\tau,\lambda_{j})$
converges weakly to a function $H^{N}(\tau)$ with
$\displaystyle\int_{0}^{\pi}\Big{|}H^{N}(\tau)\Big{|}^{p}\tau^{2\alpha+1}d\tau=O(\frac{1}{N^{2}}).$
Then, there exists a subsequence $H^{N_{j}}$, denoted for the same reason
$H^{N}$, that converges to zero a.e.
Since $G^{N}(\tau,\lambda)=G(\tau,\lambda)-H^{N}(\tau,\lambda)$,
$G^{N}(\tau,\lambda)$ converges weakly to a limit $G^{N}(\tau)$ and
$G(\tau)=G^{N}(\tau)+H^{N}(\tau)$. Thus $G^{N}(\tau)$ converges to $G(\tau)$
almost everywhere. On the other hand, we will prove the following lemma.
###### Lemma 3.
We have
$\lim_{\lambda\to\infty}G^{N}(\tau,\lambda)=\int_{0}^{N}\phi(v)\mathcal{H}_{\alpha}.g(v)\frac{J_{\alpha}(v\tau)}{(v\tau)^{\alpha}}v^{2\alpha+1}dv,$
which implies that
$G(\tau)=\int_{0}^{\infty}\phi(v)\mathcal{H}_{\alpha}.g(v)\frac{J_{\alpha}(v\tau)}{(v\tau)^{\alpha}}v^{2\alpha+1}dv,$
and completes our proof. ∎
###### Proof of Lemma 2.
We have
$\displaystyle\int_{0}^{M}|H^{N}(\tau,\lambda)|^{2}\Big{(}\lambda\sin\frac{\tau}{\lambda}\Big{)}^{2\alpha+1}d\tau$
$\displaystyle=$
$\displaystyle\lambda^{2\alpha+2}\int_{0}^{\pi}|H^{N}(\lambda\tau,\lambda)|^{2}(\sin\tau)^{2\alpha+1}d\tau$
(32) $\displaystyle=$
$\displaystyle\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}|\phi(\frac{n}{\lambda})|^{2}|a_{n}(g_{\lambda})|^{2}.$
Recall that in [19], the authors have given the following uniform approximation
of GPSWFs in terms of Jacobi polynomials for $0\leq\alpha<3/2$:
$\psi_{n,c}^{(\alpha)}(\cos\theta)=A_{n}\widetilde{P}_{n}^{(\alpha,\alpha)}(\cos\theta)+R_{n,c}^{(\alpha)}(\cos\theta),\qquad{\left\|{R_{n,c}^{(\alpha)}}\right\|}_{\infty}\leq
C_{\alpha,c}\frac{1}{2n+2\alpha+1}.$ (33)
We also know that (see for example [29])
$n(\sin\theta)^{2\alpha+1}\widetilde{P}_{n}^{(\alpha,\alpha)}(\cos\theta)=2\frac{h^{\alpha+1}_{n-1}}{h^{(\alpha)}_{n}}\frac{d}{d\theta}\Big{[}(\sin\theta)^{2\alpha+2}\widetilde{P}_{n-1}^{(\alpha+1,\alpha+1)}(\cos\theta)\Big{]}$
(34)
By combining (33) and (34), one gets
$(\sin\theta)^{2\alpha+1}\psi_{n,c}^{(\alpha)}(\cos\theta)=\frac{2}{n}\frac{h^{\alpha+1}_{n-1}}{h^{(\alpha)}_{n}}\frac{d}{d\theta}\Big{[}(\sin\theta)^{2\alpha+2}\widetilde{P}_{n-1}^{(\alpha+1,\alpha+1)}(\cos\theta)\Big{]}+R_{n,c}^{(\alpha)}(\cos\theta).$
Then, integrating by parts one gets
$\displaystyle a_{n}(g_{\lambda})$ $\displaystyle=$
$\displaystyle\frac{C}{n}\int_{0}^{\pi}\frac{g^{\prime}(\lambda\theta)}{\sin\theta}\widetilde{P}_{n-1}^{\alpha+1}(\cos\theta)(\sin\theta)^{2\alpha+3}d\theta+\int_{0}^{\pi}R_{n,c}^{(\alpha)}(\cos\theta)g(\lambda\theta)d\theta$
$\displaystyle=$ $\displaystyle a_{n,1}(g_{\lambda})+a_{n,2}(g_{\lambda})$
Let’s come back to (32). We have by Bessel’s inequality
$\displaystyle\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}|\phi(\frac{n}{\lambda})|^{2}|a_{n,1}(g_{\lambda})|^{2}$
$\displaystyle\leq$ $\displaystyle
C\lambda^{2\alpha+2}\Big{[}\frac{\lambda}{N(\lambda-1)}\Big{]}^{2}\sum_{N[\lambda]+1}^{\infty}|\frac{n}{\lambda}a_{n,1}(g_{\lambda})|^{2}$
(35) $\displaystyle\leq$
$\displaystyle\frac{C}{N^{2}}\lambda^{2\alpha+2}\int_{0}^{\pi}\Big{|}\frac{g^{\prime}(\lambda\theta)}{\sin\theta}\Big{|}^{2}(\sin\theta)^{2\alpha+3}d\theta$
$\displaystyle=$
$\displaystyle\frac{C}{N^{2}}\int_{0}^{M}|g^{\prime}(\theta)|^{2}\Big{(}\lambda\sin\frac{\theta}{\lambda}\Big{)}^{2\alpha+1}d\theta$
$\displaystyle=$ $\displaystyle O(\frac{1}{N^{2}})\mbox{ uniformly in
}\lambda.$
On the other hand, using Cauchy-Schwarz’s inequality
$\displaystyle\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}|\phi(\frac{n}{\lambda})|^{2}|a_{n,2}(g_{\lambda})|^{2}$
$\displaystyle\leq$ $\displaystyle
C\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}{\left\|{R_{n,c}^{\alpha}}\right\|}^{2}_{2}{\left\|{g(\lambda.)}\right\|}_{2}^{2}$
$\displaystyle\leq$ $\displaystyle
C\sum_{N[\lambda]+1}^{\infty}\frac{1}{n^{2}}\int_{0}^{M}|g(\theta)|^{2}\big{(}\lambda\sin\frac{\theta}{\lambda}\big{)}^{2\alpha+1}d\theta$
$\displaystyle=$ $\displaystyle O(\frac{1}{N}).$
Then, one concludes that
$\int_{0}^{M}|H^{N}(\tau,\lambda)|^{2}\tau^{2\alpha+1}d\tau=O(\frac{1}{N})\mbox{
uniformly in }\lambda.$
∎
###### Proof of Lemma 3.
We now use the following uniform approximation of GPSWFs in terms of Bessel
functions (we refer the reader once again to [19]):
$\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})=A_{\alpha}(q)\frac{\chi_{n,c}^{1/4}S(\cos\frac{\tau}{\lambda})^{1/2}J_{\alpha}(\chi_{n,c}^{1/2}S(\cos\frac{\tau}{\lambda}))}{(\sin\frac{\tau}{\lambda})^{\alpha+1/2}(1-q\cos^{2}\frac{\tau}{\lambda})^{1/4}}+E_{n,c}(\cos\frac{\tau}{\lambda}),$
(36)
where
$\Big{|}E_{n,c}(\cos\theta)\Big{|}\leq\frac{C.A_{\alpha}(q)}{(1-q)}\frac{(\sin\theta)^{1/2}}{(1-q\cos^{2}\theta)^{1/4}}\qquad\forall\theta\in[0,\pi]\quad\mbox{and}\quad
S(x)=\int_{x}^{1}\sqrt{\frac{1-qt^{2}}{1-t^{2}}}dt.$
Note that it has also been shown in [4] that
$\frac{\sin\theta\sqrt{1-q\cos^{2}\theta}}{S(\cos\theta)}=1+\Big{(}\frac{q}{1-q}+\frac{3}{4}\Big{)}(1-\cos\theta)+o(1-\cos\theta).$
Thus, taking into account that $\sqrt{x}J_{\alpha}(x)$ is bounded, we can
write, for $n\leq N[\lambda]$,
$\displaystyle\frac{\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})}{\lambda^{\alpha}}$
$\displaystyle=$ $\displaystyle
n^{1/2}\frac{J_{\alpha}(\frac{n\tau}{\lambda})}{\Big{(}\lambda.\sin\frac{\tau}{\lambda}\Big{)}^{\alpha}}-n^{1/2}\frac{J_{\alpha}(\frac{n\tau}{\lambda})}{\Big{(}\lambda.\sin\frac{\tau}{\lambda}\Big{)}^{\alpha}}\big{(}\frac{q}{1-q}+3/4\big{)}\frac{\tau^{2}}{4\lambda^{\alpha+2}}+O(\frac{1}{n.\lambda^{\alpha+2}})$
(37) $\displaystyle=$ $\displaystyle
n^{1/2}J_{\alpha}(\frac{n\tau}{\lambda})\Big{(}\frac{1}{\tau}\Big{)}^{\alpha}+o(\frac{1}{n}).$
On the other hand,
$\displaystyle\lambda^{\alpha}a_{n}(g_{\lambda})$ $\displaystyle=$
$\displaystyle\lambda^{\alpha-1}\int_{0}^{M}g(\tau)\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})(\sin\frac{\tau}{\lambda})^{2\alpha+1}d\tau$
$\displaystyle=$
$\displaystyle\frac{1}{\lambda^{2}}\Bigg{[}A_{\alpha}(q)n^{1/2}\int_{0}^{\infty}g(\tau)J_{\alpha}(\frac{n\tau}{\lambda})\Big{(}\lambda\sin\frac{\tau}{\lambda}\Big{)}^{\alpha+1}d\tau\Bigg{]}+o(\frac{1}{\lambda^{2}})$
$\displaystyle=$
$\displaystyle\frac{n^{1/2}}{\lambda^{2}}\int_{0}^{\infty}g(\tau)J_{\alpha}(\frac{n\tau}{\lambda})\tau^{\alpha+1}d\tau+o(\frac{1}{\lambda^{2}}).$
Then, by combining the last two estimates, one gets
$G^{N}(\tau,\lambda)=\sum_{n=0}^{N[\lambda]+1}\phi(\frac{n}{\lambda})\mathcal{H}_{\alpha}.g(\frac{n}{\lambda})J_{\alpha}(\frac{n\tau}{\lambda})\frac{1}{\tau^{\alpha}}\frac{n}{\lambda^{2}}+\frac{n}{\lambda^{2}}o(1).$
Therefore, by letting $\lambda\to\infty$, we conclude the proof of Lemma 3.
∎
### 5.2 The CPSWFs case
As for the example studied in the previous section, we start by establishing
an adequate transferring theorem for the circular case. To do this, we
introduce suitable terminology.
###### Definition 2.
Let $\lambda>0$ be a sufficiently large real number. A bounded sequence
$\\{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is said to be a
circular prolate multiplier if there exists a constant $C>0$ such that for
every $f\in L^{p}(0,1)$, we have
${\left\|{\sum_{n=0}^{\infty}m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})a_{n}(f)\varphi^{(\alpha)}_{n,c}}\right\|}_{L^{p}(0,1)}\leq
C{\left\|{f}\right\|}_{L^{p}(0,1)}.$
The smallest constant $C$ verifying the last inequality is denoted
${\left\|{m\Big{(}\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\Big{)}}\right\|}_{p}$.
Here
$\mathcal{M}:=\mathcal{M}_{0}=\mathcal{H}_{0}\Big{(}m(.)\mathcal{H}_{0}(f)\Big{)}$
is the multiplier related to the Hankel transform operator.
###### Theorem 5 (Circular transferring theorem).
Let $1<p<\infty$, $\alpha\geq 1/2$ and let $m$ be a bounded function on
$(0,\infty)$, continuous except on a set of Lebesgue measure zero, such that
$\\{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is a circular prolate
multiplier for all large $\lambda>0$ and
$\displaystyle\liminf_{\lambda\to\infty}{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$
is finite. Then $m$ is an $L^{p}$-Hankel transform multiplier and we have
${\left\|{\mathcal{M}}\right\|}_{p}\leq\liminf_{\lambda\to\infty}{\left\|{m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)}\right\|}_{p}.$
###### Proof.
Let $\lambda>0$ and $g\in C^{\infty}_{c}(0,\infty)$ supported in $(0,M)$ such
that $\lambda>\frac{2}{\pi}M$. Let $g_{\lambda}(\tau)=g(\lambda\tau)$ for
every $\tau\in(0,1)$ and $G_{\lambda}=g_{\lambda}\circ\arccos.$
By assumption, we have
${\left\|{\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}}\right\|}_{L^{p}\left(0,1\right)}\leq{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}{\left\|{G_{\lambda}}\right\|}_{L^{p}\left(0,1\right)}.$
Then, we get
${\left\|{\chi_{(0,\lambda\frac{\pi}{2})}\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}\left(G_{\lambda}\right)\varphi_{n}(\cos(\frac{.}{\lambda}))}\right\|}^{p}_{L^{p}((0,\infty),\sin(\frac{.}{\lambda}))}\leq{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}^{p}_{p}{\left\|{g}\right\|}^{p}_{L^{p}\left((0,\infty),\sin(\frac{.}{\lambda})\right)}$
We denote by
$F_{\lambda}(\theta)=\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}\left(G_{\lambda}\right)\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right),$
hence we have
${\left\|{F_{\lambda}}\right\|}^{p}_{L^{p}((0,\infty),\sin(\frac{.}{\lambda}))}\leq{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}^{p}_{p}{\left\|{g}\right\|}^{p}_{L^{p}((0,\infty),\sin(\frac{.}{\lambda}))}.$
(38)
By using (38), Fatou’s Lemma and the fact that
$\displaystyle\lim_{\lambda\to\infty}\lambda\sin(\frac{\theta}{\lambda})=\theta$,
we obtain
${\left\|{\displaystyle\liminf_{\lambda\to\infty}F_{\lambda}}\right\|}^{p}_{L^{p}((0,\infty),\theta
d\theta)}\leq\displaystyle\liminf_{\lambda\to\infty}{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}^{p}_{p}{\left\|{g}\right\|}^{p}_{L^{p}((0,\infty),\theta
d\theta)}$ (39)
Let
$L=\displaystyle\liminf_{\lambda\to\infty}{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$,
then there exists a sequence of $(\lambda_{j})_{j\in\mathbb{N}}$ such that
$\displaystyle\lim_{j\to\infty}\lambda_{j}=+\infty$ verifying
${\left\|{F_{\lambda_{j}}}\right\|}_{L^{p}((0,\infty),\theta
d\theta)}\leq(L+1/j){\left\|{g}\right\|}_{L^{p}((0,\infty),\theta d\theta)}.$
(40)
On the other hand, as $m$ is bounded and from Parseval's formula, we have
${\left\|{F_{\lambda_{j}}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}\leq(L+1/j){\left\|{g}\right\|}_{L^{2}((0,\infty),\theta d\theta)}.$
(41)
From (40) and (41), there exists a subsequence of
$(\lambda_{j})_{j\in\mathbb{N}}$, also denoted $(\lambda_{j})_{j\in\mathbb{N}}$,
such that the sequence $\\{F_{\lambda_{j}}\\}$ converges weakly to a function
$F$ in $L^{p}\cap L^{2}((0,\infty),\theta d\theta)$ satisfying the
following inequality
${\left\|{F}\right\|}_{L^{p}((0,\infty),\theta d\theta)}\leq
L{\left\|{g}\right\|}_{L^{p}((0,\infty),\theta d\theta)}.$ (42)
Our purpose now is to show that
$F=\mathcal{H}_{0}\left(m(.)\mathcal{H}_{0}(g)\right)$ almost everywhere on
$(0,\infty).$
Let $N\geq 1$ and $\theta\in(0,\infty)$. Then
$\displaystyle F_{\lambda}(\theta)$ $\displaystyle=$
$\displaystyle\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right)$
$\displaystyle=$
$\displaystyle\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\left[\sum_{n=0}^{N[\lambda]}+\sum_{n=N[\lambda]+1}^{\infty}\right]m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right)$
$\displaystyle=$ $\displaystyle
F^{N}_{\lambda}(\theta)+K^{N}_{\lambda}(\theta).$
Using (9), the function
$u_{n}(\theta)=\varphi^{\alpha}_{n,c}\left(\cos(\theta)\right)$ satisfies the
following differential equation
$\displaystyle\mathcal{L}(u_{n})(\theta)$ $\displaystyle=$
$\displaystyle-u_{n}^{\prime\prime}(\theta)-\frac{\cos(\theta)}{\sin(\theta)}u_{n}^{\prime}(\theta)+\left(c^{2}\cos^{2}(\theta)-\frac{1/4-\alpha^{2}}{\cos^{2}(\theta)}\right)u_{n}(\theta)$
$\displaystyle=$ $\displaystyle\chi_{n,\alpha}(c)u_{n}(\theta).$
Using the symmetry of $\mathcal{L}$ on $C^{\infty}_{c}(0,\infty)$, we obtain
$\displaystyle a_{n}(G_{\lambda})$ $\displaystyle=$
$\displaystyle{\left\langle{G_{\lambda},\varphi^{\alpha}_{n,c}}\right\rangle}_{L^{2}(0,1)}$
$\displaystyle=$
$\displaystyle\frac{1}{\chi_{n,\alpha}(c)}\int_{0}^{\frac{\pi}{2}}g_{\lambda}(\theta)\chi_{n,\alpha}(c)\varphi^{\alpha}_{n,c}\left(\cos(\theta)\right)\sin(\theta)d\theta$
$\displaystyle=$
$\displaystyle\frac{1}{\chi_{n,\alpha}(c)}\int_{0}^{\frac{\pi}{2}}g_{\lambda}(\theta)\mathcal{L}(u_{n})(\theta)\sin(\theta)d\theta$
$\displaystyle=$
$\displaystyle\frac{\lambda^{2}}{\chi_{n,\alpha}(c)}\int_{0}^{\frac{\pi}{2}}\frac{1}{\lambda^{2}}\mathcal{L}\left(g_{\lambda}\right)(\theta)u_{n}(\theta)\sin(\theta)d\theta=\frac{\lambda^{2}}{\chi_{n,\alpha}(c)}a_{n}\left(\frac{1}{\lambda^{2}}\mathcal{L}\left(g_{\lambda}\right)\right).$
Using the previous equality, Parseval's formula, the boundedness of $m$, the
well-known inequality
$\frac{2}{\pi}\theta\leq\sin(\theta)\leq\theta$ for
$0\leq\theta\leq\frac{\pi}{2}$, and (10), we obtain
$\displaystyle{\left\|{K_{\lambda}^{N}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}$ $\displaystyle=$
$\displaystyle\left[\int_{0}^{\infty}\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\left|\sum_{n=N[\lambda]+1}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}(\cos(\frac{\theta}{\lambda}))\right|^{2}\theta
d\theta\right]^{1/2}$ $\displaystyle=$
$\displaystyle\left[\int_{0}^{\lambda\frac{\pi}{2}}\left|\sum_{n=N[\lambda]+1}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}(\cos\left(\frac{\theta}{\lambda})\right)\right|^{2}\theta
d\theta\right]^{1/2}$ $\displaystyle\leq$
$\displaystyle\sqrt{\frac{\pi}{2}}\left[\lambda\int_{0}^{\lambda\frac{\pi}{2}}\left|\sum_{n=N[\lambda]+1}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right)\right|^{2}\sin\left(\frac{\theta}{\lambda}\right)d\theta\right]^{1/2}$
$\displaystyle=$
$\displaystyle\sqrt{\frac{\pi}{2}}\left[\lambda^{2}\sum_{n=N[\lambda]+1}^{\infty}m^{2}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a^{2}_{n}(G_{\lambda})\right]^{1/2}$
$\displaystyle\leq$ $\displaystyle
C\,\sqrt{\frac{\pi}{2}}\left[\frac{\lambda^{2}}{N^{4}}\sum_{n=N[\lambda]+1}^{\infty}a^{2}_{n}\left(\frac{1}{\lambda^{2}}\mathcal{L}(g_{\lambda})\right)\right]^{1/2}$
$\displaystyle\leq$
$\displaystyle\frac{C}{N^{2}}\,\sqrt{\frac{\pi}{2}}\left[{\left\|{g^{\prime\prime}+\frac{g^{\prime}}{\theta}}\right\|}_{L^{2}\left((0,\infty),\theta
d\theta\right)}+C{\left\|{g}\right\|}_{L^{2}\left((0,\infty),\theta
d\theta\right)}\right]$
Then we obtain ${\left\|{K^{N}_{\lambda}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}=O(\frac{1}{N^{2}})$ uniformly in $\lambda.$
Thus, by the diagonal argument, there exists a subsequence of
$\\{\lambda_{j}\\}$, again denoted $\\{\lambda_{j}\\}$, such that for every $N\geq
1$, $\\{K^{N}_{\lambda_{j}}\\}_{j\in\mathbb{N}}$ converges weakly to a function
$K^{N}$ in $L^{2}((0,\infty),\theta d\theta)$ satisfying
${\left\|{K^{N}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}=O(\frac{1}{N^{2}})$. One concludes that there exists a sequence
$\\{N_{k}\\}$ such that $\\{K^{N_{k}}\\}_{k\in\mathbb{N}}$ converges to zero
almost everywhere on $(0,\infty)$. Let $F^{N_{k}}=F-K^{N_{k}}$; clearly
$\\{F_{{\lambda_{j}}}^{N_{k}}\\}_{j\in\mathbb{N}}$ converges weakly to
$F^{N_{k}}$ in $L^{2}(0,\infty)$ for every $k\in\mathbb{N}$. Moreover,
$\\{F^{N_{k}}\\}$ converges to $F$ almost everywhere on $(0,\infty).$
We now prove the following equality:
$\lim_{j\to\infty}F_{{\lambda_{j}}}^{N_{k}}(x)=\int_{0}^{N_{k}}m(y)J_{0}(xy)\mathcal{H}_{0}(g)(y)ydy$
(43)
for every $x\in(0,\infty)$. By the weak convergence of
$\\{F_{{\lambda_{j}}}^{N_{k}}\\}_{j\in\mathbb{N}}$ to $F^{N_{k}}$,
${\left\langle{F_{{\lambda_{j}}}^{N_{k}},\chi_{(r,s)}}\right\rangle}$ converges
to ${\left\langle{F^{N_{k}},\chi_{(r,s)}}\right\rangle}$ for every
$0<r<s<\infty$; on the other hand, the Lebesgue dominated convergence theorem,
combined with (43), gives that
${\left\langle{F_{{\lambda_{j}}}^{N_{k}},\chi_{(r,s)}}\right\rangle}$
converges to
${\left\langle{\mathcal{H}_{0}\left(\chi_{(0,N_{k})}m(.)\mathcal{H}_{0}(g)\right),\chi_{(r,s)}}\right\rangle}$.
One concludes that
$F^{N_{k}}=\mathcal{H}_{0}\left(\chi_{(0,N_{k})}m(.)\mathcal{H}_{0}(g)\right)$
almost everywhere on $(0,\infty).$ Finally, letting $k\to\infty$ yields the
desired identity.
For the proof of (43), we need the uniform approximation of the family of
CPSWFs on $(0,1)$ which is given by the following estimates
$\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))=(-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)+\gamma_{n}^{\alpha+1/2}O(\frac{c^{2}}{n})$
(44)
for every $\theta\in(\lambda t_{n},\lambda\frac{\pi}{2})$, where
$t_{n}=\arccos(\gamma_{n})$ and
$\gamma_{n}\sim\frac{\sqrt{\alpha^{2}-1/4}}{\chi_{n,\alpha}^{1/2}(c)}$.
$\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))=A_{n}\,\chi^{1/4}_{n,\alpha}(c)\frac{\sqrt{S(\cos(\frac{\theta}{\lambda}))}J_{0}\left(\chi^{1/2}_{n,\alpha}(c)S(\cos(\frac{\theta}{\lambda}))\right)}{(\sin(\frac{\theta}{\lambda}))^{\frac{1}{2}}r_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{1/4}}+R_{n}(\cos(\frac{\theta}{\lambda}))$
(45)
for every $\theta\in(0,\lambda t_{n})$, where $A_{n}\sim 1,$
$r_{n}(t)=1-qt^{2}+\frac{1/4-\alpha^{2}}{\chi_{n,\alpha}^{1/2}(c)t^{2}}$ and
$\displaystyle\sup_{\theta\in(0,t_{n})}\left|R_{n}(\cos(\theta))\right|\leq\frac{C}{\chi^{1/2}_{n,\alpha}(c)}$,
for more details see [17].
By a straightforward computation, we have
$\frac{\sqrt{S(\cos(\frac{\theta}{\lambda}))}}{(\sin(\frac{\theta}{\lambda}))^{1/2}r_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{1/4}}=1-\beta(q)(1-\cos(\frac{\theta}{\lambda}))+o(1-\cos(\frac{\theta}{\lambda}))$
Then, we can easily check that
$\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))=\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda}\theta\right)+R_{n}(\cos(\frac{\theta}{\lambda})).$
Let $N>0$ and $\lambda>\max\\{\frac{2M}{\pi},N^{3}\\}$. By (45) we have, for
every $n\leq N[\lambda]$
$\displaystyle a_{n}(G_{\lambda})$ $\displaystyle=$
$\displaystyle{\left\langle{G_{\lambda},\varphi_{n,c}^{\alpha}}\right\rangle}_{L^{2}(0,1)}$
$\displaystyle=$
$\displaystyle\frac{1}{\lambda}\int_{0}^{\lambda\frac{\pi}{2}}\left(g_{\lambda}\circ\arccos\right)(\cos(\frac{\theta}{\lambda}))\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))\sin(\frac{\theta}{\lambda})d\theta$
$\displaystyle=$
$\displaystyle\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda}\int_{0}^{\lambda\frac{\pi}{2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)g(\theta)\sin(\frac{\theta}{\lambda})d\theta-\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)g(\theta)\sin(\frac{\theta}{\lambda})d\theta$
$\displaystyle+$ $\displaystyle\frac{(-1)^{n}B_{n}}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}g(\theta)P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}\sin(\frac{\theta}{\lambda})d\theta+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda^{b}})$
$\displaystyle=$
$\displaystyle\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{2}}\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda^{b}}).$
where $a>1$ and $b>0.$ Indeed, using the fact that
$\sup_{x>0}|\sqrt{x}J_{\alpha}(x)|\leq C_{\alpha}$, see [23], and
$\lambda\sin(\frac{\theta}{\lambda})\leq\theta$ we have
$\displaystyle\left|\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)g(\theta)\sin(\frac{\theta}{\lambda})d\theta\right|$
$\displaystyle\leq$ $\displaystyle\frac{1}{\lambda^{3/2}}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}\left|\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{1/2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)\right||g(\theta)|\theta
d\theta$ $\displaystyle\leq$
$\displaystyle\frac{1}{\lambda^{3/2}}\left[\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}\left|\frac{\chi_{n,\alpha}^{1/4}(c)\theta^{1/2}}{\lambda^{1/2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)\right|^{2}\frac{d\theta}{\theta}\right]^{1/2}{\left\|{\theta
g}\right\|}_{L^{2}(0,\infty)}$ $\displaystyle\leq$
$\displaystyle\frac{C_{0}}{\lambda^{3/2}}(\ln(\frac{\pi}{2t_{n}}))^{1/2}{\left\|{\theta
g}\right\|}_{L^{2}(0,\infty)}$ $\displaystyle\leq$
$\displaystyle\frac{C_{0}}{\lambda^{3/2}\chi_{n,\alpha}^{1/4}(c)}{\left\|{\theta
g}\right\|}_{L^{2}(0,\infty)}.$
Moreover, using the fact that
$\left|P_{n}^{(0,\alpha)}(\cos(\frac{2\theta}{\lambda}))\right|\leq
P_{n}^{(0,\alpha)}(1)=O(n^{\alpha})$, the fact that the cosine is decreasing on
$[0,\pi/2]$, and $|B_{n}|=O(n^{1/2})$, we obtain
$\left|\displaystyle\frac{(-1)^{n}B_{n}}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}g(\theta)P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}\sin(\frac{\theta}{\lambda})d\theta\right|$
$\displaystyle\leq$ $\displaystyle\frac{|B_{n}|}{\lambda^{2}}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}\left|g(\theta)\right|\left|P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)\right|\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}\theta
d\theta$ $\displaystyle\leq$
$\displaystyle\frac{C}{\lambda^{2}\chi^{1/4}_{n,\alpha}(c)}{\left\|{\theta^{3/2}g}\right\|}_{L^{2}(0,\infty)}$
Finally, there exist constants $a>1$ and $b>0$ such that
$a_{n}(G_{\lambda})=\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{2}}\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda^{b}}).$
(46)
Hence, we obtain
$F_{{\lambda_{j}}}^{N_{k}}(\theta)=\chi_{(0,\frac{\pi}{2}\lambda_{j})}(\theta)\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)a_{n}(G_{\lambda_{j}})\varphi^{\alpha}_{n,c}(\cos(\frac{\theta}{\lambda_{j}}))$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)a_{n}(G_{\lambda_{j}})\left(\chi_{(0,\lambda_{j}t_{n})}(\theta)\varphi_{n}(\cos(\frac{\theta}{\lambda_{j}}))+\chi_{(\lambda_{j}t_{n},\frac{\pi}{2}\lambda_{j})}(\theta)\varphi_{n}(\cos(\frac{\theta}{\lambda_{j}}))\right)$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)\left(\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{2}_{j}}\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}})+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda_{j}^{b}})\right)\chi_{(0,\lambda_{j}\frac{\pi}{2})}(\theta)\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda}\theta\right)$
$\displaystyle+$
$\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)a_{n}(G_{\lambda_{j}})\chi_{(\lambda_{j}t_{n},\frac{\pi}{2}\lambda_{j})}(\theta)\left((-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)-\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda}\theta\right)\right)$
$\displaystyle+$ $\displaystyle O(\frac{1}{\lambda^{\varepsilon}_{j}})$
$\displaystyle=$
$\displaystyle\chi_{(0,\frac{\pi}{2}\lambda_{j})}(\theta)\frac{1}{\lambda_{j}}\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\theta\right)\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}})+O(\frac{1}{\lambda^{\varepsilon}_{j}}).$
where $\varepsilon>0.$ Indeed, from [29], we have
$\displaystyle(-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)$
$\displaystyle=$
$\displaystyle(2n+\alpha+1)^{1/2}\left(\cos(\frac{\theta}{\lambda})\right)^{1/2}\left(\frac{\theta/\lambda}{\sin(\theta/\lambda)}\right)^{1/2}J_{0}\left(2(n+\frac{\alpha+1}{2})\frac{\theta}{\lambda}\right)$
$\displaystyle+$
$\displaystyle\frac{1}{\lambda^{1/2}}O(\frac{(2\theta)^{1/2}}{n})$
$\displaystyle=$
$\displaystyle(2n+\alpha+1)^{1/2}J_{0}\left((2n+\alpha+1)\frac{\theta}{\lambda}\right)+O(\frac{1}{n}),$
and by using (10), one gets $\chi_{n,\alpha}(c)\sim(2n+\alpha+1)^{2}$, and
concludes that
$(-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)-\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda}\theta\right)=O(\frac{1}{n}).$
Further, from [5], we have
${\left\|{\varphi^{\alpha}_{n,c}}\right\|}_{L^{\infty}(0,1)}=O(\chi^{1/2}_{n,\alpha}(c))$,
so we obtain
$a_{n}(G_{\lambda_{j}})=O(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}^{2}}).$
Finally, as $j\to\infty$, we get
$F^{N_{k}}=\mathcal{H}_{0}\left(\chi_{(0,N_{k})}m(.)\mathcal{H}_{0}(g)\right).$
∎
## References
* [1] W. O. Amrein, A. M. Hinz and D. B. Pearson, Sturm-Liouville Theory: Past and Present, Birkhäuser, Basel-Boston-Berlin, 2005.
* [2] A. I. Aptekarev, V. S. Buyarov and J. S. Dehesa, Asymptotic behavior of the $L^{p}$-norms and the entropy for general orthogonal polynomials, Russian Acad. Sci. Sb. Math. 82 (1995), 373–395.
* [3] J. J. Betancor and K. Stempak, Relating multipliers and transplantation for Fourier-Bessel expansions and Hankel transform, Tohoku Math. J. 53 (2001), 109–129.
* [4] A. Bonami and A. Karoui, Uniform Approximation and Explicit Estimates for the Prolate Spheroidal Wave Functions, Constr. Approx. 43 (2016), 15–45.
* [5] M. Boulsane and A. Karoui, The Finite Hankel Transform Operator: Some Explicit and Local Estimates of the Eigenfunctions and Eigenvalues Decay Rates, J. Four. Anal. Appl. 24 (2018), 1554–1578.
* [6] M. Boulsane, P. Jaming and A. Souabni, Mean convergence of prolate spheroidal series and their extensions, J. Functional Analysis 277 (2019).
* [7] S. Bochner, Summation of multiple Fourier series by spherical means, Trans. Amer. Math. Soc. 40 (1936), 175–207.
* [8] V. Casarino and M. M. Peloso, $L^{p}$-summability of Riesz means for the sublaplacian on complex spheres, J. London Math. Soc. (2) 83 (2011), 137–152.
* [9] P. Chen, X. T. Duong, D. He, S. Lee and L. Yan, Almost everywhere convergence of Bochner-Riesz means for the Hermite operator, arXiv:2006.05689v3.
* [10] Ó. Ciaurri and L. Roncal, The Bochner–Riesz means for Fourier–Bessel expansions, Journal of Functional Analysis 228 (2005), 83–113.
* [11] Ó. Ciaurri and L. Roncal, The Bochner–Riesz means for Fourier–Bessel expansions: Norm inequalities for the maximal operator and almost everywhere convergence, Journal of Approximation Theory 167 (2013), 121–146.
* [12] Ó. Ciaurri and J. L. Varona, An Uniform Boundedness for Bochner-Riesz Operators Related to the Hankel Transform, J. Inequal. Appl. 7 (2002), no. 6, 759–777.
* [13] L. Colzani and G. Travaglini, Estimates for Riesz Kernels of Eigenfunction Expansions of Elliptic Differential Operators on Compact Manifolds, Journal of Functional Analysis 96 (1991), 1–30.
* [14] E. B. Davies, Heat Kernels and Spectral Theory, Cambridge University Press, Cambridge, 1989.
* [15] J. Horvath, L'oeuvre mathématique de Marcel Riesz I, Cahiers du séminaire d'histoire des mathématiques, tome 3 (1982), 83–121.
* [16] S. Igari, On the Multipliers of Hankel Transform, Tohoku Math. J. 24 (1972), 201–206.
* [17] A. Karoui and I. Mehrzi, Asymptotic behaviors and numerical computations of the eigenfunctions and eigenvalues associated with the classical and circular prolate spheroidal wave functions, Journal of Applied Mathematics and Computation 218 (2012), 10871–10888.
* [18] A. Karoui and A. Souabni, Generalized Prolate Spheroidal Wave Functions: Spectral Analysis and Approximation of Almost Band-limited Functions, J. Four. Anal. Appl. 22 (2016), 383–412.
* [19] A. Karoui and A. Souabni, Weighted finite Fourier transform operator: Uniform approximations of the eigenfunctions, eigenvalues decay and behaviour, J. Sci. Comp. 71 (2) (2017), 547–570.
* [20] G. Mauceri, Riesz means for the eigenfunction expansions for a class of hypo-elliptic differential operators, Ann. Inst. Fourier 31 (1981), 115–140.
* [21] C. Meaney, Divergent Cesàro and Riesz means of Jacobi and Laguerre expansions, Proceedings of the American Mathematical Society 131 (2003), 3123–3128.
* [22] D. Müller, On Riesz means of eigenfunction expansions for the Kohn Laplacian, J. Reine Angew. Math. 401 (1989), 113–121.
* [23] A. Ya. Olenko, Upper bound on $\sqrt{x}J_{\mu}(x)$ and its applications, Integral Transforms and Special Functions 17 (2006), 455–467.
* [24] M. Riesz, Sur les fonctions conjuguées, Math. Z. 27 (1927), 218–244.
* [25] D. Slepian and H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty I, Bell System Tech. J. 40 (1961), 43–64.
* [26] D. Slepian, Prolate spheroidal wave functions, Fourier analysis and uncertainty IV: Extensions to many dimensions; generalized prolate spheroidal functions, Bell System Tech. J. 43 (1964), 3009–3057.
* [27] D. Slepian, Some comments on Fourier analysis, uncertainty and modeling, SIAM Rev. 25 (1983), 379–393.
* [28] L. Song, J. Xiao and X. Yan, Preduals of quadratic Campanato spaces associated to operators with heat kernel bounds, J. Potential Analysis 41 (2014), 849–867.
* [29] G. Szegö, Orthogonal Polynomials, Fourth edition, American Mathematical Society, Colloquium Publications, Vol. XXIII, Providence, R.I., 1975.
* [30] L. L. Wang and J. Zhang, A new generalization of the PSWFs with applications to spectral approximations on quasi-uniform grids, Appl. Comput. Harmon. Anal. 29 (2010), 303–329.
# Continuous Newton-like Methods featuring Inertia and Variable Mass
Camille Castera∗ (Department of Mathematics, University of Tübingen, Germany)
Hedy Attouch (IMAG, Université Montpellier, CNRS, France)
Jalal Fadili (ENSICAEN, Normandie Université, CNRS, GREYC, France)
Peter Ochs (Department of Mathematics, University of Tübingen, Germany)
###### Abstract
We introduce a new dynamical system, at the interface between second-order
dynamics with inertia and Newton’s method. This system extends the class of
inertial Newton-like dynamics by featuring a time-dependent parameter in front
of the acceleration, called _variable mass_. For strongly convex optimization,
we provide guarantees on how the Newtonian and inertial behaviors of the
system can be non-asymptotically controlled by means of this variable mass. A
connection with the Levenberg–Marquardt (or regularized Newton’s) method is
also made. We then show the effect of the variable mass on the asymptotic rate
of convergence of the dynamics, and in particular, how it can turn the latter
into an accelerated Newton method. We provide numerical experiments supporting
our findings. This work represents a significant step towards designing new
algorithms that benefit from the best of both first- and second-order
optimization methods.
∗ Corresponding author: <EMAIL_ADDRESS>
## 1 Introduction
### 1.1 Problem Statement
A major challenge in modern unconstrained convex optimization consists in
building fast algorithms while maintaining low computational cost and memory
footprint. This plays a central role in many key applications such as large-
scale machine learning problems or data processing. The problems we are aiming
to study are of the form
$\min_{x\in\mathbb{R}^{n}}f(x).$
Large values of $n$ call for algorithms at the interface of first- and
second-order optimization. Limited computational capabilities explain why
gradient-based (first-order) algorithms remain prominent in practice.
Unfortunately, they often require many iterations; this is true even for the
provably best algorithms for certain classes of optimization problems, for
example that of convex and strongly convex functions with Lipschitz-continuous
gradient [37, 33, 34]. On the other hand, algorithms using second-order
information (the Hessian of $f$)—with Newton’s method as prototype—adapt
locally to the geometry of the objective, allowing them to progress much
faster towards a solution. However, each iteration comes with high
computational and memory costs, which highlights a challenging trade-off. It
is therefore essential to develop algorithms that take the best of both
worlds. They are commonly referred to as (limited-memory) quasi-Newton
methods. Several quasi-Newton algorithms partly address this issue, for
example BFGS methods [18, 23, 25, 39, 31], yet, in very large-scale
applications, first-order algorithms often remain the preferred choice.
In order to reach a new level of efficiency, deep insights into the mechanism
and relations between algorithms are required. To that aim, an insightful
approach is to see optimization algorithms as discretization of ordinary
differential equations (ODEs): for small-enough step-sizes, iterates can be
modeled by a continuous-time trajectory [32, 13]. Obtaining a fast algorithm
following this strategy depends on two ingredients: choosing an ODE for which
rapid convergence to a solution can be proved, and discretizing it with an
appropriate scheme that preserves the favorable properties of the ODE.
Both steps are highly challenging, our work focuses on the ODE matter. We
study the following second-order dynamical system in a general setting:
$\varepsilon(t)\ddot{x}(t)+\alpha(t)\dot{x}(t)+\beta\nabla^{2}f(x(t))\dot{x}(t)+\nabla
f(x(t))=0,\quad t\geq 0,$ (VM-DIN-AVD)
where $f\colon\mathbb{R}^{n}\to\mathbb{R}$ is a smooth convex twice
continuously differentiable function, with gradient $\nabla f$ and Hessian
$\nabla^{2}f$ defined on $\mathbb{R}^{n}$ equipped with scalar product
$\langle\cdot,\cdot\rangle$, and induced norm $\|\cdot\|$. Additionally, $f$
is assumed to be coercive and strongly convex on bounded subsets of
$\mathbb{R}^{n}$. The functions
$\varepsilon,\alpha\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$ (where
$\mathbb{R}_{+}=[0,+\infty[$) are differentiable, non-increasing, and
$\varepsilon(t)>0$ for all $t\geq 0$. Together with $\beta>0$, they are
control parameters that define the type of dynamics that drives the trajectory
(or solution) $x\colon\mathbb{R}_{+}\to\mathbb{R}^{n}$, whose first- and
second-order derivatives are denoted $\dot{x}$ and $\ddot{x}$ respectively. We
call the above dynamics (VM-DIN-AVD), which stands for “Variable Mass
Dynamical Inertial Newton-like system with Asymptotically Vanishing Damping”
since it generalizes a broad class of ODEs whose original member is DIN [2],
where $\varepsilon$ and $\alpha$ were constant. DIN was then extended to the
case of non-constant _asymptotically vanishing dampings_ (AVD) $\alpha$ [9].
In this work we introduce the non-constant parameter $\varepsilon$ called
_variable mass_ (VM) in front of the acceleration $\ddot{x}$, in the same way
that $\alpha$ is called (viscous) _damping_ by analogy with classical
mechanics. A key feature of these ODEs, that positions them at the interface
of first- and second-order optimization, is that they possess equivalent forms
involving only $\nabla f$ but not $\nabla^{2}f$, significantly reducing
computational costs, hence enabling the design of practical algorithms, see
e.g., [20, 10, 21]. The key idea behind this is the relation
$\nabla^{2}f(x(t))\dot{x}(t)=\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\nabla
f(x(t))$, see Section 2 for an equivalent formulation of (VM-DIN-AVD)
exploiting this.
This paper emphasizes the relation between (VM-DIN-AVD) and well-studied
special cases. Indeed, taking $\varepsilon=\alpha=0$, one obtains the
Continuous Newton (CN) method [24] (CN is usually considered with $\beta=1$;
we put $\beta$ in the system to ease the discussions below)
$\beta\nabla^{2}f(x_{N}(t))\dot{x}_{N}(t)+\nabla f(x_{N}(t))=0,\quad t\geq 0,$
(CN)
known notably for being invariant to affine transformations and yielding fast
vanishing of the gradient (see Section 3). In fact, this observation shows
that (VM-DIN-AVD) is a singular perturbation of (CN), which also justifies the
terminology “Newton-like” in DIN. When $\alpha\neq 0$ but $\varepsilon=0$, we
recover the Levenberg–Marquardt (LM) method,
$\alpha(t)\dot{x}_{LM}(t)+\beta\nabla^{2}f(x_{LM}(t))\dot{x}_{LM}(t)+\nabla
f(x_{LM}(t))=0,\quad t\geq 0,$ (LM)
also known as regularized Newton method since it stabilizes (CN). In the rest
of the paper, the solutions of (CN) and (LM) will always be denoted by $x_{N}$
and $x_{LM}$ respectively. Alvarez et al. [2] showed that for $\alpha=0$,
$\beta=1$, and $\varepsilon$ constant and small, (VM-DIN-AVD) is a “perturbed”
Newton method since the distance between the solutions of (VM-DIN-AVD) and
(CN) is at most proportional to $\sqrt{\varepsilon}$ at all time. Yet, despite
the benefits of this class of ODEs, such as stabilization properties, see
e.g., [9, 10], no improvement of the rate of convergence (in values) has been
shown compared to inertial first-order dynamics [37, 41] (DIN-like systems
were thought to yield faster vanishing of the gradient compared to inertial
first-order dynamics, until recently [4]). This raises the question:
“ _are these ODEs really of Newton type?_ ”,
which is crucial in view of designing faster algorithms from them.
Figure 1: Left: phase diagram on distances from (VM-DIN-AVD) to (CN) and (LM)
(see Section 3). For each patch, the color indicates which of the distances
$\|x_{N}(t)-x(t)\|$ and $\|x_{LM}(t)-x(t)\|$ is considered, the scaling of a
corresponding upper-bound on this distance is written; in white for prior work
and in black for our contributions. The green line separates the cases
$\varepsilon\geq\alpha$ (above) and $\varepsilon\leq\alpha$ (below). Right: 2D
illustration of the trajectories of (VM-DIN-AVD) for several choices of
$\varepsilon$ on a quadratic function. Using fast-vanishing $\varepsilon(t)$
(dark-blue solid curves), one can bring the solution of (VM-DIN-AVD) close to
that of (CN), making it, for example, more robust to bad conditioning compared
to first-order dynamics (such as gradient descent).
### 1.2 Main Contributions
We show that the answer to this question is partially positive, and closely
related to the choices of $\varepsilon$ and $\alpha$. We provide general
results on the role played by these two control parameters and how they can be
chosen to control (VM-DIN-AVD), and make it close to (CN) _for all time_ , as
illustrated on the right-hand side of Figure 1, but also to obtain fast
convergence. This represents a first step towards building new fast practical
algorithms. Our main contributions are the following:
– We provide a first-order equivalent formulation of (VM-DIN-AVD), and address
the questions of existence and uniqueness of the solutions of (VM-DIN-AVD)
under mild assumptions.
– We generalize the perturbed Newtonian property discussed above to non-
constant and possibly vanishing variable masses $\varepsilon$, and “not too
large” positive dampings $\alpha$, and derive bounds that (formally) take the
form $\|x(t)-x_{N}(t)\|=O(\sqrt{\varepsilon(t)})$. We then extend these
results to larger dampings $\alpha$ and make the connection between (VM-DIN-
AVD) and (LM). This contribution is summarized in the phase diagram of Figure
1.
– Using quadratic functions as a model for strongly convex functions, we shed
light on techniques to efficiently approximate solutions of (VM-DIN-AVD). We
then show how $\varepsilon$ and $\alpha$ affect the speed of convergence.
Depending on their setting, the solutions of (VM-DIN-AVD) may either converge
as fast as that of (CN), _faster_ , or rather have a (LM) nature, as
summarized in Table 1.
– We provide numerical experiments supporting our theoretical findings.
Table 1: Informal summary of Section 4. Comparison of (VM-DIN-AVD) with other dynamics.

| Dominant parameter | Integrability in $+\infty$ | Speed w.r.t. (CN) | Speed w.r.t. (LM) |
|---|---|---|---|
| variable mass $\varepsilon$ | yes | as fast | as fast |
| variable mass $\varepsilon$ | no | faster | faster |
| viscous damping $\alpha$ | yes | as fast | only depends on $\varepsilon$ |
| viscous damping $\alpha$ | no | slower | only depends on $\varepsilon$ |
### 1.3 Related work
The system (VM-DIN-AVD) belongs to the class of inertial systems with viscous
and geometric (“Hessian-driven”) dampings, initially introduced with constant
$\varepsilon=1$ and constant $\alpha$ in [2] and called DIN (for Dynamical
Inertial Newton-like system). Except in a few cases [20, 19], most of the
follow-up work then considered extensions of DIN with non-constant AVD
$\alpha$, with in particular the DIN-AVD system with $\alpha(t)=\alpha_{0}/t$
as introduced in [9]. The reason for this popular choice for $\alpha$ is its
link with Nesterov’s method [41]. Non-constant choices for $\beta$ have been
considered [10, 1, 29, 11]. We keep it constant here, and rather focus on non-
constant $\varepsilon$, unlike prior work that used constant $\varepsilon=1$.
The mass $\varepsilon$ was considered only in the original work [2], and only
for fixed $\varepsilon$, $\beta=1$ and $\alpha=0$. VM-DIN-AVD is
however closely related to the IGS system considered in [11] as it is actually
equivalent to the latter after dividing both sides of (VM-DIN-AVD) by
$\varepsilon(t)$. Our approach—which consists in studying the connections with
other second-order dynamics as $\varepsilon$ vanishes asymptotically—is
however different from the one followed in [11], which is of independent
interest. The literature on DIN is rich, let us mention further connections
with Nesterov’s method [40, 1], extensions with Tikhonov regularization [16]
and closed-loop damping [12, 29]. The non-smooth and possibly non-convex cases
have been considered in [5, 6, 20]. Finally, avoidance of strict saddle points
in smooth non-convex optimization has been shown in [19].
The influence of the damping $\alpha$ on the (LM) dynamics has been studied in
[7, 8]. Interestingly, the conditions enforced on $\alpha$ in these papers
(formally a sub-exponential decay) are very similar to those we make on
$\varepsilon$ and $\alpha$ for (VM-DIN-AVD) (see Assumptions 1 and 2).
Regarding the second part of our analysis, which deals with the case where $f$
is quadratic, Attouch et al. [10] provided closed-form solutions to (VM-DIN-
AVD) for $\varepsilon\equiv 1$ and special choices of $\alpha$. Our work
rather deals with approximate solutions which allows considering a wide class
of functions $\varepsilon$ and $\alpha$. We rely on the Liouville–Green (LG)
method [30, 26] presented in Section 4. Generalizations of LG are also often
referred to as WKB methods [44, 28, 17] and seem to be mostly used in physics
so far. To the best of the authors’ knowledge, the current work seems to be one
of the first to use the LG method in optimization, and the first for DIN-like
systems.
### 1.4 Organization
The paper is organized as follows. We discuss the existence of solutions in
Section 2. Our main results, from a non-asymptotic control perspective, are
then presented in Section 3. An analysis of the role played asymptotically by
$\varepsilon$ and $\alpha$ is then carried out on quadratic functions in
Section 4. Finally, numerical experiments are presented in Section 5, and some
conclusions are then drawn.
## 2 Existence and Uniqueness of Solutions
In the sequel, we fix $x_{0}\in\mathbb{R}^{n}$ and
$\dot{x}_{0}\in\mathbb{R}^{n}$, such that, unless stated otherwise, (VM-DIN-
AVD) is always considered with initial condition
$(x(0),\dot{x}(0))=(x_{0},\dot{x}_{0})$, and (CN) and (LM) with initial
condition $x_{N}(0)=x_{LM}(0)=x_{0}$. We also fix initial values for the
control parameters $\varepsilon(0)=\varepsilon_{0}>0$,
$\varepsilon^{\prime}(0)=\varepsilon^{\prime}_{0}\leq 0$ and
$\alpha(0)=\alpha_{0}\geq 0$. In addition to the definitions of $\varepsilon$
and $\alpha$ in Section 1.1, we assume that $\varepsilon$ is twice
differentiable, with bounded second derivative. In order to use the
Cauchy–Lipschitz Theorem, we reformulate (VM-DIN-AVD) into a first-order (in
time) system by introducing an auxiliary variable
$y:\mathbb{R}_{+}\to\mathbb{R}^{n}$. Notably, our reformulation does not
involve $\nabla^{2}f$, in the same fashion as [2, 9]. For all $t$, defining
$\nu(t)=\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t)$, we
show in Appendix A that (VM-DIN-AVD) is equivalent to
$\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla
f(x(t))+\nu(t)x(t)+y(t)&=0\\\
\dot{y}(t)+\nu^{\prime}(t)x(t)+\frac{\nu(t)}{\beta}x(t)+\frac{1}{\beta}y(t)&=0\end{cases}$
(gVM-DIN-AVD)
with initial conditions
$(x(0),y(0))=\left(x_{0},-\varepsilon_{0}\dot{x}_{0}-\beta\nabla
f(x_{0})-(\alpha_{0}-\varepsilon^{\prime}_{0}-\frac{1}{\beta}\varepsilon_{0})x_{0}\right)$.
One can notice that in the special case where $\varepsilon$ is taken constant
and equal to $1$ (that is when (VM-DIN-AVD) is simply the DIN-AVD system [9]),
we recover the same first-order formulation as that in [9].
We can then apply the Cauchy–Lipschitz Theorem. For all $t\geq 0$ and
$(u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{n}$, define the mapping
$G\left(t,(u,v)\right)=\begin{pmatrix}\frac{1}{\varepsilon(t)}(-\beta\nabla
f(u)-\nu(t)u-v)\\\
-\nu^{\prime}(t)u-\frac{\nu(t)}{\beta}u-\frac{1}{\beta}v\end{pmatrix},$
so that (gVM-DIN-AVD) rewrites
$(\dot{x}(t),\dot{y}(t))=G\left(t,(x(t),y(t))\right)$ for all $t\geq 0$. Since
$f$ is twice continuously differentiable, one can see that $G$ is continuously
differentiable w.r.t. its second argument $(u,v)$. Consequently $G$ is locally
Lipschitz continuous w.r.t. $(u,v)$ and by the Cauchy–Lipschitz Theorem, for
each initial condition, there exists a unique local solution to (gVM-DIN-AVD)
and thus to (VM-DIN-AVD). We then show that this solution is actually global
(in Appendix A) by proving the boundedness of $(x,y)$. We omit the existence
and uniqueness of the solutions of (CN) and (LM) since these are standard
results, obtained with similar arguments.
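To make this reformulation concrete, the following minimal sketch (ours, not from the paper) integrates (gVM-DIN-AVD) with an off-the-shelf stiff solver on a toy quadratic; the decays $\varepsilon(t)=1/(1+t)$ and $\alpha(t)=0.5/(1+t)$ and all constants are illustrative choices, and the `numpy`/`scipy` dependencies are assumed available. Note that the right-hand side only ever evaluates $\nabla f$, never $\nabla^{2}f$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy strongly convex quadratic f(x) = 0.5 x^T H x, minimized at x* = 0.
H = np.diag([1.0, 10.0])
grad_f = lambda x: H @ x

beta = 1.0
eps   = lambda t: 1.0 / (1.0 + t)        # variable mass (illustrative decay)
deps  = lambda t: -1.0 / (1.0 + t) ** 2  # eps'
ddeps = lambda t: 2.0 / (1.0 + t) ** 3   # eps''
alpha  = lambda t: 0.5 / (1.0 + t)       # viscous damping (alpha = 0.5 * eps here)
dalpha = lambda t: -0.5 / (1.0 + t) ** 2 # alpha'

nu  = lambda t: alpha(t) - deps(t) - eps(t) / beta     # nu(t) as defined above
dnu = lambda t: dalpha(t) - ddeps(t) - deps(t) / beta  # nu'(t)

def G(t, z):
    # Right-hand side of (gVM-DIN-AVD); no Hessian of f is evaluated.
    x, y = z[:2], z[2:]
    dx = (-beta * grad_f(x) - nu(t) * x - y) / eps(t)
    dy = -dnu(t) * x - (nu(t) / beta) * x - y / beta
    return np.concatenate([dx, dy])

x0, dx0 = np.array([2.0, 1.0]), np.zeros(2)
y0 = -eps(0) * dx0 - beta * grad_f(x0) - nu(0) * x0  # initial condition above
sol = solve_ivp(G, (0.0, 20.0), np.concatenate([x0, y0]),
                method="Radau", rtol=1e-9, atol=1e-12, dense_output=True)
print(sol.y[:2, -1])  # x(t) approaches the minimizer x* = 0
```

An implicit solver (`Radau`) is used because $1/\varepsilon(t)$ grows and makes the system increasingly stiff.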
## 3 Non-asymptotic Control of (VM-DIN-AVD)
The purpose of this section is to understand how close $x$ might be to $x_{N}$
and $x_{LM}$, as a function of $\alpha$ and $\varepsilon$. Since $f$ is
coercive and strongly convex on bounded sets, it has a unique minimizer
$x^{\star}\in\mathbb{R}^{n}$. Consequently, any two trajectories that converge
to $x^{\star}$ will eventually be arbitrarily close to each other. Thus,
asymptotic results of the form $\|x(t)-x_{N}(t)\|\xrightarrow[t\to+\infty]{}0$
are not precise enough to claim, for example, that $x$ has a “Newtonian
behavior”. Instead, we will derive upper bounds on the distance between
trajectories that hold _for all time_ $t\geq 0$, and which typically depend on
$\varepsilon$ and/or $\alpha$. We first present the case where $\alpha$ is
small relative to $\varepsilon$ and then generalize.
### 3.1 Comparison with (CN) under Moderate Viscous Dampings
When the viscous damping $\alpha$ remains moderate w.r.t. the variable mass
$\varepsilon$, one can expect the solutions of (VM-DIN-AVD) to be close to
that of (CN). We make the following assumptions.
###### Assumption 1.
There exist $c_{1},c_{2}\geq 0$ such that for all $t\geq 0$,
$|\varepsilon^{\prime}(t)|\leq c_{1}\varepsilon(t)$ and $\alpha(t)\leq
c_{2}\varepsilon(t)$.
The assumption states that $\alpha$ must decrease at least as fast as
$\varepsilon$ (up to a constant). (Assumption 1 can actually hold only after
some large-enough $t_{0}\geq 0$; we take $t_{0}=0$ for the sake of simplicity.)
The reason for assuming $|\varepsilon^{\prime}(t)|\leq c_{1}\varepsilon(t)$ is
technical and will appear more clearly in the proofs below. It formally means
that $\varepsilon$ can decrease at most exponentially fast (this is a
consequence of Gronwall’s lemma, see e.g., [22]). This is a relatively mild
assumption that holds, for example, for any polynomial decay
$\varepsilon_{0}/(t+1)^{a}$, $a\in\mathbb{N}$.
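For instance (a one-line check), for $\varepsilon(t)=\varepsilon_{0}/(t+1)^{a}$ one has $|\varepsilon^{\prime}(t)|=\frac{a}{t+1}\varepsilon(t)\leq a\,\varepsilon(t)$, so the first condition of Assumption 1 holds with $c_{1}=a$.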
We start with the main result of this section.
###### Theorem 3.1.
Let $x_{N}$ be the solution of (CN), and let $c_{1},c_{2}\geq 0$. There exist
$C_{0},C_{1},C_{2}\geq 0$, depending on $c_{1}$, $c_{2}$, such that for all
$(\varepsilon,\alpha)$ for which Assumption 1 holds with constants $c_{1}$ and
$c_{2}$, the corresponding solution $x$ of (VM-DIN-AVD) is such that for all
$t\geq 0$,
$\|x(t)-x_{N}(t)\|\leq
C_{0}e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+C_{1}\sqrt{\varepsilon(t)}+C_{2}\int_{s=0}^{t}e^{\frac{1}{\beta}(s-t)}\sqrt{\varepsilon(s)}\mathop{}\\!\mathrm{d}s.$
(1)
This extends a previous result from [2, Proposition 3.1] which states a
similar bound for constant $\varepsilon$, $\alpha\equiv 0$ and $\beta=1$.
Theorem 3.1 corresponds to the blue parts in the phase diagram of Figure 1
(see also Corollary 3.6 below).
###### Remark 3.2.
The strength of the above result comes from the fact that the constants
$C_{0},C_{1},C_{2}$ do not depend on $\varepsilon$ and $\alpha$, and that the
result is _non-asymptotic_. This allows in particular choosing
$(\varepsilon,\alpha)$ to control the distance from $x$ to $x_{N}$, for all
time $t\geq 0$.
###### Remark 3.3.
Under Assumption 1, the dynamics (VM-DIN-AVD) is dominated by the variable
mass. The damping $\alpha$ does not appear in Theorem 3.1.
The above theorem and remarks emphasize the “Newtonian nature” of (VM-DIN-
AVD). We present two lemmas before proving Theorem 3.1, and then state a
simpler bound than (1), see Corollary 3.6.
###### Lemma 3.4.
Let $(\varepsilon,\alpha)$, and let $x$ be the corresponding solution of (VM-
DIN-AVD). For all $t\geq 0$, define the function,
$U(t)=\frac{\varepsilon(t)}{2}\|\dot{x}(t)\|^{2}+f(x(t))-f(x^{\star}).$
Then $U$ is differentiable and for all $t>0$,
$\frac{\mathop{}\\!\mathrm{d}U}{\mathop{}\\!\mathrm{d}t}(t)=\frac{\varepsilon^{\prime}(t)}{2}\|\dot{x}(t)\|^{2}-\alpha(t)\|\dot{x}(t)\|^{2}-\beta\langle\nabla^{2}f(x(t))\dot{x}(t),\dot{x}(t)\rangle\leq
0.$
Therefore, in particular, $U$ is non-increasing.
###### Proof.
Let $t\geq 0$, since $x$ is twice differentiable, $U$ is differentiable and,
$\frac{\mathop{}\\!\mathrm{d}U}{\mathop{}\\!\mathrm{d}t}(t)=\frac{\varepsilon^{\prime}(t)}{2}\|\dot{x}(t)\|^{2}+\varepsilon(t)\langle\dot{x}(t),\ddot{x}(t)\rangle+\langle\dot{x}(t),\nabla
f(x(t))\rangle.$
We use the fact that $x$ is solution of (VM-DIN-AVD), to substitute
$\varepsilon(t)\ddot{x}(t)$ by its expression,
$\frac{\mathop{}\\!\mathrm{d}U}{\mathop{}\\!\mathrm{d}t}(t)=\frac{\varepsilon^{\prime}(t)}{2}\|\dot{x}(t)\|^{2}-\alpha(t)\|\dot{x}(t)\|^{2}-\beta\langle\nabla^{2}f(x(t))\dot{x}(t),\dot{x}(t)\rangle.$
By assumption $\varepsilon$ is non-increasing so for all $t>0$,
$\varepsilon^{\prime}(t)\leq 0$. Furthermore $f$ is convex so
$\langle\nabla^{2}f(x(t))\dot{x}(t),\dot{x}(t)\rangle\geq 0$. Hence $U$ is
non-increasing. ∎
We then state the following bound.
###### Lemma 3.5.
There exists $C\geq 0$ such that for any $(\varepsilon,\alpha)$ and the
corresponding solution $x$ of (VM-DIN-AVD), for all $t\geq 0$ it holds that,
$\varepsilon(t)\|\dot{x}(t)\|\leq C\sqrt{\varepsilon(t)}.$
###### Proof.
Let $t\geq 0$, according to Lemma 3.4, $U$ is non-increasing so $U(t)\leq
U(0)$, or equivalently,
$\frac{\varepsilon(t)}{2}\|\dot{x}(t)\|^{2}+f(x(t))-f(x^{\star})\leq\frac{\varepsilon_{0}}{2}\|\dot{x}_{0}\|^{2}+f(x_{0})-f(x^{\star}).$
This implies in particular that,
$\varepsilon(t)\|\dot{x}(t)\|^{2}\leq\varepsilon_{0}\|\dot{x}_{0}\|^{2}+2(f(x_{0})-f(x^{\star})),$
and hence by multiplying both sides by $\varepsilon(t)$ and composing with the
square-root we obtain that,
$\varepsilon(t)\|\dot{x}(t)\|\leq C\sqrt{\varepsilon(t)},$
where $C=\sqrt{\varepsilon_{0}\|\dot{x}_{0}\|^{2}+2(f(x_{0})-f(x^{\star}))}$.
∎
###### Proof of Theorem 3.1.
Let $(\varepsilon,\alpha)$ as defined in Sections 1.1 and 2, and let $x$ be
the corresponding solution of (VM-DIN-AVD). Then, according to Lemma 3.4, for
all $t\geq 0$, $U(t)\leq U(0)$, so in particular
$f(x(t))\leq f(x_{0})+\frac{\varepsilon_{0}}{2}\|\dot{x}_{0}\|^{2}.$
Denoting $M_{0}=f(x_{0})+\frac{\varepsilon_{0}}{2}\|\dot{x}_{0}\|^{2}$, the
set $\mathsf{K}_{0}=\left\\{y\in\mathbb{R}^{n}\mid f(y)\leq M_{0}\right\\}$ is
bounded, since $f$ is coercive ($\lim_{\|y\|\to+\infty}f(y)=+\infty$). So for
all $t\geq 0$, $x(t)\in\mathsf{K}_{0}$. Since $M_{0}$ (and hence
$\mathsf{K}_{0}$) depends only on $\varepsilon_{0}$, $x_{0}$ and
$\dot{x}_{0}$, we have proved that for any choice $(\varepsilon,\alpha)$, the
corresponding solution $x$ of (VM-DIN-AVD) is inside $\mathsf{K}_{0}$ at all
time. Let $x_{N}$ be the solution of (CN). One can see similarly that for all
$t\geq 0$, $f(x_{N}(t))\leq f(x_{N}(0))=f(x_{0})\leq M_{0}$. So we also have
$x_{N}(t)\in\mathsf{K}_{0}$ for all $t\geq 0$.
Now, fix $c_{1},c_{2}>0$, and let $(\varepsilon,\alpha)$ such that Assumption
1 is satisfied with constants $c_{1},c_{2}$. Let $x$ be the corresponding
solution of (VM-DIN-AVD). Since $f$ is strongly convex on bounded sets, it is
strongly convex on $\mathsf{K}_{0}$. We denote $\mu>0$ the strong-convexity
parameter of $f$ on $\mathsf{K}_{0}$. Equivalently, we have that $\nabla f$ is
strongly monotone on $\mathsf{K}_{0}$, that is, $\forall
y_{1},y_{2}\in\mathsf{K}_{0}$,
$\langle\nabla f(y_{1})-\nabla
f(y_{2}),y_{1}-y_{2}\rangle\geq\mu\|y_{1}-y_{2}\|^{2}.$ (2)
Let $t\geq 0$, since $x(t)\in\mathsf{K}_{0}$ and $x_{N}(t)\in\mathsf{K}_{0}$,
by combining (2) with the Cauchy–Schwarz inequality, we deduce that
$\|x(t)-x_{N}(t)\|\leq\frac{1}{\mu}\|\nabla f(x(t))-\nabla f(x_{N}(t))\|.$ (3)
Therefore, it is sufficient to bound the difference of gradients in order to
bound $\|x(t)-x_{N}(t)\|$. First, remark that (CN) can be rewritten as
follows: $\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\nabla
f(x_{N}(t))+\frac{1}{\beta}\nabla f(x_{N}(t))=0$. So we can integrate, for all
$t\geq 0$,
$\nabla f(x_{N}(t))=e^{-\frac{t}{\beta}}\nabla f(x_{0}).$ (4)
We now turn our attention to $\nabla f(x(t))$, for which we cannot find a
closed-form solution in general. We rewrite (VM-DIN-AVD) in the following
equivalent form
$\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\left[\varepsilon(t)\dot{x}(t)+\beta\nabla
f(x(t))\right]+\frac{1}{\beta}\varepsilon(t)\dot{x}(t)+\nabla
f(x(t))=\left(\frac{1}{\beta}\varepsilon(t)+\varepsilon^{\prime}(t)-\alpha(t)\right)\dot{x}(t).$
Introducing the variable $\omega(t)=\varepsilon(t)\dot{x}(t)+\beta\nabla
f(x(t))$, the latter is thus solution to
$\begin{cases}\dot{\omega}(t)+\frac{1}{\beta}\omega(t)=\left(\frac{1}{\beta}\varepsilon(t)+\varepsilon^{\prime}(t)-\alpha(t)\right)\dot{x}(t),\quad
t\geq 0,\\\ \omega(0)=\varepsilon_{0}\dot{x}_{0}+\beta\nabla
f(x_{0}).\end{cases}$
This is a non-homogeneous first-order ODE in $\omega$, whose solution can be
expressed using the integrating factor
$\omega(t)=e^{-\frac{t}{\beta}}(\varepsilon_{0}\dot{x}_{0}+\beta\nabla
f(x_{0}))+e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s.$
We thus have the following expression for $\nabla f(x)$, for all $t\geq 0$,
$\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla
f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s.$
(5)
We can now use (4) and (5) in (3) to get
$\|x(t)-x_{N}(t)\|\leq\frac{1}{\beta\mu}\left\|e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s\right\|.$
Using the triangle inequality, we obtain,
$\|x(t)-x_{N}(t)\|\leq\frac{\varepsilon_{0}\|\dot{x}_{0}\|}{\beta\mu}e^{-\frac{t}{\beta}}+\frac{\varepsilon(t)\|\dot{x}(t)\|}{\beta\mu}+\frac{1}{\beta\mu}\int_{0}^{t}e^{\frac{1}{\beta}(s-t)}\left|\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right|\|\dot{x}(s)\|\mathop{}\\!\mathrm{d}s.$
(6)
The first term in (6) corresponds to the first one in (1) with
$C_{0}=1/(\beta\mu)$. As for the second one, by direct application of Lemma
3.5, there exists $C>0$ such that for all $t\geq 0$,
$\frac{\varepsilon(t)\|\dot{x}(t)\|}{\beta\mu}\leq C\sqrt{\varepsilon(t)}$, so
we set $C_{1}=C/(\beta\mu)$. Regarding the last term in (6), using Assumption
1 and again Lemma 3.5, it holds that, for all $s\geq 0$,
$\left|\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right|\|\dot{x}(s)\|\leq\left(\frac{1}{\beta}+c_{1}+c_{2}\right)\varepsilon(s)\|\dot{x}(s)\|\leq\left(\frac{1}{\beta}+c_{1}+c_{2}\right)C\sqrt{\varepsilon(s)}.$
This proves the theorem with
$C_{2}=\frac{C}{\beta\mu}\left(\frac{1}{\beta}+c_{1}+c_{2}\right)$. ∎
Let us analyze the bound in Theorem 3.1. The first term in (1) decays
exponentially fast and can even be zero if the initial speed is
$\dot{x}_{0}=0$; the second one decays like $\sqrt{\varepsilon(t)}$; however,
the rate at which the last term decreases is less obvious. The following
corollary gives a less-tight but easier-to-understand estimate.
###### Corollary 3.6.
Consider the same assumptions and variables as in Theorem 3.1. If furthermore
$c_{1}<\frac{2}{\beta}$, then there exists $C_{3}>0$ such that for all $t\geq
0$,
$\|x(t)-x_{N}(t)\|\leq
C_{0}e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+C_{3}\sqrt{\varepsilon(t)}.$
###### Proof of Corollary 3.6.
For all $t\geq 0$, define
$J(t)=\int_{0}^{t}e^{\frac{s}{\beta}}\sqrt{\varepsilon(s)}\mathop{}\\!\mathrm{d}s$.
An integration by parts yields
$J(t)=\left[\beta e^{\frac{s}{\beta}}\sqrt{\varepsilon(s)}\right]_{s=0}^{t}-\int_{s=0}^{t}\beta e^{\frac{s}{\beta}}\frac{\varepsilon^{\prime}(s)}{2\sqrt{\varepsilon(s)}}\mathop{}\\!\mathrm{d}s=\beta e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}-\beta\sqrt{\varepsilon_{0}}+\int_{s=0}^{t}\beta e^{\frac{s}{\beta}}\frac{-\varepsilon^{\prime}(s)}{2\varepsilon(s)}\sqrt{\varepsilon(s)}\mathop{}\\!\mathrm{d}s.$
(7)
By assumption, $0\leq c_{1}<\frac{2}{\beta}$ and, for all $s>0$,
$|\varepsilon^{\prime}(s)|\leq c_{1}\varepsilon(s)$, which in our setting is
equivalent to $\frac{-\varepsilon^{\prime}(s)}{\varepsilon(s)}\leq c_{1}$. So
we deduce from (7) that
$J(t)\leq\beta
e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}+c_{1}\frac{\beta}{2}\int_{s=0}^{t}e^{\frac{s}{\beta}}\sqrt{\varepsilon(s)}\mathop{}\\!\mathrm{d}s=\beta
e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}+c_{1}\frac{\beta}{2}J(t).$
So, $\left(1-c_{1}\frac{\beta}{2}\right)J(t)\leq\beta e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}$. By assumption $1-c_{1}\frac{\beta}{2}>0$,
therefore,
$J(t)\leq\frac{2}{2-c_{1}\beta}e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}$.
Finally, using this in (1) and setting
$C_{3}=C_{1}+C_{2}\frac{2}{2-c_{1}\beta}$, we obtain the result. ∎
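For the quadratic toy problem integrated in Section 2, (CN) has the closed form $x_{N}(t)=e^{-t/\beta}x_{0}$, and the illustrative parameters there satisfy Assumption 1 with $c_{1}=1<2/\beta$ and $c_{2}=0.5$ (and $\dot{x}_{0}=0$, so the exponential term vanishes). A minimal continuation of that sketch (ours, reusing `sol`, `beta`, `eps` and `x0` from it) checks Corollary 3.6 numerically:

```python
# Continuing the sketch from Section 2: Corollary 3.6 predicts that
# ||x(t) - x_N(t)|| / sqrt(eps(t)) stays bounded for all t >= 0.
import numpy as np
for t in [1.0, 5.0, 10.0, 20.0]:
    x_t = sol.sol(t)[:2]              # x(t) from the dense output
    x_n = np.exp(-t / beta) * x0      # closed-form Newton trajectory
    print(t, np.linalg.norm(x_t - x_n) / np.sqrt(eps(t)))  # bounded ratio
```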
###### Remark 3.7.
The above proofs suggest that an extension to the case where $f$ is non-smooth
but strongly convex is possible using regularization techniques. This is left
for future work.
So far our results only cover the case where $\alpha$ is “not too large”
w.r.t. $\varepsilon$, and do not study (LM). We now state a more general
result that covers these cases.
### 3.2 Generalization to Arbitrary Viscous Dampings with Sub-exponential
Decay
This time we do not assume a link between $\varepsilon$ and $\alpha$ but only
sub-exponential decays.
###### Assumption 2.
There exist $c_{1},c_{2}\geq 0$ such that for all $t\geq 0$,
$|\varepsilon^{\prime}(t)|\leq c_{1}\varepsilon(t)$ and
$|\alpha^{\prime}(t)|\leq c_{2}\alpha(t)$.
We are now in position to state the main result of this section.
###### Theorem 3.8.
Let $x_{N}$ and $x_{LM}$ be the solutions of (CN) and (LM) respectively, and
let $c_{1},c_{2}\geq 0$. There exist constants $C,\tilde{C}\geq 0$, depending
on $c_{1}$, $c_{2}$, such that for all $\varepsilon$ and $\alpha$ for which
Assumption 2 holds with $c_{1}$ and $c_{2}$, the corresponding solution $x$ of
(VM-DIN-AVD) is such that for all $t\geq 0$,
$\|x(t)-x_{N}(t)\|\leq
C\left[e^{-\frac{t}{\beta}}+\sqrt{\varepsilon(t)}+\alpha(t)+\int_{s=0}^{t}e^{\frac{1}{\beta}(s-t)}(\sqrt{\varepsilon(s)}+\alpha(s))\mathop{}\\!\mathrm{d}s\right],$
(8)
and,
$\|x(t)-x_{LM}(t)\|\leq\tilde{C}\left[e^{-\frac{t}{\beta}}+\sqrt{\varepsilon(t)}+\alpha(t)+\int_{s=0}^{t}e^{\frac{1}{\beta}(s-t)}(\sqrt{\varepsilon(s)}+\alpha(s))\mathop{}\\!\mathrm{d}s\right].$
(9)
The proof is postponed to Appendix B. Although it follows a similar reasoning
as that of Theorem 3.1, more involved estimates are needed.
Let us comment on these results. The bound (8) generalizes Theorem 3.1,
although the constants involved will, in general, be larger than those in (1)
(see the proof of Theorem 3.8 in appendix). Theorem 3.8 allows for far more
flexibility in the choice of $\varepsilon$ and $\alpha$ in order to control
$x$ and make it possibly close to $x_{N}$. The bound in (9) is the same as
that in (8) (up to a constant), but this time w.r.t. $x_{LM}$, thus connecting
(VM-DIN-AVD) to (LM). We make the following two important remarks. First (9)
involves $\alpha$, suggesting that making $x$ close to $x_{LM}$ requires not
only $\varepsilon$ but also $\alpha$ to vanish asymptotically. Additionally,
Theorem 3.8 does not state to which of $x_{N}$ and $x_{LM}$ the solution of
(VM-DIN-AVD) is the closest. It remains an open question to know whether one
can make (9) independent of $\alpha$, and to state to which trajectory $x$ is
the closest. Yet, the numerical experiments in Section 5 suggest that neither
are possible. Indeed, we observe that for some functions $f$, $x$ is
_sometimes_ closer to $x_{N}$ than to $x_{LM}$, even when
$\varepsilon(t)\leq\alpha(t)$.
Nevertheless, Theorem 3.8 answers the question asked in the introduction: yes,
(VM-DIN-AVD) is really of second-order nature since it can be brought close to
the second-order dynamics (CN) and (LM). Doing so, it benefits from the good
properties of these methods, such as the robustness to bad conditioning, as
previously illustrated on the right of Figure 1. This concludes the analysis
from a control perspective. We will now derive an approximation of the
solution $x$ in order to study the impact that $\varepsilon$ and $\alpha$ have
on the speed of convergence of $x$ to $x^{\star}$ compared to the speeds of
convergence of $x_{N}$ and $x_{LM}$.
## 4 Approximate Solutions and Asymptotic Analysis on Quadratic Functions
We consider the particular case where $f$ is a strongly-convex quadratic
function in order to study the asymptotic behavior of (VM-DIN-AVD) w.r.t. (CN)
and (LM). Quadratic functions are the prototypical example of strongly-convex
functions. In particular, any strongly-convex function can be locally
approximated by a quadratic one around its minimizer, making the latter a good
model for understanding the local behavior of dynamics. In this section, $f$
is quadratic: $f(y)=\frac{1}{2}\|Ay-b\|_{2}^{2}$ for all $y\in\mathbb{R}^{n}$,
where $A\in\mathbb{R}^{n\times n}$ is symmetric positive definite and
$b\in\mathbb{R}^{n}$. Without loss of generality, we take $b=0$, so that the
unique minimum is $x^{\star}=0$.
### 4.1 Setting: the Special Case of Quadratic Functions
Quadratic functions are particularly interesting in our setting since DIN-like
ODEs take a simpler form (as observed in [10, 40]). Indeed, $\forall
y\in\mathbb{R}^{n}$, $\nabla f(y)=A^{T}Ay$ and $\nabla^{2}f(y)=A^{T}A$. Since
$\nabla^{2}f(y)$ is independent of $y$ we can rewrite (VM-DIN-AVD) in an
eigenspace of $A^{T}A$ (this can be generalized to the case where $A^{T}A$ is
only semi-definite by considering orthogonal projections on an eigenspace
spanned by the positive eigenvalues of $A^{T}A$). That is, we can study the ODE
coordinate-wise by looking at one-dimensional problems of the form
$\varepsilon(t)\ddot{x}(t)+(\alpha(t)+\beta\lambda)\dot{x}(t)+\lambda
x(t)=0,\quad t\geq 0.$ (Q1-VM-DIN-AVD)
Here (and throughout what follows) $\lambda>0$ denotes any eigenvalue of
$A^{T}A$ and $x\colon\mathbb{R}_{+}\to\mathbb{R}$ now denotes the
corresponding coordinate (function) of the solution of (VM-DIN-AVD) in an
eigenspace of $A^{T}A$. The dynamics (Q1-VM-DIN-AVD) is a _linear_ second-
order ODE in $x$ with non-constant coefficients. Similarly, (LM) can be
rewritten coordinate-wise as
$(\alpha(t)+\beta\lambda)\dot{x}_{LM}(t)+\lambda x_{LM}(t)=0,\quad t\geq 0,$
(Q1-LM)
where $x_{LM}\colon\mathbb{R}_{+}\to\mathbb{R}$, and (CN) becomes
$\beta\dot{x}_{N}(t)+x_{N}(t)=0,\quad t\geq 0,$ (Q1-CN)
where again, $x_{N}\colon\mathbb{R}_{+}\to\mathbb{R}$ is one-dimensional.
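Before exploiting this reduction, it can be checked numerically; the following sketch (ours, with illustrative choices, assuming `numpy`/`scipy`) compares the full vector dynamics with the decoupled scalar equations (Q1-VM-DIN-AVD) in an eigenbasis of $A^{T}A$:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0], [0.0, 1.0]])
M = A.T @ A                                  # Hessian of f(x) = 0.5 ||Ax||^2
lams, Q = np.linalg.eigh(M)                  # M = Q diag(lams) Q^T
beta = 1.0
eps = lambda t: 1.0 / (1.0 + t)
alpha = lambda t: 1.0 / (1.0 + t) ** 2

def full(t, z):                              # vector (VM-DIN-AVD) for quadratic f
    x, v = z[:2], z[2:]
    return np.concatenate([v, -(alpha(t) * v + beta * (M @ v) + M @ x) / eps(t)])

def scalar(lam):                             # (Q1-VM-DIN-AVD) for one eigenvalue
    return lambda t, z: [z[1], -((alpha(t) + beta * lam) * z[1] + lam * z[0]) / eps(t)]

x0, v0, T = np.array([1.0, -1.0]), np.zeros(2), 10.0
ref = solve_ivp(full, (0, T), np.concatenate([x0, v0]),
                method="Radau", rtol=1e-9, atol=1e-12)
c0 = Q.T @ x0                                # coordinates in the eigenbasis
rec = np.array([solve_ivp(scalar(l), (0, T), [c, 0.0], method="Radau",
                          rtol=1e-9, atol=1e-12).y[0, -1]
                for l, c in zip(lams, c0)])
print(np.abs(Q @ rec - ref.y[:2, -1]).max()) # agreement up to solver tolerance
```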
Observe in particular that (CN) and (LM) are now first-order _linear_ ODEs,
whose solutions have the closed forms, for all $t\geq 0$,
$x_{N}(t)=x_{0}e^{-\frac{t}{\beta}}\quad\text{and}\quad
x_{LM}(t)=x_{0}\exp\left(-\int_{0}^{t}\frac{\lambda}{\alpha(s)+\beta\lambda}\mathop{}\\!\mathrm{d}s\right).$
(10)
Since the minimizer is $x^{\star}=0$, we directly see that $x_{N}$ converges
exponentially fast to $x^{\star}$, with a rate independent of $\lambda$. The
rate of $x_{LM}$ depends however on $\lambda$ and how fast $\alpha$ vanishes.
Unfortunately, except for some special choices of $\varepsilon$ and $\alpha$
(see [10]), one cannot solve the second-order linear ODE (Q1-VM-DIN-AVD) in
closed form in general. Additionally, it is hopeless to circumvent the
difficulty by finding a closed form for $\nabla f(x)$, accordingly to what we
did in Section 3, since here $\nabla f(x)=\lambda x$. In order to study the
speed of convergence of $x$ despite not having access to a closed form, we
will approximate it with a controlled error, via a method that we now present.
### 4.2 The Liouville–Green Method
In what follows, we rely on the Liouville–Green method [30, 26], a technique
for obtaining _non-asymptotic_ approximations to solutions of linear second-
order ODEs with non-constant coefficients. First, we give the intuition behind
the method, following the presentation of [35]. Consider the differential
equation
$\ddot{z}(t)-r(t)z(t)=0,\quad t\geq 0,$ (11)
where $r$ is real-valued, positive, and twice continuously differentiable. Any
linear second-order ODE can be reformulated in the form (11), see Lemma 4.5
below. Since for all $t\geq 0$, $r(t)\neq 0$, we can use the changes of
variables $\tau=\int_{0}^{t}\sqrt{r(s)}\mathop{}\\!\mathrm{d}s$ and
$w=r^{1/4}z$ and show that $w$ is solution to
$\ddot{w}(\tau)-(1+\psi(\tau))w(\tau)=0,\quad\tau\geq 0,$ (12)
where $\psi(\tau)=\frac{4r(t)r^{\prime\prime}(t)-5r^{\prime}(t)^{2}}{16r(t)^{3}}$
(we express $\psi(\tau)$ using $t$ for the sake of readability, using the
one-to-one correspondence between $\tau$ and $t$).
The LG method consists in neglecting the term $\psi(\tau)$ in (12), which
simply yields two approximate solutions $\hat{w}_{1}(\tau)=e^{\tau}$ and
$\hat{w}_{2}(\tau)=e^{-\tau}$. Expressing this in terms of $z$ and $t$, we
obtain
$\hat{z}_{1}(t)=r(t)^{-1/4}\exp\left(\int_{0}^{t}\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right)\quad\text{and}\quad\hat{z}_{2}(t)=r(t)^{-1/4}\exp\left(\int_{0}^{t}-\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right).$
(13)
Those are the LG approximations of the solutions of (11). They are formally
valid on any interval $[0,T]$, $T>0$ when $\psi$ is “not too large”, provided
that $\sqrt{r}$ is integrable on $[0,T]$.
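As a sanity check of (13) (our example, not from the paper), take $r(t)=1+t$ in (11): the decaying solution is the Airy function $\mathrm{Ai}(1+t)$, and the LG approximation $\hat{z}_{2}$ reproduces it up to a uniformly small relative error. A minimal sketch, assuming `scipy` is available:

```python
import numpy as np
from scipy.special import airy

# z'' = (1 + t) z: the decaying solution is z(t) = Ai(1 + t).
t = np.linspace(0.0, 10.0, 11)
r = 1.0 + t
z_exact = airy(r)[0]                         # Ai(1 + t)
# LG approximation from (13): r^{-1/4} exp(-int_0^t sqrt(r)), with
# int_0^t sqrt(1 + s) ds = (2/3)((1 + t)^{3/2} - 1).
z_lg = np.exp(-(2.0 / 3.0) * (r ** 1.5 - 1.0)) / r ** 0.25
z_lg *= z_exact[0]                           # match amplitude at t = 0
print(np.max(np.abs(z_lg / z_exact - 1.0)))  # stays below ~0.1, uniformly in t
```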
###### Remark 4.1.
There exist other (but less intuitive) ways to derive the approximations
above, which allow for generalization to higher-order linear ODEs, see e.g.,
[14, Chapter 10].
The advantage of this approach is the possibility to estimate the error made
using (13) w.r.t. the true solutions of (11). This is expressed in the
following theorem which gathers results from [15, 36, 42].
###### Theorem 4.2 (Olver [35]).
Let $r\colon\mathbb{R}_{+}\to\mathbb{R}$ be a real, positive, twice
continuously differentiable function, and define
$\varphi(t)=\frac{4r(t)r^{\prime\prime}(t)-5r^{\prime}(t)^{2}}{16r(t)^{5/2}}$
for all $t\geq 0$. Then for any $T>0$, the differential equation,
$\ddot{z}(t)-r(t)z(t)=0,\quad t\in[0,T],$ (14)
has two real and twice continuously differentiable solutions defined for all
$t\in[0,T]$ by,
$z_{1}(t)=\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right)\quad\text{and}\quad
z_{2}(t)=\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\exp\left(-\int_{0}^{t}\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right),$
where
$\displaystyle|\delta_{1}(t)|\leq\exp\left(\frac{1}{2}\int_{0}^{t}|\varphi(s)|\mathop{}\\!\mathrm{d}s\right)-1$
and
$\displaystyle|\delta_{2}(t)|\leq\exp\left(\frac{1}{2}\int_{t}^{T}|\varphi(s)|\mathop{}\\!\mathrm{d}s\right)-1$.
If in addition
$\displaystyle\int_{0}^{+\infty}|\varphi(s)|\mathop{}\\!\mathrm{d}s<+\infty$,
then the results above also hold for $T=+\infty$.
###### Remark 4.3.
We make the following remarks regarding the above result.
– Note that $z_{1}$ and $z_{2}$ in Theorem 4.2 are _exact_ solutions to (14).
The LG approximations $\hat{z}_{1}$ and $\hat{z}_{2}$ are obtained by
neglecting the unknown functions $\delta_{1}$ and $\delta_{2}$ in $z_{1}$ and
$z_{2}$. The theorem gives a _non-asymptotic_ bound for the errors
$|z_{1}(t)-\hat{z}_{1}(t)|$ and $|z_{2}(t)-\hat{z}_{2}(t)|$, $t\geq 0$.
– Since we assumed $r$ to be twice continuously differentiable and positive,
$\varphi$ is continuous, so it is integrable except maybe for $t\to+\infty$.
– For the sake of simplicity, the formulation of Theorem 4.2 slightly differs
from that in [35], the original formulation can be recovered by a change of
variable.
### 4.3 Liouville–Green Approximation of (VM-DIN-AVD)
We now proceed to make use of the LG method for approximating the solutions of
(Q1-VM-DIN-AVD). The reader only interested in the result can jump directly to
the Section 4.4. We first make the following assumption.
###### Assumption 3.
The functions $\alpha$ and $\varepsilon$ are three times continuously
differentiable, and $\varepsilon_{0}$ is such that $\forall t\geq 0$,
$\varepsilon_{0}<\frac{(\beta\lambda)^{2}}{2|\alpha^{\prime}(t)|+4\lambda}$.
###### Remark 4.4.
The condition on $\varepsilon_{0}$ in Assumption 3 is only technical, so that
$r$ defined below is positive. It can be easily satisfied since
$|\alpha^{\prime}(t)|$ is uniformly bounded. Indeed, $\alpha$ is non-
increasing and non-negative (see Section 1.1), from which one can deduce that
$\int_{0}^{+\infty}|\alpha^{\prime}(s)|\mathop{}\\!\mathrm{d}s\leq\alpha_{0}$.
We now rewrite (Q1-VM-DIN-AVD) in the form (14).
###### Lemma 4.5.
Suppose that Assumption 3 holds, and let $x$ be the solution of (Q1-VM-DIN-
AVD). For all $t\geq 0$, define
$p(t)=\frac{\alpha(t)+\beta\lambda}{\varepsilon(t)}\quad\text{and}\quad
r(t)=\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}-\frac{\lambda}{\varepsilon(t)}.$
(15)
Then, $p$ and $r$ are twice continuously differentiable, $r$ is positive and
the function $y$ defined for all $t\geq 0$ by
$y(t)=x(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right)$
is a solution to
$\ddot{y}(t)-r(t)y(t)=0,\quad t\geq 0,$ (16)
with initial condition
$(y(0),\dot{y}(0))=(x_{0},\dot{x}_{0}+\frac{p(0)}{2}x_{0})$.
###### Proof.
We first check that for all $t\geq 0$, $r(t)$ is positive. Let $t>0$,
$r(t)>0\iff\frac{(\alpha(t)+\beta\lambda)^{2}}{4\varepsilon(t)^{2}}+\frac{\alpha^{\prime}(t)}{2\varepsilon(t)}-\frac{(\alpha(t)+\beta\lambda)\varepsilon^{\prime}(t)}{2\varepsilon(t)^{2}}-\frac{\lambda}{\varepsilon(t)}>0.$
(17)
Since $\varepsilon^{\prime}(t)\leq 0$ and $\alpha^{\prime}(t)\leq 0$, one can
check that a sufficient condition for (17) to hold is,
$r(t)>0\impliedby\frac{(\alpha(t)+\beta\lambda)^{2}}{4}>\left(\frac{|\alpha^{\prime}(t)|}{2}+\lambda\right)\varepsilon(t)\impliedby\frac{(\beta\lambda)^{2}}{2|\alpha^{\prime}(t)|+4\lambda}>\varepsilon_{0}.$
So under Assumption 3, for all $t\geq 0$, $r(t)>0$. We then check that $y$ is
indeed solution to (16). Let $t>0$,
$\dot{y}(t)=\frac{p(t)}{2}x(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right)+\dot{x}(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right),$
and
$\ddot{y}(t)=\exp\left(\int_{0}^{t}\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right)\left[\left(\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}\right)x(t)+p(t)\dot{x}(t)+\ddot{x}(t)\right].$
Since $x$ solves (Q1-VM-DIN-AVD), it holds that
$\ddot{x}(t)=-p(t)\dot{x}(t)-\frac{\lambda}{\varepsilon(t)}x(t)$, so,
$\ddot{y}(t)=\exp\left(\int_{0}^{t}\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right)\left(\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}-\frac{\lambda}{\varepsilon(t)}\right)x(t)\\\
=\left(\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}-\frac{\lambda}{\varepsilon(t)}\right)y(t)=r(t)y(t).$
∎
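The computation in this proof can also be verified symbolically; a minimal sketch (ours, assuming `sympy` is available), where `P` stands for any antiderivative of $p/2$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
lam, beta = sp.symbols('lam beta', positive=True)
x, P = sp.Function('x'), sp.Function('P')     # P is an antiderivative of p/2
eps, al = sp.Function('epsilon'), sp.Function('alpha')

p = (al(t) + beta * lam) / eps(t)
r = p**2 / 4 + sp.diff(p, t) / 2 - lam / eps(t)   # as in (15)

y = x(t) * sp.exp(P(t))                       # y = x * exp(int p/2)
expr = sp.diff(y, t, 2) - r * y               # should vanish along solutions
expr = expr.subs({
    sp.Derivative(P(t), (t, 2)): sp.diff(p, t) / 2,   # P'' = p'/2
    sp.Derivative(P(t), t): p / 2,                    # P'  = p/2
    # x'' from (Q1-VM-DIN-AVD): eps x'' + (alpha + beta lam) x' + lam x = 0
    sp.Derivative(x(t), (t, 2)): -p * sp.diff(x(t), t) - lam / eps(t) * x(t),
})
print(sp.simplify(expr))                      # prints 0
```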
Lemma 4.5 gives a reformulation of (Q1-VM-DIN-AVD) suited to apply Theorem
4.2. To use the theorem for all $t\geq 0$, we need to ensure that
$\varphi(t)=\frac{4r(t)r^{\prime\prime}(t)-5r^{\prime}(t)^{2}}{16r(t)^{5/2}}$
is integrable. To this aim we make the following assumption.
###### Assumption 4.
The functions $\varepsilon$ and $\alpha$ have first, second and third-order
derivatives that are integrable on $[0,+\infty[$. In addition,
$\lim\limits_{t\to\infty}{\varepsilon(t)}=0$ and
$\varepsilon^{\prime}(t)^{2}/\varepsilon(t)$ is integrable on $[0,+\infty[$.
###### Remark 4.6.
Assumption 4 holds for most decays used in practice, with in particular any
polynomial decay of the form $\frac{\varepsilon_{0}}{(t+1)^{a}}$ and
$\frac{\alpha_{0}}{(t+1)^{b}}$, $a\in\mathbb{N}\setminus\\{0\\}$ and
$b\in\mathbb{N}$. Note that $\varepsilon$ and $\alpha$ need not be integrable
and $\alpha$ can even be constant.
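For instance (a one-line check), for $\varepsilon(t)=\varepsilon_{0}/(t+1)^{a}$ with $a\geq 1$ one gets $\varepsilon^{\prime}(t)^{2}/\varepsilon(t)=a^{2}\varepsilon_{0}/(t+1)^{a+2}$, which is integrable on $[0,+\infty[$, and the first three derivatives decay polynomially and are integrable as well.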
The next lemma states the integrability of $\varphi$ on $[0,+\infty[$.
###### Lemma 4.7.
Under Assumption 3 and 4,
$\int_{0}^{+\infty}|\varphi(s)|\mathop{}\\!\mathrm{d}s<+\infty$.
The proof of this lemma relies on relatively simple arguments but involves
long computations and is thus postponed to Appendix C. We can now use Theorem
4.2 to obtain an exact form for the solution of (Q1-VM-DIN-AVD) based on the
LG approximations.
###### Theorem 4.8.
Suppose that Assumptions 3 and 4 hold. There exist $A,B\in\mathbb{R}$ such
that $x(0)=x_{0}$, $\dot{x}(0)=\dot{x}_{0}$ and for all $t\geq 0$, the
solution of (Q1-VM-DIN-AVD) is
$\displaystyle\begin{split}&x(t)=A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right)\\\
+&B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\frac{\sqrt{\varepsilon(t)}}{\sqrt{\alpha(t)+\beta\lambda}}\exp\left(\int_{0}^{t}-\frac{\alpha(s)+\beta\lambda}{\varepsilon(s)}+\frac{\lambda}{\alpha(s)+\beta\lambda}+\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right),\end{split}$
(18)
where for all $t\geq 0$,
$|\delta_{1}(t)|\leq\exp\left(\frac{1}{2}\int_{0}^{t}|\varphi(s)|\mathop{}\\!\mathrm{d}s\right)-1\quad\text{and}\quad|\delta_{2}(t)|\leq\exp\left(\frac{1}{2}\int_{t}^{+\infty}|\varphi(s)|\mathop{}\\!\mathrm{d}s\right)-1.$
(19)
Thanks to the bounds (19), we now have an approximation of $x$. We will use it
in particular to compare $x$ asymptotically to the solutions of (Q1-LM) and
(Q1-CN). Before this, we prove Theorem 4.8.
###### Proof of Theorem 4.8.
Let $x$ be the solution of (Q1-VM-DIN-AVD) and define $p,r$ as in (15). Let us
also define
$y(t)\stackrel{{\scriptstyle\textrm{def}}}{{=}}x(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right)$.
According to Lemma 4.5, $r$ is positive and $y$ is solution to (16). Then,
from Lemma 4.7, $\int_{0}^{+\infty}|\varphi(s)|\mathop{}\\!\mathrm{d}s<+\infty$, so
we can apply Theorem 4.2 to $y$ on $[0,+\infty[$. Therefore, there exist
$A,B\in\mathbb{R}$ such that $\forall t\geq 0$,
$y(t)=A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right)+B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}-\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right),$
where $A$ and $B$ are determined by the initial conditions, and $\delta_{1}$,
$\delta_{2}$ are such that (19) holds.
Going back to
$x(t)=y(t)\exp\left(\int_{0}^{t}-\frac{p(s)}{2}\mathop{}\\!\mathrm{d}s\right)$,
we obtain that for all $t\geq 0$,
$x(t)=A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}-\frac{p(s)}{2}+\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right)+B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}-\frac{p(s)}{2}-\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right).$
(20)
It now remains to expand the terms in the two exponentials in (20) in order to
obtain (18). To this aim, we approximate $\sqrt{r(s)}$, let $s\geq 0$,
$\displaystyle\begin{split}\sqrt{r(s)}&=\frac{p(s)}{2}\sqrt{1+\frac{2p^{\prime}(s)}{p(s)^{2}}-\frac{4\lambda}{\varepsilon(s)p(s)^{2}}}\\\
&=\frac{p(s)}{2}\left(1+\frac{p^{\prime}(s)}{p(s)^{2}}-\frac{2\lambda}{\varepsilon(s)p(s)^{2}}-\frac{1}{8}\left(\frac{2p^{\prime}(s)}{p(s)^{2}}-\frac{4\lambda}{\varepsilon(s)p(s)^{2}}\right)^{2}+o(\varepsilon(s)^{2})\right)\\\
&=\frac{p(s)}{2}+\frac{p^{\prime}(s)}{2p(s)}-\frac{\lambda}{\varepsilon(s)p(s)}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda}{\varepsilon(s)p(s)^{3/2}}\right)^{2}+o(\varepsilon(s))\\\
&=\frac{p(s)}{2}+\frac{p^{\prime}(s)\varepsilon(s)}{2(\alpha(s)+\beta\lambda)}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\\\
=\frac{p(s)}{2}&+\frac{\alpha^{\prime}(s)}{2(\alpha(s)+\beta\lambda)}-\frac{\varepsilon^{\prime}(s)}{2\varepsilon(s)}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\end{split}$
(21)
Focusing on the first exponential term in (20), we deduce from (21) that for
all $t\geq 0$,
$\displaystyle\begin{split}&\exp\left(\int_{0}^{t}-\frac{p(s)}{2}+\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right)\\\
&=\exp\left(\int_{0}^{t}\frac{\alpha^{\prime}(s)/2}{\alpha(s)+\beta\lambda}-\frac{\varepsilon^{\prime}(s)}{2\varepsilon(s)}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right)\\\
&=\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\alpha_{0}+\beta\lambda}}\frac{\sqrt{\varepsilon_{0}}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}\frac{-\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right)\\\
&=\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\alpha_{0}+\beta\lambda}}\frac{\sqrt{\varepsilon_{0}}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}\frac{-\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right),\end{split}$
where the last line relies on further computations postponed to Lemma C.1 in
Appendix C. Performing the exact same type of computations on
$\exp\left(\int_{0}^{t}-\frac{p(s)}{2}-\sqrt{r(s)}\mathop{}\\!\mathrm{d}s\right)$,
and up to redefining $A$ and $B$ so as to encompass all the constants, we
obtain (18) and the result is proved. ∎
### 4.4 Comparison of $x$ with $x_{LM}$ and $x_{N}$
We now have an expression for $x$ which is almost explicit: we do not know
$\delta_{1}$ and $\delta_{2}$ in closed form, but they are uniformly bounded
(by Lemma 4.7). We will now compare the asymptotic behavior of (18) with those
of the solutions of (Q1-LM) and (Q1-CN) that we denoted $x_{LM}$ and $x_{N}$
respectively. Our main result of Section 4 is the following, where
$\sim_{+\infty}$ denotes the asymptotic equivalence between two functions as
$t\to\infty$ (two real-valued functions $g_{1}$ and $g_{2}$ are asymptotically
equivalent in $+\infty$ if and only if
$\lim_{t\to\infty}\frac{g_{1}(t)}{g_{2}(t)}=1$).
###### Theorem 4.9.
Let $x$ be the solution of (Q1-VM-DIN-AVD), given in (18), and $x_{LM}$ and
$x_{N}$ whose closed forms are stated in (10). Under Assumptions 3 and 4,
there exists $C>0$ such that the following asymptotic equivalences hold:
$\displaystyle\begin{split}x(t)&\sim_{+\infty}x_{LM}(t)C\exp\left(\int_{0}^{t}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right),\quad\text{and}\\\
x(t)&\sim_{+\infty}x_{N}(t)C\exp\left(\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right).\end{split}$
(22)
As a consequence, the convergence of $x$ to $x^{\star}$ is:
1. (i)
_Faster_ than that of $x_{LM}$ if $\varepsilon$ is non-integrable and as fast
otherwise.
2. (ii)
Slower than that of $x_{N}$ if $\alpha$ is non-integrable and as fast if
$\alpha$ is integrable, in the case where $\forall t\geq 0$,
$\alpha(t)>\varepsilon(t)$.
3. (iii)
_Faster_ than that of $x_{N}$ if $\varepsilon$ is non-integrable and as fast
if $\varepsilon$ is integrable, in the case where $\forall t\geq 0$,
$\alpha(t)<\varepsilon(t)$.
While the results of Section 3 were related to the closeness of (VM-DIN-AVD)
w.r.t. (CN) and (LM) from a control perspective, Theorem 4.9 provides a
different type of insight. First, the results are asymptotic, so they only
allow one to control (VM-DIN-AVD) for large $t$. They provide however a clear
understanding of the nature of the solutions of (VM-DIN-AVD) and their
convergence. The conclusions (summarized in Table 1) are in accordance with
what we would expect: when the viscous damping is larger than the variable
mass, (VM-DIN-AVD) behaves more like the Levenberg–Marquardt method than the
Newton one, but it actually becomes an accelerated Levenberg–Marquardt
dynamics when $\varepsilon$ is non-integrable but vanishing. However, when the
variable mass $\varepsilon$ is larger than $\alpha$, the dynamics is closer to
the one of the Newton method, and can actually be an accelerated Newton
dynamics, again for non-integrable $\varepsilon$. This is analogous to the
necessary condition that $\alpha$ must be non-integrable in order to
accelerate first-order methods in convex optimization (see [3]). We conclude
this section by proving Theorem 4.9.
###### Proof of Theorem 4.9.
Thanks to Assumptions 3 and 4, Theorem 4.8 tells us that $x$ has the form
(18). We now analyze the two terms in (18).
First, we know from Theorem 4.8 that $\delta_{1}(0)=0$ and
$\lim_{t\to+\infty}\delta_{2}(t)=0$. In addition, by Lemma 4.7, $\delta_{1}$
and $\delta_{2}$ are uniformly bounded by some positive constant. Then
$r(t)^{-1/4}$ decays asymptotically like $\sqrt{\varepsilon(t)}$ and $\alpha$
is bounded. So
$A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\varepsilon(t)}}$
is asymptotically equivalent to some constant $c_{1}\in\mathbb{R}$ as
$t\to+\infty$. Similarly,
$B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\frac{\sqrt{\varepsilon(t)}}{\sqrt{\alpha(t)+\beta\lambda}}$
is equivalent to $c_{2}\varepsilon(t)$, with $c_{2}\in\mathbb{R}$.
We now analyze the “exponential factors” in (18). On the one hand,
$\frac{\lambda}{\alpha(s)+\beta\lambda}+\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))$
converges to $\frac{1}{\beta}$ as $s\to\infty$, while
$\frac{\alpha(s)+\beta\lambda}{\varepsilon(s)}$ diverges to $+\infty$.
Therefore, we deduce that,
$\exp\left(\int_{0}^{t}-\frac{\alpha(s)+\beta\lambda}{\varepsilon(s)}+\frac{\lambda}{\alpha(s)+\beta\lambda}+\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right)\\\
=o\left(\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right)\right).$
As a consequence, the second term in (18) decreases to $0$ faster than the first one (not to mention the additional $\varepsilon(t)$ decay that we have just discussed). The asymptotic behavior of $x$ is thus governed by the first term in (18).
Let us now focus on the first term in (18). Observe that
$\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}\mathop{}\\!\mathrm{d}s\right)$
is exactly the decay of $x_{LM}$ in (10). Thus, we have proved that there exists $C>0$ such that the following asymptotic equivalence holds,
$A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right)\\\
\sim_{+\infty}x_{LM}(t)C\exp\left(\int_{0}^{t}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s\right),$
which proves the first part of (22). The second equivalence in (22) is
obtained using the following identity,
$\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}\mathop{}\\!\mathrm{d}s=\int_{0}^{t}-\frac{1}{\beta}+\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}\mathop{}\\!\mathrm{d}s=-\frac{t}{\beta}+\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}\mathop{}\\!\mathrm{d}s$
(23)
and $e^{-t/\beta}$ is precisely the rate at which $x_{N}$ decreases. So (22)
holds.
It finally remains to deduce the conclusions of the theorem from (22).
– Regarding the comparison with $x_{LM}$, the integral
$\int_{0}^{t}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s$
converges if and only if $\varepsilon$ is integrable on $[0,+\infty[$, and
diverges to $-\infty$ when $\varepsilon$ is not. So $x$ converges to $0$ at
least as fast as $x_{LM}$ and faster when $\varepsilon$ is not integrable.
– As for the comparison with $x_{N}$, if $\alpha(s)>\varepsilon(s)\geq 0$ for
all $s\geq 0$, then the integral
$\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\mathop{}\\!\mathrm{d}s$
is convergent in $+\infty$ if and only if $\alpha$ is integrable and diverges
to $+\infty$ when $\alpha$ is non-integrable. So when $\alpha$ is integrable,
the speed of convergence of $x$ is the same as that of $x_{N}$. When $\alpha$
is not integrable, the convergence to $0$ is slower but still holds. Indeed, for all $s\geq 0$, $\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}<\frac{\alpha(s)}{\beta\alpha(s)}=\frac{1}{\beta}$. Thus for all $t>0$, $-\frac{t}{\beta}+\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}\mathop{}\\!\mathrm{d}s<0$.
– Finally, the comparison with $x_{N}$ in the case $\varepsilon(s)>\alpha(s)$
is exactly the same as the comparison with $x_{LM}$ using (23). ∎
## 5 Numerical Experiments
Figure 2: Comparison of the solutions $x_{N}$, $x_{LM}$ and $x$ of (CN), (LM)
and (VM-DIN-AVD) respectively, for a strongly-convex function of the form
$f(x)=e^{-\|x\|^{2}}+\frac{1}{2}\|Ax\|^{2}$. Left figures: distance
$\|x(t)-x_{N}(t)\|$ versus time $t$, each curve corresponds to a different
choice of $\varepsilon$; middle figures: distance $\|x(t)-x_{LM}(t)\|$, again
for several $\varepsilon$. Right figures: distance to the optimum $x^{\star}$
for reference, $x_{N}$ and $x_{LM}$ are in dotted and dashed lines, other
curves correspond to (VM-DIN-AVD) for several choices of $\varepsilon$. The
brown curve is often hidden behind the purple (and sometimes the pink) curve.
Top and bottom rows show results respectively for non-integrable and
integrable viscous dampings $\alpha$. The theoretical bounds from Theorem 3.8
are only displayed on Figure 4 below, for the sake of readability.
Figure 3: Similar experiment and figures as those described in Figure 2, but
for the function
$f(x)=\log\left(\sum_{i=1}^{n}e^{x_{i}}+e^{-x_{i}}\right)+\frac{1}{2}\|Ax\|^{2}$.
Figure 4: Similar experiment and figures as those described in Figure 2, but
for the function $f(x)=\|x\|^{50}+\frac{1}{2}\|Ax\|^{2}$. The thin “dash
dotted” curves represent the theoretical bounds from Theorem 3.8 for each
choice of $(\varepsilon,\alpha)$ considered.
Figure 5: Numerical validation of Theorem 4.9: distance to the optimum $x^{\star}$ as a function of time, on a quadratic function $f(x)=\frac{1}{2}\|Ax\|^{2}$. Left: speed comparison w.r.t. (CN) for several choices of $\varepsilon$ and $\alpha$. Right: comparison with LM for $\alpha$ integrable or not and several choices of $\varepsilon$. Shades of
$\alpha$ integrable or not and several choices of $\varepsilon$. Shades of
blue represent cases where $\varepsilon(t)>\alpha(t)$ while shades of red
represent the opposite setting.
We present two sets of experiments that illustrate our main results from Sections 3 and 4. We first detail the general methodology.
### 5.1 Methodology
We compare the solutions of (CN), (LM) and (VM-DIN-AVD) obtained for strongly-
convex functions in dimension $n=100$. Since closed-form solutions are not
available, they are estimated via discretization schemes with a small step-size $\gamma=10^{-1}$. We used Euler semi-implicit schemes, where a linear system is solved at each iteration, for the sake of stability. The resulting
algorithms are detailed in Appendix D.
### 5.2 First Experiment: Distance between Trajectories
We begin with an empirical validation of the results of Section 3 on the distance between $x$, $x_{LM}$ and $x_{N}$. Each of Figures 2, 3 and 4 (as well as Figure 6 in Appendix D) corresponds to a different strongly-convex function, specified below its corresponding figure. In order to ensure strong
convexity, each function contains a quadratic term of the form $\|Ax\|^{2}$,
where $A$ is symmetric positive definite.
Several observations can be made from the numerical results, but we first note
on the right plots of Figures 2 to 4 that $x_{N}$ always converges
asymptotically linearly (i.e., exponentially fast). This is also the case for
$x$ and $x_{LM}$ in some (but not all) cases. This is important because
$\|x(t)-x_{N}(t)\|\leq\|x(t)-x^{\star}\|+\|x_{N}(t)-x^{\star}\|$, so if both
$x$ and $x_{N}$ converge linearly, then the bounds of Theorems 3.1 and 3.8
need not be tight asymptotically. That being said, the strength of these results is that they are non-asymptotic, and this is highlighted by the experiments as we now explain.
Looking at the left and middle plots of Figures 2, 3 and 4, we observe that
Theorems 3.1 and 3.8 seem empirically validated, since the distances
$\|x(t)-x_{N}(t)\|$ and $\|x(t)-x_{LM}(t)\|$ decrease relatively fast to zero.
Again, when $x$ converges rapidly to $x^{\star}$ this is not very insightful,
however, the main interest of our theorems appears on the left of Figures 3
and 4: the blue and green curves, corresponding to slowly decaying choices of
$\varepsilon$, converge more slowly than other trajectories. However, when
taking faster decays, we recover fast convergence and closeness to $x_{N}$
(this is particularly true for the purple curve). Very similar observations
are made w.r.t. $x_{LM}$ on the middle plots. Despite not being stated in the
theorems of Section 3, the experiments match the intuition that when
$\varepsilon>\alpha$, $x$ may be closer to $x_{N}$ and when
$\varepsilon\leq\alpha$, $x$ would rather be closer to $x_{LM}$. This is more
noticeable on the top rows of the figures, where $\alpha$ is not integrable.
Figure 4 suggests that the bounds in Theorem 3.8 are rather tight for small
$t$, since, for example, the blue and green curves on the left show a
relatively slow vanishing of $\|x(t)-x_{N}(t)\|$ for slowly decaying
$\varepsilon$. The experiments show that the bounds seem however often too
pessimistic for large $t$, for which the second part of our study provides
better insights (see Section 4 and below). Interestingly, slow decays of
$\varepsilon$ may sometimes result in faster convergence for $x$ than fast
decays (and also faster convergence than $x_{LM}$), notably on Figure 2. We
also note that $\varepsilon(t)=\varepsilon_{0}/t$ combined either with
$\alpha(t)=\alpha_{0}/t$ or $\alpha(t)=\alpha_{0}/t^{2}$ seems to always yield
fast convergence on these experiments (and sometimes the fastest of all
dynamics).
### 5.3 Second Experiment: Empirical Validation of Theorem 4.9
We now turn our attention to the solutions $x$, $x_{N}$ and $x_{LM}$ for a
quadratic function of the form $f(y)=\frac{1}{2}\|Ay\|^{2}$,
$y\in\mathbb{R}^{n}$, and for several choices of $\varepsilon$ and $\alpha$.
The results in Figure 5 exactly match the expected behavior summarized in
Table 1. Indeed, looking first at the right-hand side of Figure 5, $x$ is as fast as the corresponding $x_{LM}$ (that is, the solution of (LM) for the same $\alpha$ as that considered for (VM-DIN-AVD)) when $\varepsilon$ is integrable and regardless of $\alpha$, and $x$ is faster when $\varepsilon$ is non-integrable. Then on the left-hand side, when comparing to
$x_{N}$, $x$ is slower in settings where $\alpha$ is larger than $\varepsilon$
and non-integrable (red curves), or almost as fast when $\alpha$ is integrable
(pink curve). However, acceleration w.r.t. $x_{N}$ is indeed achieved for non-integrable $\varepsilon$ regardless of $\alpha$ (first two blue curves),
and the rate is the same as that of $x_{N}$ when $\varepsilon$ is integrable
(third blue curve).
## 6 Conclusions and Perspectives
We introduced a general ODE (VM-DIN-AVD) featuring variable mass, and provided a deep understanding of the behavior of its solutions w.r.t. the time-dependent control parameters $\varepsilon$ and $\alpha$, both asymptotically and non-asymptotically. We can conclude that (VM-DIN-AVD) is indeed of (regularized)
Newton type, since it can be controlled to be close to both (CN) and (LM). Yet
we also showed that (VM-DIN-AVD) fundamentally differs from the other two
dynamics in its nature. In particular, Theorem 4.9 and the numerical
experiments emphasized that $\varepsilon$ and $\alpha$ can accelerate (or slow
down) (VM-DIN-AVD) w.r.t. (CN) and (LM). We also note that our bounds in
Theorems 3.1 and 3.8 seem relatively tight, in particular for functions with
large gradients (see Figure 4). Our contribution yields a complete and satisfying picture of the relation between the three systems, which was only partially understood. We believe that our results build a strong foundation
for the development of algorithms that combine the best properties of first-
and second-order optimization methods.
As for future work, we showed that (VM-DIN-AVD) is promising from an
optimization perspective. So far we approximated solutions of (VM-DIN-AVD) via
schemes that required solving a linear system at each iteration (this is also
true for (CN) and (LM)). Our new understanding of $(\varepsilon,\alpha)$ paves the way towards designing new Newton-like algorithms with a significantly reduced computational cost, which is crucial for large-scale optimization.
Another open question is whether it is possible to preserve the properties
evidenced in this work when $\varepsilon$ is defined in a closed-loop manner
(formally depending on $x$ rather than on $t$). Finally, it would be worth
investigating how the current work can be extended to general convex and/or
non-smooth functions.
## Acknowledgment
C. Castera, J. Fadili and P. Ochs are supported by the ANR-DFG joint project
TRINOM-DS under number ANR-20-CE92-0037-01. The numerical experiments were
made thanks to the development teams of the following libraries: Python [38],
Numpy [43] and Matplotlib [27].
Appendices
## Appendix A Equivalent First-order System and Global Existence of Solutions
### A.1 First-order Equivalent Formulation
We reformulate (VM-DIN-AVD) as a system of ODEs involving only first-order time derivatives and the gradient of $f$. For this purpose, notice that for all $t>0$, (VM-DIN-AVD) can be rewritten as,
$\displaystyle\begin{split}\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\left[\varepsilon(t)\dot{x}(t)\right]+\beta\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\nabla
f(x(t))+\alpha(t)\dot{x}(t)-\varepsilon^{\prime}(t)\dot{x}(t)+\nabla
f(x(t))=0,\quad t\geq 0.\end{split}$ (24)
We then integrate (24) for all $t\geq 0$,
$\varepsilon(t)\dot{x}(t)+\beta\nabla
f(x(t))-\varepsilon_{0}\dot{x}_{0}-\beta\nabla
f(x_{0})+\int_{0}^{t}(\alpha(s)-\varepsilon^{\prime}(s))\dot{x}(s)+\nabla
f(x(s))\mathop{}\\!\mathrm{d}s=0.$ (25)
For all $t\geq 0$, we define the variable,
$z(t)=\int_{0}^{t}(\alpha(s)-\varepsilon^{\prime}(s))\dot{x}(s)+\nabla
f(x(s))\mathop{}\\!\mathrm{d}s-\varepsilon_{0}\dot{x}_{0}-\beta\nabla
f(x_{0}).$
We differentiate $z$, for all $t>0$,
$\dot{z}(t)=(\alpha(t)-\varepsilon^{\prime}(t))\dot{x}(t)+\nabla f(x(t))$, so
that we can rewrite (25) as,
$\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))+z(t)=0\\\
\dot{z}(t)-(\alpha(t)-\varepsilon^{\prime}(t))\dot{x}(t)-\nabla
f(x(t))=0\end{cases},\quad t\geq 0.$
We substitute the first line into the second one,
$\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))+z(t)=0\\\
\beta\dot{z}(t)-\beta(\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t))\dot{x}(t)+z(t)=0\end{cases},\quad
t\geq 0.$ (26)
To ease the readability, we recall the notation
$\nu(t)=\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t)$ from
Section 2. Then define for all $t\geq 0$, $y(t)=z(t)-\nu(t)x(t)$, and
differentiate, $\dot{y}(t)=\dot{z}(t)-\nu(t)\dot{x}(t)-\nu^{\prime}(t)x(t)$.
We finally rewrite (26) as,
$\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))&+\nu(t)x(t)+y(t)=0\\\ \dot{y}(t)+\nu^{\prime}(t)x(t)&+\frac{\nu(t)}{\beta}x(t)+\frac{1}{\beta}y(t)=0\end{cases},$
which is (gVM-DIN-AVD). Finally, the initial condition on $y$ is
$y(0)=z(0)-\nu(0)x(0)=-\varepsilon_{0}\dot{x}_{0}-\beta\nabla
f(x_{0})-(\alpha_{0}-\varepsilon^{\prime}_{0}-\frac{1}{\beta}\varepsilon_{0})x_{0}.$
###### Remark A.1.
Notice that the quantity
$\nu(t)=\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t)$
involved in (gVM-DIN-AVD) also plays a key role in our analysis of Section 3,
see e.g., (6). In particular the sign of $\nu(t)$ changes the nature of (VM-
DIN-AVD) and is related to Assumption 1.
### A.2 Local Solutions are Global
Using the formulation (gVM-DIN-AVD), we proved local existence and uniqueness
of solutions of (VM-DIN-AVD) in Section 2. Using the same notations, we
justify that the local solution $(x,y)$ actually exists globally. According to
Lemma 3.4, the Lyapunov function
$U(t)=\frac{\varepsilon(t)}{2}\|\dot{x}(t)\|^{2}+f(x(t))-f(x^{\star})$ is non-
negative and decreasing. Thus, it is uniformly bounded on $\mathbb{R}_{+}$ and
the same holds for $t\mapsto f(x(t))$ since for all $t\geq 0$, $U(t)\geq
f(x(t))$. Then, $f$ is coercive by assumption, so $x$ is uniformly bounded on
$\mathbb{R}_{+}$ (otherwise $f(x)$ could not remain bounded). We now prove
that $y$ is also uniformly bounded. From (gVM-DIN-AVD), for all $t>0$,
$\dot{y}(t)=-\frac{1}{\beta}y(t)-(\frac{\nu(t)}{\beta}+\nu^{\prime}(t))x(t)$
so we can use the following integrating factor,
$y(t)=e^{-\frac{t}{\beta}}y_{0}-e^{-\frac{t}{\beta}}\int_{0}^{t}\frac{1}{\beta}e^{\frac{s}{\beta}}(\nu(s)+\beta\nu^{\prime}(s))x(s)\mathop{}\\!\mathrm{d}s.$
Using triangle inequalities, for all $t\geq 0$,
$\|y(t)\|\leq e^{-\frac{t}{\beta}}\|y_{0}\|+\sup_{s\geq
0}\|(\nu(s)+\beta\nu^{\prime}(s))x(s)\|e^{-\frac{t}{\beta}}\int_{0}^{t}\frac{1}{\beta}e^{\frac{s}{\beta}}\mathop{}\\!\mathrm{d}s\leq\|y_{0}\|+\sup_{s\geq
0}\|(\nu(s)+\beta\nu^{\prime}(s))x(s)\|.$ (27)
Using the definition of $\varepsilon$ and $\alpha$ from Sections 1.1 and 2,
observe that $\varepsilon$, $\alpha$, $\varepsilon^{\prime}$ and
$\alpha^{\prime}$ are all bounded on $\mathbb{R}_{+}$, and
$\varepsilon^{\prime\prime}$ is assumed to be bounded. So $\nu$ and
$\nu^{\prime}$ are bounded, and since we also proved that $x$ is uniformly
bounded on $\mathbb{R}_{+}$, we deduce from (27) that $y$ is uniformly bounded
as well. Hence, the unique local solution $(x,y)$ is global.
## Appendix B Proof of Theorem 3.8
This section is devoted to proving the general result of Section 3. Fix some
constants $c_{1},c_{2}>0$ and let $\varepsilon$ and $\alpha$ such that
Assumption 2 is satisfied with these constants. Let $x$ be the corresponding
solution of (VM-DIN-AVD), $x_{N}$, and $x_{LM}$ that of (CN) and (LM),
respectively. Following the same arguments as in the beginning of the proof of
Theorem 3.1, for all $t\geq 0$, $x(t)$, $x_{N}(t)$ and $x_{LM}(t)$ belong to
the bounded set $\mathsf{K}_{0}$ defined in that proof. Since $f$ is
$\mu$-strongly convex on $\mathsf{K}_{0}$, the proof relies again on bounding
difference of gradients, indeed, for all $t\geq 0$,
$\|x(t)-x_{N}(t)\|\leq\frac{1}{\mu}\|\nabla f(x(t))-\nabla f(x_{N}(t))\|\
\text{and}\ \|x(t)-x_{LM}(t)\|\leq\frac{1}{\mu}\|\nabla f(x(t))-\nabla
f(x_{LM}(t))\|.$ (28)
Recall also that the closed form of $\nabla f(x_{N})$ is given in (4).
#### Expressing $\nabla f(x)$.
We follow the exact same steps as in the proof of Theorem 3.1 to obtain the
expression of $\nabla f(x)$ given in (5), which we recall, for all $t\geq 0$,
$\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla
f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s.$
Here we do not assume any relation between $\varepsilon$ and $\alpha$, and we
thus need to find a more suitable expression for $\nabla f(x(t))$. We first
expand the terms in the integral, for all $t\geq 0$,
$\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla
f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)\\\
+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s-\int_{0}^{t}e^{\frac{s-t}{\beta}}\alpha(s)\dot{x}(s)\mathop{}\\!\mathrm{d}s.$
(29)
Then, for all $s\geq 0$, we have the identity,
$e^{\frac{s}{\beta}}\dot{x}(s)=e^{\frac{s}{\beta}}\dot{x}(s)+\frac{1}{\beta}e^{\frac{s}{\beta}}x(s)-\frac{1}{\beta}e^{\frac{s}{\beta}}x(s)=\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}s}(e^{\frac{s}{\beta}}x(s))-\frac{1}{\beta}e^{\frac{s}{\beta}}x(s),$
(30)
which we use to perform an integration by part on the last integral in (29),
$\int_{0}^{t}e^{\frac{s}{\beta}}\alpha(s)\dot{x}(s)\mathop{}\\!\mathrm{d}s=\left[\alpha(s)e^{\frac{s}{\beta}}x(s)\right]_{0}^{t}-\int_{0}^{t}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)e^{\frac{s}{\beta}}x(s)\mathop{}\\!\mathrm{d}s.$
Therefore,
$e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\alpha(s)\dot{x}(s)\mathop{}\\!\mathrm{d}s=\alpha(t)x(t)-e^{-\frac{t}{\beta}}\alpha_{0}x_{0}-\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)x(s)\mathop{}\\!\mathrm{d}s,$
(31)
and we can substitute in (29),
$\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla
f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s\\\
-\alpha(t)x(t)+e^{-\frac{t}{\beta}}\alpha_{0}x_{0}+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)x(s)\mathop{}\\!\mathrm{d}s.$
(32)
#### Uniform boundedness.
In view of exploiting (32), we recall that for all $(\varepsilon,\alpha)$, $x$
is uniformly bounded. So there exists $R>0$ such that for all
$(\varepsilon,\alpha)$, the corresponding solution $x$ of (VM-DIN-AVD) is such
that
$\sup_{t\geq 0}\|x(t)\|\leq R.$ (33)
We are now in position to prove (8).
#### Distance from $x$ to $x_{N}$.
We first gather (4) and (32). For all $t\geq 0$,
$\beta\nabla f(x(t))-\beta\nabla
f(x_{N}(t))=e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s\\\
+e^{-\frac{t}{\beta}}\alpha_{0}x_{0}-\alpha(t)x(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)x(s)\mathop{}\\!\mathrm{d}s.$
We then use (28) and triangle inequalities,
$\beta\mu\|x(t)-x_{N}(t)\|\leq
e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+\varepsilon(t)\|\dot{x}(t)\|+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left|\frac{\varepsilon(s)}{\beta}+\varepsilon^{\prime}(s)\right|\|\dot{x}(s)\|\mathop{}\\!\mathrm{d}s\\\
+e^{-\frac{t}{\beta}}\alpha_{0}\|x_{0}\|+\alpha(t)\|x(t)\|+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left|\frac{\alpha(s)}{\beta}+\alpha^{\prime}(s)\right|\|x(s)\|\mathop{}\\!\mathrm{d}s.$
(34)
By Assumption 2, for all $s\geq 0$,
$|\frac{\varepsilon(s)}{\beta}+\varepsilon^{\prime}(s)|\leq(\frac{1}{\beta}+c_{1})\varepsilon(s)$
and
$|\frac{\alpha(s)}{\beta}+\alpha^{\prime}(s)|\leq(\frac{1}{\beta}+c_{2})\alpha(s)$.
We then use Lemma 3.5 (denoting by $C>0$ the constant stated in the lemma) on
the first line of (34), and we use the boundedness (33) on the second line to
obtain,
$\beta\mu\|x(t)-x_{N}(t)\|\leq
e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+C\sqrt{\varepsilon(t)}+C\left(\frac{1}{\beta}+c_{1}\right)\int_{0}^{t}e^{\frac{s-t}{\beta}}\sqrt{\varepsilon(s)}\mathop{}\\!\mathrm{d}s\\\
+e^{-\frac{t}{\beta}}\alpha_{0}\|x_{0}\|+R\alpha(t)+R\left(\frac{1}{\beta}+c_{2}\right)\int_{0}^{t}e^{\frac{s-t}{\beta}}\alpha(s)\mathop{}\\!\mathrm{d}s.$
This proves (8).
#### Expressing $\nabla f(x_{LM})$.
We now repeat previous arguments but for (LM). First, (LM) is equivalent to
$\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}\nabla
f(x_{LM}(t))+\frac{1}{\beta}\nabla f(x_{LM}(t))=-\alpha(t)\dot{x}_{LM}(t).$
So using an integrating factor one can check that for all $t\geq 0$,
$\nabla f(x_{LM}(t))=e^{-\frac{t}{\beta}}\nabla
f(x_{0})-e^{-\frac{t}{\beta}}\int_{0}^{t}\frac{1}{\beta}e^{\frac{s}{\beta}}\alpha(s)\dot{x}_{LM}(s)\mathop{}\\!\mathrm{d}s.$
We can then follow exactly steps (30) to (31) so as to obtain,
$e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\alpha(s)\dot{x}_{LM}(s)\mathop{}\\!\mathrm{d}s=\alpha(t)x_{LM}(t)-e^{-\frac{t}{\beta}}\alpha_{0}x_{0}-e^{-\frac{t}{\beta}}\int_{0}^{t}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)e^{\frac{s}{\beta}}x_{LM}(s)\mathop{}\\!\mathrm{d}s.$
Finally, remark that for all $t\geq 0$,
$\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}t}f(x_{LM}(t))=-\alpha(t)\|\dot{x}_{LM}(t)\|^{2}-\beta\langle\dot{x}_{LM}(t),\nabla^{2}f(x_{LM}(t))\dot{x}_{LM}(t)\rangle\leq
0.$
So $f(x_{LM}(t))\leq f(x_{0})$ and, using the coercivity of $f$ as before, we deduce that for all choices of $\alpha$,
$\sup_{t\geq 0}\|x_{LM}(t)\|\leq R.$ (35)
#### Distance from $x$ to $x_{LM}$.
We subtract gradients,
$\beta\nabla f(x(t))-\beta\nabla f(x_{LM}(t))=e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\mathop{}\\!\mathrm{d}s\\\
-\alpha(t)(x(t)-x_{LM}(t))-\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)(x(s)-x_{LM}(s))\mathop{}\\!\mathrm{d}s,$
and we proceed as before using (28), Assumption 2 and Lemma 3.5. It holds
that,
$\beta\mu\|x(t)-x_{LM}(t)\|\leq
e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+C\sqrt{\varepsilon(t)}+C\left(\frac{1}{\beta}+c_{1}\right)\int_{0}^{t}e^{\frac{1}{\beta}(s-t)}\sqrt{\varepsilon(s)}\mathop{}\\!\mathrm{d}s\\\
+\alpha(t)\|x(t)-x_{LM}(t)\|+\left(\frac{1}{\beta}+c_{2}\right)\int_{0}^{t}e^{\frac{1}{\beta}(s-t)}\|x(s)-x_{LM}(s)\|\mathop{}\\!\mathrm{d}s.$
Finally, using (33) and (35), for all $s\geq 0$, $\|x(s)-x_{LM}(s)\|\leq 2R$,
which concludes the proof.
## Appendix C Integrability of $\varphi$ and Additional Asymptotic
Computations
Below we prove Lemma 4.7.
###### Proof.
We suppose that Assumptions 3 and 4 hold. As stated in Remark 4.3, since $\varphi$ is continuous, we only need to check its integrability when $t$ tends to $+\infty$. Let $t>0$. We first establish some useful identities; we omit the dependence on $t$ for the sake of readability.
$\displaystyle p^{\prime}$
$\displaystyle=\frac{\alpha^{\prime}\varepsilon-(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon^{2}},$
$\displaystyle p^{\prime\prime}$
$\displaystyle=\frac{\alpha^{\prime\prime}\varepsilon^{2}-2\alpha^{\prime}\varepsilon^{\prime}\varepsilon-(\alpha+\beta\lambda)\varepsilon^{\prime\prime}\varepsilon+2(\alpha+\beta\lambda)(\varepsilon^{\prime})^{2}}{\varepsilon^{3}}.$
Then,
$\displaystyle\begin{split}r&=\frac{p^{2}}{4}\left(1+\frac{2p^{\prime}}{p^{2}}-\frac{4\lambda}{\varepsilon
p^{2}}\right)=\frac{(\alpha+\beta\lambda)^{2}}{4\varepsilon^{2}}\left(1+\frac{2p^{\prime}\varepsilon^{2}}{(\alpha+\beta\lambda)^{2}}-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}\right)\\\
&=\frac{(\alpha+\beta\lambda)^{2}}{4\varepsilon^{2}}\left(1+\frac{2\alpha^{\prime}\varepsilon}{(\alpha+\beta\lambda)^{2}}-\frac{2\varepsilon^{\prime}}{(\alpha+\beta\lambda)}-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}\right).\end{split}$
(36)
An important consequence of Assumption 4 is that
$|\varepsilon^{\prime}(t)|=o(\varepsilon(t))$,
$|\varepsilon^{\prime\prime}(t)|=o(\varepsilon^{\prime}(t))$ (and the same holds for $\alpha$ w.r.t. its derivatives). Therefore, we deduce from (36)
that
$r(t)\sim_{+\infty}\frac{(\alpha(t)+\beta\lambda)^{2}}{4\varepsilon(t)^{2}},$
and we note that $1/r$ decays at the same speed as $\varepsilon^{2}$, which
will be useful later. In order to study $\varphi$, we now differentiate $r$,
$\displaystyle\begin{split}r^{\prime}&=\frac{p^{\prime}p}{2}\left(1+\frac{2p^{\prime}}{p^{2}}-\frac{4\lambda}{\varepsilon
p^{2}}\right)+\frac{1}{4}\left(2p^{\prime\prime}-\frac{4(p^{\prime})^{2}}{p}+\frac{8\lambda
p^{\prime}}{\varepsilon
p}+\frac{4\lambda\varepsilon^{\prime}}{\varepsilon^{2}}\right)\\\
&=\frac{2p^{\prime}}{p}r+\frac{1}{4}\left(2p^{\prime\prime}-\frac{4(p^{\prime})^{2}}{p}+\frac{8\lambda
p^{\prime}}{\varepsilon
p}+\frac{4\lambda\varepsilon^{\prime}}{\varepsilon^{2}}\right),\end{split}$
and
$\displaystyle\begin{split}r^{\prime\prime}=&2\frac{p^{\prime\prime}p-(p^{\prime})^{2}}{p^{2}}r+\frac{2p^{\prime}}{p}r^{\prime}\\\
&+\frac{1}{4}\left(2p^{\prime\prime\prime}+4\frac{(p^{\prime})^{3}-2p^{\prime\prime}p^{\prime}p}{p^{2}}+8\lambda\frac{p^{\prime\prime}p\varepsilon-(p^{\prime})^{2}\varepsilon-p^{\prime}p\varepsilon^{\prime}}{\varepsilon^{2}p^{2}}+\frac{4\lambda\varepsilon^{\prime\prime}}{\varepsilon^{2}}-\frac{8\lambda(\varepsilon^{\prime})^{2}}{\varepsilon^{3}}\right).\end{split}$
(37)
Then, to justify that $\varphi$ is integrable, we prove that
$\frac{r^{\prime\prime}}{r^{3/2}}$ and $\frac{(r^{\prime})^{2}}{r^{5/2}}$ are
integrable. Since we know that $1/r$ decays at the same speed as
$\varepsilon^{2}$, we can equivalently show that
$\varepsilon^{3}r^{\prime\prime}$ and $\varepsilon^{5}(r^{\prime})^{2}$ are
integrable. To this aim we fully expand all the terms in (37), and compute
$(r^{\prime})^{2}$, which is extremely long and involved. On the one hand, it
holds that, $\displaystyle\small r^{\prime 2}\varepsilon^{5}=$
$\displaystyle\left[-\frac{(\alpha+\beta\lambda)^{2}\left(-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}+1+\frac{\left(-{2(\alpha+\beta\lambda)\varepsilon^{\prime}}+{2\alpha^{\prime}\varepsilon}\right)}{(\alpha+\beta\lambda)^{2}}\right)\varepsilon^{\prime}}{2\sqrt{\varepsilon}}+\frac{(\alpha+\beta\lambda)^{2}\sqrt{\varepsilon}}{4}\left(-\frac{4\lambda\varepsilon^{\prime}}{(\alpha+\beta\lambda)^{2}}+\frac{8\lambda\alpha^{\prime}\varepsilon}{(\alpha+\beta\lambda)^{3}}\right.\right.$
$\displaystyle\left.\left.+\frac{2\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}+{2\alpha^{\prime}\varepsilon}\right)\varepsilon^{\prime}}{(\alpha+\beta\lambda)^{2}}+\frac{\left(\frac{4(\alpha+\beta\lambda)\varepsilon^{\prime
2}}{\varepsilon}-{2(\alpha+\beta\lambda)\varepsilon^{\prime\prime}}-{4\alpha^{\prime}\varepsilon^{\prime}}+{2\alpha^{\prime\prime}\varepsilon}\right)}{(\alpha+\beta\lambda)^{2}}-\frac{2\left(-{2(\alpha+\beta\lambda)\varepsilon^{\prime}}+{2\alpha^{\prime}\varepsilon}\right)\alpha^{\prime}}{(\alpha+\beta\lambda)^{3}}\right)\right.$
$\displaystyle\left.+\frac{(\alpha+\beta\lambda)}{2}{\left(-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}+1+\frac{\left(-{2(\alpha+\beta\lambda)\varepsilon^{\prime}}+{2\alpha^{\prime}\varepsilon}\right)}{(\alpha+\beta\lambda)^{2}}\right)}\alpha^{\prime}\sqrt{\varepsilon}\phantom{\frac{\frac{\beta}{\beta}}{\beta}}\right]^{2}.$
On the other hand, we have, $\displaystyle r^{\prime\prime}\varepsilon^{3}=$
$\displaystyle-\lambda\varepsilon^{\prime\prime}\varepsilon+\frac{4\lambda\alpha^{\prime}\varepsilon^{\prime}\varepsilon}{\alpha+\beta\lambda}+\frac{2\lambda\alpha^{\prime\prime}\varepsilon^{2}}{\alpha+\beta\lambda}-\frac{6\lambda\alpha^{\prime
2}\varepsilon^{2}}{(\alpha+\beta\lambda)^{2}}+\frac{(\alpha+\beta\lambda)^{2}}{2}\left(-\frac{3\varepsilon^{\prime
2}}{\varepsilon}+\varepsilon^{\prime\prime}\right)\left(\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}-1+\frac{2\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}\right)$
$\displaystyle+{2(\alpha+\beta\lambda)\left(\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}-1+\frac{2\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}\right)\alpha^{\prime}\varepsilon^{\prime}}-\frac{\left((\alpha+\beta\lambda)\alpha^{\prime\prime}+\alpha^{\prime
2}\right)}{2}\left(\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}-1+\frac{2\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}\right)\varepsilon$
$\displaystyle-{\left(\frac{(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}-\alpha^{\prime}\right)\varepsilon^{\prime
2}}-\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)\varepsilon^{\prime\prime}-2\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime
2}}{\varepsilon}+{(\alpha+\beta\lambda)\varepsilon^{\prime\prime}}+{2\alpha^{\prime}\varepsilon^{\prime}}-\alpha^{\prime\prime}\varepsilon\right)\varepsilon^{\prime}$
$\displaystyle+{2\left(2\lambda\varepsilon^{\prime}-\frac{4\lambda\alpha^{\prime}\varepsilon}{\alpha+\beta\lambda}+2\left(\frac{(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}-\alpha^{\prime}\right)\varepsilon^{\prime}+\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime
2}}{\varepsilon}+{(\alpha+\beta\lambda)\varepsilon^{\prime\prime}}+{2\alpha^{\prime}\varepsilon^{\prime}}-\alpha^{\prime\prime}\varepsilon\right)-\frac{2\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)\alpha^{\prime}}{\alpha+\beta\lambda}\right)\varepsilon^{\prime}}$
$\displaystyle-\frac{1}{2}\left(\frac{6(\alpha+\beta\lambda)\varepsilon^{\prime
3}}{\varepsilon}-{6(\alpha+\beta\lambda)\varepsilon^{\prime}\varepsilon^{\prime\prime}}+{(\alpha+\beta\lambda)\varepsilon^{\prime\prime\prime}\varepsilon}-{6\alpha^{\prime}\varepsilon^{\prime
2}}+{3\alpha^{\prime}\varepsilon^{\prime\prime}\varepsilon}+{3\alpha^{\prime\prime}\varepsilon^{\prime}\varepsilon}-\alpha^{\prime\prime\prime}\varepsilon^{2}\right)$
$\displaystyle+\frac{1}{\alpha+\beta\lambda}\left({4\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)\alpha^{\prime}\varepsilon^{\prime}}+{\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)\alpha^{\prime\prime}\varepsilon}+{2\left(-{2(\alpha+\beta\lambda)\varepsilon^{\prime
2}}+{(\alpha+\beta\lambda)\varepsilon^{\prime\prime}\varepsilon}+{2\alpha^{\prime}\varepsilon^{\prime}\varepsilon}-\alpha^{\prime\prime}\varepsilon^{2}\right)\alpha^{\prime}}\right)$
$\displaystyle-\frac{2}{\alpha+\beta\lambda}\left(2\lambda\varepsilon^{\prime}-\frac{4\lambda\alpha^{\prime}\varepsilon}{\alpha+\beta\lambda}+2\left(\frac{(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}-\alpha^{\prime}\right)\varepsilon^{\prime}+\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime
2}}{\varepsilon}+{(\alpha+\beta\lambda)\varepsilon^{\prime\prime}}+{2\alpha^{\prime}\varepsilon^{\prime}}-\alpha^{\prime\prime}\varepsilon\right)-\frac{2\left({(\alpha+\beta\lambda)\varepsilon^{\prime}}-\alpha^{\prime}\varepsilon\right)\alpha^{\prime}}{\alpha+\beta\lambda}\right)\alpha^{\prime}\varepsilon$
$\displaystyle-\frac{3\left({(\alpha+\beta\lambda)\varepsilon^{\prime}\varepsilon}-\alpha^{\prime}\varepsilon^{2}\right)\alpha^{\prime
2}}{(\alpha+\beta\lambda)^{2}}.$ We then analyze the integrability of each of
the terms above. By Assumption 4, $\varepsilon^{\prime}$,
$\varepsilon^{\prime\prime}$ and $\varepsilon^{\prime\prime\prime}$ are
integrable and the same goes for $\alpha^{\prime}$, $\alpha^{\prime\prime}$
and $\alpha^{\prime\prime\prime}$, which is enough to justify the
integrability of almost all the terms above. We finally see that we also need
$\frac{(\varepsilon^{\prime})^{2}}{\varepsilon}$ and
$\frac{(\varepsilon^{\prime})^{3}}{\varepsilon}$ to be integrable, which holds
by Assumption 4. Overall, $\varphi$ is integrable on $\mathbb{R}_{+}$. ∎
We now state and prove the following result which was used at the end of the
proof of Theorem 4.8.
###### Lemma C.1.
Under Assumptions 3 and 4, for all $s\geq 0$,
$\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}=\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s)).$
###### Proof.
We omit the time dependence on $s\geq 0$ for the sake of readability. Using
Assumption 3 we can define and expand the following quantity,
$\displaystyle\frac{1}{16}\left(\frac{2p^{\prime}}{p^{3/2}}-\frac{4\lambda\sqrt{\varepsilon}}{(\alpha+\beta\lambda)^{3/2}}\right)^{2}=\frac{\lambda^{2}\varepsilon}{(\alpha+\beta\lambda)^{3}}-\frac{p^{\prime}\lambda\sqrt{\varepsilon}}{p^{3/2}(\alpha+\beta\lambda)^{3/2}}+\frac{(p^{\prime})^{2}}{4p^{3}}$
$\displaystyle=\frac{\lambda^{2}\varepsilon}{(\alpha+\beta\lambda)^{3}}-\frac{\lambda(\alpha^{\prime}\varepsilon-(\alpha+\beta\lambda)\varepsilon^{\prime})}{(\alpha+\beta\lambda)^{3}}+\frac{1}{4(\alpha+\beta\lambda)^{3}}\left((\alpha^{\prime})^{2}\varepsilon+\frac{(\varepsilon^{\prime})^{2}}{\varepsilon}(\alpha+\beta\lambda)^{2}-2\alpha^{\prime}\varepsilon^{\prime}(\alpha+\beta\lambda)\right).$
Assumption 4 implies in particular that $|\varepsilon^{\prime}(t)|=o(\varepsilon(t))$ and that $\alpha^{\prime}(t)\to 0$, which we use in the equality above to obtain the desired conclusion. ∎
## Appendix D Additional Experiments and Details
We first detail the discretization that we used for approximating the
solutions of the three ODEs considered in Section 5. We use Euler
discretization schemes with fixed step-size $\gamma>0$ and approximate the
solutions at times $t_{k}=\gamma k$, for all $k\in\mathbb{N}$. For a
trajectory $x$, we use the notation
$x(t_{k})\stackrel{{\scriptstyle\textrm{def}}}{{=}}x^{(k)}$. The approximation
of (CN) is obtained by explicit discretization, so that for all
$k\in\mathbb{N}$, we have,
$x_{N}^{(k+1)}=x_{N}^{(k)}-\gamma\left[\beta\nabla^{2}f(x_{N}^{(k)})\right]^{-1}\nabla
f(x_{N}^{(k)}).$ (38)
Then, defining $\varepsilon_{k}=\varepsilon(t_{k})$ and
$\alpha_{k}=\alpha(t_{k})$, (LM) and (VM-DIN-AVD) are obtained via Euler semi-
implicit discretization. The solution of (LM) is approximated by,
$x_{LM}^{(k+1)}=x_{LM}^{(k)}-\gamma\left[\alpha_{k}I_{n}+\beta\nabla^{2}f(x_{LM}^{(k)})\right]^{-1}\nabla
f(x_{LM}^{(k)}),$ (39)
where $I_{n}$ is the identity matrix on $\mathbb{R}^{n}$. The solution of (VM-
DIN-AVD) is obtained similarly,
$x^{(k+1)}=x^{(k)}+\left[(\varepsilon_{k}+\gamma\alpha_{k})I_{n}+\gamma\beta\nabla^{2}f(x^{(k)})\right]^{-1}\left(\varepsilon_{k}(x^{(k)}-x^{(k-1)})-\gamma^{2}\nabla
f(x^{(k)})\right).$ (40)
As a safety check, one can see that for $\varepsilon_{k}=0$, (40) is equivalent to (39), which is itself equivalent to (38) when $\alpha_{k}=0$.
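For concreteness, here is a minimal NumPy sketch (not the authors' code) of the three schemes (38)–(40) on a strongly convex quadratic; the matrix $H$, the choices of $\varepsilon$, $\alpha$, $\beta$, the step-size and the horizon are illustrative assumptions.

```python
# Minimal sketch of schemes (38)-(40) for f(x) = 0.5 * x^T H x, whose
# gradient is H x and whose Hessian is the constant SPD matrix H.
import numpy as np

rng = np.random.default_rng(0)
n, beta, gamma, K = 20, 1.0, 1e-1, 500
A = rng.standard_normal((n, n))
H = A.T @ A + 0.1 * np.eye(n)              # symmetric positive definite
grad = lambda x: H @ x
eps = lambda t: 1.0 / (1.0 + t)            # non-integrable, vanishing
alp = lambda t: 1.0 / (1.0 + t) ** 2       # integrable
I = np.eye(n)

x0 = rng.standard_normal(n)
xN, xLM = x0.copy(), x0.copy()
x_prev, x_cur = x0.copy(), x0.copy()       # x^(-1) = x^(0): zero velocity

for k in range(K):
    t = gamma * k
    # (38): explicit scheme for the continuous Newton dynamics (CN)
    xN = xN - gamma * np.linalg.solve(beta * H, grad(xN))
    # (39): semi-implicit scheme for Levenberg-Marquardt (LM)
    xLM = xLM - gamma * np.linalg.solve(alp(t) * I + beta * H, grad(xLM))
    # (40): semi-implicit scheme for (VM-DIN-AVD)
    M = (eps(t) + gamma * alp(t)) * I + gamma * beta * H
    rhs = eps(t) * (x_cur - x_prev) - gamma**2 * grad(x_cur)
    x_prev, x_cur = x_cur, x_cur + np.linalg.solve(M, rhs)

print(np.linalg.norm(xN), np.linalg.norm(xLM), np.linalg.norm(x_cur))
```

Here the minimizer is $x^{\star}=0$, so the printed norms are the distances to the optimum for the three dynamics.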
Figure 6: Similar experiment and figures as those described in Figure 2, but for a poorly conditioned quadratic $f(x)=\frac{1}{2}\|Ax\|^{2}$ (first two rows) and the function $f(x)=\log\left(\sum_{i=1}^{n}e^{x_{i}}\right)+\frac{1}{2}\|Ax\|^{2}$ (last two rows).
## References
* Alecsa et al. [2021] Cristian Daniel Alecsa, Szilárd Csaba László, and Titus Pinţa. An extension of the second order dynamical system that models Nesterov’s convex gradient method. _Applied Mathematics & Optimization_, 84(2):1687–1716, 2021.
* Alvarez et al. [2002] Felipe Alvarez, Hedy Attouch, Jérôme Bolte, and Patrick Redont. A second-order gradient-like dissipative dynamical system with Hessian-driven damping: Application to optimization and mechanics. _Journal de Mathématiques Pures et Appliquées_ , 81(8):747–779, 2002.
* Attouch and Cabot [2017] Hedy Attouch and Alexandre Cabot. Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity. _Journal of Differential Equations_ , 263(9):5412–5458, 2017.
* Attouch and Fadili [2022] Hedy Attouch and Jalal Fadili. From the ravine method to the Nesterov method and vice versa: A dynamical system perspective. _SIAM Journal on Optimization_ , 32(3):2074–2101, 2022.
* Attouch and László [2020] Hedy Attouch and Szilárd Csaba László. Newton-like inertial dynamics and proximal algorithms governed by maximally monotone operators. _SIAM Journal on Optimization_ , 30(4):3252–3283, 2020.
* Attouch and László [2021] Hedy Attouch and Szilárd Csaba László. Continuous Newton-like inertial dynamics for monotone inclusions. _Set-Valued and Variational Analysis_ , 29(3):555–581, 2021.
* Attouch and Svaiter [2011] Hedy Attouch and Benar Fux Svaiter. A continuous dynamical Newton-like approach to solving monotone inclusions. _SIAM Journal on Control and Optimization_ , 49(2):574–598, 2011.
* Attouch et al. [2013] Hedy Attouch, Patrick Redont, and Benar Fux Svaiter. Global convergence of a closed-loop regularized Newton method for solving monotone inclusions in hilbert spaces. _Journal of Optimization Theory and Applications_ , 157(3):624–650, 2013.
* Attouch et al. [2016] Hedy Attouch, Juan Peypouquet, and Patrick Redont. Fast convex optimization via inertial dynamics with Hessian driven damping. _Journal of Differential Equations_ , 261(10):5734–5783, 2016.
* Attouch et al. [2020] Hedy Attouch, Zaki Chbani, Jalal Fadili, and Hassan Riahi. First-order optimization algorithms via inertial systems with Hessian driven damping. _Math. Program._ , 194(4):1–43, 2020.
* Attouch et al. [2022a] Hedy Attouch, Aïcha Balhag, Zaki Chbani, and Hassan Riahi. Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling. _Evolution Equations and Control Theory_ , 11(2):487–514, 2022a.
* Attouch et al. [2022b] Hedy Attouch, Radu Ioan Boţ, and Ernö Robert Csetnek. Fast optimization via inertial dynamics with closed-loop damping. _Journal of the European Mathematical Society (published online)_ , 2022b.
* Benaïm et al. [2005] Michel Benaïm, Josef Hofbauer, and Sylvain Sorin. Stochastic approximations and differential inclusions. _SIAM Journal on Control and Optimization_ , 44(1):328–348, 2005.
* Bender and Orszag [1999] Carl M Bender and Steven A Orszag. _Asymptotic methods and perturbation theory_. Springer, 1999.
* Blumenthal [1912] Otto Blumenthal. Über asymptotische Integration linearer Differentialgleichungen mit Anwendung auf eine asymptotische Theorie der Kugelfunktionen. _Archiv der Mathematik und Physik_ , 19:136–174, 1912.
* Boţ et al. [2021] Radu Ioan Boţ, Ernö Robert Csetnek, and Szilárd Csaba László. Tikhonov regularization of a second order dynamical system with Hessian driven damping. _Math. Program._ , 189(1):151–186, 2021.
* Brillouin [1926] Léon Brillouin. Remarques sur la mécanique ondulatoire. _Journal de Physique et Le Radium_ , 7(12):353–368, 1926.
* Broyden [1970] Charles George Broyden. The convergence of a class of double-rank minimization algorithms. _IMA Journal of Applied Mathematics_ , 6(1):76–90, 1970.
* Castera [2021] Camille Castera. Inertial Newton algorithms avoiding strict saddle points. _arXiv:2111.04596_ , 2021.
* Castera et al. [2021] Camille Castera, Jérôme Bolte, Cédric Févotte, and Edouard Pauwels. An inertial Newton algorithm for deep learning. _Journal of Machine Learning Research_ , 22(134):1–31, 2021.
* Chen and Luo [2019] Long Chen and Hao Luo. First order optimization methods based on Hessian-driven Nesterov accelerated gradient flow. _arXiv:1912.09276_ , 2019.
* Emmrich [1999] Etienne Emmrich. _Discrete versions of Gronwall’s lemma and their application to the numerical analysis of parabolic problems_. TU Fachbereich, 1999.
* Fletcher [1970] Roger Fletcher. A new approach to variable metric algorithms. _The computer journal_ , 13(3):317–322, 1970.
* Gavurin [1958] Mark Konstantinovich Gavurin. Nonlinear functional equations and continuous analogues of iteration methods. _Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika_ , 1(5):18–31, 1958.
* Goldfarb [1970] Donald Goldfarb. A family of variable-metric methods derived by variational means. _Mathematics of computation_ , 24(109):23–26, 1970.
* Green [1838] George Green. On the motion of waves in a variable canal of small depth and width. _Transactions of the Cambridge Philosophical Society_ , 6:457–462, 1838.
* Hunter [2007] John D Hunter. Matplotlib: A 2D graphics environment. _Computing in science & engineering_, 9(3):90–95, 2007.
* Kramers [1926] Hendrik Anthony Kramers. Wellenmechanik und halbzahlige Quantisierung. _Zeitschrift für Physik_ , 39(10):828–840, 1926.
* Lin and Jordan [2022] Tianyi Lin and Michael I Jordan. A control-theoretic perspective on optimal high-order optimization. _Math. Program._ , 195(1):929–975, 2022.
* Liouville [1836] Joseph Liouville. Mémoire sur le développement des fonctions ou parties de fonctions en séries dont les divers termes sont assujétis à satisfaire à une même équation différentielle du second ordre, contenant un paramètre variable. _Journal de Mathématiques Pures et Appliquées_ , 2:253–265, 1836.
* Liu and Nocedal [1989] Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. _Math. Program._ , 45(1):503–528, 1989.
* Ljung [1977] Lennart Ljung. Analysis of recursive stochastic algorithms. _IEEE transactions on automatic control_ , 22(4):551–575, 1977.
* Nesterov [1983] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence ${O}(1/k^{2})$. In _Doklady AN USSR_ , volume 269, pages 543–547, 1983.
* Nesterov [2003] Yurii Nesterov. _Introductory lectures on convex optimization: A basic course_ , volume 87. Springer Science & Business Media, 2003.
* Olver [1997] Frank Olver. _Asymptotics and special functions_. Academic Press, 1997.
* Olver [1961] Frank William John Olver. Error bounds for the Liouville–Green (or WKB) approximation. In _Mathematical Proceedings of the Cambridge Philosophical Society_ , volume 57, pages 790–810. Cambridge University Press, 1961.
* Polyak [1964] Boris T Polyak. Some methods of speeding up the convergence of iteration methods. _USSR Computational Mathematics and Mathematical Physics_ , 4(5):1–17, 1964.
* Rossum [1995] Guido Rossum. _Python reference manual_. CWI (Centre for Mathematics and Computer Science), 1995.
* Shanno [1970] David F Shanno. Conditioning of quasi-Newton methods for function minimization. _Mathematics of computation_ , 24(111):647–656, 1970.
* Shi et al. [2022] Bin Shi, Simon S Du, Michael I Jordan, and Weijie J Su. Understanding the acceleration phenomenon via high-resolution differential equations. _Math. Program._ , 195:79–148, 2022.
* Su et al. [2014] Weijie Su, Stephen Boyd, and Emmanuel Candes. A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights. In _Advances in Neural Information Processing Systems (NeurIPS)_ , volume 27, pages 2510–2518, 2014.
* Taylor [1982] James G Taylor. Improved error bounds for the Liouville-Green (or WKB) approximation. _Journal of Mathematical Analysis and Applications_ , 85(1):79–89, 1982.
* Walt et al. [2011] Stéfan van der Walt, Chris Colbert, and Gael Varoquaux. The NumPy array: a structure for efficient numerical computation. _Computing in Science & Engineering_, 13(2):22–30, 2011.
* Wentzel [1926] Gregor Wentzel. Eine Verallgemeinerung der Quantenbedingungen für die Zwecke der Wellenmechanik. _Zeitschrift für Physik_ , 38(6):518–529, 1926.
# Zeckendorf expansion, Dirichlet series and infinite series involving the
infinite Fibonacci word
Shuo Li
###### Abstract
Let $\beta=\frac{1+\sqrt{5}}{2}$, $(a_{n})_{n\in\mathbb{N}^{+}}$ be a non-
uniform morphic sequence involving the infinite Fibonacci word and
$(\delta(n))_{n\in\mathbb{N}^{+}}$ be a positive sequence such that for all
positive integers $n$, $\delta(n)=\frac{1}{\sqrt{5}}\sum_{j\geq
0}\epsilon_{j}\beta^{j+2}$ if the unique Zeckendorf expansion of $n$ is
$n=\sum_{j\geq 0}\epsilon_{j}F_{j+2}$ with Fibonacci numbers $F_{0},F_{1},F_{2},\ldots$ We define and study some Dirichlet series of the form $\sum_{n\geq 1}\frac{a_{n}}{(\delta(n))^{s}}$ and relations between them.
Moreover, we compute the values of some infinite series involving the infinite
Fibonacci word.
## 1 Introduction and main results
Several infinite products and series involving sum-of-digits functions as well as block-counting functions were extensively studied in [AC85][AS89][ASS07][ARS18][Hu16]. The major idea in these articles is that sum-of-digits functions and block-counting functions are all closely related to the notion of the automatic sequence, which can be defined as the images of the fixed points of uniform morphisms under some codings (more background is given in [AS03]). However, we may expect to compute infinite products or series involving functions in a more general class, for example, the morphic words. Herein, we offer a formal definition of the morphic words: let $\Sigma$ and $\Delta$ denote two non-empty sets of symbols and let $\Sigma^{*}$ and $\Delta^{*}$ respectively denote the free monoids on $\Sigma$ and $\Delta$. A sequence $(a_{n})_{n\in\mathbb{N}^{+}}\in\Delta^{*}$ is called morphic if there exists a sequence $(e_{n})_{n\in\mathbb{N}^{+}}\in\Sigma^{*}$, a morphism $f:\Sigma^{*}\to\Sigma^{*}$ and a morphism called coding $\rho:\Sigma\to\Delta$ such that $(a_{n})_{n\in\mathbb{N}^{+}}=\rho((e_{n})_{n\in\mathbb{N}^{+}})$ under the condition that the sequence $(e_{n})_{n\in\mathbb{N}^{+}}$ is a fixed point of the morphism $f$. Particularly, if the morphism $f$ is uniform, that is to say the length of $f(a)$ is constant for all elements $a\in\Sigma$, the sequence $(a_{n})_{n\in\mathbb{N}^{+}}$ is called automatic. A typical example of automatic sequences is the Thue-Morse sequence, which is the fixed point of the morphism $0\to 0,1$ and $1\to 1,0$. In this article, we focus on non-uniform morphic sequences involving the infinite Fibonacci word, which is the fixed point of the non-uniform morphism $0\to 0,1$ and $1\to 0$. The main idea of this article arises from the fact that the infinite Fibonacci word can be deduced by counting the number of $0$s at the end of the Zeckendorf expansion of each integer.
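The following short Python sketch (our own illustration; all helper names are hypothetical) makes this fact concrete: it computes Zeckendorf expansions by the greedy algorithm and compares the trailing-zero parities with a prefix of the fixed point of $0\to 0,1$, $1\to 0$.

```python
# Read the infinite Fibonacci word off the Zeckendorf expansions:
# f(n) = 1 iff the expansion of n ends with an odd number of zeros.
def zeckendorf(n):
    """Bits (eps_0, eps_1, ...) with n = sum_j eps_j F_{j+2}."""
    fibs = [1, 2]                          # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * len(fibs)
    for j in range(len(fibs) - 1, -1, -1):
        if fibs[j] <= n:                   # greedy choice gives Zeckendorf
            bits[j], n = 1, n - fibs[j]
    while bits and bits[-1] == 0:
        bits.pop()
    return bits

def f(n):
    trailing_zeros = zeckendorf(n).index(1)
    return trailing_zeros % 2

w = "0"                                    # iterate 0 -> 01, 1 -> 0
while len(w) < 30:
    w = "".join("01" if c == "0" else "0" for c in w)
assert w[:30] == "".join(str(f(n)) for n in range(1, 31))
print(w[:30])
```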
Let the Fibonacci numbers be defined by $F_{0}=0,F_{1}=1$, and
$F_{n}=F_{n-1}+F_{n-2}$ for $n\geq 2$. From Zeckendorf’s theorem [Zec72],
every positive integer $n$ can be uniquely written as $n=\sum_{j\geq
0}\epsilon_{j}F_{j+2}$ with $\epsilon_{j}\in\left\\{0,1\right\\}$ under the
condition that $\epsilon_{j}\epsilon_{j+1}=0$ for all $j\geq 0$. This
expansion is called the Zeckendorf expansion. Using this notion, we can define
some other sequences which present useful properties in analysis as well as in
combinatorics. On the one hand, if we let $\beta$ denote
$\frac{1+\sqrt{5}}{2}$, then
$F_{n}=\frac{1}{\sqrt{5}}(\beta^{n}-(-\beta)^{-n})$ for all $n\geq 0$. Thus,
it is natural to define and study the following two sequences
$(\delta(n))_{n\in\mathbb{N}}$ and $(\delta^{\prime}(n))_{n\in\mathbb{N}}$:
for all integers $n\geq 0$, if $n=\sum_{j\geq 0}\epsilon_{j}F_{j+2}$ is the
Zeckendorf expansion of $n$, then
$\delta(n)=\frac{1}{\sqrt{5}}\sum_{j\geq
0}\epsilon_{j}\beta^{j+2}\;\text{and}\;\delta^{\prime}(n)=\frac{1}{\sqrt{5}}\sum_{j\geq
0}\epsilon_{j}(-\beta)^{-j-2}.$
In Section two, we will study the arithmetic properties of the sequences
$(\delta(n))_{n\in\mathbb{N^{+}}}$ and
$(\delta^{\prime}(n))_{n\in\mathbb{N^{+}}}$, as well as some properties of the
Dirichlet series $F(s)=\sum_{n\geq 1}\frac{1}{(\delta(n))^{s}}$. On the other
hand, some morphic sequences can also be defined by using the Zeckendorf
expansion, namely, the Fibonacci-automatic sequences defined and studied in
[MSS16][DMSS16][DMR+17]. Among these sequences, a typical example is the
infinite Fibonacci sequence $(f(n))_{n\in\mathbb{N^{+}}}$. In Section three,
we study some combinatorial properties of the infinite Fibonacci sequence and
other morphic sequences. In Section four, we present a combinatorial proof of
the meromorphic continuation of $F$ on the whole complex plane; the main
theorem is announced as follows:
###### Theorem 1
The Dirichlet series $F$ converges absolutely on the set
$\left\\{s|\Re(s)>1\right\\}$, and has a meromorphic continuation on the whole
complex plane. Moreover, its poles are located on the set of zeros of the
function $s\to 1-2\beta^{-s}+\beta^{-3s}$.
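Observe that $s=1$ is indeed a zero of this function: since $\beta^{2}=\beta+1$, one has $\beta^{-1}=\beta-1$ and $\beta^{-3}=2\beta-3$, hence $1-2\beta^{-1}+\beta^{-3}=1-2(\beta-1)+(2\beta-3)=0$. This is consistent with the pole of $F$ at $s=1$ exhibited in Proposition 3 below.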
Moreover, we extend this result to other Dirichlet series with morphic
coefficients. Similar results can be found in [Sou19]. Finally, in Section
five, we use the method introduced in [AC85] to compute some infinite series:
###### Theorem 2
Letting $(f(n))_{n\in\mathbb{N}^{+}}$, $(d(n))_{n\in\mathbb{N}^{+}}$,
$(r(n))_{n\in\mathbb{N}^{+}}$, $(s(n))_{n\in\mathbb{N}^{+}}$ and
$(t(n))_{n\in\mathbb{N}^{+}}$ be integer sequences defined in the following
sections and letting $\beta=\frac{\sqrt{5}+1}{2}$, then we have
$\sum_{n\geq
2}r(n)(\frac{\sqrt{5}}{(n-1)\sqrt{5}-\\{\beta(n-1)\\}+1-f(n-1)}-\frac{\sqrt{5}}{n\sqrt{5}-\\{\beta
n\\}+1-f(n)})=\frac{\beta-1}{\beta^{2}}\ln(\beta),$
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}(\frac{\sqrt{5}}{(n-1)\sqrt{5}-\\{\beta(n-1)\\}+1-f(n-1)}-\frac{\sqrt{5}}{n\sqrt{5}-\\{\beta
n\\}+1-f(n)})=\beta^{-1}\ln(\beta),$ $\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=2\end{subarray}}(\frac{\sqrt{5}}{(n-1)\sqrt{5}-\\{\beta(n-1)\\}+1-f(n-1)}-\frac{\sqrt{5}}{n\sqrt{5}-\\{\beta
n\\}+1-f(n)})=\beta^{-2}\ln(\beta),$ $\sum_{n\geq
2}t(n)(\frac{\sqrt{5}}{(n-1)\sqrt{5}-\\{\beta(n-1)\\}+1-f(n-1)}-\frac{\sqrt{5}}{n\sqrt{5}-\\{\beta
n\\}+1-f(n)})=\beta^{2}-\frac{\beta-1}{\beta^{2}}\ln(\beta),$ $\sum_{n\geq
2}\frac{s(n)\sqrt{5}}{n\sqrt{5}-\\{\beta
n\\}+1-f(n)}=(\frac{3}{2}\beta^{-4}-\beta^{-2})\ln(\beta),$
where $\\{x\\}$ is the fractional part of $x$.
## 2 Arithmetic properties of $(\delta(n))_{n\in\mathbb{N^{+}}}$
###### Proposition 1
Let $e(n)$ denote the last bit in the Zeckendorf expansion of $n$; that is to
say, for a given integer $n$, $e(n)$ is the coefficient $\epsilon_{0}$ in the
expansion $n=\sum_{j\geq 0}\epsilon_{j}F_{j+2}$, and we then have the
following equations:
$\delta(n+1)-\delta(n)=\begin{cases}\frac{1}{\sqrt{5}}\beta^{2}\;\;\;\text{if}\;e(n)=0,\\\
\frac{1}{\sqrt{5}}\beta\;\;\;\;\text{if}\;e(n)=1.\end{cases}$
Consequently, the sequence $(\delta(n))_{n\in\mathbb{N}}$ is an unbounded
increasing sequence.
* Proof
For a given integer $n$, if $e(n)=1$, since by convention the factor $\overline{1,1}$ does not occur in the Zeckendorf expansion of $n$, we can suppose that this expansion is in the form $\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,\underbrace{0,1}_{i\;\text{times}}}$, where the $\epsilon_{j}$ are either $0$ or $1$. From the recurrence relations between the Fibonacci numbers and the uniqueness of the Zeckendorf expansion, the expansion of $n+1$ is in the form $\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1,\underbrace{0}_{2i-1\;\text{times}}}$.
Therefore,
$\delta(n+1)-\delta(n)=\frac{1}{\sqrt{5}}(\beta^{2i+1}-\sum_{k=1}^{i}\beta^{2k}).$
Using the equation $\beta^{2}-\beta=1$ repeatedly, we have $\delta(n+1)-\delta(n)=\frac{1}{\sqrt{5}}\beta$.
In the same way, if $e(n)=0$ and the Zeckendorf expansion of $n$ ends up with
a suffix $\overline{1,0}$, then we can suppose that this expansion is in the
form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,\underbrace{0,1}_{i\;\text{times}},0}$.
Thus, the expansion of $n+1$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1,\underbrace{0}_{2i\;\text{times}}}$.
Therefore,
$\delta(n+1)-\delta(n)=\frac{1}{\sqrt{5}}(\beta^{2i+2}-\sum_{k=1}^{i}\beta^{2k+1})=\frac{1}{\sqrt{5}}\beta^{2}.$
In the last case, if $e(n)=0$ but the Zeckendorf expansion of $n$ ends up with
a suffix $\overline{0,0}$, then we can suppose that this expansion is in the
form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,0}$.
Thus, the expansion of $n+1$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1}$,
so that
$\delta(n+1)-\delta(n)=\frac{1}{\sqrt{5}}\beta^{2}.$
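As a quick numerical sanity check of Proposition 1 (our sketch, with hypothetical helper names):

```python
# Check: delta(n+1) - delta(n) is beta^2/sqrt(5) if e(n) = 0,
# and beta/sqrt(5) if e(n) = 1.
from math import sqrt

beta = (1 + sqrt(5)) / 2

def zeckendorf(n):
    fibs = [1, 2]                          # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * len(fibs)
    for j in range(len(fibs) - 1, -1, -1):
        if fibs[j] <= n:
            bits[j], n = 1, n - fibs[j]
    return bits

def delta(n):
    bits = zeckendorf(n)
    return sum(b * beta ** (j + 2) for j, b in enumerate(bits)) / sqrt(5)

for n in range(1, 500):
    e = zeckendorf(n)[0]                   # last bit eps_0
    expected = beta / sqrt(5) if e == 1 else beta**2 / sqrt(5)
    assert abs(delta(n + 1) - delta(n) - expected) < 1e-9
print("Proposition 1 verified for n = 1, ..., 499")
```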
###### Proposition 2
Let us define the sequence $(d(n))_{n\in\mathbb{N^{+}}}$ in the following way:
$d(n)=\begin{cases}0\;\text{if the Zeckendorf expansion of $n$ admits the string $\overline{1}$ as a suffix},\\\ 1\;\text{if the Zeckendorf expansion of $n$ admits a string of type $\overline{1,\underbrace{0}_{2i+1\;\text{times}}}$ as a suffix},\\\ 2\;\text{if the Zeckendorf expansion of $n$ admits a string of type $\overline{1,\underbrace{0}_{2i+2\;\text{times}}}$ as a suffix},\end{cases}$
where $i$ is a non-negative integer. We then have $|\delta^{\prime}(n)|<\frac{1}{\beta\sqrt{5}}$ for all $n>0$. Moreover,
$\begin{cases}\delta^{\prime}(n)>0\;\text{if}\;d(n)=0\;\text{or}\;d(n)=2,\\\
\delta^{\prime}(n)<0\;\text{if}\;d(n)=1.\end{cases}$
* Proof
It is clear that $|-\beta^{-1}|<1$. For a given integer $n$, letting
$n=\sum_{j\geq 0}\epsilon_{j}F_{j+2}$ with
$\epsilon_{j}\in\left\\{0,1\right\\}$ be the Zeckendorf expansion of $n$, then
$\displaystyle|\delta^{\prime}(n)|$
$\displaystyle=|\frac{1}{\sqrt{5}}\sum_{i\geq 0}\epsilon_{i}(-\beta)^{-i-2}|$
(1) $\displaystyle\leq\frac{1}{\sqrt{5}}\sum_{i\geq
0}\epsilon_{i}(\beta^{-i-2})$ $\displaystyle\leq\frac{1}{\sqrt{5}}\sum_{i\geq
0}(\beta^{-2i-2})$
$\displaystyle\leq\frac{1}{\sqrt{5}}\beta^{-2}\frac{1}{1-\beta^{-2}}.$
Substituting the equation $\beta^{2}-\beta-1=0$ into Equation 1, we have
$|\delta^{\prime}(n)|\leq\frac{1}{\sqrt{5}}\beta^{-1}\frac{\beta^{-1}}{1-\beta^{-2}}=\frac{1}{\sqrt{5}}\beta^{-1}$.
For the second part of the proposition, let $n$ be an integer and let $i$ be
the smallest index such that $\epsilon_{i}=1$ in the Zeckendorf expansion of
$n$. It is easy to verify that $(-\beta)^{-i-2}>0$ if $d(n)=0$ or $2$, and
$(-\beta)^{-i-2}<0$ if $d(n)=1$. It follows from Equation 1 that
$|\frac{1}{\sqrt{5}}\sum_{k\geq
i+1}\epsilon_{k}(-\beta)^{-k-2}|\leq\beta^{-i-3}$, so the sign of
$\delta^{\prime}(n)$ is the same as that of $(-\beta)^{-i-2}$.
###### Proposition 3
The function $F(s)=\sum_{n\geq 1}\frac{1}{(\delta(n))^{s}}$ is a Dirichlet
series which converges absolutely on $\left\\{s|\Re(s)>1\right\\}$ and has a
meromorphic continuation on $\left\\{s|\Re(s)>0\right\\}$. Moreover, $F(s)$
has a single pole at $s=1$ with residue $1$.
* Proof
From Propositions 1 and 2, the sequence $(\delta(n))_{n\in\mathbb{N}}$ is
increasing and $n-1<\delta(n)<n+1$ for all positive integers $n$. Thus, $F(s)$
converges absolutely on $\left\\{s|\Re(s)>1\right\\}$. For the extension, let
$\zeta$ be the Riemann zeta function,
$\displaystyle\zeta(s)-F(s)$ $\displaystyle=\sum_{n\geq 1}\left(\frac{1}{n^{s}}-\frac{1}{\delta^{s}(n)}\right)$ (2) $\displaystyle=\sum_{n\geq 1}\left(\frac{1}{(\delta(n)-\delta^{\prime}(n))^{s}}-\frac{1}{\delta^{s}(n)}\right)$ $\displaystyle=\sum_{n\geq 1}\left(\frac{1}{\delta^{s}(n)}\frac{1}{(1-\frac{\delta^{\prime}(n)}{\delta(n)})^{s}}-\frac{1}{\delta^{s}(n)}\right)$ $\displaystyle=\sum_{n\geq 1}\frac{1}{\delta^{s}(n)}\left(\sum_{m\geq 0}\binom{-s}{m}\left(-\frac{\delta^{\prime}(n)}{\delta(n)}\right)^{m}-1\right)$ $\displaystyle=\sum_{m\geq 1}\binom{-s}{m}\sum_{n\geq 1}\frac{(-\delta^{\prime}(n))^{m}}{\delta^{s+m}(n)}.$
For any given positive integer $m$, $|\sum_{n\geq
1}\frac{(-\delta^{\prime}(n))^{m}}{\delta^{s+m}(n)}|\leq\sum_{n\geq
1}|\frac{(\delta^{\prime}(n))^{m}}{\delta^{s+m}(n)}|\leq\sum_{n\geq
1}\frac{\beta^{-m}}{\delta^{\Re(s)+m}(n)}$. Since the term $\sum_{n\geq
1}\frac{1}{\delta^{\Re(s)+m}(n)}$ is bounded for large $m$, the righthand side
of equation (2) converges for all $s$ such that $\Re(s)>0$. Consequently, the
function $F(s)$ has a meromorphic continuation on
$\left\\{s|\Re(s)>0\right\\}$ and has the same pole with the same residue as
the Riemann zeta function on $s=1$.
## 3 Fibonacci sequence and its first differences sequence
Let $(f(n))_{n\in\mathbb{N^{+}}}$ be the Fibonacci sequence defined as the
fixed point of the morphism $0\to 0,1$ and $1\to 0$, and let us recall the
sequence $(d(n))_{n\in\mathbb{N^{+}}}$ defined in Proposition 2:
$d(n)=\begin{cases}0\;\text{if the Zeckendorf expansion of $n$ admits the
string $\overline{1}$ as a suffix},\\\ 1\;\text{if the Zeckendorf expansion of
$n$ admits a string of type $\overline{1,\underbrace{0}_{2i+1\;\text{times}}}$
as a suffix},\\\ 2\;\text{if the Zeckendorf expansion of $n$ admits a string of
type $\overline{1,\underbrace{0}_{2i+2\;\text{times}}}$ as a
suffix},\end{cases}$
for some nonnegative integer $i$. In this section, we will show the relations
between $(d(n))_{n\in\mathbb{N^{+}}}$, $(f(n))_{n\in\mathbb{N^{+}}}$ and the
first differences sequence of $(f(n))_{n\in\mathbb{N^{+}}}$.
###### Proposition 4
The sequence $(d(n))_{n\in\mathbb{N^{+}}}$ satisfies the following properties:
1. if $d(n)=0$, then $d(n+1)=1$;
2. if $d(n)=1$, then $d(n+1)=2$ or $0$;
3. if $d(n)=2$, then $d(n+1)=0$.
* Proof
If $d(n)=0$, then the Zeckendorf expansion of $n$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,\underbrace{0,1}_{i\;\text{times}}}$
for some $i\geq 1$. From the recurrence relation between the Fibonacci numbers
and the uniqueness of the Zeckendorf expansion, the expansion of $n+1$ is in
the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1,\underbrace{0}_{2i-1\;\text{times}}}$.
Consequently, $d(n+1)=1$.
If $d(n)=2$, then the Zeckendorf expansion of $n$ ends with the suffix
$\overline{0,0}$, so it is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,0}$.
Thus, the expansion of $n+1$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1}$.
Therefore, $d(n+1)=0$.
If $d(n)=1$, then by definition the Zeckendorf expansion of $n$ ends with
a suffix $\overline{1,\underbrace{0}_{2i+1\;\text{times}}}$ for some $i\geq
0$. There are two cases. If $i=0$, then the Zeckendorf expansion of $n$ is in
the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,\underbrace{0,1}_{j\;\text{times}},0}$
for some $j\geq 1$. Thus, the expansion of $n+1$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1,\underbrace{0}_{2j\;\text{times}}}$,
and in this case $d(n+1)=2$. If $i\geq 1$, then the Zeckendorf expansion of $n$
ends with the suffix $\overline{0,0}$, so it is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,0}$.
Thus, the expansion of $n+1$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1}$.
Therefore, $d(n+1)=0$.
###### Proposition 5
The sequence $(f(n))_{n\in\mathbb{N^{+}}}$ is the image of
$(d(n))_{n\in\mathbb{N^{+}}}$ under the morphism $0\to 0,\;1\to 1$ and $2\to
0$. Moreover, if we define $(h(n))_{n\in\mathbb{N^{+}}}$ as the first
differences sequence of $(f(n))_{n\in\mathbb{N^{+}}}$, that is to say,
$h(n)=f(n)-f(n+1)$, then $(h(n))_{n\in\mathbb{N^{+}}}$ is the image of
$(d(n))_{n\in\mathbb{N^{+}}}$ under the morphism $0\to-1,\;1\to 1$ and $2\to
0$.
* Proof
It is mentioned in [MSS16] that $f(n)$ is the output of the following
automaton
[automaton figure: states $0$ (start) and $1$; reading a digit moves to the
state labelled by that digit, and the output is the label of the final state]
when the input is the Zeckendorf expansion of $n-1$. In other words, $f(n)=1$
if and only if the Zeckendorf expansion of $n-1$ ends with the digit $1$,
and $f(n)=0$ if and only if the Zeckendorf expansion of $n-1$ ends with the
digit $0$. On the other hand, from Proposition 4, if $d(n)=1$, then
$d(n-1)=0$, so that the Zeckendorf expansion of $n-1$ ends with the digit
$1$, and thus $f(n)=1$; similarly, if $d(n)=0$ or $2$, then $d(n-1)=1$ or
$2$, so that the Zeckendorf expansion of $n-1$ ends with the digit $0$,
and thus $f(n)=0$. The second part of this proposition is a direct
consequence of the first.
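Proposition 5 can be checked directly against an independently generated Fibonacci word. The sketch below is our own verification (it reuses d() from the sketch after Proposition 2); it builds the word from the morphism $0\to 0,1$; $1\to 0$ and compares both codings:

```python
def fib_word(length):
    """Fibonacci word as the fixed point of 0 -> 0,1 ; 1 -> 0."""
    w = [0]
    while len(w) < length:
        w = [b for a in w for b in ((0, 1) if a == 0 else (0,))]
    return w[:length]

N = 10_000
w = fib_word(N + 1)
code_f = {0: 0, 1: 1, 2: 0}    # Proposition 5: d(n) -> f(n)
code_h = {0: -1, 1: 1, 2: 0}   # Proposition 5: d(n) -> h(n)
for n in range(1, N):
    assert code_f[d(n)] == w[n - 1]                         # f(n) is the n-th letter
    assert code_h[d(n)] == code_f[d(n)] - code_f[d(n + 1)]  # h(n) = f(n) - f(n+1)
```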
###### Remark 1
From the previous proposition, the sequence $(f(n))_{n\in\mathbb{N^{+}}}$ is
the output of the automaton
[automaton figure: states $0$ (start) and $1$]
when the input is the Zeckendorf expansion of $n$. Moreover, from the
descriptions of the sequences A003849, A001468, A014677 and A270788 [SI20], the
sequence $(d(n))_{n\in\mathbb{N^{+}}}$ is the image of the morphic sequence
A270788 under the coding $1\to 0,\;2\to 1,\;3\to 2$.
###### Proposition 6
Let us define two functions $\tau_{0}$, $\tau_{1}:\mathbb{N}\to\mathbb{N}$ in
the following way: for every positive integer $n$, letting $n=\sum_{j\geq
0}\epsilon_{j}F_{j+2}$ be the Zeckendorf expansion, then
$\tau_{0}(n)=\sum_{j\geq 0}\epsilon_{j}F_{j+3};$ $\tau_{1}(n)=\sum_{j\geq
0}\epsilon_{j}F_{j+3}+F_{2}.$
The following relations hold between these sets:
$\left\\{\tau_{0}(n)|n\in\mathbb{N}^{+}\right\\}=\left\\{n|n\in\mathbb{N}^{+},d(n)=1\;\text{or}\;2\right\\},$
$\left\\{\tau_{1}(n)|n\in\mathbb{N}^{+},d(n)=1\;\text{or}\;2\right\\}=\left\\{n|n\in\mathbb{N}^{+},n\geq
2,d(n)=0\right\\},$
$\left\\{\tau_{1}(n)|n\in\mathbb{N}^{+},d(n)=0\right\\}=\left\\{n|n\in\mathbb{N}^{+},d(n)=2\right\\}.$
Consequently,
$\left\\{\tau_{0}(n)|n\in\mathbb{N}^{+}\right\\}\cup\left\\{\tau_{1}(n)|n\in\mathbb{N}^{+}\right\\}=\left\\{n|n\in\mathbb{N}^{+},n\geq
2\right\\},$
$\left\\{\tau_{0}(n)|n\in\mathbb{N}^{+}\right\\}\cap\left\\{\tau_{1}(n)|n\in\mathbb{N}^{+}\right\\}=\left\\{n|n\in\mathbb{N}^{+},d(n)=2\right\\}.$
* Proof
If $d(n)=1$ or $2$, then the Zeckendorf expansion of $n$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0}$.
Thus, the expansion of $\tau_{0}(n)$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,0}$
and the expansion of $\tau_{1}(n)$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,1}$.
Therefore, $d(\tau_{0}(n))=1$ or $2$ and $d(\tau_{1}(n))=0$.
If $d(n)=0$, then the Zeckendorf expansion of $n$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,\underbrace{1,0}_{i\;\text{times}},1}$
for some $i\geq 0$. Thus, the expansion of $\tau_{0}(n)$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},0,\underbrace{1,0}_{i+1\;\text{times}}}$
and the expansion of $\tau_{1}(n)$ is in the form
$\overline{\epsilon_{k},\epsilon_{k-1},\epsilon_{k-2},...,\epsilon_{s},1,\underbrace{0}_{2i+2\;\text{times}}}$.
Therefore, $d(\tau_{0}(n))=1$ and $d(\tau_{1}(n))=2$.
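The claims in this proof are easy to test numerically. The following sketch is ours (it reuses zeckendorf() and d() from the sketch after Proposition 2); it implements $\tau_{0}$, $\tau_{1}$ and checks the behaviour of $d$ on their images:

```python
def fib_list(k):
    F = [1, 2]   # F_2, F_3, ...
    while len(F) < k:
        F.append(F[-1] + F[-2])
    return F

def tau0(n):
    eps = zeckendorf(n)
    F = fib_list(len(eps) + 1)
    return sum(e * F[j + 1] for j, e in enumerate(eps))   # F_{j+2} -> F_{j+3}

def tau1(n):
    return tau0(n) + 1   # adding F_2 = 1

for n in range(1, 20_000):
    if d(n) in (1, 2):
        assert d(tau0(n)) in (1, 2) and d(tau1(n)) == 0
    else:   # d(n) == 0
        assert d(tau0(n)) == 1 and d(tau1(n)) == 2
```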
###### Corollary 1
$\left\\{\beta\delta(n)|n\in\mathbb{N}^{+}\right\\}\cup\left\\{\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2}|n\in\mathbb{N}^{+}\right\\}=\left\\{\delta(n)|n\in\mathbb{N}^{+},n\geq
2\right\\},$
$\left\\{\beta\delta(n)|n\in\mathbb{N}^{+}\right\\}\cap\left\\{\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2}|n\in\mathbb{N}^{+}\right\\}=\left\\{\delta(n)|n\in\mathbb{N}^{+},d(n)=2\right\\}.$
* Proof
This follows directly from Proposition 6 and the fact that if
$\delta(n)=\frac{1}{\sqrt{5}}\sum_{j\geq 0}\epsilon_{j}\beta^{j+2}$, then
$\beta\delta(n)=\frac{1}{\sqrt{5}}\sum_{j\geq 0}\epsilon_{j}\beta^{j+3}$ and
$\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2}=\frac{1}{\sqrt{5}}(\sum_{j\geq
0}\epsilon_{j}\beta^{j+3}+\beta^{2})$, that is,
$\delta(\tau_{0}(n))=\beta\delta(n)$ and
$\delta(\tau_{1}(n))=\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2}$.
## 4 Dirichlet series involving the sequence $(d(n))_{n\in\mathbb{N}^{+}}$
Let us recall the Dirichlet series $F(s)=\sum_{n\geq
1}\frac{1}{(\delta(n))^{s}}$, and define four other Dirichlet series:
$G(s)=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{(\delta(n))^{s}};$ $H(s)=\sum_{n\geq
1}\frac{1}{(\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2})^{s}};$
$I(s)=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}\frac{1}{(\delta(n))^{s}};$
$J(s)=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=2\end{subarray}}\frac{1}{(\delta(n))^{s}}.$
In this section, we prove the meromorphic continuation of all of these series
on the whole complex plane. First, let us prove Theorem 1.
* Proof of Theorem 1
From Corollary 1, if $\Re(s)>1$, then
$\displaystyle F(s)$ $\displaystyle=\sum_{n\geq
2}\frac{1}{\delta^{s}(n)}+\frac{1}{\delta^{s}(1)}$ (3)
$\displaystyle=\sum_{n\geq
1}\frac{1}{(\beta\delta(n))^{s}}+\sum_{\begin{subarray}{c}n\geq 2\\\
d(n)=0\end{subarray}}\frac{1}{(\delta(n))^{s}}+\frac{1}{\delta^{s}(1)}$
$\displaystyle=\beta^{-s}F(s)+G(s),$
$\displaystyle G(s)$ $\displaystyle=\sum_{n\geq
1}\frac{1}{(\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2})^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=2\end{subarray}}\frac{1}{(\delta(n))^{s}}+\frac{1}{\delta^{s}(1)}$ (4)
$\displaystyle=H(s)-\sum_{m\geq 1}\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{(\beta^{2m}\delta(n))^{s}}+\frac{1}{\delta^{s}(1)}$
$\displaystyle=H(s)-\frac{\beta^{-2s}}{1-\beta^{-2s}}G(s)+\frac{1}{\delta^{s}(1)}.$
Substituting (4) into (3), we have
$(1-\beta^{-s})F(s)=(1-\beta^{-2s})H(s)+(1-\beta^{-2s})\frac{1}{\delta^{s}(1)}.$ (5)
Moreover, with the fact that $0<\frac{1}{\sqrt{5}}\beta<1$, we have
$\displaystyle H(s)$ $\displaystyle=\sum_{n\geq
1}\frac{1}{(\beta\delta(n)+\frac{1}{\sqrt{5}}\beta^{2})^{s}}$ (6)
$\displaystyle=\sum_{n\geq
1}\frac{1}{(\beta\delta(n))^{s}}\frac{1}{(1+\frac{\frac{1}{\sqrt{5}}\beta}{\delta(n)})^{s}}$
$\displaystyle=\sum_{n\geq 1}\frac{1}{(\beta\delta(n))^{s}}\sum_{m\geq
0}\binom{-s}{m}(\frac{\frac{1}{\sqrt{5}}\beta}{\delta(n)})^{m}$
$\displaystyle=\beta^{-s}\sum_{m\geq
0}(\frac{1}{\sqrt{5}}\beta)^{m}\binom{-s}{m}F(s+m)$
From (5) and (6), we can deduce that
$(1-2\beta^{-s}+\beta^{-3s})F(s)=(\beta^{-s}-\beta^{-3s})\sum_{m\geq
1}(\frac{1}{\sqrt{5}}\beta)^{m}\binom{-s}{m}F(s+m)+(1-\beta^{-2s})\frac{1}{\delta^{s}(1)}.$
(7)
For any given complex number $s$ such that $\Re(s)>1$, the sequence
$(F(s+k))_{k\in\mathbb{N}}$ is bounded. Thus, the right-hand side of equation
(7) converges uniformly for $\Re(s)>0$. Hence, $F(s)$ has a meromorphic
extension for $0<\Re(s)\leq 1$. Now, if $0<\Re(s)\leq 1$, the right-hand side
converges, except at those $s$ which are zeros of
$1-2\beta^{-s}+\beta^{-3s}$. This yields a meromorphic extension of $F$ for
$\Re(s)>-1$. Iterating this process shows that $F$ has a meromorphic extension
to the whole complex plane. Moreover, the poles of $F$ are located on the set
of zeros of the function $s\mapsto 1-2\beta^{-s}+\beta^{-3s}$.
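For $\Re(s)>1$ every quantity in (7) can be approximated by truncated sums, which gives a concrete consistency check. The sketch below is our own (the truncation points are arbitrary, and delta() and BETA are reused from the sketch after Proposition 2); it verifies (7) at $s=2$:

```python
from math import sqrt

def binom_neg(s, m):
    """Generalized binomial coefficient C(-s, m) = prod_{k<m} (-s-k)/(k+1)."""
    out = 1.0
    for k in range(m):
        out *= (-s - k) / (k + 1)
    return out

deltas = [delta(n) for n in range(1, 100_000)]
def F(s):
    return sum(dv ** (-s) for dv in deltas)

s = 2.0
lhs = (1 - 2 * BETA ** (-s) + BETA ** (-3 * s)) * F(s)
rhs = (BETA ** (-s) - BETA ** (-3 * s)) * sum(
    (BETA / sqrt(5)) ** m * binom_neg(s, m) * F(s + m) for m in range(1, 40)
) + (1 - BETA ** (-2 * s)) * deltas[0] ** (-s)
assert abs(lhs - rhs) < 1e-3   # equality up to the truncation error
```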
###### Corollary 2
The Dirichlet series $G,H,I,J$ all admit meromorphic continuations on the
whole complex plane and have simple poles at $s=1$. Moreover, their residues
at $s=1$ are respectively $1-\beta^{-1}$, $\beta^{-1}$, $1-\beta^{-1}$ and
$\beta^{-1}-\beta^{-2}$.
* Proof
The meromorphic continuations of $G$ and $H$ are given respectively by (3) and
(5), along with their residues.
To see the meromorphic continuation of $I$ and $J$, on the set
$\left\\{s|\Re(s)>1\right\\}$, we have
$I(s)=\sum_{m\geq 0}\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{(\beta^{2m+1}\delta(n))^{s}}=\frac{\beta^{-s}}{1-\beta^{-2s}}G(s);$
(8)
and
$J(s)=\sum_{m\geq 1}\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{(\beta^{2m}\delta(n))^{s}}=\frac{\beta^{-2s}}{1-\beta^{-2s}}G(s).$
(9)
For the residues, we can use the fact that the residue of $F$ at $s=1$ is $1$.
###### Proposition 7
Let $a$, $b$ be two real numbers such that $|b|\leq|a|$ and let $i=0,1,2$ or
$3$. Letting $K_{a,b}^{(i)}(s)$ be the function
$K_{a,b}^{(i)}(s)=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=i\end{subarray}}\frac{1}{(a\delta(n))^{s}}-\frac{1}{(a\delta(n)+b)^{s}}$
for $i=0,1,2$, and letting $K_{a,b}^{(3)}(s)=\sum_{i=0}^{2}K_{a,b}^{(i)}(s)$,
then the function $K_{a,b}^{(i)}(s)$ has a meromorphic continuation on the
whole complex plane for any $i$. Moreover, it converges absolutely on
$\left\{s|\Re(s)>1\right\}$, converges pointwise on
$\left\{s|\Re(s)>0\right\}$, and $\lim_{s\to
0}K_{a,b}^{(0)}(s)=\frac{b}{a}(1-\beta^{-1})$, $\lim_{s\to
0}K_{a,b}^{(1)}(s)=\frac{b}{a}(1-\beta^{-1})$, $\lim_{s\to
0}K_{a,b}^{(2)}(s)=\frac{b}{a}(\beta^{-1}-\beta^{-2})$ and $\lim_{s\to
0}K_{a,b}^{(3)}(s)=\frac{b}{a}$.
* Proof
From the hypothesis, we have $|\frac{b}{a\delta(n)}|<1$ for all $n$. Thus, for
$i=0,1$ or $2$, on the set $\left\\{s|\Re(s)>1\right\\}$
$\displaystyle K_{a,b}^{(i)}(s)$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=i\end{subarray}}(\frac{1}{(a\delta(n))^{s}}-\frac{1}{(a\delta(n)+b)^{s}})$
(10) $\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=i\end{subarray}}\frac{1}{(a\delta(n))^{s}}(1-\frac{1}{(1+\frac{b}{a\delta(n)})^{s}})$
$\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=i\end{subarray}}\frac{1}{(a\delta(n))^{s}}(1-\sum_{m\geq
0}\binom{-s}{m}(\frac{b}{a\delta(n)})^{m})$
$\displaystyle=-\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=i\end{subarray}}\frac{1}{(a\delta(n))^{s}}\sum_{m\geq
1}\binom{-s}{m}(\frac{b}{a\delta(n)})^{m}$ $\displaystyle=-a^{-s}\sum_{m\geq
1}\binom{-s}{m}(\frac{b}{a})^{m}\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=i\end{subarray}}\frac{1}{(\delta(n))^{s+m}}$
$\displaystyle=-a^{-s}\sum_{m\geq
2}\binom{-s}{m}(\frac{b}{a})^{m}X^{(i)}(s+m)+sa^{-s}\frac{b}{a}X^{(i)}(s+1),$
where $X^{(i)}$ represents respectively the functions $G$, $I$ and $J$ for
$i=0,1$ and $2$.
On the other hand, from Proposition 1,
$\delta(n)>\delta(1)+(n-1)\frac{1}{\sqrt{5}}\beta=\frac{1}{\sqrt{5}}(1+n\beta)$.
Thus, for all $s$ such that $\Re(s)>1$,
$\displaystyle|X^{(i)}(s)|$ $\displaystyle\leq\sum_{n\geq
1}\frac{1}{|(\delta(n))^{s}|}$ (11) $\displaystyle\leq\sum_{n\geq
1}\frac{1}{(\frac{1}{\sqrt{5}}(1+n\beta))^{\Re(s)}}$
$\displaystyle\leq(\frac{\sqrt{5}}{\beta^{2}})^{\Re(s)}+\int_{1}^{\infty}\frac{dx}{(\frac{1}{\sqrt{5}}(1+x\beta))^{\Re(s)}}$
$\displaystyle\leq(\frac{\sqrt{5}}{\beta^{2}})^{\Re(s)}+\frac{\sqrt{5}}{\beta(\Re(s)-1)}(\frac{\sqrt{5}}{\beta^{2}})^{\Re(s)-1}.$
Consequently, $\sum_{m\geq k}\binom{-s}{m}(\frac{b}{a})^{m}X^{(i)}(s+m)$
converges for all $s$ such that $\Re(s)>-k$. Combining this with the fact that
$X^{(i)}$ has a meromorphic continuation on $\mathbb{C}$, we obtain the
meromorphic continuation of $K^{(i)}_{a,b}$ on $\left\{s|\Re(s)>-k\right\}$
for any non-negative integer $k$. In particular, for $k=0$, we have the
pointwise convergence of $K^{(i)}_{a,b}$ on $\left\{s|\Re(s)>0\right\}$. Now,
to see the limit at $0$, if $|s|\leq\frac{1}{2}$, then for any positive
integer $m$, we have
$\displaystyle|\binom{-s}{m}|$
$\displaystyle=\frac{|s|\times|s+1|\times|s+2|...\times|s+m-1|}{1\times
2\times 3...\times m}$ (12)
$\displaystyle\leq|s|\prod_{n=1}^{m-1}\frac{|s|+n}{n+1}$ $\displaystyle\leq|s|.$
So that $|-a^{-s}\sum_{m\geq
2}\binom{-s}{m}(\frac{b}{a})^{m}X^{(i)}(s+m)|\leq|s|a^{-\Re(s)}\sum_{m\geq
2}X^{(i)}(\Re(s)+m)$. Since $\sum_{m\geq 2}X^{(i)}(\Re(s)+m)$ is
bounded near $s=0$, we have
$\lim_{s\to 0}K^{(i)}_{a,b}(s)=\lim_{s\to 0}sa^{-s}\frac{b}{a}X^{(i)}(s+1),$
(13)
and the stated limits follow from the residues of $G$, $I$ and $J$ at $s=1$
computed in Corollary 2.
## 5 Dirichlet series and infinite series
### 5.1 Infinite series involving $(r(n))_{n\in\mathbb{N}^{+}}$
Let $(r(n))_{n\in\mathbb{N}^{+}}$ be the image of the sequence
$(d(n))_{n\in\mathbb{N}^{+}}$ under the map $0\to 0,\;1\to 1,\;2\to-1$. From
Remark 1 and the description of the sequence A270788, the sequence
$(r(n))_{n\in\mathbb{N}^{+}}$ is the fixed point of the morphism $0\to 0,1$;
$1\to-1$ and $-1\to 0,1$.
Now let us consider the following function:
$P(s)=\sum_{n\geq 2}r(n)(\frac{1}{\delta(n-1)^{s}}-\frac{1}{\delta(n)^{s}}).$
From Proposition 4 and Proposition 6, $r(n)=1$ if and only if $d(n-1)=0$, and
in this case $\delta(n)=\delta(n-1)+\frac{\beta}{\sqrt{5}}$; similarly,
$r(n)=-1$ if and only if there exists an $m$ such that $d(m)=0$ and
$\tau_{0}(m)=n-1$, and in this case
$\delta(n)=\delta(n-1)+\frac{\beta^{2}}{\sqrt{5}}$.
Thus, for all $s$ such that $\Re(s)>1$,
$\displaystyle P(s)$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}(\frac{1}{\delta(n-1)^{s}}-\frac{1}{\delta(n)^{s}})-\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=2\end{subarray}}(\frac{1}{\delta(n-1)^{s}}-\frac{1}{\delta(n)^{s}})$
(14) $\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}(\frac{1}{\delta(n)^{s}}-\frac{1}{\delta(n+1)^{s}})-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}(\frac{1}{\delta(\tau_{0}(n))^{s}}-\frac{1}{(\delta(\tau_{0}(n)+1))^{s}})$
$\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}(\frac{1}{\delta(n)^{s}}-\frac{1}{(\delta(n)+\frac{\beta}{\sqrt{5}})^{s}})-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}(\frac{1}{(\beta\delta(n))^{s}}-\frac{1}{(\beta\delta(n)+\frac{\beta^{2}}{\sqrt{5}})^{s}})$
$\displaystyle=(1-\beta^{-s})\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}(\frac{1}{\delta(n)^{s}}-\frac{1}{(\delta(n)+\frac{\beta}{\sqrt{5}})^{s}}).$
This equation yields two consequences. First, from Proposition 7, the infinite
sum on the right-hand side has a meromorphic continuation on the whole complex
plane and $P(0)=0$; secondly, since
$P(s)=(1-\beta^{-s})K^{(0)}_{1,\frac{\beta}{\sqrt{5}}}(s)$, the function $P$
is differentiable in a neighbourhood of $s=0$ with
$P^{\prime}(s)=\ln(\beta)\beta^{-s}K^{(0)}_{1,\frac{\beta}{\sqrt{5}}}(s)+(1-\beta^{-s})\frac{d}{ds}K^{(0)}_{1,\frac{\beta}{\sqrt{5}}}(s),$
and thus
$P^{\prime}(0)=\ln(\beta)\frac{\beta-1}{\sqrt{5}}.$ (15)
Now let us compute an alternative expression for $P^{\prime}(0)$. From (14),
we can compute further
$\displaystyle P(s)$
$\displaystyle=(1-\beta^{-s})\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}(\frac{1}{\delta(n)^{s}}-\frac{1}{(\delta(n)+\frac{\beta}{\sqrt{5}})^{s}})$
(16) $\displaystyle=(\beta^{-s}-1)\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}\delta(n)})^{m}$
$\displaystyle=(\beta^{-s}-1)\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}})^{m}\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{m+s}}$
$\displaystyle=(\beta^{-s}-1)\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}})^{m}G(m+s).$
On the other hand, for all $s$ such that $\Re(s)>1$,
$\displaystyle P(s)$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}(\frac{1}{\delta(n-1)^{s}}-\frac{1}{\delta(n)^{s}})-\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=2\end{subarray}}(\frac{1}{\delta(n-1)^{s}}-\frac{1}{\delta(n)^{s}})$
(17) $\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=1\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(\tau_{0}(n))^{s}}+\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=2\end{subarray}}\frac{1}{\delta(n)^{s}}$
$\displaystyle=(1-\beta^{-s}-\frac{\beta^{-s}}{1-\beta^{-2s}}+\frac{\beta^{-2s}}{1-\beta^{-2s}})\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}$
$\displaystyle=\frac{1-\beta^{-s}-\beta^{-2s}}{1+\beta^{-s}}G(s).$
Substituting (17) into (16), we have
$\displaystyle P(s)$ $\displaystyle=(\beta^{-s}-1)\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}})^{m}G(m+s)$ (18)
$\displaystyle=(\beta^{-s}-1)\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}})^{m}\frac{1+\beta^{-m-s}}{1-\beta^{-m-s}-\beta^{-2m-2s}}P(m+s).$
The infinite sum on the right-hand side converges uniformly on
$\left\{s|\Re(s)>0\right\}$; thus, $P(1)=\sum_{n\geq
2}r(n)(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)})$. Moreover, from the fact
that $P(0)=0$, to compute $P^{\prime}(0)$, it is sufficient to compute
$\lim_{s\to 0}\frac{P(s)}{s}$. Thus,
$\displaystyle\lim_{s\to 0}\frac{P(s)}{s}$ $\displaystyle=\lim_{s\to
0}(\beta^{-s}-1)\sum_{m\geq
1}\frac{\binom{-s}{m}}{s}(\frac{\beta}{\sqrt{5}})^{m}\frac{1+\beta^{-m-s}}{1-\beta^{-m-s}-\beta^{-2m-2s}}P(m+s)$
(19) $\displaystyle=\lim_{s\to
0}(1-\beta^{-s})(\frac{\beta}{\sqrt{5}})\frac{1+\beta^{-1-s}}{1-\beta^{-1-s}-\beta^{-2-2s}}P(1+s)$
$\displaystyle+\lim_{s\to 0}(\beta^{-s}-1)\sum_{m\geq
2}\frac{(-1)^{m}}{m}(\frac{\beta}{\sqrt{5}})^{m}\frac{1+\beta^{-m-s}}{1-\beta^{-m-s}-\beta^{-2m-2s}}P(m+s)$
$\displaystyle=(1+\beta^{-1})(\frac{\beta}{\sqrt{5}})P(1)\lim_{s\to
0}\frac{1-\beta^{-s}}{1-\beta^{-1-s}-\beta^{-2-2s}}$
$\displaystyle=(\frac{\beta+1}{\sqrt{5}})P(1)\lim_{s\to
0}\frac{\ln(\beta)\beta^{-s}}{\ln(\beta)\beta^{-1-s}+2\ln(\beta)\beta^{-2-2s}}$
$\displaystyle=(\frac{\beta+1}{\sqrt{5}})P(1)\frac{1}{\beta^{-1}+2\beta^{-2}}=(\frac{\beta+1}{\sqrt{5}})(\frac{\beta}{\sqrt{5}})P(1)=\frac{\beta^{3}}{5}P(1),$
where the second limit in the second line vanishes, and in the last step we
used $\beta^{-1}+2\beta^{-2}=\frac{\sqrt{5}}{\beta}$.
Combining Equation 15, Equation 19 and Remark 1, so that
$\ln(\beta)\frac{\beta-1}{\sqrt{5}}=\frac{\beta^{3}}{5}P(1)$, we have
###### Proposition 8
Letting $(r(n))_{n\in\mathbb{N}^{+}}$ be the image of the sequence A270788 in
OEIS under the map $1\to 0,2\to 1$ and $3\to-1$, then we have
$\sum_{n\geq
2}r(n)(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)})=\frac{\sqrt{5}(\beta-1)}{\beta^{3}}\ln(\beta).$
(20)
###### Corollary 3
Letting $(d(n))_{n\in\mathbb{N}^{+}}$ be the sequence defined as above, then
we have
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)=\frac{\sqrt{5}}{\beta^{2}}\ln(\beta),$
(21) $\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=2\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)=\frac{\sqrt{5}}{\beta^{3}}\ln(\beta).$
(22)
* Proof
First, it is easy to check that the infinite series
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)$ and
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=2\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)$ are both well
defined. Second, from (14),
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=2\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)=\beta^{-1}\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=1\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right).$
Third, from (20),
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=2\end{subarray}}\left(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)}\right)=\frac{\sqrt{5}(\beta-1)}{\beta^{3}}\ln(\beta).$
Combining the last two equations, we complete the proof.
Moreover, let us recall the telescoping identity
$\frac{1}{\delta(1)}=\sum_{n\geq 1}\left(\frac{1}{\delta(n)}-\frac{1}{\delta(n+1)}\right),$ (23)
where $\frac{1}{\delta(1)}=\frac{\sqrt{5}}{\beta^{2}}$.
By calculating (23)-(20), we have
###### Proposition 9
Letting $(t(n))_{n\in\mathbb{N}^{+}}$ be the image of the sequence A270788 in
OEIS under the map $1\to 1,2\to 0$ and $3\to 2$, then we have
$\sum_{n\geq
2}t(n)(\frac{1}{\delta(n-1)}-\frac{1}{\delta(n)})=\frac{\sqrt{5}}{\beta^{2}}-\frac{\sqrt{5}(\beta-1)}{\beta^{3}}\ln(\beta).$
(24)
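The constants in (20)-(24) can be confirmed numerically, since all four series converge absolutely. The sketch below is ours (it reuses delta() and d() from the earlier sketches) and sums the series directly:

```python
from math import sqrt, log

N = 200_000
S1 = S2 = P1 = T1 = 0.0
prev = delta(1)
for n in range(2, N):
    cur = delta(n)
    x = 1 / prev - 1 / cur              # 1/delta(n-1) - 1/delta(n) > 0
    dn = d(n)
    if dn == 1: S1 += x
    if dn == 2: S2 += x
    P1 += (1 if dn == 1 else -1 if dn == 2 else 0) * x   # r(n) weights
    T1 += (0 if dn == 1 else 2 if dn == 2 else 1) * x    # t(n) weights
    prev = cur

lb = log(BETA)
print(P1, sqrt(5) * (BETA - 1) / BETA ** 3 * lb)                        # (20): ~ 0.1570
print(S1, sqrt(5) / BETA ** 2 * lb)                                     # (21): ~ 0.4110
print(S2, sqrt(5) / BETA ** 3 * lb)                                     # (22): ~ 0.2540
print(T1, sqrt(5) / BETA ** 2 - sqrt(5) * (BETA - 1) / BETA ** 3 * lb)  # (24): ~ 0.6971
```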
### 5.2 Infinite series involving $(s(n))_{n\in\mathbb{N}^{+}}$
Here let us consider another example. Let $(s(n))_{n\in\mathbb{N}^{+}}$ be a
sequence defined in the following way:
$s(n)=\begin{cases}-1\;\text{if}\;d(n)=0\;\text{or}\;2,\\\ 2\;\text{if the
Zeckendorf expansion of $n$ admits the string $\overline{1,0}$ as a suffix},\\\
1\;\text{otherwise},\end{cases}$
and let us consider the following function:
$Q(s)=\sum_{n\geq 1}\frac{s(n)}{\delta(n)^{s}}.$
From Proposition 6 and Corollary 1, $s(n)=2$ if and only if there exists an
$m$ such that $d(m)=0$ and $\tau_{0}(m)=n$. Thus, for all $s$ such that
$\Re(s)>1$,
$\displaystyle Q(s)$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=2\end{subarray}}\frac{1}{\delta(n)^{s}}+\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=0\end{subarray}}\frac{1}{\delta(\tau_{0}(n))^{s}}$ (25)
$\displaystyle=\frac{\beta^{-s}}{1-\beta^{-2s}}\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}-\frac{\beta^{-2s}}{1-\beta^{-2s}}\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}+\beta^{-s}\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=0\end{subarray}}\frac{1}{\delta(n)^{s}}$
$\displaystyle=\frac{\beta^{-s}+\beta^{-2s}-1}{1+\beta^{-s}}G(s).$
Substituting (3) into (25), we have
$\displaystyle Q(s)$
$\displaystyle=\frac{\beta^{-s}+\beta^{-2s}-1}{1+\beta^{-s}}G(s)=\frac{(\beta^{-s}+\beta^{-2s}-1)(1-\beta^{-s})}{1+\beta^{-s}}F(s)$
(26)
$\displaystyle=\frac{-\beta^{-3s}+2\beta^{-s}-1}{1+\beta^{-s}}F(s).$
On the other hand,
$Q(s)=\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\;\text{or}\;2\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\;\text{or}\;2\end{subarray}}\frac{1}{\delta(n)^{s}}-(\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=1\;\text{or}\;2\end{subarray}}\frac{1}{\delta(n)^{s}}-\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\end{subarray}}\frac{1}{\delta(\tau_{0}(n))^{s}})+\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=1\end{subarray}}\frac{1}{\delta(n)^{s}}.$ (27)
From Proposition 4 and Proposition 6,
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\;\text{or}\;2\end{subarray}}\frac{1}{\delta(n)^{s}}=\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(n))^{s}}=\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(\tau_{0}(n)))^{s}}+\sum_{\begin{subarray}{c}n\geq
1\\\ d(n)=0\end{subarray}}\frac{1}{\delta(\tau_{0}(n))^{s}},$
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=0\;\text{or}\;2\end{subarray}}\frac{1}{\delta(n)^{s}}=\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(n)+1)^{s}}+\frac{1}{\delta(1)^{s}},$
$\sum_{\begin{subarray}{c}n\geq 1\\\
d(n)=1\end{subarray}}\frac{1}{\delta(n)^{s}}=\sum_{\begin{subarray}{c}n\geq
1\\\
d(n)=0\;\text{or}\;2\end{subarray}}\frac{1}{\delta(\tau_{0}(n))^{s}}=\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(\tau_{0}(n)+1))^{s}}+\frac{1}{\delta(2)^{s}}.$
Thus,
$\displaystyle Q(s)$ $\displaystyle=\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(n))^{s}}-(\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(n)+1)^{s}}+\frac{1}{\delta(1)^{s}})-\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(\tau_{0}(n)))^{s}}+\sum_{n\geq
1}\frac{1}{\delta(\tau_{0}(\tau_{0}(n)+1))^{s}}+\frac{1}{\delta(2)^{s}}$ (28)
$\displaystyle=\sum_{n\geq
1}(\frac{1}{(\beta\delta(n))^{s}}-\frac{1}{(\beta\delta(n)+\frac{\beta^{2}}{\sqrt{5}})^{s}})-\sum_{n\geq
1}(\frac{1}{(\beta^{2}\delta(n))^{s}}-\frac{1}{(\beta^{2}\delta(n)+\frac{\beta^{3}}{\sqrt{5}})^{s}})-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}}$
$\displaystyle=(1-\beta^{-s})\sum_{n\geq
1}(\frac{1}{(\beta\delta(n))^{s}}-\frac{1}{(\beta\delta(n)+\frac{\beta^{2}}{\sqrt{5}})^{s}})-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}}.$
From Proposition 7, for all $s$ such that $\Re(s)>1$,
$\displaystyle Q(s)$ $\displaystyle=(1-\beta^{-s})\sum_{n\geq
1}(\frac{1}{(\beta\delta(n))^{s}}-\frac{1}{(\beta\delta(n)+\frac{\beta^{2}}{\sqrt{5}})^{s}})-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}}$
(29) $\displaystyle=(\beta^{-2s}-\beta^{-s})\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}})^{m}F(s+m)-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}}.$
Substituting (26) into (29), we have
$\displaystyle Q(s)$ $\displaystyle=(\beta^{-2s}-\beta^{-s})\sum_{m\geq
1}\binom{-s}{m}(\frac{\beta}{\sqrt{5}})^{m}\frac{1+\beta^{-m-s}}{-\beta^{-3(m+s)}+2\beta^{-(m+s)}-1}Q(s+m)$
(30) $\displaystyle\quad-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}}.$
From Equation 28 and Proposition 7, we have
$Q(s)=(1-\beta^{-s})K^{(3)}_{\beta,\frac{\beta^{2}}{\sqrt{5}}}(s)-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}},$
so the function $Q$ has a meromorphic continuation on the whole complex
plane. Moreover, we have $Q(0)=0$ and
$Q^{\prime}(0)=\ln(\beta)K^{(3)}_{\beta,\frac{\beta^{2}}{\sqrt{5}}}(0)+\ln(\delta(1))-\ln(\delta(2)),$
thus, since $\lim_{s\to 0}K^{(3)}_{\beta,\frac{\beta^{2}}{\sqrt{5}}}(s)=\frac{\beta}{\sqrt{5}}$
and $\frac{\delta(1)}{\delta(2)}=\beta^{-1}$,
$Q^{\prime}(0)=\ln(\beta)\frac{\beta}{\sqrt{5}}-\ln(\beta).$ (31)
From (30), the infinite sum on the right-hand side converges uniformly on
$\left\{s|\Re(s)>0\right\}$; thus, $Q(1)=\sum_{n\geq
1}\frac{s(n)}{\delta(n)}$. To compute $Q^{\prime}(0)$, since $Q(0)=0$, it is
sufficient to compute $\lim_{s\to 0}\frac{Q(s)}{s}$. Thus,
$\displaystyle\lim_{s\to 0}\frac{Q(s)}{s}$ $\displaystyle=\lim_{s\to
0}(\beta^{-2s}-\beta^{-s})\sum_{m\geq
1}\frac{\binom{-s}{m}}{s}(\frac{\beta}{\sqrt{5}})^{m}\frac{1+\beta^{-m-s}}{-\beta^{-3(m+s)}+2\beta^{-(m+s)}-1}Q(s+m)+\lim_{s\to
0}\frac{-\frac{1}{\delta(1)^{s}}+\frac{1}{\delta(2)^{s}}}{s}$
(32) $\displaystyle=\lim_{s\to
0}(\beta^{-s}-\beta^{-2s})(\frac{\beta}{\sqrt{5}})\frac{1+\beta^{-1-s}}{-\beta^{-3(1+s)}+2\beta^{-(1+s)}-1}Q(1+s)+\ln(\delta(1))-\ln(\delta(2))$
$\displaystyle=(\frac{\beta}{\sqrt{5}})(1+\beta^{-1})Q(1)\lim_{s\to
0}\frac{-\ln(\beta)\beta^{-s}+2\ln(\beta)\beta^{-2s}}{3\ln(\beta)\beta^{-3-3s}-2\ln(\beta)\beta^{-1-s}}-\ln(\beta)$
$\displaystyle=(\frac{\beta}{\sqrt{5}})Q(1)\frac{1+\beta^{-1}}{3\beta^{-3}-2\beta^{-1}}-\ln(\beta)=-\frac{\beta^{5}}{5}Q(1)-\ln(\beta),$
where the terms with $m\geq 2$ vanish in the limit,
$\ln(\delta(1))-\ln(\delta(2))=-\ln(\beta)$, and in the last step we used
$1+\beta^{-1}=\beta$ and $3\beta^{-3}-2\beta^{-1}=-\sqrt{5}\beta^{-3}$.
Combining (31) and (32), we obtain
$\ln(\beta)\frac{\beta}{\sqrt{5}}=-\frac{\beta^{5}}{5}Q(1)$, and hence
###### Proposition 10
Letting $(s(n))_{n\in\mathbb{N}^{+}}$ be the sequence defined as above, then
$\sum_{n\geq
1}\frac{s(n)}{\delta(n)}=-\frac{\sqrt{5}(\beta-1)}{\beta^{3}}\ln(\beta).$ (33)
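This value can also be confirmed numerically; the series converges slowly, but the partial sums settle near $-\sqrt{5}(\beta-1)\beta^{-3}\ln(\beta)\approx-0.1570$. A sketch of ours, reusing zeckendorf(), delta() and BETA from the earlier sketches:

```python
from math import sqrt, log

def s_val(n):   # the sequence s(n) defined above
    i = zeckendorf(n).index(1)   # number of trailing zeros
    if i % 2 == 0:
        return -1                # d(n) = 0 or 2
    return 2 if i == 1 else 1    # suffix "1,0" gets weight 2

Q1 = sum(s_val(n) / delta(n) for n in range(1, 300_000))
print(Q1, -sqrt(5) * (BETA - 1) / BETA ** 3 * log(BETA))   # both ~ -0.1570
```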
To obtain Theorem 2, we only need to apply the following proposition:
###### Proposition 11
For any positive integer $n$, $\delta(n)=n-\frac{\\{\beta
n\\}-1+f(n)}{\sqrt{5}},$ where $\\{x\\}$ is the fractional part of $x$.
* Proof
Let $n$ be a positive integer; then $n=\delta(n)-\delta^{\prime}(n)$ and
$\tau_{0}(n)=\beta\delta(n)+\beta^{-1}\delta^{\prime}(n)$. Consequently,
$\beta
n-\tau_{0}(n)=-(\beta+\beta^{-1})\delta^{\prime}(n)=-\sqrt{5}\delta^{\prime}(n)$.
Moreover, from Proposition 2 we have $|\delta^{\prime}(n)|<\frac{1}{\sqrt{5}\beta}$,
hence $|\sqrt{5}\delta^{\prime}(n)|<1$. Thus, since $\tau_{0}(n)$ is an
integer, $\\{\beta n\\}=1-\sqrt{5}\delta^{\prime}(n)$ if $\delta^{\prime}(n)>0$
and $\\{\beta n\\}=-\sqrt{5}\delta^{\prime}(n)$ if $\delta^{\prime}(n)<0$. By
Propositions 2 and 5, $\delta^{\prime}(n)>0$ exactly when $f(n)=0$.
Consequently, $\delta(n)=n-\frac{\\{\beta n\\}-1}{\sqrt{5}}$ if $f(n)=0$ and
$\delta(n)=n-\frac{\\{\beta n\\}}{\sqrt{5}}$ if $f(n)=1$, which is the claimed
formula.
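Proposition 11 is straightforward to test in floating point. In the sketch below (ours), $f(n)$ is read off via Proposition 5 as $f(n)=1$ if and only if $d(n)=1$, and BETA, delta() and d() are reused from the earlier sketches:

```python
from math import sqrt, floor

for n in range(1, 10_000):
    f_n = 1 if d(n) == 1 else 0          # Proposition 5
    frac = BETA * n - floor(BETA * n)    # fractional part {beta n}
    assert abs(delta(n) - (n - (frac - 1 + f_n) / sqrt(5))) < 1e-6
```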
## References
* [AC85] J.-P. Allouche and H. Cohen. Dirichlet series and curious infinite products. Bulletin of the London Mathematical Society, 17(6):531–538, 1985.
* [ARS18] J.-P. Allouche, S. Riasat, and J. Shallit. More infinite products: Thue–Morse and the gamma function. The Ramanujan Journal, 49(1):115–128, 2018.
* [AS89] J.-P. Allouche and J. Shallit. Infinite products associated with counting blocks in binary strings. Journal of the London Mathematical Society, s2-39(2):193–204, 1989.
* [AS03] J.-P. Allouche and J. Shallit. Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press, 2003.
* [ASS07] J.-P. Allouche, J. Shallit, and J. Sondow. Summation of series defined by counting blocks of digits. Journal of Number Theory, 123(1):133–143, 2007.
* [DMR+17] C. F. Du, H. Mousavi, E. Rowland, L. Schaeffer, and J. Shallit. Decision algorithms for Fibonacci-automatic words, II: Related sequences and avoidability. Theoretical Computer Science, 657:146–162, 2017.
* [DMSS16] C. F. Du, H. Mousavi, L. Schaeffer, and J. Shallit. Decision algorithms for Fibonacci-automatic words, III: Enumeration and abelian properties. International Journal of Foundations of Computer Science, 27(08):943–963, 2016.
* [Hu16] Y. Hu. Patterns in numbers and infinite sums and products. Journal of Number Theory, 162:589–600, 2016.
* [MSS16] H. Mousavi, L. Schaeffer, and J. Shallit. Decision algorithms for Fibonacci-automatic words, I: Basic results. RAIRO - Theoretical Informatics and Applications, 50(1):39–66, 2016.
* [SI20] N. J. A. Sloane and The OEIS Foundation Inc. The On-Line Encyclopedia of Integer Sequences, 2020.
* [Sou19] A. Sourmelidis. On the meromorphic continuation of Beatty zeta-functions and Sturmian Dirichlet series. Journal of Number Theory, 194:303–318, 2019.
* [Zec72] E. Zeckendorf. Représentation des nombres naturels par une somme de nombres de Fibonacci ou de nombres de Lucas. Bulletin de la Société Royale des Sciences de Liège, pages 179–182, 1972.
###### Abstract
We revisit the gravity path integral formalism of JT gravity. We explain how
to gauge fix the path integral in the presence of asymptotic boundaries and
conical defects, and resolve an ambiguity regarding the dilaton gravity
operator that creates a conical defect. Along the way we study JT gravity
coupled to matter on surfaces with defects of special opening angles,
obtaining expressions for partition and two-point functions of matter fields.
The two point function involves a summation over all geodesics on the surface,
including self-intersecting geodesics, which we formally manage to include.
Revisiting the second order formalism of JT gravity
Guanda Lin$^{1}$, Mykhaylo Usatyuk$^{1,2}$
$^{1}$ Center for Theoretical Physics and Department of Physics, Berkeley, CA, 94720, USA
$^{2}$ Kavli Institute for Theoretical Physics, Santa Barbara, CA 93106, USA
<EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Contents
1 Introduction
2 Determinants and correlators on hyperbolic surfaces
  2.1 Building hyperbolic geometries
  2.2 Two-point functions and determinants on quotient surfaces
  2.3 JT examples
    2.3.1 Double trumpet
    2.3.2 Conical defect
    2.3.3 Two conical defects
    2.3.4 The handle disk
3 JT gravity path integral
  3.1 Gauge fixing the path integral
    3.1.1 Integrating over moduli space
  3.2 Examples
    3.2.1 Disk
    3.2.2 Conical defect
    3.2.3 Conical defect: $z$ coordinates
    3.2.4 General surfaces
4 Discussion
A Consistency of disk measure with general surface
B Double trumpet gluing measure
C The determinant calculation
## 1 Introduction
Jackiw-Teitelboim (JT) gravity is a simple two dimensional model that has been
central to many recent developments in quantum gravity[1, 2, 3, 4, 5, 6, 7, 8,
9, 10, 11]. In this paper we will be revisiting various unresolved aspects of
the second order formalism of the JT gravity path integral.
The most common approach to JT gravity is through the first order formalism
[7, 12], especially when dealing with the issue of gauge fixing the path
integral. There has been some progress on understanding JT gravity from the
second order formalism. For compact surfaces the path integral was properly
gauge fixed in [7], whereas for surfaces with asymptotically AdS boundaries
significant progress was made in [13, 14, 15, 16]. Furthermore, when
considering matter coupled to JT it is easiest to use gravitational variables.
In this paper we will consider JT gravity on surfaces with asymptotically AdS
boundaries with the addition of conical defects[17, 18, 19, 20]. We will
calculate determinants of Laplace operators, matter field correlators, and
gauge fix the gravity path integral on such surfaces from the perspective of
the second order formalism.
Determinants of Laplace operators have recently appeared in studies of JT
gravity coupled to matter where they correspond to partition functions of
matter fields on the geometry[13, 21, 22], and in the evaluation of Lorentzian
JT gravity amplitudes with topology changing wormholes [23, 24]. Similarly,
correlation functions of matter fields on wormhole geometries have recently
appeared in [25, 26, 27, 22, 21], where they probe the length of the wormhole.
Conical defect geometries have played an important role in the counting of
black hole microstates[10], and matter field correlators on defect geometries
were considered in [26, 28]. The procedure for gauge fixing the gravitational
path integral is well known from the string perturbation theory literature[29,
30, 31], and requires the evaluation of various determinants on hyperbolic
surfaces. This naturally connects the problem of gauge fixing the path
integral to understanding matter minimally coupled to JT gravity.
We will find that the second order formalism has certain advantages. A proper
gauge fixing will resolve an ambiguity regarding the correct definition of the
conical defect creation operator. Furthermore, it is straightforward to
incorporate matter, and we obtain closed-form expressions for determinants and
two-point functions of matter coupled to JT gravity. We now summarize our main
results.
### Summary of results
In section 2 we begin by explaining how to construct constant negative
curvature surfaces by taking a quotient of the upper half-plane by a group
$\Gamma\subset\text{PSL}(2,\mathbb{R})$, giving a surface
$\Sigma=\mathbb{H}/\Gamma$. Using the quotient construction a variety of
hyperbolic surfaces can be constructed including compact surfaces, surfaces
with asymptotic AdS boundaries, and surfaces that include conical defects with
special opening angles $\theta=\frac{2\pi}{n}$ with integer $n\geq 2$. We then
explain how to compute determinants of Laplace type operators on surfaces
obtained through the quotient method. The final result is the following.
Figure 1: The determinant and two point function are reproduced by a sum over
all geodesics on the surface $\Sigma$. On the left we have highlighted three
geodesics that contribute to the determinant on the surface with one handle
and one defect. On the right we have shown some geodesics that contribute to
the two-point function, with the geodesics ending on the operator insertions
at the boundary. There are infinitely many additional geodesics on these
surfaces that contribute to these quantities.
##### Determinants.
Let $\Sigma=\mathbb{H}/\Gamma$ be a hyperbolic surface, either compact or with
asymptotic AdS boundaries, with some number of conical defects. Define the
scalar Laplacian $\Delta=-g^{ab}\nabla_{a}\nabla_{b}$. The determinant of the
Laplace operator on the surface $\Sigma$ is given by
$\det\left({\Delta}+s(s-1)\right)_{\Sigma}=\underbrace{Z_{\text{hyp.}}(s)}_{\text{geodesics}}\underbrace{Z_{\text{ell.}}(s)}_{\text{defects}}\left(\text{const}\right),$
(1.1)
see section 2.2 and appendix C for additional technical details. The most
important contribution to the determinant is the Selberg zeta function. (For
compact surfaces this form of the determinant is well known [32, 33], and
comes from the Selberg trace formula. For surfaces with asymptotically AdS
boundaries the techniques for calculating determinants are relatively recent
[34, 35, 36], and one of the new results is to incorporate conical defects in
the calculation, alongside evaluating the determinant for the vector Laplacian
$\det\left(\Delta_{1}+s(s-1)\right)_{\Sigma}$ in appendix C.) It is given by
$Z_{\text{hyp.}}(s)=\prod_{\ell_{\gamma}\in\mathcal{L}_{\Sigma}}\prod_{m=0}^{\infty}\left(1-e^{-(s+m)\ell_{\gamma}}\right),$
(1.2)
which is defined as a product over all closed geodesics $\mathcal{L}_{\Sigma}$
on the surface $\Sigma$, with lengths given by $\ell_{\gamma}$. See figure 1.
The integer $m$ counts the winding of the geodesics. The additional term
$Z_{\text{ell.}}(s)$ comes from the presence of conical defects, and does not
have an obvious geometric interpretation. We also obtain a similar expression
for the determinant of the vector Laplacian
$\det\left(\Delta_{1}+s(s-1)\right)_{\Sigma}$, see appendix C.
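Given a list of geodesic lengths, a truncated version of (1.2) is immediate to evaluate. The sketch below is our own illustration; the lengths are hypothetical placeholders, not data from this paper:

```python
import math

def selberg_zeta_hyp(s, primitive_lengths, m_max=60):
    """Truncation of Z_hyp(s) = prod_gamma prod_{m>=0} (1 - exp(-(s+m) * l_gamma))."""
    z = 1.0
    for ell in primitive_lengths:
        for m in range(m_max):
            z *= 1.0 - math.exp(-(s + m) * ell)
    return z

# hypothetical spectrum of geodesic lengths on some surface Sigma
lengths = [0.9, 1.3, 2.1]
print(selberg_zeta_hyp(2.0, lengths))
```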
##### Correlators.
As an intermediate step in the determinant calculation we obtain a closed form
expression for the two-point function of a scalar field
$\langle\phi(x)\phi(y)\rangle$ on an arbitrary surface $\Sigma$. When the
points $x,y$ are sent to asymptotic AdS boundaries the geodesic approximation
becomes exact, and the correlator reduces to a sum over all geodesics
connecting $x$ and $y$. Dressing the operator insertions to the boundary
Schwarzian, we find that the correlator for a field of mass $m^{2}=s(s-1)$ on
a surface $\Sigma=\mathbb{H}/\Gamma$ is given by
$G_{\Sigma}(\tau_{1},\tau_{2})=\sum_{\text{geodesics
}\gamma}e^{-s\ell_{\gamma}}~{}=\sum_{T\in\Gamma}\left(\frac{F^{\prime}(\tau_{1})(T\cdot
F(\tau_{2}))^{\prime}}{\left(F(\tau_{1})-T\cdot
F(\tau_{2})\right)^{2}}\right)^{s},$ (1.3)
where the sum over all geodesics connecting the points is given by the sum
over the group $\Gamma$ and the Schwarzian fluctuations are given by
$F(\tau)$, see equation (2.27). (In this formula we have the action $T\cdot
F=\frac{aF+b}{cF+d}$. The formula also applies in the case that the operators
are inserted on different asymptotic boundaries with independent Schwarzian
fluctuations $F_{i}(\tau)$; see section 2.3 for an example.) This formula
includes a summation over all geodesics connecting the two operators,
including self-intersecting geodesics, see figure 1.
In section 2.3 we work out a number of examples for both determinants and
correlators. We reproduce previous results for the defect and double trumpet
geometries, and obtain new expressions for the two defect and handle disk
geometries.
##### Gravity path integral.
In section 3 we turn to the problem of gauge fixing the gravity path integral
for JT gravity. Consider performing the JT gravity path integral on surfaces
$\Sigma$ of genus $g$ with asymptotically AdS boundaries of regularized
lengths $\vec{\beta}=\left(\beta_{1},\ldots,\beta_{n}\right)$, along with the
insertion of $k$ conical defect operators $\mathcal{V}_{\alpha}$ giving rise
to conical defects with opening angles $2\pi\left(1-\alpha_{i}\right)$
specified by $\vec{\alpha}=\left(\alpha_{1},\ldots,\alpha_{k}\right)$. We show
that the path integral reduces to the Weil-Petersson measure on the moduli
space $\mathcal{M}_{g,\vec{\alpha},\vec{\beta}}$ of associated hyperbolic
surfaces. More compactly,
$Z_{g,{\vec{\alpha}}}(\beta_{1},\ldots,\beta_{n})=\int\frac{\mathcal{D}g\mathcal{D}\Phi}{\text{V}(\text{Diff})}e^{-I_{\text{J}T}[g,\Phi]}\mathcal{V}_{\alpha_{1}}\ldots\mathcal{V}_{\alpha_{k}}=\int_{\mathcal{M}_{g,{\vec{\alpha},\vec{\beta}}}}d\left(\text{Weil-
Pet.}\right)e^{-I_{\text{bdy}}},$ (1.4)
where on the right-hand side the boundary action is for the Schwarzian on the
asymptotic AdS boundaries. In the above we have used determinants computed in
section 2.333We are only able to compute the necessary determinants for
certain values of $\alpha$, but we argue that the reduction to the Weil-
Petersson measure should occur for all $\alpha$. We are now left with the
problem of evaluating the integral on the right-hand side. It’s well known
from the string perturbation theory literature how to perform the integral
over moduli space [29, 31], formally we have
$\int_{\mathcal{M}}d(\text{Weil-Pet.})\hskip
2.84544pte^{-I_{\text{bdy}}}=\underbrace{\prod_{n}\int
dm_{n}d\overline{m}_{n}}_{\text{coordinates on
$\mathcal{M}$}}\underbrace{\frac{\det\langle\mu,\phi\rangle\det\langle\overline{\mu},\overline{\phi}\rangle}{\sqrt{\det\langle\phi,\phi\rangle\det\langle\overline{\phi},\overline{\phi}\rangle}}}_{\text{WP
measure}}\hskip 2.84544pte^{-I_{\text{bdy}}[m_{n},\overline{m}_{n}]}.$ (1.5)
The above equation can be understood as making a choice of coordinates $m_{n}$
for the space we are integrating over, and computing the Weil-Petersson
measure in that set of coordinates. The quantities appearing in the measure
are known as quadratic differentials $\phi$ and Beltrami differentials $\mu$,
and we explain the technical details of the measure in section 3.1.1.
Computing the measure for a particular choice of coordinates is challenging
for general surfaces, but in simple examples it is possible. In the case of
the disk with and without a conical defect the measure can be evaluated, and
in section 3.2 we perform the full integral over moduli space finding
agreement with the standard Schwarzian calculation.
##### Conical defect operator.
With the above we are ready to determine the dilaton gravity operator
$\mathcal{V}_{\alpha}$ that creates a conical defect when inserted into the
path integral (1.4). When a defect is added to the surface the real dimension
of the moduli space increases by two, which corresponds to the two directions
the defect can be moved on the surface. We consider a surface $\Sigma$ with a
conical defect, and choose two of the coordinates $m_{n}$ for the moduli space
to be the position $x$ of the defect. Using (1.5) we can immediately write
down the correct operator to be
$\mathcal{V}_{\alpha}=\underbrace{\int_{\Sigma}d^{2}x}_{\text{defect
position}}\underbrace{\frac{\langle\mu,\phi_{1}\rangle\langle\overline{\mu},\overline{\phi}_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle\langle\overline{\phi}_{1},\overline{\phi}_{1}\rangle}}}_{\text{measure
for $x$ coords}}e^{-2\pi\left(1-\alpha_{i}\right)\Phi(x)},$ (1.6)
in the above we have reintroduced the exponential of the dilaton, since in the
measure (1.5) the dilaton has already been integrated out. (When combined with
the JT action, integrating out the dilaton creates a delta function source for
the Ricci scalar at position $x$.) For additional details on the definition of
this measure see section 3.1.1 around equation (3.21).
This measure cannot be explicitly evaluated for a general surface $\Sigma$.
However, in section 3.2.4 we explain that in the special limit that $\alpha\to
1$, with opening angle $\theta=2\pi\alpha$, we can evaluate the measure on an
arbitrary surface $\Sigma$. We find that the conical defect operator takes the
form
$\text{General surface:}\qquad\lim_{\alpha\to
1}\mathcal{V}_{\alpha}=2\pi(1-\alpha)\int_{\Sigma}d^{2}x\sqrt{g(x)}e^{-2\pi(1-\alpha)\Phi(x)}+\mathcal{O}\left((1-\alpha)^{2}\right),$
(1.7)
up to corrections in the $\left(1-\alpha\right)$ expansion. (One interesting
aspect is that the metric $g(x)$ in (1.7) is for the surface $\Sigma$ with no
defect, which is important for our argument for the recursion relation for
Weil-Petersson volumes found by [20].) This is consistent with previous
expectations for the form of the operator in the blunt defect limit $\alpha\to
1$ [18, 17, 19, 20], and constitutes a gravity path integral argument for the
form of the operator. In the case of the disk we can do better, and in section
3.2.3 we evaluate the full measure and find
$\text{Disk:}\qquad\mathcal{V}_{\alpha}=\pi(1-\alpha^{2})\int
d^{2}x\sqrt{g(x)}e^{-2\pi(1-\alpha)\Phi(x)}.$ (1.8)
In appendix A we show how the calculation on the disk can be thought of as
performing a resummation of the $\left(1-\alpha\right)$ expansion appearing
for general surfaces (1.7). We thus conjecture that the general form of the
conical defect operator on an arbitrary surface should take the form
$\textbf{Conjecture for general surfaces:
}\qquad\mathcal{V}_{\alpha}=\pi(1-\alpha^{2})\int_{\Sigma}d^{2}x\sqrt{g(x)}e^{-2\pi(1-\alpha)\Phi(x)},$
(1.9)
after the resummation in (1.7) is performed. Our argument for the above form
is that the operator should be independent of the surface it is inserted on,
as is the case when these operators are defined as limits of minimal string
operators [19, 20], and so the answer we find on the disk should carry over to
other surfaces.
##### Applications to dilaton gravity.
There are two immediate applications of the conical defect operator. The first
is it clarifies the correct dilaton potential that corresponds to JT gravity
coupled to a gas of conical defects. In [18, 17, 19, 20] JT gravity coupled to
a gas of conical defects was defined by a summation over conical Weil-
Petersson volumes with a coupling $\lambda$ weighing each defect. The bulk
dilaton gravity action that corresponds to this theory is thus given by
$I[g,\Phi]=-\frac{1}{2}\int d^{2}x\sqrt{g}\left(\Phi R+2U(\Phi)\right),$
(1.10)
with the dilaton potential following from equation (1.9)
$U(\Phi)=\Phi+\pi\lambda(1-\alpha^{2})e^{-2\pi\left(1-\alpha\right)\Phi}.$
(1.11)
At each order in the $\lambda$ expansion the path integral will localize onto
singular hyperbolic surfaces and reproduce the appropriate Weil-Petersson
volume with the required coupling $\lambda^{k}$ for surfaces with $k$ defects.
A secondary application is that the conical defect operator can be used to
give a gravity path integral argument for a recursion relation of Weil-
Petersson volumes found by [20]. Namely, in the limit that one defect on a
surface becomes blunt the volume becomes related to the volume without the
defect through
$\frac{dV_{g,m,n+1}\left(\vec{\alpha}_{n+1},\vec{b}_{m}\right)}{d\alpha_{n+1}}\bigg{\rvert}_{\alpha_{n+1}=1}=4\pi^{2}\chi\left(\Sigma\right)V_{g,m,n}\left(\vec{\alpha}_{n},\vec{b}_{m}\right).$
(1.12)
In section 3.2.4 we give a gravity path integral argument for this recursion
relation using the conical defect operator.
## 2 Determinants and correlators on hyperbolic surfaces
### 2.1 Building hyperbolic geometries
We now explain how to build constant negative curvature geometries with
conical defects and asymptotic boundaries by taking quotients of the upper
half-plane (UHP) by an appropriate group. For additional details see [36]. The
quotient construction is useful for calculating determinants and correlation
functions on such surfaces since the method of images can be used. Consider
the upper half-plane $\mathbb{H}$ with the standard AdS2 metric
$ds^{2}=\frac{dzd\overline{z}}{\left(\operatorname{Im}z\right)^{2}}\,.$ (2.1)
The group of isometries is given by PSL$(2,\mathbb{R})$, and elements of the
group $T\in\text{PSL}(2,\mathbb{R})$ are either hyperbolic, elliptic, or
parabolic. Hyperbolic elements satisfy $\Tr T>2$, elliptic satisfy $\Tr T<2$,
while parabolic satisfy $\Tr T=2$. We will primarily be interested in
hyperbolic and elliptic elements, and the most general form of these elements
is given by
$\text{Hyperbolic:
}T_{\ell}=\Lambda^{-1}\left(\begin{array}[]{ll}e^{\ell/2}&0\\\
0&e^{-\ell/2}\end{array}\right)\Lambda,\qquad\text{Elliptic:
}T_{\theta}=\Lambda^{-1}\left(\begin{array}[]{ll}\cos(\frac{\theta}{2})&-\sin(\frac{\theta}{2})\\\
\sin(\frac{\theta}{2})&~{}~{}\cos(\frac{\theta}{2})\end{array}\right)\Lambda\,,$
(2.2)
where $\Lambda\in\text{PSL}(2,\mathbb{R})$. The action of $T$ on the upper
half-plane is defined by
$T\cdot z=\frac{az+b}{cz+d}\,.$ (2.3)
As an example, setting $\Lambda=1$ the hyperbolic element acts as
$T_{\ell}\cdot z=e^{\ell}z$. Elliptic elements leave a fixed point in the
interior which will become a bulk conical defect when we consider the quotient
geometry. (Parabolic elements leave a fixed point at the asymptotic boundary.
The fixed point corresponds to a conical singularity with deficit angle
$2\pi$, and is known as a cusp. We will not consider cusps because the
quotient method becomes much more complicated since the cusp lives on the
asymptotic boundary. However, all of our claims should obviously generalize to
surfaces that include cusps.)
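The action (2.3) and the trace classification are simple to experiment with numerically. A minimal sketch (ours, not from the paper):

```python
import math

def mobius(T, z):
    """Action (2.3) of T = ((a, b), (c, d)) on a point z of the upper half-plane."""
    (a, b), (c, d) = T
    return (a * z + b) / (c * z + d)

def classify(T):
    (a, b), (c, d) = T
    tr = abs(a + d)   # trace up to sign in PSL(2, R)
    if tr > 2:  return "hyperbolic"
    if tr == 2: return "parabolic"
    return "elliptic"

ell = 1.5
T_ell = ((math.exp(ell / 2), 0.0), (0.0, math.exp(-ell / 2)))   # Lambda = identity
z = 0.3 + 1.2j
assert classify(T_ell) == "hyperbolic"
assert abs(mobius(T_ell, z) - math.exp(ell) * z) < 1e-12        # T_ell . z = e^ell z

theta = 2 * math.pi / 3   # elliptic element of order 3: opening angle 2*pi/3
T_th = ((math.cos(theta / 2), -math.sin(theta / 2)),
        (math.sin(theta / 2),  math.cos(theta / 2)))
assert classify(T_th) == "elliptic"
assert abs(mobius(T_th, mobius(T_th, mobius(T_th, z))) - z) < 1e-12   # order 3 in PSL
```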
One way to build a hyperbolic surface is to take a subgroup of the isometry
group $\Gamma\subset\text{PSL}(2,\mathbb{R})$ and take the quotient of the
upper half-plane by this subgroup $\Gamma\backslash\mathbb{H}$. That is, we
identify two points as equivalent if $z\cong T\cdot z$ for any $T\in\Gamma$.
To build a good hyperbolic surface we must restrict to Fuchsian groups
$\Gamma\subset\text{PSL}(2,\mathbb{R})$, which are discrete subgroups of
$\text{PSL}(2,\mathbb{R})$. This restricts the elliptic elements we are
allowed to consider to those satisfying $T_{\theta}^{n}=\pm\operatorname{I}$,
which only allows opening angles $\theta=2\pi/n$ with integer $n\geq 2$.
(Restricting to Fuchsian groups is equivalent to imposing the condition that a
sufficiently small ball around the identity element, in the
$SL(2,\mathbb{R})$ group manifold, contains no other elements. Intuitively,
when quotienting $z\cong T\cdot z$ we do not want there to exist elements
$T\in\Gamma$ arbitrarily close to the identity, since then we would be
identifying arbitrarily close points $z,T\cdot z$, and the quotient surface
would be degenerate.) We will take the Fuchsian group to have a finite number
of generators $T_{i}$ given by some set of hyperbolic and elliptic elements
(2.2).
We are interested in hyperbolic surfaces with both asymptotic boundaries and
conical defects. Locally, a conical defect at a point $x_{i}$ is characterized
by the fact that we can travel around the point by going through an angle
$\theta<2\pi$. For a conical defect of opening angle $\theta=2\pi\alpha$ this
translates to a condition on the scalar curvature given by
$\frac{1}{2}\sqrt{g}\left(R+2\right)=2\pi\left(1-\alpha\right)\delta^{2}(x-x_{i})\,,$
(2.4)
where we restrict to $\alpha\in(0,1)$. A natural question is whether there
exists a hyperbolic surface with $k$ defects with angles specified by
$\vec{\alpha}=\left(\alpha_{1},\ldots,\alpha_{k}\right)$. It turns out that
such a surface always exists provided that the specification of the defects
$\alpha_{i}$ does not violate the Gauss-Bonnet theorem [37, 38]
$2\pi\chi(\Sigma)=\frac{1}{2}\int_{\Sigma}\sqrt{g}R+\int_{\partial\Sigma}\sqrt{h}K\,,$
(2.5)
where $K$ is the extrinsic curvature on the boundary of the surface, $h$ is
the induced boundary metric, and $\chi(\Sigma)=2-2g-n$ where $n$ is the number
of boundaries and $g$ is the genus. Conical defects fall into two classes:
sharp defects with opening angle $\theta\leq\pi$, and blunt defects with
opening angle $\theta>\pi$ [19, 20]. This translates to the condition
$\alpha\leq\frac{1}{2}$ for sharp and $\alpha>\frac{1}{2}$ for blunt. (One
nice property of sharp defects is that for surfaces with an asymptotic
boundary and at least two sharp defects, the defects are separated from the
boundary by a closed geodesic homotopic to the boundary. This can be seen from
the Gauss-Bonnet theorem.) The quotient construction only allows us to build
surfaces with sharp defects $\alpha\leq\frac{1}{2}$.
We now summarize existence theorems on the types of surfaces that can be built
using the quotient method. First, consider the moduli space $\mathcal{M}_{g}$
of compact surfaces of genus $g\geq 2$. For each compact hyperbolic surface
$\Sigma$ there exists a Fuchsian group $\Gamma$ such that
$\Sigma=\Gamma\backslash\mathbb{H}$, and such a representation of the surface
is known as a Fuchsian model. As an example, for $g=2$ the Fuchsian group is
generated by $4$ hyperbolic elements $T_{i}$ with a non-trivial constraint on
the generators (the constraint enforces that the closed loop generated by the
action of the group element below is contractible on the surface):
$\Gamma=\langle
T_{1},T_{2},T_{3},T_{4}~{}|~{}T_{4}^{-1}T_{3}^{-1}T_{4}T_{3}T_{2}^{-1}T_{1}^{-1}T_{2}T_{1}=1\rangle.$
(2.6)
In general, the Fuchsian group of a compact genus $g$ surface is generated by
$2g$ distinct hyperbolic elements satisfying non-trivial constraints.
Now consider the moduli space $\mathcal{M}_{g,\vec{\alpha}}$ of compact
surfaces of genus $g$ with $k$ conical defects with deficit angles specified
by $\vec{\alpha}=\left(\alpha_{1},\ldots,\alpha_{k}\right)$. When the opening
angles take special values $\theta_{i}=2\pi\alpha_{i}$ with
$\alpha_{i}=\frac{1}{n_{i}}$ and integer $n_{i}\geq 2$ then every surface can
be obtained by a quotient with a suitable Fuchsian group [39, 40]. The group
will contain $k$ elliptic elements that leave a fixed point
$z_{i}\in\mathbb{H}$ which becomes the location of the conical defect (the
order $n$ of the elliptic element determines the opening angle to be
$2\pi/n$). Surfaces with other deficit angles cannot be constructed by the
quotient method.
In JT gravity we typically consider non-compact surfaces with $n$ asymptotic
boundaries with regularized lengths
$\vec{\beta}=\left(\beta_{1},\ldots,\beta_{n}\right)$. All hyperbolic surfaces
with asymptotic boundaries, but without conical defects,
$\mathcal{M}_{g,\vec{\beta}}$ can be obtained by a quotient with a Schottky
group (see theorem 15.3 in [36]). We are not aware of a similar theorem in the
case that we include conical defects
$\mathcal{M}_{g,\vec{\alpha},\vec{\beta}}$, but we will assume that if we
limit the opening angles $\vec{\alpha}$ to the previously mentioned special
values that these surfaces can also be obtained through a quotient.
### 2.2 Two-point functions and determinants on quotient surfaces
The quotient construction makes it possible to calculate the determinant and
two-point function on the associated surface. In this subsection we explain
how to evaluate these quantities.
Let $\Sigma$ be a hyperbolic surface with metric $g$ obtained by quotienting
$\mathbb{H}$ with a Fuchsian group $\Gamma$. The scalar Laplacian is defined
to be $\Delta=-g^{ab}\nabla_{a}\nabla_{b}$. The two-point function on the
surface $\Sigma$, also known as the resolvent, is defined through the equation
$\left(\Delta+s(s-1)\right)R_{\Sigma}(s;z,w)=\frac{\delta^{2}(z-w)}{\sqrt{g}}\,,$
(2.7)
where $z,w$ are two points on the surface $\Sigma$, and $s\geq 1$ is related
to the mass of a scalar field through $m^{2}=s(s-1)$. For surfaces obtained
through the quotient method we can obtain a formula for the resolvent by using
the method of images. The key idea is that the resolvent on the upper half
plane $R_{\mathbb{H}}$, which is explicitly known, locally satisfies the
desired equation (2.7), but globally we must have that
$R_{\Sigma}(s;z,w)=R_{\Sigma}(s;T\cdot z,w)$ since $z\cong T\cdot z$ for all
$T\in\Gamma$. Performing a sum over the quotient group gives a function that
precisely satisfies all the properties that define the resolvent on $\Sigma$
$R_{\Sigma}(s;z,w)=\sum_{T\in\Gamma}R_{\mathbb{H}}(s;T\cdot z,w).$ (2.8)
Once the resolvent is known it is straightforward to calculate the
determinant. For smooth compact surfaces it is well known that the determinant of the Laplacian reduces to a product over geodesics due to the Selberg trace formula [32, 33]. However, for non-compact surfaces this method is not applicable, and other techniques need to be used [34, 35, 36]. We will review the technique for surfaces with asymptotic boundaries [36], and extend it to
include conical defects. The basic procedure of the calculation and key
results will be presented here, leaving technical details to appendix C.
To compute the determinant of an operator we must solve the eigenvalue problem
$\left(\Delta+s(s-1)\right)\phi_{n}=\lambda_{n}\phi_{n}$ and take the product
of all the eigenvalues. However, doing this directly is quite challenging. A
simpler technique is to define the determinant through the trace of the
resolvent. As an example, consider the case that the eigenvalues are discrete
with eigenfunctions $\phi_{n}$. The resolvent is given by
$R_{\Sigma}(s;z,w)=\sum_{n}\frac{\phi_{n}(z)\phi_{n}(w)}{\lambda_{n}+s(s-1)}\,,$
(2.9)
where the eigenfunctions satisfy
$\int_{\Sigma}d^{2}z\sqrt{g}\phi_{n}(z)\phi_{m}(z)=\delta_{nm}$. One can check
that the determinant defined as a product of eigenvalues is related to the
resolvent trace through
$\frac{1}{2s-1}\frac{d}{ds}\log(\det(\Delta+s(s-1)))={\rm
tr}R_{\Sigma}(s)\equiv\int_{\Sigma}d^{2}z\sqrt{g}R_{\Sigma}(s;z,z)\,.$ (2.10)
Conversely, if a closed-form expression for the resolvent is available, then the determinant follows by inverting the above formula.
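As a quick sanity check of (2.10), one can verify the relation numerically for a toy discrete spectrum. The following minimal Python sketch (the eigenvalues are arbitrary placeholders, not data from any surface) compares a finite-difference derivative of $\log\det$ against the resolvent trace built from (2.9):

```python
import numpy as np

# Toy discrete spectrum: arbitrary placeholder eigenvalues.
lam = np.array([0.3, 1.7, 2.2, 5.0, 9.1])

def tr_resolvent(s):
    # tr R_Sigma(s) = sum_n 1/(lambda_n + s(s-1)), cf. (2.9).
    return np.sum(1.0 / (lam + s * (s - 1)))

def log_det(s):
    # log det(Delta + s(s-1)) = sum_n log(lambda_n + s(s-1)).
    return np.sum(np.log(lam + s * (s - 1)))

s, h = 1.4, 1e-6
lhs = (log_det(s + h) - log_det(s - h)) / (2 * h) / (2 * s - 1)
print(lhs, tr_resolvent(s))  # the two numbers agree, verifying (2.10)
```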
For simple surfaces the resolvent trace is easy to evaluate since the
resolvent is known on the UHP, and we can perform the sum over the Fuchsian
group defining the surface. Consider the double trumpet with geodesic throat
of size $\ell$. The Fuchsian group is generated by a single hyperbolic element
$\Gamma=\langle T_{\ell}\rangle$, producing the surface
$\Sigma=\mathbb{H}/\langle T_{\ell}\rangle$. The resolvent trace is obtained
after using equations (2.8) and (2.10)
$\displaystyle\sum_{\begin{subarray}{c}m\in\mathbb{Z}_{\neq
0}\end{subarray}}\int_{\mathbb{H}/\langle
T_{\ell}\rangle}d^{2}z\sqrt{g}R_{\mathbb{H}}\left(s;T_{\ell}^{m}\cdot
z,z\right)=\frac{2\ell}{2s-1}\sum_{m=1}^{\infty}\frac{e^{-sm\ell}}{1-e^{-m\ell}}\,.$
(2.11)
The integer $m$ counts the number of windings of the closed geodesic. The
integral can be carried out within a fundamental domain given by $1<|z|<e^{\ell}$; for details of the integration, see appendix C.
For general surfaces the summation over the group (2.8) will reduce the
resolvent trace to simpler building blocks, such as the above calculation on
the double trumpet. Let us explain how this works. The sum over the group can
be broken up into a sum over conjugacy classes $[\gamma]$ of primitive
elements as
$\sum_{\Gamma}=\sum_{[\gamma]}\sum_{g\in\Gamma/C_{\langle\gamma\rangle}}$ (a primitive element $\gamma$ is not a power of any other element in the group). The centralizer is defined as the set of elements that leave the group generated by $\gamma$ invariant, $C_{\langle\gamma\rangle}=\{g\in\Gamma~|~g\langle\gamma\rangle g^{-1}=\langle\gamma\rangle\}$. The set of left cosets $\Gamma/C_{\langle\gamma\rangle}$ provides a list of distinct elements by which we should conjugate $\gamma$ in order to sum over the full conjugacy class. We quotient by the centralizer of the group generated by $\gamma$ because we want to break up the sum into the conjugacy classes $[\gamma^{n}]$. It turns out that for primitive $\gamma$ the centralizer is simply $C_{\langle\gamma\rangle}=\langle\gamma\rangle$, which we use notationally throughout. To avoid over-counting, we sum over one representative $g$ from each coset in $\Gamma/\langle\gamma\rangle$ and conjugate, $g\gamma g^{-1}$, which sweeps out the entire conjugacy class. The primitive elements are either hyperbolic or elliptic, corresponding geometrically to closed geodesics or conical defects respectively, and we must also include a sum over conjugacy classes of powers $\gamma^{n}$ of the primitive elements. The upshot is that every group element can be reached as a conjugate $g^{-1}\gamma^{n}g$ of some power of a primitive element $\gamma$.
Suppose we have a list of primitive elements denoted by $\Pi$; the summation over the group is then given by (when $\gamma$ is elliptic, the summation over powers $\gamma^{m}$ is cut off once $\gamma^{m}=I$)
$R_{\Sigma}(s;z,w)=R_{\mathbb{H}}\left(s;z,w\right)+\sum_{\gamma\in\Pi}\sum_{g\in\Gamma/\langle\gamma\rangle}\sum_{m\neq 0}R_{\mathbb{H}}\left(s;g^{-1}\gamma^{m}g\cdot z,w\right)\,,$ (2.12)
where the first term is the identity contribution.
When calculating the trace of the resolvent (2.10) the sum over
$\Gamma/\langle T_{\ell}\rangle$ for hyperbolic elements $T_{\ell}$ will
transform the domain of integration from the surface $\Sigma$ to the
fundamental domain of a double trumpet with a closed geodesic $\ell$, of which
we already have the answer in (2.11)
$\displaystyle\sum_{\begin{subarray}{c}m\in\mathbb{Z}_{\neq 0}\end{subarray}}\sum_{g\in\Gamma/\langle T_{\ell}\rangle}\int_{\mathbb{H}/\Gamma}d^{2}z\sqrt{g}\,R_{\mathbb{H}}\left(s;T_{\ell}^{m}g\cdot z,g\cdot z\right)$
$\displaystyle=\sum_{\begin{subarray}{c}m\in\mathbb{Z}_{\neq
0}\end{subarray}}\int_{\mathbb{H}/\langle
T_{\ell}\rangle}d^{2}z\sqrt{g}R_{\mathbb{H}}\left(s;z,T_{\ell}^{m}\cdot
z\right)$
$\displaystyle=\frac{2\ell}{2s-1}\sum_{m=1}^{\infty}\frac{e^{-sm\ell}}{1-e^{-m\ell}}\,,$
(2.13)
where the RHS is already given in (2.11). This gives a contribution to the
determinant (2.10) for each closed geodesic $\ell$ on the surface $\Sigma$.
The same logic applies to elliptic elements $T_{\theta}$ in the sum (2.12)
except now the fundamental domain becomes a cone.
The full determinant calculation is quite involved and carried out in appendix
C. We quote the final result. Consider a hyperbolic surface $\Sigma$ with $k$
conical defects with opening angles $\theta_{i}=\frac{2\pi}{n_{i}}$ with
integer $n_{i}\geq 2$. The surface can be compact or with asymptotic AdS
boundaries. The determinant is given by
$\det\left({\Delta}+s(s-1)\right)=\underbrace{Z_{\text{hyp.}}(s)}_{\text{geodesics}}\,\underbrace{Z_{\text{ell.}}(s)}_{\text{defects}}\,(\text{const}).$ (2.14)
The most important contribution is given by $Z_{\text{hyp.}}$, which is the
Selberg zeta-function on $\Sigma$. It comes from the summation over hyperbolic
elements in (2.12). It is defined by a product over contributions from all
closed geodesics on the surface. Define the set of lengths $\ell_{\gamma}$ of primitive closed geodesics to be $\mathcal{L}_{\Sigma}$ (excluding exceptional cases, geodesics are endowed with an orientation; for each geodesic $\gamma$ there is a mirror geodesic with opposite orientation and the same length, and both a geodesic and its mirror need to be independently included in the Selberg zeta-function; an example of a non-orientable geodesic is given by the two-defect surface considered below). The Selberg zeta-function for the surface $\Sigma$ is defined to be
$Z_{\text{hyp.}}(s)=\prod_{\ell_{\gamma}\in\mathcal{L}_{\Sigma}}\prod_{m=0}^{\infty}\left(1-e^{-(s+m)\ell_{\gamma}}\right),$
(2.15)
which can be derived from (2.11). In the above equation, the first product is
over all primitive geodesics, while the second is over all integer $m$
windings of the geodesic. Primitive geodesics include self-intersecting ones
as well as those that touch conical defects. We also mention that geodesics
with opposite orientations are counted as distinct in (2.15).
The second term comes from the summation over elliptic elements in (2.12).
Since elliptic elements give conical defects we label this the “defect”
contribution, and it is given by
$Z_{\text{ell.}}(s)=\prod_{i=1}^{k}\prod_{r=0}^{n_{i}-1}\Gamma\left(\frac{s+r}{n_{i}}\right)^{\frac{2r+1-n_{i}}{n_{i}}}.$
(2.16)
For each conical defect we get a highly non-analytic contribution in the
opening angles $\theta_{i}=\frac{2\pi}{n_{i}}$. Unlike the Selberg zeta-
function, the contribution due to elliptic elements does not seem to have an
obvious geometric interpretation. We emphasize that every term in equation (2.14) depends on the choice of deficit angles in some way: either implicitly, through the types of geodesics that are included in the Selberg zeta-function, or explicitly, as some function of the angles as above. The constant term in (2.14) is due to the identity element contribution and is given in appendix C.
### 2.3 JT examples
Let us briefly explain how these determinants and resolvents arise for JT
gravity coupled to matter. Consider a massive scalar field minimally coupled
to JT gravity. The path integral is given, up to boundary terms, by
$Z_{\text{JT + matter}}=\int\mathcal{D}g\,\mathcal{D}\Phi\exp\left(\frac{1}{2}\int\sqrt{g}\Phi\left(R+2\right)\right)Z_{\text{matter}}[g],$ (2.17)
where the matter partition function is
$Z_{\text{matter}}[g]=\int\mathcal{D}\phi\, e^{-S[\phi,g]}=\frac{1}{\sqrt{\det\left(\Delta+m^{2}\right)}},\qquad S[\phi,g]=\frac{1}{2}\int_{\Sigma}\sqrt{g}\left(\phi\Delta\phi+m^{2}\phi^{2}\right),$ (2.18)
where we define the mass through $m^{2}=s(s-1)$, with $s\geq 1$ the scaling
dimension of the boundary operator dual to $\phi$. We see the matter partition
function reduces down to a determinant. After integrating out the dilaton we localize onto the moduli space of constant negative curvature geometries (by appropriately deforming the JT gravity action we can also localize onto the moduli space of constant negative curvature surfaces with conical points, which we describe in detail in section 3), and so the matter partition function is given by the determinant calculated in section 2.2. Another interesting observable when matter is present is the two-point function/resolvent $\langle\phi(z)\phi(w)\rangle$ given by $R_{\Sigma}(s;z,w)$. On the upper half plane the metric and scalar Laplacian are given by
$ds^{2}=\frac{dx^{2}+dy^{2}}{y^{2}},\qquad\Delta=-y^{2}\left(\partial_{x}^{2}+\partial_{y}^{2}\right).$
(2.19)
Defining the complex coordinates $z=x+iy$, the resolvent is given by
$R_{\mathbb{H}}(s;z,w)=\frac{\Gamma(s)^{2}}{4\pi\Gamma(2s)}\sech^{2s}\left(\frac{\ell(z,w)}{2}\right)\,{}_{2}F_{1}\left(s,s,2s;\sech^{2}\left(\frac{\ell(z,w)}{2}\right)\right),$ (2.20)
where $\ell(z,w)$ is the geodesic distance between $z$ and $w$. In JT gravity we want to dress the two-point function to the boundary Schwarzian. Since the geodesic distance goes to infinity near the boundary the two-point function simplifies to
$R_{\mathbb{H}}(z,w)\mathrel{\mathop{=}\limits_{\ell\to\infty}}\frac{4^{s}\Gamma(s)^{2}}{4\pi\Gamma(2s)}e^{-s\ell(z,w)}\left(1+\mathcal{O}\left(e^{-\ell}\right)\right)\,,$
(2.21)
and so the geodesic approximation becomes exact.
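The following minimal Python sketch (the values of $s$ and $\ell$ are arbitrary) evaluates the exact resolvent (2.20) using scipy's hypergeometric function and compares it to the leading geodesic approximation (2.21):

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def R_exact(s, ell):
    # Exact UHP resolvent at geodesic separation ell, eq. (2.20).
    x = 1.0 / np.cosh(ell / 2) ** 2                  # sech^2(ell/2)
    return gamma(s) ** 2 / (4 * np.pi * gamma(2 * s)) * x ** s * hyp2f1(s, s, 2 * s, x)

def R_geodesic(s, ell):
    # Leading geodesic approximation, eq. (2.21).
    return 4 ** s * gamma(s) ** 2 / (4 * np.pi * gamma(2 * s)) * np.exp(-s * ell)

s = 1.3
for ell in [2.0, 5.0, 10.0]:
    # The ratio approaches 1 with O(e^{-ell}) corrections.
    print(ell, R_exact(s, ell) / R_geodesic(s, ell))
```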
On an arbitrary surface $\Sigma$ the two-point function is obtained by summing over the group in equation (2.8), so that we obtain
$\displaystyle R_{\Sigma}(s;z,w)$
$\displaystyle=\frac{\Gamma(s)^{2}}{4\pi\Gamma(2s)}\sum_{T\in\Gamma}\sech^{2s}\left(\frac{\ell(z,T\cdot w)}{2}\right)\,{}_{2}F_{1}\left(s,s,2s;\sech^{2}\left(\frac{\ell(z,T\cdot w)}{2}\right)\right)$ (2.22)
$\displaystyle\hskip-6.0pt\mathrel{\mathop{=}\limits_{\ell\to\infty}}\frac{4^{s}\Gamma(s)^{2}}{4\pi\Gamma(2s)}\sum_{T\in\Gamma}e^{-s\ell(z,T\cdot
w)}\left(1+\mathcal{O}\left(e^{-\ell}\right)\right).$
This implements a summation over all possible geodesics connecting the points
$z,w$. To dress the operator insertions to the boundary fluctuations we
parameterize the boundary by the curve $(x,y)=(F(\tau),\epsilon
F^{\prime}(\tau))$ where $\tau$ is an affine time, $\epsilon$ is the boundary
cutoff, and we have enforced the condition that the induced metric on the
boundary is $\sqrt{h}=1/\epsilon$. We consider inserting operators at two
boundary points labelled by times $\tau_{1},\tau_{2}$. The complex coordinates
at these times are $z=F(\tau_{1})+i\epsilon F^{\prime}(\tau_{1})$ and
$w=F(\tau_{2})+i\epsilon F^{\prime}(\tau_{2})$. The geodesic distance between
these points is given by
$\ell(z,w)=\log\left(\frac{1}{\epsilon^{2}}\frac{\left(F(\tau_{1})-F(\tau_{2})\right)^{2}}{F^{\prime}(\tau_{1})F^{\prime}(\tau_{2})}+\mathcal{O}\left(\frac{1}{\epsilon}\right)\right)\,.$
(2.23)
We will define the two-point function dressed to the boundary fluctuations by
$G(\tau_{1},\tau_{2})$, and subtract off the usual divergences to obtain
$G(\tau_{1},\tau_{2})=\lim_{\epsilon\rightarrow
0}\frac{4\pi\Gamma(2s)}{4^{s}\Gamma(s)^{2}\epsilon^{2s}}R_{\mathbb{H}}(s;z,w)=\left(\frac{F^{\prime}(\tau_{1})F^{\prime}(\tau_{2})}{\left(F(\tau_{1})-F(\tau_{2})\right)^{2}}\right)^{s}\,.$
(2.24)
This is the standard result for the matter two-point function dressed to the
Schwarzian boundary at the level of the disk. In terms of the thermal circle
reparameterization $f(\tau)$ defined by
$F(\tau)\equiv\tan\frac{\pi}{\beta}f(\tau)$ the regularized resolvent is
$G(\tau_{1},\tau_{2})=\left(\frac{f^{\prime}(\tau_{1})f^{\prime}(\tau_{2})}{\frac{\beta^{2}}{\pi^{2}}\sin^{2}\frac{\pi}{\beta}\left(f(\tau_{1})-f(\tau_{2})\right)}\right)^{s}\,.$
(2.25)
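As a small illustration (not part of the derivation), evaluating (2.25) on the thermal saddle $f(\tau)=\tau$ gives the familiar conformal two-point function; the sample arguments below are arbitrary:

```python
import numpy as np

def G_disk(tau1, tau2, beta, s, f=lambda t: t, fp=lambda t: 1.0):
    # Schwarzian-dressed two-point function (2.25); the default f is the saddle.
    den = (beta / np.pi) ** 2 * np.sin(np.pi / beta * (f(tau1) - f(tau2))) ** 2
    return (fp(tau1) * fp(tau2) / den) ** s

print(G_disk(0.1, 0.7, beta=1.0, s=1.5))
```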
On a more complicated surface obtained from a quotient by $\Gamma$ there are
additional geodesics connecting the two boundary points. Since $w\cong T\cdot
w$ for $T\in\Gamma$ the geodesic connecting $z$ to $T\cdot w$ on the disk
becomes a distinct geodesic connecting $z$ to $w$ on the quotient geometry. We
can find the geodesic distance between the point $z$ and $T\cdot w$ to be
$\ell\left(z,T\cdot
w\right)=\log\left(\frac{1}{\epsilon^{2}}\frac{\left(F(\tau_{1})-T\cdot
F(\tau_{2})\right)^{2}}{F^{\prime}(\tau_{1})\left(T\cdot
F(\tau_{2})\right)^{\prime}}+\mathcal{O}\left(\frac{1}{\epsilon}\right)\right),\qquad
T\cdot F(\tau)=\frac{aF(\tau)+b}{cF(\tau)+d}.$ (2.26)
The boundary dressed two-point function on a quotient surface can then be
immediately obtained from equation (2.22)
$G_{\Sigma}(\tau_{1},\tau_{2})=\lim_{\epsilon\rightarrow
0}\frac{4\pi\Gamma(2s)}{4^{s}\Gamma(s)^{2}\epsilon^{2s}}\sum_{T\in\Gamma}e^{-s\ell(z,T\cdot
w)}=\sum_{T\in\Gamma}\left(\frac{F^{\prime}(\tau_{1})(T\cdot
F(\tau_{2}))^{\prime}}{\left(F(\tau_{1})-T\cdot
F(\tau_{2})\right)^{2}}\right)^{s}.$ (2.27)
This formula includes all possible geodesics on the surface $\Sigma$
connecting the specified boundary points, and different group elements $T$
lead to geodesics with different windings/self-intersections on $\Sigma$. This
includes geodesics that self-intersect any number of times. However, we are
still left with the challenge of integrating over the Schwarzian and
performing the sum over the group elements. We now explain some simple
examples.
#### 2.3.1 Double trumpet
The metric for the double trumpet can be written as
$ds^{2}=dr^{2}+\cosh^{2}\left(r\right)d\theta^{2},\qquad\theta\cong\theta+b.$
(2.28)
The two asymptotic boundaries are at $r\to\pm\infty$, and the only primitive
geodesic of length $b$ is at $r=0$. The double trumpet is obtained by taking
the quotient of the upper half-plane $\mathbb{H}/\Gamma$ by the group
$\Gamma=\langle T_{b}\rangle$ generated by the element
$T_{b}=\begin{pmatrix}\exp\left(\frac{b}{2}\right)&0\\ 0&\exp\left(-\frac{b}{2}\right)\end{pmatrix},$ (2.29)
which identifies points in the UHP by $z\cong e^{b}z$. In the fundamental domain the geodesic throat of length $b$ goes along the imaginary axis from $z=i$ to $z=ie^{b}$. The group elements are hyperbolic and take the form $T_{b}^{m}$ for $m\in\mathbb{Z}$, and correspond to the geodesic that winds $m$ times around the throat. The resolvent is given by
$R_{\mathbb{H}/\langle T_{b}\rangle}(s;z,w)=\sum_{m\in\mathbb{Z}}R_{\mathbb{H}}(s;T_{b}^{m}\cdot z,w)\,.$ (2.30)
The resolvent trace and determinant have been discussed in section 2.2 and we
simply repeat the result
${\rm tr}R_{\mathbb{H}/\langle T_{b}\rangle}=\frac{2b}{2s-1}\sum_{m=1}^{\infty}\frac{e^{-smb}}{1-e^{-mb}},\qquad\det\left({\Delta}+s(s-1)\right)_{\text{DT}}=\prod_{m=0}^{\infty}\left(1-e^{-\left(s+m\right)b}\right)^{2}\,,$ (2.31)
where the quantity in parentheses is squared since there are two orientations for the primitive geodesic, and the determinant should be understood up to multiplicative constants.
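A quick numerical consistency check (with arbitrary sample values of $s$ and $b$): truncating the sums in (2.31), the logarithmic derivative of the determinant reproduces the resolvent trace, as dictated by (2.10).

```python
import numpy as np

def tr_R(s, b, mmax=200):
    # Resolvent trace on the double trumpet, eq. (2.31).
    m = np.arange(1, mmax + 1)
    return 2 * b / (2 * s - 1) * np.sum(np.exp(-s * m * b) / (1 - np.exp(-m * b)))

def log_det(s, b, mmax=200):
    # Truncated product in (2.31); the factor 2 counts both orientations.
    m = np.arange(0, mmax + 1)
    return 2 * np.sum(np.log1p(-np.exp(-(s + m) * b)))

s, b, h = 1.2, 1.5, 1e-6
lhs = (log_det(s + h, b) - log_det(s - h, b)) / (2 * h) / (2 * s - 1)
print(lhs, tr_R(s, b))  # agreement confirms (2.10) on the double trumpet
```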
##### One-sided two-point function.
We first consider the correlator $\langle\phi_{R}(\tau_{1})\phi_{R}(\tau_{2})\rangle$, where two matter operators are inserted on the same asymptotic boundary and dressed to the boundary fluctuations. We take the operators to be located at $z=F_{R}(\tau_{1})+i\epsilon F_{R}^{\prime}(\tau_{1})$ and
$w=F_{R}(\tau_{2})+i\epsilon F_{R}^{\prime}(\tau_{2})$. The quotient $z\cong T\cdot z$ enforces a periodicity constraint on the boundary (this parameterization is different from the $\tanh$ one in the JT review, but equivalent: the Schwarzian derivative itself is $\operatorname{PSL}(2,\mathbb{R})$ invariant, since if $F=\frac{aG+b}{cG+d}$ then $\{F,t\}=\{G,t\}$):
$F_{R}(\tau+\beta)=T_{b}\cdot F_{R}(\tau),\qquad T_{b}^{m}\cdot
F(\tau)=e^{mb}F(\tau).$ (2.32)
We can introduce the parameterization $F_{R}(\tau)=\exp\left(\frac{b}{\beta_{R}}f_{R}(\tau)\right)$ with $f_{R}(\tau+\beta)=f_{R}(\tau)+\beta_{R}$, which respects this identification.
We can immediately apply (2.27) to find the two-point function
$G^{\text{RR}}_{\text{DT}}(\tau_{1},\tau_{2})=\sum_{m=-\infty}^{\infty}\left(\frac{F_{R}^{\prime}(\tau_{1})(T_{b}^{m}\cdot
F_{R}(\tau_{2}))^{\prime}}{(F_{R}(\tau_{1})-T_{b}^{m}\cdot
F_{R}(\tau_{2}))^{2}}\right)^{s}=\sum_{m=-\infty}^{\infty}\left(\frac{f_{R}^{\prime}(\tau_{1})f_{R}^{\prime}(\tau_{2})}{\frac{4\beta_{R}^{2}}{b^{2}}\sinh^{2}\left[\frac{b}{2\beta_{R}}\left(f_{R}(\tau_{1})-f_{R}(\tau_{2}+m\beta_{R})\right)\right]}\right)^{s}\,.$
(2.33)
We have used that the summation over the group $\Gamma$ is the same as a summation over the powers $T_{b}^{m}$. The sum over $m$ is over windings of the geodesic around the double trumpet as it connects the two points. Positive and negative integers indicate which direction the geodesic winds around the trumpet. The largest contribution is given by the shortest geodesic, which doesn't wind and corresponds to $m=0$. This reproduces the one-sided two-point function on the double trumpet computed in [41].
Figure 2: Two-point function on the double trumpet. We have geodesics
connecting the two operator insertions that wind an integer $m$ times around
the trumpet.
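To illustrate the structure of the winding sum, the following sketch evaluates (2.33) on the saddle $f_{R}(\tau)=\tau$ (with $\beta_{R}=\beta$); the truncation and the numerical values are arbitrary. The exponential suppression of the $|m|\geq 1$ windings relative to $m=0$ is manifest.

```python
import numpy as np

def G_RR(tau1, tau2, beta, b, s, mmax=30):
    # One-sided two-point function (2.33) on the saddle f_R(tau) = tau.
    total = 0.0
    for m in range(-mmax, mmax + 1):
        arg = b / (2 * beta) * (tau1 - tau2 - m * beta)
        total += (1.0 / ((2 * beta / b) ** 2 * np.sinh(arg) ** 2)) ** s
    return total

print(G_RR(0.2, 0.9, beta=1.0, b=2.0, s=1.0))  # dominated by the m = 0 geodesic
```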
##### Two-sided two-point function.
The correlator with matter insertions on opposite boundaries
$\langle\phi_{L}(\tau_{1})\phi_{R}(\tau_{2})\rangle$ also immediately follows
from the previous result. The operators are now dressed to independent
boundary wiggles $z=F_{L}(\tau_{1})+i\epsilon F_{L}^{\prime}(\tau_{1})$ and
$w=F_{R}(\tau_{2})+i\epsilon F_{R}^{\prime}(\tau_{2})$. The quotient enforces
the periodicity constraint on both boundaries
$F_{L,R}(\tau+\beta_{L,R})=T_{b}\cdot F_{L,R}(\tau)$, and we can define
$F_{L,R}(\tau)=\mp\exp\left(\frac{b}{\beta_{L,R}}f_{L,R}(\tau)\right)$ which
satisfies the constraint as long as
$f_{L,R}(\tau+\beta_{L,R})=f_{L,R}(\tau)+\beta_{L,R}$ where the two boundaries
can have independent temperatures $\beta_{L,R}$. The relative minus sign is because the left boundary is at $\operatorname{Re}(z)<0$, so we must have that $F_{L}<0$ whereas $F_{R}>0$. Performing the sum over the group (2.27) as in the one-
sided case we obtain
$G^{\text{LR}}_{\text{DT}}(\tau_{1},\tau_{2})=\sum_{m=-\infty}^{\infty}\left(\frac{F_{L}^{\prime}(\tau_{1})(T_{b}^{m}\cdot
F_{R}(\tau_{2}))^{\prime}}{(F_{L}(\tau_{1})-T_{b}^{m}\cdot
F_{R}(\tau_{2}))^{2}}\right)^{s}=\sum_{m=-\infty}^{\infty}\left(\frac{f_{L}^{\prime}(\tau_{1})f_{R}^{\prime}(\tau_{2})}{\frac{4\beta_{L}\beta_{R}}{b^{2}}\cosh^{2}\left[\frac{b}{2}\left(\frac{f_{L}(\tau_{1})}{\beta_{L}}-\frac{f_{R}(\tau_{2}+m\beta_{R})}{\beta_{R}}\right)\right]}\right)^{s}\,.$
(2.34)
For the two-sided correlator the geodesic can also wind $m$ times around the trumpet. The dominant contribution is given by the geodesic with the smallest length, which is the $m=0$ winding.
#### 2.3.2 Conical defect
Figure 3: Two-point function on the conical defect geometry. The geodesics
connecting the two operators can wind up to $n-1$ times around the defect.
We now consider the geometry with a single conical defect of opening angle
$\theta=\frac{2\pi}{n}$ by taking the quotient by the group $\Gamma=\langle
T_{\theta}\rangle$. The generator is given by
$T_{\theta}=\begin{pmatrix}\cos\frac{\pi}{n}&-\sin\frac{\pi}{n}\\ \sin\frac{\pi}{n}&\cos\frac{\pi}{n}\end{pmatrix}\,,$ (2.35)
and the conical defect is located at the fixed point $z=i$. This group has $n$ elements, the powers $T_{\theta}^{m}$ for $m=0,\ldots,n-1$. The determinant on this geometry is not very interesting since the cone has no closed geodesics. Instead we will consider the two-point function dressed to the boundary Schwarzian. As in the case of the disk, we insert the two
operators at $z=F(\tau_{1})+i\epsilon F^{\prime}(\tau_{1})$ and
$w=F(\tau_{2})+i\epsilon F^{\prime}(\tau_{2})$. The identification $z\cong
T_{\theta}\cdot z$ enforces the condition
$F(\tau+\beta)=T_{\theta}\cdot F(\tau).$ (2.36)
We can also use the parameterization
$F(\tau)=\tan\frac{\theta}{2\beta}f(\tau)$, with
$f(\tau+\beta)=f(\tau)+\beta$. Applying the two-point function formula (2.27)
we have
$G_{\text{Defect}}(\tau_{1},\tau_{2})=\sum_{m=0}^{n-1}\left(\frac{F^{\prime}(\tau_{1})(T_{\theta}^{m}\cdot
F(\tau_{2}))^{\prime}}{\left(F(\tau_{1})-T_{\theta}^{m}\cdot
F(\tau_{2})\right)^{2}}\right)^{s}=\sum_{m=0}^{n-1}\left(\frac{f^{\prime}(\tau_{1})f^{\prime}(\tau_{2})}{\frac{4\beta^{2}}{\theta^{2}}\sin^{2}\frac{\theta}{2\beta}\left[f(\tau_{1}+m\beta)-f(\tau_{2})\right]}\right)^{s}\,.$ (2.37)
There are $n$ geodesics connecting the two operator insertions on the boundary. These geodesics wind around the defect up to $n-1$ times, which is captured by the summation over $m$. Note that for $n=1$ the opening angle becomes
$\theta=2\pi$ and we get the answer for the disk. The above is not the
complete answer, since we must still integrate over the boundary fluctuations.
If we consider the case of a massless field $s=1$ we can perform the summation
over winding geodesics to find
$G^{s=1}_{\text{Defect}}(\tau_{1},\tau_{2})=\frac{f^{\prime}(\tau_{1})f^{\prime}(\tau_{2})}{\frac{\beta^{2}}{\pi^{2}}\sin^{2}\frac{\pi}{\beta}\left[f(\tau_{1})-f(\tau_{2})\right]},$
(2.38)
which is precisely the two-point function for a massless field on the disk
(2.25). This is the expected answer for a conformal scalar since the
correlator on the defect geometry is related to the correlator on the disk
through a conformal rescaling. In this case the rescaling is trivial at the
boundary so the answers agree. We see that including all of the winding geodesics is crucial to recover the correct properties of the matter fields. (We still have to integrate over the Schwarzian mode: on the defect geometry the $n=2$ boundary modes are no longer zero modes and must be included in the integral. Even though we have reduced the two-point function to the standard disk answer, we must still include these boundary modes when computing the two-point function.)
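The collapse of (2.37) onto the disk answer (2.38) at $s=1$ rests on the trigonometric identity $\sum_{m=0}^{n-1}\sin^{-2}\left(x+\frac{m\pi}{n}\right)=\frac{n^{2}}{\sin^{2}(nx)}$, which is easily checked numerically (the values of $n$ and $x$ below are arbitrary):

```python
import numpy as np

n, x = 5, 0.37
lhs = sum(1 / np.sin(x + m * np.pi / n) ** 2 for m in range(n))
rhs = n ** 2 / np.sin(n * x) ** 2
print(lhs, rhs)  # equal to machine precision
```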
#### 2.3.3 Two conical defects
To get a disk with two conical defects we need a Fuchsian group with two elliptic generators, $\Gamma=\langle T_{\theta_{1}},T_{\theta_{2}}\rangle$. We choose the generators to be
Figure 4: The disk with two defects can be represented in the UHP by the pictured fundamental domain, with the two defects represented by “x”. The single primitive closed geodesic is in blue, and travels between the defects. We also have any number of windings of this geodesic.
$T_{\theta_{1}}=\begin{pmatrix}\cos\frac{\theta_{1}}{2}&-e^{-\ell/2}\sin\frac{\theta_{1}}{2}\\ e^{\ell/2}\sin\frac{\theta_{1}}{2}&\cos\frac{\theta_{1}}{2}\end{pmatrix}\,,\quad T_{\theta_{2}}=\begin{pmatrix}\cos\frac{\theta_{2}}{2}&-e^{\ell/2}\sin\frac{\theta_{2}}{2}\\ e^{-\ell/2}\sin\frac{\theta_{2}}{2}&\cos\frac{\theta_{2}}{2}\end{pmatrix}\,,$ (2.39)
where the minus signs ensure unit determinant, in line with the elliptic generator (2.35).
In the UHP the fixed point of $T_{\theta_{1}}$ is at $z=ie^{-\ell/2}$ while
the fixed point of $T_{\theta_{2}}$ is at $z=ie^{\ell/2}$, which are the
locations of the conical defects with opening angles $\theta_{1},\theta_{2}$
respectively. The simplest example, which captures many features of multi
defect surfaces, is to consider the case where both of the angles take the
value $\theta_{1,2}=\pi$. In this case the simplest group products take the
form
$T_{\theta_{1}}^{2}=T_{\theta_{2}}^{2}=\mathbb{I},\qquad T_{2\ell}\equiv T_{\theta_{2}}T_{\theta_{1}}=\begin{pmatrix}e^{\ell}&0\\ 0&e^{-\ell}\end{pmatrix},$ (2.40)
where we see that a product of two elliptic elements becomes a hyperbolic element $T_{2\ell}$ (these equalities hold up to overall signs, i.e., in $PSL(2,\mathbb{R})$). There are thus three primitive elements in the group: two elliptic $T_{\theta_{1}},T_{\theta_{2}}$ and one hyperbolic $T_{2\ell}$.
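These matrix statements are straightforward to verify numerically; the minimal sketch below (the value of $\ell$ is arbitrary) checks them for the generators (2.39), keeping in mind that overall signs of matrices are immaterial in $PSL(2,\mathbb{R})$.

```python
import numpy as np

ell = 0.8  # arbitrary sample value

def T(theta, sign):
    # Elliptic generators (2.39); sign = -1 for T_{theta_1}, +1 for T_{theta_2}.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(sign * ell / 2) * s],
                     [np.exp(-sign * ell / 2) * s, c]])

T1, T2 = T(np.pi, -1), T(np.pi, +1)
# -I and I, and -diag and diag, are the same elements of PSL(2, R).
print(np.allclose(T1 @ T1, -np.eye(2)))                              # T1^2 = I
print(np.allclose(T2 @ T1, -np.diag([np.exp(ell), np.exp(-ell)])))   # eq. (2.40)
```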
From the general formula for the determinant (2.14) we immediately get (the careful reader may wonder why the Selberg zeta-function does not get a square as in the double trumpet case; the reason is that the closed geodesic here is not orientable: the closed geodesic with total length $2\ell$ is actually obtained by gluing a geodesic segment of length $\ell$ back to itself in the opposite direction)
$\det(\Delta+s(s-1))=\underbrace{\prod_{m=0}^{\infty}\left(1-e^{-2\ell(s+m)}\right)}_{\text{closed geodesic}}\underbrace{\frac{\Gamma\left(s+\frac{1}{2}\right)}{\Gamma\left(s\right)}}_{\text{defect contribution}}.$ (2.41)
The Selberg zeta function contribution is given by the single geodesic
traversing between the two defects, which has length $2\ell$. This originates
from the single primitive hyperbolic element $T_{2\ell}$. We have also
included the contribution coming from the elliptic elements in the resolvent,
which correspond to the two defects. One interesting feature of this surface
is that the hyperbolic element is generated by products of elliptic elements,
and that the resulting closed geodesic touches the conical defects.
To calculate the two-point function (2.27) we must sum over the group. By
inspection we can find that all elements of the group take the form
$T_{2\ell}^{m}=\begin{pmatrix}e^{m\ell}&0\\ 0&e^{-m\ell}\end{pmatrix}\,,\quad T_{\theta_{1}}T_{2\ell}^{m}=\begin{pmatrix}0&-e^{-\left(m+\frac{1}{2}\right)\ell}\\ e^{\left(m+\frac{1}{2}\right)\ell}&0\end{pmatrix}\,,$ (2.42)
with $m$ an integer. From the fundamental domain we see we must identify $F(\tau+\beta)=e^{2\ell}F(\tau)$, which is solved by the parameterization $F(\tau)=\exp\left(\frac{2\ell}{\beta}f(\tau)\right)$ with $f(\tau+\beta)=f(\tau)+\beta$. We can immediately write down the boundary two-point function
$\displaystyle G_{\text{Defects}}(\tau_{1},\tau_{2})=\sum_{m=-\infty}^{\infty}\left[\left(\frac{F^{\prime}(\tau_{1})(T_{2\ell}^{m}\cdot F(\tau_{2}))^{\prime}}{\left(F(\tau_{1})-T_{2\ell}^{m}\cdot F(\tau_{2})\right)^{2}}\right)^{s}+\left(\frac{F^{\prime}(\tau_{1})(T_{\theta_{1}}T_{2\ell}^{m}\cdot F(\tau_{2}))^{\prime}}{\left(F(\tau_{1})-T_{\theta_{1}}T_{2\ell}^{m}\cdot F(\tau_{2})\right)^{2}}\right)^{s}\right]$
$\displaystyle=\sum_{m=-\infty}^{\infty}\left[\left(\frac{f^{\prime}(\tau_{1})f^{\prime}(\tau_{2})}{\frac{\beta^{2}}{\ell^{2}}\sinh^{2}\left[\frac{\ell}{\beta}\left(f(\tau_{1})-f(\tau_{2}+m\beta)\right)\right]}\right)^{s}+\left(\frac{f^{\prime}(\tau_{1})f^{\prime}(\tau_{2})}{\frac{\beta^{2}}{\ell^{2}}\cosh^{2}\left[\frac{\ell}{\beta}\left(f(\tau_{1})+f(\tau_{2}+m\beta)-\frac{\beta}{2}\right)\right]}\right)^{s}\right]\,.$ (2.43)
The first term corresponds to geodesics that wind around the throat, whereas the second term corresponds to geodesics that wind around the defects.
The above example was for the special case where both defects have deficit angle $\pi$, but the construction immediately extends to more general opening angles smaller than $\pi$ (see the numerical sketch after this list). We briefly explain the main differences.
* To ensure the quotient surface has one asymptotic boundary and two defects, the parameter $\ell$ must satisfy
$e^{\ell}>\frac{\sin\theta_{1}\left(1+\cos\theta_{2}\right)}{\sin\theta_{2}\left(1-\cos\theta_{1}\right)}\,.$ (2.44)
Geometrically, this means that two sharp defects cannot be arbitrarily close to each other.
* There are infinitely many closed geodesics on the surface, generated by the primitive “words” in $T_{\theta_{1,2}}$, such as $T_{\theta_{1}}^{2},T_{\theta_{1}}T_{\theta_{2}},T_{\theta_{2}}^{2}T_{\theta_{1}}^{7},\ldots$, as long as the “word” is hyperbolic.
* Such geodesics generically self-intersect. For example, $T_{\theta_{1}}T_{\theta_{2}}^{-1}$ is a geodesic winding around both defects once with a self-intersection.
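As a numerical sketch of the last two points (the sample angles and $\ell$ below are arbitrary), one can build the generators (2.39), test whether a given word is hyperbolic via $|\operatorname{tr}W|>2$, and read off the length of the associated closed geodesic from $|\operatorname{tr}W|=2\cosh(L/2)$; for $\theta_{1,2}=\pi$ this returns $L=2\ell$, as found above.

```python
import numpy as np

def gen(theta, sign, ell):
    # Elliptic generators of eq. (2.39); sign = -1 for T_{theta_1}, +1 for T_{theta_2}.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(sign * ell / 2) * s],
                     [np.exp(-sign * ell / 2) * s, c]])

theta1, theta2, ell = 2 * np.pi / 3, 2 * np.pi / 5, 2.0  # arbitrary sample values
# Constraint (2.44): sharp defects cannot be arbitrarily close.
bound = np.sin(theta1) * (1 + np.cos(theta2)) / (np.sin(theta2) * (1 - np.cos(theta1)))
print("satisfies (2.44):", np.exp(ell) > bound)

W = gen(theta2, +1, ell) @ gen(theta1, -1, ell)          # the word T_2 T_1
print("hyperbolic:", abs(np.trace(W)) > 2)
print("geodesic length:", 2 * np.arccosh(abs(np.trace(W)) / 2))
```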
#### 2.3.4 The handle disk
We now consider the handle disk, which has a non-trivial topology. To build the surface we can cut the handle disk along two geodesics and flatten it onto the UHP with the identification in figure 5. Therefore, the Fuchsian group is generated by two hyperbolic elements that identify the associated geodesics: the first semicircle with the third, and the second semicircle with the fourth (Fuchsian groups generated by identifying pairs of semicircles are called Schottky groups). The generators will be denoted by $T_{1}$ and $T_{2}$, and the fundamental domain is obtained by removing the four half disks from $\mathbb{H}$. Explicitly, a hyperbolic element that identifies semi-circles centered along the real axis at points $c_{1,2}$ with radii $r_{1,2}$ is
$T(c_{1},c_{2};r_{1},r_{2})=\left(r_{1}r_{2}(c_{12}-r_{12})(c_{12}+r_{12})\right)^{-\frac{1}{2}}\begin{pmatrix}-c_{2}c_{12}+r_{2}r_{12}&(c_{1}c_{2}+r_{1}r_{2})(c_{12}-r_{12})\\ -c_{12}&c_{1}c_{12}-r_{1}r_{12}\end{pmatrix},$
where $c_{12}=|c_{1}-c_{2}|$ and $r_{12}=|r_{1}-r_{2}|$.
Figure 5: The handle disk can be constructed by identifying the orange and
blue geodesic semi-circles in the UHP. The fundamental domain is the exterior
of all the semi-circles. The two identifications are the generators of the
Fuchsian group that generate the surface. The red and purple curves are closed
geodesics on the surface. In the right figure we omit the purple closed
geodesic because it is broken into four segments with the pictured fundamental
domain.
As in the case of the two-defect surface any hyperbolic word
$T_{1}^{k_{1}}T_{2}^{k_{2}}T_{1}^{k_{3}}\ldots\,$, on the condition that it is
primitive, corresponds to a primitive closed geodesic on the handle disk. A
generic hyperbolic word corresponds to an integer winding of a primitive
geodesic. As an example, the dark red geodesic in figure 5 is associated to
$T_{2}$ while the purple one is the closed geodesic separating the torus with
the geodesic boundary, generated by $T_{1}T_{2}T_{1}^{-1}T_{2}^{-1}$. More
complicated closed geodesics typically have self-intersections. Each closed geodesic (with an associated hyperbolic word) contributes to the determinant, and so it is difficult to write down a simple expression for the full determinant.
For the two-point function we must consider boundary anchored geodesics such
as the orange and the blue geodesics in figure 5. It is convenient to use a
different fundamental domain for the handle disk where the asymptotic boundary
is in one connected segment of the UHP. To achieve this, one can first
diagonalize $T_{1}T_{2}T_{1}^{-1}T_{2}^{-1}$, which is associated to the
geodesic separating the asymptotic boundary from the handle, to be
$-\text{diag}(e^{\ell/2},e^{-\ell/2})$. This modifies the generators of the
group to be $\tilde{T}_{1},\tilde{T}_{2}$, generating the group $\tilde{\Gamma}$. (To get $\tilde{T}_{1,2}$ and $\tilde{\Gamma}$, one should conjugate every element in $\Gamma$ with a proper $SL(2,\mathbb{R})$ matrix $V$ so that $V\cdot T_{1}T_{2}T_{1}^{-1}T_{2}^{-1}\cdot V^{-1}=-\text{diag}(e^{\ell/2},e^{-\ell/2})$. Then $\tilde{\Gamma}=V\cdot\Gamma\cdot V^{-1}$, and the generators are $\tilde{T}_{i}=V\cdot T_{i}\cdot V^{-1}$, for which an explicit although complicated form can be obtained.)
The resulting fundamental domain is pictured in figure 6. The fundamental
domain looks very similar to the double trumpet, and we can parameterize the
Schwarzian boundary by $z=F(\tau)+i\epsilon F^{\prime}(\tau)$ with
$F(\tau)=\exp\left(\frac{\ell}{\beta}f(\tau)\right)$. The two point function
is given by
$G(\tau_{1},\tau_{2})=\sum_{\gamma\in\tilde{\Gamma}}\left(\frac{F^{\prime}(\tau_{1})\left(\gamma\cdot
F(\tau_{2})\right)^{\prime}}{\left(F(\tau_{1})-\gamma\cdot
F(\tau_{2})\right)^{2}}\right)^{s}\,.$ (2.45)
Note that the parameterization is the same as the one-sided two-point function
on the double trumpet (2.33), but the handle disk case is much more
complicated due to its group $\tilde{\Gamma}$. There are distinct classes of
geodesics on the handle disk, and we now explain how the sum over geodesics
can be categorized into equivalence classes using large diffeomorphisms.
Figure 6: An alternative representation of the handle disk fundamental domain.
All the semi-circles are geodesics, and the fundamental domain is the bounded region inside all the semi-circles. The identifications are red/pink,
blue/light blue/gray, green/green. The purple geodesic winds around the
handle. In this representation the asymptotic boundary is in one connected
segment. The two green semi-circles are mapped to each other by the group
element $\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1}.$
The moduli space of the handle disk is composed of the Schwarzian fluctuations and the moduli of the torus with geodesic boundary, denoted by $\mathcal{M}_{1,1}$. The moduli space is obtained from Teichmüller space $\mathcal{T}_{1,1}$ after modding out by the action of large diffs, also known as the mapping class group (MCG). The MCG is generated by three elements $\{\sigma,P,U\}$ and has a simple action on the generators of the Fuchsian group [42]
$\begin{array}{lll}\sigma(\tilde{T}_{1})=\tilde{T}_{1}^{-1},&P(\tilde{T}_{1})=\tilde{T}_{2},&U(\tilde{T}_{1})=\tilde{T}_{1}\tilde{T}_{2},\\ \sigma(\tilde{T}_{2})=\tilde{T}_{2},&P(\tilde{T}_{2})=\tilde{T}_{1},&U(\tilde{T}_{2})=\tilde{T}_{2}\,.\end{array}$ (2.46)
The MCG acts on a group element built from products of the generators by
acting on each generator independently. This action can be understood as a map
between cycles of the surface. Since each cycle has an associated closed
geodesic, we can think of this action as generating a map between closed
geodesics on the surface. When considering boundary anchored geodesics, this
becomes a map between boundary anchored geodesics. As an example, it can be
checked that the MCG preserves the group element $\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1}$ up to conjugation (for example, $\sigma\left(\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1}\right)=\tilde{T}_{1}^{-1}\tilde{T}_{2}\tilde{T}_{1}\tilde{T}_{2}^{-1}$, which is conjugate to $\left(\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1}\right)^{-1}$ by $\tilde{T}_{1}$), so the geodesic splitting the handle from the asymptotic boundary is fixed. We can consider a group element $\gamma\in\tilde{\Gamma}$ corresponding to a particular boundary anchored geodesic and act on it with the MCG to generate the equivalence class $\{\gamma\}_{\rm MCG}$ (the geodesic corresponding to $\gamma$ connects the boundary points $F(\tau_{1})$ and $\gamma\cdot F(\tau_{2})$ when we unwrap the handle disk on the UHP). As an example, the element $\tilde{T}_{1}$ corresponds to the purple boundary anchored geodesic that winds the handle in figure 6. We can
generate all other geodesics that wind the handle and do not self-intersect by
acting with the MCG
$\text{non self-intersecting geodesics: }\{\tilde{T}_{1}\}_{\rm MCG}=\{\tilde{T}_{1},\tilde{T}_{2},\tilde{T}_{1}\tilde{T}_{2},\tilde{T}_{1}^{2}\tilde{T}_{2},\tilde{T}_{1}^{3}\tilde{T}_{2},\tilde{T}_{1}^{2}\tilde{T}_{2}^{3},\ldots\}\,.$ (2.47)
The summation over the elements $\{\tilde{T}_{1}\}_{\rm MCG}$ in the two-point function (2.45) implements a sum over all non-self-intersecting geodesics that wind the handle. These contributions were resummed in [25] and shown to give the ramp behavior for the late time two-point function.
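The MCG orbit can be explored mechanically at the level of words in the generators. The sketch below applies the moves (2.46), together with $U^{-1}$, letterwise to words in $\tilde{T}_{1},\tilde{T}_{2}$, reduces words freely and cyclically (geodesics correspond to conjugacy classes, so words are only representatives, defined up to conjugation and inversion), and enumerates a finite portion of the orbit $\{\tilde{T}_{1}\}_{\rm MCG}$; the breadth-first cutoff is an arbitrary choice.

```python
from collections import deque

# Letters: +1 = T1, -1 = T1^{-1}, +2 = T2, -2 = T2^{-1}.
def reduce_word(word):
    out = []
    for x in word:
        if out and out[-1] == -x:
            out.pop()              # free reduction: cancel adjacent inverses
        else:
            out.append(x)
    return out

def canonical(word):
    # Cyclic reduction plus a canonical rotation: conjugacy-class representative.
    w = reduce_word(word)
    while len(w) > 1 and w[0] == -w[-1]:
        w = w[1:-1]
    return min(tuple(w[i:] + w[:i]) for i in range(len(w))) if w else ()

# MCG generators acting letterwise, eq. (2.46), together with U^{-1}.
sigma = {1: [-1], -1: [1], 2: [2], -2: [-2]}
P     = {1: [2], -1: [-2], 2: [1], -2: [-1]}
U     = {1: [1, 2], -1: [-2, -1], 2: [2], -2: [-2]}
Uinv  = {1: [1, -2], -1: [2, -1], 2: [2], -2: [-2]}

def act(word, move):
    out = []
    for x in word:
        out.extend(move[x])
    return canonical(out)

seen, queue = {canonical([1])}, deque([canonical([1])])
while queue and len(seen) < 50:        # explore a finite part of the orbit
    w = queue.popleft()
    for move in (sigma, P, U, Uinv):
        v = act(w, move)
        if v and v not in seen:
            seen.add(v)
            queue.append(v)

# Short elements reproduce the words in (2.47) up to conjugation/inversion.
print(sorted(seen, key=len)[:12])
```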
This method can also be used to partially classify self-intersecting geodesics
that wind around the handle. A complete closed form classification appears to
be highly non-trivial, but for one self-intersection we can find explicit
examples of simple equivalence classes. For boundary anchored once self-
intersecting geodesics that wind the handle, two equivalence classes are given
by starting with base elements $\tilde{T}_{1}^{2}\tilde{T}_{2}^{2}$,
$\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1}$
and acting with the MCG
$\displaystyle\text{one self-intersection: }\{\tilde{T}_{1}^{2}\tilde{T}_{2}^{2}\}_{\rm MCG}=\{\tilde{T}_{1}^{2}\tilde{T}_{2}^{2},\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}\tilde{T}_{2}^{-1},\tilde{T}_{1}^{3}\tilde{T}_{2}\tilde{T}_{1}\tilde{T}_{2},\ldots\},$ (2.48)
$\displaystyle\{\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1}\}_{\rm MCG}=\{\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1},\tilde{T}_{1}\tilde{T}_{2}\tilde{T}_{1}^{3}\tilde{T}_{2}\tilde{T}_{1}^{-1}\tilde{T}_{2}^{-1},\ldots\}\,.$
Note that the above is not a complete classification of all once self-intersecting geodesics, but we believe the first equivalence class will always contain the shortest self-intersecting geodesics on the surface (the argument is that the length of self-intersecting geodesics increases with additional windings, and the number of windings is controlled by the word-length of the generator). We can write down other equivalence classes systematically using the group theory description (geodesics with more self-intersections are generated by words with at least six generators, and the greater length of the words implies a greater length of the corresponding geodesic). It would be interesting to perform the integral over the boundary fluctuations in (2.45) for the self-intersecting geodesics falling into the equivalence classes (2.48), to confirm that they do not contribute to the late time two-point function [25].
## 3 JT gravity path integral
### 3.1 Gauge fixing the path integral
We now explain how to gauge fix the gravity path integral for JT gravity. For
the case of compact surfaces see [7], while for non-compact surfaces also see
[13, 14, 15, 16]. The integral is defined by
$Z=\int\frac{\mathcal{D}g\,\mathcal{D}\Phi}{\text{Vol}(\text{Diff})}e^{-I_{\text{JT}}[g,\Phi]},$ (3.1)
where we divide out by the volume of diffeomorphisms. The JT gravity action is
given by
$I_{\text{JT}}[g,\Phi]=-S_{0}\chi(\Sigma)-\bigg[\frac{1}{2}\int_{\Sigma}\sqrt{g}\Phi\left(R+2\right)+\underbrace{\int_{\partial\Sigma}\sqrt{h}\Phi\left(K-1\right)}_{-I_{\text{bdy}}}\bigg],$ (3.2)
where the first term is topological and $\chi(\Sigma)=2-2g-b$ for surfaces
with $g$ handles and $b$ boundaries. When we include asymptotic boundaries we
choose boundary conditions to fix the induced metric $\sqrt{h}=1/\epsilon$ and
the proper length of the boundary to be $\beta/\epsilon$, and we fix the
dilaton to asymptotically approach $\phi_{b}=\gamma/\epsilon$.
To integrate over metrics we must first specify a measure on the moduli space of all metrics, that is, we must specify a metric $\langle\cdot\,,\cdot\rangle$ on the tangent space of moduli space. The tangent space is naturally the space of metric deformations, and the standard metric for the space of these deformations is the ultra-local measure [43]
$\langle\delta g_{ab},\delta
g_{cd}\rangle=\mathcal{N}\int_{\Sigma}\sqrt{g}g^{ac}g^{bd}\delta g_{ab}\delta
g_{cd},$ (3.3)
which can be extended to other tensor deformations in an obvious way. The
measure is defined up to a normalization constant $\mathcal{N}$ that we will
fix later. Metric deformations decompose into three orthogonal parts
$\delta g_{ab}=\underbrace{\omega g_{ab}}_{\text{Weyl}}\oplus\underbrace{\operatorname{range}P_{1}}_{\text{small diff}}\oplus\underbrace{\operatorname{ker}P_{1}^{\dagger}}_{\text{moduli}},$ (3.4)
where $(P_{1}V)_{ab}=\nabla_{a}V_{b}+\nabla_{b}V_{a}-(\nabla_{c}V^{c})g_{ab}$,
and $(P_{1}^{\dagger}\delta g)_{a}=-2\nabla^{b}\delta g_{ab}$. A general
infinitesimal metric deformation is a combination of a Weyl transformation, a
small diffeomorphism, and a deformation of the moduli. Since the metric
deformations are orthogonal (3.4) the path integral measure breaks up into
integrals over these deformations.
We will first consider the case where we are integrating over all metrics on a
compact surface of genus $g\geq 2$. To perform the integral we gauge fix to
conformal gauge and write the metric as
$g=f^{*}\left(e^{2\omega}\hat{g}\right)$ with $f$ a small diffeomorphism,
$\omega$ a Weyl factor, and $\hat{g}$ a constant negative curvature metric
$\hat{R}=-2$. The measure works out to be [31, 29, 30]
$Z_{g}=\int_{\mathcal{M}_{g}}\underbrace{d\left(\text{Weil-Pet.}\right)(\det\hat{P}_{1}^{\dagger}\hat{P}_{1})^{1/2}\mathcal{D}\omega}_{\mathcal{D}g/\text{Vol}}\,\mathcal{D}\Phi\, e^{-I_{\text{JT}}[e^{2\omega}\hat{g},\Phi]}e^{-26S_{L}[\omega,\hat{g}]}.$ (3.5)
The integral is carried out over the moduli space of constant negative curvature surfaces $\mathcal{M}_{g}$ of genus $g$. (When integrating over small diffeomorphisms we pick up a factor of the volume of small diffs. The ratio of small diffs to all diffs gives the set of large diffs, which reduces the integral, $\frac{\text{Vol}(\text{Diff}_{0})}{\text{Vol}(\text{Diff})}\int_{\mathcal{T}_{g}}=\int_{\mathcal{M}_{g}}$, to the moduli space of $R=-2$ surfaces. Here $\mathcal{T}_{g}$ is the set of metrics mod Weyl $[e^{2\omega}g]$ before quotienting by diffeomorphisms, also known as Teichmüller space.) In the above
$(\det\hat{P}_{1}^{\dagger}\hat{P}_{1})^{1/2}$ appears from the ghost path integral when gauge fixing to conformal gauge, and the notation $\hat{P}_{1}$ implies the operator is defined with respect to $\hat{g}$. We have an integral
over the Weyl factor $\mathcal{D}\omega$, and the Liouville action $S_{L}$
appears due to the conformal anomaly arising from defining the integration measures with respect to $\hat{g}$ (every integral in (3.5), such as $\mathcal{D}_{\hat{g}}\Phi$, is defined with the appropriate ultra-local measure (3.3) with metric $\hat{g}$). The Liouville action will give no contribution after we perform the $\mathcal{D}\omega$ integral, so we discard it from now on. When restricted to constant negative curvature metrics $\hat{g}$, the measure (3.3) is by definition the Weil-Petersson measure [44], which is the origin of $d\left(\text{Weil-Pet.}\right)$.
We can write the part of the JT gravity action (3.2) that contains the dilaton
as
$I[e^{2\omega}\hat{g},\Phi]=-\frac{1}{2}\int_{\Sigma}\sqrt{\hat{g}}\Phi\left(\hat{R}-2\hat{\nabla}^{2}\omega+2e^{2\omega}\right).$
(3.6)
We can perform the integral over the dilaton by rotating the integration
contour to be along the imaginary axis, giving a delta function constraint
that localizes onto constant negative curvature geometries
$\int\mathcal{D}\omega\int_{i\mathbb{R}}\mathcal{D}\Phi\, e^{-I_{\text{JT}}[e^{2\omega}\hat{g},\Phi]}=\int\mathcal{D}\omega\,\delta\left(\hat{R}-2\hat{\nabla}^{2}\omega+2e^{2\omega}\right)=\frac{1}{\det(-2\hat{\nabla}^{2}+4)},$ (3.7)
where in the last equality we localize onto $\omega=0$. A straightforward
calculation shows that the gauge fixing differential operator takes the form
$\hat{P}_{1}^{\dagger}\hat{P}_{1}=2\left(-\hat{\nabla}_{1}^{2}+1\right)$ on a
surface with constant negative curvature, with $\hat{\nabla}^{2}_{1}$ the
vector Laplacian. Putting everything together we find that the JT gravity path
integral (3.5) is
$\displaystyle Z_{g}$
$\displaystyle=(\text{const})^{\chi}\int_{\mathcal{M}_{g}}d\left(\text{Weil-
Pet.}\right)\frac{\det\left(-\hat{\nabla}_{1}^{2}+1\right)^{1/2}}{\det(-\hat{\nabla}^{2}+2)}$
(3.8)
$\displaystyle=(\text{const})^{\chi}\int_{\mathcal{M}_{g}}d\left(\text{Weil-
Pet.}\right).$ (3.9)
In the above we have discarded some multiplicative constants from the determinants, since in zeta function regularization multiplying all the eigenvalues by a constant shifts the determinant by $(\text{const})^{\chi}$ (all of these constants can be absorbed into a redefinition of $S_{0}$ in the topological term of the action (3.2)). Furthermore, we also used our result for the vector Laplacian determinant
$\frac{\det\left(-\hat{\nabla}_{1}^{2}+1\right)^{1/2}}{\det(-\hat{\nabla}^{2}+2)}=2^{\frac{1}{2}\chi},$
(3.10)
given in equation (C.10). The above conclusion immediately generalizes beyond
compact surfaces.
Consider the moduli space of genus $g$ hyperbolic surfaces with $n$ asymptotic boundaries of regularized lengths $\vec{\beta}=\left(\beta_{1},\ldots,\beta_{n}\right)$ and $k$ conical defects
with opening angles specified by $2\pi\alpha_{i}$. We take
$\vec{\alpha}=\left(\alpha_{1},\ldots,\alpha_{k}\right)$ with
$\alpha_{i}=n_{i}^{-1}$ and integer $n_{i}\geq 2$. To localize onto geometries
with such singularities we must insert an operator $\mathcal{V}_{\alpha}$ into
the path integral (3.1). We explain the dilaton gravity form of this operator
slightly later, but for now in all integrals we assume we have inserted this
operator and integrated over the dilaton to localize onto the relevant
hyperbolic geometries.
When integrating over hyperbolic surfaces with arbitrary conical singularities
and asymptotic boundaries we again obtain the Weil-Petersson measure for the
associated moduli spaces. This is because the measure (3.3) for metrics with conical singularities is again by definition the Weil-Petersson metric on such spaces [45]. Carrying out the JT gravity path integral in the same way as for a
compact surface we arrive at
$\displaystyle Z_{g,{\vec{\alpha}}}(\beta_{1},\ldots,\beta_{n})$
$\displaystyle=(\text{const})^{\chi}\int_{\mathcal{M}_{g,{\vec{\alpha},\vec{\beta}}}}d\left(\text{Weil-
Pet.}\right)\frac{\det\left(-\hat{\nabla}_{1}^{2}+1\right)^{1/2}}{\det(-\hat{\nabla}^{2}+2)}e^{-I_{\text{bdy}}}$
$\displaystyle=(\text{const})^{\chi}\int_{\mathcal{M}_{g,{\vec{\alpha},\vec{\beta}}}}d(\text{boundary
wiggles})d\left(\text{bulk moduli}\right)e^{-I_{\text{bdy}}}.$ (3.11)
In the above we have split the integral over the moduli $d\left(\text{Weil-
Pet.}\right)$ into the boundary wiggles and the bulk moduli. The Weil-
Petersson measure (3.3) will induce a measure for the boundary wiggles, which we will calculate in section 3.2.1 and see is given by the standard Schwarzian measure. While we are only able to compute the required determinants for
special defect angles, it seems likely that the cancellation between
determinants will remain the case for general conical defects.
#### 3.1.1 Integrating over moduli space
We now explain how to formally carry out the integration over moduli space.
For reviews see [29, 31, 46, 47, 48]. From (3.4) the infinitesimal
deformations of the moduli correspond to variations of the metric $\delta
g\in\operatorname{Ker}P_{1}^{\dagger}$. In conformal gauge where the metric is
off-diagonal this translates to the condition $\overline{\partial}\delta
g_{zz}=\partial\delta g_{\bar{z}\bar{z}}=0.$ This implies that the moduli
deformations are holomorphic/anti-holomorphic deformations of the form
$\phi_{n}=\phi_{n}(z)dz^{2},\qquad\overline{\phi_{n}}=\overline{\phi_{n}}(\bar{z})d\bar{z}^{2}.$
(3.12)
These are known as quadratic differentials, and they provide a basis for
infinitesimal deformations of the moduli.
We briefly explain some properties of quadratic differentials. On a compact
genus $g\geq 2$ surface there is a basis of $3g-3$ globally defined pairs of
holomorphic and anti-holomorphic quadratic differentials giving a moduli space
of real dimension $6g-6$. When we consider hyperbolic surfaces with conical
defects, the quadratic differentials are allowed to have simple poles at the location of the defect, $\phi\sim z^{-1}dz^{2}$ [45] (for examples of quadratic differentials on surfaces with defects see [49]). These deformations
correspond to moving the conical defect on the surface, and so the real
dimension of the moduli space with $n$ conical defects is $6g-6+2n$. When we
consider the inclusion of asymptotic boundaries the boundary fluctuations are
also part of the moduli, and there are infinitely many quadratic differentials
associated to turning on modes of the boundary fluctuations as we will see in
section 3.2.1.
An arbitrary infinitesimal moduli deformation, in conformal gauge, can be written in terms of quadratic differentials as (the normalization factor of (3.14) is our convention; we will fix the normalization factor of the metric perturbation inner product later, and are satisfied with the proportionality here as a motivation)
$\delta g=\sum_{n}\delta
c_{n}\phi_{n}(z)dz^{2}+\delta\overline{c}_{n}\overline{\phi}_{n}(\bar{z})d\bar{z}^{2},$
(3.13)
where $\delta c_{n}$ are deformation parameters. Using the inner product for
metric deformations (3.3) we can define an inner product for quadratic
differentials, given by
$\langle\phi_{n},\phi_{m}\rangle\equiv\int_{\Sigma}d^{2}z\sqrt{g}g^{z\overline{z}}g^{z\overline{z}}\phi_{n}(z)\overline{\phi}_{m}(\bar{z}),$
(3.14)
where the second quantity is to be complex conjugated. It is standard to
ignore the normalization in (3.3), restoring it only when computing the full
path integral measure.
To define the measure on moduli space we will need one more quantity known as
the Beltrami differential $\mu$. To integrate over moduli space we must first
choose coordinates $m_{n},\bar{m}_{n}$ on the space by specifying a set of
metrics $g(z,\bar{z};m_{n},\bar{m}_{n})$ that give a slice through the space.
We must then compute the Jacobian for this set of coordinates. This can be
accomplished as follows. Consider a metric $g(m_{n},\bar{m}_{n})$ on a surface which in some local patch takes the form $ds^{2}=e^{2\omega}|dz|^{2}=2g_{z\bar{z}}|dz|^{2}$ for some complex coordinate
$z$. As we change the moduli by moving to a nearby metric at $m_{n}+\delta
m_{n}$ the new metric can no longer be $\propto|dz|^{2}$ since it is not in
the same equivalence class as the original metric. However, there must be
complex coordinates $z^{\prime}=z+\delta m_{n}\,v(z,\bar{z})+\ldots$ where the
new metric takes the form $e^{2(\omega+\delta\omega)}|dz^{\prime}|^{2}$. We
thus find that the deformed metric in the old $z$ coordinates is
$ds^{\prime 2}=e^{2(\omega+\delta\omega)}\left(dzd\bar{z}+\delta m_{n}\,\overline{\mu}_{n}dz^{2}+\delta\overline{m}_{n}\,\mu_{n}d\bar{z}^{2}+\ldots\right),\qquad\delta(ds^{2})=\delta m_{n}e^{2\omega}\overline{\mu}_{n}dz^{2}+\text{c.c.}$ (3.15)
where we have defined the Beltrami differentials $\mu=\overline{\partial}v$ and $\overline{\mu}=\partial v$, which capture how the metric infinitesimally changes as we change the $m_{n}$ coordinates (Beltrami differentials are commonly viewed as $(-1,1)$ forms $\mu=\mu_{\bar{z}}^{~z}dz^{-1}d\bar{z}$ which can be integrated against $(2,0)$ forms $\phi_{zz}dz^{2}$, and it is convention to define the metric deformation by Beltrami differentials as $\overline{\mu}dz^{2}$). In the second equation we have written the variation to linear order. All these statements are up to Weyl rescalings, since rescalings do not move the metric in moduli space. The
overlap between the quadratic differentials and the change in the metric due
to a deformation in $m_{n}$ coordinates can be obtained from the measure
(3.3), which is most commonly expressed as an overlap between the Beltrami
differentials $\mu$ and the quadratic differentials
$\langle\mu_{n},\phi_{m}\rangle\equiv 2\int_{\Sigma}d^{2}z\,\mu_{n}\,\phi_{m},$ (3.16)
where the overlap is defined without a conjugation. Notice that deforming the metric by a quadratic differential $\phi(z)dz^{2}$ (3.13) is equivalent to deforming it by a Beltrami differential
$\overline{\mu}=\frac{\phi}{2g_{z\bar{z}}}$ to linear order in the
deformation. To compute the correct measure we must project the above
deformation onto the space of genuine moduli deformations (as opposed to
diffeomorphisms and Weyl rescalings). Projecting onto the basis of quadratic
differentials we get
$\delta g=\delta
m_{n}\phi_{j}\langle\overline{\mu}_{n},\overline{\phi}_{i}\rangle\langle\phi_{j},\phi_{i}\rangle^{-1}+\delta\overline{m}_{n}\overline{\phi}_{j}\langle\mu_{n},\phi_{i}\rangle\langle\overline{\phi}_{j},\overline{\phi}_{i}\rangle^{-1},$
(3.17)
where the notation $\langle\ldots\rangle^{-1}$ denotes an inverse matrix. Using (3.3) we can compute the metric
$ds^{2}_{\text{Weil-
Pet.}}=2\mathcal{N}\langle\overline{\mu}_{n},\overline{\phi}_{j}\rangle\langle\mu_{m},\phi_{i}\rangle\langle\phi_{i},\phi_{j}\rangle^{-1}dm_{n}d\overline{m}_{m}.$
(3.18)
Taking the square root of the determinant and treating the metric as a product
of matrices, we find that in terms of $m_{n}$ coordinates the integral over
moduli space takes the form
$\int_{\mathcal{M}}d(\text{Weil-Pet.})\, e^{-I}=\prod_{n}\mathcal{N}\int dm_{n}d\overline{m}_{n}\frac{\det\langle\mu,\phi\rangle\det\langle\overline{\mu},\overline{\phi}\rangle}{\sqrt{\det\langle\phi,\phi\rangle\det\langle\overline{\phi},\overline{\phi}\rangle}}\, e^{-I[m_{n},\overline{m}_{n}]}.$ (3.19)
We have also included the possibility of an action $I$ that depends on the moduli. In JT gravity this action will be given by the boundary term in (3.2). We have also reinstated the overall normalization constant $\mathcal{N}$
appearing in the measure (3.3). As shown in [13], and in appendix B using quadratic differentials, in the second order formalism the gluing measure for geodesic boundaries is given by
$4\mathcal{N}\int b\,db,$ (3.20)
and so to get the standard form of the Weil-Petersson measure we must choose the normalization $\mathcal{N}=\frac{1}{4}$. We keep the normalization constant present throughout, inserting its numerical value when necessary.
#### Conical defect operator
From the expression for the Weil-Petersson measure we can extract the dilaton
gravity operator that creates a conical defect on the surface. When we have a
defect we have two additional moduli that move the defect. We choose the
moduli to be parameterized by the location $z_{i}$ of the defect, and we can
formally write the measure as
$\mathcal{N}\int_{\Sigma}d^{2}z_{i}\frac{\langle\mu,\phi_{1}\rangle\langle\overline{\mu},\overline{\phi}_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle\langle\overline{\phi}_{1},\overline{\phi}_{1}\rangle}},$
(3.21)
where $\mu$ is the Beltrami differential that infinitesimally moves the defect to $z_{i}+\delta z_{i}$, and $\phi_{1}$ is the quadratic differential with a simple pole at $z_{i}$. From the JT gravity perspective this is the measure
after the dilaton has already been integrated out, and so restoring it we
arrive at the dilaton gravity operator that creates a conical defect
$\mathcal{V}_{\alpha}=\mathcal{N}\int_{\Sigma}d^{2}z_{i}\frac{\langle\mu,\phi_{1}\rangle\langle\overline{\mu},\overline{\phi}_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle\langle\overline{\phi}_{1},\overline{\phi}_{1}\rangle}}e^{-2\pi(1-\alpha)\Phi(z_{i})}.$
(3.22)
One complication for evaluating the above integral is that quadratic
differentials are not generally known, apart from their singular behavior near
the defect. However, for the hyperbolic disk everything can be worked out
explicitly, and in section 3.2.3 we evaluate the measure on the disk. In
section 3.2.4 we argue that for general surfaces in the blunt defect limit
$\alpha\to 1$ the measure can be worked out to leading order in the
$(1-\alpha)$ expansion by using the universal divergent behavior of the
quadratic differential.
### 3.2 Examples
#### 3.2.1 Disk
In this section we explain how to apply the above formalism for integrating
over moduli space to reproduce the disk partition function $Z(\beta)$ in JT gravity [14, 15, 16]. The AdS2 disk metric in complex coordinates is given by (our conventions for complex coordinates are $z=x+iy$ with $\int d^{2}z=2\int d^{2}x$; the Dirac delta function is defined by $\overline{\partial}\partial\log|z|^{2}=2\pi\delta^{2}(z,\bar{z})$, with $\int d^{2}z\,\delta^{2}(z,\bar{z})=1$; a Weyl rescaled metric $g=e^{2\omega}dzd\bar{z}$ has curvature $R=-8e^{-2\omega}\overline{\partial}\partial\omega$)
$ds^{2}=\frac{4dzd\overline{z}}{\left(1-|z|^{2}\right)^{2}}.$ (3.23)
This is related to the standard metric on the hyperbolic disk through the
coordinate change
$ds^{2}=dr^{2}+\sinh^{2}r\hskip 1.70709ptd\tau^{2},\qquad
z=e^{i\tau}\tanh\frac{r}{2}.$ (3.24)
As explained previously, deformations of the moduli correspond to holomorphic
quadratic differentials and their conjugates. On the disk it is simple to
write down a basis of holomorphic quadratic differentials given by
$\phi_{n}=\sqrt{\frac{n^{3}-n}{2\pi}}z^{n-2}dz^{2},\qquad\overline{\phi}_{n}=\sqrt{\frac{n^{3}-n}{2\pi}}\overline{z}^{n-2}d\overline{z}^{2},\qquad\langle\phi_{n},\phi_{m}\rangle=\delta_{n,m},$
(3.25)
where for holomorphicity we restrict to integers $n\geq 2$. Infinitesimal
deformations of the disk are given by deforming by linear combinations of the
quadratic differentials
$ds^{2}=\frac{4dzd\overline{z}}{\left(1-|z|^{2}\right)^{2}}+c_{n}\sqrt{\frac{n^{3}-n}{2\pi}}z^{n-2}dz^{2}+\overline{c}_{n}\sqrt{\frac{n^{3}-n}{2\pi}}\overline{z}^{n-2}d\overline{z}^{2}.$
(3.26)
Note that the above metric is no longer AdS2. We will take our coordinates for
moduli space to be given by $c_{n},\overline{c}_{n}$, and so we must work out
the measure for these coordinates along with the boundary action of JT gravity
(3.2) in terms of $c_{n},\overline{c}_{n}$. The measure is straightforward
since for the Beltrami differentials in (3.19) we just use the dual of the
quadratic differentials
$\overline{\mu}_{n}=\frac{\phi_{n}}{2g_{z\overline{z}}}$. Working out the
action is slightly more complicated, which we now do.
The finite version of the metric for the above deformation was worked out by
[16], and is given by333333We mention that we still mostly work with
infinitesimal perturbations of the metric and the following finite metric
deformation is not unique.
$\displaystyle
ds^{2}=e^{C+\overline{C}}\frac{\left(1-|z|^{2}\right)^{2}}{\left(1-|e^{C}z|^{2}\right)^{2}}\left[\left(|1+z\partial
C|^{2}+|\overline{z}\overline{\partial}C|^{2}\right)\frac{4dzd\overline{z}}{\left(1-|z|^{2}\right)^{2}}+\left(1+z\partial
C\right)\varepsilon^{\prime\prime\prime}(z)dz^{2}+\text{c.c.}\right],$ (3.27)
$\displaystyle
C(z,\overline{z})=-\frac{1}{2z}\left(\varepsilon(z)-z^{2}\overline{\varepsilon}(\overline{z})-z(1-z\overline{z})\overline{\varepsilon}^{\prime}(\overline{z})-\frac{1}{2}(1-z\overline{z})^{2}\overline{\varepsilon}^{\prime\prime}(\overline{z})\right).$
The functions $\varepsilon(z),\overline{\varepsilon}(\overline{z})$ are
holomorphic/anti-holomorphic, and the primes
$\varepsilon^{\prime}(z),\overline{\varepsilon}^{\prime}(\overline{z})$ denote
holomorphic/anti-holomorphic derivatives. The metric (3.27) is AdS2, which can
be seen by using coordinates $w=ze^{C}$ in terms of which it takes the
standard form (3.23). This coordinate change is a large diffeomorphism and
corresponds to a physically distinct configuration. The relation between
(3.26) and (3.27) comes from identifying
$\varepsilon(z)=\sum_{n\geq
2}\frac{1}{\sqrt{2\pi(n^{3}-n)}}\,c_{n}z^{n+1},\qquad\overline{\varepsilon}(\overline{z})=\sum_{n\geq
2}\frac{1}{\sqrt{2\pi(n^{3}-n)}}\,\overline{c}_{n}\overline{z}^{n+1}.$ (3.28)
Expanding the metric (3.27) to leading order in $\varepsilon$ we recover the
infinitesimal form (3.26) up to a Weyl factor of order $\varepsilon^{2}$.
We now explain how turning on these deformations corresponds to turning on
modes of the Schwarzian. In $z$ coordinates (3.27) the boundary cutoff is
located at fixed radial distance while in $w=ze^{C}$ coordinates the metric
takes the form
$ds^{2}=\frac{4dwd\overline{w}}{(1-|w|^{2})^{2}},$ (3.29)
and the cutoff surface fluctuates. Taking $z=e^{i\theta}$ we find that at the
boundary, to linear order in $c_{n}$,
$w=e^{if(\theta)},\qquad f(\theta)=\theta+\sum_{n\geq
2}\frac{1}{2i\sqrt{2\pi(n^{3}-n)}}\left(c_{n}e^{in\theta}-\overline{c}_{n}e^{-in\theta}\right).$
(3.30)
We would like to find the cutoff surface in terms of the $f(\theta)$
reparameterization. We can do this perturbatively in the cutoff $\delta$,
finding $w\left(\theta\right)=e^{if(\theta)}\left(1-\delta
f^{\prime}(\theta)+\frac{\delta^{2}f^{\prime}(\theta)^{2}}{2}+\mathcal{O}(\delta^{3})\right)$,
where we have determined the radial distance in terms of $f(\theta)$ using that the
induced metric along the cutoff surface is fixed to be $\sqrt{h}=1/\delta$.
The extrinsic curvature can then be evaluated
$\displaystyle
K=1+\delta^{2}\left(\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3f^{\prime\prime
2}}{2f^{\prime 2}}+\frac{1}{2}f^{\prime
2}\right)+\mathcal{O}(\delta^{3})=1+\delta^{2}\operatorname{Sch}\left(\tan\frac{f(\theta)}{2},\theta\right),$
(3.31)
which is the standard Schwarzian answer. From (3.30) we see that the quadratic
differentials $\phi_{n},\overline{\phi}_{n}$ precisely correspond to turning
on the modes of the Schwarzian. The boundary action (3.2) can now be
evaluated. Reinstating the temperature $\beta$ and the value of the dilaton
$\phi=\gamma/\delta$ we find
$\displaystyle I_{\text{bdy}}$
$\displaystyle=-\frac{2\pi\gamma}{\beta}\int_{0}^{2\pi}d\theta\left(\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3f^{\prime\prime
2}}{2f^{\prime 2}}+\frac{1}{2}f^{\prime 2}\right),$ (3.32)
$\displaystyle=I_{0}+\frac{\pi\gamma}{2\beta}\sum_{n\geq 2}n\,c_{n}\overline{c}_{n},\qquad I_{0}=-\frac{2\pi^{2}\gamma}{\beta},$
where in the second line we have used that the Schwarzian path integral is
one-loop exact [50], so that it is sufficient to expand to quadratic order in
$c_{n},\overline{c}_{n}$ and carry out the Gaussian integral [7, 51], and $I_{0}$ is the
classical contribution. We can now evaluate the path integral over the moduli
space using (3.19)
space using (3.19)
$\displaystyle Z_{\text{disk}}\left(\beta\right)$ $\displaystyle=\prod_{n\geq
2}\mathcal{N}\int
dc_{n}d\overline{c}_{n}\left|\frac{\det\langle\mu,\phi\rangle}{\sqrt{\det\langle\phi,\phi\rangle}}\right|^{2}e^{-I_{\text{bdy}}}=e^{\frac{2\pi^{2}\gamma}{\beta}}\prod_{n\geq
2}\mathcal{N}\int
dc_{n}d\overline{c}_{n}\exp\left(-\frac{\pi\gamma}{2\beta}\sum_{n\geq
2}n\hskip 0.28436ptc_{n}\overline{c}_{n}\right)$ (3.33)
$\displaystyle=e^{\frac{2\pi^{2}\gamma}{\beta}}\prod_{n\geq
2}\mathcal{N}\frac{4\beta}{\gamma
n}=\frac{1}{8}\sqrt{\frac{\gamma^{3}}{2\pi\mathcal{N}^{3}\beta^{3}}}\,e^{\frac{2\pi^{2}\gamma}{\beta}}.$
In the first line we used our normalization for the quadratic differentials
$\langle\phi_{n},\phi_{m}\rangle=\delta_{n,m}$ and that the dual Beltrami
differentials $\overline{\mu}_{n}=\frac{\phi_{n}}{2g_{z\overline{z}}}$ satisfy
$\langle\mu_{n},\phi_{m}\rangle=\delta_{n,m}$. In the last line we used zeta
function regularization. This reproduces the JT gravity disk partition
function obtained from the first order formalism [7].
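To spell out the zeta-function regularization step, using $\zeta(0)=-\frac{1}{2}$ and $\zeta^{\prime}(0)=-\frac{1}{2}\log 2\pi$,
$\prod_{n\geq 2}\frac{a}{n}=\exp\Big(\log a\sum_{n\geq 2}1-\sum_{n\geq 2}\log n\Big)=a^{\zeta(0)-1}e^{\zeta^{\prime}(0)}=\frac{a^{-3/2}}{\sqrt{2\pi}},\qquad a=\frac{4\mathcal{N}\beta}{\gamma},$
which is how the prefactor $\frac{1}{8}\sqrt{\gamma^{3}/(2\pi\mathcal{N}^{3}\beta^{3})}$ arises.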
#### 3.2.2 Conical defect
We now explain how the above calculation is modified in the case of a disk
with a single conical defect of opening angle $2\pi\alpha$. If the defect is
at the center of the disk then the metric and Ricci scalar are given by
$ds^{2}=\frac{4\alpha^{2}|z|^{2(\alpha-1)}}{(1-|z|^{2\alpha})^{2}}dzd\bar{z}\,,\qquad\frac{1}{2}\sqrt{g}\left(R+2\right)=2\pi(1-\alpha)\delta^{(2)}(z).$
(3.34)
The quadratic differentials are now given by
$\phi_{n}=\sqrt{\frac{n^{3}-n\alpha^{2}}{2\pi}}\,z^{n-2}dz^{2},\qquad\overline{\phi}_{n}=\sqrt{\frac{n^{3}-n\alpha^{2}}{2\pi}}\,\overline{z}^{n-2}d\overline{z}^{2},\qquad\langle\phi_{n},\phi_{m}\rangle=\delta_{n,m}.$
(3.35)
When a conical defect is present on the geometry the quadratic differentials
are allowed to have simple poles [45] at the location of the defect, so we now
allow for $n\geq 1$ modes.
space by two, and the new deformation should be interpreted as moving the
defect on the surface. The quadratic differentials infinitesimally deform the
metric by
$ds^{2}=\frac{4\alpha^{2}|z|^{2(\alpha-1)}}{(1-|z|^{2\alpha})^{2}}dzd\bar{z}+c_{n}\sqrt{\frac{n^{3}-n\alpha^{2}}{2\pi}}\,z^{n-2}dz^{2}+\overline{c}_{n}\sqrt{\frac{n^{3}-n\alpha^{2}}{2\pi}}\,\overline{z}^{n-2}d\overline{z}^{2}.$ (3.36)
Similar to the case of the disk, there is a vector field $\xi$ that
infinitesimally removes these deformations, which we can find by solving
$2\nabla_{z}\xi_{z}=-c_{n}\sqrt{\frac{n^{3}-n\alpha^{2}}{2\pi}}\,z^{n-2},\qquad
2\nabla_{\bar{z}}\xi_{\bar{z}}=-\overline{c}_{n}\sqrt{\frac{n^{3}-n\alpha^{2}}{2\pi}}\,\overline{z}^{n-2},\qquad\nabla_{z}\xi_{\bar{z}}+\nabla_{\bar{z}}\xi_{z}=0.$
(3.37)
The vector field can be found and takes the form
$\displaystyle\xi^{\bar{z}}=\frac{\overline{c}_{n}\overline{z}^{n+1}}{2\sqrt{2\pi(n^{3}-n\alpha^{2})}}-\frac{c_{n}\overline{z}(z\overline{z})^{-\alpha}\left(2\alpha^{2}z^{n}(z\overline{z})^{\alpha}+n^{2}z^{n}\left(1-(z\overline{z})^{\alpha}\right)^{2}+n\alpha
z^{n}\left(1-(z\overline{z})^{2\alpha}\right)\right)}{4\alpha^{2}\sqrt{2\pi(n^{3}-n\alpha^{2})}},$
(3.38)
$\displaystyle\xi^{z}=\frac{c_{n}z^{n+1}}{2\sqrt{2\pi(n^{3}-n\alpha^{2})}}-\frac{\overline{c}_{n}z(z\overline{z})^{-\alpha}\left(2\alpha^{2}\overline{z}^{n}(z\overline{z})^{\alpha}+n^{2}\overline{z}^{n}\left(1-(z\overline{z})^{\alpha}\right)^{2}+n\alpha\overline{z}^{n}\left(1-(z\overline{z})^{2\alpha}\right)\right)}{4\alpha^{2}\sqrt{2\pi(n^{3}-n\alpha^{2})}}.$
The finite form of the deformed metric can be recovered by choosing
coordinates $w=z\exp\left(\frac{1}{z}\xi^{z}\right)$ where in $w$ coordinates
we have the metric (3.34) (compare to (3.27)), and we implicitly sum over
$c_{n},\overline{c}_{n}$ in $\xi^{z}$. We can now follow the same procedure to
find the shape of the cutoff in $w$ coordinates as we followed for the disk.
Following the curve $z=e^{i\theta}$ we find in $w$ coordinates
$w=e^{if(\theta)},\qquad f(\theta)=\theta+\sum_{n\geq
1}\frac{1}{2i\sqrt{2\pi(n^{3}-n\alpha^{2})}}\left(c_{n}e^{in\theta}-\overline{c}_{n}e^{-in\theta}\right).$
(3.39)
Fixing the induced metric along the cutoff surface, we again find the same
functional form with a defect present
$w\left(\theta\right)=e^{if(\theta)}\left(1-\delta
f^{\prime}(\theta)+\frac{\delta^{2}f^{\prime}(\theta)^{2}}{2}+\mathcal{O}(\delta^{3})\right)$.
Calculating the extrinsic curvature along this curve we find
$\displaystyle
K=1+\delta^{2}\left(\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3f^{\prime\prime
2}}{2f^{\prime 2}}+\frac{\alpha^{2}}{2}f^{\prime
2}\right)+\mathcal{O}(\delta^{3})=1+\delta^{2}\operatorname{Sch}\left(\tan\frac{\alpha
f(\theta)}{2},\theta\right).$ (3.40)
which is precisely the Schwarzian with a conical defect [52], and is again
one-loop exact. Restoring the temperature and the dilaton as in the case of the
disk, we find the action to quadratic order to be
$\displaystyle I_{\text{bdy}}$
$\displaystyle=-\frac{2\pi\gamma}{\beta}\int_{0}^{2\pi}d\theta\left(\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3f^{\prime\prime
2}}{2f^{\prime 2}}+\frac{\alpha^{2}}{2}f^{\prime 2}\right),$ (3.41)
$\displaystyle=I_{0}+\frac{\pi\gamma}{2\beta}\sum_{n\geq 1}n\,c_{n}\overline{c}_{n},\qquad
I_{0}=-\frac{2\pi^{2}\gamma}{\beta}\alpha^{2}.$
Integrating over the moduli space with (3.19), we now need to include the $n=1$
mode compared with the disk calculation
$\displaystyle Z_{\text{defect}}\left(\beta,\alpha\right)$
$\displaystyle=\prod_{n\geq 1}\mathcal{N}\int
dc_{n}d\overline{c}_{n}\left|\frac{\det\langle\mu,\phi\rangle}{\sqrt{\det\langle\phi,\phi\rangle}}\right|^{2}e^{-I_{\text{bdy}}}=e^{\frac{2\pi^{2}\gamma}{\beta}\alpha^{2}}\prod_{n\geq
1}\mathcal{N}\int
dc_{n}d\overline{c}_{n}\exp\left(-\frac{\pi\gamma}{2\beta}\sum_{n\geq
1}n\hskip 0.28436ptc_{n}\overline{c}_{n}\right)$ (3.42)
$\displaystyle=e^{\frac{2\pi^{2}\gamma}{\beta}\alpha^{2}}\prod_{n\geq
1}\mathcal{N}\frac{4\beta}{\gamma
n}=\sqrt{\frac{\gamma}{8\pi\mathcal{N}\beta}}e^{\frac{2\pi^{2}\gamma}{\beta}\alpha^{2}},$
where again the measure simplifies due to our normalization (3.35) as in the
case of the disk (3.33). This again matches the calculation of the partition
function with a single defect [52, 7].
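The regularized product is evaluated exactly as for the disk, now with the product starting at $n=1$:
$\prod_{n\geq 1}\frac{a}{n}=a^{\zeta(0)}e^{\zeta^{\prime}(0)}=\frac{a^{-1/2}}{\sqrt{2\pi}}=\sqrt{\frac{\gamma}{8\pi\mathcal{N}\beta}},\qquad a=\frac{4\mathcal{N}\beta}{\gamma}.$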
#### 3.2.3 Conical defect: $z$ coordinates
In the previous section we integrated over moduli space using $c_{n}$
coordinates which correspond to deformations of the metric by quadratic
differentials. In this section we will carry out the previous calculation
using a coordinate choice that is more physically intuitive, namely the
position $z_{i}$ of the defect on the disk. This choice of coordinates is
applicable to the $n=1$ mode of the Schwarzian which can be thought of as
moving the defect on the disk. To integrate over the moduli space with $z_{i}$
coordinates we must again work out both the measure and the action in (3.19),
which we now do.
The metric with a conical defect at a general position $z_{i}$ can be obtained
by applying an $\text{SL}(2,\mathbb{R})$ transformation to the metric with a
defect at the center
$ds^{2}=\frac{4\alpha^{2}|w(z)|^{2(\alpha-1)}}{(1-|w(z)|^{2\alpha})^{2}}|w^{\prime}(z)|^{2}dzd\bar{z}\,,\qquad
w(z)=\frac{z-z_{i}}{1-z\overline{z}_{i}},$ (3.43)
where in $w$ coordinates the defect is at $w=0$, but is at $z=z_{i}$ in $z$
coordinates. To work out the measure in (3.19) we need to know the Beltrami
differential $\mu$ which contains information on how the metric
infinitesimally changes as we move the defect $z_{i}\to z_{i}+\delta z_{i}$.
As explained in section 3.1.1, this can be obtained by finding a set of
coordinates $z^{\prime}(z,\bar{z})$ where the deformed metric is flat
$\propto|dz^{\prime}|^{2}$ and the defect is mapped to the new point
$z^{\prime}(z_{i})=z_{i}+\delta z_{i}$ with the asymptotic boundary
undeformed.\footnote{The coordinates $z^{\prime}$ should not affect the boundary
of the disk, since such a deformation would both move the defect and turn on
some additional Schwarzian modes, and we want to isolate the deformation that
moves the defect.} To extract the Beltrami differential we only need this coordinate chart
to linear order in $\delta z_{i}$, where it is given by
$z^{\prime}(z,\bar{z})=\frac{\delta
z_{i}\frac{1-z\bar{z}}{1-z_{i}\bar{z}_{i}}+z(1-z_{i}\bar{z})}{\delta
z_{i}\bar{z}_{i}\frac{1-z\bar{z}}{1-z_{i}\bar{z}_{i}}+(1-z_{i}\bar{z})}.$ (3.44)
The corresponding deformed line element is
$|dz^{\prime}|^{2}\propto|dz+\delta z_{i}\,\mu
d\bar{z}+\mathcal{O}((\delta
z_{i})^{2})|^{2},\qquad\mu=-\frac{(z-z_{i})(1-z\bar{z}_{i})}{(1-z_{i}\bar{z}_{i})(1-\bar{z}z_{i})^{2}}\,.$ (3.45)
Additionally, the quadratic differential that moves the defect when it is
located at $z=z_{i}$ is given by\footnote{It can be checked that
$\langle\mu,\phi_{n}\rangle=0$ for $n\geq 2$, implying that the only action of
$\mu$ is to move the defect.}
$\phi_{1}=\frac{1}{w}(\partial
w)^{2}dz^{2}=\frac{(1-z_{i}\bar{z}_{i})^{2}}{(z-z_{i})(1-z\bar{z}_{i})^{3}}dz^{2},\qquad\langle\phi_{1},\phi_{1}\rangle=\frac{2\pi}{1-\alpha^{2}}.$
(3.46)
This is precisely $\phi_{1}=\frac{1}{w}dw^{2}$ in $w$ coordinates, implying
the norm relation noted above. Using the Beltrami and quadratic differentials
we can compute the overlap
$\langle\mu,\phi_{1}\rangle=2\int
dzd\bar{z}\frac{1-|z_{i}|^{2}}{(1-z\bar{z}_{i})^{2}(1-z_{i}\bar{z})^{2}}=\frac{4\pi}{1-|z_{i}|^{2}}\,.$
(3.47)
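As a quick check, at $z_{i}=0$ the integrand reduces to one, and with the convention $\int d^{2}z=2\int d^{2}x$ the unit disk gives
$\langle\mu,\phi_{1}\rangle\big|_{z_{i}=0}=2\int_{|z|<1}d^{2}z=4\pi,$
in agreement with $\frac{4\pi}{1-|z_{i}|^{2}}$ evaluated at $z_{i}=0$.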
We can now immediately compute the Weil-Petersson measure (3.19) for the
$z_{i}$ coordinates that move the defect
$\mathcal{N}\int
d^{2}z_{i}\left|\frac{\langle\mu,\phi_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle}}\right|^{2}=8\pi\mathcal{N}\int
d^{2}z_{i}\frac{1-\alpha^{2}}{(1-|z_{i}|^{2})^{2}}=4\pi\mathcal{N}(1-\alpha^{2})\int
d^{2}z_{i}\sqrt{g}\,,$ (3.48)
where in the last equality we have noticed the measure takes the form of the
metric on the disk without a defect. We are now left to compute the action in
the $z_{i}$ coordinates. We follow the same procedure as in the previous
section: if we take the cutoff to be at $z=e^{i\theta}$ then in
$w=e^{if(\theta)}$ coordinates (3.43) we have
$w(e^{i\theta})=e^{i\theta}\frac{1-z_{i}e^{-i\theta}}{1-\bar{z}_{i}e^{i\theta}}\
\Rightarrow\
f(\theta)=\theta+i^{-1}\log\left(\frac{1-z_{i}e^{-i\theta}}{1-\bar{z}_{i}e^{i\theta}}\right)\,.$
(3.49)
Note that this reparameterization only takes into account the $n=1$ mode of
the Schwarzian. We previously worked out the action for $f(\theta)$ in (3.41),
giving
$\displaystyle I^{n=1}_{\text{bdy}}$
$\displaystyle=-\frac{2\pi\gamma}{\beta}\int_{0}^{2\pi}d\theta\left(\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3f^{\prime\prime
2}}{2f^{\prime 2}}+\frac{\alpha^{2}}{2}f^{\prime 2}\right),$
$\displaystyle=-\frac{2\pi\gamma}{\beta}\int_{0}^{2\pi}d\theta\frac{1}{2}\left[\alpha^{2}\frac{(1-|z_{i}|^{2})^{2}}{|1-z_{i}e^{-i\theta}|^{4}}-\frac{(z_{i}e^{-i\theta}+\bar{z}_{i}e^{i\theta}-2)(z_{i}e^{-i\theta}+\bar{z}_{i}e^{i\theta}-2|z_{i}|^{2})}{|1-z_{i}e^{-i\theta}|^{4}}\right],$
$\displaystyle=-\frac{2\pi^{2}\gamma}{\beta}\left(1-(1-\alpha^{2})\frac{1+|z_{i}|^{2}}{1-|z_{i}|^{2}}\right).$
(3.50)
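As a consistency check, placing the defect at the center reproduces the classical action of the previous section:
$I^{n=1}_{\text{bdy}}\big|_{z_{i}=0}=-\frac{2\pi^{2}\gamma}{\beta}\left(1-(1-\alpha^{2})\right)=-\frac{2\pi^{2}\gamma}{\beta}\alpha^{2}=I_{0},$
matching (3.41) with all $c_{n}$ set to zero.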
We can now write the full answer using (3.19), (3.48), and (3.50). Including
all the $n\geq 2$ modes of the Schwarzian we get
$Z_{\text{defect}}(\beta,\alpha)=4\pi\mathcal{N}\int
d^{2}z_{i}\frac{\left(1-\alpha^{2}\right)}{\left(1-|z_{i}|^{2}\right)^{2}}e^{-I^{n=1}_{\text{bdy}}}\times\prod_{n\geq
2}\mathcal{N}\frac{4\beta}{\gamma
n}=e^{\frac{2\pi^{2}\gamma}{\beta}\alpha^{2}}\prod_{n\geq
1}\mathcal{N}\frac{4\beta}{\gamma n},$ (3.51)
which is the expected answer (3.42). From the above we see that when
integrating over the position of the defect we pick up a measure factor
proportional to $(1-\alpha^{2})$ along with an action proportional to the
same. In the limit $\alpha\to 1$ the defect becomes very blunt and one expects
that it does not backreact strongly on the geometry, so that semiclassical
methods can be applied. However, we see that the path integral measure is also
becoming important in this limit, and must be taken into account.
We can rewrite the integral over $z_{i}$ in a way that connects it to the
semiclassical calculation of a single defect on the disk. The contribution of
the $n=1$ mode of the Schwarzian is
$Z^{n=1}_{\text{defect}}(\beta,\alpha)=4\pi\mathcal{N}(1-\alpha^{2})\int
d^{2}z_{i}\sqrt{g(z_{i})}e^{-\pi(1-\alpha^{2})\phi_{\text{cl.}}(z_{i})},\qquad\phi_{\text{cl.}}(z_{i})=\frac{2\pi\gamma}{\beta}\frac{1+|z_{i}|^{2}}{1-|z_{i}|^{2}},$
(3.52)
where in the above $\sqrt{g}$ is the metric on the hyperbolic disk without a
defect and $\phi_{\text{cl.}}$ is the classical solution for the dilaton on
the disk. It has previously been observed that a semiclassical calculation
[17, 18] of the disk with a single defect seems to agree with the full
Schwarzian calculation if an appropriate measure factor is inserted for the
integral over the dilaton. The semiclassical calculation is given by
$\lim_{\alpha\to 1}Z_{\text{defect}}^{\text{semiclassical}}\approx
2\pi\left(1-\alpha\right)\int
d^{2}z\sqrt{g}e^{-2\pi\left(1-\alpha\right)\phi_{\text{cl.}}(z)},$ (3.53)
where we approximate the integral over the dilaton by its classical value,
and a measure factor of $2\pi\left(1-\alpha\right)$ is inserted by hand to get
agreement with the Schwarzian. Comparing with (3.52) we see why this
semiclassical calculation works: the integral over the position cancels out the
measure in both calculations, giving the same answer.
#### 3.2.4 General surfaces
In this section we will give a general argument that the correct form of the
path integral measure for a conical defect in the limit that the angle becomes
blunt is given by
$\lim_{\alpha\to 1}\mathcal{N}\int
d^{2}z\left|\frac{\langle\mu,\phi_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle}}\right|^{2}=2\pi(1-\alpha)\int
d^{2}z\sqrt{g(z)}+\mathcal{O}\left(\left(1-\alpha\right)^{2}\right),$ (3.54)
where we have inserted $\mathcal{N}=\frac{1}{4}$ to normalize the operator
with respect to the standard gluing measure $\int bdb$. Consider a surface
$\Sigma$ with at least one conical defect\footnote{For simplicity, we temporarily do not
consider geodesic boundaries.} located at $z=z_{i}$. The quadratic
differential that moves this defect has a simple pole at $z_{i}$
$\phi_{1}=\frac{1}{z-z_{i}}dz^{2}+\ldots\,,$ (3.55)
where the subleading terms are holomorphic on $\Sigma$.\footnote{We can also
include a residue at the pole, as we did for the disk in (3.46) and (3.47),
but it will cancel out of the path integral measure.} In principle we need to
know the quadratic differential on the entire surface to calculate the
measure, but in the blunt defect limit $\alpha\to 1$ we will argue that the
singular behavior of the quadratic differential dominates.
To calculate the inner products we also need the metric on the surface
$ds^{2}=e^{2\omega}dzd\bar{z}$ where we are in local coordinates $z$ near the
defect. Demanding that the opening angle is $2\pi\alpha$ at $z_{i}$ the metric
is given by solving Liouville’s equation\footnote{We are solving for the
constraint
$\frac{1}{2}\sqrt{g}\left(R+2\right)=2\pi(1-\alpha)\delta^{2}(z-z_{i})$ and we
have used that $R=-2e^{-2\omega}\hat{\nabla}^{2}\omega+e^{-2\omega}\hat{R}$
for a Weyl transformed metric $g=e^{2\omega}\hat{g}$.} with a source
$-2\overline{\partial}\partial{\omega}+\frac{1}{2}e^{2\omega}=2\pi(1-\alpha)\delta^{2}(z-z_{i})\,.$
(3.56)
Solving for the metric perturbatively in $(1-\alpha)$ we find
$\omega=\omega_{0}-\frac{1}{2}(1-\alpha)\big{[}\log|z-z_{i}|^{2}+r\big{]}+\mathcal{O}\left(\left(1-\alpha\right)^{2}\right)\,,$
(3.57)
where $\omega_{0}$ is the Weyl factor for the surface $\Sigma$ without the
defect. The linear order term in $(1-\alpha)$ is split up into a logarithm
which gives the delta function singularity, along with a smooth function
$r$ that satisfies $\overline{\partial}\partial
r=e^{2\omega_{0}}\log|z-z_{i}|$.
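Indeed, the logarithmic term alone produces the required delta-function source, since
$-2\overline{\partial}\partial\left(-\tfrac{1}{2}(1-\alpha)\log|z-z_{i}|^{2}\right)=(1-\alpha)\overline{\partial}\partial\log|z-z_{i}|^{2}=2\pi(1-\alpha)\delta^{2}(z-z_{i}),$
while $r$ accounts for the smooth backreaction at the same order.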
One final part we need for the calculation of the measure is the Beltrami
differential $\mu$ that corresponds to infinitesimally moving the defect from
$z_{i}\to z_{i}+\delta z_{i}$. As explained in the case of the disk, the
Beltrami differential can be extracted from a coordinate chart
$z^{\prime}=z+\delta z_{i}\,v(z,\bar{z})+\mathcal{O}((\delta
z_{i})^{2})$ such that $z^{\prime}(z_{i})=z_{i}+\delta z_{i}$, which implies
that $v(z_{i},\bar{z}_{i})=1$. We only need this coordinate chart to linear
order in $\delta z_{i}$. The Beltrami differential is then given by
$\mu=\overline{\partial}v(z,\bar{z})$. We will also demand that
$v|_{\text{bdy}}=0$ which is the condition that we are infinitesimally moving
the defect but not changing any asymptotically AdS boundaries, which will
allow us to integrate by parts.
We can now calculate all the components of the path integral measure for
moving the defect position in $z_{i}$ coordinates:
$\langle\mu,\phi_{1}\rangle=\int_{\Sigma}d^{2}z\,\overline{\partial}v\left(\frac{1}{z-z_{i}}+\text{holo.}\right)=2\pi,$
(3.58)
where we have integrated by parts and assumed all boundary terms
vanish,\footnote{This can be compared to the calculation performed on the disk,
where we obtain the same answer by performing the full integral (3.47).} used
that $\phi_{1}$ is holomorphic away from the defect, and that
$v(z_{i},\bar{z}_{i})=1$.\footnote{In our conventions
$\overline{\partial}z^{-1}=2\pi\delta^{2}(z)$ and $\int
d^{2}z\delta^{2}(z)=1$.} Computing the inner product of the quadratic
differential we find
$\displaystyle\langle\phi_{1},\phi_{1}\rangle$
$\displaystyle=2\int_{\Sigma}d^{2}ze^{-2\omega}\left|\frac{1}{z-z_{i}}+\ldots\right|^{2}=2\int_{\Sigma}d^{2}ze^{-2\omega_{0}}\frac{|z-z_{i}|^{2(1-\alpha)}}{|z-z_{i}|^{2}}+\text{less
singular},$ (3.59) $\displaystyle\mathrel{\mathop{=}\limits_{\alpha\to
1}}-\frac{4\pi
e^{-2\omega_{0}(z_{i})}}{(1-\alpha)}+\mathcal{O}(1)=-\frac{2\pi}{(1-\alpha)\sqrt{g(z_{i})}}+\mathcal{O}(1),$
where in the last line we take $\alpha\to 1$ and localize onto the most
singular part of the integral near $z=z_{i}$,\footnote{The integral must be
performed with a radial cutoff around $z_{i}$, after which the leading order
answer in $(1-\alpha)$ is cutoff independent.} and in the second equality we
notice that we have picked up a factor of the metric $\sqrt{g}$ with the
defect removed from the surface, evaluated at $z_{i}$.
together, we find that the Weil-Petersson measure (3.19) for the integral over
the defect position in the blunt angle limit is given by
$\lim_{\alpha\to 1}\mathcal{N}\int
d^{2}z_{i}\left|\frac{\langle\mu,\phi_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle}}\right|^{2}=2\pi\left(1-\alpha\right)\int
d^{2}z_{i}\sqrt{g(z_{i})}+\mathcal{O}\left((1-\alpha)^{2}\right).$ (3.60)
Note that on the right $g(z_{i})$ is the metric on the surface without the
conical defect. From the above we immediately have that the dilaton-gravity
operator that inserts a conical defect in the blunt defect limit is given by
$\lim_{\alpha\to 1}\mathcal{V}_{\alpha}=2\pi(1-\alpha)\int
d^{2}z_{i}\sqrt{g(z_{i})}e^{-2\pi(1-\alpha)\Phi(z_{i})}+\mathcal{O}\left((1-\alpha)^{2}\right).$
(3.61)
#### Recursion relation for WP volumes
This result also allows us to give a gravitational path integral argument for
a recursion relation of Weil-Petersson volumes with blunt defects derived in
[20]. Consider the Weil-Petersson volume of surfaces $\Sigma$ of genus $g$
with $m$ geodesic boundaries of lengths $\vec{b}_{m}=(b_{1},\ldots,b_{m})$ and
$n+1$ conical defects of opening angles $2\pi\alpha_{i}$ with
$\vec{\alpha}_{n+1}=(\alpha_{1},\ldots,\alpha_{n},\alpha_{n+1})$. In the limit
where one of the defects becomes blunt $\alpha_{n+1}\to 1$ [20] proved the
following relation
$\frac{dV_{g,m,n+1}\left(\vec{\alpha}_{n+1},\vec{b}_{m}\right)}{d\alpha_{n+1}}\bigg{\rvert}_{\alpha_{n+1}=1}=-2\pi|\Sigma|V_{g,m,n}\left(\vec{\alpha}_{n},\vec{b}_{m}\right),$
(3.62)
where $|\Sigma|$ is the hyperbolic area of the surface with $n$ defects
satisfying
$|{\Sigma}|=-2\pi\left(2-2g-m-\sum_{i=1}^{n}(1-\alpha_{i})\right)=-\frac{1}{2}\int_{\Sigma/\{x_{i}\}}\sqrt{g}R.$
(3.63)
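As an orienting example, a closed genus-two surface with no boundaries or defects has
$|\Sigma|=-2\pi\left(2-2\cdot 2\right)=4\pi,$
consistent with Gauss-Bonnet for a hyperbolic metric with $R=-2$.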
We can prove this formula by decomposing the volume into an integral over a
coordinate parameterizing the position of the $\alpha_{n+1}$ defect, and all
the other moduli of the surface. Working near the blunt defect limit the
integral over the defect takes the simplified form (3.60)
$V_{g,m,n+1}\left(\vec{\alpha}_{n+1},\vec{b}_{m}\right)=\int
d\left(\text{other
moduli}\right)\times\left(2\pi(1-\alpha_{n+1})\int_{\Sigma/\{z_{j}\}}d^{2}z\sqrt{g}+\mathcal{O}\left((1-\alpha_{n+1})^{2}\right)\right),$
(3.64)
where the integral over the “other moduli” takes the form of the measure (3.19)
with various determinants of beltrami and quadratic differentials. The measure
for the other moduli implicitly depends on $\alpha_{n+1}$ through the
appearance of the defect metric in the inner products defining the measure.
However, as we take $\alpha_{n+1}\to 1$ this dependence goes away since the
surface no longer has a defect. Therefore, in the limit that the defect
vanishes the integral over the other moduli becomes the Weil-Petersson volume
of the surface without the $\alpha_{n+1}$ defect. Another important point, as
explained around (3.60), is that the metric $\sqrt{g}$ appearing in the above
is the metric for the surface without the $\alpha_{n+1}$ defect. We have also
excluded the integral over the points $z_{j}$ where the other
$\vec{\alpha}_{n}$ defects are located, as these configurations are at the
boundary of moduli space.\footnote{This exclusion is necessary to reproduce the
relation (3.62), since the Euler characteristic excludes these points.} Taking a
derivative we immediately find
$\frac{dV_{g,m,n+1}\left(\vec{\alpha}_{n+1},\vec{b}_{m}\right)}{d\alpha_{n+1}}\bigg{\rvert}_{\alpha_{n+1}=1}=\left(-2\pi\int_{\Sigma/\{z_{j}\}}d^{2}z\sqrt{g}\right)\times
V_{g,m,n}\left(\vec{\alpha}_{n},\vec{b}_{m}\right),$ (3.65)
which is the desired recursion relation (3.62). Note that this argument also
goes through if we replace the measure for the defect with
$\pi\left(1-\alpha^{2}\right)\int d^{2}z\sqrt{g}$, as we found for the disk.
From the above argument it might be suspected that the volumes also satisfy
$\lim_{\alpha_{n}\to
1}V_{g,n,m}\left(\vec{\alpha}_{n},\vec{b}_{m}\right)\stackrel{{\scriptstyle?}}{{=}}0$.
In [20] this was shown to be true when there are no geodesic boundaries, but
is false when such boundaries are present. We do not have a gravity path
integral argument for this. One possibility is that with geodesic boundaries
there are additional boundary terms that enter into the measure through
(3.58), where we assumed that all boundary terms vanished.
## 4 Discussion
In this paper we studied various unresolved aspects of the gravitational path
integral of JT gravity. We carried out the gauge fixing of the path integral
in the second order formalism for general hyperbolic surfaces with asymptotic
boundaries and conical singularities. The second order formalism allowed us to
clarify the procedure for calculating the proper measure for the conical
defect operator, and resolved the question of which dilaton gravity potential
should be used for JT gravity coupled to a gas of conical defects [17, 18, 19,
20]. This also allowed us to give a gravity path integral argument for certain
recursion relations of Weil-Petersson volumes derived using algebraic geometry
techniques [20]. An open problem is to carry out the full computation of the
measure for the conical defect operator to all orders in the $(1-\alpha)$
expansion on a general surface, to prove the conjectured form given in equation
(1.9), which we were only able to fully compute on the disk topology.\footnote{As
explained in the introduction, our reasoning for extending the normalization
of the operator found on the disk to arbitrary surfaces is that the operator
should be surface independent.}
Along the way we computed determinants of Laplace operators on hyperbolic
surfaces with asymptotic boundaries. These determinants are straightforwardly
related to partition functions of matter fields minimally coupled to JT
gravity. It would be interesting to better understand matter fields coupled to
JT with a gas of conical defects, where the bulk geometry would not be pure
AdS2. Tangentially, we computed the determinant of the vector Laplacian
$\det(\Delta_{1}+s(s-1))$ on a cone geometry. This can be related to the bulk
entanglement entropy of a gauge field on one side of the TFD. Since gauge
fields in two dimensions have no propagating degrees of freedom a non-trivial
entanglement entropy should arise from edge modes, and it would be interesting
to understand this better in AdS2.\footnote{We thank Sean Colin-Ellerin for
discussion on this point.}
We obtained an exact expression for the two-point function of matter fields on
an arbitrary surface $\Sigma$ obtainable through the quotient method. The
correlator takes into account all geodesics on the surface including self-
intersecting geodesics, but does not include an integration over the
Schwarzian mode. In [53] diagrammatic rules were derived for correlation
functions, including the appearance of $6j$ symbols when geodesics intersect,
but this has yet to be derived from the second order perspective. It would be
interesting to integrate over the boundary fluctuations and get a closed form
expression for the full correlator reproducing the expected $6j$ symbols. This
would also allow one to incorporate the contribution of self-intersecting
geodesics to the late time two-point function calculation on the handle-disk,
where it was argued in [25] that such contributions should decay with time.
### Acknowledgments
We thank David Borthwick, Sean Colin-Ellerin, Luca Iliesiu, Geoff Penington,
and Joaquin Turiaci for discussion. MU is supported in part by the NSF
Graduate Research Fellowship Program under grant DGE1752814, by the National
Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135, by the
Berkeley Center for Theoretical Physics, by the DOE under award DE-SC0019380
and under the contract DE-AC02-05CH11231, by NSF grant PHY1820912, by the
Heising-Simons Foundation, the Simons Foundation, and National Science
Foundation Grant No. NSF PHY-1748958.
## Appendix A Consistency of disk measure with general surface
In this section we explain how the exact calculation for a disk with a single
defect is consistent with the measure derived for a general surface. Recall
that we found the measure for the defect to be
$\mathcal{N}\int
d^{2}z_{i}\left|\frac{\langle\mu,\phi_{1}\rangle}{\sqrt{\langle\phi_{1},\phi_{1}\rangle}}\right|^{2}=\int
d^{2}z_{i}\ \times\ \begin{cases}\pi\left(1-\alpha^{2}\right)\sqrt{g(z_{i})},&\text{disk,}\\
2\pi\left(1-\alpha\right)\sqrt{g(z_{i})}+\mathcal{O}\left(\left(1-\alpha\right)^{2}\right),&\text{general
surface}.\end{cases}$
The point is that since the disk calculation is exact we implicitly re-summed
the $\left(1-\alpha\right)$ corrections to the measure. We now make this
explicit in the case that the defect is at the center. The tower of
$\left(1-\alpha\right)$ corrections comes from the inner product for the
quadratic differential
$\langle\phi_{1},\phi_{1}\rangle_{\text{disk}}=\frac{4\pi}{1-\alpha^{2}}\frac{1}{\sqrt{g(0)}},\qquad\langle\phi_{1},\phi_{1}\rangle_{\text{general}}\mathrel{\mathop{=}\limits_{\alpha\to
1}}\frac{2\pi}{\left(1-\alpha\right)\sqrt{g(0)}}+\mathcal{O}\left(1\right).$
(A.1)
We can express the metric for the disk with a conical defect in the same form
we used for a general surface
$ds^{2}=e^{2\omega}dzd\bar{z},\qquad\omega=\omega_{0}+\log\left(|z|^{\alpha-1}\right)-\log\left(\frac{1-|z|^{2\alpha}}{\alpha^{2}(1-|z|^{2})}\right),\qquad\omega_{0}=\log\left(\frac{2}{1-|z|^{2}}\right),$
(A.2)
where $\omega_{0}$ is the Weyl factor for a disk without a defect. We can now
calculate the inner product on the disk, doing a series expansion in
$(1-\alpha)$ for the last term in the Weyl factor
$\displaystyle\langle\phi_{1},\phi_{1}\rangle_{\text{disk}}$
$\displaystyle=\int
d^{2}z\left(\frac{e^{2\omega_{0}}}{2}\right)^{-1}\frac{|z|^{2(1-\alpha)}}{|z|^{2}}\times\underbrace{\frac{(1-|z|^{2})^{2}}{\alpha^{2}(1-|z|^{2\alpha})^{2}}}_{\text{expand}},$
(A.3)
$\displaystyle=\frac{2\pi}{(1-\alpha)\sqrt{g(0)}}+\frac{\pi}{\sqrt{g(0)}}+\mathcal{O}\left(1-\alpha\right),$
(A.4) $\displaystyle\stackrel{\text{re-sum}}{=}\frac{4\pi}{(1-\alpha^{2})\sqrt{g(0)}}.$ (A.5)
This is in agreement with the expression for general surfaces in the
$\left(1-\alpha\right)$ expansion, and the re-summation of the series
reproduces the full disk answer (A.1).
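Explicitly, the exact disk answer reproduces the two displayed orders: with $\epsilon=1-\alpha$,
$\frac{4\pi}{1-\alpha^{2}}=\frac{4\pi}{\epsilon(2-\epsilon)}=\frac{2\pi}{\epsilon}\left(1+\frac{\epsilon}{2}+\mathcal{O}(\epsilon^{2})\right)=\frac{2\pi}{1-\alpha}+\pi+\mathcal{O}(1-\alpha).$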
## Appendix B Double trumpet gluing measure
The Weil-Petersson measure for gluing two geodesic boundaries is famously
given by $\int dbd\tau$ [54, 55], where $b\in[0,\infty)$ is the length of the
geodesic boundary and $\tau\in[0,b]$ is a relative twist between the two
boundaries being glued. In this appendix we will review how this measure can
be derived using Beltrami differentials.
##### The Beltrami differential.
We will derive the gluing measure for the double trumpet geometry, which can be
represented by the quotient of the UHP with metric
$ds^{2}=\frac{dzd\overline{z}}{\left(\operatorname{Im}z\right)^{2}}$, where
the geometry is given by $\mathbb{H}/\langle T_{b}\rangle$ with
$T_{b}\cdot z=e^{b}z$. The fundamental domain $\mathcal{F}$ is the region
between the two semicircles $r=1$ and $r=e^{b}$. The main idea is to
explicitly find the two quasiconformal deformations that infinitesimally
deform the geodesic length $b$ and twist $\tau$, and calculate the measure
from equation (3.19). The transformation that infinitesimally deforms the
length $b\to b+\epsilon$ is given by the map
$z\rightarrow f_{b}(z,\bar{z})=z(z\bar{z})^{\frac{\epsilon}{2b}}\,.$ (B.1)
This map satisfies $\left|f_{b}({|z|{=}e^{mb}})\right|=e^{m(b+\epsilon)}$,
meaning that semicircles with radius $e^{mb}$ are mapped to semicircles with
radius $e^{m(b+\epsilon)}$. Thus this transformation increases the geodesic
throat size of the double trumpet. For the twist the appropriate deformation
is given by
$z\rightarrow
f_{\tau}(z,\bar{z})=ze^{\epsilon\Phi(\theta)}\,,\qquad\Phi(0)=0,~{}~{}\Phi(\pi)=1\,.$
(B.2)
In the above $\Phi(\theta)$ is an arbitrary smooth function of the angle
$z=re^{i\theta}$. The properties of $\Phi$ ensure that the left boundary of
the double trumpet is smoothly infinitesimally twisted relative to the right
boundary. For example, a point on the left boundary $z=-1$ is sent to
$z=-(1+\epsilon)$. These maps are known as quasiconformal
transformations.\footnote{Quasiconformal transformations satisfy
$|\bar{\partial}f(z,\bar{z})|<|\partial f(z,\bar{z})|$. A conformal
transformation can be viewed as a special class of quasiconformal
transformation with $\bar{\partial}f=0$. Geometrically, a conformal map
preserves angles and maps small circles to circles, while a quasiconformal map
does not preserve angles and maps small circles to ovals. Roughly speaking,
the eccentricity of the ovals is given by the ratio $\bar{\partial}f/\partial
f$. The transformation is known as an infinitesimally quasiconformal
transformation if this ratio can be made infinitesimal.} Such transformations
infinitesimally deform the metric in moduli space, since the new metric is no
longer Weyl equivalent to the original metric.
The Beltrami differential is then defined to be
$\mu=\frac{\bar{\partial}f}{\partial f}$.\footnote{Note that the definition of
$\mu$ here is the complex conjugate of the definition from most of the math
literature.} Taking $\epsilon$ to be infinitesimal we can define an
infinitesimal Beltrami differential $\hat{\mu}=\lim_{\epsilon\rightarrow 0}\mu/\epsilon$,
which through an abuse of notation is the definition of Beltrami differentials
used in the main text. As explained in the main text, these infinitesimal
Beltrami differentials capture the infinitesimal change in the metric as we
deform the moduli. We can directly calculate $\hat{\mu}$ from (B.1) and (B.2)
and find
$\hat{\mu}_{\tau}=\frac{iz}{2\bar{z}}\Phi^{\prime}(\theta)dz^{-1}d\bar{z}\,,\quad\hat{\mu}_{b}=\frac{z}{2b\bar{z}}dz^{-1}d\bar{z}\,,$
(B.3)
where we have emphasized that these differentials should be thought of as
$(-1,1)$ tensors. It can however be the case that a Beltrami differential does
not correspond to a pure deformation of the moduli, for example it can deform
the metric by a Weyl transformation. It is thus customary to act on Beltrami
differentials by a projection operator that restricts them to pure moduli
deformations. This does not affect the calculation of the path integral
measure, but ensures that the differentials live in the tangent space to
moduli space. The projection was originally given by [56]
$P[\hat{\mu}](z)\equiv\frac{6}{\pi}(\operatorname{Im}z)^{2}\int_{\mathbb{H}}d^{2}\xi\frac{\overline{\hat{\mu}}(\xi,\bar{\xi})}{({\xi}-\bar{z})^{4}}\,.$
(B.4)
This gives us the projected differentials\footnote{Note that $P[\hat{\mu}_{\tau}]$
is independent of $\Phi^{\prime}(\theta)$, which shows that the apparent
entire function’s worth of degrees of freedom in $\Phi$ does not correspond to
genuine moduli deformations.}
$P[\hat{\mu}_{b}]=\frac{(\text{Im}z)^{2}}{{z}^{2}b}dz^{-1}d\bar{z}\,,\quad
P[\hat{\mu}_{\tau}]=\frac{i(\text{Im}z)^{2}}{\pi{z}^{2}}dz^{-1}d\bar{z}.$
(B.5)
One remarkable property of this projection is
$\langle\hat{\mu},\phi\rangle=\langle P[\hat{\mu}],\phi\rangle$ for any
quadratic differential $\phi$, and since we are primarily interested in inner
products we use the same notation $\hat{\mu}$ to represent the Beltrami and
its projection below. This identity is the statement that the operator
projects out the subset of deformations in $\hat{\mu}$ that do not deform the
moduli.
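As a short derivation sketch for the length deformation in (B.3), expanding (B.1) to linear order in $\epsilon$ gives
$\overline{\partial}f_{b}=\frac{\epsilon}{2b}\frac{z}{\bar{z}}(z\bar{z})^{\frac{\epsilon}{2b}},\qquad\partial f_{b}=\left(1+\frac{\epsilon}{2b}\right)(z\bar{z})^{\frac{\epsilon}{2b}},\qquad\hat{\mu}_{b}=\lim_{\epsilon\to 0}\frac{1}{\epsilon}\frac{\overline{\partial}f_{b}}{\partial f_{b}}=\frac{z}{2b\bar{z}},$
in agreement with (B.3).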
##### Quadratic differential.
To compute the metric we also need the quadratic differentials, which are
thought of as forming the cotangent space to moduli space. These differentials
are quite challenging to compute and were found by Wolpert [55]; we quote the
final result\footnote{Note that our convention here is a bit different from
Wolpert’s convention. The differences are (1) our definition of the
$\langle\mu,\phi\rangle$ inner product is twice Wolpert’s, and (2) the
integral measure $d^{2}z$ on the Riemann surface is twice the Euclidean measure
$dxdy$ used by Wolpert.}
$\phi_{b}=\frac{1}{\pi
z^{2}}dz^{2}\,,\qquad\phi_{\tau}=\frac{i}{bz^{2}}dz^{2}.$ (B.6)
They have the following inner products with the Beltrami differentials
$\mu_{b},\mu_{\tau}$\footnote{We slightly change the definition of the inner
product by taking the real part, since both $db$ and $d\tau$ are real
variables.}
$\langle\mu_{b},\phi_{b}\rangle\equiv
2\text{Re}\int_{\mathcal{F}}\mu_{b}\bar{\phi}_{b}=\frac{2}{\pi
b}\int_{\mathcal{F}}\frac{(\text{Im}\xi)^{2}}{\xi^{2}\bar{\xi}^{2}}d^{2}\xi=2\,,\qquad\langle\mu_{\tau},\phi_{\tau}\rangle=2,$
(B.7)
where we have used the inner product in (3.16). It is straightforward to
check that the off-diagonal inner products vanish
$\langle\mu_{\tau},\phi_{b}\rangle=0$ because of the factor $i$ in the
$\mu_{\tau}$ expression.
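For completeness, the integral in (B.7) can be evaluated in polar coordinates $\xi=re^{i\theta}$ on the fundamental domain $r\in[1,e^{b}]$, $\theta\in[0,\pi]$, with $d^{2}\xi=2r\,dr\,d\theta$:
$\int_{\mathcal{F}}\frac{(\operatorname{Im}\xi)^{2}}{\xi^{2}\bar{\xi}^{2}}d^{2}\xi=2\int_{1}^{e^{b}}\frac{dr}{r}\int_{0}^{\pi}\sin^{2}\theta\,d\theta=2\cdot b\cdot\frac{\pi}{2}=\pi b,$
so that indeed $\langle\mu_{b},\phi_{b}\rangle=\frac{2}{\pi b}\cdot\pi b=2$.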
##### The Weil-Petersson metric.
Given these results the Weil-Petersson metric for the moduli space is
straightforward to obtain. From the discussion in Section 3, we need to
evaluate several determinants. It is straightforward to calculate
$\det\langle\mu,\phi\rangle=\det\begin{pmatrix}\langle\mu_{b},\phi_{b}\rangle&\langle\mu_{b},\phi_{\tau}\rangle\\
\langle\mu_{\tau},\phi_{b}\rangle&\langle\mu_{\tau},\phi_{\tau}\rangle\end{pmatrix}=\det\begin{pmatrix}2&0\\
0&2\end{pmatrix}=4\,.$ (B.8)
$\det\langle\phi,\phi\rangle=\det\begin{pmatrix}\langle\phi_{b},\phi_{b}\rangle&\langle\phi_{b},\phi_{\tau}\rangle\\
\langle\phi_{\tau},\phi_{b}\rangle&\langle\phi_{\tau},\phi_{\tau}\rangle\end{pmatrix}=\det\begin{pmatrix}2b&0\\
0&2b^{-1}\end{pmatrix}=4\,.$ (B.9)
The measure is given by equation (3.19)
$d\mu_{\text{DT}}=\mathcal{N}\frac{\det\langle\mu,\phi\rangle\det\langle\bar{\mu},\bar{\phi}\rangle}{\det\langle\phi,\phi\rangle}db\,d\tau=4\mathcal{N}db\,d\tau\,,$
(B.10)
where the integral ranges over $\tau\in[0,b]$ and $b\in[0,\infty)$. To get the
standard form of the gluing measure $\int bdb$ we must choose the
normalization $\mathcal{N}=1/4$ for the measure (3.3) as claimed in the main
text.
## Appendix C The determinant calculation
### Notation, conventions, and summary of results
We first list the notation and main results of the determinant calculation. We
denote the surface obtained by a quotient of the Fuchsian group $\Gamma$ as
$\Sigma=\mathbb{H}/\Gamma$. The fundamental domain is denoted by
$\mathcal{F}_{\Sigma}$ which is thought of as a subregion of $\mathbb{H}$. We
will sometimes not distinguish between $\mathcal{F}_{\Sigma}$ and $\Sigma$ for
simplicity. We define the Fuchsian group action to be $T(Z)$ where $T$ is a
Fuchsian group element and $Z$ will either be a subregion or a point in
$\mathbb{H}$.
An important geometrical invariant of the surface $\Sigma$ is the hyperbolic
area
$|\Sigma|=-\left(\frac{1}{2}\int_{\Sigma/\{z_{i}\}}d^{2}z\sqrt{g}R+\int_{\partial\Sigma}\sqrt{h}K\right)=2\pi\bigg(2g+n-2+\sum_{j=1}^{k}\left(1-n_{j}^{-1}\right)\bigg),$
(C.1)
where $g$ is the genus, $n$ is the number of boundaries (including cusps and
asymptotic boundaries), and $k$ is the number of defects located at positions
$z_{i}$ with deficit angles $\frac{2\pi}{n_{j}}$. For simplicity we exclude
surfaces with geodesic boundaries and cusps ($2\pi$ deficit angles) from the
determinant calculation. The area element of $\mathbb{H}$ is
$d^{2}z\sqrt{g}=y^{-2}dxdy$, and the $u$ variable for hyperbolic distance
$u=\cosh^{2}\left(\frac{\ell(z,z^{\prime})}{2}\right)=\frac{1}{2}\left(1+\cosh(\ell(z,z^{\prime}))\right)=\frac{(x-x^{\prime})^{2}+(y+y^{\prime})^{2}}{4yy^{\prime}}\,,$
(C.2)
where our complex coordinates are given by $z=x+iy$.
We will be interested in the scalar and vector Laplacians. The scalar
Laplacian is defined as $-g^{ab}\nabla_{a}\nabla_{b}$ acting on scalar
functions. Similarly, the vector Laplacian is
$\nabla_{1}^{2}=-g^{ab}\nabla_{a}\nabla_{b}$ acting on two component vectors
or covectors, and was introduced in the main text through
$\nabla_{1}^{2}=\frac{1}{2}P_{1}^{\dagger}P_{1}-1$. It will be more convenient
to conjugate the vector Laplacian and consider
$y^{-2}\nabla_{1}^{2}y^{2}+s(s-1)-1=\text{diag}\left(D_{-1}+s(s-1),D_{1}+s(s-1)\right)$
where we define the operators
$D_{\pm 1}=-\left(\partial_{x}\pm
i\partial_{y}\right)y^{2}\left(\partial_{x}\mp i\partial_{y}\right),$ (C.3)
where the conjugated operators act on vectors with top component
$v^{x}+iv^{y}$ and bottom component $v^{x}-iv^{y}$. Conjugated operators have
the same spectrum, and so we can equivalently compute the determinant of the
above operators. The spectra of $D_{\pm 1}$ are identical, and so we find the
result that $\det(\Delta_{1}+1)=\det\left(D_{1}+2\right)^{2}$. The above
immediately implies that
$\det(\Delta_{1}+s(s-1)-1)=\det\left(D_{1}+s(s-1)\right)^{2}$. We will abuse
notation in this appendix and refer to $D_{\pm 1}$ as the vector Laplacian.
In the calculation of the determinant we will need the resolvent of the
associated operators, also known as the propagator, denoted by
$R^{0}_{\Sigma}$ and $R^{1}_{\Sigma}$, and their associated traces over the
surface $\operatorname{tr}R_{\Sigma}=\int_{\Sigma}d^{2}z\sqrt{g}R_{\Sigma}(z,z)$.
Note that the operator, the resolvent, and the trace are defined for a
specific surface $\Sigma$. The trace is an integral over the fundamental
domain $\mathcal{F}_{\Sigma}\subset\mathbb{H}$.
The main result for the determinants is given as follows\footnote{One can also
allow for $n_{c}$ cusps on the surface, which give a parabolic contribution
$Z_{\rm par.}=\left(\Gamma(s-1/2)(2s-1)2^{s-1}\right)^{n_{c}}$. With the
inclusion of cusps, the scalar determinant would include $Z_{\rm par.}$,
namely $\det\left(\ldots\right)\to\det\left(\ldots\right)\times Z_{\rm par.}$
[36]. We believe a similar formula holds for the vector determinant.}
$\det\left({\Delta}_{0}+s(s-1)\right)={Z_{\text{hyp.}}(s)}{Z_{\text{ell.}}(s)}G_{\infty}(s)^{-\frac{|\Sigma|}{2\pi}}e^{-Bs(s-1)+D}\,.$
(C.4)
PHENIX Collaboration
# Nonprompt direct-photon production in Au$+$Au collisions at
$\sqrt{s_{{}_{NN}}}=200$ GeV
U.A. Acharya Georgia State University, Atlanta, Georgia 30303, USA A. Adare
University of Colorado, Boulder, Colorado 80309, USA C. Aidala Department of
Physics, University of Michigan, Ann Arbor, Michigan 48109-1040, USA N.N.
Ajitanand Deceased Chemistry Department, Stony Brook University, SUNY, Stony
Brook, New York 11794-3400, USA Y. Akiba<EMAIL_ADDRESS>RIKEN
Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan
RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New York
11973-5000, USA M. Alfred Department of Physics and Astronomy, Howard
University, Washington, DC 20059, USA N. Apadula Iowa State University,
Ames, Iowa 50011, USA Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA H. Asano Kyoto
University, Kyoto 606-8502, Japan RIKEN Nishina Center for Accelerator-Based
Science, Wako, Saitama 351-0198, Japan B. Azmoun Physics Department,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA V. Babintsev
IHEP Protvino, State Research Center of Russian Federation, Institute for High
Energy Physics, Protvino, 142281, Russia M. Bai Collider-Accelerator
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
N.S. Bandara Department of Physics, University of Massachusetts, Amherst,
Massachusetts 01003-9337, USA B. Bannier Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
K.N. Barish University of California-Riverside, Riverside, California 92521,
USA S. Bathe Baruch College, City University of New York, New York, New
York, 10010 USA RIKEN BNL Research Center, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA A. Bazilevsky Physics Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA M. Beaumier University
of California-Riverside, Riverside, California 92521, USA S. Beckman
University of Colorado, Boulder, Colorado 80309, USA R. Belmont University
of Colorado, Boulder, Colorado 80309, USA Department of Physics, University
of Michigan, Ann Arbor, Michigan 48109-1040, USA Physics and Astronomy
Department, University of North Carolina at Greensboro, Greensboro, North
Carolina 27412, USA A. Berdnikov Saint Petersburg State Polytechnic
University, St. Petersburg, 195251 Russia Y. Berdnikov Saint Petersburg
State Polytechnic University, St. Petersburg, 195251 Russia L. Bichon
Vanderbilt University, Nashville, Tennessee 37235, USA B. Blankenship
Vanderbilt University, Nashville, Tennessee 37235, USA D.S. Blau National
Research Center “Kurchatov Institute”, Moscow, 123098 Russia National
Research Nuclear University, MEPhI, Moscow Engineering Physics Institute,
Moscow, 115409, Russia J.S. Bok New Mexico State University, Las Cruces, New
Mexico 88003, USA V. Borisov Saint Petersburg State Polytechnic University,
St. Petersburg, 195251 Russia K. Boyle RIKEN BNL Research Center, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA M.L. Brooks Los Alamos
National Laboratory, Los Alamos, New Mexico 87545, USA J. Bryslawskyj Baruch
College, City University of New York, New York, New York, 10010 USA
University of California-Riverside, Riverside, California 92521, USA V.
Bumazhnov IHEP Protvino, State Research Center of Russian Federation,
Institute for High Energy Physics, Protvino, 142281, Russia S. Campbell
Columbia University, New York, New York 10027 and Nevis Laboratories,
Irvington, New York 10533, USA Iowa State University, Ames, Iowa 50011, USA
V. Canoa Roman Department of Physics and Astronomy, Stony Brook University,
SUNY, Stony Brook, New York 11794-3800, USA C.-H. Chen RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA M.
Chiu Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA C.Y. Chi Columbia University, New York, New York 10027 and
Nevis Laboratories, Irvington, New York 10533, USA I.J. Choi University of
Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA J.B. Choi Deceased
Jeonbuk National University, Jeonju, 54896, Korea T. Chujo Tomonaga Center
for the History of the Universe, University of Tsukuba, Tsukuba, Ibaraki 305,
Japan Z. Citron Weizmann Institute, Rehovot 76100, Israel M. Connors
Georgia State University, Atlanta, Georgia 30303, USA R. Corliss Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA Y. Corrales Morales Los Alamos National Laboratory, Los
Alamos, New Mexico 87545, USA M. Csanád ELTE, Eötvös Loránd University,
H-1117 Budapest, Pázmány P. s. 1/A, Hungary T. Csörgő Institute for Particle
and Nuclear Physics, Wigner Research Centre for Physics, Hungarian Academy of
Sciences (Wigner RCP, RMKI) H-1525 Budapest 114, POBox 49, Budapest, Hungary
T.W. Danley Department of Physics and Astronomy, Ohio University, Athens,
Ohio 45701, USA A. Datta University of New Mexico, Albuquerque, New Mexico
87131, USA M.S. Daugherity Abilene Christian University, Abilene, Texas
79699, USA G. David Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA C.T. Dean Los
Alamos National Laboratory, Los Alamos, New Mexico 87545, USA K. DeBlasio
University of New Mexico, Albuquerque, New Mexico 87131, USA K. Dehmelt
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA A. Denisov IHEP Protvino, State Research
Center of Russian Federation, Institute for High Energy Physics, Protvino,
142281, Russia A. Deshpande RIKEN BNL Research Center, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
E.J. Desmond Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA A. Dion Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA P.B. Diss
University of Maryland, College Park, Maryland 20742, USA J.H. Do Yonsei
University, IPAP, Seoul 120-749, Korea V. Doomra Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
A. Drees Department of Physics and Astronomy, Stony Brook University, SUNY,
Stony Brook, New York 11794-3800, USA K.A. Drees Collider-Accelerator
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
J.M. Durham Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
A. Durum IHEP Protvino, State Research Center of Russian Federation,
Institute for High Energy Physics, Protvino, 142281, Russia A. Enokizono
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198,
Japan Physics Department, Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima,
Tokyo 171-8501, Japan R. Esha Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA B. Fadem
Muhlenberg College, Allentown, Pennsylvania 18104-5586, USA W. Fan
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA N. Feege Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
D.E. Fields University of New Mexico, Albuquerque, New Mexico 87131, USA M.
Finger, Jr Charles University, Ovocný trh 5, Praha 1, 116 36, Prague, Czech
Republic M. Finger Charles University, Ovocný trh 5, Praha 1, 116 36,
Prague, Czech Republic D. Fitzgerald Department of Physics, University of
Michigan, Ann Arbor, Michigan 48109-1040, USA S.L. Fokin National Research
Center “Kurchatov Institute”, Moscow, 123098 Russia J.E. Frantz Department
of Physics and Astronomy, Ohio University, Athens, Ohio 45701, USA A. Franz
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA A.D. Frawley Florida State University, Tallahassee, Florida
32306, USA P. Gallus Czech Technical University, Zikova 4, 166 36 Prague 6,
Czech Republic C. Gal Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA P. Garg Department
of Physics, Banaras Hindu University, Varanasi 221005, India Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA H. Ge Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA M. Giles Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA F. Giordano University of Illinois at Urbana-Champaign,
Urbana, Illinois 61801, USA A. Glenn Lawrence Livermore National Laboratory,
Livermore, California 94550, USA Y. Goto RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA N.
Grau Department of Physics, Augustana University, Sioux Falls, South Dakota
57197, USA S.V. Greene Vanderbilt University, Nashville, Tennessee 37235,
USA M. Grosse Perdekamp University of Illinois at Urbana-Champaign, Urbana,
Illinois 61801, USA T. Gunji Center for Nuclear Study, Graduate School of
Science, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan T.
Guo Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA T. Hachiya Nara Women’s University, Kita-
uoya Nishi-machi Nara 630-8506, Japan RIKEN Nishina Center for Accelerator-
Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research Center,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA J.S. Haggerty
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA K.I. Hahn Ewha Womans University, Seoul 120-750, Korea H.
Hamagaki Center for Nuclear Study, Graduate School of Science, University of
Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan H.F. Hamilton Abilene
Christian University, Abilene, Texas 79699, USA J. Hanks Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA S.Y. Han Ewha Womans University, Seoul 120-750, Korea Korea
University, Seoul 02841, Korea M. Harvey Texas Southern University, Houston,
TX 77004, USA S. Hasegawa Advanced Science Research Center, Japan Atomic
Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken
319-1195, Japan T.O.S. Haseler Georgia State University, Atlanta, Georgia
30303, USA K. Hashimoto RIKEN Nishina Center for Accelerator-Based Science,
Wako, Saitama 351-0198, Japan Physics Department, Rikkyo University, 3-34-1
Nishi-Ikebukuro, Toshima, Tokyo 171-8501, Japan T.K. Hemmick Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA X. He Georgia State University, Atlanta, Georgia 30303, USA
J.C. Hill Iowa State University, Ames, Iowa 50011, USA A. Hodges Georgia
State University, Atlanta, Georgia 30303, USA R.S. Hollis University of
California-Riverside, Riverside, California 92521, USA K. Homma Hiroshima
University, Kagamiyama, Higashi-Hiroshima 739-8526, Japan B. Hong Korea
University, Seoul 02841, Korea T. Hoshino Hiroshima University, Kagamiyama,
Higashi-Hiroshima 739-8526, Japan N. Hotvedt Iowa State University, Ames,
Iowa 50011, USA J. Huang Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA K. Imai Advanced Science Research Center,
Japan Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun,
Ibaraki-ken 319-1195, Japan M. Inaba Tomonaga Center for the History of the
Universe, University of Tsukuba, Tsukuba, Ibaraki 305, Japan A. Iordanova
University of California-Riverside, Riverside, California 92521, USA D.
Isenhower Abilene Christian University, Abilene, Texas 79699, USA D.
Ivanishchev PNPI, Petersburg Nuclear Physics Institute, Gatchina, Leningrad
region, 188300, Russia B.V. Jacak Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA M. Jezghani
Georgia State University, Atlanta, Georgia 30303, USA X. Jiang Los Alamos
National Laboratory, Los Alamos, New Mexico 87545, USA Z. Ji Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA B.M. Johnson Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA Georgia State University,
Atlanta, Georgia 30303, USA D. Jouan IPN-Orsay, Univ. Paris-Sud, CNRS/IN2P3,
Université Paris-Saclay, BP1, F-91406, Orsay, France D.S. Jumper University
of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA S. Kanda Center
for Nuclear Study, Graduate School of Science, University of Tokyo, 7-3-1
Hongo, Bunkyo, Tokyo 113-0033, Japan J.H. Kang Yonsei University, IPAP,
Seoul 120-749, Korea D. Kawall Department of Physics, University of
Massachusetts, Amherst, Massachusetts 01003-9337, USA A.V. Kazantsev
National Research Center “Kurchatov Institute”, Moscow, 123098 Russia J.A.
Key University of New Mexico, Albuquerque, New Mexico 87131, USA V.
Khachatryan Department of Physics and Astronomy, Stony Brook University,
SUNY, Stony Brook, New York 11794-3800, USA A. Khanzadeev PNPI, Petersburg
Nuclear Physics Institute, Gatchina, Leningrad region, 188300, Russia A.
Khatiwada Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
B. Kimelman Muhlenberg College, Allentown, Pennsylvania 18104-5586, USA C.
Kim Korea University, Seoul 02841, Korea D.J. Kim Helsinki Institute of
Physics and University of Jyväskylä, P.O.Box 35, FI-40014 Jyväskylä, Finland
E.-J. Kim Jeonbuk National University, Jeonju, 54896, Korea G.W. Kim Ewha
Womans University, Seoul 120-750, Korea M. Kim Department of Physics and
Astronomy, Seoul National University, Seoul 151-742, Korea T. Kim Ewha
Womans University, Seoul 120-750, Korea D. Kincses ELTE, Eötvös Loránd
University, H-1117 Budapest, Pázmány P. s. 1/A, Hungary A. Kingan Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA E. Kistenev Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA R. Kitamura Center for Nuclear
Study, Graduate School of Science, University of Tokyo, 7-3-1 Hongo, Bunkyo,
Tokyo 113-0033, Japan J. Klatsky Florida State University, Tallahassee,
Florida 32306, USA D. Kleinjan University of California-Riverside,
Riverside, California 92521, USA P. Kline Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
T. Koblesky University of Colorado, Boulder, Colorado 80309, USA B. Komkov
PNPI, Petersburg Nuclear Physics Institute, Gatchina, Leningrad region,
188300, Russia D. Kotov PNPI, Petersburg Nuclear Physics Institute,
Gatchina, Leningrad region, 188300, Russia Saint Petersburg State Polytechnic
University, St. Petersburg, 195251 Russia L. Kovacs ELTE, Eötvös Loránd
University, H-1117 Budapest, Pázmány P. s. 1/A, Hungary K. Kurita Physics
Department, Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima, Tokyo
171-8501, Japan M. Kurosawa RIKEN Nishina Center for Accelerator-Based
Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research Center, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA Y. Kwon Yonsei
University, IPAP, Seoul 120-749, Korea J.G. Lajoie Iowa State University,
Ames, Iowa 50011, USA D. Larionova Saint Petersburg State Polytechnic
University, St. Petersburg, 195251 Russia A. Lebedev Iowa State University,
Ames, Iowa 50011, USA S. Lee Yonsei University, IPAP, Seoul 120-749, Korea
S.H. Lee Iowa State University, Ames, Iowa 50011, USA Department of Physics,
University of Michigan, Ann Arbor, Michigan 48109-1040, USA Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA M.J. Leitch Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA N.A. Lewis Department of Physics, University of Michigan,
Ann Arbor, Michigan 48109-1040, USA S.H. Lim Pusan National University,
Pusan 46241, Korea Yonsei University, IPAP, Seoul 120-749, Korea M.X. Liu
Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA X. Li
Science and Technology on Nuclear Data Laboratory, China Institute of Atomic
Energy, Beijing 102413, People’s Republic of China X. Li Los Alamos National
Laboratory, Los Alamos, New Mexico 87545, USA D.A. Loomis Department of
Physics, University of Michigan, Ann Arbor, Michigan 48109-1040, USA D. Lynch
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA S. Lökös ELTE, Eötvös Loránd University, H-1117 Budapest,
Pázmány P. s. 1/A, Hungary T. Majoros Debrecen University, H-4010 Debrecen,
Egyetem tér 1, Hungary Y.I. Makdisi Collider-Accelerator Department,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA M. Makek
Department of Physics, Faculty of Science, University of Zagreb, Bijenička c.
32 HR-10002 Zagreb, Croatia A. Manion Department of Physics and Astronomy,
Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA V.I.
Manko National Research Center “Kurchatov Institute”, Moscow, 123098 Russia
E. Mannel Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA M. McCumber Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA P.L. McGaughey Los Alamos National Laboratory, Los Alamos,
New Mexico 87545, USA D. McGlinchey University of Colorado, Boulder,
Colorado 80309, USA Los Alamos National Laboratory, Los Alamos, New Mexico
87545, USA C. McKinney University of Illinois at Urbana-Champaign, Urbana,
Illinois 61801, USA A. Meles New Mexico State University, Las Cruces, New
Mexico 88003, USA M. Mendoza University of California-Riverside, Riverside,
California 92521, USA A.C. Mignerey University of Maryland, College Park,
Maryland 20742, USA A. Milov Weizmann Institute, Rehovot 76100, Israel D.K.
Mishra Bhabha Atomic Research Centre, Bombay 400 085, India J.T. Mitchell
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA M. Mitrankova Saint Petersburg State Polytechnic University,
St. Petersburg, 195251 Russia Iu. Mitrankov Saint Petersburg State
Polytechnic University, St. Petersburg, 195251 Russia S. Miyasaka RIKEN
Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan
Department of Physics, Tokyo Institute of Technology, Oh-okayama, Meguro,
Tokyo 152-8551, Japan S. Mizuno RIKEN Nishina Center for Accelerator-Based
Science, Wako, Saitama 351-0198, Japan Tomonaga Center for the History of the
Universe, University of Tsukuba, Tsukuba, Ibaraki 305, Japan A.K. Mohanty
Bhabha Atomic Research Centre, Bombay 400 085, India M.M. Mondal Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA P. Montuenga University of Illinois at Urbana-Champaign,
Urbana, Illinois 61801, USA T. Moon Korea University, Seoul 02841, Korea
Yonsei University, IPAP, Seoul 120-749, Korea D.P. Morrison Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
T.V. Moukhanova National Research Center “Kurchatov Institute”, Moscow,
123098 Russia B. Mulilo Korea University, Seoul 02841, Korea RIKEN Nishina
Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan
Department of Physics, School of Natural Sciences, University of Zambia, Great
East Road Campus, Box 32379, Lusaka, Zambia T. Murakami Kyoto University,
Kyoto 606-8502, Japan RIKEN Nishina Center for Accelerator-Based Science,
Wako, Saitama 351-0198, Japan J. Murata RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan Physics Department,
Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima, Tokyo 171-8501, Japan A.
Mwai Chemistry Department, Stony Brook University, SUNY, Stony Brook, New
York 11794-3400, USA K. Nagashima Hiroshima University, Kagamiyama, Higashi-
Hiroshima 739-8526, Japan J.L. Nagle University of Colorado, Boulder,
Colorado 80309, USA M.I. Nagy ELTE, Eötvös Loránd University, H-1117
Budapest, Pázmány P. s. 1/A, Hungary I. Nakagawa RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA H.
Nakagomi RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama
351-0198, Japan Tomonaga Center for the History of the Universe, University
of Tsukuba, Tsukuba, Ibaraki 305, Japan K. Nakano RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan Department of
Physics, Tokyo Institute of Technology, Oh-okayama, Meguro, Tokyo 152-8551,
Japan C. Nattrass University of Tennessee, Knoxville, Tennessee 37996, USA
S. Nelson Florida A&M University, Tallahassee, FL 32307, USA P.K. Netrakanti
Bhabha Atomic Research Centre, Bombay 400 085, India T. Niida Tomonaga
Center for the History of the Universe, University of Tsukuba, Tsukuba,
Ibaraki 305, Japan S. Nishimura Center for Nuclear Study, Graduate School of
Science, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan R.
Nouicer Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA RIKEN BNL Research Center, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA N. Novitzky Helsinki Institute of Physics
and University of Jyväskylä, P.O.Box 35, FI-40014 Jyväskylä, Finland
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA Tomonaga Center for the History of the
Universe, University of Tsukuba, Tsukuba, Ibaraki 305, Japan T. Novák
Eszterházy Károly University, Károly Róbert Campus, H-3200 Gyöngyös, Mátrai út
36, Hungary Institute for Particle and Nuclear Physics, Wigner Research
Centre for Physics, Hungarian Academy of Sciences (Wigner RCP, RMKI) H-1525
Budapest 114, POBox 49, Budapest, Hungary G. Nukazuka RIKEN Nishina Center
for Accelerator-Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL
Research Center, Brookhaven National Laboratory, Upton, New York 11973-5000,
USA A.S. Nyanin National Research Center “Kurchatov Institute”, Moscow,
123098 Russia E. O’Brien Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA C.A. Ogilvie Iowa State University, Ames,
Iowa 50011, USA J.D. Orjuela Koop University of Colorado, Boulder, Colorado
80309, USA J.D. Osborn Department of Physics, University of Michigan, Ann
Arbor, Michigan 48109-1040, USA Oak Ridge National Laboratory, Oak Ridge,
Tennessee 37831, USA A. Oskarsson Department of Physics, Lund University,
Box 118, SE-221 00 Lund, Sweden K. Ozawa KEK, High Energy Accelerator
Research Organization, Tsukuba, Ibaraki 305-0801, Japan Tomonaga Center for
the History of the Universe, University of Tsukuba, Tsukuba, Ibaraki 305,
Japan R. Pak Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA V. Pantuev Institute for Nuclear Research of the
Russian Academy of Sciences, prospekt 60-letiya Oktyabrya 7a, Moscow 117312,
Russia V. Papavassiliou New Mexico State University, Las Cruces, New Mexico
88003, USA J.S. Park Department of Physics and Astronomy, Seoul National
University, Seoul 151-742, Korea S. Park Department of Physics and
Astronomy, Seoul National University, Seoul 151-742, Korea Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA M. Patel Iowa State University, Ames, Iowa 50011, USA S.F.
Pate New Mexico State University, Las Cruces, New Mexico 88003, USA J.-C.
Peng University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
W. Peng Vanderbilt University, Nashville, Tennessee 37235, USA D.V.
Perepelitsa Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA University of Colorado, Boulder, Colorado 80309, USA
G.D.N. Perera New Mexico State University, Las Cruces, New Mexico 88003, USA
D.Yu. Peressounko National Research Center “Kurchatov Institute”, Moscow,
123098 Russia C.E. PerezLara Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA J. Perry Iowa
State University, Ames, Iowa 50011, USA R. Petti Physics Department,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA C. Pinkenburg Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA R. Pinson Abilene Christian
University, Abilene, Texas 79699, USA R.P. Pisani Physics Department,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA M. Potekhin
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA A. Pun Department of Physics and Astronomy, Ohio University,
Athens, Ohio 45701, USA M.L. Purschke Physics Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA P.V. Radzevich Saint
Petersburg State Polytechnic University, St. Petersburg, 195251 Russia J. Rak
Helsinki Institute of Physics and University of Jyväskylä, P.O.Box 35,
FI-40014 Jyväskylä, Finland N. Ramasubramanian Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
B.J. Ramson Department of Physics, University of Michigan, Ann Arbor,
Michigan 48109-1040, USA I. Ravinovich Weizmann Institute, Rehovot 76100,
Israel K.F. Read Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831,
USA University of Tennessee, Knoxville, Tennessee 37996, USA D. Reynolds
Chemistry Department, Stony Brook University, SUNY, Stony Brook, New York
11794-3400, USA V. Riabov National Research Nuclear University, MEPhI,
Moscow Engineering Physics Institute, Moscow, 115409, Russia PNPI, Petersburg
Nuclear Physics Institute, Gatchina, Leningrad region, 188300, Russia Y.
Riabov PNPI, Petersburg Nuclear Physics Institute, Gatchina, Leningrad
region, 188300, Russia Saint Petersburg State Polytechnic University, St.
Petersburg, 195251 Russia D. Richford Baruch College, City University of New
York, New York, New York, 10010 USA T. Rinn University of Illinois at
Urbana-Champaign, Urbana, Illinois 61801, USA Iowa State University, Ames,
Iowa 50011, USA S.D. Rolnick University of California-Riverside, Riverside,
California 92521, USA M. Rosati Iowa State University, Ames, Iowa 50011, USA
Z. Rowan Baruch College, City University of New York, New York, New York,
10010 USA J.G. Rubin Department of Physics, University of Michigan, Ann
Arbor, Michigan 48109-1040, USA J. Runchey Iowa State University, Ames, Iowa
50011, USA B. Sahlmueller Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA N. Saito KEK, High
Energy Accelerator Research Organization, Tsukuba, Ibaraki 305-0801, Japan T.
Sakaguchi Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA H. Sako Advanced Science Research Center, Japan Atomic
Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken
319-1195, Japan V. Samsonov National Research Nuclear University, MEPhI,
Moscow Engineering Physics Institute, Moscow, 115409, Russia PNPI, Petersburg
Nuclear Physics Institute, Gatchina, Leningrad region, 188300, Russia M.
Sarsour Georgia State University, Atlanta, Georgia 30303, USA S. Sato
Advanced Science Research Center, Japan Atomic Energy Agency, 2-4 Shirakata
Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195, Japan B. Schaefer
Vanderbilt University, Nashville, Tennessee 37235, USA B.K. Schmoll
University of Tennessee, Knoxville, Tennessee 37996, USA K. Sedgwick
University of California-Riverside, Riverside, California 92521, USA R. Seidl
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198,
Japan RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA A. Sen Iowa State University, Ames, Iowa 50011, USA
University of Tennessee, Knoxville, Tennessee 37996, USA R. Seto University
of California-Riverside, Riverside, California 92521, USA P. Sett Bhabha
Atomic Research Centre, Bombay 400 085, India A. Sexton University of
Maryland, College Park, Maryland 20742, USA D. Sharma Department of Physics
and Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800,
USA I. Shein IHEP Protvino, State Research Center of Russian Federation,
Institute for High Energy Physics, Protvino, 142281, Russia T.-A. Shibata
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198,
Japan Department of Physics, Tokyo Institute of Technology, Oh-okayama,
Meguro, Tokyo 152-8551, Japan K. Shigaki Hiroshima University, Kagamiyama,
Higashi-Hiroshima 739-8526, Japan M. Shimomura Iowa State University, Ames,
Iowa 50011, USA Nara Women’s University, Kita-uoya Nishi-machi Nara 630-8506,
Japan P. Shukla Bhabha Atomic Research Centre, Bombay 400 085, India A.
Sickles Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA University of Illinois at Urbana-Champaign, Urbana, Illinois
61801, USA C.L. Silva Los Alamos National Laboratory, Los Alamos, New Mexico
87545, USA D. Silvermyr Department of Physics, Lund University, Box 118,
SE-221 00 Lund, Sweden Oak Ridge National Laboratory, Oak Ridge, Tennessee
37831, USA B.K. Singh Department of Physics, Banaras Hindu University,
Varanasi 221005, India C.P. Singh Department of Physics, Banaras Hindu
University, Varanasi 221005, India V. Singh Department of Physics, Banaras
Hindu University, Varanasi 221005, India M. Slunečka Charles University,
Ovocný trh 5, Praha 1, 116 36, Prague, Czech Republic K.L. Smith Florida
State University, Tallahassee, Florida 32306, USA M. Snowball Los Alamos
National Laboratory, Los Alamos, New Mexico 87545, USA R.A. Soltz Lawrence
Livermore National Laboratory, Livermore, California 94550, USA W.E. Sondheim
Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA S.P.
Sorensen University of Tennessee, Knoxville, Tennessee 37996, USA I.V.
Sourikova Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA P.W. Stankus Oak Ridge National Laboratory, Oak Ridge,
Tennessee 37831, USA M. Stepanov Deceased Department of Physics, University
of Massachusetts, Amherst, Massachusetts 01003-9337, USA S.P. Stoll Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
T. Sugitate Hiroshima University, Kagamiyama, Higashi-Hiroshima 739-8526,
Japan A. Sukhanov Physics Department, Brookhaven National Laboratory, Upton,
New York 11973-5000, USA T. Sumita RIKEN Nishina Center for Accelerator-
Based Science, Wako, Saitama 351-0198, Japan J. Sun Department of Physics
and Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800,
USA Z. Sun Debrecen University, H-4010 Debrecen, Egyetem tér 1, Hungary J.
Sziklai Institute for Particle and Nuclear Physics, Wigner Research Centre
for Physics, Hungarian Academy of Sciences (Wigner RCP, RMKI) H-1525 Budapest
114, POBox 49, Budapest, Hungary A. Taketani RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA K.
Tanida Advanced Science Research Center, Japan Atomic Energy Agency, 2-4
Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195, Japan RIKEN
BNL Research Center, Brookhaven National Laboratory, Upton, New York
11973-5000, USA Department of Physics and Astronomy, Seoul National
University, Seoul 151-742, Korea M.J. Tannenbaum Physics Department,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA S. Tarafdar
Vanderbilt University, Nashville, Tennessee 37235, USA Weizmann Institute,
Rehovot 76100, Israel A. Taranenko National Research Nuclear University,
MEPhI, Moscow Engineering Physics Institute, Moscow, 115409, Russia Chemistry
Department, Stony Brook University, SUNY, Stony Brook, New York 11794-3400,
USA R. Tieulent Georgia State University, Atlanta, Georgia 30303, USA IPNL,
CNRS/IN2P3, Univ Lyon, Université Lyon 1, F-69622, Villeurbanne, France A.
Timilsina Iowa State University, Ames, Iowa 50011, USA T. Todoroki RIKEN
Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan
RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New York
11973-5000, USA Tomonaga Center for the History of the Universe, University
of Tsukuba, Tsukuba, Ibaraki 305, Japan M. Tomášek Czech Technical
University, Zikova 4, 166 36 Prague 6, Czech Republic C.L. Towell Abilene
Christian University, Abilene, Texas 79699, USA R. Towell Abilene Christian
University, Abilene, Texas 79699, USA R.S. Towell Abilene Christian
University, Abilene, Texas 79699, USA I. Tserruya Weizmann Institute,
Rehovot 76100, Israel Y. Ueda Hiroshima University, Kagamiyama, Higashi-
Hiroshima 739-8526, Japan B. Ujvari Debrecen University, H-4010 Debrecen,
Egyetem tér 1, Hungary H.W. van Hecke Los Alamos National Laboratory, Los
Alamos, New Mexico 87545, USA J. Velkovska Vanderbilt University, Nashville,
Tennessee 37235, USA M. Virius Czech Technical University, Zikova 4, 166 36
Prague 6, Czech Republic V. Vrba Czech Technical University, Zikova 4, 166
36 Prague 6, Czech Republic Institute of Physics, Academy of Sciences of the
Czech Republic, Na Slovance 2, 182 21 Prague 8, Czech Republic X.R. Wang New
Mexico State University, Las Cruces, New Mexico 88003, USA RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA Y.
Watanabe RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama
351-0198, Japan RIKEN BNL Research Center, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA Y.S. Watanabe Center for Nuclear Study,
Graduate School of Science, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo
113-0033, Japan KEK, High Energy Accelerator Research Organization, Tsukuba,
Ibaraki 305-0801, Japan F. Wei New Mexico State University, Las Cruces, New
Mexico 88003, USA A.S. White Department of Physics, University of Michigan,
Ann Arbor, Michigan 48109-1040, USA C.P. Wong Georgia State University,
Atlanta, Georgia 30303, USA Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA C.L. Woody Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA M. Wysocki Oak Ridge National
Laboratory, Oak Ridge, Tennessee 37831, USA B. Xia Department of Physics and
Astronomy, Ohio University, Athens, Ohio 45701, USA L. Xue Georgia State
University, Atlanta, Georgia 30303, USA S. Yalcin Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
Y.L. Yamaguchi Center for Nuclear Study, Graduate School of Science,
University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA A. Yanovich IHEP Protvino, State Research Center of Russian
Federation, Institute for High Energy Physics, Protvino, 142281, Russia I.
Yoon Department of Physics and Astronomy, Seoul National University, Seoul
151-742, Korea J.H. Yoo Korea University, Seoul 02841, Korea I.E. Yushmanov
National Research Center “Kurchatov Institute”, Moscow, 123098 Russia H. Yu
New Mexico State University, Las Cruces, New Mexico 88003, USA Peking
University, Beijing 100871, People’s Republic of China W.A. Zajc Columbia
University, New York, New York 10027 and Nevis Laboratories, Irvington, New
York 10533, USA A. Zelenski Collider-Accelerator Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA S. Zhou Science and
Technology on Nuclear Data Laboratory, China Institute of Atomic Energy,
Beijing 102413, People’s Republic of China L. Zou University of California-
Riverside, Riverside, California 92521, USA
###### Abstract
The measurement of the direct-photon spectrum from Au$+$Au collisions at
$\sqrt{s_{{}_{NN}}}=200$ GeV is presented by the PHENIX collaboration using
the external-photon-conversion technique for 0%–93% central collisions in a
transverse-momentum ($p_{T}$) range of 0.8–10 GeV/$c$. An excess of direct
photons, above prompt-photon production from hard-scattering processes, is
observed for $p_{T}<6$ GeV/$c$. Nonprompt direct photons are measured by
subtracting the prompt component, which is estimated as $N_{\rm coll}$-scaled
direct photons from $p$$+$$p$ collisions at 200 GeV, from the direct-photon
spectrum. Results are obtained for $0.8<p_{T}<6.0$ GeV/$c$ and suggest that
the spectrum has an increasing inverse slope from ${\approx}0.2$ to 0.4
GeV/$c$ with increasing $p_{T}$, which indicates a possible sensitivity of the
measurement to photons from earlier stages of the evolution of the collision.
In addition, like the direct-photon production, the $p_{T}$-integrated
nonprompt direct-photon yields also follow a power-law scaling behavior as a
function of collision-system size. The exponent, $\alpha$, for the nonprompt
component is found to be consistent with 1.1 with no apparent $p_{T}$
dependence.
## I Introduction
Direct photons, defined as those not coming from hadron decays, have long been
considered a golden probe towards our understanding of the evolution of
relativistic heavy-ion collisions – from the quark-gluon plasma (QGP) phase to
the hadron-gas (HG) phase [1]. Unlike strongly interacting probes, such as
identified particles and jets, direct photons traverse the medium unmodified
due to the small cross section of electromagnetic interaction. These
penetrating photons encode information about the environment in which they
were created, including the temperature and the collective motion of the
medium. While the direct photons at high transverse momentum, $p_{T}$, are
dominated by photons created from hard-scattering processes, such as quark-
gluon Compton scattering, in the low-$p_{T}$ regime, they were initially
predicted to be of a thermal origin, being emitted from the QGP and HG phase
(see Ref. [2] for a recent review).
The $p_{T}$ spectrum of low-$p_{T}$ direct photons from Au$+$Au collisions at
$\sqrt{s_{{}_{NN}}}=200$ GeV, first measured by PHENIX [3], shows a clear
excess above the hard-scattering contribution estimated from $p$$+$$p$
measurements for $p_{T}$ below 3 GeV/$c$. Followup measurements by PHENIX have
established that low-$p_{T}$ direct-photon emission also shows a large
anisotropy with respect to the reaction plane [4, 5], and that the yield
increases faster than $N_{\rm part}$ or $dN_{\rm ch}/d\eta$ as a function of
the centrality of the collision [6]. Low-$p_{T}$ direct photons in Au$+$Au
collisions at 200 GeV have also been measured by STAR [7] using the same basic
method as [3], but with different detection techniques. Quantitatively, the STAR
results appear to be a factor of 3 smaller than those from PHENIX. This tension
has not yet been resolved. Furthermore, low-$p_{T}$ photons have been measured
in Au$+$Au at lower $\sqrt{s_{{}_{NN}}}$ of 39 and 62.4 GeV by PHENIX [8],
and in Pb$+$Pb at $\sqrt{s_{{}_{NN}}}=2760$ GeV by ALICE [9].
The excess of direct photons in A$+$A collisions, in the low-$p_{T}$ regime,
is usually interpreted as the contribution of thermal radiation emitted from
the expanding and cooling QGP and HG phase. Due to the rapid anisotropic
expansion of the system, the radiation is Doppler shifted. Over the years,
several theoretical models have been developed and refined to describe the
production rates and space-time evolution of thermal photons in relativistic
heavy-ion collisions [10, 11, 12, 13, 14, 15, 16, 17]. While most of these
state-of-the-art models describe the data qualitatively, they fall short of
simultaneously describing all the features of the data quantitatively. To
describe the large yield, early emission at high temperatures is favored,
while sufficient build up of collective motion is required to explain the
large anisotropy, thereby favoring late-stage emission. This tension, often
termed the “direct-photon puzzle”, hints at an incomplete understanding of
the different sources and mechanisms of direct-photon production. This has
prompted consideration of other, unconventional photon sources, such as
emission from the pre-equilibrium stage, strong magnetic-field effects, etc.
[18, 19, 20, 21, 22, 23, 24, 10]. For that very reason this paper refers to
the low-$p_{T}$-excess direct photons as “nonprompt” instead of “thermal”.
To provide new insights and further understanding, the PHENIX collaboration
presents results from the high-statistics 2014 Au$+$Au data at
$\sqrt{s_{{}_{NN}}}=200$ GeV. With a 10-fold increase in statistics compared
to previously published results, differential direct-photon measurements as
functions of $p_{T}$ and system size over a broad $p_{T}$ range of 0.8–10
GeV/$c$ and in 10% centrality classes are discussed. A new algorithm, which
utilizes the silicon-vertex detector (VTX) as the conversion material for
photons, is developed for this analysis.
The paper is organized as follows: Section II presents the experimental setup
relevant to this measurement and the algorithm to reconstruct the conversion
photons. Section III describes the double-ratio method to determine the
direct-photon excess ratio, $R_{\gamma}$, and gives details of the
experimental measurement. Section IV investigates the systematic
uncertainties. Section V discusses the results. Section VI presents the
summary and conclusions. Finally, there are two appendices: Appendix A
discusses the event mixing procedures and their validity, while Appendix B
describes the Monte-Carlo (MC)-sampling method used to derive the final
systematic uncertainties on the direct-photon yield.
## II Experimental setup and photon measurements
### II.1 PHENIX 2014 Au$+$Au $\sqrt{s_{{}_{NN}}}=200$ GeV data set
In 2014, a total of 19 billion Au$+$Au collisions at $\sqrt{s_{{}_{NN}}}=200$
GeV were recorded by the PHENIX detector at the Relativistic Heavy Ion
Collider (RHIC) with a minimum-bias (MB) trigger, based on the response of two
beam-beam counters (BBC) [25]. The BBCs are located on either side of the
interaction point along the beam axis at $z={\pm}1.44$ m with a pseudorapidity
coverage of $3.1<|\eta|<3.9$ and full $2\pi$ azimuthal acceptance. The MB
trigger requires a coincident signal in both BBCs. Each BBC, comprising 64
Čerenkov counters, measures the total number of charged particles produced
during the collision within its acceptance. The charged-particle multiplicity
is used to divide the MB events into different centrality classes; 0%–10%
corresponds to the most central collisions, which produce the largest number
of charged particles, while 80%–93% corresponds to peripheral collisions with
only a small number of charged particles. The BBCs also utilize the arrival
time of the produced particles on each side to determine the collision vertex
along the beam direction.
Figure 1: (a) The beam view of the PHENIX central-arm spectrometer for the
year 2014. (b) A magnified view of the silicon-vertex detector. The solid
curves correspond to the electron and positron tracks from photon conversion.
The direct-photon measurement presented here is based on the tracking and
identification of electrons and positrons from photon conversions in the
detector material and the direct calorimetric measurement of photons in the
two PHENIX central arm spectrometers shown in Fig. 1 [26].
The VTX [27] comprises four silicon layers at nominal radii of 2.6, 5.1, 11.8,
and 16.7 cm. In the beam direction, the active area covers approximately $\pm
11$ cm for the innermost layer and $\pm 19$ cm for the outermost layer. The VTX is
not used as an active detector in the measurement. However, it acts as the
photon converter, which is critical for this analysis. The total material
thickness of the VTX in terms of radiation length, $X_{0}$, is
${\approx}13\%X_{0}$. Events are selected with a $z$ vertex within $\pm 10$ cm
of the nominal interaction point. After applying quality assurance criteria, a
total of $1.25\times 10^{10}$ events are analyzed.
The central-arm spectrometers have three major parts: a charged-particle
tracking system [28, 29], particle-identification detectors [30], and
electromagnetic calorimeters (EMCal) [31]. Each arm covers $90^{\circ}$ in the
azimuthal direction with $|\eta|<0.35$. The tracking system is located
$\approx$2.2 m from the beam axis outside of an axial magnetic field. The main
tracking detectors are drift chambers (DC) and pad chambers (PC1). The DC
provides a precise measurement of the transverse momentum for charged
particles with $\mbox{$p_{T}$}>0.2$ GeV/$c$. The PC1 measures the momentum
along the beam direction, $p_{z}$. The effective momentum resolution of the
central-arm tracking system, for this analysis, is
$\sigma_{p}/p=0.8\%{\oplus}2\%\,p$ [GeV/$c$], where $p$ is the transverse
momentum of the track.
Charged tracks are identified as electrons or positrons with a ring-imaging
Čerenkov detector (RICH). The RICH has a ${\rm CO_{2}}$ gas radiator with a low
Čerenkov-radiation threshold for electrons (0.018 GeV/$c$) and a relatively high
threshold for charged pions ($>4.87$ GeV/$c$). Requiring a signal in at least
two phototubes in the focal plane of the RICH at the expected ring location
effectively separates electrons below 5 GeV/$c$ from charged hadrons. A
further matching of the momentum, $p$, of the charged track to the energy,
$E$, as measured in the EMCal within $-2\sigma_{E/p}<E/p<5{\sigma_{E/p}}$
removes most hadrons remaining in the sample. Here $\sigma_{E/p}$ is the
momentum-dependent resolution of the energy to momentum ratio, $E/p$.
For the calorimetric identification of photons, two types of calorimeters are
used, lead-scintillator (PbSc) and lead-glass (PbGl). The PbSc EMCal, which
covers 3/4 of the acceptance, is a sandwich sampling detector, also referred
to as a Shashlik-type calorimeter. Based on the width of the reconstructed
$\pi^{0}$ mass peak through the $\mbox{$\pi^{0}$}\rightarrow\gamma\gamma$ decay,
the effective photon-energy resolution in this analysis is
$\sigma_{E}/E=8.1\%/\sqrt{E~{}{\rm[GeV]}}{\oplus}5.0\%$. The remaining 1/4 of
the acceptance is covered by the PbGl EMCal, which is a homogeneous Čerenkov-
type detector with an effective resolution of
$\sigma_{E}/E=8.7\%/\sqrt{E~{}{\rm\,[GeV]}}{\oplus}5.8\%$. Nominal cuts on the
energy threshold ($E>500$ MeV) and shower shape ($\chi^{2}<3$) are applied to
identify photons.
### II.2 External photon conversions in the VTX
Earlier measurements of direct photons from PHENIX are based on three
different strategies to measure photons in A$+$A collisions. The calorimeter
method is used to measure photons with $p_{T}$ of several GeV/$c$ via their
energy deposited in the EMCal [4]. To access lower $p_{T}$, $e^{+}e^{-}$ pairs
from photon conversions are reconstructed with the tracking system. These
$e^{+}e^{-}$ pairs are either from “internal” conversions of virtual photons
emitted from the collision [3] or “external” conversions of photons in the
detector material [6].
Figure 2: Artificial $e^{+}e^{-}$ pair mass for external photon conversions.
Each curve corresponds to a different radius region, which roughly maps to the
locations of the beam pipe, layers 1 (B0) through 4 (B3) of the VTX, and the
carbon-fiber (CF) enclosure of the VTX.
Here, external photon conversions at the VTX detector are reconstructed from
$e^{+}e^{-}$ pairs. The VTX material is distributed between 2 and 25 cm along
the radial direction. Depending on the conversion point, a different amount of
magnetic field is traversed by the $e^{+}e^{-}$ pair. In the standard PHENIX
track-reconstruction algorithm, the tracking system measures a part of the
trajectory outside of the magnetic field at a radial position of $\approx$2.2
m. The momentum vector is determined by assuming that the particle originates
at the event vertex. This assumption is incorrect for the $e^{+}e^{-}$ pairs
from conversions in the VTX material. Both $e^{+}$ and $e^{-}$ traverse a
smaller $\int B\,dl$ than tracks from the vertex and thus the azimuthal
component of the momentum vector is mismeasured in opposing directions,
leading to an artificial opening angle and mismeasured mass of the
$e^{+}e^{-}$ pair. Because the magnetic field in the region of the VTX
detector is approximately constant at 0.9 Tesla, the artificial mass acquired
is proportional to the radial location of the conversion point. Fig. 2 shows
the mass of $e^{+}e^{-}$ pairs simulated with the geant3 PHENIX-detector
simulation [32]; the different curves represent photon conversions in different
VTX layers. The mass $m_{e^{+}e^{-}}$ is larger for conversions at larger radii,
with most conversions occurring in the third and fourth layers of the VTX,
where the material budget is the largest.
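The proportionality between the artificial mass and the conversion radius can be illustrated with a rough small-angle estimate (a back-of-the-envelope sketch added here for clarity; the numerical factors are approximate and not part of the original analysis). A track assumed to originate at the vertex misses a bending angle of roughly $\theta\approx 0.3\,B\,r_{\rm conv}/p$ (with $B$ in T, $r_{\rm conv}$ in m, and $p$ in GeV/$c$), and the $e^{+}$ and $e^{-}$ are mismeasured in opposite azimuthal directions, so that
$\alpha\approx 0.3\,B\,r_{\rm conv}\Big{(}\frac{1}{p_{+}}+\frac{1}{p_{-}}\Big{)},\qquad m_{e^{+}e^{-}}\approx\sqrt{p_{+}p_{-}}\,\alpha\approx 0.6\,B\,r_{\rm conv}$
for symmetric pairs ($p_{+}=p_{-}$). For $B\approx 0.9$ T this gives $m_{e^{+}e^{-}}\approx 0.06$ and $0.09$ GeV/$c^{2}$ for conversions in the third (11.8 cm) and fourth (16.7 cm) VTX layers, consistent with the 0.04–0.12 GeV/$c^{2}$ counting window used below.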
To correctly reconstruct and identify photon conversions at different VTX
layers, a new track-reconstruction algorithm is developed. The new algorithm
relies on the fact that the $e^{+}$ and $e^{-}$ from a conversion have the
same origin and that their momenta were initially parallel, pointing in the
radial direction. This additional constraint eliminates the need to assume the origin
of the track.
Figure 3: Schematic view of the conversion-reconstruction algorithm. The two
tracks are reconstructed to the same radius $r$. $\delta\phi$ is the azimuthal-
angular difference between the two tracks for a given reconstruction radius.
$\delta\phi$ is zero at the conversion point.
The algorithm is illustrated in Fig. 3. For all radii between 0 and 30 cm, all
possible momenta of the $e^{+}$ and $e^{-}$ are scanned to identify the
azimuthal location $\phi_{\pm}$ at which the track is perpendicular to the
circle of the given radius or, in other words, points back radially to the
event vertex. The conversion point is determined by finding the radius for
which the difference of the azimuthal angles of the $e^{+}e^{-}$ pair,
$\delta\phi=\phi_{+}{-}\phi_{-}$, becomes zero. If such radius exists, the
pair is identified as a conversion candidate at the location $(\phi_{\rm
conv},r_{\rm conv})$, where $\phi_{\rm conv}$ is the azimuthal angle of the
conversion point, reconstructed with a resolution of $\approx$4 mrad, and
$r_{\rm conv}$ is the radial position reconstructed with a resolution of
$\approx$2 cm.
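A minimal sketch of the radius scan is given below (illustrative Python, not the PHENIX implementation; the helper `phi_at_radius`, which back-propagates a track through the known field and returns the azimuth at which it points radially outward at radius $r$, is an assumed interface):

```python
import numpy as np

def find_conversion_radius(track_pos, track_neg, r_max=0.30, n_steps=300):
    """Scan candidate radii and locate the zero crossing of
    delta_phi(r) = phi_plus(r) - phi_minus(r), cf. Fig. 3.

    track_pos/track_neg are assumed to expose phi_at_radius(r),
    the azimuth at which the back-propagated track points radially
    outward at radius r (hypothetical interface)."""
    radii = np.linspace(0.0, r_max, n_steps)
    dphi = np.array([track_pos.phi_at_radius(r) - track_neg.phi_at_radius(r)
                     for r in radii])
    # Look for a sign change of delta_phi between adjacent radii.
    signs = np.sign(dphi)
    crossings = np.where(signs[:-1] * signs[1:] < 0)[0]
    if len(crossings) == 0:
        return None                      # not a conversion candidate
    i = crossings[0]
    # Linear interpolation refines the zero crossing.
    r0, r1, d0, d1 = radii[i], radii[i + 1], dphi[i], dphi[i + 1]
    r_conv = r0 - d0 * (r1 - r0) / (d1 - d0)
    phi_conv = track_pos.phi_at_radius(r_conv)
    return r_conv, phi_conv
```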
## III Data Analysis
### III.1 Double-ratio tagging method
The number of direct photons emitted in a Au$+$Au collision is small compared
to the number of photons from hadron decays. To make a precise measurement of
the direct-photon yield, a tagging method is employed [6], which measures the
ratio, $R_{\gamma}$, of all photons, referred to as inclusive photons,
$\gamma^{\rm incl}$, to the photons from hadron decays, $\gamma^{\rm hadr}$.
The ratio $R_{\gamma}$ is evaluated as a double ratio, such that most systematic
uncertainties cancel explicitly. The $R_{\gamma}$ given in Eq. (1) features
three main terms:
$R_{\gamma}=\frac{\gamma^{\rm incl}}{\gamma^{\rm hadr}}=\frac{\left(\frac{\gamma^{\rm incl}}{\gamma^{\pi^{0}}}\right)}{\left(\frac{\gamma^{\rm hadr}}{\gamma^{\pi^{0}}}\right)}=\frac{\langle\epsilon_{\gamma}f\rangle\left(\frac{N^{\rm incl}_{\gamma}}{N_{\gamma}^{\pi^{0},\rm tag}}\right)_{\rm Data}}{\left(\frac{\gamma^{\rm hadr}}{\gamma^{\pi^{0}}}\right)_{\rm Sim}}~,$ (1)
* (i)
The ratio of measured photon yields
$N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$ is the number of
measured conversion photons in a given $p_{T}$ bin, divided by the sub-sample
of those conversion photons that are tagged by a second photon as resulting
from a $\mbox{$\pi^{0}$}\rightarrow\gamma\gamma$ decay. This quantity is
measured in bins of fixed conversion photon $p_{T}$.
* (ii)
The conditional acceptance and efficiency $\langle\epsilon_{\gamma}f\rangle$
is the conditional probability to detect and reconstruct the second $\pi^{0}$
decay photon with the EMCal, given that the first decay photon was
reconstructed as $e^{+}e^{-}$ pair from a photon conversion. The probability
is averaged over all parent $\pi^{0}$ $p_{T}$ that can contribute to the given
conversion photon $p_{T}$.
* (iii)
The cocktail ratio $\gamma^{\rm hadr}$/$\gamma^{\rm{\pi^{0}}}$ is the ratio of
all photons from hadron decays over only those photons from $\pi^{0}$ decays.
The following sections discuss how each term is determined.
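As a purely illustrative aid, the double ratio of Eq. (1) combines the three ingredients as in the following sketch (function and variable names are hypothetical; the uncertainty propagation of Appendix B is omitted):

```python
def r_gamma(n_incl, n_pi0_tag, eps_f, cocktail_ratio):
    """Double ratio of Eq. (1) for a single (pT, centrality) bin.

    n_incl         -- measured inclusive conversion-photon yield, N_incl
    n_pi0_tag      -- subset tagged as pi0 -> gamma gamma, N_pi0_tag
    eps_f          -- conditional probability <eps_gamma f> from simulation
    cocktail_ratio -- gamma_hadr / gamma_pi0 from the decay cocktail
    """
    return eps_f * (n_incl / n_pi0_tag) / cocktail_ratio
```

A value of $R_{\gamma}>1$ then signals a direct-photon excess.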
### III.2 Ratio of the measured photon yields
$N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$
Electrons and positrons in a given event are combined into $e^{+}e^{-}$ pairs,
and conversion candidates are selected with appropriate cuts, which results in
a foreground sample of $e^{+}e^{-}$ pairs, ${\rm FG}^{\rm{ee}}$. All conversion
candidates in a conversion photon $p_{T}$ bin are combined with all photon
showers in the EMCal above an energy threshold, $E_{cut}$. The invariant mass
$m_{ee\gamma}$ is calculated and all combinations that lie in a mass window
around the $\pi^{0}$ mass are considered as candidates for tagged photons
${\rm FG}^{ee\gamma}$. Due to the large particle multiplicity in Au$+$Au
collisions, there are many false combinations where the electron, positron or
photon are not from the same source. These background pairs must be subtracted
statistically to obtain the signals of interest ${\rm SG}^{\rm{ee}}$ and ${\rm
SG}^{ee\gamma}$.
For $e^{+}e^{-}$ pairs, there are two possible combinations: signal pairs of
interest, ${\rm SG}^{\rm{ee}}$, and uncorrelated background pairs, ${\rm BG}^{\rm{ee}}$,
where the electron and positron are from different sources. Their sum
constitutes the foreground ${\rm FG}^{\rm{ee}}$:
${\rm FG}^{\rm{ee}}={\rm SG}^{\rm{ee}}+{\rm BG}^{\rm{ee}}.$ (2)
When the $e^{+}e^{-}$ pairs are combined with photons into $e^{+}e^{-}\gamma$
combinations, both types of $e^{+}e^{-}$ pairs are combined with photons that
are either correlated or uncorrelated with the pair:
${\rm FG}^{ee\gamma}={\rm SG}^{ee\gamma}+{\rm BG}_{\rm uncorr}^{ee\gamma}+{\rm BG}_{\rm corr}^{ee\gamma}.$ (3)
Introducing $i,j,k$ as the source of the positron, electron, and photon,
respectively, the terms in Eq. 3 are:
* (1)
The first term is the signal of interest with positron, electron, and photon
from the same source ($i=j=k$).
* (2)
The second term represents the cases where the $e^{+}e^{-}$ pair is combined
with uncorrelated photons. This includes the case ($i=j\neq k$), where the
$e^{+}e^{-}$ pair is correlated and randomly combined with a $\gamma$ as well
as the case ($i\neq j\neq k$) where all three are from different sources.
* (3)
The third term represents cases ($(i\neq j=k)\vee(j\neq i=k)$), where the
$e^{+}e^{-}$ pair is not from the same source but the $\gamma$ is correlated
with either the $e^{+}$ or the $e^{-}$.
Each of the background terms is determined with different event-mixing
procedures, which were developed using the MC method. The event-mixing
procedures and their validity are discussed in detail in Appendix A.
#### III.2.1 Determination of the inclusive photon yield
$N_{\gamma}^{\rm{incl}}$
Photons that convert in the VTX detector are selected by pairing electron and
positron tracks into $e^{+}e^{-}$ pairs. All pairs are required to have a valid
conversion point at a radial location within the VTX detector, between 1 and
29 cm. In addition, both tracks need to match in the beam direction within
$|\Delta z|<4$ cm. The invariant mass distribution of the selected
$e^{+}e^{-}$ conversion pairs is shown in Fig. 4 for the $p_{T}$ range
$1.0<\mbox{$p_{T}$}<1.2$ GeV/$c$. The four panels correspond to four different
centrality selections. Each panel shows the same peak structure, which is
characteristic of the multilayer structure of the VTX detector.
Figure 4: Mass distribution, $m_{e^{+}e^{-}}$, of the $e^{+}e^{-}$ pairs after
conversion selection cuts are applied. All four panels are for the same
$p_{T}$ range $1.0<\mbox{$p_{T}$}<1.2$ GeV/$c$ for four different centrality
selections (a) 0%–20%, (b) 20%–40%, (c) 40%–60% and (d) 60%–93%. Shown are the
foreground ${\rm FG}^{\rm{ee}}$, background ${\rm BG}^{\rm{ee}}$ and signal
${\rm SG}^{\rm{ee}}$.
The $e^{+}e^{-}$ pairs passing the conversion selection criteria contain
uncorrelated $e^{+}e^{-}$ pairs, where the $e^{+}$ and $e^{-}$ are from
different sources. These backgrounds are also shown in Fig. 4. Because of its
combinatorial nature, the background-to-foreground ratio increases towards
more central-event selections. An event-mixing technique is used to estimate
and subtract this background (see Appendix A for details). In this technique,
an $e^{+}$ from event A is paired with an $e^{-}$ from another event B to
produce the random $e^{+}e^{-}$ pair sample. To ensure that events A and B have
similar topological characteristics, it is required that both events:
* (a)
are from the same 10$\%$ centrality selection,
* (b)
have their interaction vertex within $\Delta z=2.5$ cm,
* (c)
have their reaction planes aligned within $\Delta\phi=\pi/6$.
After the background is subtracted, $N_{\gamma}^{\rm{incl}}$ is calculated by
integrating the counts in the mass range from 0.04 to 0.12 GeV/$c^{2}$,
corresponding to layers 3 and 4 of the VTX. The analysis is repeated for bins
in $p_{T}$ and in centrality.
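A schematic of the event classification used for mixing might look as follows (a sketch under the cuts (a)–(c) quoted above; the `event` fields are a hypothetical interface, and the normalization of the mixed distribution is omitted):

```python
import math
from collections import defaultdict, deque

def mixing_key(event):
    """Pool key so that only topologically similar events are mixed:
    same 10% centrality class, vertex slices of 2.5 cm, reaction
    planes within pi/6 (binning approximates the quoted cuts)."""
    cent_class = int(event.centrality // 10)            # 10% classes
    zvtx_class = int(event.z_vertex // 2.5)             # 2.5 cm slices
    psi_class = int(event.reaction_plane // (math.pi / 6.0))
    return (cent_class, zvtx_class, psi_class)

pools = defaultdict(lambda: deque(maxlen=20))           # ring buffer per class

def mixed_pairs(event):
    """Pair e+ from the current event with e- from buffered events of
    the same class to build the uncorrelated background BG_ee."""
    key = mixing_key(event)
    for old in pools[key]:
        for ep in event.positrons:
            for em in old.electrons:
                yield ep, em
    pools[key].append(event)
```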
#### III.2.2 Tagged photon raw yield $N_{\gamma}^{\rm{\pi^{0},tag}}$
Next, the subset of $e^{+}e^{-}$ pairs in the $N_{\gamma}^{\rm{incl}}$ sample
that can be tagged as photons from a $\pi^{0}$ decay,
$N_{\gamma}^{\rm{\pi^{0},tag}}$, is determined. For a given event, each
$e^{+}e^{-}$ conversion candidate, in the mass window in which
$N_{\gamma}^{\rm{incl}}$ is counted, is paired with all reconstructed showers
in the EMCal with shower shape $\chi^{2}<3$ and energy larger than
$E_{cut}=0.5$ GeV, excluding those matched to the $e^{+}e^{-}$ pair itself.
The energy cut, together with the $p_{T}$ cut of 0.2 GeV/$c$ on the $e^{+}$
and $e^{-}$, constitutes an implicit asymmetry cut on the $\pi^{0}$ decay
photons that depends on the $p_{T}$ of the $\pi^{0}$. For all
$e^{+}e^{-}\gamma$ combinations, the invariant mass $m_{ee\gamma}$ is
calculated. This constitutes the foreground ${\rm FG}^{ee\gamma}$, for which
an example is given in Fig. 5 for the $e^{+}e^{-}$ pair in the $p_{T}$ range
$1.0<\mbox{$p_{T}$}<1.2$ GeV/$c$. The four panels (a) to (d) correspond to
four centrality selections 0%–20%, 20%–40%, 40%–60%, and 60%–93%,
respectively.
Despite the large background, the signal, ${\rm SG}^{ee\gamma}$, is clearly
visible as a peak around the $\pi^{0}$ mass, even in panel (a), which is the
most central event selection. As discussed above, the background ${\rm
BG}^{ee\gamma}$ has two components:
${\rm BG}^{ee\gamma}={\rm BG}_{\rm uncorr}^{ee\gamma}+{\rm BG}_{\rm corr}^{ee\gamma},$ (4)
for which the shape and normalization are obtained from the event-mixing
procedures described in Appendix A. The results are also shown in Fig. 5. The
uncorrelated background, ${\rm BG}_{\rm uncorr}^{ee\gamma}$, is given in
panels (a) to (d). The much smaller correlated background, ${\rm BG}_{\rm
corr}^{ee\gamma}$, is only revealed after ${\rm BG}_{\rm uncorr}^{ee\gamma}$
is subtracted from the foreground, ${\rm FG}^{ee\gamma}$. The differences are
given in panels (e) to (h) for central to peripheral events, respectively.
Figure 5 indicates that the correlated background decreases with centrality
from $\mbox{${\rm BG}_{\rm corr}^{ee\gamma}$}/(\mbox{${\rm
FG}^{ee\gamma}$}-\mbox{${\rm BG}_{\rm uncorr}^{ee\gamma}$})=8.6\%$ in central
collisions to 0.5% in the most-peripheral collisions.
For the 0%–20% centrality selection, Fig. 6 shows the mass distributions
$m_{ee\gamma}$ for four different $e^{+}e^{-}$ pair $p_{T}$ ranges. The
representation is the same as for Fig. 5. Panels (a) through (d) all show a
clear peak around the $\pi^{0}$ mass. The backgrounds are the largest for low
$p_{T}$ and the most central events. As $p_{T}$ increases and the event
multiplicity decreases, the backgrounds are significantly reduced.
Figure 5: Mass distribution, $m_{ee\gamma}$, for $e^{+}e^{-}$ pairs with
$p_{T}$ from 1.0 to 1.2 GeV/$c$, for four centrality selections (a,e) 0%–20%,
(b,f) 20%–40%, (c,g) 40%–60%, and (d,h) 60%–93%. Panels (a) through (d) show
the foreground ${\rm FG}^{ee\gamma}$ and the uncorrelated background ${\rm
BG}_{\rm uncorr}^{ee\gamma}$. Panels (e) through (h) show the difference
$\mbox{${\rm FG}^{ee\gamma}$}-\mbox{${\rm BG}_{\rm uncorr}^{ee\gamma}$}$,
together with the correlated background ${\rm BG}_{\rm corr}^{ee\gamma}$.
Figure 6: Mass distribution, $m_{ee\gamma}$, for $e^{+}e^{-}$ pairs from the
0%–20% centrality selection for four different $e^{+}e^{-}$ pair $p_{T}$
regions, (a,e) 0.8 to 1.0, (b,f) 1.4 to 1.6, (c,g) 2.0 to 2.5, and (d,h) 3.5 to
4 GeV/$c$. Panels (a) through (d) show the foreground ${\rm FG}^{ee\gamma}$
and the uncorrelated background ${\rm BG}_{\rm uncorr}^{ee\gamma}$. Panels (e)
through (h) show the difference $\mbox{${\rm FG}^{ee\gamma}$}-\mbox{${\rm
BG}_{\rm uncorr}^{ee\gamma}$}$, together with the correlated background ${\rm
BG}_{\rm corr}^{ee\gamma}$.
Because of the complexity of the particle correlations present in the real
Au$+$Au collision events, including effects of collective expansion, jet
production, hadron decays, etc., there is a small residual background that is
not captured by the event-mixing procedure. To remove this background, a low-
order polynomial, $f_{ee\gamma}$, is fitted to the ratio $(\mbox{${\rm
FG}^{ee\gamma}$}-\mbox{${\rm BG}^{ee\gamma}$})/\mbox{${\rm BG}_{\rm
uncorr}^{ee\gamma}$}$ in the mass ranges 0.05–0.08 and 0.23–0.45 GeV/$c^{2}$.
This function is used to correct ${\rm BG}_{\rm uncorr}^{ee\gamma}$ before it
is finally subtracted. Thus, the final distribution for
$N_{\gamma}^{\rm{\pi^{0},tag}}$ is:
$N_{\gamma}^{\pi^{0},{\rm tag}}={\rm FG}^{ee\gamma}-{\rm BG}_{\rm corr}^{ee\gamma}-(1+f_{ee\gamma})\times{\rm BG}_{\rm uncorr}^{ee\gamma}.$ (5)
An example of the residual background is given in Fig. 7 for the $e^{+}e^{-}$
pair $p_{T}$ range of 1 to 1.2 GeV/$c$ and 0%–20% centrality selection. In
panel (a), ${\rm FG}^{ee\gamma}$ with all the background components are shown.
Panel (b) gives the second-order polynomial fit, $f_{ee\gamma}$, to the ratio
$(\mbox{${\rm FG}^{ee\gamma}$}-\mbox{${\rm BG}^{ee\gamma}$})/\mbox{${\rm
BG}_{\rm uncorr}^{ee\gamma}$}$, which is used to determine the
residual background. Due to the unfavorably small signal-to-background ratio
in this case, the residual background in the $\pi^{0}$ mass region is
$\approx$9.4%. The residual background drops quickly with increasing $p_{T}$ and
towards more peripheral centrality bins; for example, as $p_{T}$ increases to
3 GeV/$c$, the residual background reduces to 2.7%. For each $p_{T}$-centrality bin combination,
$N_{\gamma}^{\rm{\pi^{0},tag}}$ is extracted by integrating the number of
entries in a window around the $\pi^{0}$ peak
($0.09<\mbox{$m_{ee\gamma}$}<0.19$) GeV/$c^{2}$ after all background
subtractions are applied.
Figure 7: (a) An example for ${\rm FG}^{ee\gamma}$ and the various background
components after normalization in the indicated regions. (b) The ratio
$(\mbox{${\rm FG}^{ee\gamma}$}-\mbox{${\rm BG}^{ee\gamma}$})/\mbox{${\rm
BG}_{\rm uncorr}^{ee\gamma}$}$ and the polynomial fit to determine the
residual-background correction $f_{ee\gamma}$.
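A minimal sketch of the residual-background correction of Eq. (5) is given below (illustrative only; the histograms are assumed to be numpy arrays on a common mass axis, and the fit ranges are the sidebands quoted above):

```python
import numpy as np

def subtract_with_residual(m, fg, bg_uncorr, bg_corr):
    """Apply Eq. (5): SG = FG - BG_corr - (1 + f_ee_gamma) * BG_uncorr,
    with f_ee_gamma a 2nd-order polynomial fitted to (FG - BG)/BG_uncorr
    in the sidebands 0.05-0.08 and 0.23-0.45 GeV/c^2."""
    side = ((m > 0.05) & (m < 0.08)) | ((m > 0.23) & (m < 0.45))
    ratio = (fg - bg_uncorr - bg_corr) / bg_uncorr
    coeffs = np.polyfit(m[side], ratio[side], deg=2)
    f_residual = np.polyval(coeffs, m)
    return fg - bg_corr - (1.0 + f_residual) * bg_uncorr

# The tagged yield N_pi0_tag is then the integral of the result over
# the window 0.09 < m_ee_gamma < 0.19 GeV/c^2.
```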
### III.3 Conditional probability $\langle\epsilon_{\gamma}f\rangle$
The probability, $\langle\epsilon_{\gamma}f\rangle$, that the second photon is
in the acceptance and is reconstructed, given a conversion $e^{+}e^{-}$ pair
from a $\pi^{0}$ decay, is extracted from the single $\pi^{0}$ simulation. In
this simulation, individual $\pi^{0}$ are tracked through the PHENIX MC-
simulation framework. The $\pi^{0}$ are generated with the published $p_{T}$
spectrum (see Sec. III.4) and uniformly in pseudorapidity, $\eta$, and azimuthal
angle, $\phi$.
The energy scale and resolution of the EMCal in the MC simulation are tuned as
closely as possible to resemble the one observed in data by comparing the mean
and width of the measured and simulated $\pi^{0}$ mass distribution. The
$\pi^{0}$ are reconstructed through the
$\mbox{$\pi^{0}$}\rightarrow\gamma\gamma$ decay channel. For this purpose, an
asymmetry cut of less than 20% between the energies of the two decay photons is
applied to keep the two-photon energies similar.
In the single $\pi^{0}$ MC simulation, $e^{+}e^{-}$ pairs in the mass window
$0.04<\mbox{$m_{e^{+}e^{-}}$}<0.12$ GeV/$c^{2}$ are counted to determine
$N_{ee}^{\rm{\pi^{0}}}$, the number of reconstructed $e^{+}e^{-}$ pairs in a
given $e^{+}e^{-}$ pair $p_{T}$ bin. The sub-sample for which the second
photon of the $\pi^{0}$ decay is reconstructed as a shower in the EMCal is
counted as $N_{ee}^{\rm{\pi^{0}},tag}$. The value of
$\langle\epsilon_{\gamma}f\rangle$ is then determined as:
$\langle\epsilon_{\gamma}f\rangle=\frac{N_{ee}^{\pi^{0},{\rm tag}}}{N_{ee}^{\pi^{0}}}.$ (6)
For the extraction of $N_{ee}^{\rm{\pi^{0}},tag}$ the presence of other
showers in the EMCal needs to be taken into account. This is done by embedding
the showers from the simulated single $\pi^{0}$ into the EMCal response from
Au$+$Au collisions at the tower level. The combined EMCal information is then
reclustered to form new showers. All of the showers that contain energy
deposited by the embedded single $\pi^{0}$ (identified by the MC ancestry
information) are combined with the $e^{+}e^{-}$ pair.
Figure 8: Conditional probability $\langle\epsilon_{\gamma}f\rangle$ as a
function of $p_{T}$ in 0%–20%, 20%–40%, 40%–60% and 60%–93% centrality
classes.
Similar to the $N_{\gamma}^{\rm{\pi^{0},tag}}$ extraction from data, a
residual background subtraction is applied. This eliminates any remaining
background inside the $\pi^{0}$ counting window. The residual background is
estimated by a second-order polynomial fit in the mass ranges
0.05–0.08 and 0.23–0.45 GeV/$c^{2}$. This residual background mainly comes
from events where both decay photons convert to $e^{+}e^{-}$ pairs, and the
reconstructed conversion photon gets paired with the EMCal cluster of the
$e^{+}$ or $e^{-}$ from the other conversion. The extracted
$\langle\epsilon_{\gamma}f\rangle$ is shown in Fig. 8 as a function of the
$e^{+}e^{-}$ pair $p_{T}$ for the four centrality selections.
The increasing trend of $\langle\epsilon_{\gamma}f\rangle$ with increasing
conversion photon $p_{T}$ is partly due to the decrease in the opening angle
between the conversion photon and the second photon so that the second photon
is more likely to fall into the acceptance of the EMCal. Another important
factor is that the average energy of the second photon increases with
increasing conversion photon $p_{T}$, and hence, the efficiency of the energy
threshold cut increases towards higher $p_{T}$. The difference in
$\langle\epsilon_{\gamma}f\rangle$ between centrality classes is mainly related
to the shower-shape ($\chi^{2}$) selection, because showers are more distorted
in central Au$+$Au collisions due to the larger detector occupancy, which
results in more accidental overlaps from the underlying event, and to the
centrality-dependent parent $\pi^{0}$ $p_{T}$ distributions.
Table 1: Parameters of the modified Hagedorn function, Eq. (7), fitted to PHENIX data [33, 34, 35] from Au$+$Au collisions at $\sqrt{s_{{}_{NN}}}=200$ GeV.

centrality | $A$ [$c$\,(GeV/$c$)$^{-2}$] | $a$ [(GeV/$c$)$^{-1}$] | $b$ [(GeV/$c$)$^{-2}$] | $p_{0}$ [GeV/$c$] | $n$
---|---|---|---|---|---
min. bias | 504.5 | 0.5169 | 0.1626 | 0.7366 | 8.274
0%–10% | 1331.0 | 0.5654 | 0.1945 | 0.7429 | 8.361
10%–20% | 1001.0 | 0.5260 | 0.1628 | 0.7511 | 8.348
20%–30% | 750.7 | 0.4900 | 0.1506 | 0.7478 | 8.229
30%–40% | 535.3 | 0.4534 | 0.1325 | 0.7525 | 8.333
40%–50% | 364.5 | 0.4333 | 0.1221 | 0.7385 | 8.261
50%–60% | 231.2 | 0.4220 | 0.1027 | 0.7258 | 8.220
60%–70% | 118.1 | 0.4416 | 0.0559 | 0.7230 | 8.163
70%–80% | 69.2 | 0.2850 | 0.0347 | 0.7787 | 8.532
80%–93% | 51.1 | 0.2470 | 0.0619 | 0.7101 | 8.453
### III.4 Cocktail ratio $\gamma^{\rm hadr}$/$\gamma^{\rm{\pi^{0}}}$
The last ingredient to calculate $R_{\gamma}$ is the cocktail ratio
$\gamma^{\rm hadr}$/$\gamma^{\rm{\pi^{0}}}$ of photons from $\pi^{0}$, $\eta$,
$\omega$, and $\eta^{\prime}$ decays over those from $\pi^{0}$ decays. The
cocktail ratio is obtained using the PHENIX meson decay generator EXODUS,
which simulates mesons according to given input $p_{T}$ spectra, decays them
based on the known decay kinematics and branching ratios, and aggregates the
decay photons in the PHENIX detector acceptance.
The photons from $\pi^{0}$ decays are generated from distributions obtained by
fitting a modified Hagedorn function (Eq. 7) to charged pion [33] and neutral
pion [34, 35] data measured by PHENIX.
$E\frac{d^{3}N}{dp^{3}}=A\
\Big{(}e^{-(ap_{T}+bp_{T}^{2})}+\frac{\mbox{$p_{T}$}}{p_{0}}\Big{)}^{-n}.$ (7)
The fit parameters are summarized in Table 1 for MB collisions, as well as for
nine centrality bins. The $\eta$ meson $p_{T}$ spectrum is obtained by
multiplying the $\pi^{0}$ spectrum with the $\eta/\mbox{$\pi^{0}$}$ ratio. The
ratio is extracted from an analysis of world data [36], which demonstrates a
universal value at high $p_{T}$ consistent with 0.487$\pm$0.024, independent
of collision energy, system size or centrality. The work takes into account
the fact that at lower $p_{T}$ the ratio deviates from $m_{T}$ scaling, and
that there are centrality-dependent changes of $\eta/\mbox{$\pi^{0}$}$ due to
radial flow. The contributions from $\omega$ and $\eta^{\prime}$ decays are
based on $m_{T}$-scaled $p_{T}$ distributions, obtained from the $\pi^{0}$
spectrum $f(\mbox{$p_{T}$})$ by replacing $p_{T}$ with
$\sqrt{p_{T}^{2}+m^{2}_{\rm meson}-m^{2}_{\pi^{0}}}$. The normalizations of
$\omega$ and $\eta^{\prime}$ are fixed at $p_{T}$ = 5 GeV/$c$ to $0.9\pm 0.06$
and $0.25\pm 0.075$, respectively [6]. The cocktail ratio $\gamma^{\rm
hadr}$/$\gamma^{\rm{\pi^{0}}}$ is shown in Fig. 9.
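As an illustration, the parametrization of Eq. 7 can be evaluated directly with the Table 1 parameters. The following minimal Python sketch (function and variable names are our own) reproduces the steeply falling minimum-bias $\pi^{0}$ input spectrum used by the decay generator:

```python
import numpy as np

def hagedorn(pt, A=504.5, a=0.5169, b=0.1626, p0=0.7366, n=8.274):
    """Invariant yield E d^3N/dp^3 of Eq. 7 with the min.bias parameters
    of Table 1; pt in GeV/c."""
    return A * (np.exp(-(a * pt + b * pt**2)) + pt / p0) ** (-n)

pt = np.array([1.0, 2.0, 5.0])   # GeV/c
print(hagedorn(pt))              # steeply falling pi0 spectrum
```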
Figure 9: Cocktail ratio as a function of $p_{T}$ in the most central (0%–20%)
and the most peripheral (60%–93%) centrality classes.
## IV Systematic uncertainties
This section describes the sources of systematic uncertainties for each of the
three components for the calculation of $R_{\gamma}$. The systematic
uncertainties are categorized into three types according to the correlation
between the measured data points:
* •
Type A: No (or unknown) correlation between data points – uncertainties on the
individual data points can fluctuate independently, in the same way as the
statistical uncertainties.
* •
Type B: The uncertainties are correlated between data points – the fluctuation
of each data point can be determined by the fluctuation of the neighboring
points.
* •
Type C: A special form of type B uncertainty – every data point fluctuates
with the exact same fraction.
In the final results, type A systematic uncertainties are combined with the
statistical uncertainties and type B and C are combined to obtain the total
systematic uncertainty.
The following subsections discuss the major individual sources contributing to
the systematic uncertainties on $R_{\gamma}$ and on the direct-photon yield.
All contributions are summarized in Table 2 and depicted in Figs. 10 and 11
as functions of $p_{T}$ for $R_{\gamma}$ and $\gamma^{\rm hadr}$. The final
systematic uncertainties on $\gamma^{\rm dir}$ and on all quantities derived
from $\gamma^{\rm dir}$ are determined using the error-sampling method
discussed in Appendix B.
Figure 10: Systematic uncertainties of $R_{\gamma}$ as a function of
conversion photon $p_{T}$ in 0%–20%, 20%–40%, 40%–60% and 60%–93% centrality
bins. The black curve corresponds to total uncertainty, and colored curves
correspond to individual sources. The lines representing uncertainties from
the energy scale and the conversion loss overlap at 3%, as do the lines
representing uncertainties from the $\gamma$ reconstruction efficiency,
acceptance, and input $p_{T}$ spectra.
Figure 11: Systematic uncertainties of $\gamma^{\rm hadr}$ as a function of
photon $p_{T}$.
Table 2: Summary of systematic uncertainties for $R_{\gamma}$ and $\gamma^{\rm dir}$. Uncertainties for which ranges are given vary with $p_{T}$. For details see Figs. 10 and 11. Observable | Factor | Source | correlation | correlation | 0%–20% | 20%–40% | 40%–60% | 60%–93%
---|---|---|---|---|---|---|---|---
| | | in $p_{T}$ | in centrality | | | |
$R_{\gamma}$ | $N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$ | $N_{\gamma}^{\rm{incl}}$ purity | Type B | Type B | $<1\%$ | $<1\%$ | $<1\%$ | $<1\%$
| | $N_{\gamma}^{\rm{\pi^{0},tag}}$ residual background | Type A | Type A | 1.5%–4.5% | 0.5%–4% | 0.5%–4% | 0.5%–4%
| | $N_{\gamma}^{\rm{\pi^{0},tag}}$ event mixing | Type B | Type B | 1.5% | 1.5% | 1.5% | 1.5%
| $\langle\epsilon_{\gamma}f\rangle$ | energy scale | Type B | Type B | $3\%$ | $3\%$ | $3\%$ | $3\%$
| | conversion loss | Type C | Type C | $3\%$ | $3\%$ | $3\%$ | $3\%$
| | $\gamma$ efficiency | Type B | Type A | $<1.4\%$ | $<1\%$ | $<1\%$ | $<1\%$
| | active area & acceptance | Type C | Type C | $1\%$ | $1\%$ | $1\%$ | $1\%$
| | input $\pi^{0}$ $p_{T}$ spectra | Type B | Type A | $1\%$ | $1\%$ | $1\%$ | $1\%$
| $\gamma^{hadr}/\gamma^{\pi^{0}}$ | $\eta/\pi^{0}$ | Type B | Type C | 1–1.5% | 1–1.5% | 1–1.5% | 1–1.5%
| | $\omega,\eta^{\prime}$ | Type B | Type C | $<1\%$ | $<1\%$ | $<1\%$ | $<1\%$
| $\gamma^{hadr}$ | input $\pi^{0}$ $p_{T}$ spectrum | Type B | Type A | 10%–24% | 10%–24% | 10%–25% | 10%–24%
### IV.1 Systematic uncertainties on
$N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$
#### IV.1.1 Purity of the conversion photon sample
Due to the high multiplicity of photons produced in Au$+$Au collisions, the
background in the conversion sample from uncorrelated $e^{+}e^{-}$ pairs can
be as large as 10% for the most central collisions and the lowest $p_{T}$ from
0.8 to 1.0 GeV/$c$. This background is subtracted statistically with limited
accuracy. To estimate the effect on the final results, significantly more and
less stringent conversion selection cuts were applied, thereby increasing or
reducing the purity. The value of $\langle\epsilon_{\gamma}f\rangle$
$N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$, obtained from the
different cuts, varies by less than 1%. This range is quoted as systematic
uncertainty due to the limited purity of the conversion sample.
#### IV.1.2 $\pi^{0}$ yield extraction
One of the main sources of systematic uncertainty on the $R_{\gamma}$
measurement is the tagged photon or $\pi^{0}$ yield extraction. The
uncertainty of the $\pi^{0}$ yield extraction arises from two sources: (i)
the residual background subtraction, which is highly correlated with the
statistical accuracy of the mixed-event background normalization, and (ii)
the imperfect description of the large backgrounds by the event-mixing techniques.
To evaluate the size of the uncertainty from the residual background
subtraction, different estimates are compared. These include using different
functional forms for the fit and different fit ranges to anchor the residual
background fit. In addition, the counting window for $\pi^{0}$ signal
extraction is varied. This gives a spread of
$\langle\epsilon_{\gamma}f\rangle$
$N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$ values in each
$p_{T}$ and centrality bin. The standard deviation of the spread is quoted as
the uncertainty. Due to the correlation with the statistical accuracy of the
foreground in the background region, this uncertainty depends on $p_{T}$ and
centrality.
To test the validity of the event-mixing techniques, an MC simulation with
high multiplicity $\pi^{0}$ events is performed. Details are discussed in
Appendix A. The simulation shows that
$N_{\gamma}^{\rm{\pi^{0},tag}}$/$\langle\epsilon_{\gamma}f\rangle$ can be
determined with the event-mixing technique to better than 1.5%.
### IV.2 Systematic uncertainty on $\langle\epsilon_{\gamma}f\rangle$
#### IV.2.1 Energy Scale
The accuracy of the energy scale of the EMCal is the main source of systematic
uncertainties in the $\langle\epsilon_{\gamma}f\rangle$ evaluation. Because of
the energy threshold cut, the second photon is reconstructed only for
$\approx$25% of the $e^{+}e^{-}$ pairs with the lowest $p_{T}$, even though
the photon was in the EMCal acceptance. Any potential mismatch of the energy
scale between the simulation and real data will cause
$\langle\epsilon_{\gamma}f\rangle$ to be off; a higher (lower) energy scale in
simulation will lead to an underestimate (overestimate) of
$\langle\epsilon_{\gamma}f\rangle$. As mentioned earlier, to improve the
accuracy, the EMCal response in the simulation is carefully tuned to the data
using the $\pi^{0}$ mass measurement in the
$\mbox{$\pi^{0}$}\rightarrow\gamma\gamma$ decay channel. The tuning includes
scaling the MC energy scale by 0.3% and 2.2% for the PbSc and PbGl
calorimeters, respectively. In addition, the nonlinearity of the energy
response is adjusted by up to 5% at the lowest measured energies. After the
tuning, the $\pi^{0}$ peak positions in data and MC are consistent to better
than 1%. Considering the additional uncertainty due to the adjustment of the
nonlinearity, the energy scale is known to better than 2%. Changing the energy
scale by $\pm 2$% introduces a systematic uncertainty on
$\langle\epsilon_{\gamma}f\rangle$ of 3% at low $p_{T}$, decreasing towards
high $p_{T}$. The uncertainty on the energy resolution has a negligible
effect.
#### IV.2.2 Conversion Photon Loss
Another important source of systematic uncertainty on
$\langle\epsilon_{\gamma}f\rangle$ is related to the probability that the
second photon converts to an $e^{+}e^{-}$ pair before reaching the EMCal.
Depending on the location of the conversion point, the second photon may not
be properly reconstructed, thereby reducing
$\langle\epsilon_{\gamma}f\rangle$. To account for the “conversion loss”, the
material budget, i.e. thickness and location of material, implemented in the
simulation framework must accurately reflect reality. If there is a mismatch,
the probability for conversions to occur will be different and, hence,
$\langle\epsilon_{\gamma}f\rangle$ will be systematically off. As there is
essentially no magnetic field after the DC exit window, the $e^{+}e^{-}$ pair
from conversions between the DC and the EMCal will likely merge into one
shower in the EMCal. Therefore, the value of
$\langle\epsilon_{\gamma}f\rangle$ is most sensitive to differences in the
material budget of the VTX. A comparison of the available information about
the materials and their thickness for all detector subsystems reveals that the
conversion probability in material within the magnetic field is known to
better than 3%, which directly translates into an uncertainty of $3\%$ on
$R_{\gamma}$.
#### IV.2.3 Photon Efficiency
An EMCal shower shape, $\chi^{2}$, cut is used to identify photon candidates
among the EMCal energy clusters and to reduce the number of hadrons in the
sample. Similar to the energy scale uncertainty, a difference between the
shower shape in simulation and the data will translate directly into a
systematic shift of $\langle\epsilon_{\gamma}f\rangle$. To evaluate this
uncertainty, the $\chi^{2}$ cut is varied simultaneously in both data and
simulation and $\langle\epsilon_{\gamma}f\rangle$
$N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$ is recalculated. It
changes by $1.4\%$ for 0.8–2 GeV/$c$ in the 0%–20% centrality bin and by less
than 1$\%$ for all the other cases.
#### IV.2.4 Active area and geometric acceptance
Due to the limited geometrical acceptance of EMCal and some inactive areas,
the second photon is registered only for $\approx$35% of the $e^{+}e^{-}$
pairs at the lowest $p_{T}$. Therefore, the accuracy with which the acceptance
and dead areas are known will contribute to the systematic uncertainties on
$\langle\epsilon_{\gamma}f\rangle$. The uncertainty of the acceptance is the
result of the accuracy with which the radial location of the EMCal sectors can
be determined. The possible remaining offset leads to a $<0.3\%$ difference in
acceptance along the $\phi$ direction and $<0.9\%$ along the $z$ direction. The
dead areas in the real EMCal are carefully matched in the MC simulation, and
the accuracy of the dead-area determination is found to be better than 0.6$\%$.
The remaining difference is due to cases where a tower malfunctioned in only a
very small number of events and was not masked out in the simulation.
Combining all these effects, the
systematic uncertainty on $R_{\gamma}$ from the acceptance is set to 1%.
#### IV.2.5 Input $\pi^{0}$ distribution
Because $\langle\epsilon_{\gamma}f\rangle$ is averaged over all parent
$\pi^{0}$ $p_{T}$ that contribute to a given $e^{+}e^{-}$ pair $p_{T}$ bin,
the $p_{T}$ dependence of $\langle\epsilon_{\gamma}f\rangle$ is sensitive to
the shape of the $\pi^{0}$ distribution. The $\pi^{0}$ parent distribution was
determined for each centrality selection by a fit to the best available data
from Au$+$Au collisions at $\sqrt{s_{{}_{NN}}}=200$ GeV measured by the same
experiment [33, 34, 35]. The remaining uncertainty on
$\langle\epsilon_{\gamma}f\rangle$ is smaller than 1%.
### IV.3 Systematic uncertainty on $\gamma^{\rm
hadr}$/$\gamma^{\rm{\pi^{0}}}$
The cocktail ratio $\gamma^{\rm hadr}$/$\gamma^{\rm{\pi^{0}}}$ accounts for
photons from hadron decays other than $\pi^{0}$, which contribute
$\approx$23% of the decay photons at high $p_{T}$. Of the additional decay
photons, more than 80% are from the $\eta\rightarrow\gamma\gamma$ decay; hence,
the accuracy with which $\eta/\pi^{0}$ is known will determine the systematic
uncertainties on $R_{\gamma}$ from this source. The $p_{T}$ and centrality
dependent upper and lower bounds on $\eta/\pi^{0}$ for Au$+$Au collisions at
$\sqrt{s_{{}_{NN}}}=200$ GeV are taken from [36]. Together with the much
smaller uncertainty on the contribution from $\omega$ and $\eta^{\prime}$
decays, the systematic uncertainty on $R_{\gamma}$ is below 2% for the entire
$p_{T}$ range.
### IV.4 Systematic Uncertainties on $\gamma^{\rm dir}$
Once $R_{\gamma}$ is determined, the direct-photon yield $\gamma^{\rm dir}$ is
calculated as:
$\mbox{$\gamma^{\rm dir}$}=(\mbox{$R_{\gamma}$}-1)\ \mbox{$\gamma^{\rm
hadr}$}.$ (8)
In addition to the uncertainties on $R_{\gamma}$, the uncertainty on
$\gamma^{\rm hadr}$ needs to be determined. These systematic uncertainties
have been studied in detail in [6]. The main sources of uncertainty come from
the accuracy with which the $\pi^{0}$ $p_{T}$ spectrum can be determined.
These largely cancel in $R_{\gamma}$, but propagate directly to $\gamma^{\rm
hadr}$. The input $\pi^{0}$ spectrum is based on measurements of charged
pions, and $\pi^{0}$ from different data taking periods (see Sec. III.4). Each
data set comes with its own systematic uncertainties, and in addition, the
differences between different measurements are of the order of 10% [37]. The
latter is the dominant uncertainty. The uncertainty on the spectra of other
mesons ($\eta$, $\eta^{\prime}$, $\omega$) also contributes to the uncertainty
on $\gamma^{\rm hadr}$, but to a much smaller extent.
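As a concrete illustration of Eq. 8 (with made-up numbers, not measured values), an excess of $R_{\gamma}=1.2$ over unity converts 20% of the decay-photon yield into direct photons:

```python
# Worked example of Eq. 8; the inputs are illustrative placeholders.
def gamma_dir(r_gamma, gamma_hadr):
    return (r_gamma - 1.0) * gamma_hadr

print(gamma_dir(1.2, 5.0e-3))   # -> 1.0e-3, i.e. 20% of gamma_hadr
```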
## V Results
### V.1 Direct photon $R_{\gamma}$
Figure 12 shows $R_{\gamma}$ as a function of photon $p_{T}$ for each 20%
centrality class. The vertical error bar on each point corresponds to the
statistical uncertainty, while the box gives the systematic uncertainty. The
new results are compared with all other published PHENIX results for Au$+$Au
at $\sqrt{s_{{}_{NN}}}=200$ GeV; these were obtained with different methods
and have largely independent systematic uncertainties. The open circles were
determined using the external conversion method deploying the HBD detector as
converter [6], the full squares are from a virtual photon internal conversion
measurement [4], and the open squares were measured with the EMCal alone [38].
All measurements agree well within their independent systematic uncertainties.
The 2014 data presented here have smaller statistical uncertainties than in
previous publications at RHIC due to the increased luminosity and
significantly larger amount of conversion material. The new results provide a
continuous measurement across a wide range of $p_{T}$ from 0.8 to 10 GeV/$c$.
This range has previously been covered by measurements done with different
techniques with different systematics. Up to 3–4 GeV/$c$, internal or
external photon conversions to $e^{+}e^{-}$ pairs have been used, while above
4 GeV/$c$, photons were measured in the EMCal. For all centrality selections,
$R_{\gamma}$ shows a significant excess that is rather constant below
$\approx$3 GeV/$c$. Beyond that, $R_{\gamma}$ increases with $p_{T}$, the
increase being most pronounced for central collisions, and $R_{\gamma}$
continuously decreases towards more peripheral collisions. This is expected as
phenomena such as jet quenching reduce the number of decay photons from hadron
decays in more central collisions, which in turn increases $R_{\gamma}$ [34,
35].
The high statistics of the 2014 data set allow the data sample to be divided
into nine centrality bins, from 0%–10% to 80%–93%, each 10% wide except for
the last one, which is slightly larger. The resulting $R_{\gamma}$ are shown in
Fig. 13. Up to 50%–60% centrality, data from the earlier calorimeter
measurement [38] are also shown.
For most bins the overall shape of $R_{\gamma}$ as a function of $p_{T}$ is
similar to what is observed in Fig. 12, with a notable difference for panel
(i), which is the most-peripheral centrality 80%–93%. Below 5 GeV/$c$, the
most-peripheral Au$+$Au data show no significant excess above unity and are
very consistent with the direct-photon result from $p$$+$$p$ collisions, which
is also shown in panel (i).
Figure 12: The ratio, $\mbox{$R_{\gamma}$}=\mbox{$\gamma^{\rm
incl}$}/\mbox{$\gamma^{\rm hadr}$}$, as a function of conversion photon
$p_{T}$ in 0%–20%, 20%–40%, 40%–60% and 60%–93% centrality bins. The 2014
Au$+$Au data at $\sqrt{s_{{}_{NN}}}=200$ GeV are compared to results from
previous PHENIX publications for the same system and $\sqrt{s_{{}_{NN}}}$.
Figure 13: $R_{\gamma}$ of direct photons as a function of conversion photon
$p_{T}$ in 0%–10% to 80%–93% centrality bins.
The MC sampling method is used to calculate both the statistical and
systematic uncertainties on $\gamma^{\rm dir}$ and all quantities derived from
the direct-photon $p_{T}$ spectra. This method correctly propagates the
uncertainties in the presence of unphysical values of $\mbox{$R_{\gamma}$}<1$
and of $p_{T}$- and centrality-dependent correlations; it is discussed in
detail in Appendix B.
Figure 14: Invariant yield of direct photons as a function of conversion
photon $p_{T}$ in 0%–10% to 80%–93% centrality bins.
Figure 15: Invariant yield of direct photons as a function of conversion
photon $p_{T}$ in (a) 0%–20%, (b) 20%–40%, (c) 40%–60%, and (d) 60%–93%
centrality bins.
### V.2 Direct-photon invariant yield
The direct-photon spectra are calculated from $R_{\gamma}$ and $\gamma^{\rm
hadr}$ using Eq. 8. The results for all 10% centrality selections are given in
Fig. 14. (As the yields in the most-peripheral bin, 80%–93%, are mostly upper
limits, this bin is not included in the estimation of any further derived
quantities in the 10% centrality selections.) Figure 15 compares the
direct-photon spectra with previous measurements in the broader centrality
bins (a) 0%–20%, (b) 20%–40%, (c) 40%–60%, and (d) 60%–93%.
Each panel also presents the $N_{\rm coll}$-scaled pQCD calculation [12] and a
fit to direct-photon data from $p$$+$$p$ collisions at $\sqrt{s}=200$ GeV [39,
40, 41]. The $p$$+$$p$ fit is performed with a pQCD-inspired functional form
[42]:
$\frac{d^{3}N}{d^{2}\mbox{$p_{T}$}dy}=\frac{A_{pp}}{(1+(\frac{p_{T}}{p_{0}})^{2})^{n}},$
(9)
where the parameters are $A_{pp}=1.60\cdot 10^{-4}$ (GeV/$c$)$^{-2}$,
$p_{0}=1.45$ GeV/$c$, and $n=3.3$. The error band around the central fit
function represents the uncertainty propagated from both the data and the
unknown true functional form of the spectrum down to very low $p_{T}$. The
$p$$+$$p$ fit and the pQCD calculation agree well above 2 GeV/$c$, and can be
used as an estimate for the prompt-photon contribution.
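For reference, the $p$$+$$p$ fit of Eq. 9 and its $N_{\rm coll}$ scaling are straightforward to evaluate. A minimal sketch with the quoted parameters (the function names are assumptions; $N_{\rm coll}=770.6$ for 0%–20% is taken from Table 3):

```python
import numpy as np

def pp_fit(pt, A_pp=1.60e-4, p0=1.45, n=3.3):
    """Eq. 9: invariant yield d^3N/(d^2pT dy) in (GeV/c)^-2 for p+p at 200 GeV."""
    return A_pp / (1.0 + (pt / p0) ** 2) ** n

def prompt_estimate(pt, n_coll):
    """N_coll-scaled p+p fit, used as the prompt-photon estimate."""
    return n_coll * pp_fit(pt)

pt = np.linspace(1.0, 10.0, 10)
print(prompt_estimate(pt, n_coll=770.6))   # 0%-20% centrality
```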
Figure 15 also shows that the direct-photon yield for $p_{T}$ larger than 5
GeV/$c$ is well described by the $N_{\rm coll}$-scaled $p$$+$$p$ result and
pQCD calculations, which confirms that the high-$p_{T}$ direct photons are
predominantly from initial hard-scattering processes. Below 4–5 GeV/$c$, a
clear direct-photon excess develops above the prompt component, gradually
becoming larger towards lower $p_{T}$.
Figure 16: Nonprompt direct-photon yield as a function of conversion photon
$p_{T}$ in (a) 0%–20%, (b) 20%–40%, (c) 40%–60%, and (d) 60%–93% centrality
bins.
### V.3 Nonprompt direct-photon excess
To extract the direct-photon excess above the prompt-photon contribution, the
$N_{\rm coll}$ scaled $p$$+$$p$ fit is subtracted from the Au$+$Au data. This
excess is thought to be mostly the radiation that is emitted during the
collision from the hot-expanding fireball, and will be referred to here as
nonprompt direct-photon spectra. Figure 16 compares the nonprompt direct-
photon spectra to previously published results from Au$+$Au collisions at
$\sqrt{s_{{}_{NN}}}=200$ GeV [6], which had significantly lower statistics.
The new 2014 data extend the coverage, both in $p_{T}$ and centrality.
Table 3: Inverse slopes fitted to the direct-photon spectra in different $p_{T}$ ranges, and for different centrality selections. For each centrality range, $N_{\rm coll}$ and $dN_{\rm ch}/d\eta$ values are quoted, which are taken from previous work [43, 44], except for the $dN_{\rm ch}/d\eta$ values for the two most peripheral bins. Those were extrapolated using a fit of the form $\mbox{$dN_{\rm ch}/d\eta$}=B(\mbox{$N_{\rm coll}$})^{\beta}$. centrality | $dN_{\rm ch}/d\eta$ | $N_{\rm coll}$ | $T_{\rm eff}$ (GeV/$c$) | $T_{\rm eff}$ (GeV/$c$)
---|---|---|---|---
| | | $0.8<\mbox{$p_{T}$}<1.9$ GeV/$c$ | $2<\mbox{$p_{T}$}<4$ GeV/$c$
0%–20% | $519.2\pm 26.3$ | $770.6\pm 79.9$ | $0.277\pm 0.017\ ^{+0.036}_{-0.014}$ | $0.428\pm 0.031\ ^{+0.031}_{-0.030}$
20%–40% | $225.4\pm 13.2$ | $241.1\pm 28.4$ | $0.264\pm 0.010\ ^{+0.014}_{-0.007}$ | $0.354\pm 0.019\ ^{+0.020}_{-0.030}$
40%–60% | $85.5\pm 8.0$ | $82.6\pm 9.3$ | $0.247\pm 0.007\ ^{+0.005}_{-0.004}$ | $0.392\pm 0.023\ ^{+0.022}_{-0.022}$
60%–93% | $16.4\pm 2.8$ | $12.1\pm 3.1$ | $0.253\pm 0.011\ ^{+0.012}_{-0.006}$ | $0.331\pm 0.036\ ^{+0.031}_{-0.041}$
0%–10% | $623.9\pm 32.2$ | $951\pm 98.5$ | $0.268\pm 0.024\ ^{+0.026}_{-0.012}$ | $0.514\pm 0.061\ ^{+0.066}_{-0.039}$
10%–20% | $414.2\pm 20.2$ | $590.1\pm 61.1$ | $0.303\pm 0.024\ ^{+0.062}_{-0.021}$ | $0.358\pm 0.033\ ^{+0.024}_{-0.035}$
20%–30% | $274\pm 14.8$ | $357.2\pm 35.5$ | $0.263\pm 0.011\ ^{+0.014}_{-0.007}$ | $0.351\pm 0.024\ ^{+0.020}_{-0.030}$
30%–40% | $176.8\pm 11.6$ | $207.5\pm 21.2$ | $0.256\pm 0.011\ ^{+0.009}_{-0.005}$ | $0.333\pm 0.024\ ^{+0.020}_{-0.032}$
40%–50% | $109.4\pm 9.1$ | $111.1\pm 10.8$ | $0.244\pm 0.009\ ^{+0.003}_{-0.005}$ | $0.389\pm 0.029\ ^{+0.020}_{-0.021}$
50%–60% | $61.6\pm 7.1$ | $54.1\pm 7.9$ | $0.246\pm 0.010\ ^{+0.005}_{-0.005}$ | $0.345\pm 0.031\ ^{+0.019}_{-0.032}$
60%–70% | 32 $\pm$ 5 | $24\pm 6$ | $0.261\pm 0.015\ ^{+0.020}_{-0.008}$ | $0.319\pm 0.049\ ^{+0.037}_{-0.042}$
70%–80% | 16 $\pm$ 4 | $10\pm 3$ | $0.263\pm 0.016\ ^{+0.016}_{-0.007}$ | $0.335\pm 0.044\ ^{+0.020}_{-0.035}$
80%–93% | 7 $\pm$ 2 | $4\pm 1$ | $-$ | $-$
The data are very consistent in the region of overlap. In the range 0.8 to 1.9
GeV/$c$, the data are fitted with an exponential function and the results are
also shown on the panels of Fig. 16. The slope values are given in Table 3.
All centrality selections are consistent with an average inverse slope,
$T_{\rm eff}$, of ${\approx}0.260{\pm}0.011$ GeV/$c$. However, it is evident
from Fig. 16 that the nonprompt direct-photon spectra are not described by a
single exponential but rather have an inverse slope, $T_{\rm eff}$, that
continually increases with $p_{T}$. Figure 17 brings this out more clearly:
each nonprompt direct-photon spectrum is divided by a fit with a fixed slope,
$T_{\rm eff}$ = 0.260 GeV/$c$. All centrality selections follow the same
trend. Over the $p_{T}$ range up to 2 GeV/$c$, the ratios are consistent
with unity, but above 2 GeV/$c$, they start to rise monotonically.
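The inverse-slope extraction amounts to a two-parameter exponential fit in a restricted $p_{T}$ window. A minimal sketch with toy data (the points below are illustrative, not the measured spectrum):

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(pt, A, T_eff):
    """Exponential form A * exp(-pT / T_eff) fit in 0.8 < pT < 1.9 GeV/c."""
    return A * np.exp(-pt / T_eff)

pt = np.array([0.9, 1.1, 1.3, 1.5, 1.7, 1.9])
y = 1.0e-2 * np.exp(-pt / 0.26)              # toy spectrum with T_eff = 0.26
popt, pcov = curve_fit(expo, pt, y, p0=[1e-2, 0.3])
print(f"T_eff = {popt[1]:.3f} GeV/c")        # recovers ~0.26 GeV/c
```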
Figure 17: Ratio of the yield of nonprompt direct photons over the exponential
fit result ($T_{\rm eff}$ fixed to 0.26 GeV/$c$) as a function of photon
$p_{T}$.
Figure 18: $T_{\rm eff}$ as a function of charged-particle multiplicity at
midrapidity.
To quantify this changing slope, the nonprompt direct-photon spectra are
fitted with a second exponential function in the $p_{T}$ range from 2 to 4
GeV/$c$; the results are also included in Fig. 16. All data are consistent
with an average inverse slope of $0.376\pm 0.037$ GeV/$c$, which is
significantly larger than the slope observed below $p_{T}$ = 2 GeV/$c$. Above
4 GeV/$c$, the statistical and systematic uncertainties from the prompt-photon
subtraction become too large for a detailed analysis.
To establish any dependence on the system size, the nonprompt direct photon
spectra are determined for each 10% centrality bin, and subsequently fitted by
two exponential functions in the $p_{T}$ ranges $0.8<\mbox{$p_{T}$}<1.9$
GeV/$c$ and $2<\mbox{$p_{T}$}<4$ GeV/$c$. The resulting $T_{\rm eff}$ values
are tabulated in Table 3 and depicted in Fig. 18 as a function of $dN_{\rm
ch}/d\eta$. The figure also shows the average of the inverse slope values from
fitting Fig. 16. The $T_{\rm eff}$ values are consistent with a constant
value, independent of $dN_{\rm ch}/d\eta$. However, given the uncertainties on
the data, a possible increase of $T_{\rm eff}$ with $dN_{\rm ch}/d\eta$ cannot
be excluded.
In addition to investigating the $p_{T}$ and system-size dependence of the
shape of the nonprompt direct-photon spectra, one can also look at the
dependence of the yield on system size and $p_{T}$. As reported previously,
the integrated direct-photon yield scales with $dN_{\rm ch}/d\eta$ to a power
$\alpha$ [8]:
$\frac{dN_{\gamma}}{dy}=\int_{p_{T,\rm{min}}}^{p_{T,\rm{max}}}\frac{dN_{\gamma}^{\rm
dir}}{d\mbox{$p_{T}$}dy}d\mbox{$p_{T}$}=A\times\left(\frac{dN_{\rm{ch}}}{d\eta}\right)^{\alpha},$
(10)
where all rapidity densities are densities at midrapidity. The direct-photon
spectra shown in Fig. 14 are integrated from $\mbox{$p_{T}$}_{,\rm{min}}=1$
GeV/$c$ to $\mbox{$p_{T}$}_{,\rm{max}}=5$ GeV/$c$ and plotted as a function of
$dN_{\rm ch}/d\eta$ in Fig. 19. They are in reasonable agreement with a
compilation of other direct-photon results [8, 45], also shown in the figure.
All data follow a trend similar to the $N_{\rm coll}$-scaled $p$$+$$p$ fit,
shown as a band, but with a roughly 10 times larger yield. Scaling with $N_{\rm
coll}$ corresponds to $\alpha=1.25\pm 0.02$ [8]. The current high-statistics
data allow for finer centrality binning and change this picture somewhat at
the lowest and highest $dN_{\rm ch}/d\eta$. Fitting only the new
results in Fig. 19 gives a value of $\alpha=1.11\pm 0.02(\rm{stat})\
_{-0.08}^{+0.09}(\rm{sys})$. This value is lower than, but consistent within
systematic uncertainties with, $\alpha=1.23\pm 0.06\pm 0.18$, found by
fitting all previously published PHENIX A$+$A data [45].
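The $\alpha$ extraction of Eq. 10 reduces to a straight-line fit in log-log space. A minimal sketch with illustrative yields (the $dN_{\rm ch}/d\eta$ values are those of Table 3; the yields are toy placeholders, not the measured ones):

```python
import numpy as np

dnch = np.array([16.4, 85.5, 225.4, 519.2])   # dN_ch/deta, broad bins (Table 3)
yield_g = 2.0e-5 * dnch ** 1.11               # toy integrated yields, Eq. 10 form

# Eq. 10 in log-log space: log(dN_gamma/dy) = alpha * log(dN_ch/deta) + log A
alpha, logA = np.polyfit(np.log(dnch), np.log(yield_g), deg=1)
print(f"alpha = {alpha:.2f}")                 # recovers ~1.11
```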
Note that the previous PHENIX measurements obtained the $\eta$ spectrum by
$m_{T}$-scaling the $\pi^{0}$ spectrum, while in the current measurement the
$\eta$ spectrum is obtained from the $\eta/$$\pi^{0}$ ratio using the world
data. There are significant differences between the two approaches in the
low-$p_{T}$ region [36]. Because the integration range starts at low $p_{T}$
and is wide (1–5 GeV/$c$), the power $\alpha$ is smaller than previously
published values, but is consistent within stated systematic uncertainties.
However, it is also consistent with unity within uncertainties.
Figure 19: Integrated direct-photon yield (1–5 GeV$/c$) versus charged-particle multiplicity at midrapidity. The present data are compared to a previous compilation of data from [8, 45] and the $N_{\rm coll}$-scaled fit to $p$$+$$p$ data. Also given are fits with Eq. 10 to different data; the solid line is a fit to the present data resulting in $\alpha=1.11\pm 0.02(\rm{stat})\ _{-0.08}^{+0.09}(\rm{sys})$, and the dashed line is from fitting previously published PHENIX data [45], which gave $\alpha=1.23\pm 0.06\pm 0.18$.
Table 4: Scaling power, $\alpha$, of the $dN_{\rm ch}/d\eta$ dependence of nonprompt and direct-photon yields in various integration ranges. $p_{T}$ (GeV/$c$) | $\alpha(\mbox{$\gamma^{\rm{nonprompt}}$})$ | $\alpha(\mbox{$\gamma^{\rm dir}$})$
---|---|---
0.8–1.2 | $1.119\pm 0.038\ _{-0.094}^{+0.116}$ | $1.124\pm 0.036\ _{-0.089}^{+0.121}$
1.2–1.6 | $1.107\pm 0.029\ _{-0.082}^{+0.108}$ | $1.118\pm 0.027\ _{-0.073}^{+0.097}$
1.6–2.0 | $1.136\pm 0.034\ _{-0.091}^{+0.129}$ | $1.152\pm 0.029\ _{-0.077}^{+0.113}$
2.0–3.0 | $1.087\pm 0.032\ _{-0.092}^{+0.108}$ | $1.120\pm 0.025\ _{-0.065}^{+0.095}$
3.0–4.0 | $1.119\pm 0.078\ _{-0.134}^{+0.206}$ | $1.171\pm 0.048\ _{-0.076}^{+0.114}$
4.0–5.0 | $0.950\pm 0.176\ _{-0.205}^{+0.315}$ | $1.137\pm 0.077\ _{-0.082}^{+0.108}$
5.0–10.0 | | $1.296\pm 0.078\ _{-0.091}^{+0.129}$
To examine the behavior of the scaling power, $\alpha$, in more
detail, the direct-photon yield and its nonprompt component are integrated in
six nonoverlapping finer $p_{T}$ regions and for 10% centrality
classes. The integrated nonprompt yields are shown in Fig. 20. The $\alpha$
values are determined for each $p_{T}$ selection by fitting the data with Eq.
10. The fits are also shown in the figure. All $\alpha$ values, both for the
direct photon yield and the nonprompt component, are tabulated in Table 4 and
shown in Fig. 21. It is evident that the values for the direct component, for
higher $p_{T}$ ranges, are consistent with the prompt component,
$\alpha=1.25\pm 0.02$, corresponding to $N_{\rm coll}$ scaling. However, for
the lower $p_{T}$ ranges, they tend to be smaller than, but still consistent
within systematic uncertainties with, previous measurements [8].
With increasing $p_{T}$, the $\alpha$ values for the nonprompt component are
slightly lower than those from direct photons. The systematic uncertainties
are larger due to the subtraction. The values of $\alpha$ for the nonprompt
component, as shown in Fig. 21, are remarkably constant with no evident
$p_{T}$ dependence.
Figure 20: Integrated nonprompt direct-photon yield versus charged-particle
multiplicity at midrapidity for different $p_{T}$ integration ranges.
Figure 21: Scaling factors, $\alpha$, extracted from fitting Eq. 10 to
integrated direct and nonprompt-photon yields as a function of $dN_{\rm
ch}/d\eta$. Values were obtained for the different $p_{T}$ integration ranges
tabulated in Table 4.
## VI Concluding discussion of the results
The PHENIX collaboration has measured direct-photon production in Au$+$Au
collisions at $\sqrt{s_{{}_{NN}}}=200$ GeV using photon conversions to
$e^{+}e^{-}$ pairs. A large yield of direct photons below a $p_{T}$ of 3
GeV/$c$ is observed for all centrality bins except the most peripheral bin
of 80%–93%, with $dN_{\rm ch}/d\eta$ = 7.4, where the yield seems consistent
with prompt-photon production with little or no radiation from a fireball.
The next centrality bin from 70%–80% with $dN_{\rm ch}/d\eta$ = 15.5 already
shows a significant yield with properties very similar to that of the
radiation from the more central bins.
The nonprompt direct-photon spectra are isolated by subtracting the prompt-
photon contribution, which is estimated through a fit to the direct-photon
data from $p$$+$$p$ collisions at $\sqrt{s}=200$ GeV, measured by PHENIX, and
scaled by $N_{\rm coll}$. Results are obtained for the $p_{T}$ range from 0.8
to 5 GeV/$c$ and for 0%–93% central collisions, covering a system size
spanning two orders of magnitude in $dN_{\rm ch}/d\eta$ from $\approx$7 to
620. The wealth of data enabled PHENIX to carry out double-differential
analyses of the shape of the momentum spectra and the rapidity density
$dN_{\gamma}/dy$ in $p_{T}$ and $dN_{\rm ch}/d\eta$.
For the centrality selections from 0%–10% to 70%–80%, all nonprompt direct-
photon spectra are very similar in shape, exhibiting increasing $T_{\rm eff}$
from 0.2 to 0.4 GeV/$c$ over the $p_{T}$ range from 0.8 to 4 GeV/$c$. The
changing $T_{\rm eff}$ is not surprising, because the spectra are time
integrated over the full evolution of the expanding fireball, from its
earliest pre-equilibrium state, through the QGP phase, crossing over to a HG,
and further expanding and cooling until hadrons eventually stop interacting.
Throughout the evolution the system cools, and thus earlier phases are
characterized by higher temperatures. In turn, the contributions from the
earliest times of the evolution are likely to dominate the emission at higher
$p_{T}$, consistent with the observation of an increasing $T_{\rm eff}$ with
$p_{T}$.
In the lower $p_{T}$ range from 0.8 to 1.9 GeV/$c$, the spectra are well
described by an inverse slope of $T_{\rm eff}$ = 0.26 GeV/$c$. This is consistent with what is
expected for radiation from the late QGP stage until freeze-out [14]. During
this period of the evolution, the temperature drops from $\approx$170 MeV near
the transition to $\approx$110 MeV when the system freezes out. At the same
time the system is rapidly expanding and thus, the radiation is blue shifted.
This compensates for the temperature drop and results in an average $T_{\rm eff}$
of $\approx$0.26 GeV/$c$, with only minor variations with the centrality of
the collision. In Ref. [14], a moderate increase of $T_{\rm eff}$ with centrality
was predicted. While the data favor a $T_{\rm eff}$ independent of
centrality, they are not precise enough to exclude a moderate change.
Above a $p_{T}$ of 2 GeV/$c$, the inverse slope of the spectra continues to
increase with $p_{T}$. Between $p_{T}$ = 2 and 4 GeV/$c$ the average inverse
slope is $T_{\rm eff}$ $\approx$0.376 GeV/$c$. This $T_{\rm eff}$ is larger
than what can be accommodated by a rapidly expanding HG, thus suggesting that
emissions from earlier times in the evolution start to dominate the spectra.
Expected initial temperatures at RHIC are $\approx$375 MeV with maximum
$T_{\rm eff}$ in the range of 0.35 to 0.4 GeV/$c$, depending on viscosity
[14]. Thus, it is likely that an additional contribution from the pre-
equilibrium stage is needed to account for the measured $T_{\rm eff}$.
Figure 22: Nonprompt direct-photon yields for (a) 0%–20% and (b) 20%–40%
compared with model predictions from Refs. [10, 46]. (c),(d) Ratios of the yields
from the data to the sum of the thermal and pre-equilibrium contributions.
In Fig. 22, the measured nonprompt direct-photon spectra are compared to a
recent calculation including contributions from the pre-equilibrium phase [10,
46]. These calculations predicted that the pre-equilibrium radiation becomes
the dominant source above a $p_{T}$ of 3 GeV/$c$. In the range
$2<\mbox{$p_{T}$}<4$ GeV/$c$, a fit of the thermal contribution with an
exponential function results in an inverse slope of $\approx$0.36 GeV/$c$,
while for the pre-equilibrium contribution a larger inverse slope of
$\approx$0.52 GeV/$c$ is found for the more central collisions. Fitting the
same $p_{T}$ range for the combined thermal and pre-equilibrium spectra from
the model gives an inverse slope of $\approx$0.425 GeV/$c$. While the shape is
reproduced well, the overall yield predicted by the calculations falls short
of the data, in particular below 2 GeV/$c$, where the nonprompt-photon
yield appears to be a factor of two to three larger.
The integrated nonprompt direct-photon yield exhibits a power-law relation
with $(\mbox{$dN_{\rm ch}/d\eta$})^{\alpha}$ [8]. Fitting the power $\alpha$
for multiple nonoverlapping $p_{T}$ ranges results in values consistent with
$\alpha=1.12\pm 0.06{\rm(\rm{stat})}\pm 0.14{\rm(\rm{sys})}$ with no apparent
dependence on $p_{T}$. The model calculations in Ref. [14] predict that the
radiation from the HG phase scales with $\alpha$ close to 1.2, while that from
the hot and dense QGP phase exhibits closer to a $(\mbox{$dN_{\rm
ch}/d\eta$})^{2}$ dependence. Because the QGP phase has a larger relative
contribution to the $p_{T}$ spectrum with increasing $p_{T}$, it is expected
that $\alpha$ increases with $p_{T}$. However, the $p_{T}$ dependence of
$\alpha$ from the pre-equilibrium phase needs further theoretical
understanding.
In conclusion, the 10-fold increase in statistics compared to previous samples
of Au$+$Au collisions recorded by PHENIX enabled detailed measurements of the
radiation from the hot and expanding fireball. The experimentally observed
inverse slopes of the $p_{T}$ spectra are qualitatively consistent with
predictions for thermal and pre-equilibrium radiation. However, there seems to
be more photons emitted from Au$+$Au collisions than can be accounted for in
model calculations. Furthermore, although this work presents no new data on
the azimuthal anisotropy, maximum anisotropy is observed for photons with
$p_{T}\approx$2–3 GeV/$c$. In this $p_{T}$ range, the yield is larger than what
would be expected from a rapidly but anisotropically expanding hadronic
fireball. Finally, the centrality dependence of the nonprompt direct-photon
yield, expressed in terms of the scaling power $\alpha(\mbox{$p_{T}$})$, shows
no indication of changing with $p_{T}$.
###### Acknowledgements.
We thank the staff of the Collider-Accelerator and Physics Departments at
Brookhaven National Laboratory and the staff of the other PHENIX participating
institutions for their vital contributions. We also thank J.F. Paquet for many
fruitful discussions and sharing additional information. We acknowledge
support from the Office of Nuclear Physics in the Office of Science of the
Department of Energy, the National Science Foundation, Abilene Christian
University Research Council, Research Foundation of SUNY, and Dean of the
College of Arts and Sciences, Vanderbilt University (USA), Ministry of
Education, Culture, Sports, Science, and Technology and the Japan Society for
the Promotion of Science (Japan), Natural Science Foundation of China
(People’s Republic of China), Croatian Science Foundation and Ministry of
Science and Education (Croatia), Ministry of Education, Youth and Sports
(Czech Republic), Centre National de la Recherche Scientifique, Commissariat à
l’Énergie Atomique, and Institut National de Physique Nucléaire et de Physique
des Particules (France), J. Bolyai Research Scholarship, EFOP, the New
National Excellence Program (ÚNKP), NKFIH, and OTKA (Hungary), Department of
Atomic Energy and Department of Science and Technology (India), Israel Science
Foundation (Israel), Basic Science Research and SRC(CENuM) Programs through
NRF funded by the Ministry of Education and the Ministry of Science and ICT
(Korea), Ministry of Education and Science, Russian Academy of Sciences,
Federal Agency of Atomic Energy (Russia), VR and Wallenberg Foundation
(Sweden), University of Zambia, the Government of the Republic of Zambia
(Zambia), the U.S. Civilian Research and Development Foundation for the
Independent States of the Former Soviet Union, the Hungarian American
Enterprise Scholarship Fund, the US-Hungarian Fulbright Foundation, and the
US-Israel Binational Science Foundation.
## Appendix A Event-mixing procedures and validation
In this analysis, $e^{+}e^{-}$ pairs and $e^{+}e^{-}\gamma$ combinations
result from combining positrons, electrons, and photons measured in the same
event. Given the large multiplicity of produced particles in Au$+$Au
collisions, the combinations include a significant background from particles
of different physical origin, for example different $\pi^{0}$ decays. For
$e^{+}e^{-}$ pairs, there are two possible combinations: signal pairs, ${\rm
SG}^{\rm{ee}}$, that have the same source, and background pairs, ${\rm
BG}^{\rm{ee}}$, that have different sources. Both types will be combined with
photons to get $e^{+}e^{-}\gamma$ combinations. There are three possibilities:
A correlated $e^{+}e^{-}$ pair is combined with a photon from the same source
(${\rm SG}^{ee\gamma}$); the $e^{+}e^{-}$ pair is not correlated, but the
photon is correlated to the $e^{+}$ or $e^{-}$ (${\rm BG}_{\rm
corr}^{ee\gamma}$); or the photon is uncorrelated to the $e^{+}e^{-}$ pair,
irrespective of whether it is ${\rm SG}^{\rm{ee}}$ or ${\rm BG}^{\rm{ee}}$ (${\rm
BG}_{\rm uncorr}^{ee\gamma}$).
All backgrounds are determined using event-mixing techniques that were
developed and validated with MC studies of high-multiplicity events, for which
a large sample of simulated $\pi^{0}$ events was generated. These events serve
as pseudodata. The $\pi^{0}$ are generated according to the experimentally
observed $p_{T}$ spectrum, uniform in azimuthal angle, and with a constant
pseudorapidity density of 280 $\pi^{0}$, which corresponds to the typical
$\pi^{0}$ multiplicity in the most central Au$+$Au collisions at
$\sqrt{s_{{}_{NN}}}=200$ GeV.
From these pseudodata, $N_{\gamma}^{\rm{incl}}$ and
$N_{\gamma}^{\rm{\pi^{0},tag}}$ are extracted using the cuts and event-mixing
schemes developed for the analysis of real data. They are corrected by
$\langle\epsilon_{\gamma}f\rangle$, resulting in $R_{\gamma}$. Because in the
pseudodata there are no other hadronic decay channels contributing to
$\gamma^{\rm hadr}$ other than $\pi^{0}$, the $R_{\gamma}$ from this
pseudodata is given by:
$R_{\gamma}^{\rm pseudo}=\frac{N^{\rm incl}_{\gamma}}{N_{\gamma}^{\pi^{0},\rm
tag}}\times\mbox{$\langle\epsilon_{\gamma}f\rangle$}.$ (11)
As there are no direct photons in the pseudodata, the expected result would be
$R_{\gamma}$ = 1, within the statistical uncertainties of the simulation. The
rest of this section details each step of the $R_{\gamma}$ determination from
the pseudodata. The exact same procedure is also applied to the real data.
### A.1 Determination of the inclusive photon yield $N_{\gamma}^{\rm{incl}}$
Photon conversion candidates are created by combining $e^{+}$ and $e^{-}$ from
the same pseudodata event and requiring a valid conversion point within
$1<R<29$ cm. This results in a foreground, ${\rm FG}^{\rm{ee}}$, containing a
signal, ${\rm SG}^{\rm{ee}}$, that is, conversions of $\pi^{0}$ decay photons,
and a background, ${\rm BG}^{\rm{ee}}$, where the $e^{+}$ and $e^{-}$ come
from conversion of two different $\pi^{0}$ decay photons. The background is
determined by combining electrons and positrons from different pseudodata
events, which are paired and subjected to the same cuts and conversion
selection criteria. The mixed event background thus obtained, ${\rm
MBG}^{\rm{ee}}$, is normalized to the foreground, ${\rm FG}^{\rm{ee}}$, in the
mass region $0.16<\mbox{$m_{e^{+}e^{-}}$}<$ 0.3 GeV/$c^{2}$, which does not
contain $e^{+}e^{-}$ pairs from conversions (see Fig. 2 for reference).
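A minimal sketch of this normalize-and-subtract step (the toy histograms and names below are assumptions):

```python
import numpy as np

def subtract_mixed(m, fg, mbg, lo=0.16, hi=0.30):
    """Scale MBG^ee to FG^ee in the conversion-free region lo < m_ee < hi
    (GeV/c^2), then subtract to obtain the signal SG^ee."""
    norm_region = (m > lo) & (m < hi)
    scale = fg[norm_region].sum() / mbg[norm_region].sum()
    return fg - scale * mbg

m = np.linspace(0.0, 0.5, 251)                               # m_ee bin centers
fg = 1e3 * np.exp(-0.5 * ((m - 0.08) / 0.02) ** 2) + 100.0   # toy FG^ee
mbg = np.full_like(m, 80.0)                                  # toy MBG^ee
sg = subtract_mixed(m, fg, mbg)                              # conversion signal
```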
Figure 23: Invariant mass distributions of $e^{+}e^{-}$ pairs reconstructed
from the high-multiplicity $\pi^{0}$ pseudodata in the $p_{T}$ range
$0.8<\mbox{$p_{T}$}<1.0$ GeV/$c$. The least-restrictive conversion selection
cuts are applied, which only require that the reconstruction algorithm has
identified the $e^{+}e^{-}$ pair as a conversion candidate. Panel (a) compares
foreground, ${\rm FG}^{\rm{ee}}$, the true background, ${\rm BG}^{\rm{ee}}$,
and the background determined from the mixed event technique, ${\rm
MBG}^{\rm{ee}}$. Panel (b) gives the extracted conversion photon signal.
Figure 24: Invariant mass distributions of $e^{+}e^{-}$ pairs reconstructed
from the high-multiplicity $\pi^{0}$ pseudodata. Same as Fig. 23, but with an
additional constraint that the $e^{+}$ and $e^{-}$ match in the beam direction.
Panel (a) compares foreground, ${\rm FG}^{\rm{ee}}$, the true background,
${\rm BG}^{\rm{ee}}$, and the background determined from the mixed event
technique, ${\rm MBG}^{\rm{ee}}$. Panel (b) gives the extracted conversion
photon signal.
Figure 25: Extracted $N_{\gamma}^{\rm incl}$ after the background subtraction,
as a function of conversion photon $p_{T}$. The diamonds are obtained by
subtracting the background from the mixed event technique; they are compared
to the open symbols for which the true background was subtracted. Panel (b)
shows the ratio of the event-mixing result over the true information result.
Figure 26: Invariant mass distributions of $e^{+}e^{-}\gamma$ pairs.
Figure 23(a) shows the background, ${\rm MBG}^{\rm{ee}}$, obtained from the
mixed-event technique together with the true background, ${\rm BG}^{\rm{ee}}$,
which was obtained from the MC ancestry information. Figure 23(b) shows the
results after subtracting the mixed-event background from the foreground
(solid curve) and after subtracting the true background (open symbols). Note
that the two are practically indistinguishable, which means that ${\rm
BG}^{\rm{ee}}$ is equal to ${\rm MBG}^{\rm{ee}}$.
Even though the background can be subtracted accurately with the mixed-event
technique to obtain $N_{\gamma}^{\rm{incl}}$, the subtraction can only be done
statistically. Thus in the next step, where conversion photons from $\pi^{0}$
decays are tagged, the background pairs also need to be matched with EMCal
showers. This substantially increases the background in the $m_{ee\gamma}$
distribution. To reduce this background, additional cuts are applied in the
conversion-photon selection.
The magnetic field deflects electrons and positrons in a plane perpendicular
to the beam direction ($z$). Thus, $e^{+}e^{-}$ pairs from a conversion can be
constrained by requiring a match in the beam direction using the PC1
information. A cut of $|\Delta z|<4$ cm is applied. Because the conversion
reconstruction algorithm uses the projection of the tracks in the plane
perpendicular to the beam axis, the additional match reduces the number of
possible random-track combinations significantly. The $z$ cut effectively
truncates the mass distribution, because the $e^{+}e^{-}$ pairs are required
to have a possible conversion point at radii below 29 cm, and only pairs with
an opening angle in the beam direction can create larger masses. The background
rejection is clearly visible in Fig. 24. The background normalization for the
mixed events is determined with the less-restrictive cuts shown in Fig. 23 and
applied here. For the lowest $p_{T}$ and the highest-multiplicity bin, the
background rejection is approximately a factor of eight with a signal
efficiency of more than 85%. The background to foreground ratio, $\mbox{${\rm
BG}^{\rm{ee}}$}/\mbox{${\rm FG}^{\rm{ee}}$}$, is 12.1%. As $p_{T}$ increases
the multiplicity decreases, and the $\mbox{${\rm BG}^{\rm{ee}}$}/\mbox{${\rm
FG}^{\rm{ee}}$}$ ratio decreases to 0.3% for $p_{T}$ above 7 GeV/$c$.
The analysis is repeated for the entire accessible $p_{T}$ range and
$N_{\gamma}^{\rm{incl}}$ is calculated in the mass range from 0.04 to 0.12
GeV/$c^{2}$ by subtracting the background obtained from the mixed-event
technique, ${\rm MBG}^{\rm{ee}}$, from the foreground, ${\rm FG}^{\rm{ee}}$.
The result is compared to the true number of photon conversions determined
from the MC-ancestry information in Fig. 25. Panel (b) shows that the
difference is less than 1% for all $p_{T}$.
### A.2 The tagged photon yield $N_{\gamma}^{\rm{\pi^{0},tag}}$
Next, the subset $N_{\gamma}^{\rm{\pi^{0},tag}}$ of $e^{+}e^{-}$ pairs in the
$N_{\gamma}^{\rm{incl}}$ sample that can be tagged as photons from a $\pi^{0}$
decay is determined. For a given pseudodata event, each $e^{+}e^{-}$
conversion candidate is paired with all reconstructed showers in the EMCal,
excluding the showers matched to the $e^{+}e^{-}$ pair itself. For each
combination the invariant mass $m_{ee\gamma}$ is calculated. This constitutes
the foreground, ${\rm FG}^{ee\gamma}$, for which an example is given in panel
(a) of Fig. 26. Despite the large background, the signal
$N_{\gamma}^{\rm{\pi^{0},tag}}$ is clearly visible as a peak around the
$\pi^{0}$ mass. The background has two components: (i) combinations of
$e^{+}e^{-}$ pairs with an EMCal shower from another unrelated $\pi^{0}$ decay
denoted as ${\rm BG}_{\rm uncorr}^{ee\gamma}$, and (ii) a correlated
background, ${\rm BG}_{\rm corr}^{ee\gamma}$, where the shower in the EMCal
and the electron or positron are from the same $\pi^{0}$ decay, but the
$e^{+}e^{-}$ pair itself is a combination of an $e^{+}$ and $e^{-}$ from
different $\pi^{0}$ decay photons.
The uncorrelated background can be determined with an event-mixing technique
similar to that used for the extraction of $N_{\gamma}^{\rm{incl}}$; an
$e^{+}e^{-}$ pair from one event is mixed with the EMCal showers from a
different event resulting in mixed combinations, ${\rm MBG}_{\rm
uncorr}^{ee\gamma}$. These are normalized to the foreground, ${\rm
FG}^{ee\gamma}$, in the mass region from 0.25 to 0.45 GeV/$c^{2}$, where no
signal is expected. Figure 26(a) shows the corresponding distribution. There
is almost no visible difference between the mixed-event background, ${\rm
MBG}_{\rm uncorr}^{ee\gamma}$, and the true background, ${\rm BG}_{\rm
uncorr}^{ee\gamma}$, which is obtained using the MC-ancestry information.
Figure 26(b) shows the signal and remaining correlated background after the
uncorrelated mixed-event background is subtracted (${\rm FG}^{ee\gamma}$-${\rm
MBG}_{\rm uncorr}^{ee\gamma}$), as well as after subtracting the true
uncorrelated background (${\rm FG}^{ee\gamma}$-${\rm BG}_{\rm
uncorr}^{ee\gamma}$). Again they are indistinguishable.
The correlated background, ${\rm BG}_{\rm corr}^{ee\gamma}$, is determined
with a second event-mixing scheme. An $e^{+}$ from a given event is combined
with an $e^{-}$ from a different event, and the resulting $e^{+}e^{-}$ pair is
then combined with the showers in the EMCal from both events; again excluding
the showers from the $e^{+}$ and $e^{-}$. The $e^{+}e^{-}\gamma$ combinations
contain the correlated background, ${\rm MBG}_{\rm cor}^{ee\gamma}$, plus the
random background in which the $e^{+}$, $e^{-}$, and $\gamma$ are from three
different $\pi^{0}$ decays, ${\rm MBG}_{\rm comb}^{ee\gamma}$. The
normalization is per generated $e^{+}e^{-}$ pair, multiplied by ${\rm
FG}^{ee\gamma}$, i.e. the number of background pairs in the $e^{+}e^{-}$ pair
foreground.
Figure 27: Invariant mass distributions of $e^{+}e^{-}\gamma$ pairs from the
same event (FG) and different event-mixing setups.
The random background, ${\rm MBG}_{\rm comb}^{ee\gamma}$, can easily be
determined in a third event-mixing step, where $e^{+}$, $e^{-}$, and $\gamma$
are from three different events. The ${\rm MBG}_{\rm comb}^{ee\gamma}$ is
normalized to (${\rm MBG}_{\rm cor}^{ee\gamma}$ $+$${\rm MBG}_{\rm
comb}^{ee\gamma}$) in the mass range from 0.65 to 1.0 GeV/$c^{2}$ and
subtracted. Figure 27(a) shows the result, ${\rm MBG}_{\rm
cor}^{ee\gamma}$, together with the foreground and the other background
components.
Last but not least, to account for any possible mismatch between the true
background and the one obtained from our multistep event-mixing procedure, the
ratio $(\mbox{${\rm FG}^{ee\gamma}$}-\mbox{${\rm MBG}_{\rm
cor}^{ee\gamma}$}-\mbox{${\rm MBG}_{\rm uncorr}^{ee\gamma}$})/\mbox{${\rm
MBG}_{\rm uncorr}^{ee\gamma}$}$ is fit with a second-order polynomial,
$f_{ee\gamma}$, excluding the $\pi^{0}$ peak regions. The fit result is shown
as a line in Fig. 27(b). This fit is used to correct ${\rm MBG}_{\rm
uncorr}^{ee\gamma}$ before subtraction. The final distribution for
$N_{\gamma}^{\rm{\pi^{0},tag}}$ is thus:
$\mbox{$N_{\gamma}^{\rm{\pi^{0},tag}}$}=\mbox{${\rm
FG}^{ee\gamma}$}-\mbox{${\rm MBG}_{\rm cor}^{ee\gamma}$}-(1+f_{ee\gamma})\
\mbox{${\rm MBG}_{\rm uncorr}^{ee\gamma}$}.$ (12)
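In code, Eq. 12 is a bin-by-bin subtraction over the $m_{ee\gamma}$ histogram; a minimal sketch with assumed array arguments:

```python
import numpy as np

def n_pi0_tag(fg, mbg_cor, mbg_uncorr, f_eegamma):
    """Eq. 12: all arguments are arrays binned in m_eegamma; f_eegamma is
    the second-order-polynomial correction evaluated at the bin centers."""
    return fg - mbg_cor - (1.0 + f_eegamma) * mbg_uncorr
```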
Figure 28: Extracted $N_{\gamma}^{\rm{\pi^{0},tag}}$ as a function of
conversion-photon $p_{T}$ using the (red) true information and (blue) event-
mixing technique. The bottom panel shows the ratio of the event-mixing result
over the true information result.
For each $p_{T}$ bin, $N_{\gamma}^{\rm{\pi^{0},tag}}$ is extracted by counting
the number of entries in a window around the $\pi^{0}$ peak
($0.09<\mbox{$m_{ee\gamma}$}<0.19$ GeV/$c^{2}$). Figure 28 shows
$N_{\gamma}^{\rm{\pi^{0},tag}}$ as a function of $p_{T}$ using the true MC-
ancestry information and the event-mixing technique. Overall, the agreement is
very good; however, the result from the event-mixing technique is on average
lower. This mismatch is accounted for in the systematic uncertainties on
$R_{\gamma}$, which are discussed in more detail in the next section.
Figure 29: Average conditional probability $\langle\epsilon_{\gamma}f\rangle$
as a function of conversion photon $p_{T}$.
Figure 30: Ratio $R_{\gamma}^{\rm pseudo}$ as a function of conversion photon
$p_{T}$. The dashed line gives a constant offset of 1.3% fit to the points,
and the dashed band represents a $\pm 1.5\%$ range around unity.
### A.3 Completing the validation by determining $R_{\gamma}$
With $N_{\gamma}^{\rm{incl}}$ and $N_{\gamma}^{\rm{\pi^{0},tag}}$ established
from the pseudodata, the conditional probability
$\langle\epsilon_{\gamma}f\rangle$ remains to be determined to calculate
$R_{\gamma}$ and fully validate the background-subtraction procedure. In the
same way as for the data, a single $\pi^{0}$ simulation is embedded into
pseudodata events. The $e^{+}e^{-}$ pairs and $e^{+}e^{-}\gamma$ combinations
are reconstructed and counted as discussed in Sec. III.3. The extracted
$\langle\epsilon_{\gamma}f\rangle$ is shown in Fig. 29 as a function of the
conversion photon $p_{T}$.
With $N_{\gamma}^{\rm{incl}}$/$N_{\gamma}^{\rm{\pi^{0},tag}}$ from the
pseudodata and $\langle\epsilon_{\gamma}f\rangle$ from the embedded single
$\pi^{0}$ simulation in hand, $R_{\gamma}$ is calculated using Eq. 11. The
result is shown in Fig. 30; all points are close to unity, indicating that the
analysis procedure is self-consistent. There may be a 1.5% enhancement above
unity, which is consistent with the slightly lower-than-expected value found
for $N_{\gamma}^{\rm{\pi^{0},tag}}$. This difference is taken into account in
the estimate of the systematic uncertainty.
## Appendix B Uncertainty propagation with a MC sampling method
The uncertainties on $\gamma^{\rm dir}$ and any other quantity derived from
$\gamma^{\rm dir}$, such as $T_{\rm eff}$ or $\alpha$, are determined using a
MC-sampling method, which allows taking into account the $p_{T}$- and
centrality-dependent correlations of the individual sources of systematic
uncertainties, as well as the fact that the region $\mbox{$R_{\gamma}$}<1$ is
unphysical.
### B.1 Systematic uncertainties
In the MC-sampling method, for each source of uncertainty, $i$, a variation
$\delta_{i}$ of $R_{\gamma}$ or $\gamma^{\rm hadr}$ is sampled from a Gaussian
distribution centered at zero with a width corresponding to the associated
uncertainty, $\sigma_{i}$. The size of $\delta_{i}$ depends not only on
$\sigma_{i}$, but also on whether the adjacent bins in $p_{T}$ and centrality
have uncorrelated (Type A) or correlated (Type B/C) uncertainties due to the
source $i$. The values of $\sigma_{i}$ and the classification of each source
are summarized in Table 2.
If source $i$ is classified as uncorrelated, $\delta_{i}$ is calculated
independently for neighboring bins from Gaussian distributions of width
$\sigma_{i}$. For correlated uncertainties of Type C in $p_{T}$ or centrality,
$\delta_{i}$ is calculated with one common fraction $w$ so that
$\delta_{i}=w\sigma_{i}$ for all points. The fraction $w$ is determined
randomly from a Gaussian distribution of width 1. Finally, for Type B
uncertainties, $\delta_{i}$ is determined separately for the minimum and
maximum of the $p_{T}$ or centrality range using the same procedure as Type C.
All intermediate points are varied proportionally to create a smooth
transition from the minimum to the maximum of the range. Uncertainties on the
input $\pi^{0}$ $p_{T}$ distribution are a special case of Type B
uncertainties, as it is known that the systematic uncertainties move
simultaneously either up or down. In this case, $\delta_{i}$ at the minimum
and maximum of the range are chosen to have the same sign.
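A minimal sketch of how one variation $\delta_{i}$ could be sampled for the three types (the function name and toy values are assumptions):

```python
import numpy as np

rng = np.random.default_rng()

def sample_variation(sigma, kind):
    """One sampled variation delta_i across pT bins; sigma is the per-bin
    uncertainty of source i."""
    n = len(sigma)
    if kind == "A":                                 # uncorrelated: independent bins
        return rng.normal(0.0, sigma)
    if kind == "C":                                 # fully correlated: one fraction
        return rng.normal(0.0, 1.0) * sigma
    if kind == "B":                                 # correlated: interpolate the
        w_lo, w_hi = rng.normal(0.0, 1.0, size=2)   # fractions at the endpoints
        # (for the input pi0 spectra, w_lo and w_hi would share the same sign)
        return np.linspace(w_lo, w_hi, n) * sigma
    raise ValueError(kind)

sigma = np.full(10, 0.03)                # e.g. a 3% energy-scale source
delta = sample_variation(sigma, "B")     # applied before recomputing R_gamma
```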
After applying all variations $\delta_{i}$ to recalculate $R_{\gamma}$ and
$\gamma^{\rm hadr}$, new values of $\gamma^{\rm dir}$, $T_{\rm eff}$, and
$\alpha$ are determined. This process is repeated multiple times, taking into
account the different sources of uncertainties, to obtain distributions of
$\gamma^{\rm dir}$, $T_{\rm eff}$, and $\alpha$. The widths of these
distributions are quoted as the systematic uncertainty. For individual
$\gamma^{\rm dir}$ points, it is possible that $\langle\mbox{$\gamma^{\rm
dir}$}\rangle-\sigma$ is less than 0, that is, unphysical. In such cases, an
upper limit of 90% confidence level (CL) is quoted based on the part of the
probability distribution in the physical region $\int_{0}^{\rm
upper}/\int_{0}^{+\infty}=90\%$.
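This upper-limit construction corresponds to inverting the CDF of a Gaussian truncated at zero; a minimal sketch (names and example numbers are assumptions):

```python
from scipy.stats import norm

def upper_limit_90(mu, sigma):
    """Find 'upper' such that (F(upper) - F(0)) / (1 - F(0)) = 0.9, where F
    is the Gaussian CDF with central value mu and uncertainty sigma."""
    f0 = norm.cdf(0.0, loc=mu, scale=sigma)          # probability below zero
    return norm.ppf(f0 + 0.9 * (1.0 - f0), loc=mu, scale=sigma)

print(upper_limit_90(mu=-0.5e-4, sigma=1.0e-4))      # e.g. a slightly negative yield
```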
### B.2 Statistical uncertainties
The statistical uncertainties on $R_{\gamma}$ are assumed to have a Gaussian
probability distribution and for most cases the statistical uncertainty on
$\gamma^{\rm dir}$ can be calculated with the usual error propagation.
However, there are two cases that need to be treated separately:
* •
$\mbox{$R_{\gamma}$}<1$: In this case $\gamma^{\rm dir}$ is unphysical, and
hence an upper limit at 90% CL is quoted, based on the physical part of the
probability distribution $\int_{0}^{\rm upper}/\int_{0}^{+\infty}=90\%$.
* •
$\mbox{$R_{\gamma}$}-\sigma_{\rm stat}<1$: In this case $\gamma^{\rm dir}$ is
in the physical region, but consistent with zero within less than one standard
deviation. For these situations the central value is shown, but the
uncertainty is given as 90% CL, calculated as above.
# FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and
Character Design
Yangyang Yu∗, Haohang Li∗, Zhi Chen∗, Yuechen Jiang∗, Yang Li∗
Denghui Zhang, Rong Liu, Jordan W. Suchow, Khaldoun Khashanah†
Stevens Institute of Technology
Hoboken, NJ, United States
{yyu44, hli113, zchen100, yjiang52, yli269, dzhang42, rliu20<EMAIL_ADDRESS>
###### Abstract
Recent advancements in Large Language Models (LLMs) have exhibited notable
efficacy in question-answering (QA) tasks across diverse domains. Their
prowess in integrating extensive web knowledge has fueled interest in
developing LLM-based autonomous agents. While LLMs are efficient in decoding
human instructions and deriving solutions by holistically processing
historical inputs, transitioning to purpose-driven agents requires a
supplementary rational architecture to process multi-source information,
establish reasoning chains, and prioritize critical tasks. Addressing this, we
introduce FinMem, a novel LLM-based agent framework devised for financial
decision-making. It encompasses three core modules: Profiling, to customize
the agent’s characteristics; Memory, with layered message processing, to aid
the agent in assimilating hierarchical financial data; and Decision-making, to
convert insights gained from memories into investment decisions. Notably,
FinMem’s memory module aligns closely with the cognitive structure of human
traders, offering robust interpretability and real-time tuning. Its adjustable
cognitive span allows for the retention of critical information beyond human
perceptual limits, thereby enhancing trading outcomes. This framework enables
the agent to self-evolve its professional knowledge, react agilely to new
investment cues, and continuously refine trading decisions in the volatile
financial environment. We first compare FinMem with various algorithmic agents
on a scalable real-world financial dataset, underscoring its leading trading
performance in stocks. We then fine-tune the agent’s perceptual span and
character setting to achieve a significantly enhanced trading performance.
Collectively, FinMem presents a cutting-edge LLM agent framework for automated
trading, boosting cumulative investment returns.
† Corresponding author. Email: <EMAIL_ADDRESS>
∗ Equal contribution, with author order decided by dice roll.
The source code of this project can be found via: FinMem LLM Trading.
KEYWORDS: Financial AI, Large Language Model, Trading Algorithms, Deep Learning, Financial Technology
## 1 Introduction
With the influx of diverse financial data streams from the web, traders face a
deluge of information from various sources. This requires them to rapidly understand, memorize, and filter crucial events for investment decisions. However, innate cognitive limitations restrict human traders to processing information within their perception and memory capacity, a span much narrower than the actual volume of available information [5]. Consequently, the risk of under-weighting or even dismissing critical events that affect trading decisions grows as data availability expands. To overcome the physical limitations in the memory systems of human
traders, researchers have been consistently working on designing autonomous
trading agent systems. These systems need to thoroughly integrate all
available information and possess a sophisticated design in the agent’s
backbone algorithm to deliver enhanced trading performance.
The evolution of autonomous trading systems has transitioned from the initial
rule-based trading strategies [12] to more advanced machine-learning-based
algorithms [19]. In recent years, Reinforcement Learning (RL)-based agents
[14], especially those employing Deep Reinforcement Learning (DRL) [29] as backbone algorithms, have garnered joint attention from both academia and industry.
Leveraging both RL principles and deep learning, DRL agents effectively handle
and learn from scalable and diverse financial data, including stock prices,
key financial indicators, and market sentiments. They utilize deep neural
networks to extract expressive features from input data, representing the
complex financial market environment, which enhances their comprehension
ability. Retaining the key features of RL agents, they learn through
interaction with a predefined environment to maximize investment gain over
time. Research suggests DRL agents can meet the crucial needs of trading agents to
process and make informed decisions from large volumes of data. However,
certain inherent features of DRL algorithms exhibit notable deficiencies in
financial applications. Firstly, DRL agents exhibit a lack of interpretability
concerning the rationale behind their decisions [4]. They are often described
as “black boxes,” where the internal processes and computational layers
leading to a specific decision are neither easily understandable nor
transparent. Secondly, DRL agents find it challenging to effectively integrate
textual data with numerical features. Text data plays a vital role in finance,
since the majority of market information is conveyed through news articles and
financial reports. However, transforming text data to embeddings considerably
increases input space dimensionality, making learning more computationally
demanding. Moreover, empirical studies have shown that combining textual
representations with numerical financial indicators often leads to convergence
challenges [15]. Therefore, a backbone algorithm with transparent reasoning
and the enhanced ability to capture investment-related textual insights
comprehensively is essential.
Recent advancements in Large Language Models (LLMs), like Generative Pre-
trained Transformers (GPTs) [34], offer viable solutions for developing
trading agents, alleviating previous concerns. With carefully designed
prompts, the LLM-based agent is able to provide reasons and outcomes in plain
text. This allows for immediate observation and prompt adjustment of its
reasoning process. Employing LLMs as the backbone algorithms for agents also
overcomes the constraint of isolated environments. This is achieved through
their vast pre-existing knowledge and the effective integration of valuable
insights from a variety of data sources, including both textual and numerical.
When equipped with suitable prompt templates, this approach significantly
enhances decision-making capabilities [50]. Studies indicate that prompt-
guided reasoning significantly improves problem-solving rates across various
domains [24]. Notably, a growing body of research has focused on utilizing
LLMs to make informed trading decisions for stocks and funds by continuously
interacting with financial environment information [55, 52]. However, in
currently available approaches, LLMs primarily serve in a QA role rather than functioning as autonomous agents. The potential issue with these approaches is their inability to fully understand the varying timeliness associated with different types of financial data. These financial LLM agents, despite outperforming traditional trading benchmarks, generally process information indiscriminately through QA iterations, lacking the ability to memorize influential messages. Furthermore, their method of acknowledging the timeliness of financial data is heavily dependent on the uncertain and laborious LLM fine-tuning process. These insufficiencies undermine their ability to update the knowledge base on a daily basis, meaning they lack a memory component. As a result, they may struggle to prioritize significant and
influential memory events effectively. Additionally, current literature on
LLM-based trading agents lacks a comparative analysis between these
applications and other autonomous trading systems, such as DRL agents.
To bridge this gap, we present FinMem, an innovative LLM-based autonomous
trading agent with a novel layered memory system and dynamic character design.
Unlike previous LLM agents in finance, FinMem encompasses a memory module
adept at processing multi-source financial data with varying timeliness and
self-adaptive character setting for fitting into volatile market environments.
Our concept is initially inspired by the Generative Agents framework by Park
et al. [35], aimed at enhancing the efficient retrieval of key events for
general-purpose LLM agents. This framework features a unique character design
and seed memory, activating the agent upon a specific query through prompts. It
prioritizes events in a unified memory stream, ranked by a linear combination
of recency, relevancy, and importance. The framework outlined in [35] provides
a foundational structure for LLM agent design. It includes a profiling module
for character definition, a memory module for experience recording and
critical information retrieval, and an action module to guide actions based on
the retrieved memories. This structure effectively facilitates goal
achievement for the agent in a general social environment. However, Park et
al.’s framework struggles with comprehending financial data with varying
timeliness and importance, like daily news versus quarterly and annual
reports. Key challenges involve quantifying the timeliness of different
information sources, optimizing information retrieval, and enhancing trading
decisions with detailed analysis. To tackle these challenges, we further
propose FinMem with the following improvements.
FinMem maintains a modular approach similar to Park et al. [35], but features a novel design of the profiling and memory modules. Its specialized profiling module equips FinMem with a trading-task-specific professional background, enhancing robustness to market fluctuations by offering a self-adaptive risk inclination option. FinMem’s memory module innovatively incorporates working memory and
layered long-term memory components, ideal for stratified information
processing. Its working memory acts as a dynamic “workspace,” enabling
operations like summarization, observation, and reflection on multi-source
information to facilitate trading decisions. Its long-term memory, structured
into shallow, intermediate, and deep layers [9], manages varied decay rates to
satisfy the need to retain distinct types of financial information within
different time scales based on their corresponding timeliness. For instance,
daily news, with its immediate effects on stock markets, is channeled into the
shallow processing layer. Meanwhile, annual company reports, exerting a more
prolonged impact, are processed in the deep layer by FinMem. Each layer in
FinMem prioritizes memory events based on an ensemble of recency, relevancy, and importance, similar to Park et al.’s method. However, it introduces new measurements for recency and importance, specifically tailored to better rank financial data according to their unique time sensitivity. FinMem’s memory mechanism can also transfer significantly impactful investment memory events to deeper processing layers, ensuring their retention for extended periods.
FinMem’s memory module can mirror the human cognitive system [47] and
facilitate agile, real-time decisions [46]. It enables continuous evolution in
professional knowledge through structured summarizing, retrospecting past
experiences, and reacting to new trading scenarios. Additionally, FinMem
includes a decision-making module capable of deriving investment decisions by
considering top-ranked memory events and current market conditions.
Through experiments, we show that FinMem exhibits an outstanding ability to
stratify and leverage the various levels of market insights, significantly
improving the quality of trading decisions. We claim that FinMem provides
these key contributions:
* •
FinMem presents a state-of-the-art LLM-based trading agent with a human-
aligned memory mechanism and character design, particularly crafted to capture
investment insights from the financial market. In its agent memory module
design, FinMem innovatively emulates human working and layered long-term
memory mechanisms. This approach effectively harnesses the time-sensitive
aspects of financial data, capturing crucial investment insights and thereby
boosting trading performance. FinMem’s profiling module includes a dynamic
character setting feature, offering seed information about professional
backgrounds and adjustable risk inclinations. Additionally, it continuously
updates domain knowledge as the trading experience grows, thereby enhancing
FinMem’s capabilities. Ablation studies demonstrate that FinMem can learn from
past trading experiences and evolve its knowledge base through continuous
market interaction, maintaining robustness in complex markets for profitable
trading decisions.
* •
FinMem can utilize its distinctive features to expand the agent’s perceptual
range beyond the human limitation to make well-informed trading decisions.
Cognitive research indicates that human working memory can recall only five to
nine events at a time [30]. This limitation, while preventing information
overload, can yield insufficient insight for precise decision-making. In
contrast, FinMem’s memory module transcends this constraint. It allows
adjusting cognitive load by selecting a flexible number of top-ranked events
from each layer of its hierarchical long-term memory, allowing FinMem to
maintain agility and deliver superior trading decisions in data-rich contexts.
* •
FinMem achieves impressive trading performance using training data that is
limited in volume and spans a short time period. Our experiments indicate that
training FinMem with daily collected data over just six months to a year is
enough to produce robust and notable trading results. This timeframe is
considerably shorter than what other comparable models require. This
efficiency is achieved through the optimal utilization of multi-source
incoming data and the precise identification of key signals for trading
decisions. Furthermore, it’s noteworthy that FinMem’s effectiveness is
demonstrated on smaller datasets and with general-purpose LLMs. Its
capabilities are anticipated to be further amplified with access to larger,
higher-quality financial datasets and LLMs fine-tuned specifically for
financial applications.
In this paper, we begin by explaining the three core modules of FinMem.
Subsequently, we emphasize its superior trading performance compared to a
range of representative algorithmic agents. We further explore how FinMem
achieves its optimal performance, examining adjustments in three key aspects:
backbone algorithms, working memory capacity, and character settings.
## 2 Related Work
### 2.1 Backbone Algorithms of Contemporary Autonomous Trading Agents
The development of trading agents has evolved over several decades, influenced
by advancements in technology, finance, and computational methodologies.
Conventionally, a rule-based algorithm for trading stocks is an automated
strategy that operates based on a predefined set of rules [7, 49, 37]. These
rules are often derived from historical market patterns and trading
experience. Compared with rule-based algorithms that use predefined rules and
conditions, Reinforcement Learning provides a way for agents to learn by
interacting with an environment and receiving feedback in the form of rewards or penalties [10, 22]. Deep learning models can be integrated with RL to
handle large and complex state spaces, like those in stock markets. Such
models are often referred to as Deep Reinforcement Learning (DRL) [53, 54].
For example, Deep Q-Network (DQN) [44], Advantage Actor-Critic (A2C) [56], and
Proximal Policy Optimization (PPO) [27] are popular algorithms for such tasks.
DRL agents used as automated financial trading backbones face two key issues: 1) A lack of interpretability, as their decisions, rooted in complex
computations and high-dimensional representations, are challenging to
articulate [4]. 2) They struggle to fully leverage textual financial
information due to the high-dimensional nature and computational intensity of
rich text embeddings [11, 13]. Consequently, DRL agents often rely on
extracting textual sentiment [39], sidestepping the direct use of embeddings
[6, 3], leading to an incomplete representation of crucial market information
embedded in news and macroeconomic policies.
### 2.2 Advancements from LLMs to LLM Autonomous Agents
The evolution of LLMs has reshaped artificial intelligence and natural
language processing. From foundational embeddings like Word2Vec [16] and GloVe
[38], the field advanced with the introduction of sequential modeling like
Long Short-Term Memory (LSTM) [18] and early transformer models of
Bidirectional Encoder Representations from Transformers (BERT) [11]. Today,
the new-generation LLMs, like Generative Pre-trained Transformer series (GPTs)
[40, 34] and LLM Meta AI (Llamas) [48], stand out in diverse QA tasks. The
trend leans towards LLM agents. While LLM agents for domain-specific tasks
have been extensively researched [20, 25, 36], their application in financial
trading remains underexplored. Existing studies in this domain, such as [52,
55], often lack open-source availability or have not considered an
architecture specifically tailored to fit the unique environment of financial markets. Thus, there is significant value in further investigating advanced,
transparent LLM agents for trading.
### 2.3 Architecture Design of LLM Autonomous Agent
As Wang et al. [50] emphasize, an effective architecture for LLMs serving
autonomous agents is essential. Typically, this structure comprises modules
like profiling, memory, planning, and actions, though not all may be essential
for every application. There are cases of two modules (e.g., planning and
action modules) being integrated as one component [35]. The design
variations are numerous. For instance, profiling has been achieved through
methods like handcrafting [57], generation by LLMs [51], and alignment with
real-world datasets [2]. Among these modules, the memory component is
essential. Acting as the operational core, it aligns an agent’s actions with
real-world tasks. Research indicates that leveraging insights from cognitive
science studies on human memory [50, 45] can enhance this alignment. Thus, a
well-structured trading LLM agent, comprising aptly designed modules, can
sharply tackle the complexities of financial markets to make informed
decisions.
## 3 Architecture of FinMem
In this section, we comprehensively detail the three core components of
FinMem, namely the profiling, memory, and decision-making modules. The
profiling module empowers FinMem to adaptively tailor its character setting for specific trading tasks. The memory module leverages the diverse timeliness attributes of financial data, enhancing trading efficacy. The decision-
making module enables FinMem to synchronize its memory streams with market
facts, facilitating high-quality trading decisions. The details and notations
associated with these three modules are provided in the subsequent sections.
Figure 1: The prompt template for FinMem’s profiling module. It includes two
key elements of its character setting: professional background knowledge and
three distinct investment risk inclinations. In the self-adaptive risk
inclination option, the omitted texts align with the detailed descriptions
provided for the risk-seeking and risk-averse inclinations.
### 3.1 Profiling Module
The profiling module empowers FinMem to develop a dynamic agent character
specifically designed to navigate the complex dynamics of financial markets
effectively.
The dynamic character of FinMem comprises two principal components, as
depicted in Figure 1: firstly, a foundational professional knowledge base akin
to a trading expert, and secondly, an agent with three distinct investment
risk inclinations. The first component includes two types of information: an
introduction to the primary trading sectors relevant to the company stock
FinMem will trade in, and a concise overview of the historical financial
performance of the specified ticker, spanning from the beginning to the end of
the training period. Before initiating trades in a new company’s stock, FinMem
accesses and updates this sector-specific and historical financial data from a
backend database. This professional background setting narrows down
information and memory events pertinent to specific trading tasks.
The second component of FinMem’s design, illustrated in Figure 1, encompasses
three distinct risk inclination options: risk-seeking, risk-averse, and a
self-adaptive risk character. The risk-seeking setting gears FinMem towards an
aggressive, high-reward approach, while the risk-averse setting gears it
towards a conservative, lower-risk strategy. A distinctive aspect of FinMem is
its ability to dynamically alternate between these risk settings in response
to current market conditions. Specifically, it shifts its risk preference when the Cumulative Return falls below zero within a brief period, such as three days, and vice versa. This flexible design functions as a protective mechanism,
mitigating prolonged downturns in turbulent market environments. During the
initial stage of the training phase, FinMem is configured with a chosen risk
preference, each supplemented with comprehensive textual explanations through
LLM prompts. These guidelines shape how FinMem processes incoming messages and
determines its subsequent actions in alignment with its designated risk
inclination. The system maintains a catalog of all risk inclinations and their
detailed explanations in a backlog, enabling seamless adaptation to different
stocks by switching among these risk profiles as needed.
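As an illustration, the switching rule can be sketched as follows; the window length, function name, and return representation are illustrative, since the paper specifies only "a brief period, such as three days".

```python
def self_adaptive_inclination(daily_log_returns, window=3):
    """Minimal sketch of the self-adaptive risk option: turn risk-averse when
    the cumulative return over the last `window` trading days falls below
    zero, and risk-seeking otherwise."""
    cumulative = sum(daily_log_returns[-window:])
    return "risk-averse" if cumulative < 0 else "risk-seeking"
```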
The dynamic character setting in FinMem’s profiling module provides subject-specific and professional background knowledge together with a flexible choice of risk inclinations. This provides crucial context for filtering and retrieving trading-relevant information and memory events, thus improving inference accuracy and adaptability to fluctuating market conditions.
### 3.2 Memory Module
The memory module of FinMem emulates a human trader’s cognitive system so that
it can efficiently process hierarchical financial information and prioritize
the critical messages for high-quality investment decisions. Furthermore, it
adjusts the memory span flexibly, enabling the agent to operate on a wider
range of events over a longer retrieval period. FinMem’s memory module,
illustrated in Figure 2, comprises working and long-term memory with layered
processing capability and is initiated by a specific investment inquiry.
Figure 2: Memory module structure of FinMem with a detailed view of
components, operations, and workflow. The cognitive architectures of FinMem’s
memory module have two core components – Working Memory and Layered Long-term
Memory.
#### 3.2.1 Working memory
Working memory refers to the human cognitive system’s functions for temporary
storage and diverse operations. We incorporate this concept into FinMem’s
memory module development, creating a central workspace for informed decision-
making. Unlike human working memory, which has a maximum capacity of seven plus or minus two memory events [30], FinMem can expand its capacity based on specific requirements. Tailored for converting financial data into
trading actions, FinMem’s working memory encompasses three key operations:
summarization, observation, and reflection. The mechanisms by which they
interact and operate as an integrated decision-making workflow are detailed in
the middle box of Figure 2. Additionally, the LLM prompt template that underpins these processes is thoroughly outlined in Figure 3.
Summarization: FinMem leverages external market data to derive critical
investment insights and sentiments tailored to specific stock trading queries,
such as “Can you make an investment decision on TSLA on 1/24/2023?”. As
illustrated in Figure 3 (2), this system condenses the original text into a
compact yet informative paragraph, thereby enhancing FinMem’s processing
efficiency. It efficiently extracts and summarizes pertinent data and
sentiments for stock investment decisions, demonstrated here using Tesla Inc.
as an example. Subsequently, FinMem directs these insights to an appropriate
layer within its long-term memory architecture, selecting the layer based on
the time sensitivity of the information.
Observation: Triggered by the same inquiry, FinMem initiates an observation
operation to gather market facts. The information available to FinMem varies
between the training and testing phases.
During the training phase, FinMem has access to comprehensive stock price data
within the specified period. Upon receiving trading inquiries that specify a
stock ticker and date, FinMem focuses on the daily adjusted closing price
differences, comparing the following day’s price with the current day’s. These
price differences are utilized as market ground labels. Specifically, a
decrease in price suggests a “Sell” action, while an increase or no change in
price indicates a “Buy” action.
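For concreteness, a minimal sketch of this labeling rule, assuming pandas (the function name is illustrative):

```python
import pandas as pd

def market_ground_labels(adj_close: pd.Series) -> pd.Series:
    """Sketch of the training-phase labeling rule: compare the next day's
    adjusted close with today's; a drop maps to "Sell", a rise or no change
    maps to "Buy"."""
    diff = adj_close.shift(-1) - adj_close            # tomorrow minus today
    return diff.dropna().map(lambda d: "Sell" if d < 0 else "Buy")
```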
During the testing phase, at a specific time point, FinMem loses the ability
to access future price data. Its focus shifts to the analysis of historical
stock price movements, depending on a retrospective evaluation of the
cumulative return from the last $M$ trading days to infer future market trends. This phase, characterized by the absence of foreseeable market ground labels, serves as a critical assessment of FinMem’s development. It tests whether the
system has adequately established logical connections between stock price
trends and various financial information sources, such as news, reports, and
indicators. This stage is key in evaluating FinMem’s capability of
independently evolving its trading strategies for subsequent tasks, leveraging
its analysis and interpretation of historical data patterns.
Reflection: Two types of reflection exist: immediate and extended.
(a) Immediate reflection is activated upon receiving a daily trading inquiry
for a specific ticker. Using LLM and specific prompts exemplified in Figure 3
(2), the agent merges market indications and top-$K$-ranked events from each
long-term memory layer. Market indications are derived from the outcomes of
the observation operation and differ between the training and testing phases.
During testing, this process yields three types of outputs: the trading
direction (“Buy”, “Sell”, or “Hold”), the underlying rationale for this
decision, and the most influential memory events, along with their IDs from
each layer that informed the decision. In the training phase, specifying the
trading direction is unnecessary, as FinMem is already informed of future
stock movement directions. The top-$K$-ranked memory events encapsulate key
insights and sentiments derived from critical investment-related incoming
messages, all distilled by FinMem’s advanced summarization capabilities.
(b) Extended reflection reevaluates immediate reflection outcomes for a ticker
over a specified $M$-day trace period. It encompasses data like stock price
trends, trading returns, and action rationales from multiple immediate
reflections. While immediate reflection enables direct trading execution and
records current feedback, extended reflection summarizes market trends and reassesses the recent Cumulative Return on investment. Extended reflection is eventually transmitted to and stored in the deep processing layer of long-term memory to emphasize its criticality (detailed in Section 3.2.2).
$K$ and $M$ are hyperparameters to adjust FinMem’s working memory capacity and
information retrieval ability. FinMem gains the flexibility of integrating
comprehensive information into well-informed decisions by fine-tuning them.
#### 3.2.2 Layered long-term memory
Figure 3: (1) The decision-making module workflow of the FinMem trading agent
retrieves critical memory events to inform specific decisions. (2) LLM prompt
template used by FinMem to interact with incoming financial information.
FinMem’s long-term memory organizes hierarchical financial data insights in a
stratified structure, as illustrated in the lower section of Figure 2. Drawing
inspiration from the varying decay speeds in the human cognitive system’s
information processing layers [9], FinMem employs a layered structure to
accommodate the diverse time sensitivities inherent to different types of
financial data. This structure categorizes summarized insights by their
timeliness and decay rates. Insights are derived by the working memory’s
summarization operation. Those directed to deeper layers receive smaller decay
rates, indicating longer retention, while those in shallower layers are
assigned larger decay rates for shorter retention.
$\gamma_{l}^{E}=S_{\text{Recency}_{l}}^{E}+S_{\text{Relevancy}_{l}}^{E}+S_{\text{Importance}_{l}}^{E},$
(1)
where each memory event is only associated with one score and can only belong
to a single layer.
Upon receiving an investment inquiry, FinMem retrieves the top-$K$ pivotal
memory events from each layer and channels them to the immediate reflection
component of the working memory. These events are chosen according to the
descending order of their information retrieval score, denoted as
$\gamma_{l}^{E}$, where $l$ belongs to the set $\{\text{shallow},\text{intermediate},\text{deep}\}$, as specified in Equation 1. $E$ denotes a given memory event. This score is adapted from Park et al. [35], with modified recency and importance computations tailored especially to handle data with varying timeliness. It encapsulates three metrics: recency, relevancy, and importance. Individual metric scores exceeding 1.0 are scaled to the [0,1] range before being summed. These modifications achieve the layered processing function and represent the varied periodicity of the financial environment.
$S_{\text{Recency}_{l}}^{E}=e^{-\frac{\delta^{E}}{Q_{l}}},\quad\delta^{E}=t_{\text{P}}-t_{E},$ (2)
where $\delta^{E}$ refers to the time difference between the memory event
occurrence and the trading inquiry arrival. $Q_{\text{shallow}}=14$,
$Q_{\text{intermediate}}=90$, and $Q_{\text{deep}}=365$ correspond to day
counts of two weeks, a quarter, and a year for shallow, intermediate, and deep
processing layers, respectively.
Upon the arrival of a trade inquiry $P$ in processing layer $l$ via an LLM prompt, the agent computes the recency score $S_{\text{Recency}_{l}}^{E}$ per Equation 2. $S_{\text{Recency}_{l}}^{E}$ inversely correlates with the time gap between the inquiry and the event’s memory timestamp, mirroring Ebbinghaus’s forgetting curve [33]. The stability term $Q_{l}$ in Equation 2 partially
controls memory decay rates across layers, indicating longer memory
persistence in the long-term layer with a higher stability value. In the
context of trading, company annual reports, such as Form 10-Ks, are considered
to have more extended timeliness compared to daily financial news. Therefore,
they are assigned a higher stability value and are categorized within the
deeper processing layer. This classification reflects their extended relevance
and impact in financial decision-making scenarios.
$S_{\text{Relevancy}_{l}}^{E}=\frac{\mathbf{m_{E}}\cdot\mathbf{m_{P}}}{\|\mathbf{m_{E}}\|_{2}\,\|\mathbf{m_{P}}\|_{2}}$ (3)
The relevancy score, denoted as $S_{\text{relevancy}_{l}}^{E}$, quantifies the
cosine similarity between the embedding vectors. These vectors are derived
from the textual content of the memory event, $\mathbf{m_{E}}$, and the LLM
prompt query, $\mathbf{m_{P}}$, using OpenAI’s “text-embedding-ada-002” model,
as depicted in Equation 3. The LLM prompt query incorporates inputs related to
trading inquiries and the trading agent’s character setting.
The importance score $S_{\text{Importance}_{l}}^{E}$ is computed using the value $v_{l}^{E}$ from a piecewise scoring function (Equation 4), multiplied by a degrading ratio $\theta_{l}$ (Equation 5), as per Equation 6.
The likelihood of higher $v_{l}^{E}$ values increases from shallow to deep
layers. $\theta_{l}$ measures the diminishing importance of an event over time and closely follows the design of [35]. However, our approach tailors $\theta_{l}$ to the stratified structure of long-term memory. It adopts unique
exponential functions for each layer. The base $\alpha_{l}$ for each layer is
a hyperparameter, set to follow the sequence:
$\alpha_{shallow}<\alpha_{intermediate}<\alpha_{deep}$. These values correlate
with the rate at which their importance degrades after a certain period,
providing another angle to measure importance variances across different
memory types. Through experimentation, we set $\alpha_{shallow}=0.9$,
$\alpha_{intermediate}=0.967$ and $\alpha_{deep}=0.988$. This ensures that
$\theta_{l}$ decreases to a threshold score of $5$ after intervals of $30,90,$
and $365$ days for the shallow, intermediate, and deep layers, respectively. The three layer-specific functions for $S_{\text{Importance}_{l}}^{E}$ and
$S_{\text{Recency}_{l}}^{E}$ enable FinMem to have layered processing in the
long-term memory component. Memory events are purged when
$S_{\text{Recency}_{l}}^{E}$ is below $0.05$ or
$S_{\text{Importance}_{l}}^{E}$ is under $5$ (pre-scaling).
$v_{l}^{E}=\begin{cases}40&\text{with probability }p_{1}\\ 60&\text{with probability }p_{2}\\ 80&\text{with probability }p_{3}\end{cases}$ (4)
$\theta_{l}=(\alpha_{l})^{\delta^{E}},\quad l\in\{\text{shallow},\text{intermediate},\text{deep}\},$ (5)
where $p_{1}+p_{2}+p_{3}=1$, with values that vary by layer: for shallow processing, $\{p_{1},p_{2},p_{3}\}=\{0.8,0.15,0.05\}$; for intermediate processing, $\{0.05,0.8,0.15\}$; and for deep processing, $\{0.05,0.15,0.8\}$.
$S_{\text{Importance}_{l}}^{E}=v_{l}^{E}\cdot\theta_{l},$ (6)
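Putting Equations 1-6 together, the following is a minimal sketch of the per-event retrieval score; the division of the importance term by 100 for [0,1] rescaling and the variable names are our assumptions.

```python
import math
import numpy as np

# Layer constants from Equations 2, 4, and 5.
Q = {"shallow": 14, "intermediate": 90, "deep": 365}
ALPHA = {"shallow": 0.9, "intermediate": 0.967, "deep": 0.988}
P_V = {"shallow": [0.8, 0.15, 0.05],
       "intermediate": [0.05, 0.8, 0.15],
       "deep": [0.05, 0.15, 0.8]}

def retrieval_score(layer, delta_days, event_vec, query_vec, rng=None):
    """Sketch of Equation 1: gamma = recency + relevancy + importance,
    with each term kept in [0, 1] before summing."""
    rng = rng or np.random.default_rng()
    recency = math.exp(-delta_days / Q[layer])                          # Eq. 2
    relevancy = float(np.dot(event_vec, query_vec)
                      / (np.linalg.norm(event_vec) * np.linalg.norm(query_vec)))  # Eq. 3
    v = rng.choice([40, 60, 80], p=P_V[layer])                          # Eq. 4
    importance = v * ALPHA[layer] ** delta_days                         # Eqs. 5-6
    # Purge rule: drop events with recency < 0.05 or importance < 5 (pre-scaling).
    if recency < 0.05 or importance < 5:
        return None
    return recency + relevancy + importance / 100.0
```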
Furthermore, an access counter function oversees the transfer of memory events
among layers, ensuring that significant events influencing trading decisions
ascend from shallower to deeper layers for extended retention and recurrent
access by FinMem. Conversely, less pertinent events gradually diminish. This
process is facilitated by the LLM validation tool Guardrails AI [17], which
monitors critical memory IDs across different layers. An event identified as
pivotal for investment success receives an additional $5$ points in its
importance score $S_{\text{Importance}_{l}}^{E}$. Upon meeting the criteria
for upgrading to a deeper layer, an event’s recency score
$S_{\text{Recency}_{l}}^{E}$ is reset to $1.0$, emphasizing its importance and
preventing rapid decay. By implementing this access counter, FinMem
effectively identifies and prioritizes key events, taking into account their
nature and frequency of retrieval.
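A minimal sketch of this bookkeeping follows; the upgrade threshold and the dict layout are assumptions, as the paper specifies only the 5-point bonus and the recency reset.

```python
LAYER_UP = {"shallow": "intermediate", "intermediate": "deep", "deep": "deep"}

def reward_influential_event(event, upgrade_threshold=80):
    """Sketch of the access-counter update: an event flagged as pivotal gains
    5 importance points; once it qualifies for a deeper layer, its recency
    score is reset to 1.0 to prevent rapid decay."""
    event["importance_v"] += 5                 # bonus on the pre-scaling score
    if event["importance_v"] >= upgrade_threshold:
        event["layer"] = LAYER_UP[event["layer"]]
        event["recency"] = 1.0
    return event
```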
### 3.3 Decision-making Module
The decision-making module of FinMem efficiently integrates operational
outcomes from the profiling and memory modules to support well-informed
investment decisions, as depicted in Figure 3 (1). In its daily trading
decisions, FinMem is asked to select, via the Guardrails AI text validation function, from three distinct actions for a single share of a specific stock: “Buy”, “Sell”, or “Hold”. Additionally, the inputs and results required by FinMem’s
decision-making module vary between its training and testing phases, with each
phase’s specifics detailed as follows:
During the training phase, FinMem accesses a wide array of multi-source
information relevant to the entire time period. When FinMem is prompted with
trading inquiries containing stock ticker and date, as well as trader
character-related texts, it concurrently initiates observation and
summarization operations in its working memory. FinMem observes the market
ground labels mentioned in the description about the observation operation in
Section 3.2.1, which involve daily adjusted price differences between
consecutive days, indicative of “Buy” or “Sell” actions. Utilizing these price
change signals, FinMem identifies and prioritizes the top-$K$ memories,
ranking them based on retrieval scores from each long-term memory layer. This
procedure enables FinMem to produce comprehensive reflections that provide a
well-founded rationale and in-depth inference of the correlation between
market ground labels and the memories retrieved. Through repeated trading operations, reflections and memory events with significant impact transition to a deeper memory processing layer, where they are preserved to guide future investment decisions during the testing phase.
In the testing phase, where FinMem cannot access future price data, it relies
on the Cumulative Return over the previous $M$ trading days to anticipate
future market trends. To compensate for the absence of future market price
information, FinMem utilizes enhanced reflections derived from immediate
reflections spanning an $M$-trading-day period as supplementary references.
When faced with a specific trading inquiry, FinMem integrates insights from
various sources, including historical Cumulative Return, outcomes from
extended reflection, and the Top-$K$ retrieved memories. This comprehensive
approach enables FinMem to execute well-informed trading decisions.
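To illustrate, a sketch of how these testing-phase inputs might be assembled into a single prompt; the template wording is ours, not the paper's exact prompt in Figure 3.

```python
def build_decision_prompt(ticker, date, character,
                          cum_return_m, extended_reflection, top_k_memories):
    """Illustrative assembly of the testing-phase decision inputs."""
    memories = "\n".join(f"- [{m['layer']}] {m['summary']}" for m in top_k_memories)
    return (
        f"{character}\n"
        f"Can you make an investment decision on {ticker} on {date}?\n"
        f"Cumulative return over the last M trading days: {cum_return_m:+.4f}\n"
        f"Extended reflection: {extended_reflection}\n"
        f"Top-ranked memories:\n{memories}\n"
        "Respond with one of Buy, Sell, or Hold, plus your rationale."
    )
```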
It should be noted that FinMem generates executable actions exclusively in the
immediate reflection operation of the testing phase. Since the trading
direction is guided by the actual price trend, the training phase of FinMem
does not make investment decisions. Instead, this phase is dedicated to
accumulating trading experience through comparing market trends with incoming
multi-source financial messages. Additionally, during this phase, FinMem
develops a memory module enriched with a comprehensive knowledge base, thereby
evolving its capability for independent decision-making in future trading
activities.
## 4 Experiment Setups
We aim to evaluate the trading performance of FinMem, and we further illustrate its unique advantages: it requires a significantly shorter historical trading window for training and makes full use of key financial time series as well as textual information. Specifically, we conducted several
experiments to study the following research questions (RQs):
* •
RQ1: Is FinMem capable of outperforming contemporary state-of-the-art
algorithmic trading agents?
* •
RQ2: Are there tasks that challenge other trading algorithms but are
manageable by FinMem?
* •
RQ3: Which LLM is best suited to form the backbone framework of FinMem?
* •
RQ4: Does equipping FinMem with different risk inclination choices truly
differentiate its trading performance?
* •
RQ5: Can FinMem effectively filter and prioritize information to facilitate
informed trading decisions?
In the rest of the section, we begin by introducing the real-world financial
dataset used in our experiments. We then describe the comparative algorithmic
agents and list several widely used financial metrics. Our experiments fall
into two categories: 1) The comparative experiments of FinMem versus other
algorithmic trading agents, and FinMem using different LLMs as backbone
algorithms. 2) The ablation studies evaluate the effects of FinMem’s
adjustable cognitive span and the role of the trader’s dynamic character
settings, particularly the risk inclinations, on its trading performance.
Through experiments, FinMem is shown to outperform the other comparative algorithmic agents. Furthermore, we are able to show that its profiling and
memory modules are sophisticated and tailored to effectively address the
intricacies of the financial landscape, resulting in superior trading
performance.
### 4.1 Datasets And Database Structure:
We assessed FinMem’s performance using multi-source financial data from August
15, 2021, to April 25, 2023, sourced from reputable financial databases and
APIs like Yahoo Finance (via yfinance) and the Alpaca News API, as detailed in Table 1. The stock tickers used in our comparative experiments are shown
in Figure 4. These were selected because they are among those with the highest
volumes of accessible news text data, and they are spread across various
trading sectors. This selection provides ample data to evaluate FinMem’s
generalization capabilities. Additionally, Tesla, Inc. (TSLA) was specifically
chosen for ablation studies due to its association with the largest amount of
textual data, offering sufficient information to assess performance
differences for the FinMem’s key features like cognitive spans.
The raw multi-source input data, initially stored in the “Raw Financial Data Warehouse”, are routed into FinMem’s “Layered Long-term Memory Data Warehouse” based on timeliness through the working memory’s summarization
operation in Figure 2. The deep processing layer holds annual reports (Form
10K’s) insights, the intermediate layer contains quarterly reports (Form
10Q’s) insights, and the shallow layer accommodates daily financial news
insights.
We leveraged the open-source vector database FAISS [23] to construct the memory warehouse of FinMem, benefiting from its rapid querying of high-dimensional vectors and its compatibility with OpenAI embeddings for cosine-similarity-based semantic searches on specific tickers. This setup facilitates efficient retrieval of top-ranked events. The data categorization and memory module workflow are also illustrated in Figure 2.
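A minimal sketch of such a FAISS index for cosine-similarity retrieval; the embedding dimension of 1536 corresponds to “text-embedding-ada-002”, while the helper names are ours.

```python
import faiss
import numpy as np

DIM = 1536                              # "text-embedding-ada-002" output size
index = faiss.IndexFlatIP(DIM)          # inner product on unit vectors = cosine

def add_memories(embeddings: np.ndarray) -> None:
    vecs = np.ascontiguousarray(embeddings, dtype="float32")
    faiss.normalize_L2(vecs)            # normalize so IP equals cosine similarity
    index.add(vecs)

def top_k(query: np.ndarray, k: int = 5):
    q = np.ascontiguousarray(query.reshape(1, -1), dtype="float32")
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)    # cosine scores and memory row ids
    return ids[0], scores[0]
```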
Raw Financial Data Warehouse
* •
News data associated with ticker indexes: News data is sourced from the Alpaca News API, which utilizes Benzinga as its backend provider.
* •
Corporate quarterly filing indexes: Quarterly reports (Form 10-Q) required by the U.S. Securities and Exchange Commission (SEC).
* •
Corporate annual filing indexes: Annual reports (Form 10-K) required by the U.S. Securities and Exchange Commission (SEC).
* •
Stock price records: Daily stock open-high-low-close-volume (OHLCV) data from Yahoo Finance.

FinMem’s Layered Long-term Memory Data Warehouse
* •
Shallow processing: Insights from real-time market news extracted by the LLM. Updated daily.
* •
Intermediate processing: Insights from 10-Q filings extracted by the LLM. Updated quarterly.
* •
Deep processing: Insights from 10-K filings, summarized by the LLM. Updated yearly.
* •
Extended reflections for the stock in response to the trading inquiry, including FinMem’s cumulative trading returns, decision-making processes, trade volumes, and underlying reasons. Updated daily.

Table 1: Raw data and memory warehouses of FinMem

Figure 4: The distribution of news scraped from the Alpaca News API for the five stocks in the experiments.
### 4.2 Baseline And Comparative models:
We assess FinMem’s trading performance in comparison to five advanced
algorithmic agents and a commonly accepted baseline trading strategy. Among
these, three models employ Deep Reinforcement Learning (DRL) approaches, while the remaining two are based on LLMs. Brief descriptions of each are provided below:
Buy-and-Hold strategy (B&H):
A passive investment approach, where an investor purchases stocks and holds
onto them for an extended period regardless of market fluctuations, is
commonly used as a baseline for comparison of stock trading strategies.
DRL trading agents:
As FinMem is implemented and examined on the basis of single-stock trading with discrete trading actions, we choose three advanced DRL algorithms that fit the same scenario and have shown strong performance in the work of Liu et al. [28, 26]. The DRL trading agents only take numeric features as inputs.
* •
Proximal Policy Optimization (PPO): PPO [42] is employed in stock trading due
to its stability and efficiency. One salient advantage of PPO is that it
maintains a balance between exploration and exploitation by bounding the
policy update, preventing drastic policy changes.
* •
Deep Q-Network (DQN): DQN [32] is an adaptation of Q-learning that can be
used to optimize investment strategies. Unlike traditional Q-learning that
relies on a tabular approach for storing Q-values, DQN generalizes Q-value
estimation across states using deep learning, making it more scalable for
complex trading environments.
* •
Advantage Actor-Critic (A2C): A2C [31] is applied to optimize trading actions
in the financial environment. It operates by simultaneously updating both the
policy (actor) and the value (critic) functions, providing a balance between
exploration and exploitation.
LLM trading agents:
We evaluate FinMem against two LLM agents in the context of stock trading. The
first LLM agent, known for its proficiency in general-purpose tasks, serves as
a baseline. The second agent, a leading-edge LLM in trading, has been
acclaimed for its promising performance in stock market operations.
* •
General-purpose Generative Agents – GA: The generative AI agent by Park et al.
[36], originally intended to simulate realistic human behavior and make
everyday decisions, has been adapted here for specific stock trading tasks.
This agent’s architecture includes a memory module that employs recency,
relevance, and importance metrics to extract pivotal memory events for
informed decision-making. However, it does not provide a layered memory module
to effectively differentiate the time sensitivities unique to various types of
financial data. Additionally, although it features a profiling module to
define agent attributes like professional background, the model does not
include a mechanism for self-adaptive risk preference. In our experiments, we
modified the original prompt template created by Park et al., which was
intended for general daily tasks, to suit financial investment tasks. The
textual elements of this revised template closely align with those of FinMem,
with the exception of two components that are absent in this version of
general-purpose Generative Agents.
* •
LLM trading agents – FinGPT: A novel open-source LLM framework specialized for
converting incoming textual and numeric information into informed financial
decision-making, introduced by Yang et al. [55]. It claims superiority over
the traditional buy-and-hold strategy.
### 4.3 Evaluation Metrics:
Figure 5: Cumulative Return comparison over time between FinMem and other algorithmic agents across five stocks.
Figure 6: Cumulative Return comparison over time between FinMem and other algorithmic agents on Coinbase Global, Inc. (COIN).
We employ five widely used metrics in finance to compare the investment rewards of FinMem against those of other algorithmic trading agents; a minimal sketch computing these metrics follows the list. Here are their introductions:
* •
Cumulative Return [21]: Cumulative Return is a key trading performance metric
because it provides a comprehensive insight into investment performance,
especially for strategies that emphasize long-term growth and reinvestment.
The effectiveness of different investment strategies is evaluated based on
their Cumulative Returns, which reflect the total change in value over time.
In this study, we compute Cumulative Returns over the specified period by
summing daily logarithmic returns, as outlined in Equation 7. This method is
widely accepted in the finance area due to its ability to precisely capture
minor price fluctuations and symmetrically address gains and losses. In
essence, a higher Cumulative Return typically indicates a more effective
strategy.
$\textbf{Cumulative Return}=\sum_{t=1}^{n}r_{t}=\sum_{t=1}^{n}\left[\ln\left(\frac{p_{t+1}}{p_{t}}\right)\cdot\text{action}_{t}\right],$ (7)
where $r_{t}$ represents the logarithmic return realized on day $t+1$, $p_{t}$ is the closing price on day $t$, $p_{t+1}$ is the closing price on day $t+1$, and $\text{action}_{t}$ denotes the trading decision made by the model for that day.
* •
Sharpe Ratio [43]: Sharpe Ratio is another core metric for evaluating
investment performance and adjusting returns for risk. It is calculated by
dividing the portfolio’s average excess return ($R_{p}$) over the risk-free
rate ($R_{f}$) by its volatility ($\sigma_{p}$), as shown in Equation 8. This
metric adjusts returns for risk, with a higher ratio indicating better risk-
adjusted performance. Essential in comparing different portfolios or
strategies, it contextualizes performance against similar investments.
Although a Sharpe Ratio above 1 is typically considered favorable and above 2
as excellent, these benchmarks can vary depending on the context of
comparison.
$\textbf{Sharpe Ratio}=\frac{R_{p}-R_{f}}{\sigma_{p}}$ (8)
* •
Annualized Volatility and Daily Volatility [8]: Annualized Volatility (Annum-Volatility) is calculated as the Daily Volatility (the standard deviation of daily logarithmic returns) multiplied by the square root of the typical number of trading days in a year (252), as outlined in Equation 9, and is vital for assessing
investment risk. This measure reflects the extent of fluctuation in a security
or market index’s returns over a year, indicating potential deviations from
average returns. It’s especially relevant for investors with specific risk
profiles, such as those who are risk-averse, who may prefer portfolios
demonstrating lower annualized volatility.
$\textbf{Annum-Volatility}=\textbf{Daily Volatility}\times\sqrt{252}$ (9)
* •
Max Drawdown [1]: Max Drawdown is a metric for assessing risk. It represents
the most significant decrease in a portfolio’s value, from its highest
($P_{\text{peak}}$) to its lowest point ($P_{\text{trough}}$) until a new peak
emerges, detailed in Equation 10. Indicative of investment strategy
robustness, a smaller Max Drawdown suggests reduced risk.
$\textbf{Max Drawdown}=\max\left(\frac{P_{\text{peak}}-P_{\text{trough}}}{P_{\text{peak}}}\right)$ (10)
In our experiments and ablation studies, we recorded the metric outcomes as an
average from five repeated trials.
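As referenced above, a minimal sketch computing the metric families of Equations 7-10 from a series of daily strategy log returns; the daily-basis Sharpe Ratio and the variable names are our assumptions, since annualization conventions vary.

```python
import numpy as np

def trading_metrics(log_returns, risk_free=0.0):
    """Compute the metrics of Equations 7-10 from daily strategy log
    returns r_t = ln(p_{t+1}/p_t) * action_t."""
    r = np.asarray(log_returns, dtype=float)
    cumulative = r.sum()                                   # Eq. 7
    daily_vol = r.std(ddof=1)
    sharpe = (r.mean() - risk_free) / daily_vol            # Eq. 8, daily basis
    annum_vol = daily_vol * np.sqrt(252)                   # Eq. 9
    wealth = np.exp(np.cumsum(r))                          # portfolio value path
    peak = np.maximum.accumulate(wealth)
    max_drawdown = ((peak - wealth) / peak).max()          # Eq. 10
    return cumulative, sharpe, daily_vol, annum_vol, max_drawdown
```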
| Ticker | Model | Cumulative Return (%) | Sharpe Ratio | Daily Volatility (%) | Annualized Volatility (%) | Max Drawdown (%) |
|---|---|---|---|---|---|---|
| TSLA | Buy and Hold | -18.6312 | -0.5410 | 4.4084 | 69.9818 | 55.3208 |
| TSLA | FinMem | 61.7758* | 2.6789 | 2.9522 | 46.8649 | 10.7996 |
| TSLA | Generative Agents | 13.4636 | 0.5990 | 2.8774 | 45.6774 | 24.3177 |
| TSLA | FinGPT | -7.4554 | -0.2795 | 3.4145 | 54.2027 | 42.3993 |
| TSLA | A2C | 13.7067 | 0.3979 | 4.4096 | 70.0009 | 52.3308 |
| TSLA | PPO | 1.2877 | 0.0374 | 4.4110 | 70.0232 | 54.3264 |
| TSLA | DQN | 33.3393 | 0.9694 | 4.4027 | 69.8900 | 52.0033 |
| NFLX | Buy and Hold | 35.5111 | 1.4109 | 3.1964 | 50.7410 | 20.9263 |
| NFLX | FinMem | 36.4485* | 2.0168 | 2.2951 | 36.4342 | 15.8495 |
| NFLX | Generative Agents | 32.0058 | 1.5965 | 2.5460 | 40.4168 | 16.9893 |
| NFLX | FinGPT | 9.0090 | 0.4266 | 2.6819 | 42.5732 | 28.2705 |
| NFLX | A2C | 14.6155 | 0.5788 | 3.2071 | 50.9112 | 25.0184 |
| NFLX | PPO | 8.4121 | 0.3330 | 3.2086 | 50.9344 | 25.0184 |
| NFLX | DQN | -12.2067 | -0.4833 | 3.2078 | 50.9217 | 28.7017 |
| AMZN | Buy and Hold | -10.7739 | -0.4980 | 2.7697 | 43.9674 | 33.6828 |
| AMZN | FinMem | 4.8850* | 0.2327 | 2.6872 | 42.6576 | 22.9294 |
| AMZN | Generative Agents | -13.9271 | -0.9981 | 1.7864 | 28.3576 | 27.7334 |
| AMZN | FinGPT | -29.6781 | -2.1756 | 1.7464 | 27.7225 | 28.4838 |
| AMZN | A2C | -6.3591 | -0.2938 | 2.7706 | 43.9819 | 26.1275 |
| AMZN | PPO | -8.4194 | -0.3891 | 2.7702 | 43.9761 | 33.6828 |
| AMZN | DQN | -29.9820 | -1.3906 | 2.7603 | 43.8177 | 38.3740 |
| MSFT | Buy and Hold | 14.6949 | 0.8359 | 2.2326 | 35.4411 | 15.0097 |
| MSFT | FinMem | 23.2613* | 1.4402 | 2.0512 | 32.5617 | 14.9889 |
| MSFT | Generative Agents | -18.1031 | -1.6057 | 1.4318 | 22.7285 | 24.2074 |
| MSFT | FinGPT | 5.7356 | 0.4430 | 1.6442 | 26.1008 | 12.8459 |
| MSFT | A2C | 0.4598 | 0.0261 | 2.2357 | 35.4913 | 23.6781 |
| MSFT | PPO | 12.8067 | 0.7282 | 2.2333 | 35.4532 | 19.5355 |
| MSFT | DQN | 14.7397 | 0.8385 | 2.2326 | 35.4408 | 25.1845 |
| COIN | Buy and Hold | -30.0071 | -0.5150 | 6.7517 | 107.1795 | 60.5084 |
| COIN | FinMem | 34.9832* | 0.7170 | 5.6538 | 89.7515 | 35.7526 |
| COIN | Generative Agents | 3.4627 | 0.0896 | 4.4783 | 71.0908 | 32.0957 |
| COIN | FinGPT | -88.7805 | -1.9507 | 5.2736 | 83.7153 | 73.5774 |
| COIN | A2C | - | - | - | - | - |
| COIN | PPO | - | - | - | - | - |
| COIN | DQN | - | - | - | - | - |
Table 2: Overall trading performance comparison during the testing period between FinMem and other algorithmic agents across five stocks. * indicates that the result of the Wilcoxon signed-rank test is statistically significant. Bold numbers in this and subsequent tables signify the best performance for the respective metrics.

Figure 7: Cumulative Return of FinMem on trading Tesla, Inc. (TSLA) stock over an extended testing period.
## 5 Experiments
### 5.1 Implementation Details:
In the Trading Agents Comparison, FinMem employs GPT-4-Turbo as its backbone
algorithm. The temperature parameter of the model was set at 0.7 to maintain a
balance between response content consistency and model creativity. It was
trained on financial data from August 17, 2021, to October 05, 2022, and
underwent testing with data from October 06, 2022, to April 10, 2023. The
training period was chosen to account for the seasonal nature of corporate
financial reporting and the duration of data retention in FinMem’s memory
module. The selected training duration ensures the inclusion of at least one
publication cycle of either Form 10-Q, classified as intermediate memory, or
Form 10-K, regarded as deep memory, or in some instances, both. This strategy
ensures that the experiences retained in FinMem are still influential during
the testing phase for a significant period. Additionally, the training
duration allowed FinMem sufficient time to establish inferential links between
financial news, market indicators, and stock market trends, thereby
accumulating substantial experience. Furthermore, we set the number of top
memory events retrieved from each layer of long-term memory at 5. We ran
FinMem using each of the three available risk inclination settings. The
reported performance outcomes are based on the setting that achieved the
highest cumulative return during the testing phase.
To maintain consistency in the comparison, the training and testing phases for
the other two LLM-based agents were aligned with those of FinMem. For
parameters of other LLM-based agents that are not encompassed by FinMem’s
configuration, they were kept in accordance with their original settings as
specified in their respective source codes.
Considering that DRL algorithms need extensive training data for stable and
converged results, and given our daily evaluation of trading performance, we
extended the DRL agents’ training period to roughly a 10-year span, from
January 1, 2012, to October 05, 2022, for a fair comparison. The testing
period was kept consistent with the other models. The DRL algorithms were
implemented using Stable Baselines 3 [41].
FinMem’s performance was benchmarked against that of the most effective
comparative model, using Cumulative Return and Sharpe Ratio as the primary
evaluation metrics. The statistical significance of FinMem’s superior
performance was ascertained through the non-parametric Wilcoxon signed-rank
test, which is particularly apt for non-Gaussian distributed data.
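A minimal sketch of this test, assuming SciPy; the return series here are placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
finmem_returns = rng.normal(0.002, 0.02, 120)      # placeholder daily series
runner_up_returns = rng.normal(0.000, 0.02, 120)   # placeholder daily series

stat, p_value = wilcoxon(finmem_returns, runner_up_returns)
print(f"Wilcoxon W={stat:.1f}, p={p_value:.3f}")   # significant if p < 0.05
```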
### 5.2 Algorithmic Trading Agents Comparison (RQ1 & RQ2)
In this experiment, we assess the stock trading performance of FinMem against
other models, focusing on stocks from five companies in different trading
sectors: Tesla, Inc. (TSLA), Netflix, Inc. (NFLX), Amazon.com, Inc. (AMZN),
Microsoft Corporation (MSFT), and Coinbase Global, Inc. (COIN). The
performance of all algorithmic trading agents across five key metrics is
consolidated in Table 2. Given the pivotal role of Cumulative Return in
evaluating trading performance over time, we present detailed time series
plots in Figure 5 and Figure 6. It’s important to note that the trading
performance of FinMem for COIN was exclusively compared with LLM trading
agents and the baseline. This is because Coinbase Global, Inc. completed its
IPO in April 2021 and, as a result, had not accumulated enough trading data to
facilitate stable outcomes with Deep Reinforcement Learning (DRL) algorithms.
These plots illustrate the changes in Cumulative Return for each of the five
companies throughout the testing phase, offering an in-depth comparison of
performance.
In response to RQ1, the trading outcomes presented in Table 2 reveal that
FinMem outperforms all other algorithmic trading agents and the B&H baseline
strategy in terms of Cumulative Return and Sharpe Ratio. FinMem’s superiority
is statistically significant when compared to the second-best trading
strategy. Specifically, for TSLA and NFLX, FinMem’s strategy achieves Sharpe
Ratios exceeding $2.0$ and Cumulative Returns surpassing $0.35$ while
maintaining the lowest Volatility and Max Drawdown. These indicators
underscore FinMem’s ability to generate higher returns per unit of risk. In
the case of MSFT and NFLX, FinMem also records a Sharpe Ratio above $1.0$ and
a Cumulative Return over $0.2$, coupled with relatively low Volatility and Max
Drawdown, demonstrating its impressive trading performance. For AMZN and COIN,
FinMem consistently delivers positive Cumulative Returns and superior Sharpe
Ratios, outperforming other strategies that yield negative values for these
metrics. Additionally, its Volatility and Max Drawdown are on the lower end.
Hence, these results collectively demonstrate FinMem’s robust trading
performance across a diverse range of trading sectors. Specifically, FinMem
exhibits superior performance compared to the two other LLM agents in our
study, FinGPT and the general-purpose generative agent developed by Park et
al. This underscores the effectiveness of FinMem’s unique profiling and memory
structure, which are particularly tailored for LLM agents dealing with
financial data, significantly enhancing their investment decision-making
capabilities.
Figure 8: The optimal risk inclination for FinMem when trading different
stocks.
In response to RQ2, the main challenge for DRL trading agents is that they
require training data with a large volume and extended time span, which are
hard to achieve when operating on stocks with limited historical data. As
shown in Table 3, our experiments reveal that FinMem achieves superior trading
performance with a much shorter training duration compared to DRL trading
agents trained on data spanning nearly a decade. This efficiency makes FinMem
particularly useful for newly public companies like Coinbase Global, Inc.,
which have limited trading histories. DRL agents often face convergence issues
due to inadequate training data in such cases. Moreover, even among LLM-based
trading agents suited for shorter training periods, FinMem’s performance
stands out, as illustrated in Figure 6.
To further assess FinMem’s adaptability to limited training data, we narrowed
the training period down to an even shorter period, spanning from August 17,
2021, to February 10, 2022. We then extended the testing phase to cover from
February 11, 2022, to April 25, 2023. This evaluation focused on the stock of
Tesla, Inc., which has the largest volume of news data. The trading
performance of FinMem during this period is depicted in Figure 7. Remarkably,
using less than six months of daily frequency data for training, which
encompassed the publication of one Form 10-K and one Form 10-Q, FinMem
consistently ranked high in gains and attained the highest cumulative return
after the latter half of December 2022.
The consistently strong trading performance of FinMem can be attributed to its
innovative profiling and memory module design. This design enables FinMem to
effectively integrate, comprehend, and prioritize key information from both
textual and numerical data. The flexibility of FinMem’s profiling module in
selecting risk inclinations plays a pivotal role in its ability to both
exploit rising market trends and safeguard assets during downturns. A prime
example is TSLA, which achieved its best trading results under a self-adaptive
risk setting in FinMem. This configuration enables FinMem to pursue a
conservative and cautious strategy when facing negative short-term cumulative
returns. On the other hand, with positive short-term returns, FinMem switches
to an optimistic and assertive approach, thus avoiding excessive passivity.
This self-adaptive risk inclination proved effective for most stocks, apart
from MSFT, as shown in Figure 8. For MSFT, a risk-seeking inclination was most
beneficial, resonating with its generally bullish trend in the stock market.
Additionally, the memory module’s core features, including varied retention
times for different information types and critical memory transitions, equip
FinMem to capture essential information for well-informed investment
decisions.
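A minimal sketch of the self-adaptive switch described above; the three-day lookback window is an illustrative assumption rather than FinMem’s documented setting.

```python
# Self-adaptive risk inclination: cautious after negative short-term
# cumulative returns, assertive after positive ones; the lookback
# length is an illustrative assumption.
def self_adaptive_inclination(daily_returns: list, lookback: int = 3) -> str:
    short_term_cum = sum(daily_returns[-lookback:])
    return "risk-averse" if short_term_cum < 0 else "risk-seeking"
```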
## 6 Ablation Studies
We conducted three distinct ablation studies to evaluate key component
alternatives in FinMem. These studies concentrated on the backbone algorithm,
the memory module’s cognitive capacity, and the character setting in the
profiling module, particularly examining the aspect of risk inclination. These
studies were done using the stock of Tesla, Inc., with a more compact training
period from March 14, 2022, to June 15, 2022, and a testing period from June
16, 2022, to December 28, 2022. This shorter duration was chosen for budgetary
efficiency, yet it remains sufficient to differentiate the functionality of
each component.
### 6.1 FinMem Backbone Algorithm Comparison (RQ3)
Metric | B&H | GPT 3.5-Turbo | GPT4 | GPT4-Turbo | davinci-003 | Llama2-70b-chat
---|---|---|---|---|---|---
Cumulative Return (%) | -66.9497 | 16.1501 | 62.6180 | 54.6958 | 1.6308 | -52.7233
Sharpe Ratio | -2.0845 | 2.1589 | 2.2251 | 2.4960 | 0.8515 | -2.8532
Daily Volatility (%) | 3.8050 | 0.8862 | 3.3339 | 2.5960 | 0.2269 | 2.1891
Annualized Volatility (%) | 60.4020 | 14.0683 | 52.9237 | 41.2100 | 3.6018 | 34.7503
Max Drawdown (%) | 67.3269 | 1.1073 | 17.4012 | 12.5734 | 0.8408 | 44.7168
Table 3: Comparison of trading performance during the testing period for
FinMem using different LLMs as backbone algorithms.
In our first study, we evaluated the trading performance of FinMem using
various LLMs as its backbone algorithms. The LLMs under consideration included
davinci-003, GPT 3.5-Turbo, GPT4, GPT4-Turbo, and Llama2-70b-chat. The
parameter settings were consistent with its optimal performance in the
comparative experiment detailed in Section 5, and the risk inclination was
configured to be self-adaptive. All other model settings were maintained as
outlined in Section 5.1. The results of this evaluation are compiled in Table
3.
In response to RQ3, the findings demonstrate
that FinMem, powered by GPT-4 and GPT-4 Turbo, delivered superior trading
results during the test phase. Specifically, GPT-4 recorded the highest
cumulative return, while GPT-4-Turbo exhibited the most favorable Sharpe
Ratio. GPT 3.5-Turbo’s performance was also noteworthy, following closely
behind. As depicted in Figure 9, though slightly below the market baseline
(B&H), FinMem with GPT-4-Turbo led all backbone variants in cumulative returns before October 2022.
This period was characterized by relative stability and a modest upward trend
in TSLA stock. After October 2022, with TSLA undergoing increased volatility
and a notable downward trend, the cumulative return trajectory for FinMem with
GPT-4-Turbo exhibited significantly lower volatility and sustained stable
returns not markedly lower than those of GPT-4. These results indicate that
GPT-4 Turbo is the most suitable backbone algorithm for FinMem.
FinMem configured with davinci-003 and Llama2-70b-chat exhibited the lowest
Annualized Volatility and Max Drawdown, yet their Cumulative Return and Sharpe
Ratio were underwhelming. As illustrated in Figure 9, both models defaulted to
a “Hold” strategy beyond a certain point during periods of intense fluctuation
in TSLA stock. The unsatisfactory performance of davinci-003 may be attributed
to its limited capability, as an earlier generation language model, to capture
and understand nuanced yet decisive information.
We selected Llama2-70b-chat as it was deemed to possess stronger in-context
learning and instruction-following capabilities compared to other Llama family
models with fewer parameters, as noted in Zhao et al. [58]. Nonetheless, in
the context of stock trading, it still demonstrated challenges in adequately
comprehending key messages necessary for effective trading decisions. The
comparatively poorer performance of Llama2-70b-chat can also be attributed to
its shorter context window, especially when compared to the GPT models. When
integrated with FinMem, it needs to simplify prompts and shorten the length of
retrieved memory insights, which could potentially result in some loss of
context. The exceptional trading result demonstrated by GPT-4-Turbo across all
models was a main factor in choosing it as the backbone algorithm for FinMem
in our earlier comparative analysis with other algorithmic trading agents.
Figure 9: Comparison of overall Cumulative Returns over time for FinMem using
different LLMs as backbone algorithms.
### 6.2 Influence of varying the FinMem character design (RQ4)
In our second study, we focused on evaluating the influence of FinMem’s
profiling module on its trading effectiveness. Specifically, our assessment
centered on the effects of customizing trader profiles according to specific
stock trading, with a particular focus on risk inclination. As depicted in
Figure 8, we equipped FinMem with three distinct risk profiles: risk-seeking,
risk-averse, and a self-adaptive character. We executed a comparative analysis
of FinMem’s performance across these risk profiles, maintaining consistency in
all other settings as outlined in Section 5.1.
In response to RQ4, Table 4 delineates the varied trading performance across
different risk profiles. The self-adaptive profile enabled FinMem to achieve
the most favorable trading performance, as it was the only one to secure a
positive Cumulative Return and a Sharpe Ratio exceeding 2.0, along with the
least Max Drawdown by the end of the testing phase. Figure 10 illustrates
FinMem’s capacity to adeptly navigate substantial stock price volatility and
to strategically modulate its trading behavior when necessary. In contrast,
the risk-seeking profile, while beneficial during a stable or bullish market
as evidenced by MSFT’s performance in Figure 5, exhibited increased volatility
and a decline in the face of a market downturn. The risk-averse profile, on
the other hand, maintained a more conservative stance, often opting to hold
positions. This approach resulted in a Cumulative Return trajectory that
generally lagged behind the market baseline, reflecting a degree of
overcaution that limited trading activity and potential gains, particularly in
a bullish market.
Metric | B&H | Self Adaptive | Risk Seeking | Risk Averse
---|---|---|---|---
Cumulative Return (%) | -66.9497 | 54.6958 | -19.4132 | -12.4679
Sharpe Ratio | -2.0845 | 2.4960 | -0.7866 | -1.5783
Daily Volatility (%) | 3.9527 | 2.7419 | 3.2722 | 1.7744
Annualized Volatility (%) | 3.8050 | 2.5960 | 2.9236 | 0.9358
Max-Drawdown (%) | 67.3269 | 12.5734 | 45.0001 | 15.9882
Table 4: Comparison of overall trading performance during the testing period
with different risk inclination settings in FinMem’s profiling module. Figure
10: Comparison of Cumulative Return over time with different risk inclination
settings in FinMem’s profiling module.
### 6.3 Impact of adjusting the capacity of FinMem working memory (RQ5)
In our third study, we explored whether appropriately tuning the memory
retrieval bandwidth of FinMem could enhance its trading performance. This
bandwidth is tied to the working memory’s capacity within its memory module.
As depicted in Figure 2, FinMem retrieves the top-$K$ memory events from its
long-term memory in response to a trading inquiry. The working memory capacity
is thus set at $3\times K$, mirroring the human cognitive system’s limit of
processing immediate messages upon specific stimuli ($3$ refers to the three
processing layers in long-term memory). By varying the $K$ hyperparameter,
FinMem can expand this capacity far beyond the human cognitive scope. We aimed
to determine whether such flexibility in adjusting memory bandwidth translates
to improvements in FinMem’s performance.
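The retrieval step can be sketched as follows (see Table 5 for the resulting performance); the relevance scoring function is an illustrative stand-in for FinMem’s actual retrieval score.

```python
# Working-memory retrieval sketch: top-K events from each of the three
# long-term layers, so the working memory holds 3*K events in total;
# "score" stands in for FinMem's actual retrieval scoring function.
import heapq

def retrieve_working_memory(layers: dict, score, K: int = 5) -> list:
    working_memory = []
    for layer in ("shallow", "intermediate", "deep"):
        working_memory.extend(heapq.nlargest(K, layers[layer], key=score))
    return working_memory  # capacity = 3 * K
```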
Metric | B&H | Top 1 | Top 3 | Top 5 | Top 10
---|---|---|---|---|---
Cumulative Return (%) | -66.9497 | 52.0936 | 29.4430 | 54.6958 | 79.4448
Sharpe Ratio | -2.0845 | 1.8642 | 1.1214 | 2.4960 | 2.7469
Daily Volatility (%) | 3.8050 | 3.3105 | 3.1105 | 2.5960 | 3.4262
Annualized Volatility (%) | 60.4020 | 52.5529 | 49.3779 | 41.2100 | 54.3891
Max Drawdown (%) | 67.3269 | 25.2355 | 27.0972 | 12.5734 | 17.1360
Table 5: Comparison of overall trading performance during the testing period
with different configurations of working memory capacity.
As demonstrated in Table 5, we adjusted the hyperparameter $K$ to alter the
number of memory events retrieved from shallow, intermediate, and deep long-
term memory layers in FinMem. We tested $K$ values of 1, 3, 5, and 10,
exploring FinMem’s working memory capabilities at levels below, near, and
above the human cognitive limit. For all these $K$ settings, we maintained a
self-adaptive risk inclination, while other settings were consistent with
those described in Section 5.1.
Across all $K$ configurations, FinMem outperformed the Buy & Hold baseline,
indicating the effectiveness of its memory module in processing diverse
information and capturing critical events, which subsequently enhanced its
trading performance, as evidenced by positive Cumulative Returns and Sharpe
Ratios. Notably, higher $K$ values, like 5 and 10, enabled FinMem to achieve
the best Cumulative Returns and Sharpe Ratios exceeding 2.0. With $K$ set to
1, FinMem still performed moderately well by capturing the most critical
memory events of each layer.
An in-depth analysis in Figure 11, which shows the Cumulative Return over time
for various $K$ settings, reveals that a $K$ value of 5 is optimal for trading
TSLA stock, consistently delivering robust performance with the lowest
Volatility and Max-Drawdown. Before mid-October 2022, when the stock market
was relatively stable and slightly upward, FinMem’s trading actions aligned
well with market trends (referring to B&H) and avoided significant losses.
During periods of high volatility and continuous downturns (post-mid-October
2022), it maintained earnings by reducing “Buy” actions and favoring more
“Hold” and “Sell” strategies. However, setting $K$ to 10, while effective
during market volatility, resulted in significant losses in stable market
conditions. The issue may stem from the disproportionately loose capacity
constraints on incoming information relative to the volume of incoming data.
The broad memory retrieval bandwidth might have mixed trivial messages with
critical ones, hampering FinMem’s decision precision. This challenge becomes
especially evident in neutral market conditions, where the influx of
information includes a mix of varying market sentiments and trends.
Addressing RQ5, appropriately tuning the number of memory events (Top-$K$) in
the FinMem memory module can significantly enhance its trading performance.
The aforementioned study illustrates that FinMem can achieve optimal results
by effectively assimilating key signals from a sufficient quantity of filtered
memories across each layer. However, the optimal value for $K$ may vary
depending on the volume and quality of incoming information.
Figure 11: Cumulative Return over time with different FinMem working memory
capacities.
## 7 Conclusion and future work
In this paper, we introduce FinMem, an innovative automated trading agent
framework featuring an adjustable cognitive memory structure and dynamic
character design. Our research demonstrates its capacity to enhance stock
trading performance substantially using real-world financial datasets.
Additionally, the efficacy of each critical component within FinMem is
thoroughly demonstrated in our ablation studies, highlighting their roles in
optimizing trading outcomes.
Its unique features of human-like cognitive memory modules and dynamic
character design enable it to tackle the complexities of financial
environments and respond aptly to new situations. Compared to other LLM
trading agents, FinMem’s memory module equips it with the capability to better
comprehend financial data of varying timeliness and to organize it into
self-evolving long-term memory layers. The dynamic character design endows
FinMem with critical professional insights, enabling efficient filtering of
impactful messages from incoming financial data for trading actions.
Additionally, the integration of multiple risk profiles enhances FinMem’s
adaptability to a range of market conditions.
FinMem’s exceptional performance underscores its remarkable ability to
transform diverse financial data into well-informed investment strategies. Its
proficiency in integrating various financial data types is further accentuated
by a notably reduced training duration, which offers advantages for trading
with newly established companies. In our approach, we utilized a limited range
and quality of financial news and reports, employing general-purpose LLMs as
the backbone algorithms. However, FinMem is fully compatible with LLMs
specifically fine-tuned for financial applications. We anticipate that its
trading efficacy will be elevated further with access to a more comprehensive
and higher-quality dataset, coupled with LLMs tailored specifically for
financial contexts.
While primarily designed for financial decision-making, the FinMem framework
boasts a versatility that extends to domains such as IT consulting and
business reporting, where actions are driven by time-sensitive information.
Looking ahead, an intriguing direction for development is the creation of a
multi-agent trading system, rooted in the FinMem platform, aimed at enhancing
investment portfolio optimization. This system, featuring diverse professional
backgrounds in its profiling modules, enables concurrent operations and
trading across a variety of financial products. It dynamically adjusts trading
targets based on a sequential assessment of key trading performance
indicators. Simultaneously, it can reallocate investment proportions in a timely manner
for each financial product category, leveraging peer-to-peer communication and
systematic performance analysis. Supported by LLMs, this multi-agent trading
system is poised to assemble an optimized investment portfolio through its
strategically planned operations.
## References
* [1] Andrew Ang and Joseph Chen “Downside Risk” In _Journal of Portfolio Management_ 29.4, 2003, pp. 103–112
* [2] Lisa P Argyle et al. “Out of one, many: Using language models to simulate human samples” In _Political Analysis_ 31.3 Cambridge University Press, 2023, pp. 337–351
* [3] Loukia Avramelou, Paraskevi Nousi, Nikolaos Passalis and Anastasios Tefas “Deep reinforcement learning for financial trading using multi-modal features” In _Expert Systems with Applications_ Elsevier, 2023, pp. 121849
* [4] Surjeet Balhara et al. “A survey on deep reinforcement learning architectures, applications and emerging trends” In _IET Communications_ Wiley Online Library, 2022
* [5] Fischer Black “Noise” In _The journal of finance_ 41.3 Wiley Online Library, 1986, pp. 528–543
* [6] Yu-Fu Chen and Szu-Hao Huang “Sentiment-influenced trading system based on multimodal deep reinforcement learning” In _Applied Soft Computing_ 112 Elsevier, 2021, pp. 107788
* [7] Tai-Liang Chen “Forecasting the Taiwan stock market with a novel momentum-based fuzzy time-series” In _Review of Economics & Finance_ 2 Better Advances Press, Canada, 2012, pp. 38–50
* [8] John H. Cochrane “Volatility Tests and Efficient Markets: A Review Essay” In _Journal of Monetary Economics_ 22.3, 1988, pp. 463–485
* [9] Fergus IM Craik and Robert S Lockhart “Levels of processing: A framework for memory research” In _Journal of verbal learning and verbal behavior_ 11.6 Elsevier, 1972, pp. 671–684
* [10] Quang-Vinh Dang “Reinforcement learning in stock trading” In _International conference on computer science, applied mathematics and applications_ , 2019, pp. 311–322 Springer
* [11] Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova “Bert: Pre-training of deep bidirectional transformers for language understanding” In _arXiv preprint arXiv:1810.04805_ , 2018
* [12] Robert D Edwards, John Magee and WH Charles Bassetti “Technical analysis of stock trends” CRC press, 2018
* [13] Kawin Ethayarajh “How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings” In _arXiv preprint arXiv:1909.00512_ , 2019
* [14] Thomas G Fischer “Reinforcement learning in financial markets-a survey”, 2018
* [15] Samuel J Gershman and Bence P Ölveczky “The neurobiology of deep reinforcement learning” In _Current Biology_ 30.11 Elsevier, 2020, pp. R629–R632
* [16] Yoav Goldberg and Omer Levy “word2vec Explained: deriving Mikolov et al.’s negative-sampling word-embedding method” In _arXiv preprint arXiv:1402.3722_ , 2014
* [17] “Guardrails AI” Open source library for interacting with Large Language Models, https://docs.guardrailsai.com
* [18] Sepp Hochreiter and Jürgen Schmidhuber “Long short-term memory” In _Neural computation_ 9.8 MIT press, 1997, pp. 1735–1780
* [19] Boming Huang et al. “Automated trading systems statistical and machine learning methods and hardware implementation: a survey” In _Enterprise Information Systems_ 13.1 Taylor & Francis, 2019, pp. 132–144
* [20] Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana and Stephen MacNeil “Memory sandbox: Transparent and interactive memory management for conversational agents” In _arXiv preprint arXiv:2308.01542_ , 2023
* [21] John Hull “Risk Management and Financial Institutions” John Wiley & Sons, 2007
* [22] O Jangmin, Jongwoo Lee, Jae Won Lee and Byoung-Tak Zhang “Adaptive stock trading with dynamic asset allocation using reinforcement learning” In _Information Sciences_ 176.15 Elsevier, 2006, pp. 2121–2147
* [23] Jeff Johnson, Matthijs Douze and Hervé Jégou “Billion-scale similarity search with GPUs” In _IEEE Transactions on Big Data_ 7.3 IEEE, 2019, pp. 535–547
* [24] Yifei Li et al. “Making language models better reasoners with step-aware verifier” In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , 2023, pp. 5315–5333
* [25] Mark Liffiton, Brad Sheese, Jaromir Savelka and Paul Denny “Codehelp: Using large language models with guardrails for scalable support in programming classes” In _arXiv preprint arXiv:2308.06921_ , 2023
* [26] Xiao-Yang Liu et al. “FinRL-Meta: Market environments and benchmarks for data-driven financial reinforcement learning” In _Advances in Neural Information Processing Systems_ 35, 2022, pp. 1835–1849
* [27] Xiao-Yang Liu et al. “FinRL: A deep reinforcement learning library for automated stock trading in quantitative finance” In _arXiv preprint arXiv:2011.09607_ , 2020
* [28] Xiao-Yang Liu, Hongyang Yang, Jiechao Gao and Christina Dan Wang “FinRL: Deep reinforcement learning framework to automate trading in quantitative finance” In _ACM International Conference on AI in Finance (ICAIF)_ , 2021
* [29] Adrian Millea “Deep reinforcement learning for trading—A critical survey” In _Data_ 6.11 MDPI, 2021, pp. 119
* [30] George A Miller “The magical number seven, plus or minus two: Some limits on our capacity for processing information.” In _Psychological review_ 63.2 American Psychological Association, 1956, pp. 81
* [31] Volodymyr Mnih et al. “Asynchronous methods for deep reinforcement learning” In _International conference on machine learning_ , 2016, pp. 1928–1937 PMLR
* [32] Volodymyr Mnih et al. “Playing atari with deep reinforcement learning” In _arXiv preprint arXiv:1312.5602_ , 2013
* [33] Jaap MJ Murre and Joeri Dros “Replication and analysis of Ebbinghaus’ forgetting curve” In _PloS one_ 10.7 Public Library of Science San Francisco, CA USA, 2015, pp. e0120644
* [34] OpenAI “GPT-4 Technical Report”, 2023 arXiv:2303.08774 [cs.CL]
* [35] Joon Sung Park et al. “Generative Agents: Interactive Simulacra of Human Behavior” In _Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology_ , UIST ’23 San Francisco, CA, USA: Association for Computing Machinery, 2023 DOI: 10.1145/3586183.3606763
* [36] Joon Sung Park et al. “Social simulacra: Creating populated prototypes for social computing systems” In _Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology_ , 2022, pp. 1–18
* [37] Eero Pätäri and Mika Vilska “Performance of moving average trading strategies over varying stock market conditions: the Finnish evidence” In _Applied Economics_ 46.24 Taylor & Francis, 2014, pp. 2851–2872
* [38] Jeffrey Pennington, Richard Socher and Christopher D Manning “Glove: Global vectors for word representation” In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_ , 2014, pp. 1532–1543
* [39] Tidor-Vlad Pricope “Deep reinforcement learning in quantitative algorithmic trading: A review” In _arXiv preprint arXiv:2106.00123_ , 2021
* [40] Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever “Improving language understanding by generative pre-training” OpenAI, 2018
* [41] Antonin Raffin et al. “Stable-Baselines3: Reliable Reinforcement Learning Implementations” In _Journal of Machine Learning Research_ 22.268, 2021, pp. 1–8 URL: http://jmlr.org/papers/v22/20-1364.html
* [42] John Schulman et al. “Proximal policy optimization algorithms” In _arXiv preprint arXiv:1707.06347_ , 2017
* [43] William F. Sharpe “The Sharpe Ratio” In _The Journal of Portfolio Management_ 21.1, 1994, pp. 49–58
* [44] Yong Shi et al. “Stock trading rule discovery with double deep Q-network” In _Applied Soft Computing_ 107 Elsevier, 2021, pp. 107320
* [45] Theodore Sumers, Shunyu Yao, Karthik Narasimhan and Thomas L Griffiths “Cognitive architectures for language agents” In _arXiv preprint arXiv:2309.02427_ , 2023
* [46] Ron Sun “Desiderata for cognitive architectures” In _Philosophical psychology_ 17.3 Taylor & Francis, 2004, pp. 341–373
* [47] John Sweller “Human cognitive architecture: Why some instructional procedures work and others do not.” American Psychological Association, 2012
* [48] Hugo Touvron et al. “Llama: Open and efficient foundation language models” In _arXiv preprint arXiv:2302.13971_ , 2023
* [49] Rashesh Vaidya “Moving Average Convergence-Divergence (MACD) Trading Rule: An Application in Nepalese Stock Market ‘NEPSE’” In _Quantitative Economics and Management Studies_ 1.6, 2020, pp. 366–374
* [50] Lei Wang et al. “A survey on large language model-based autonomous agents” In _arXiv preprint arXiv:2308.11432_ , 2023
* [51] Lei Wang et al. “RecAgent: A Novel Simulation Paradigm for Recommender Systems” In _arXiv preprint arXiv:2306.02552_ , 2023
* [52] Shijie Wu et al. “Bloomberggpt: A large language model for finance” In _arXiv preprint arXiv:2303.17564_ , 2023
* [53] Xing Wu et al. “Adaptive stock trading strategies with deep reinforcement learning methods” In _Information Sciences_ 538 Elsevier, 2020, pp. 142–158
* [54] Zhuoran Xiong et al. “Practical deep reinforcement learning approach for stock trading” In _arXiv preprint arXiv:1811.07522_ , 2018, pp. 1–7
* [55] Hongyang Yang, Xiao-Yang Liu and Christina Dan Wang “FinGPT: Open-Source Financial Large Language Models” In _arXiv preprint arXiv:2306.06031_ , 2023
* [56] Hongyang Yang, Xiao-Yang Liu, Shan Zhong and Anwar Walid “Deep reinforcement learning for automated stock trading: An ensemble strategy” In _Proceedings of the first ACM international conference on AI in finance_ , 2020, pp. 1–8
* [57] Hongxin Zhang et al. “Building cooperative embodied agents modularly with large language models” In _arXiv preprint arXiv:2307.02485_ , 2023
* [58] Wayne Xin Zhao et al. “A survey of large language models” In _arXiv preprint arXiv:2303.18223_ , 2023
# Selection of Time Headway in Connected and Autonomous Vehicle Platoons under
Noisy V2V Communication
Guoqi Ma, Prabhakar R. Pagilla∗, Swaroop Darbha The authors are with the
Department of Mechanical Engineering, Texas A&M University, College Station,
TX 77843, USA (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; [email protected]).
∗Corresponding author. Part of this work was presented at the 26th IEEE
International Conference on Intelligent Transportation Systems (ITSC), Bilbao,
Spain, 2023 [1].
###### Abstract
In this paper, we investigate the selection of time headway to ensure robust
string stability in connected and autonomous vehicle platoons in the presence
of signal noise in Vehicle-to-Vehicle (V2V) communication. In particular, we
consider the effect of noise in communicated vehicle acceleration from the
predecessor vehicle to the follower vehicle on the selection of the time
headway in predecessor-follower type vehicle platooning with a Constant Time
Headway Policy (CTHP). Employing a CTHP based control law for each vehicle
that utilizes on-board sensors for measurement of position and velocity of the
predecessor vehicle and wireless communication network for obtaining the
acceleration of the predecessor vehicle, we investigate how time headway is
affected by communicated signal noise. We derive constraints on the CTHP
controller gains for predecessor acceleration, velocity error and spacing
error and a lower bound on the time headway which will ensure robust string
stability of the platoon against signal noise. We provide comparative
numerical simulations on an example to illustrate the main result.
###### Index Terms:
Connected and Autonomous Vehicle Platoons, Cooperative Adaptive Cruise Control
(CACC), Constant Time Headway Policy (CTHP), V2V Communication, Signal-to-
Noise Ratio (SNR), Robust String Stability.
## I Introduction
The deployment of connected and autonomous vehicle platoons has the potential
to benefit transportation systems in a profound and comprehensive way [2, 3].
Initially, under the Adaptive Cruise Control (ACC) paradigm facilitated by the
use of onboard sensors (such as radars) for measuring the velocities of and
distances from adjacent vehicles, the vehicle platoon can maintain a constant
inter-vehicular spacing, referred to as Constant Spacing Policy (CSP) [4].
Recently, the benefits of employing advanced vehicular communication
technologies and modern communication protocols have been expounded in great
detail in the literature; for example, Dedicated Short Range Communication
(DSRC) [5], Long Term Evolution (LTE) [6], 5G [7], and V2V communication [8].
In addition to the use of onboard sensors, advanced vehicular wireless
communications can lead to a higher-level of connectivity by incorporating the
acceleration information of other vehicles which has the potential to
significantly improve safety, increase mobility and throughput, and reduce
fuel consumption under the paradigm of Cooperative Adaptive Cruise Control
(CACC) with Constant Time Headway Policy (CTHP) [9].
In addition to internal stability for each vehicle, the connected and
autonomous vehicle platoon should also exhibit robust string stability, i.e.,
robustness of the platoon to uncertain lag, noise, and external disturbances.
In recent decades, there has been extensive research devoted to connected and
autonomous vehicle platoons on a wide range of topics including controller
design [10, 11, 12, 13, 14, 15], communication mechanisms [16, 17, 18],
experimental validation in realistic environments [19, 20], mixed human and
autonomous traffic [21, 22, 23], and string stability analysis [24, 25]. However,
most existing results assume ideal V2V communication which is often not
practical. Among many other factors, signal noise in communication channels is
a key concern, which can be caused by a variety of factors including limited
communication bandwidth [26], cyber-attacks [27], etc. If key control
parameters, such as time headway, are chosen without consideration of signal
noise in communicated signals, then there is a possibility of the onset of
string instability and collisions. Although communication quality has improved
significantly under 5G and more advanced wireless communication, signal noise
propagating through the platoon could have a substantial effect on safety
and performance.
Previous research on imperfect communication focused on how packet drops
affected string stability [28]. In [29, 30], the disturbance estimation and
control problem was investigated, though without considering the quantitative
effect of disturbance on the design parameters. In contrast to prior research
with imperfect communication, this paper considers the following problem:
Given a Signal-to-Noise Ratio (SNR) with V2V communication, what is the lower
bound of the employable time headway for CACC based vehicle platooning? For
this purpose, we consider a CTHP based control law where the feedforward
acceleration of the predecessor vehicle is communicated and contains signal
noise. Since the communication is through an $n$-bit channel, we model the
noise in the acceleration signal as a sum of $n$ binary random variables. To
address the issue of the vehicle closed-loop governing equation being
stochastic due to the noise signal, we develop an equivalent deterministic
governing equation using the averaging procedure described in [28]. Based on
this, we obtain the spacing error propagation equation and show that the
platoon is robustly string stable in the presence of signal noise and
parasitic lag and derive conditions on control gains and time headway under
which the platoon is robustly string stable. We provide numerical simulation
results on a platoon to verify the main result.
The main contributions of the work can be summarized as follows. In the
context of communication from a single predecessor vehicle, this paper
provides an extension to our previous work in [9] by considering signal noise
in communicated acceleration from the predecessor vehicle. We derive a lower
bound on the time headway that is a function of the parasitic lag in the
vehicle dynamics ($\tau_{0}$), acceleration gain ($k_{a}$), and SNR ($\rho$).
This lower bound aids in the selection of the time headway in the presence of
noise. Further, the lower bound reduces to the ideal communication case given
in [9] with no noise in the communicated acceleration signal. In addition, our
formulation and analysis allow for systematic selection of the control gains
as well as time headway for a given signal to noise ratio.
The remainder of the paper is organized as follows. Section II contains
preliminaries including vehicle dynamics and relevant definitions. The main
theoretical results are provided in Section III. An illustrative numerical
example and simulation results are provided in Section IV. Finally, some
concluding remarks are given in Section V.
## II Preliminaries
Consider a string of autonomous vehicles equipped with V2V communication as
illustrated in Fig. 1.
Figure 1: Autonomous and connected vehicle platoon with V2V communication.
The $i$-th vehicle dynamics is given by the model:
$\displaystyle\left\\{\begin{array}[]{*{20}{l}}\ddot{x}_{i}(t)=a_{i}(t),\\\
\tau\dot{a}_{i}(t)+a_{i}(t)=u_{i}(t),\end{array}\right.$ (3)
where $x_{i}(t)$, $a_{i}(t)$, $u_{i}(t)$ represent the position, acceleration,
and control input of the $i$-th vehicle at time instant $t$, and
$i\in\mathcal{N}=\\{1,2,\cdots,N\\}$, where $N$ is the total number of the
following vehicles in the platoon, $\tau$ denotes the parasitic actuation lag.
It is assumed that $\tau$ is uncertain with $\tau\in\left(0,\tau_{0}\right]$,
where $\tau_{0}$ is a positive real constant.
###### Definition II.1
Let $d$ denote the minimum or standstill spacing between adjacent vehicles,
$v_{i}(t)$ denote the velocity of the $i$-th vehicle, $h_{w}$ denote the time
headway, and $e_{i}(t):=x_{i}(t)-x_{i-1}(t)+d$ be the spacing error for the
$i$-th vehicle with respect to the $(i-1)$-th vehicle. Define the velocity
dependent inter-vehicular spacing error for the $i$-th vehicle as:
$\displaystyle\delta_{i}(t)=e_{i}(t)+h_{w}v_{i}(t).$ (4)
###### Definition II.2 ([9])
Let $\delta_{i}(s)$ denote the Laplace transform of $\delta_{i}(t)$ and let
$H(s)$ denote the spacing error propagation transfer function that satisfies
$\delta_{i}(s)=H(s)\delta_{i-1}(s)$. The connected and autonomous vehicle
platoon is said to be robustly string stable if the following two conditions
hold for all $\tau\in(0,\tau_{0}]$: (i) $H(s)$ achieves internal stability and
(ii) the platoon is string stable, i.e., it holds that
$\|\delta_{i}(t)\|_{\infty}\leq\|\delta_{i-1}(t)\|_{\infty}$, or in the
frequency domain
$\displaystyle\|H(j\omega)\|_{\infty}\leq 1.$ (5)
###### Definition II.3
Let $s(t)$ and $n(t)$ denote the transmitted signal and the noise signal,
respectively, and assume $n(t)=0$ when $s(t)=0$. Then, the Signal-to-Noise
Ratio (SNR) for $s(t)\neq 0$ is defined as
$\displaystyle{\rm SNR}=\min\limits_{t>0}20\log\frac{|s(t)|}{|n(t)|},$ (6)
where $\rm log$ denotes the logarithmic function with base 10. From the ${\rm
SNR}$ definition (6), we can obtain that if ${\rm SNR}\coloneqq\varrho$, then
$\rho=\min\limits_{t>0}\frac{|s(t)|}{|n(t)|}=10^{\frac{\varrho}{20}}$, and for
notational convenience, we will use
$\rho=\min\limits_{t>0}\frac{|s(t)|}{|n(t)|}$ instead of ${\rm SNR}$.
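For example, $\varrho=20$ dB corresponds to $\rho=10^{20/20}=10$, while the value $\rho=5$ used in Section IV corresponds to $\varrho=20\log 5\approx 13.98$ dB.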
## III Main Results
### III-A Model of Noise
Let $a_{i-1}(t)$ denote the acceleration signal of the preceding vehicle
($(i-1)$-th) and $w_{i,i-1}(t)>0$ denote the noise factor associated with V2V
communication of the acceleration signal of the $(i-1)$-th vehicle to the
$i$-th vehicle. Considering the noise affected signal, the communicated
acceleration signal available to the $i$-th vehicle is given by
$w_{i,i-1}(t)a_{i-1}(t)$. We define $\rho>1$ as the signal to noise ratio
factor such that $1-\frac{1}{\rho}\leq w_{i,i-1}(t)\leq 1+\frac{1}{\rho}$,
that is, the communicated acceleration signal to the $i$-th vehicle satisfies
$\left|\left(1-\frac{1}{\rho}\right)a_{i-1}(t)\right|\leq\left|w_{i,i-1}(t)a_{i-1}(t)\right|\leq\left|\left(1+\frac{1}{\rho}\right)a_{i-1}(t)\right|$.
An illustration of the admissible region of the noise signal in terms of the
ideal communicated acceleration signal is provided in Fig. 2, and the range of
the communicated signal with respect to the actual signal is provided in Fig.
3.
Figure 2: An illustration of the admissible region for the signal noise in the
communicated acceleration signal ($\rho=5$). Figure 3: An illustration of the
admissible region of the communicated acceleration signal with respect to the
actual one ($\rho=5$).
Considering that the communicated acceleration signal is transmitted through an
$n$-bit communication channel, we model $w_{i,i-1}(t)$ as follows:
$\displaystyle
w_{i,i-1}(t)=\left(1-\frac{1}{\rho}\right)+\frac{1}{\rho}\left(\sum_{j=0}^{n-1}\frac{z_{i,j}(t)}{2^{j}}\right),$
(7)
where $z_{i,j}(t),j=0,\ldots,n-1,$ are independent binary random processes.
Let $\gamma_{i,j}:=\mathbb{E}[z_{i,j}(t)]$. We assume that $\gamma_{i,j}$ is
independent of time, i.e., the characteristics of the noise processes of the
communication channel are time-invariant. However, in order to model wide
variety of noise processes, we do not assume that $\gamma_{i,j}$’s are known a
priori. Note that $\gamma_{i,j}\in(0,1)$ then this noise modeling procedure
will allow us to get a communicated signal that takes any value lying between
$\left(1-\frac{1}{\rho}\right)a_{i,i-1}$ and
$\left(1+\frac{1}{\rho}\right)a_{i,i-1}$.
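As a quick numerical illustration of this model, the following sketch draws samples of $w_{i,i-1}(t)$ and checks that they remain in the admissible band; the bit probabilities $\gamma_{i,j}$ are illustrative values, consistent with their being unknown a priori.

```python
# Sampling the noise factor w_{i,i-1}(t) of (7) from n independent
# Bernoulli bit processes; the gamma values are illustrative.
import numpy as np

def sample_noise_factor(rho: float, gammas: np.ndarray, rng) -> float:
    bits = rng.random(gammas.size) < gammas   # z_{i,j}(t) ~ Bernoulli(gamma_{i,j})
    weights = 0.5 ** np.arange(gammas.size)   # 1/2^j, j = 0, ..., n-1
    return (1 - 1 / rho) + (1 / rho) * float(bits.astype(float) @ weights)

rng = np.random.default_rng(0)
rho, n = 5.0, 16
gammas = rng.random(n)
samples = [sample_noise_factor(rho, gammas, rng) for _ in range(1000)]
assert all(1 - 1 / rho <= w <= 1 + 1 / rho for w in samples)  # admissible band
```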
### III-B Inter-vehicular Spacing Error Propagation Equation
Consider the following CTHP control law for the $i$-th vehicle:
$\displaystyle
u_{i}(t)=k_{a}w_{i,i-1}(t)a_{i-1}(t)-k_{v}(v_{i}(t)-v_{i-1}(t))-k_{p}\delta_{i}(t),$
(8)
then the governing equation for the $i$-th vehicle is given by:
$\displaystyle\tau\dddot{x}_{i}(t)+\ddot{x}_{i}(t)$
$\displaystyle=k_{a}w_{i,i-1}(t)a_{i-1}(t)$ $\displaystyle\quad-
k_{v}(v_{i}(t)-v_{i-1}(t))-k_{p}\delta_{i}(t).$ (9)
The governing equation is stochastic due to the presence of the stochastic
noise signal $w_{i,i-1}(t)$; correspondingly, the state is also
stochastic. To perform robust string stability analysis, we consider an
equivalent deterministic governing equation along the lines described in [28].
By defining the augmented state vector as
$\hat{X}(t)=[x_{0}(t),v_{0}(t),a_{0}(t),x_{1}(t),v_{1}(t),a_{1}(t),\cdots,x_{N}(t),v_{N}(t),a_{N}(t)]$
and letting $\hat{w}(t)$ be the vector of noise signals with its $i$-th
component being $w_{i,i-1}(t)$, and the acceleration input to the lead
vehicle to be $U(t)$, the augmented state equation for the vehicle platoon can
be developed as
$\displaystyle\dot{\hat{X}}(t)=\hat{A}(\hat{w}(t))\hat{X}(t)+BU(t),$ (10)
where the structures of the matrices $\hat{A}$ and $B$ are given in [28].
Through a discretization process for a sufficiently small time interval, it
was shown that
$\displaystyle\mathbb{E}[\hat{A}]=\bar{A},\quad\mathbb{E}[e^{\hat{A}t}]=e^{\bar{A}t},$
(11)
where $\bar{A}=\hat{A}(\bar{w})$ with $\bar{w}$ as the expectation of
$\hat{w}(t)$. The second equality is true because of the specific lower
triangular and banded structure of the governing equations for the CACC case.
Let $\bar{X}=\mathbb{E}[\hat{X}]$; in the following, unless specified
otherwise, we will use a bar to indicate the expected value of the corresponding
random variable. The evolution of the vehicle states in the platoon converges
to the states of the following deterministic state equation:
$\displaystyle\dot{\bar{X}}(t)=\bar{A}\bar{X}(t)+BU(t).$ (12)
Utilizing this procedure, the governing equation for the $i$-th vehicle is
given by:
$\displaystyle\tau\dddot{\bar{x}}_{i}(t)+\ddot{\bar{x}}_{i}(t)$
$\displaystyle=k_{a}\mathbb{E}[w_{i,i-1}(t)]\bar{a}_{i-1}(t)$
$\displaystyle\quad-
k_{v}(\bar{v}_{i}(t)-\bar{v}_{i-1}(t))-k_{p}\bar{\delta}_{i}(t).$ (13)
Let
$\displaystyle\tilde{k}_{a}:=k_{a}\mathbb{E}[w_{i,i-1}(t)]=\left[1-\frac{1}{\rho}+\frac{1}{\rho}\left(\sum_{j=0}^{n-1}\frac{\gamma_{i,j}}{2^{j}}\right)\right]k_{a}.$
(14)
The governing equation for the $i$-th vehicle can be re-written as:
$\displaystyle\tau\dddot{\bar{x}}_{i}(t)+\ddot{\bar{x}}_{i}(t)$
$\displaystyle=\tilde{k}_{a}\ddot{\bar{x}}_{i-1}(t)$ $\displaystyle\quad-
k_{v}(\dot{\bar{x}}_{i}(t)-\dot{\bar{x}}_{i-1}(t))-k_{p}\bar{\delta}_{i}(t).$
(15)
Let
$\bar{\delta}_{i}(t)=\bar{x}_{i}(t)-\bar{x}_{i-1}(t)+d+h_{w}\dot{\bar{x}}_{i}(t)$,
and let $\bar{\delta}_{i}(s)$ be the Laplace transform of $\bar{\delta}_{i}(t)$.
Then, the inter-vehicular spacing error propagation equation can be derived as
$\displaystyle\bar{\delta}_{i}(s)=\tilde{H}(s;\tau)\bar{\delta}_{i-1}(s),$
(16)
where $\tilde{H}(s;\tau)=\tilde{\mathcal{N}}(s)/\mathcal{D}(s)$ is the inter-
vehicular spacing error propagation transfer function with
$\displaystyle\tilde{\mathcal{N}}(s)$
$\displaystyle=\tilde{k}_{a}s^{2}+k_{v}s+k_{p},$ $\displaystyle\mathcal{D}(s)$
$\displaystyle=\tau s^{3}+s^{2}+\gamma s+k_{p},$
where $\gamma:=k_{v}+h_{w}k_{p}$.
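The condition $\|\tilde{H}(j\omega;\tau)\|_{\infty}\leq 1$ can be checked numerically; the sketch below evaluates $|\tilde{H}(j\omega;\tau)|$ on a frequency grid, using the gain values from the example in Section IV and the worst-case $\tilde{k}_{a}=(1+1/\rho)k_{a}$.

```python
# Grid-based check of ||H(jw)||_inf <= 1 for the spacing error
# propagation transfer function defined above (a sketch, not a proof).
import numpy as np

def h_inf_norm(k_a_tilde, k_v, k_p, h_w, tau, n_pts=200_000):
    omega = np.logspace(-4, 3, n_pts)        # frequency grid (rad/s)
    s = 1j * omega
    gamma = k_v + h_w * k_p
    num = k_a_tilde * s**2 + k_v * s + k_p
    den = tau * s**3 + s**2 + gamma * s + k_p
    return np.max(np.abs(num / den))

# Section IV values: rho = 5, k_a = 0.5, so worst-case k_a_tilde = 0.6
print(h_inf_norm(0.6, k_v=0.63, k_p=0.009, h_w=0.95, tau=0.5))  # <= 1
```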
The platoon error propagation equation given by (16) was studied in Theorem
1 of [9], which can be recast for the case considered in this paper as follows:
###### Theorem III.1
The following are true for the platoon given by the inter-vehicular spacing
error propagation equation (16):
1. (a)
$\|\tilde{H}(j\omega;\tau)\|_{\infty}\leq 1$, $\forall\tau\in(0,\tau_{0}]$,
implies that $\tilde{k}_{a}\in(0,1)$;
2. (b)
given any $\tilde{k}_{a}\in(0,1)$, $h_{w}$ satisfying the following:
$\displaystyle h_{w}\geq\frac{2\tau_{0}}{1+\tilde{k}_{a}},$ (17)
there exist $k_{p},k_{v}>0$ such that
$\|\tilde{H}(j\omega;\tau)\|_{\infty}\leq 1$ for all $\tau\in(0,\tau_{0}]$.
### III-C Stability Analysis
###### Theorem III.2
The following are true for the platoon given by inter-vehicular spacing error
propagation equation (16):
1. (a)
$\|\tilde{H}(j\omega;\tau)\|_{\infty}\leq 1$, $\forall\tau\in(0,\tau_{0}]$,
implies that $\tilde{k}_{a}\in(0,1)$.
2. (b)
given any
$\displaystyle k_{a}\in\left(0,\frac{1}{1+\frac{1}{\rho}}\right),$ (18)
and $h_{w}$ satisfying the following:
$\displaystyle
h_{w}>h_{w,lb}(k_{a}):=2\tau_{0}\left(\frac{1-\left(1-\frac{1}{\rho}\right)k_{a}}{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}\right),$
(19)
there exist $k_{p},k_{v}>0$ such that
$\|\tilde{H}(j\omega;\tau)\|_{\infty}\leq 1$ for all $\tau\in(0,\tau_{0}]$;
3. (c)
given any $\rho>1$, the minimizing value, $k_{a}^{*}$, of $k_{a}$ and the
corresponding minimum value, $h_{w,lb}^{*}$, of $h_{w,lb}(k_{a})$ are given
by:
$\displaystyle
k_{a}^{*}=\left(\frac{1-\frac{1}{\sqrt{\rho}}}{1+\frac{1}{\sqrt{\rho}}}\right)\frac{1}{\left(1+\frac{1}{\rho}\right)},$
(20) $\displaystyle
h_{w,lb}^{*}=\tau_{0}\frac{\left(1+\frac{1}{\sqrt{\rho}}\right)^{2}}{\left(1+\frac{1}{\rho}\right)}.$
(21)
Correspondingly, there exist $k_{p},k_{v}>0$ such that
$\|\tilde{H}(j\omega;\tau)\|_{\infty}\leq 1$ for all $\tau\in(0,\tau_{0}]$.
###### Remark III.1
Minimizing $h_{w,lb}$ is useful for improving traffic throughput because a
lower time headway can be employed while still guaranteeing “robust” string
stability.
###### Proof:
For internal stability, $\gamma-\tau k_{p}>0$ by the Routh-Hurwitz criterion on
$\mathcal{D}(s)$. Applying _Theorem III.1_ to the error propagation equation
(16), we infer that:
1. (a)
$\tilde{k}_{a}\in(0,1)$;
2. (b)
$h_{w}\geq\dfrac{2\tau_{0}}{1+\tilde{k}_{a}}$.
Since $\gamma_{i,j}$’s are not known, we can bound $\tilde{k}_{a}$ by
considering their maximum possible values. Hence, we have
$\displaystyle\tilde{k}_{a}\leq\left(1-\frac{1}{\rho}+\frac{2}{\rho}\right)k_{a}=\left(1+\frac{1}{\rho}\right)k_{a}.$
(22)
Thus, if
$\displaystyle k_{a}<\frac{1}{1+\frac{1}{\rho}}\implies\tilde{k}_{a}<1,$ (23)
allowing us to apply part (b) of _Theorem III.1_. We then have
$\displaystyle h_{w}\geq\frac{2\tau_{0}}{1+\tilde{k}_{a}}.$ (24)
Again, since $\tilde{k}_{a}$ is dependent on $\gamma_{i,j}$’s which are not
known a priori, we will consider the worst possible lower bound, i.e., the
lower bound for $h_{w}$ that is a maximum over all possible values of
$\gamma_{i,j}$’s. To address this, we require
$\displaystyle h_{w}$
$\displaystyle\geq\max\limits_{\gamma_{i,0},\ldots,\gamma_{i,n-1}}\frac{2\tau_{0}}{1+\tilde{k}_{a}}.$
(25)
Hence, if the above condition holds, then (24) will hold. Substituting
$\tilde{k}_{a}$ from (14) into (25), the condition (24) will hold if
$\displaystyle h_{w}$
$\displaystyle\geq\max\limits_{\gamma_{i,0},\ldots,\gamma_{i,n-1}}\frac{2\tau_{0}}{1+\left(1-\frac{1}{\rho}+\frac{1}{\rho}\left(\sum\limits_{j=0}^{n-1}\frac{\gamma_{i,j}}{2^{j}}\right)\right)k_{a}}$
$\displaystyle=\frac{2\tau_{0}}{\min\limits_{\gamma_{i,0},\ldots,\gamma_{i,n-1}}\left(1+\left(1-\frac{1}{\rho}+\frac{1}{\rho}\left(\sum\limits_{j=0}^{n-1}\frac{\gamma_{i,j}}{2^{j}}\right)\right)k_{a}\right)}$
$\displaystyle=\frac{2\tau_{0}}{1+\left(1-\frac{1}{\rho}\right)k_{a}}.$ (26)
In [9], the feasible region for the control gains $k_{p}$ and $k_{v}$ is
specific to one chosen value of $\tilde{k}_{a}\in(0,1)$. However, in this
case, $\tilde{k}_{a}$ could be any value in the interval
$\mathcal{I}:=\left[\left(1-\frac{1}{\rho}\right)k_{a},\left(1+\frac{1}{\rho}\right)k_{a}\right]$
for a given $k_{a}\in\left(0,\frac{1}{1+\frac{1}{\rho}}\right)$. Let
$\mathcal{F}(\tilde{k}_{a})$ denote the set of all $(k_{p},k_{v})$ that ensure
robust string stability for a given value of $\tilde{k}_{a}$. Unlike in [9],
we need to show that the set of all feasible $(k_{p},k_{v})$ that work for any
$\tilde{k}_{a}\in\mathcal{I}$ denoted by $\mathcal{S}$ is non-empty, i.e.,
$\mathcal{S}:=\bigcap_{\tilde{k}_{a}\in\mathcal{I}}\mathcal{F}(\tilde{k}_{a})\neq\emptyset.$
In the remainder of the proof, we will focus on showing that
$\mathcal{S}\neq\emptyset$ and explicitly construct this set for synthesizing
the controller. We do so by considering the robust stability condition,
$\|\tilde{H}(j\omega;\tau)\|_{\infty}^{2}\leq 1\;\forall\tau\in(0,\tau_{0}]$
and employing a time headway satisfying (19).
According to _Definition II.2_, robust string stability is guaranteed when
$\|\tilde{H}(j\omega;\tau)\|_{\infty}^{2}\leq 1$, i.e.,
$|\tilde{\mathcal{N}}(j\omega)|^{2}\leq|\mathcal{D}(j\omega)|^{2},\forall\omega$,
which can be rewritten as
$\displaystyle\tau^{2}\omega^{4}+(1-\tilde{k}_{a}^{2}-2\tau\gamma)\omega^{2}+\gamma^{2}-2k_{p}-k_{v}^{2}+2\tilde{k}_{a}k_{p}\geq
0.$ (27)
The above condition is satisfied if $\forall\tau\in(0,\tau_{0}]$,
$\displaystyle 1-\tilde{k}_{a}^{2}-2\tau\gamma\geq 0,$ (28a)
$\displaystyle\gamma^{2}-2k_{p}-k_{v}^{2}+2\tilde{k}_{a}k_{p}\geq 0,$ (28b)
i.e.,
$\displaystyle\gamma\leq\frac{1-\tilde{k}_{a}^{2}}{2\tau_{0}},$ (29a)
$\displaystyle\gamma\geq\sqrt{2k_{p}(1-\tilde{k}_{a})+k_{v}^{2}}.$ (29b)
Thus, by considering the appropriate lower and upper bounds of $\tilde{k}_{a}$
in the inequalities (29a) and (29b), these inequalities on $\gamma$ can be
satisfied for all $\tilde{k}_{a}\in\mathcal{I}$ if $\gamma$ satisfies
$\displaystyle\gamma\leq\min\limits_{\tilde{k}_{a}\in\mathcal{I}}\frac{1-\tilde{k}_{a}^{2}}{2\tau_{0}}=\frac{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}{2\tau_{0}},$
(30a)
$\displaystyle\gamma\geq\max\limits_{\tilde{k}_{a}\in\mathcal{I}}\sqrt{2k_{p}(1-\tilde{k}_{a})+k_{v}^{2}}$
$\displaystyle\qquad=\sqrt{2k_{p}\left(1-\left(1-\frac{1}{\rho}\right)k_{a}\right)+k_{v}^{2}},$
(30b)
from which the admissible range of $\gamma$ is given by
$\displaystyle\sqrt{2k_{p}\left(1-\left(1-\frac{1}{\rho}\right)k_{a}\right)+k_{v}^{2}}\leq\gamma\leq\frac{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}{2\tau_{0}}.$
(31)
Next, we will show that the set $\mathcal{S}\neq\emptyset$. First, since
$\gamma=k_{v}+h_{w}k_{p}$, the right inequality in (31) can be rewritten as
$\displaystyle
k_{v}+h_{w}k_{p}\leq\frac{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}{2\tau_{0}}.$
(32)
Thus, an admissible set of $k_{p}$ and $k_{v}$ is given by
$\displaystyle\mathcal{S}_{1}:=\left\\{(k_{p},k_{v}):k_{p}>0,k_{v}>0,\frac{k_{v}}{a_{1}}+\frac{k_{p}}{b_{1}}\leq
1\right\\},$ (33)
where
$\displaystyle\begin{cases}a_{1}=\dfrac{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}{2\tau_{0}},\\\
b_{1}=\dfrac{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}{2\tau_{0}h_{w}}=\frac{1}{h_{w}}a_{1}.\end{cases}$
(34)
Second, the left inequality in (31) can be rewritten as
$\displaystyle
k_{v}+h_{w}k_{p}\geq\sqrt{2k_{p}\left(1-\left(1-\frac{1}{\rho}\right)k_{a}\right)+k_{v}^{2}}.$
(35)
Squaring both sides of (35) and simplifying, we obtain
$\displaystyle 2h_{w}k_{v}+h_{w}^{2}k_{p}\geq
2\left(1-\left(1-\frac{1}{\rho}\right)k_{a}\right).$ (36)
Thus, an admissible set of $k_{p}$ and $k_{v}$ is given by
$\displaystyle\mathcal{S}_{2}:=\left\\{(k_{p},k_{v}):k_{p}>0,k_{v}>0,\frac{k_{v}}{a_{2}}+\frac{k_{p}}{b_{2}}\geq
1\right\\},$ (37)
where
$\displaystyle\begin{cases}{}a_{2}=\dfrac{1-\left(1-\frac{1}{\rho}\right)k_{a}}{h_{w}},\\\
b_{2}=\frac{2\left(1-\left(1-\frac{1}{\rho}\right)k_{a}\right)}{h_{w}^{2}}=\frac{2}{h_{w}}a_{2}.\end{cases}$
(38)
Then, combining (33) and (37), the feasible region of $k_{p}$ and $k_{v}$ is
given by the set
$\displaystyle\mathcal{S}=\mathcal{S}_{1}\cap\mathcal{S}_{2}.$ (39)
Note that $\mathcal{S}$ is nonempty if $a_{1}\geq a_{2}$ or $b_{1}\geq b_{2}$.
In particular, notice that
$\frac{a_{1}}{a_{2}}=\frac{h_{w}}{2\tau_{0}}\left(\frac{1-\left(1+\frac{1}{\rho}\right)^{2}k_{a}^{2}}{1-\left(1-\frac{1}{\rho}\right)k_{a}}\right).$
Substituting $h_{w}$ from (19), we have
$\frac{a_{1}}{a_{2}}>1.$
Thus, $\mathcal{S}\neq\emptyset$. This completes the proof of Statement (b) of
_Theorem III.2_. In the following we prove Statement (c) of _Theorem III.2_.
Let $\bar{h}_{w}(k_{a}):=\dfrac{h_{w,lb}(k_{a})}{2\tau_{0}}$,
$m:=1-\frac{1}{\rho}$, and $n:=\left(1+\frac{1}{\rho}\right)^{2}$. Then,
$\displaystyle\bar{h}_{w}(k_{a})=\frac{1-mk_{a}}{1-nk_{a}^{2}}$ (40)
and
$\displaystyle\frac{d\bar{h}_{w}(k_{a})}{dk_{a}}=\frac{-mnk_{a}^{2}+2nk_{a}-m}{(1-nk_{a}^{2})^{2}}.$
(41)
Let $f(k_{a}):=-mnk_{a}^{2}+2nk_{a}-m$. Then, the roots of $f(k_{a})=0$ are
given by
$r_{1,2}:=\frac{n\mp\sqrt{n(n-m^{2})}}{mn}.$
Note that $r_{1}<\frac{1}{1+\frac{1}{\rho}}$ and
$r_{2}>\frac{1}{1+\frac{1}{\rho}}$. An illustration of $f(k_{a})$ vs. $k_{a}$
for $\rho=5$ is provided in Fig. 4.
Figure 4: $f(k_{a})$ vs. $k_{a}$ when $\rho=5$.
Since $k_{a}\in\left(0,\frac{1}{1+\frac{1}{\rho}}\right)$, we need to consider
only $r_{1}$. Hence, we have
$\displaystyle\begin{cases}\dfrac{d\bar{h}_{w}}{dk_{a}}<0,\mbox{when}~{}k_{a}\in(0,r_{1});\\\
\dfrac{d\bar{h}_{w}}{dk_{a}}>0,\mbox{when}~{}k_{a}\in\left(r_{1},\frac{1}{1+\frac{1}{\rho}}\right).\end{cases}$
(42)
Thus,
$\displaystyle\min\limits_{k_{a}\in\left(0,\frac{1}{1+\frac{1}{\rho}}\right)}\bar{h}_{w}(k_{a})=\bar{h}_{w}(r_{1}).$
(43)
Substituting $m$ and $n$ into $r_{1}$, we obtain
$\displaystyle k_{a}^{*}:=r_{1}$
$\displaystyle=\left(\frac{1-\frac{1}{\sqrt{\rho}}}{1+\frac{1}{\sqrt{\rho}}}\right)\left(\frac{1}{1+\frac{1}{\rho}}\right).$
(44)
Further, substituting $r_{1}$ into $\bar{h}_{w}(r_{1})$, we obtain
$\displaystyle\bar{h}_{w}(r_{1})=\frac{\left(1+\frac{1}{\sqrt{\rho}}\right)^{2}}{2\left(1+\frac{1}{\rho}\right)}.$
(45)
Therefore,
$\displaystyle
h_{w,lb}^{*}=2\tau_{0}\bar{h}_{w}(r_{1})=\tau_{0}\frac{\left(1+\frac{1}{\sqrt{\rho}}\right)^{2}}{1+\frac{1}{\rho}}.$
(46)
This completes the proof of Statement (c) of _Theorem III.2_.
Finally, we prove that the lower bound of the time headway given by (21)
satisfies the internal stability condition. For this purpose, note that
$\displaystyle
h_{w}k_{p}\geq\tau_{0}k_{p}\frac{\left(1+\frac{1}{\sqrt{\rho}}\right)^{2}}{\left(1+\frac{1}{\rho}\right)}.$
(47)
Thus, $\gamma=k_{v}+h_{w}k_{p}>h_{w}k_{p}\geq\tau_{0}k_{p}$.
Therefore, this completes the proof of _Theorem III.2_. ∎
###### Remark III.2
If $\rho\to\infty$ (the no signal noise case), then $k_{a}\in(0,1)$ and
$h_{w}\geq\frac{2\tau_{0}}{1+k_{a}}$, as given in [9].
###### Remark III.3
In (19), if $\rho\to\infty$, then the lower bound of the time headway reduces
to $h_{w}\geq h_{w,lb}(k_{a})=\frac{2\tau_{0}}{1+k_{a}}$ as given in [9]. For
the same $k_{a}$ value, the lower bound of the time headway increases as
$\rho$ decreases in the presence of signal noise. In addition, according to
(18), the upper bound of $k_{a}$ is smaller when compared to the no noise
case, which also factors into the increase of the lower bound of the time
headway in the presence of noise. In addition, in (21), if $\rho\to\infty$,
the minimum lower bound of the time headway reduces to
$h^{\ast}_{w,lb}\to\tau_{0}$, and the corresponding optimal $k_{a}$ given by
(20) becomes $k_{a}^{\ast}\to 1$.
## IV Numerical Simulations
In this section, we present a numerical example and simulation results to
corroborate the results in Section III. We consider the following numerical
values for the system parameters: $\tau_{0}=0.5$ s, $d=5$ m, $N=12$. In the
numerical simulation, $\tau$ was chosen as $\tau=\tau_{0}$. We assume that the
lead vehicle experiences an external disturbance which causes a perturbation
on its acceleration, denoted as $a_{0}(t)$, as follows:
$\displaystyle a_{0}(t)=\begin{cases}0.5\sin(0.1(t-10)),10~{}{\rm
s}<t<(10+20\pi)~{}{\rm s},\\\ 0,\mbox{otherwise},\end{cases}$ (48)
under which the performance of the communication and control strategy
considered in Section III will be evaluated.
Assume $\rho=5$; the upper bound of $k_{a}$ can then be computed as
$k_{a}<\frac{1}{1+\frac{1}{\rho}}=0.8333$. Choosing $k_{a}=0.5$ and
substituting the $\tau_{0},\rho,k_{a}$ values in (19) yields $h_{w}\geq 0.9375$ s.
Choosing $h_{w}=0.95$ s, the feasible region of $k_{p}$ and $k_{v}$ is
obtained as shown in Fig. 5, from which we choose $k_{p}=0.009$, $k_{v}=0.63$
for the numerical simulations. With the above chosen values of the time
headway and control gains, the frequency response of
$\left|\tilde{H}(j\omega;\tau_{0})\right|$ is shown in Fig. 6 which
demonstrates string stability of the platoon.
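These numbers follow directly from (18)–(21); the short sketch below (ours, not the authors’ code) reproduces them for $\tau_{0}=0.5$ s and $\rho=5$.

```python
# Reproducing the design bounds of Theorem III.2 for tau0 = 0.5 s, rho = 5.
tau0, rho, k_a = 0.5, 5.0, 0.5

k_a_max = 1 / (1 + 1 / rho)                                         # (18)
h_w_lb = (2 * tau0 * (1 - (1 - 1 / rho) * k_a)
          / (1 - (1 + 1 / rho) ** 2 * k_a ** 2))                    # (19)
k_a_star = ((1 - rho ** -0.5) / (1 + rho ** -0.5)) / (1 + 1 / rho)  # (20)
h_w_lb_star = tau0 * (1 + rho ** -0.5) ** 2 / (1 + 1 / rho)         # (21)

print(round(k_a_max, 4), round(h_w_lb, 4),
      round(k_a_star, 4), round(h_w_lb_star, 4))
# 0.8333 0.9375 0.3183 0.8727 -- matching the values quoted in this section
```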
Figure 5: The feasible region of $k_{p}$ and $k_{v}$ when $k_{a}=0.5$,
$h_{w}=0.95$ s. Figure 6: $|\tilde{H}(j\omega;\tau_{0})|$ when $k_{a}=0.5$,
$k_{p}=0.009$, $k_{v}=0.63$, $h_{w}=0.95$ s.
Suppose we model the noise in (7) with $n=16$ bits, where the
expectations of the binary random variables $z_{i,j}$ are, respectively:
$\gamma_{i,0}=0.8055$, $\gamma_{i,1}=0.5767$, $\gamma_{i,2}=0.1829$,
$\gamma_{i,3}=0.2399$, $\gamma_{i,4}=0.8865$, $\gamma_{i,5}=0.0287$,
$\gamma_{i,6}=0.4899$, $\gamma_{i,7}=0.1679$, $\gamma_{i,8}=0.9787$,
$\gamma_{i,9}=0.7127$, $\gamma_{i,10}=0.5005$, $\gamma_{i,11}=0.4711$,
$\gamma_{i,12}=0.0596$, $\gamma_{i,13}=0.6820$, $\gamma_{i,14}=0.0424$,
$\gamma_{i,15}=0.0714$. The evolutions of the inter-vehicular spacing errors
are shown in Fig. 7.
Figure 7: The inter-vehicular spacing errors of the following vehicles
($k_{a}=0.5$, $k_{p}=0.009$, $k_{v}=0.63$ and $h_{w}=0.95$ s).
In addition, as an example, the noise and the communicated acceleration signal
from vehicle 1 to vehicle 2 are shown in Fig. 8.
Figure 8: The noise and the communicated acceleration signal from vehicle 1 to
vehicle 2 ($k_{a}=0.5$, $k_{p}=0.009$, $k_{v}=0.63$ and $h_{w}=0.95$ s).
For comparison, under the same conditions as above except with $h_{w}=0.65$ s,
which violates the lower bound in (19), the frequency response of
$|\tilde{H}(j\omega;\tau_{0})|$ is shown in
Fig. 9 which exhibits string instability. In addition, the inter-vehicular
spacing errors of the following vehicles are shown in Fig. 10.
Figure 9: $|\tilde{H}(j\omega;\tau_{0})|$ when $k_{a}=0.5$, $k_{p}=0.009$,
$k_{v}=0.63$, $h_{w}=0.65$ s. Figure 10: The inter-vehicular spacing errors
of the following vehicles ($k_{a}=0.5$, $k_{p}=0.009$, $k_{v}=0.63$ and
$h_{w}=0.65$ s).
In addition, we have conducted numerical simulations to evaluate the
performance of the platoon for $k_{a}^{\ast}$ and $h_{w,lb}^{\ast}$ given in
Statement (c) of Theorem III.2. For $\rho=5$, we can obtain
$k_{a}^{\ast}=0.3183$ and $h^{\ast}_{w,lb}=0.8727$. Choosing
$k_{a}=k_{a}^{\ast}$ and $h_{w}=0.88$, the feasible region of $k_{v}$ and
$k_{p}$ is shown in Fig. 11.
Figure 11: The feasible region of $k_{p}$ and $k_{v}$ when
$k_{a}=k_{a}^{\ast}$, $h_{w}=0.88$ s.
Choosing $k_{p}=0.003$, $k_{v}=0.85$ from the feasible region, the evolution
of the inter-vehicular spacing errors is provided in Fig. 12.
Figure 12: The inter-vehicular spacing errors of the following vehicles
($k_{a}=k_{a}^{\ast}$, $k_{p}=0.003$, $k_{v}=0.85$ and $h_{w}=0.88$ s).
Figure 13 compares the platoon length ($x_{0}-x_{N}$) for the two time
headways, $h_{w}=0.95$ s and $h_{w}=0.88$ s; the shorter platoon under the
smaller headway indicates higher traffic throughput, as stated in Remark III.1.
Figure 13: The comparison of the length of the platoon between $h_{w}=0.95$
and $h_{w}=0.88\approx h^{\ast}_{w,lb}$.
The numerical simulation results show that the vehicle platoon is robustly
string stable with the control gains and time headway synthesized according to
the analysis and design procedure provided in Section III.
## V Conclusion
We have investigated robust string stability of connected and autonomous
vehicle platoons with cooperative adaptive cruise control systems subject to
noisy V2V communication. We have derived conditions on the control gains for
predecessor acceleration, relative velocity, and spacing error, together with
a lower bound on the time headway, that depend on the signal-to-noise ratio of
the V2V-communicated acceleration, while ensuring that the CACC system is both
internally stable and string stable. We have provided a systematic analysis
through which one can select the control gains and time headway for a given
SNR, and an illustrative example with corresponding simulation results to
demonstrate the main results. For future work, we will investigate the
extension to noisy communication when information from multiple predecessors
is employed.
## References
* [1] G. Ma, P. R. Pagilla, and S. Darbha, “The effect of signal noise and latency in communicated acceleration on time headway in vehicle platooning,” in _2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2023, pp. 4532–4537.
* [2] A. Matin and H. Dia, “Impacts of connected and automated vehicles on road safety and efficiency: A systematic literature review,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 24, no. 3, pp. 2705–2736, 2023.
* [3] M. Garg and M. Bouroche, “Can connected autonomous vehicles improve mixed traffic safety without compromising efficiency in realistic scenarios?” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 24, no. 6, pp. 6674–6689, 2023.
* [4] D. Swaroop and J. K. Hedrick, “Constant Spacing Strategies for Platooning in Automated Highway Systems,” _Journal of Dynamic Systems, Measurement, and Control_ , vol. 121, no. 3, pp. 462–470, 09 1999.
* [5] J. B. Kenney, “Dedicated short-range communications (DSRC) standards in the United States,” _Proceedings of the IEEE_ , vol. 99, no. 7, pp. 1162–1182, 2011.
* [6] R. Molina-Masegosa and J. Gozalvez, “System level evaluation of lte-v2v mode 4 communications and its distributed scheduling,” in _2017 IEEE 85th Vehicular Technology Conference (VTC Spring)_ , 2017, pp. 1–5.
* [7] M. H. C. Garcia, A. Molina-Galan, M. Boban, J. Gozalvez, B. Coll-Perales, T. Şahin, and A. Kousaridas, “A tutorial on 5G NR V2X communications,” _IEEE Communications Surveys & Tutorials_, vol. 23, no. 3, pp. 1972–2026, 2021.
* [8] A. Vinel, L. Lan, and N. Lyamin, “Vehicle-to-vehicle communication in c-acc/platooning scenarios,” _IEEE Communications Magazine_ , vol. 53, no. 8, pp. 192–197, 2015.
* [9] S. Darbha, S. Konduri, and P. R. Pagilla, “Benefits of V2V communication for autonomous and connected vehicles,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 20, no. 5, pp. 1954–1963, 2018.
* [10] J. Ploeg, D. P. Shukla, N. van de Wouw, and H. Nijmeijer, “Controller synthesis for string stability of vehicle platoons,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 15, no. 2, pp. 854–865, 2014.
* [11] G. F. Silva, A. Donaire, A. McFadyen, and J. J. Ford, “String stable integral control design for vehicle platoons with disturbances,” _Automatica_ , vol. 127, p. 109542, 2021.
* [12] R. Rajamani and C. Zhu, “Semi-autonomous adaptive cruise control systems,” _IEEE Transactions on Vehicular Technology_ , vol. 51, no. 5, pp. 1186–1192, 2002.
* [13] J. Jiang, A. Astolfi, and T. Parisini, “Robust traffic wave damping via shared control,” _Transportation Research Part C: Emerging Technologies_ , vol. 128, p. 103110, 2021.
* [14] W. Scholte, P. Zegelaar, and H. Nijmeijer, “A control strategy for merging a single vehicle into a platoon at highway on-ramps,” _Transportation Research Part C: Emerging Technologies_ , vol. 136, p. 103511, 2022.
* [15] J. Shen, E. K. H. Kammara, and L. Du, “Fully distributed optimization-based cav platooning control under linear vehicle dynamics,” _Transportation Science_ , vol. 56, no. 2, pp. 381–403, 2022.
* [16] S. Darbha, S. Konduri, and P. R. Pagilla, “Vehicle platooning with constant spacing strategies and multiple vehicle look ahead information,” _IET Intelligent Transport Systems_ , vol. 14, no. 6, pp. 589–600, 2020.
* [17] Y. Bian, Y. Zheng, W. Ren, S. E. Li, J. Wang, and K. Li, “Reducing time headway for platooning of connected vehicles via v2v communication,” _Transportation Research Part C: Emerging Technologies_ , vol. 102, pp. 87–105, 2019.
* [18] G. Guo, J. Kang, H. Lei, and D. Li, “Finite-time stabilization of a collection of connected vehicles subject to communication interruptions,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 23, no. 8, pp. 10627–10635, 2022.
* [19] G. J. L. Naus, R. P. A. Vugts, J. Ploeg, M. J. G. van de Molengraft, and M. Steinbuch, “String-stable cacc design and experimental validation: A frequency-domain approach,” _IEEE Transactions on Vehicular Technology_ , vol. 59, no. 9, pp. 4268–4279, 2010.
* [20] J. I. Ge and G. Orosz, “Connected cruise control among human-driven vehicles: Experiment-based parameter estimation and optimal control design,” _Transportation Research Part C: Emerging Technologies_ , vol. 95, pp. 445–459, 2018.
* [21] L. Jin, M. Čičić, K. H. Johansson, and S. Amin, “Analysis and design of vehicle platooning operations on mixed-traffic highways,” _IEEE Transactions on Automatic Control_ , vol. 66, no. 10, pp. 4715–4730, 2021.
* [22] S. Gong and L. Du, “Cooperative platoon control for a mixed traffic flow including human drive vehicles and connected and autonomous vehicles,” _Transportation Research Part B: Methodological_ , vol. 116, pp. 25–61, 2018.
* [23] J. Lan, D. Zhao, and D. Tian, “Data-driven robust predictive control for mixed vehicle platoons using noisy measurement,” _IEEE Transactions on Intelligent Transportation Systems_ , pp. 1–11, 2021.
* [24] J. Ploeg, N. van de Wouw, and H. Nijmeijer, “Lp string stability of cascaded systems: Application to vehicle platooning,” _IEEE Transactions on Control Systems Technology_ , vol. 22, no. 2, pp. 786–793, 2014.
* [25] B. Besselink and K. H. Johansson, “String stability and a delay-based spacing policy for vehicle platoons subject to disturbances,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 9, pp. 4376–4391, 2017.
* [26] Z. Li, B. Hu, M. Li, and G. Luo, “String stability analysis for vehicle platooning under unreliable communication links with event-triggered strategy,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 3, pp. 2152–2164, 2019.
* [27] R. A. Biroon, Z. A. Biron, and P. Pisu, “False data injection attack in a platoon of cacc: Real-time detection and isolation with a pde approach,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 23, no. 7, pp. 8692–8703, 2022.
* [28] V. Vegamoor, S. Rathinam, and S. Darbha, “String stability of connected vehicle platoons under lossy V2V communication,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 23, no. 7, pp. 8834–8845, 2022.
* [29] M. Hu, X. Wang, Y. Bian, D. Cao, and H. Wang, “Disturbance observer-based cooperative control of vehicle platoons subject to mismatched disturbance,” _IEEE Transactions on Intelligent Vehicles_ , vol. 8, no. 4, pp. 2748–2758, 2023.
* [30] Y. Zhu, Y. Li, H. Zhu, W. Hua, G. Huang, S. Yu, S. E. Li, and X. Gao, “Joint sliding-mode controller and observer for vehicle platoon subject to disturbance and acceleration failure of neighboring vehicles,” _IEEE Transactions on Intelligent Vehicles_ , vol. 8, no. 3, pp. 2345–2357, 2023.
# t-DGR: A Trajectory-Based
Deep Generative Replay Method for
Continual Learning in Decision Making
William Yue
The University of Texas at Austin
<EMAIL_ADDRESS>
&Bo Liu
The University of Texas at Austin
<EMAIL_ADDRESS>
&Peter Stone
The University of Texas at Austin, Sony AI
<EMAIL_ADDRESS>
###### Abstract
Deep generative replay has emerged as a promising approach for continual
learning in decision-making tasks. This approach addresses the problem of
catastrophic forgetting by leveraging the generation of trajectories from
previously encountered tasks to augment the current dataset. However, existing
deep generative replay methods for continual learning rely on autoregressive
models, which suffer from compounding errors in the generated trajectories. In
this paper, we propose a simple, scalable, and non-autoregressive method for
continual learning in decision-making tasks using a generative model that
generates task samples conditioned on the trajectory timestep. We evaluate our
method on Continual World benchmarks and find that our approach achieves
state-of-the-art performance on the average success rate metric among
continual learning methods. Code is available at
https://github.com/WilliamYue37/t-DGR.
## 1 Introduction
Continual learning, also known as lifelong learning, is a critical challenge
in the advancement of general artificial intelligence, as it enables models to
learn from a continuous stream of data encompassing various tasks, rather than
having access to all data at once [1]. However, a major challenge in continual
learning is the phenomenon of catastrophic forgetting, where previously
learned skills are lost when attempting to learn new tasks [2].
To mitigate catastrophic forgetting, replay methods have been proposed, which
involve saving data from previous tasks and replaying it to the learner during
the learning of future tasks. This approach mimics how humans actively prevent
forgetting by reviewing material for tests and replaying memories in dreams.
However, storing data from previous tasks requires significant storage space
and becomes computationally infeasible as the number of tasks increases.
In the field of cognitive neuroscience, the Complementary Learning Systems
theory offers insights into how the human brain manages memory. This theory
suggests that the brain employs two complementary learning systems: a fast-
learning episodic system and a slow-learning semantic system [3, 4, 5]. The
hippocampus serves as the episodic system, responsible for storing specific
memories of unique events, while the neocortex functions as the semantic
system, extracting general knowledge from episodic memories and organizing it
into abstract representations [6].
Figure 1: The first row presents a comparison of three generative methods for
imitating an agent’s movement in a continuous 2D plane with Gaussian noise.
The objective is to replicate the ground truth path, which transitions from
darker to lighter colors. The autoregressive method (CRIL) encounters a
challenge at the first sharp turn as nearby points move in opposing
directions. Once the autoregressive method deviates off course, it never
recovers and compromises the remaining trajectory. In contrast, sampling
individual state observations i.i.d. without considering the temporal nature
of trajectories (DGR) leads to a fragmented path with numerous gaps. Our
proposed method t-DGR samples individual state observations conditioned on the
trajectory timestep. By doing so, t-DGR successfully avoids the pitfalls of
CRIL and DGR, ensuring a more accurate replication of the desired trajectory.
The second row illustrates how each method generates trajectory data. CRIL
generates the next state observation conditioned on the previous state
observation. DGR, in contrast, does not attempt to generate a trajectory but
generates individual state observations i.i.d. On the other hand, t-DGR
generates state observations conditioned on the trajectory timestep.
Drawing inspiration from the human brain, deep generative replay (DGR)
addresses the catastrophic forgetting issue in decision-making tasks by using
a generative model as the hippocampus to generate trajectories from past tasks
and replay them to the learner, which acts as the neocortex (Figure 2) [7].
The time-series nature of trajectories in decision-making tasks sets this
setting apart from continual supervised learning, as each timestep of the
trajectory requires sufficient replay.
significantly affected if it performs poorly on a small subset of the data.
However, in decision-making tasks, poor performance on any part of the
trajectory can severely impact the overall performance. Therefore, it is
crucial to generate state-action pairs that accurately represent the
distribution found in trajectories. Furthermore, the high-dimensional
distribution space of trajectories makes it computationally infeasible to
generate complete trajectories all at once.
Existing DGR methods adopt either the generation of individual state
observations i.i.d. without considering the temporal nature of trajectories or
autoregressive trajectory generation. Autoregressive approaches generate the
next state(s) in a trajectory by modeling the conditional probability of the
next state(s) given the previously generated state(s). However, autoregressive
methods suffer from compounding errors in the generated trajectories. On the
other hand, generating individual state observations i.i.d. leads to a higher
sample complexity compared to generating entire trajectories, which becomes
significant when replay time is limited (see Section 2.2).
To address the issues in current DGR methods, we propose a simple, scalable,
and non-autoregressive trajectory-based DGR method. We define a generated
trajectory as temporally coherent if the transitions from one state to the
next appear realistic (refer to Section 3.4 for a formal definition). Given
that current decision-making methods are trained on state-action pairs, we do
not require trajectories to exhibit temporal coherence. Instead, our focus is
on ensuring an equal number of samples generated at each timestep of the
trajectory to accurately represent the distribution found in trajectories. To
achieve equal sample coverage at each timestep, we train our generator to
produce state observations conditioned on the trajectory timestep, and then
sample from the generator conditioned on each timestep of the trajectory. The
intuition behind our method is illustrated in Figure 1.
To evaluate the effectiveness of our proposed method, t-DGR, we conducted
experiments on the Continual World benchmarks CW10 and CW20 [8] using
imitation learning. Our results indicate that t-DGR achieves state-of-the-art
performance in terms of average success rate when compared to other top
continual learning methods.
## 2 Related Work
This section provides an overview of existing continual learning methods
within the context of “General Continual Learning”, with a particular focus on
pseudo-rehearsal methods.
### 2.1 Continual Learning in the Real World
As the field of continual learning continues to grow, there is an increasing
emphasis on developing methods that can be effectively applied in real-world
scenarios [9, 10, 11, 12, 13]. The concept of “General Continual Learning” was
introduced by Buzzega et al. [14] to address certain properties of the real
world that are often overlooked or ignored by existing continual learning
methods. Specifically, two important properties, bounded memory and blurry
task boundaries, are emphasized in this work. Bounded memory refers to the
requirement that the memory footprint of a continual learning method should
remain bounded throughout the entire lifespan of the learning agent. This
property is crucial to ensure practicality and efficiency in real-world
scenarios. Additionally, blurry task boundaries highlight the challenge of
training on tasks that are intertwined, without clear delineation of when one
task ends and another begins. Many existing methods fail to account for this
characteristic, which is common in real-world learning scenarios. While there
are other significant properties associated with continual learning in the
real world, this study focuses on the often-neglected aspects of bounded
memory and blurry task boundaries. By addressing these properties, we aim to
develop methods that are more robust and applicable in practical settings.
### 2.2 Continual Learning Methods
Continual learning methods for decision-making tasks can be categorized into
three main categories.
#### Regularization
Regularization methods in continual learning focus on incorporating
constraints during model training to promote the retention of past knowledge.
One simple approach is to include an $L_{2}$ penalty in the loss function.
Elastic Weight Consolidation (EWC) builds upon this idea by assigning weights
to parameters based on their importance for previous tasks using the Fisher
information matrix [15]. MAS measures the sensitivity of parameter changes on
the model’s output, prioritizing the retention of parameters with a larger
effect [16]. VCL leverages variational inference to minimize the Kullback-
Leibler divergence between the current and prior parameter distributions [17].
Progress and Compress learns new tasks using a separate model and subsequently
distills this knowledge into the main model while safeguarding the previously
acquired knowledge [18]. However, regularization methods may struggle with
blurry task boundaries as they rely on knowledge of task endpoints to apply
regularization techniques effectively. In our experiments, EWC was chosen as
the representative regularization method based on its performance in the
original Continual World experiments [8].
#### Architecture-based Methods
Architecture-based methods aim to maintain distinct sets of parameters for
each task, ensuring that future learning does not interfere with the knowledge
acquired from previous tasks. Packnet [19], UCL [20], and AGS-CL [21] all
safeguard previous task information in a neural network by identifying
important parameters and freeing up less important parameters for future
learning. Identification of important parameters can be done through iterative
pruning (Packnet), parameter uncertainty (UCL), and activation value (AGS-CL).
However, a drawback of parameter isolation methods is that each task requires
its own set of parameters, which may eventually exhaust the available
parameters for new tasks and necessitate a dynamically expanding network
without bounded memory [22]. Additionally, parameter isolation methods require
training on a single task at a time to prune and isolate parameters,
preventing concurrent learning from multiple interwoven tasks. In our
experiments, PackNet was selected as the representative architecture-based
method based on its performance in the original Continual World experiments
[8].
#### Pseudo-rehearsal Methods
Pseudo-rehearsal methods mitigate the forgetting of previous tasks by
generating synthetic samples from past tasks and replaying them to the
learner. Deep generative replay (DGR) (Figure 2) utilizes a generative model,
such as generative adversarial networks [23], variational autoencoders [24],
or diffusion models [25, 26], to generate the synthetic samples. Originally,
deep generative replay was proposed to address continual supervised learning
problems, where the generator only needed to generate single data point
samples [7]. However, in decision-making tasks, expert demonstrations consist
of trajectories (time-series) with a significantly higher-dimensional
distribution space.
One existing DGR method generates individual state observations i.i.d. instead
of entire trajectories. However, this approach leads to a higher sample
complexity compared to generating entire trajectories. The sample complexity
of generating enough individual state observations i.i.d. to cover every
portion of the trajectory $m$ times can be described using the Double Dixie
Cup problem [27]. For trajectories of length $n$, it takes an average of
$\Theta(n\log n+mn\log\log n)$ i.i.d. samples to ensure at least $m$ samples
for each timestep. In scenarios with limited replay time (small $m$) and long
trajectories (large $n$) the sample complexity can be approximated as
$\Theta(n\log n)$ using the Coupon Collector’s problem [28]. The additional
$\Theta(\log n)$ factor reduces the likelihood of achieving complete sample
coverage of the trajectory when the number of replays or replay time is
limited, especially considering the computationally expensive nature of
current generative methods. Furthermore, there is a risk that the generator
assigns different probabilities to each timestep of the trajectory, leading to
a selective focus on certain timesteps rather than equal representation across
the trajectory.
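This gap can be checked empirically. The sketch below simulates the Double Dixie Cup process (i.i.d. uniform draws over $n$ timesteps until each is covered $m$ times) and compares it with the $mn$ draws that timestep-conditioned generation needs; the trajectory length and coverage values are illustrative, not taken from the experiments.

```python
import numpy as np

def iid_draws_to_cover(n, m, rng):
    """Draws of a uniform timestep in {0,...,n-1} until every timestep has
    appeared at least m times (the Double Dixie Cup process)."""
    counts = np.zeros(n, dtype=int)
    draws = 0
    while counts.min() < m:
        counts[rng.integers(n)] += 1
        draws += 1
    return draws

rng = np.random.default_rng(0)
n, m = 200, 3                        # illustrative trajectory length / coverage
trials = [iid_draws_to_cover(n, m, rng) for _ in range(50)]
print("i.i.d. sampling (DGR-style):", int(np.mean(trials)), "draws on average")
print("timestep-conditioned:       ", m * n, "draws (exactly m per timestep)")
```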
Another existing DGR method is autoregressive trajectory generation. In the
existing autoregressive method, CRIL, a generator is used to generate samples
of the initial state, and a dynamics model predicts the next state based on
the current state and action [29]. However, even with a dynamics model
accuracy of 99% and a 1% probability of deviating from the desired trajectory,
the probability of an autoregressively generated trajectory going off course
is $1-0.99^{n}$, where $n$ denotes the trajectory length. With a trajectory
length of $n=200$ (as used in our experiments), the probability of an
autoregressively generated trajectory going off course is $1-0.99^{200}=0.87$.
This example demonstrates how the issue of compounding error leads to a high
probability of failure, even with a highly accurate dynamics model.
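This compounding-error arithmetic is easy to verify directly:

```python
# Probability that an autoregressive rollout of length n stays on course when
# each step independently succeeds with probability p = 0.99.
p, n = 0.99, 200
print(f"failure probability: {1 - p ** n:.2f}")   # 0.87
```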
In our experiments, t-DGR is evaluated against all existing trajectory
generation methods in pseudo-rehearsal approaches to assess how well t-DGR
addresses the limitations of those methods.
Figure 2: The deep generative replay paradigm. The algorithm learns to
generate trajectories from past tasks to augment real trajectories from the
current task in order to mitigate catastrophic forgetting. Both the generator
and policy model are updated with this augmented dataset.
## 3 Background
This section introduces notation and the formulation of the continual
imitation learning problem that we use in this paper. Additionally, we provide
a concise overview of diffusion probabilistic models used in our generative
model implementation.
### 3.1 Imitation Learning
Imitation learning algorithms aim to learn a policy $\pi_{\theta}$
parameterized by $\theta$ by imitating a set of expert demonstrations
$D=\{\tau_{i}\}_{i=1\ldots M}$. Each trajectory $\tau_{i}$ consists of a
sequence of state-action pairs $\{(s_{j},a_{j})\}_{j=1\ldots|\tau_{i}|}$,
where $|\tau_{i}|$ is the length of the trajectory. Each trajectory comes from
a task $\mathcal{T}$ which is a Markov decision process that can be
represented as a tuple $\langle S,A,T,\rho_{0}\rangle$ with state space $S$,
action space $A$, transition dynamics $T:S\times A\times S\to[0,1]$, and
initial state distribution $\rho_{0}$. Various algorithms exist for imitation
learning, including behavioral cloning, GAIL [30], and inverse reinforcement
learning [31]. In this work, we use behavioral cloning where the objective can
be formulated as minimizing the loss function:
$\mathcal{L}(\theta)=\mathbb{E}_{s,a\sim
D}\bigg{[}\big{\|}\pi_{\theta}(s)-a\big{\|}^{2}_{2}\bigg{]}$ (1)
where the state and action spaces are continuous.
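As a concrete illustration, a minimal behavioral-cloning update for the loss in Equation 1 might look as follows. The layer sizes match the learner architecture in Appendix B; reading the 49-dimensional input as a 39-dimensional proprioceptive observation plus a 10-dimensional one-hot task vector is our interpretation of Section 5.1, and the batch is a random placeholder.

```python
import torch
import torch.nn as nn

# Layer sizes match the learner in Appendix B (49 -> 512 x 3 -> 4; 815,620
# parameters in total).
policy = nn.Sequential(
    nn.Linear(49, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 4),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)   # Appendix A settings

def bc_step(states, actions):
    """One gradient step on Equation 1: E ||pi_theta(s) - a||_2^2."""
    loss = ((policy(states) - actions) ** 2).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(bc_step(torch.randn(32, 49), torch.randn(32, 4)))  # placeholder batch
```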
### 3.2 Continual Imitation Learning
In the basic formulation most common in the field today, continual imitation
learning involves sequentially solving multiple tasks
$\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{N}$. When solving for
task $\mathcal{T}_{i}$, the learner only gets data from task $\mathcal{T}_{i}$
and can not access data for any other task. In a more general scenario,
certain tasks may have overlapping boundaries, allowing the learner to
encounter training data from multiple tasks during certain phases of training.
The learner receives a continuous stream of training data in the form of
trajectories $\tau_{1},\tau_{2},\tau_{3},\ldots$ from the environment, where
each trajectory $\tau$ corresponds to one of the $N$ tasks. However, the
learner can only access a limited contiguous portion of this stream at any
given time.
Let $s_{i}$ be the success rate of task $\mathcal{T}_{i}$ after training on
all $N$ tasks. The continual imitation learning objective is defined as
maximizing the average success rate over all tasks:
$S=\frac{1}{N}\sum_{i=1}^{N}s_{i}$ (2)
The primary issue that arises from the continual learning problem formulation
is the problem of catastrophic forgetting where previously learned skills are
forgotten when training on a new task.
### 3.3 Diffusion Probabilistic Models
Diffusion probabilistic models [25, 26] generate data through a learned
reverse denoising diffusion process $p_{\theta}(x_{t-1}\mid x_{t})$. The
forward diffusion process $q(x_{t}\mid x_{t-1})$ gradually adds Gaussian noise
to an input $x_{0}$ at each time step $t$, ultimately resulting in pure noise
$x_{T}$ at $t=T$. The forward diffusion process is defined as:
$q(x_{t}\mid
x_{t-1})=\mathcal{N}\left(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{I}\right)$
(3)
where $0<\beta_{t}<1$ is defined by a known variance schedule. In our
implementation, we adopted the cosine schedule proposed by Nichol et al. [32].
For a sufficiently large time horizon $T$ and a well-behaved variance
schedule, $x_{T}$ approximates an isotropic Gaussian distribution. If we had
the reverse diffusion process $p(x_{t-1}\mid x_{t})$, we could sample
$x_{T}\sim\mathcal{N}(0,\mathbf{I})$ and obtain a sample from $q(x_{0})$ by
denoising $x_{T}$ with $p(x_{t-1}\mid x_{t})$. However, computing
$p(x_{t-1}\mid x_{t})$ is intractable as it necessitates knowledge of the
distribution of all possible $x_{t}$. Instead, we approximate $p(x_{t-1}\mid
x_{t})$ using a neural network:
$p_{\theta}(x_{t-1}\mid
x_{t})=\mathcal{N}\left(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma(x_{t},t)\right)$
(4)
Since $q$ and $p_{\theta}$ can be viewed as a variational auto-encoder [24],
we can use the variational lower bound to minimize the negative log-likelihood
of the reverse process. We can express $\mu_{\theta}(x_{t},t)$ from Equation 4
as:
$\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t)\right)$
(5)
where $\alpha_{t}=1-\beta_{t}$ and
$\overline{\alpha}_{t}=\prod_{s=0}^{t}\alpha_{s}$. The training loss can then
be defined as:
$\mathcal{L}(\theta)=\mathbb{E}_{x_{0},t,\epsilon}\left[\lVert\epsilon-\epsilon_{\theta}(x_{t},t)\rVert^{2}\right]$
(6)
Note that the timesteps $t$ in the diffusion process differ from the
trajectory timesteps $t$. Henceforth, we will refer only to the trajectory
timesteps $t$.
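For concreteness, a minimal sketch of this training objective is shown below: it builds the cosine schedule of [32], samples $x_{t}$ through the standard closed form $x_{t}=\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}\epsilon$, and regresses the injected noise as in Equation 6. Here `eps_model` is a stand-in for the U-net $\epsilon_{\theta}$, and all dimensions are placeholders.

```python
import math
import torch

T = 1000                       # diffusion timesteps (Appendix A)
s = 0.008                      # cosine-schedule offset from [32]
t_grid = torch.arange(T + 1)
f = torch.cos(((t_grid / T + s) / (1 + s)) * math.pi / 2) ** 2
alpha_bar = (f / f[0])[1:]     # cumulative products of alpha_t, t = 1..T
# Variance schedule beta_t of Eq. (3), shown for completeness.
beta = (1 - alpha_bar / torch.cat([torch.ones(1), alpha_bar[:-1]])).clamp(max=0.999)

def diffusion_loss(eps_model, x0):
    """Simplified DDPM objective of Equation 6: predict the injected noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # closed-form forward process
    return ((eps - eps_model(x_t, t)) ** 2).mean()

eps_model = lambda x, t: torch.zeros_like(x)   # stand-in for the U-net eps_theta
print(diffusion_loss(eps_model, torch.randn(32, 39)))
```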
### 3.4 Notation
Deep generative replay involves training two models: a generator $G_{\gamma}$
parameterized by $\gamma$ and a learner $\pi_{\theta}$ parameterized by
$\theta$. We define $G_{\gamma}^{(i)}$ as the generator trained on tasks
$\mathcal{T}_{1}\ldots\mathcal{T}_{i}$ and capable of generating data samples
from tasks $\mathcal{T}_{1}\ldots\mathcal{T}_{i}$. Similarly,
$\pi_{\theta}^{(i)}$ represents the learner trained on tasks
$\mathcal{T}_{1}\ldots\mathcal{T}_{i}$ and able to solve tasks
$\mathcal{T}_{1}\ldots\mathcal{T}_{i}$.
A sequence of state observations $(s_{1},s_{2},\ldots,s_{n-1},s_{n})$ is
temporally coherent if $\forall 1\leq i<n,\exists a\in
A:T(s_{i},a,s_{i+1})>\varepsilon$, where $0<\varepsilon<1$ is a small constant
representing a threshold for negligible probabilities.
## 4 Method
Our proposed method, t-DGR, tackles the challenge of generating long
trajectories by training a generator, denoted as $G_{\gamma}(j)$, which is
conditioned on the trajectory timestep $j$ to generate state observations.
Pseudocode for t-DGR is provided as Algorithm 1. The algorithm begins by
initializing the task index, replay ratio, generator model, learner model, and
learning rates (Line 1). The replay ratio, denoted as $0\leq r<1$, determines
the percentage of training samples seen by the learner that are generated.
Upon receiving training data from the environment, t-DGR calculates the number
of trajectories to generate based on the replay ratio $r$ (Lines 4-5). The
variable $L$ (Line 7) represents the maximum length of trajectories observed
so far.
To generate a trajectory $\tau$ of length $L$, t-DGR iterates over each
timestep $1\leq j\leq L$ (Line 9). At each timestep, t-DGR generates the
$j$-th state observation of the trajectory using the previous generator
$G_{\gamma}^{(t-1)}$ conditioned on timestep $j$ (Line 10), and then labels it
with an action using the previous policy $\pi_{\theta}^{(t-1)}$ (Line 11).
After generating all timesteps in the trajectory $\tau$, t-DGR adds it to the
existing training dataset (Line 14). Note that the generated state
observations within a trajectory do not have temporal coherence, as each state
observation is generated independently of other timesteps. This approach is
acceptable since our learner is trained on state-action pairs rather than full
trajectories. However, unlike generating state observations i.i.d., our method
ensures equal coverage of every timestep during the generative process,
significantly reducing sample complexity.
Once t-DGR has augmented the training samples from the environment with our
generated training samples, t-DGR employs backpropagation to update both the
generator and learner using the augmented dataset (Lines 16-18). The t-DGR
algorithm continues this process of generative replay throughout the agent’s
lifetime, which can be infinite (Line 2). Although we perform the generative
process of t-DGR at task boundaries for ease of understanding, no part of
t-DGR is dependent on clear task boundaries.
#### Architecture
We employ a U-net [33] trained on the loss specified in Equation 6 to
implement the generative diffusion model $G_{\gamma}$. Since we utilize
proprioceptive observations in our experiments, $\pi_{\theta}$ is implemented
with a multi-layer perceptron trained on the loss specified in Equation 1.
Algorithm 1 Trajectory-based Deep Generative Replay (t-DGR)
1:Initialize task index $t=0$, replay ratio $r$, generator $G^{(0)}_{\gamma}$,
learner $\pi^{(0)}_{\theta}$, and learning rates
$\lambda_{\gamma},\lambda_{\theta}$.
2:while new task available do
3: $t\leftarrow t+1$
4: Initialize dataset $D$ with trajectories from task $t$.
5: $n\leftarrow\frac{r*|D|}{1-r}$ $\triangleright$ number of trajectories to
generate
6: for $i=1$ to $n$ do
7: $L\leftarrow$ maximum trajectory length
8: $\tau\leftarrow\emptyset$ $\triangleright$ initialize trajectory of length
$L$
9: for $j=1$ to $L$ do
10: $S\leftarrow G_{\gamma}^{(t-1)}(j)$ $\triangleright$ generate states
11: $A\leftarrow\pi_{\theta}^{(t-1)}(S)$ $\triangleright$ label with actions
12: $\tau_{j}\leftarrow(S,A)$ $\triangleright$ add to trajectory
13: end for
14: $D\leftarrow D\cup\tau$ $\triangleright$ add generated trajectory to $D$
15: end for
16: Update generator and learner using $D$
17:
$\gamma^{(t)}\leftarrow\gamma^{(t-1)}-\lambda_{\gamma}\nabla_{\gamma}\mathcal{L}_{G^{(t-1)}}(\gamma^{(t-1)})$
18:
$\theta^{(t)}\leftarrow\theta^{(t-1)}-\lambda_{\theta}\nabla_{\theta}\mathcal{L}_{\pi^{(t-1)}}(\theta^{(t-1)})$
19:end while
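The replay step of Algorithm 1 (Lines 4-15) can be sketched in a few lines of Python; here `gen_prev` and `policy_prev` are stand-ins for the previous generator $G_{\gamma}^{(t-1)}$ and policy $\pi_{\theta}^{(t-1)}$, and the toy dataset dimensions are placeholders.

```python
import torch

def tdgr_generate(gen_prev, policy_prev, D, r, L):
    """Augment dataset D (a list of trajectories, each a list of (s, a) pairs)
    with generated trajectories so that a fraction r of samples is replayed."""
    n = round(r * len(D) / (1 - r))        # Line 5: trajectories to generate
    for _ in range(n):
        tau = []
        for j in range(1, L + 1):          # Lines 9-13
            s = gen_prev(j)                # states conditioned on timestep j
            a = policy_prev(s)             # label with actions
            tau.append((s, a))
        D.append(tau)                      # Line 14
    return D

# Toy usage with stand-in models and placeholder dimensions.
D = [[(torch.zeros(39), torch.zeros(4))] * 5 for _ in range(10)]
D = tdgr_generate(lambda j: torch.zeros(39), lambda s: torch.zeros(4), D, r=0.9, L=5)
print(len(D))   # 100: 10 real trajectories + 90 generated at r = 0.9
```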
## 5 Experiments
In this section, we outline the experimental setup and performance metrics
employed to compare t-DGR with representative methods, followed by an analysis
of experimental results across different benchmarks and performance metrics.
### 5.1 Experimental Setup
We evaluate our method on the Continual World benchmarks CW10 and CW20 [8],
along with our own “General Continual Learning” variant of CW10 called GCL10.
CW10 consists of a sequence of 10 Meta-World [34] tasks, where each task
involves a Sawyer arm manipulating one or two objects in the Mujoco physics
simulator. For computational efficiency, we provide the agents with
proprioceptive observations. Notably, the observation and action spaces are
continuous and remain consistent across all tasks. CW20 is an extension of
CW10 with the tasks repeated twice. To our knowledge, Continual World is the
only standard continual learning benchmark for decision-making tasks. GCL10
gives data to the learner in 10 sequential buckets $B_{1},\ldots,B_{10}$. Data
from task $\mathcal{T}_{i}$ from CW10 is split evenly between buckets
$B_{i-1}$, $B_{i}$, and $B_{i+1}$, except for the first and last task. Task
$\mathcal{T}_{1}$ is evenly split between buckets $B_{0}$ and $B_{1}$, and
task $\mathcal{T}_{10}$ is evenly split between buckets $B_{9}$ and $B_{10}$.
In order to ensure bounded memory usage, we adopt a one-hot vector approach to
condition the model on the task, rather than maintaining a separate final
neural network layer for each individual task. Additionally, we do not allow
separate biases for each task, as originally done in EWC [15]. Expert
demonstrations for training are acquired by gathering 100 trajectories per
task using hand-designed policies from Meta-World, with each trajectory
limited to a maximum of 200 steps. Importantly, the learner model remains
consistent across different methods and benchmark evaluations. Moreover, we
maintain a consistent replay ratio of $r=0.9$ across all pseudo-rehearsal
methods.
We estimated the success rate $S$ of a model by running each task 100 times.
The metrics for each method were computed using 5 seeds to create a 90%
confidence interval. Further experimental details, such as hyperparameters,
model architecture, random seeds, and computational resources, are included in
the appendix. This standardization enables a fair and comprehensive comparison
of our proposed approach with other existing methods.
### 5.2 Metrics
We evaluate our models using three metrics proposed by the Continual World
benchmark [8], with the average success rate being the primary metric.
Although the forward transfer and forgetting metrics are not well-defined in a
“General Continual Learning” setting, they are informative within the context
of Continual World benchmarks. As a reminder from Section 3.2, let $N$ denote
the number of tasks, and $s_{i}$ represent the success rate of the learner on
task $\mathcal{T}_{i}$. Additionally, let $s_{i}(t)$ denote the success rate
of the learner on task $\mathcal{T}_{i}$ after training on tasks
$\mathcal{T}_{1}$ to $\mathcal{T}_{t}$.
#### Average Success Rate
The average success rate, as given by Equation 2, serves as the primary
evaluation metric for continual learning methods.
#### Average Forward Transfer
We introduce a slightly modified metric for forward transfer that applies to a
broader range of continual learning problems beyond just continual
reinforcement learning in the Continual World benchmark. Let
$s_{i}^{\mathrm{ref}}$ represent the reference performance of a single-task
experiment on task $\mathcal{T}_{i}$. The forward transfer metric $FT_{i}$ is
computed as follows:
$FT_{i}=\frac{D_{i}-D_{i}^{\mathrm{ref}}}{1-D_{i}^{\mathrm{ref}}},\qquad D_{i}=\frac{s_{i}(i)+s_{i}(i-1)}{2},\qquad D_{i}^{\mathrm{ref}}=\frac{s_{i}^{\mathrm{ref}}}{2}$
The expressions for $D_{i}$ and $D^{\mathrm{ref}}_{i}$ serve as approximations
of the integral of task $\mathcal{T}_{i}$ performance with respect to the
training duration for task $\mathcal{T}_{i}$. The average forward transfer
$FT$ is then defined as the mean forward transfer over all tasks, calculated
as $FT=\frac{1}{N}\sum_{i=1}^{N}FT_{i}$.
#### Average Forgetting
We measure forgetting using the metric $F_{i}$, which represents the amount of
forgetting for task $i$ after all training has concluded. $F_{i}$ is defined
as the difference between the success rate on task $\mathcal{T}_{i}$
immediately after training and the success rate on task $\mathcal{T}_{i}$ at
the end of training.
$F_{i}=s_{i}(i)-s_{i}(N)$
The average forgetting $F$ is then computed as the mean forgetting over all
tasks, given by $F=\frac{1}{N}\sum_{i=1}^{N}F_{i}$.
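Concretely, the three metrics can be computed from a success-rate matrix as in the sketch below; taking $s_{i}(0)$, the success on task $\mathcal{T}_{i}$ before any training, to be 0 is an assumption of the sketch, as is the toy data.

```python
import numpy as np

def continual_metrics(S, s_ref):
    """S[i, t] = s_{i+1}(t+1): success on task i+1 after training through task
    t+1 (0-indexed). s_ref[i] is the single-task reference performance."""
    N = S.shape[0]
    final = S[:, -1]
    avg_success = final.mean()                            # Equation 2
    prev = np.array([S[i, i - 1] if i > 0 else 0.0        # s_i(i-1); s_1(0) = 0
                     for i in range(N)])
    D = (np.diag(S) + prev) / 2
    D_ref = s_ref / 2
    ft = ((D - D_ref) / (1 - D_ref)).mean()               # average forward transfer
    forgetting = (np.diag(S) - final).mean()              # average forgetting
    return avg_success, ft, forgetting

S = np.array([[0.9, 0.8], [0.0, 0.95]])                   # toy 2-task example
print(continual_metrics(S, s_ref=np.array([0.9, 0.9])))
```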
### 5.3 Baselines
We compare the following methods on the Continual World benchmark using
average success rate as the primary evaluation metric. Representative methods
were chosen based on their success in the original Continual World
experiments, while DGR-based methods were selected to evaluate whether t-DGR
addresses the limitations of existing pseudo-rehearsal methods.
* •
Finetune: The policy is trained only on data from the current task.
* •
Multitask: The policy is trained on data from all tasks simultaneously.
* •
oEWC [18]: A variation of EWC known as online Elastic Weight Consolidation
(oEWC) bounds the memory of EWC by employing a single penalty term for the
previous model instead of individual penalty terms for each task. This
baseline is the representative regularization-based method.
* •
PackNet [19]: This baseline is the representative parameter isolation method.
Packnet safeguards previous task information in a neural network by
iteratively pruning, freezing, and retraining parts of the network.
* •
DGR [7]: This baseline is a deep generative replay method that only generates
individual state observations i.i.d. and not entire trajectories.
* •
CRIL [29]: This baseline is a deep generative replay method that trains a
policy along with a start state generator and a dynamics model that predicts
the next state given the current state and action. Trajectories are generated
by using the dynamics model and policy to autoregressively generate next
states from a start state.
* •
t-DGR: Our proposed method.
Due to the inability of oEWC and PackNet to handle blurry task boundaries, we
made several adjustments for CW20 and GCL10. Since PackNet cannot continue
training parameters for a task once they have been fixed, we treated the
second repetition of tasks in CW20 as distinct from the first iteration,
resulting in PackNet being evaluated with $N=20$, while the other methods were
evaluated with $N=10$. As for GCL10 and its blurry task boundaries, the best
approach we could adopt with oEWC and PackNet was to apply their
regularization techniques at regular training intervals rather than strictly
at task boundaries. During evaluation, all tasks were assessed using the last
fixed set of parameters in the case of PackNet.
(a) CW10
Method | Success Rate $\uparrow$ | FT $\uparrow$ | Forgetting $\downarrow$
---|---|---|---
Finetune | 16.4 $\pm$6.4 | -3.0 $\pm$6.0 | 78.8 $\pm$7.6
Multitask | 97.0 $\pm$1.0 | N/A | N/A
oEWC | 18.6 $\pm$5.3 | -6.3 $\pm$5.7 | 74.1 $\pm$6.1
PackNet | 81.4 $\pm$3.7 | -14.8 $\pm$7.8 | -0.1 $\pm$1.2
DGR | 75.0 $\pm$5.8 | -4.3 $\pm$5.1 | 17.8 $\pm$4.1
CRIL | 28.4 $\pm$10.6 | -1.1 $\pm$2.8 | 68.6 $\pm$10.4
t-DGR | 81.9 $\pm$3.3 | -0.3 $\pm$4.9 | 14.4 $\pm$2.5
(b) GCL10
Method | Success Rate $\uparrow$
---|---
Finetune | 21.7 $\pm$2.6
Multitask | 97.0 $\pm$1.0
oEWC | 21.8 $\pm$1.7
PackNet | 26.9 $\pm$5.6
DGR | 75.3 $\pm$4.4
CRIL | 53.5 $\pm$5.5
t-DGR | 81.7 $\pm$4.0
(c) CW20
Method | Success Rate $\uparrow$ | FT $\uparrow$ | Forgetting $\downarrow$
---|---|---|---
Finetune | 14.2 $\pm$4.0 | -0.5 $\pm$3.0 | 82.2 $\pm$5.6
Multitask | 97.0 $\pm$1.0 | N/A | N/A
oEWC | 19.4 $\pm$5.3 | -2.8 $\pm$4.1 | 75.2 $\pm$7.5
PackNet | 74.1 $\pm$4.1 | -20.4 $\pm$3.4 | -0.2 $\pm$0.9
DGR | 74.1 $\pm$4.1 | 18.9 $\pm$2.9 | 23.3 $\pm$3.3
CRIL | 50.8 $\pm$4.4 | 4.4 $\pm$4.9 | 46.1 $\pm$5.4
t-DGR | 83.9 $\pm$3.0 | 30.6 $\pm$4.5 | 14.6 $\pm$2.9
(d) Replay Ratio
Ratio | t-DGR | DGR
---|---|---
0.5 | 63.2 $\pm$2.6 | 52.8 $\pm$2.9
0.6 | 66.3 $\pm$4.4 | 56.9 $\pm$4.5
0.7 | 70.8 $\pm$4.1 | 62.5 $\pm$3.6
0.8 | 75.0 $\pm$6.9 | 69.2 $\pm$4.9
0.9 | 81.9 $\pm$3.3 | 75.0 $\pm$5.8
Table 1: Tables (a), (b), and (c) present the results for Continual World 10,
General Continual Learning 10, and Continual World 20, respectively. The
tables display the average success rate, forward transfer, and forgetting (if
applicable) with 90% confidence intervals using 5 random seeds. An up arrow
indicates that higher values are better and a down arrow indicates that
smaller values are better. Table (d) compares the impact of replay amount on
the average success rate of t-DGR and DGR on CW10 with 90% confidence
intervals obtained using 5 random seeds. The best results are highlighted in
bold.
### 5.4 Discussion
t-DGR emerges as the leading method, demonstrating the highest success rate on
CW10 (Table 1(a)), CW20 (Table 1(c)), and GCL10 (Table 1(b)). Notably,
PackNet’s performance on the second iteration of tasks in CW20 diminishes,
highlighting its limited capacity for continually accommodating new tasks.
This limitation underscores the fact that PackNet falls short of being a true
lifelong learner, as it necessitates prior knowledge of the task count for
appropriate parameter capacity allocation. On the contrary, pseudo-rehearsal
methods, such as t-DGR, exhibit improved performance with the second iteration
of tasks in CW20 due to an increased replay time. These findings emphasize the
ability of DGR methods to effectively leverage past knowledge, as evidenced by
their superior forward transfer in both CW10 and CW20.
GCL10 (Table 1(b)) demonstrates that pseudo-rehearsal methods are mostly
unaffected by blurry task boundaries, whereas PackNet’s success rate
experiences a significant drop-off. This discrepancy arises from the fact that
PackNet’s regularization technique does not work effectively with less clearly
defined task boundaries.
Additionally, the diminishing performance gap between DGR and t-DGR as the
replay ratio increases in Table 1(d) indicates that a higher replay ratio
reduces the likelihood of any portion of the trajectory being insufficiently
covered when sampling individual state observations i.i.d., thereby
contributing to improved performance. This trend supports the theoretical
sample complexity of DGR derived in Section 2.2, as $\Theta(n\log n+mn\log\log
n)$ closely approximates the sample complexity of t-DGR, $\Theta(mn)$, when
the replay amount $m\to\infty$. However, while DGR can achieve comparable
performance to t-DGR with a high replay ratio, the availability of extensive
replay time is often limited in many real-world applications.
Overall, t-DGR exhibits promising results, outperforming other methods in
terms of success rate in all evaluations. Notably, t-DGR achieves a
statistically significant improvement over existing pseudo-rehearsal methods
on CW20 (Welch t-test, $\text{p-value}=0.005$). Its ability to handle blurry
task boundaries, leverage past knowledge, and make the most of replay
opportunities positions it as a state-of-the-art method for continual lifelong
learning in decision-making.
## 6 Conclusion
In conclusion, we have introduced t-DGR, a novel method for continual learning
in decision-making tasks, which has demonstrated state-of-the-art performance
on the Continual World benchmarks. Our approach stands out due to its
simplicity, scalability, and non-autoregressive nature, positioning it as a
solid foundation for future research in this domain.
Importantly, t-DGR aligns with the concept of “General Continual Learning” by
taking into account essential properties of the real world, including bounded
memory and blurry task boundaries. These considerations ensure that our method
remains applicable and effective in real-world scenarios, enabling its
potential integration into practical applications.
Looking ahead, one potential avenue for future research is the refinement of
the replay mechanism employed in t-DGR. Rather than assigning equal weight to
all past trajectories, a more selective approach could be explored. By
prioritizing certain memories over others and strategically determining when
to replay memories to the learner, akin to human learning processes, we could
potentially enhance the performance and adaptability of our method.
## Acknowledgements
This work has taken place in the Learning Agents Research Group (LARG) at the
Artificial Intelligence Laboratory, The University of Texas at Austin. LARG
research is supported in part by the National Science Foundation
(FAIN-2019844, NRT-2125858), the Office of Naval Research (N00014-18-2243),
Army Research Office (E2061621), Bosch, Lockheed Martin, and Good Systems, a
research grand challenge at the University of Texas at Austin. The views and
conclusions contained in this document are those of the authors alone. Peter
Stone serves as the Executive Director of Sony AI America and receives
financial compensation for this work. The terms of this arrangement have been
reviewed and approved by the University of Texas at Austin in accordance with
its policy on objectivity in research.
## References
* [1] Mark Ring. Continual Learning in Reinforcement Environments. PhD thesis, University of Texas at Austin, 1994.
* [2] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. volume 24 of Psychology of Learning and Motivation, pages 109–165. Academic Press, 1989.
* [3] James L. McClelland, Bruce L. McNaughton, and Randall C. O’Reilly. Complementary learning systems within the hippocampus: A neural network modeling approach to understanding episodic memory consolidation. Psychological Review, 102(3):419–457, 1995.
* [4] Dharshan Kumaran, Demis Hassabis, and James L. McClelland. What learning systems do intelligent agents need? complementary learning systems theory updated. Trends in Cognitive Sciences, 20(7):512–534, 2016.
* [5] James L. McClelland, Bruce L. McNaughton, and Randall C. O’Reilly. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419–457, 1995.
* [6] Randall C. O’Reilly and Kenneth A. Norman. Hippocampal and neocortical contributions to memory: Advances in the complementary learning systems framework. Trends in Cognitive Sciences, 6(12):505–510, December 2002.
* [7] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay, 2017.
* [8] Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, and Piotr Miłoś. Continual world: A robotic benchmark for continual reinforcement learning, 2021.
* [9] Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application, 2023.
* [10] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
* [11] Jihwan Bang, Heesu Kim, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi. Rainbow memory: Continual learning with a memory of diverse samples, 2021.
* [12] Yen-Chang Hsu, Yen-Cheng Liu, Anita Ramasamy, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines, 2019.
* [13] Gido M. van de Ven and Andreas S. Tolias. Three scenarios for continual learning, 2019.
* [14] Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline, 2020.
* [15] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, mar 2017.
* [16] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget, 2018.
* [17] Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning, 2018.
* [18] Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning, 2018.
* [19] Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning, 2018.
* [20] Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Taesup Moon. Uncertainty-based continual learning with adaptive regularization, 2019.
* [21] Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. Continual learning with node-importance based adaptive group sparse regularization, 2021.
* [22] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks, 2018.
* [23] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.
* [24] Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2022.
* [25] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models, 2020.
* [26] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585, 2015.
* [27] Donald J. Newman. The double dixie cup problem. The American Mathematical Monthly, 67(1):58–61, 1960.
* [28] Amy N. Myers and Herbert S. Wilf. Some new aspects of the coupon-collector’s problem, 2003.
* [29] Chongkai Gao, Haichuan Gao, Shangqi Guo, Tianren Zhang, and Feng Chen. Cril: Continual robot imitation learning via generative and prediction model, 2021.
* [30] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning, 2016.
* [31] Andrew Y. Ng and Stuart J. Russell. Inverse reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning (ICML-2000), pages 663–670, 2000.
* [32] Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models, 2021.
* [33] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation, 2015.
* [34] Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Avnish Narayan, Hayden Shively, Adithya Bellathur, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning, 2021.
## Appendix
## Appendix A Hyperparameters
### A.1 Finetune
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 250 | number of times the entire dataset is passed through per task
learning rate | $10^{-4}$ | learning rate for gradient descent
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
### A.2 Multitask
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 500 | number of times the entire dataset of all tasks is passed through
learning rate | $10^{-4}$ | learning rate for gradient descent
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
### A.3 oEWC
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 250 | number of times the entire dataset of all tasks is passed through
learning rate | $10^{-4}$ | learning rate for gradient descent
Fisher multiplier | $10^{2}$ | the Fisher is scaled by this number to form the EWC penalty
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
The Fisher multiplier hyperparameter was tuned with the values:
$10^{-2},10^{-1},10^{0},10^{1},10^{2},10^{3},10^{4},10^{5},10^{6}$. We
selected the value $10^{2}$ based on the success rate metric given by Equation
2.
### A.4 PackNet
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 250 | number of times the entire dataset of all tasks is passed through
retrain epochs | 125 | number of training epochs after pruning
learning rate | $10^{-4}$ | learning rate for gradient descent
prune percent | 0.75 | percent of free parameters pruned for future tasks
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
The retrain epochs and prune percent hyperparameters were chosen following the
approach in the original PackNet paper. After training the first task, bias
layers are frozen.
### A.5 DGR
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 250 | number of times the entire dataset of all tasks is passed through
learning rate | $10^{-4}$ | learning rate for gradient descent
diffusion training steps | $10^{4}$ | number of training steps for the diffusion model per task
diffusion warmup steps | $5\times 10^{4}$ | number of extra training steps for the diffusion model on the first task
diffusion timesteps | $10^{3}$ | number of timesteps in the diffusion process
replay ratio | 0.9 | percentage of training examples that are generated
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
### A.6 CRIL
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 300 | number of times the entire dataset of all tasks is passed through
learning rate | $10^{-4}$ | learning rate for gradient descent
diffusion training steps | $10^{4}$ | number of training steps for the diffusion model per task
diffusion warmup steps | $5\times 10^{4}$ | number of extra training steps for the diffusion model on the first task
diffusion timesteps | $10^{3}$ | number of timesteps in the diffusion process
replay ratio | 0.9 | percentage of training examples that are generated
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
### A.7 t-DGR
Hyperparameter | Value | Brief Description
---|---|---
batch size | 32 | number of samples in each training iteration
epochs | 300 | number of times the entire dataset of all tasks is passed through
learning rate | $10^{-4}$ | learning rate for gradient descent
diffusion training steps | $10^{4}$ | number of training steps for the diffusion model per task
diffusion warmup steps | $5\times 10^{4}$ | number of extra training steps for the diffusion model on the first task
diffusion timesteps | $10^{3}$ | number of timesteps in the diffusion process
replay ratio | 0.9 | percentage of training examples that are generated
optimization algorithm | Adam | optimization algorithm used
$\beta_{1}$ | 0.9 | exponential decay rate for first moment estimates in Adam
$\beta_{2}$ | 0.999 | exponential decay rate for second moment estimates in Adam
epsilon | $10^{-8}$ | small constant for numerical stability
weight decay | 0 | weight regularization
## Appendix B Model Architecture
Layer (type) | Output Shape | Param #
---|---|---
Linear-1 | [32, 512] | 25,600
Linear-2 | [32, 512] | 262,656
Linear-3 | [32, 512] | 262,656
Linear-4 | [32, 512] | 262,656
Linear-5 | [32, 4] | 2,052
Total params: | 815,620
Trainable params: | 815,620
Non-trainable params: | 0
Table 2: Multi-layer perceptron architecture of the learner shared by all
methods
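The parameter counts in Table 2 imply a 49-dimensional input (since $512 \times 49 + 512 = 25{,}600$) and a 4-dimensional output. A minimal PyTorch sketch of this architecture follows; the framework and the ReLU activations between layers are assumptions, since Table 2 lists only the linear layers.

```python
import torch.nn as nn

# Hedged reconstruction of Table 2; the inter-layer ReLU activations
# are an assumption and are not part of the table.
learner = nn.Sequential(
    nn.Linear(49, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 4),
)
assert sum(p.numel() for p in learner.parameters()) == 815_620
```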
## Appendix C Experiment Details
We utilized the following random seeds for the experiments: 1, 2, 3, 4, 5. All
experiments were conducted on Nvidia A100 GPUs with 80 GB of memory. The
computational node consisted of an Intel Xeon Gold 6342 2.80GHz CPU with 500
GB of memory. For our longest benchmark, CW20, the runtimes were as follows:
DGR and t-DGR took 3 days, CRIL took 16 hours, finetune and oEWC took 6 hours,
and PackNet took 8 hours.
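For reproducibility, seeding typically covers all random number generators in use. The following is a sketch of that setup under the assumption of a Python/PyTorch stack; the text does not specify the exact seeding procedure.

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Seed the common RNGs. A sketch of typical practice, not the
    released experiment code."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

for seed in (1, 2, 3, 4, 5):  # the seeds listed above
    set_seed(seed)
    # ... run one experiment per seed ...
```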
# Evaluating Models’ Local Decision Boundaries via Contrast Sets
Matt Gardner★♢ Yoav ArtziΓ Victoria Basmova♢♣ Jonathan Berant♢♠ Ben Bogin♠
Sihao Chen♡ Pradeep Dasigi♢ Dheeru Dua□ Yanai Elazar♢♣ Ananth Gottumukkala□
Nitish Gupta♡ Hanna Hajishirzi♢△ Gabriel Ilharco△ Daniel Khashabi♢ Kevin Lin+
Jiangming Liu♢† Nelson F. Liu¶ Phoebe Mulcaire△ Qiang Ning♢ Sameer Singh□ Noah
A. Smith♢△ Sanjay Subramanian♢ Reut Tsarfaty♢♣ Eric Wallace+ Ally ZhangΓ Ben
Zhou♡
♢Allen Institute for AI ΓCornell University ♣Bar-Ilan University
♠Tel-Aviv University ♡University of Pennsylvania △University of Washington
□UC Irvine +UC Berkeley †University of Edinburgh ¶Stanford University
<EMAIL_ADDRESS>
###### Abstract
Standard test sets for supervised learning evaluate in-distribution
generalization. Unfortunately, when a dataset has systematic gaps (e.g.,
annotation artifacts), these evaluations are misleading: a model can learn
simple decision rules that perform well on the test set but do not capture the
abilities a dataset is intended to test. We propose a more rigorous annotation
paradigm for NLP that helps to close systematic gaps in the test data. In
particular, after a dataset is constructed, we recommend that the dataset
authors manually perturb the test instances in small but meaningful ways that
(typically) change the gold label, creating _contrast sets_. Contrast sets
provide a local view of a model’s decision boundary, which can be used to more
accurately evaluate a model’s true linguistic capabilities. We demonstrate the
efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g.,
DROP reading comprehension, UD parsing, and IMDb sentiment analysis). Although
our contrast sets are not explicitly adversarial, model performance is
significantly lower on them than on the original test sets—up to 25% in some
cases. We release our contrast sets as new evaluation benchmarks and encourage
future dataset construction efforts to follow similar annotation processes.
## 1 Introduction
## 2 Contrast Sets
Figure 1: An illustration of how contrast sets provide a more comprehensive model evaluation when datasets have systematic gaps. (a) A two-dimensional dataset that requires a complex decision boundary to achieve high accuracy. (b) If the same data distribution is instead sampled with systematic gaps (e.g., due to annotator bias), a simple decision boundary _can perform well on i.i.d. test data_ (shown outlined in pink). (c) Since filling in all gaps in the distribution is infeasible, a _contrast set_ instead fills in a local ball around a test instance to evaluate the model’s decision boundary.
Dataset | Original Instance | Contrastive Instance (color = edit)
---|---|---
IMDb | Hardly one to be faulted for his ambition or his vision, it is genuinely unexpected, then, to see all Park’s effort add up to so very little. …The premise is promising, gags are copious and offbeat humour abounds but it all fails miserably to create any meaningful connection with the audience. (Label: Negative) | Hardly one to be faulted for his ambition or his vision, here we see all Park’s effort come to fruition. …The premise is perfect, gags are hilarious and offbeat humour abounds, and it creates a deep connection with the audience. (Label: Positive)
MATRES | Colonel Collins followed a normal progression once she was picked as a NASA astronaut. (“picked” was before “followed”) | Colonel Collins followed a normal progression before she was picked as a NASA astronaut. (“picked” was after “followed”)
UD English | They demanded talks with local US commanders. I attach a paper on gas storage value modeling. I need to get a job at the earliest opportunity. | They demanded talks with great urgency. I attach a paper on my own initiative. I need to get a job at House of Pies.
PERSPECTRUM | Claim: Should uniforms be worn at school. Perspective: School uniforms emphasize the socio-economic divisions they are supposed to eliminate. Label: Against | Claim: Should uniforms be banned at school. Perspective: School uniforms emphasize the socio-economic divisions they are supposed to eliminate. Label: For
DROP | Context: In the spring of 1625 the Spanish regained Bahia in Brazil and Breda in the Netherlands from the Dutch. In the autumn they repulsed the English at Cadiz. Question: What event happened first, the Spanish repulsed the English at Cadiz or the Spanish regained Bahia? | Context: In the spring of 1625 the Spanish regained Bahia in Brazil and Breda in the Netherlands from the Dutch. In winter the year earlier they had repulsed the English at Cadiz. Question: What event happened first, the Spanish repulsed the English at Cadiz or the Spanish regained Bahia?
Quoref | Context: Matt Helm is a secret agent. His assignment is to stop the sinister Tung-Tze, armed with spy gadgets. Helm prevails with Gail by his side as he destroys Tung-Tze. Question: Who is armed with spy gadgets? | Context: Matt Helm is a secret agent. His assignment is to stop the sinister Tung-Tze, even though he is armed with spy gadgets. Helm prevails with Gail by his side as he destroys Tung-Tze. Question: Who is armed with spy gadgets?
MC-TACO | Context: She renews in Ranchipur an acquaintance with a former lover, Tom Ransome, now a dissolute alcoholic. Question: How frequently does Tom drink? Candidate Answer: Every other night Label: Likely | Context: She renews in Ranchipur an acquaintance with a former lover, Tom Ransome, who keeps very healthy habits. Question: How frequently does Tom drink? Candidate Answer: Every other night Label: Unlikely
Table 1: We create contrast sets for 10 datasets and show instances from seven
of them here.
### 2.1 The Problem
We first give a sketch of the problem that contrast sets attempt to solve in a
toy two-dimensional classification setting as shown in Figure 1. Here, the
true underlying data distribution requires a complex decision boundary (Figure
1(a)). However, as is common in practice, our toy dataset is rife with
systematic gaps (e.g., due to annotator bias, repeated patterns, etc.). This
causes simple decision boundaries to emerge (Figure 1(b)). And, because our
biased dataset is split _i.i.d._ into train and test sets, this simple
decision boundary will perform well on test data. Ideally, we would like to
fill in all of a dataset’s systematic gaps; however, this is usually
impossible. Instead, we create a _contrast set_: a collection of instances
tightly clustered in input space around a single test instance, or _pivot_
(Figure 1(c); an $\epsilon$-ball in our toy example). This contrast set allows
us to measure how well a model’s decision boundary aligns with the correct
decision boundary local to the pivot. In this case, the contrast set
demonstrates that the model’s simple decision boundary is incorrect. We repeat
this process around numerous pivots to form entire evaluation datasets.
When we move from toy settings to complex NLP tasks, the precise nature of a
“systematic gap” in the data becomes harder to define. Indeed, the geometric
view in our toy examples does not correspond directly to experts’ perception
of data; there are many ways to “locally perturb” natural language. We do not
expect intuition, even of experts, to exhaustively reveal gaps.
Nevertheless, the presence of these gaps is well-documented Gururangan et al.
(2018); Poliak et al. (2018); Min et al. (2019), and Niven and Kao (2019) give
an initial attempt at formally characterizing them. In particular, one common
source is annotator bias from data collection processes Geva et al. (2019).
For example, in the SNLI dataset Bowman et al. (2015), Gururangan et al.
(2018) show that the words _sleeping_ , _tv_ , and _cat_ almost never appear
in an entailment example, either in the training set or the test set, though
they often appear in contradiction examples. This is not because these words
are particularly important to the phenomenon of entailment; their absence in
entailment examples is a _systematic gap_ in the data that can be exploited by
models to achieve artificially high test accuracy. This is but one kind of
systematic gap; there are also biases due to the writing styles of small
groups of annotators Geva et al. (2019), the distributional biases in the data
that was chosen for annotation, as well as numerous other biases that are more
subtle and harder to discern Shah et al. (2020).
Completely removing these gaps in the initial data collection process would be
ideal, but is likely impossible—language has too much inherent variability in
a very high-dimensional space. Instead, we use contrast sets to fill in gaps
in the test data to give more thorough evaluations than what the original data
provides.
### 2.2 Definitions
We begin by defining a _decision boundary_ as a partition of some space into
labels (in this discussion we are talking about the _true_ decision
boundary, not a _model’s_ decision boundary). This partition can be represented
by the set of all points in the space with their associated labels:
$\{(x,y)\}$. This definition differs somewhat from the canonical definition,
which is a collection of hypersurfaces that separate labels. There is a
bijection between partitions and these sets of hypersurfaces in continuous
spaces, however, so they are equivalent definitions. We choose to use the
partition to represent the decision boundary as it makes it very easy to
define a _local_ decision boundary and to generalize the notion to discrete
spaces, which we deal with in NLP.
A _local decision boundary_ around some _pivot_ $x$ is the set of all points
$x^{\prime}$ and their associated labels $y^{\prime}$ that are within some
distance $\epsilon$ of $x$. That is, a local decision boundary around $x$ is
the set $\{(x^{\prime},y^{\prime}) \mid d(x,x^{\prime})<\epsilon\}$. Note
here that even though a “boundary” or “surface” is hard to visualize in a
discrete input space, using this partition representation instead of
hypersurfaces gives us a uniform definition of a local decision boundary in
any input space; all that is needed is a distance function $d$.
A _contrast set_ $C(x)$ is any sample of points from a local decision boundary
around $x$. In other words, $C(x)$ consists of inputs $x^{\prime}$ that are
similar to $x$ according to some distance function $d$. Typically these points
are sampled such that $y^{\prime}\neq y$. To evaluate a model using these
contrast sets, we define the _contrast consistency_ of a model to be whether
it makes correct predictions $\hat{y}$ on every element in the set:
$\mathrm{all}(\{\hat{y}=y^{\prime}\;\forall(x^{\prime},y^{\prime})\in C(x)\})$. Since the points $x^{\prime}$ were chosen from the local decision
boundary, we expect contrast consistency on expert-built contrast sets to be a
significantly more accurate evaluation of whether model predictions match the
task definition than a random selection of input / output pairs.
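These definitions translate directly into a few lines of code. The sketch below computes contrast consistency given a model’s prediction function and contrast sets represented as lists of $(x^{\prime},y^{\prime})$ pairs; the names are illustrative rather than taken from any released implementation.

```python
def contrast_consistency(predict, contrast_sets):
    """Fraction of contrast sets on which the model predicts every
    element correctly, i.e. all({predict(x') == y'}) per set."""
    consistent = sum(
        all(predict(x) == y for x, y in contrast_set)
        for contrast_set in contrast_sets
    )
    return consistent / len(contrast_sets)
```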
### 2.3 Contrast sets in practice
Given these definitions, we now turn to the actual construction of contrast
sets in practical NLP settings. There were two things left unspecified in the
definitions above: the distance function $d$ to use in discrete input spaces,
and the method for sampling from a local decision boundary. While there has
been some work trying to formally characterize distances for adversarial
robustness in NLP Michel et al. (2019); Jia et al. (2019), we find it more
useful in our setting to simply rely on expert judgments to generate a similar
but meaningfully different $x^{\prime}$ given $x$, addressing both the
distance function and the sampling method.
Future work could try to give formal treatments of these issues, but we
believe expert judgments are sufficient to make initial progress in improving
our evaluation methodologies. And while expert-crafted contrast sets can only
give us an upper bound on a model’s local alignment with the true decision
boundary, an upper bound on local alignment is often more informative than a
potentially biased _i.i.d._ evaluation that permits artificially simple
decision boundaries. To give a tighter upper bound, we draw pivots $x$ from
some _i.i.d._ test set, and we do not provide _i.i.d._ contrast sets at
training time, which could provide additional artificially simple decision
boundaries to a model.
The teaser figure displays an example contrast set for the NLVR2 visual
reasoning dataset Suhr and Artzi (2019). Here, both the sentence and the image
are modified in small ways (e.g., by changing a word in the sentence or
finding a similar but different image) to make the output label change.
A contrast set is _not_ a collection of adversarial examples Szegedy et al.
(2014). Adversarial examples are almost the methodological opposite of
contrast sets: they change the input such that a model’s decision changes but
the gold label does not Jia and Liang (2017); Wallace et al. (2019a). On the
other hand, contrast sets are model-agnostic, constructed by experts to
characterize whether a model’s decision boundary locally aligns to the true
decision boundary around some point. Doing this requires input changes that
also induce changes to the gold label.
We recommend that the original dataset authors—the experts on the linguistic
phenomena intended to be reflected in their dataset—construct the contrast
sets. This is best done by first identifying a list of phenomena that
characterize their dataset. In syntactic parsing, for example, this list might
include prepositional phrase attachment ambiguities, coordination scope,
clausal attachment, etc. After the standard dataset collection process, the
authors should sample pivots from their test set and perturb them according to
the listed phenomena.
### 2.4 Design Choices of Contrast Sets
Here, we discuss possible alternatives to our approach for constructing
contrast sets and our reasons for choosing the process we did.
## 3 How to Create Contrast Sets
Here, we walk through our process for creating contrast sets for three
datasets. Examples are shown in the teaser figure and Table 1.
#### DROP
DROP Dua et al. (2019) is a reading comprehension dataset that is intended to
cover compositional reasoning over numbers in a paragraph, including
filtering, sorting, and counting sets, and doing numerical arithmetic. The
data has three main sources of paragraphs, all from Wikipedia articles:
descriptions of American football games, descriptions of census results, and
summaries of wars. There are many common patterns used by the crowd workers
that make some questions artificially easy: 2 is the most frequent answer to
_How many…?_ questions, questions asking about the ordering of events
typically follow the linear order of the paragraph, and a large fraction of
the questions do not require compositional reasoning.
Our strategy for constructing contrast sets for DROP was three-fold. First, we
added more compositional reasoning steps. The questions about American
football passages in the original data very often had multiple reasoning steps
(e.g., _How many yards difference was there between the Broncos’ first
touchdown and their last?_), but the questions about the other passage types
did not. We drew from common patterns in the training data and added
additional reasoning steps to questions in our contrast sets. Second, we
inverted the semantics of various parts of the question. This includes
perturbations such as changing _shortest_ to _longest_ , _later_ to _earlier_
, as well as changing questions asking for counts to questions asking for sets
(_How many countries…_ to _Which countries…_). Finally, we changed the
ordering of events. A large number of questions about war paragraphs ask which
of two events happened first. We changed (1) the order the events were asked
about in the question, (2) the order that the events showed up in the passage,
and (3) the dates associated with each event to swap their temporal order.
#### NLVR2
We next consider NLVR2, a dataset where a model is given a sentence about two
provided images and must determine whether the sentence is true Suhr et al.
(2019). The data collection process encouraged highly compositional language,
which was intended to require understanding the relationships between objects,
properties of objects, and counting. We constructed NLVR2 contrast sets by
modifying the sentence or replacing one of the images with freely-licensed
images from web searches. For example, we might change _The left image
contains twice the number of dogs as the right image_ to _The left image
contains three times the number of dogs as the right image_. Similarly, given
an image pair with four dogs in the left and two dogs in the right, we can
replace individual images with photos of variably-sized groups of dogs. The
textual perturbations were often changes in quantifiers (e.g., _at least one_
to _exactly one_), entities (e.g., _dogs_ to _cats_), or properties thereof
(e.g., _orange glass_ to _green glass_). An example contrast set for NLVR2 is
shown in the teaser figure.
#### UD Parsing
Finally, we discuss dependency parsing in the universal dependencies (UD)
formalism Nivre et al. (2016). We look at dependency parsing to show that
contrast sets apply not only to modern “high-level” NLP tasks but also to
longstanding linguistic analysis tasks. We first chose a specific type of
attachment ambiguity to target: the classic problem of prepositional phrase
(PP) attachment Collins and Brooks (1995), e.g. _We ate spaghetti with forks_
versus _We ate spaghetti with meatballs_. We use a subset of the English UD
treebanks: GUM Zeldes (2017), the English portion of LinES Ahrenberg (2007),
the English portion of ParTUT Sanguinetti and Bosco (2015), and the
dependency-annotated English Web Treebank Silveira et al. (2014). We searched
these treebanks for sentences that include a potentially structurally
ambiguous attachment from the head of a PP to either a noun or a verb. We then
perturbed these sentences by altering one of their noun phrases such that the
semantics of the perturbed sentence required a different attachment for the
PP. We then re-annotated these perturbed sentences to indicate the new
attachment(s).
Dataset | # Examples | # Sets | Model | Original Test | Contrast | Δ | Consistency
---|---|---|---|---|---|---|---
NLVR2 | 994 | 479 | LXMERT | 76.4 | 61.1 | (–15.3) | 30.1
IMDb | 488 | 488 | BERT | 93.8 | 84.2 | (–9.6) | 77.8
MATRES | 401 | 239 | CogCompTime2.0 | 73.2 | 63.3 | (–9.9) | 40.6
UD English | 150 | 150 | Biaffine + ELMo | 64.7 | 46.0 | (–18.7) | 17.3
PERSPECTRUM | 217 | 217 | RoBERTa | 90.3 | 85.7 | (–4.6) | 78.8
DROP | 947 | 623 | MTMSN | 79.9 | 54.2 | (–25.7) | 39.0
QUOREF | 700 | 415 | XLNet-QA | 70.5 | 55.4 | (–15.1) | 29.9
ROPES | 974 | 974 | RoBERTa | 47.7 | 32.5 | (–15.2) | 17.6
BoolQ | 339 | 70 | RoBERTa | 86.1 | 71.1 | (–15.0) | 59.0
MC-TACO | 646 | 646 | RoBERTa | 38.0 | 14.0 | (–24.0) | 8.0
Table 2: Models struggle on the contrast sets compared to the original test
sets. For each dataset, we use a (sometimes near) state-of-the-art model and
evaluate it on the “# Examples” examples in the contrast sets (_not_ including
the original example). We report percentage accuracy for NLVR2, IMDb,
PERSPECTRUM, MATRES, and BoolQ; F1 scores for DROP and Quoref; Exact Match
(EM) scores for ROPES and MC-TACO; and unlabeled attachment score on modified
attachments for the UD English dataset. We also report _contrast consistency_
: the percentage of the “# Sets” contrast sets for which a model’s predictions
are correct for all examples in the set (_including_ the original example).
More details on datasets, models, and metrics can be found in §A and §B.
#### Summary
While the overall process we recommend for constructing contrast sets is
simple and unified, its actual instantiation varies for each dataset. Dataset
authors should use their best judgment to select which phenomena they are most
interested in studying and craft their contrast sets to explicitly test those
phenomena. Care should be taken during contrast set construction to ensure
that the phenomena present in contrast sets are similar to those present in
the original test set; the purpose of a contrast set is not to introduce new
challenges, but to more thoroughly evaluate the original intent of the test
set.
## 4 Datasets and Experiments
### 4.1 Original Datasets
We create contrast sets for 10 NLP datasets (full descriptions are provided in
Section A):
* NLVR2 Suhr et al. (2019)
* IMDb sentiment analysis Maas et al. (2011)
* MATRES Temporal RE Ning et al. (2018)
* English UD parsing Nivre et al. (2016)
* PERSPECTRUM Chen et al. (2019)
* DROP Dua et al. (2019)
* Quoref Dasigi et al. (2019)
* ROPES Lin et al. (2019)
* BoolQ Clark et al. (2019)
* MC-TACO Zhou et al. (2019)
We choose these datasets because they span a variety of tasks (e.g., reading
comprehension, sentiment analysis, visual reasoning) and input-output formats
(e.g., classification, span extraction, structured prediction). We include
high-level tasks for which dataset artifacts are known to be prevalent, as
well as longstanding formalism-based tasks, where data artifacts have been
less of an issue (or at least have been less well-studied).
### 4.2 Contrast Set Construction
The contrast sets were constructed by NLP researchers who were deeply familiar
with the phenomena underlying the annotated dataset.
## 5 Related Work
The fundamental idea of finding or creating data that is “minimally different”
has a very long history. In linguistics, for instance, the term _minimal pair_
is used to denote two words with different meaning that differ by a single
sound change, thus demonstrating that the sound change is phonemic in that
language Pike (1946). Many people have used this idea in NLP (see below),
creating challenge sets or providing training data that is “minimally
different” in some sense, and we continue this tradition. Our main
contribution to this line of work, in addition to the resources that we have
created, is giving a simple and intuitive geometric interpretation of “bias”
in dataset collection, and showing that this long-standing idea of minimal
data changes can be effectively used to solve this problem on a wide variety
of NLP tasks. We additionally generalize the idea of a minimal _pair_ to a
_set_ , and use a _consistency_ metric, which we contend more closely aligns
with what NLP researchers mean by “language understanding”.
#### Training on Perturbed Examples
Many previous works have provided minimally contrastive examples on which to
train models. Selsam et al. (2019), Tafjord et al. (2019), Lin et al. (2019),
and Khashabi et al. (2020) designed their data collection process to include
contrastive examples. Data augmentation methods have also been used to
mitigate gender Zhao et al. (2018), racial Dixon et al. (2018), and other
biases Kaushik et al. (2020) during training, or to introduce useful inductive
biases Andreas (2020).
#### Challenge Sets
The idea of creating challenging contrastive evaluation sets has a long
history Levesque et al. (2011); Ettinger et al. (2017); Glockner et al.
(2018); Naik et al. (2018); Isabelle et al. (2017). Challenge sets exist for
various phenomena, including ones with “minimal” edits similar to our contrast
sets, e.g., in image captioning Shekhar et al. (2017), machine translation
Sennrich (2017); Burlot and Yvon (2017); Burlot et al. (2018), and language
modeling Marvin and Linzen (2018); Warstadt et al. (2019). Minimal pairs of
edits that perturb gender or racial attributes are also useful for evaluating
social biases Rudinger et al. (2018); Zhao et al. (2018); Lu et al. (2018).
Our key contribution over this prior work is in grouping perturbed instances
into a contrast set, for measuring local alignment of decision boundaries,
along with our new, related resources. Additionally, rather than creating new
data from scratch, contrast sets augment existing test examples to fill in
systematic gaps. Thus contrast sets often require less effort to create, and
they remain grounded in the original data distribution of some training set.
Since the initial publication of this paper, Shmidman et al. have further
demonstrated the utility of contrast sets by applying these ideas to the
evaluation of morphological disambiguation in Hebrew.
## 6 Conclusion
We presented a new annotation paradigm, based on long-standing ideas around
contrastive examples, for constructing more rigorous test sets for NLP. Our
procedure maintains most of the established processes for dataset creation but
fills in some of the systematic gaps that are typically present in datasets.
By shifting evaluations from accuracy on _i.i.d._ test sets to consistency on
contrast sets, we can better examine whether models have learned the desired
capabilities or simply captured the idiosyncrasies of a dataset. We created
contrast sets for 10 NLP datasets and released this data as new evaluation
benchmarks.
We recommend that future data collection efforts create contrast sets to
provide more comprehensive evaluations for both existing and new NLP datasets.
While we have created thousands of new test examples across a wide variety of
datasets, we have only taken small steps towards the rigorous evaluations we
would like to see in NLP. The last several years have given us dramatic
modeling advancements; our evaluation methodologies and datasets need to see
similar improvements.
## Acknowledgements
We thank the anonymous reviewers for their helpful feedback on this paper, as
well as many others who gave constructive comments on a publicly-available
preprint. Various authors of this paper were supported in part by ERC grant
677352, NSF grant 1562364, NSF grant IIS-1756023, NSF CAREER 1750499, ONR
grant N00014-18-1-2826 and DARPA grant N66001-19-2-403.
## References
* Ahrenberg (2007) Lars Ahrenberg. 2007. LinES: an English-Swedish parallel treebank. In _NODALIDA_.
* Andreas (2020) Jacob Andreas. 2020. Good-enough compositional data augmentation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7556–7566, Online. Association for Computational Linguistics.
* Ben-David et al. (2010) Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. _Machine Learning_.
* Bowman et al. (2015) Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In _EMNLP_.
* Burlot et al. (2018) Franck Burlot, Yves Scherrer, Vinit Ravishankar, Ondřej Bojar, Stig-Arne Grönroos, Maarit Koponen, Tommi Nieminen, and François Yvon. 2018. The WMT’18 morpheval test suites for English-Czech, English-German, English-Finnish and Turkish-English. In _Proceedings of the Third Conference on Machine Translation: Shared Task Papers_ , Belgium, Brussels. Association for Computational Linguistics.
* Burlot and Yvon (2017) Franck Burlot and François Yvon. 2017. Evaluating the morphological competence of machine translation systems. In _Proceedings of the Second Conference on Machine Translation_ , pages 43–55, Copenhagen, Denmark. Association for Computational Linguistics.
* Chen et al. (2016) Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In _ACL_.
* Chen et al. (2019) Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019. Seeing things from a different angle: Discovering diverse perspectives about claims. In _NAACL_.
* Clark et al. (2019) Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural Yes/No questions. In _NAACL_.
* Collins and Brooks (1995) Michael Collins and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In _Third Workshop on Very Large Corpora_.
* Dasigi et al. (2019) Pradeep Dasigi, Nelson F Liu, Ana Marasovic, Noah A Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In _EMNLP_.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In _NAACL_.
* Dixon et al. (2018) Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In _ACM AIES_.
* Dozat and Manning (2017) Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. In _ICLR_.
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In _NAACL_.
* Ettinger et al. (2017) Allyson Ettinger, Sudha Rao, Hal Daumé III, and Emily M. Bender. 2017. Towards linguistically generalizable NLP systems: A workshop and shared task. In _Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems_ , Copenhagen, Denmark. Association for Computational Linguistics.
* Feng et al. (2019) Shi Feng, Eric Wallace, and Jordan Boyd-Graber. 2019. Misleading failures of partial-input baselines. In _ACL_.
* Feng et al. (2018) Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In _EMNLP_.
* Geva et al. (2019) Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets. In _EMNLP_.
* Glockner et al. (2018) Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In _ACL_.
* Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In _NAACL_.
* Hu et al. (2019) Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In _EMNLP_.
* Isabelle et al. (2017) Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation. In _EMNLP_.
* Jia and Liang (2017) Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In _EMNLP_.
* Jia et al. (2019) Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , Hong Kong, China. Association for Computational Linguistics.
* Kaushik et al. (2020) Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In _ICLR_.
* Khashabi et al. (2020) Daniel Khashabi, Tushar Khot, and Ashish Sabhwawal. 2020. More bang for your buck: Natural perturbation for robust question answering. In _EMNLP_.
* Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In _EMNLP_.
* Levesque et al. (2011) Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2011. The winograd schema challenge. In _KR_.
* Lin et al. (2019) Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In _EMNLP MRQA Workshop_.
* Lipton et al. (2018) Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. 2018. Detecting and correcting for label shift with black box predictors. In _ICML_.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Lu et al. (2018) Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing. _arXiv preprint arXiv:1807.11714_.
* Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In _ACL_.
* Marcus et al. (1993) Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. _Computational Linguistics_.
* Marvin and Linzen (2018) Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In _EMNLP_.
* Michel et al. (2019) Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In _NAACL_.
* Min et al. (2019) Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In _ACL_.
* Naik et al. (2018) Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In _COLING_.
* Nie et al. (2019) Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for natural language understanding. _arXiv preprint arXiv:1910.14599_.
* Ning et al. (2019) Qiang Ning, Sanjay Subramanian, and Dan Roth. 2019. An Improved Neural Baseline for Temporal Relation Extraction. In _EMNLP_.
* Ning et al. (2018) Qiang Ning, Hao Wu, and Dan Roth. 2018. A Multi-Axis Annotation Scheme for Event Temporal Relations. In _ACL_.
* Niven and Kao (2019) Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , Florence, Italy. Association for Computational Linguistics.
* Nivre et al. (2016) Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In _LREC_.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _NAACL_.
* Pike (1946) K. L. Pike. 1946. _Phonemics: A Technique for Reducing Languages to Writing_. Summer Institute of Linguistics.
* Poliak et al. (2018) Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In _*SEM_.
* Recht et al. (2019) Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do ImageNet classifiers generalize to ImageNet? In _ICML_.
* Ribeiro et al. (2019) Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are red roses red? Evaluating consistency of question-answering models. In _ACL_.
* Ribeiro et al. (2018a) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018a. Semantically equivalent adversarial rules for debugging NLP models. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 856–865, Melbourne, Australia. Association for Computational Linguistics.
* Ribeiro et al. (2018b) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018b. Semantically equivalent adversarial rules for debugging NLP models. In _ACL_.
* Rudinger et al. (2018) Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In _NAACL_.
* Sanguinetti and Bosco (2015) Manuela Sanguinetti and Cristina Bosco. 2015. PartTUT: The Turin university parallel treebank. In _Harmonization and Development of Resources and Tools for Italian Natural Language Processing within the PARLI Project_.
* Selsam et al. (2019) Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill. 2019. Learning a SAT solver from single-bit supervision. In _International Conference on Learning Representations_.
* Sennrich (2017) Rico Sennrich. 2017. How grammatical is character-level neural machine translation? Assessing MT quality with contrastive translation pairs. In _EACL_.
* Shah et al. (2020) Deven Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In _ACL_.
* Shekhar et al. (2017) Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurélie Herbelot, Moin Nabi, Enver Sangineto, and Raffaella Bernardi. 2017. Foil it! Find One mismatch between image and language caption. In _ACL_.
* Shimodaira (2000) Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. In _Journal of Statistical Planning and Inference_.
* Shmidman et al. Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Moshe Koppel, and Reut Tsarfaty. A novel challenge set for Hebrew morphological disambiguation and diacritics restoration. In _Findings of EMNLP_.
* Silveira et al. (2014) Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In _LREC_.
* Suhr and Artzi (2019) Alane Suhr and Yoav Artzi. 2019. NLVR2 visual bias analysis. _arXiv preprint arXiv:1909.10411_.
* Suhr et al. (2019) Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In _ACL_.
* Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In _ICLR_.
* Tafjord et al. (2019) Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. QuaRTz: An open-domain dataset of qualitative relationship questions. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5941–5946, Hong Kong, China. Association for Computational Linguistics.
* Tan and Bansal (2019) Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In _EMNLP_.
* UzZaman et al. (2013) Naushad UzZaman, Hector Llorens, Leon Derczynski, James Allen, Marc Verhagen, and James Pustejovsky. 2013. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In _*SEM_.
* Wallace et al. (2019a) Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal adversarial triggers for attacking and analyzing NLP. In _EMNLP_.
* Wallace et al. (2019b) Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019b. Trick me if you can: Human-in-the-loop generation of adversarial question answering examples. In _TACL_.
* Warstadt et al. (2019) Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. BLiMP: A benchmark of linguistic minimal pairs for English. _arXiv preprint arXiv:1912.00582_.
* Wu et al. (2019) Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, reproducible, and testable error analysis. In _ACL_.
* Yadav and Bottou (2019) Chhavi Yadav and Léon Bottou. 2019. Cold case: The lost MNIST digits. In _NeurIPS_.
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In _NeurIPS_.
* Zeldes (2017) Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. In _LREC_.
* Zellers et al. (2018) Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In _EMNLP_.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In _ACL_.
* Zhao et al. (2018) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In _NAACL_.
* Zhou et al. (2019) Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. “Going on a vacation” takes longer than “going for a walk”: A study of temporal commonsense understanding. In _EMNLP_.
## Appendix A Dataset Details
Here, we provide details for the datasets that we build contrast sets for.
#### Natural Language Visual Reasoning 2
(NLVR2) Given a natural language sentence about two photographs, the task is
to determine if the sentence is true Suhr et al. (2019). The dataset has
highly compositional language, e.g., _The left image contains twice the number
of dogs as the right image, and at least two dogs in total are standing_. To
succeed at NLVR2, a model is supposed to be able to detect and count objects,
recognize spatial relationships, and understand the natural language that
describes these phenomena.
#### Internet Movie Database
(IMDb) The task is to predict the sentiment (positive or negative) of a movie
review Maas et al. (2011). We use the same set of reviews from Kaushik et al.
(2020) in order to analyze the differences between crowd-edited reviews and
expert-edited reviews.
#### Temporal relation extraction
(MATRES) The task is to determine what temporal relationship exists between
two events, i.e., whether some event happened _before_ or _after_ another
event Ning et al. (2018). MATRES has events and temporal relations labeled for
approximately 300 news articles. The event annotations are taken from the data
provided in the TempEval3 workshop UzZaman et al. (2013) and the temporal
relations are re-annotated based on a multi-axis formalism. We assume that the
events are given and only need to classify the relation label between them.
#### English UD Parsing
We use a combination of four English treebanks (GUM, EWT, LinES, ParTUT) in
the Universal Dependencies parsing framework, covering a range of genres. We
focus on the problem of prepositional phrase attachment: whether the head of a
prepositional phrase attaches to a verb or to some other dependent of the
verb. We manually selected a small set of sentences from these treebanks that
had potentially ambiguous attachments.
#### Reasoning about perspectives
(PERSPECTRUM) Given a debate-worthy natural language claim, the task is to
identify the set of relevant argumentative sentences that represent
perspectives for/against the claim Chen et al. (2019). We focus on the stance
prediction sub-task: a binary prediction of whether a relevant perspective is
for/against the given claim.
#### Discrete Reasoning Over Paragraphs
(DROP) A reading comprehension dataset that requires numerical reasoning,
e.g., adding, sorting, and counting numbers in paragraphs Dua et al. (2019).
In order to compute the consistency metric for the span answers of DROP, we
report the percentage of contrast sets in which $F_{1}$ for all instances
is above $0.8$.
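A minimal sketch of this thresholded consistency is given below, where `f1` is assumed to be the standard token-overlap $F_{1}$ used for span answers and each contrast set is a list of (prediction, gold) pairs; the names are illustrative.

```python
def span_consistency(f1, contrast_sets, threshold=0.8):
    """Count a contrast set as consistent only if every prediction in
    it reaches F1 above the threshold against its gold answer."""
    hits = sum(
        all(f1(pred, gold) > threshold for pred, gold in contrast_set)
        for contrast_set in contrast_sets
    )
    return hits / len(contrast_sets)
```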
#### Quoref
A reading comprehension task with span selection questions that require
coreference resolution Dasigi et al. (2019). In this dataset, most questions
can be localized to a single event in the passage, and reference an argument
in that event that is typically a pronoun or other anaphoric reference.
Correctly answering the question requires resolving the pronoun. We use the
same definition of consistency for Quoref as we did for DROP.
#### Reasoning Over Paragraph Effects in Situations
(ROPES) A reading comprehension dataset that requires applying knowledge from
a background passage to new situations Lin et al. (2019). This task has
background paragraphs drawn mostly from science texts that describe causes and
effects (e.g., that brightly colored flowers attract insects), and situations
written by crowd workers that instantiate either the cause (e.g., bright
colors) or the effect (e.g., attracting insects). Questions are written that
query the application of the statements in the background paragraphs to the
instantiated situation. Correctly answering the questions is intended to
require understanding how free-form causal language can be understood and
applied. We use the same consistency metric for ROPES as we did for DROP and
Quoref.
#### BoolQ
A dataset of reading comprehension instances with Boolean (yes or no) answers
Clark et al. (2019). These questions were obtained from organic Google search
queries and paired with paragraphs from Wikipedia pages that are labeled as
sufficient to deduce the answer. As the questions are drawn from a
distribution of what people search for on the internet, there is no clear set
of “intended phenomena” in this data; it is an eclectic mix of different kinds
of questions.
#### MC-TACO
A dataset of reading comprehension questions about multiple temporal common-
sense phenomena Zhou et al. (2019). Given a short paragraph (often a single
sentence), a question, and a collection of candidate answers, the task is to
determine which of the candidate answers are plausible. For example, the
paragraph might describe a storm and the question might ask how long the storm
lasted, with candidate answers ranging from seconds to weeks. This dataset is
intended to test a system’s knowledge of typical event durations, orderings,
and frequency. As the paragraph does not contain the information necessary to
answer the question, this dataset is largely a test of background (common
sense) knowledge.
## Appendix B Contrast Set Details
### B.1 NLVR2
#### Text Perturbation Strategies
We use the following text perturbation strategies for NLVR2:
* Perturbing quantifiers, e.g., _There is at least one dog_ $\to$ _There is exactly one dog_.
* Perturbing numbers, e.g., _There is at least one dog_ $\to$ _There are at least two dogs_.
* Perturbing entities, e.g., _There is at least one dog_ $\to$ _There is at least one cat_.
* Perturbing properties of entities, e.g., _There is at least one yellow dog_ $\to$ _There is at least one green dog_.
#### Image Perturbation Strategies
For image perturbations, the annotators collected images that are perceptually
and/or conceptually close to the hypothesized decision boundary, i.e., they
represent a minimal change in some concrete aspect of the image. For example,
for an image pair with 2 dogs on the left and 1 dog on the right and the
sentence _There are more dogs on the left than the right_ , a reasonable image
change would be to replace the right-hand image with an image of two dogs.
#### Model
We use LXMERT Tan and Bansal (2019) trained on the NLVR2 training dataset.
#### Contrast Set Statistics
Five annotators created 983 perturbed instances that form 479 contrast sets.
Annotation took approximately thirty seconds per textual perturbation and two
minutes per image perturbation.
### B.2 IMDb
#### Perturbation Strategies
We minimally perturb reviews to flip the label while ensuring that the review
remains coherent and factually consistent. Here, we provide example revisions:
Original (Negative): I had quite high hopes for this film, even though it got
a bad review in the paper. I was extremely tolerant, and sat through the
entire film. I felt quite sick by the end.
New (Positive): I had quite high hopes for this film, even though it got a bad
review in the paper. I was extremely amused, and sat through the entire film.
I felt quite happy by the end.
Original (Positive): This is the greatest film I saw in 2002, whereas I’m used
to mainstream movies. It is rich and makes a beautiful artistic act from these
11 short films. From the technical info (the chosen directors), I feared it
would have an anti-American basis, but … it’s a kind of (11 times) personal
tribute. The weakest point comes from Y. Chahine : he does not manage to
“swallow his pride” and considers this event as a well-merited punishment … It
is really the weakest part of the movie, but this testifies of a real freedom
of speech for the whole piece.
New (Negative): This is the most horrendous film I saw in 2002, whereas I’m
used to mainstream movies. It is low budgeted and makes a less than beautiful
artistic act from these 11 short films. From the technical info (the chosen
directors), I feared it would have an anti-American basis, but … it’s a kind
of (11 times) the same. One of the weakest point comes from Y. Chahine : he
does not manage to “swallow his pride” and considers this event as a well-
merited punishment … It is not the weakest part of the movie, but this
testifies of a real freedom of speech for the whole piece.
#### Model
We use the same BERT model setup and training data as Kaushik et al. (2020)
which allows us to fairly compare the crowd and expert revisions.
#### Contrast Set Statistics
We use 100 reviews from the validation set and 488 from the test set of
Kaushik et al. (2020). Three annotators used approximately 70 hours to
construct and validate the dataset.
### B.3 MATRES
MATRES has three sections: TimeBank, AQUAINT, and Platinum, with the Platinum
section serving as the test set. We use 239 instances (30% of the dataset)
from Platinum.
#### Perturbation Strategies
The annotators perturb one or more of the following aspects: appearance order
in text, tense of verb(s), and temporal conjunction words. Below are example
revisions:
* Colonel Collins followed a normal progression once she was picked as a NASA astronaut. (original sentence: “followed” is after “picked”)
* Once Colonel Collins was picked as a NASA astronaut, she followed a normal progression. (appearance order change in text; “followed” is still after “picked”)
* Colonel Collins followed a normal progression before she was picked as a NASA astronaut. (changed the temporal conjunction word from “once” to “before” and “followed” is now before “picked”)
* Volleyball is a popular sport in the area, and more than 200 people were watching the game, the chief said. (original sentence: “watching” is before “said”)
* Volleyball is a popular sport in the area, and more than 200 people would be watching the game, the chief said. (changed the verb tense: “watching” is after “said”)
#### Model
We use CogCompTime 2.0 Ning et al. (2019).
#### Contrast Set Statistics
Two annotators created 401 perturbed instances that form 239 contrast sets.
The annotators used approximately 25 hours to construct and validate the
dataset.
#### Analysis
We recorded the perturbation strategy used for each example. 49% of the
perturbations changed the “appearance order”, 31% changed the “tense”, 24%
changed the “temporal conjunction words”, and 10% had other changes. We double
count the examples that have multiple perturbations. The model accuracy on the
different perturbations is reported in the table below.
Perturbation Type | Accuracy
---|---
Overall | 63.3%
Appearance Order | 66.5%
Tense Change | 61.8%
Temporal Conjunction | 60.0%
Other Changes | 61.8%
Table 3: Accuracy breakdown of the perturbation types for MATRES.
### B.4 Syntactic Parsing
#### Perturbation Strategies
The annotators perturbed noun phrases adjacent to prepositions (leaving the
preposition unchanged). For example, _The clerics demanded talks with local US
commanders_ $\to$ _The clerics demanded talks with great urgency_. The
different semantic content of the noun phrase changes the syntactic path from
the preposition _with_ to the parent word of the parent of the preposition; in
the initial example, the parent is _commanders_ and the grandparent is the noun
_talks_; in the perturbed version, the grandparent is now the verb _demanded_.
#### Model
We use a biaffine parser following the architecture of Dozat and Manning
(2017) with ELMo embeddings Peters et al. (2018), trained on the combination
of the training sets for the treebanks that we drew test examples from (GUM,
EWT, LinES, and ParTUT).
#### Contrast Set Statistics
One annotator created 150 perturbed examples that form 150 contrast sets. 75
of the contrast sets consist of a sentence in which a prepositional phrase
attaches to a verb, paired with an altered version where it attaches to a noun
instead. The other 75 sentences were altered in the opposite direction.
#### Analysis
The process of creating a perturbation for a syntactic parse is highly time-
consuming. Only a small fraction of sentences in the test set could be altered
in the desired way, even after filtering to find relevant syntactic structures
and eliminate unambiguous prepositions (e.g., _of_ always attaches to a noun
modifying a noun, making it impossible to change the attachment without
changing the preposition). Further, once a potentially ambiguous sentence was
identified, annotators had to come up with an alternative noun phrase that
sounded natural and did not require extensive changes to the structure of the
sentence. They then had to re-annotate the relevant section of the sentence,
which could include new POS tags, new UD word features, and new arc labels. On
average, each perturbation took 10–15 minutes. Expanding the scope of this
augmented dataset to cover other syntactic features, such as adjective scope,
apposition versus conjunction, and other forms of clausal attachment, would
allow for a significantly larger dataset but would require a large amount of
annotator time. The very poor contrast consistency on our dataset (17.3%)
suggests that this would be a worthwhile investment to create a more rigorous
parsing evaluation.
Notably, the model’s accuracy for predicting the target prepositions’
grandparents in the original, unaltered tree (64.7%) is significantly lower
than the model’s accuracy for grandparents of all words (78.41%) and for
grandparents of all prepositions (78.95%) in the original data. This indicates
that these structures are already difficult for the parser due to structural
ambiguity.
### B.5 PERSPECTRUM
#### Perturbation Strategies
The annotators perturbed examples in multiple steps. First, they created non-
trivial negations of the claim, e.g., _Should we live in space?_ $\to$ _Should
we drop the ambition to live in space?_. Next, they labeled the perturbed
claim with respect to each perspective. For example:
Claim: Should we live in space?
Perspective: Humanity in many ways defines itself through exploration and
space is the next logical frontier.
Label: True
Claim: Should we drop the ambition to live in space?
Perspective: Humanity in many ways defines itself through exploration and
space is the next logical frontier.
Label: False
#### Model
We use a RoBERTa model Liu et al. (2019) finetuned on PERSPECTRUM following
the training process from Chen et al. (2019).
#### Contrast Set Statistics
The annotators created 217 perturbed instances that form 217 contrast sets.
Each example took approximately three minutes to annotate: one minute for an
annotator to negate each claim and one minute each for two separate annotators
to adjudicate stance labels for each contrastive claim-perspective pair.
### B.6 DROP
#### Perturbation Strategies
See Section 3 in the main text for details about our perturbation strategies.
#### Model
We use MTMSN Hu et al. (2019), a DROP question answering model that is built
on top of BERT Large Devlin et al. (2019).
#### Contrast Set Statistics
The augmented test set contains 947 examples that form 623 contrast
sets. Three annotators used approximately 16 hours to
construct and validate the dataset.
#### Analysis
We bucket 100 of the perturbed instances into the three categories of
perturbations described in Section 3. For each subset, we evaluate MTMSN’s
performance and show the results in Table 4 below.
Perturbation Type | Frequency | $F_{1}$
---|---|---
Adding Compositional Steps | 38% | 67.5
Inversion of Semantics | 37% | 53.2
Re-ordering Events | 25% | 47.3
Table 4: MTMSN performance ($F_{1}$) broken down by perturbation type for DROP.
### B.7 Quoref
#### Perturbation Strategies
We use the following perturbation strategies for Quoref:
* •
Perturb questions whose answers are entities to instead make the answers a
property of those entities, e.g., _Who hides their identity …_ $\to$ _What is
the nationality of the person who hides their identity …_.
* •
Perturb questions to add compositionality, e.g., _What is the name of the
person …_ $\to$ _What is the name of the father of the person …_.
* •
Add sentences between referring expressions and antecedents to the context
paragraphs.
* •
Replace antecedents with less frequent named entities of the same type in the
context paragraphs.
#### Model
We use XLNet-QA, the best model from Dasigi et al. (2019), which is a span
extraction model built on top of XLNet Yang et al. (2019).
#### Contrast Set Statistics
Four annotators created 700 instances that form 415 contrast sets. The mean
contrast set size (including the original example) is $2.7(\pm 1.2)$. The
annotators used approximately 35 hours to construct and validate the dataset.
### B.8 ROPES
#### Perturbation Strategies
We use the following perturbation strategies for ROPES:
* •
Perturbing the background to have the opposite causes and effects or
qualitative relation, e.g., _Gibberellins are hormones that cause the plant to
grow_ $\to$ _Gibberellins are hormones that cause the plant to stop growing._
* •
Perturbing the situation to associate different entities with different
instantiations of a certain cause or effect. For example, _Grey tree frogs
live in wooded areas and are difficult to see when on tree trunks. Green tree
frogs live in wetlands with lots of grass and tall plants._ $\to$ _Grey tree
frogs live in wetlands areas and are difficult to see when on stormy days in
the plants. Green tree frogs live in wetlands with lots of leaves to hide on._
* •
Perturbing the situation to have more complex reasoning steps, e.g., _Sue put
2 cubes of sugar into her tea. Ann decided to use granulated sugar and added
the same amount of sugar to her tea._ $\to$ _Sue has 2 cubes of sugar but Ann
has the same amount of granulated sugar. They exchange the sugar to each other
and put the sugar to their ice tea._
* •
Perturbing the questions to have presuppositions that match the situation and
background.
#### Model
We use the best model from Lin et al. (2019), which is a span extraction model
built on top of a RoBERTa model Liu et al. (2019) that is first finetuned on
RACE Lai et al. (2017).
#### Contrast Set Statistics
Two annotators created 974 perturbed instances which form 974 contrast sets.
The annotators used approximately 65 hours to construct and validate the
dataset.
### B.9 BoolQ
#### Perturbation Strategies
We use a diverse set of perturbations, including adjective, entity, and event
changes. We show three representative examples below:
Paragraph: The Fate of the Furious premiered in Berlin on April 4, 2017, and
was theatrically released in the United States on April 14, 2017, playing in
3D, IMAX 3D and 4DX internationally…A spinoff film starring Johnson and
Statham’s characters is scheduled for release in August 2019, while the ninth
and tenth films are scheduled for releases on the years 2020 and 2021.
Question: Is “Fate and the Furious” the last movie?
Answer: False
New Question: Is “Fate and the Furious” the first of multiple movies?
New Answer: True
Perturbation Strategy: Adjective Change
Paragraph: Sanders played football primarily at cornerback, but also as a kick
returner, punt returner, and occasionally wide receiver…An outfielder in
baseball, he played professionally for the New York Yankees, the Atlanta
Braves, the Cincinnati Reds and the San Francisco Giants, and participated in
the 1992 World Series with the Braves.
Question: Did Deion Sanders ever win a world series?
Answer: False
New Question: Did Deion Sanders ever play in a world series?
New Answer: True
Perturbation strategy: Event Change
Paragraph: The White House is the official residence and workplace of the
President of the United States. It is located at 1600 Pennsylvania Avenue NW
in Washington, D.C. and has been the residence of every U.S. President since
John Adams in 1800. The term is often used as a metonym for the president and
his advisers.
Question: Does the president live in the White House?
Answer: True
New Question: Did George Washington live in the White House?
New Answer: False
Perturbation Strategy: Entity Change
#### Model
We use RoBERTa base and follow the standard finetuning process from Liu et al.
(2019).
#### Contrast Set Statistics
The annotators created 339 perturbed questions that form 70 contrast
sets. One annotator created the dataset and a separate annotator verified it.
This entire process took approximately 16 hours.
### B.10 MC-TACO
#### Perturbation Strategies
The main goal when perturbing MC-TACO questions is to retain a similar
question that requires the same temporal knowledge to answer, while adding
constraints or slightly different context that change the answers. We also
modified the answers accordingly to make sure each question has a combination
of plausible and implausible candidates.
#### Model
We use the best baseline model from the original paper Zhou et al. (2019)
which is based on $\textsc{RoBERTa}_{base}$ Liu et al. (2019).
#### Contrast Set Statistics
The annotators created 646 perturbed question-answer pairs that form 646
contrast sets. Two annotators used approximately 12 hours to construct and
validate the dataset.
# Spectral analysis of the AMXP IGR J17591–2342 during its 2018 outburst
A. Manca,1 A. F. Gambino,2 A. Sanna,1,3 G. K. Jaisawal,5 T. Di Salvo,2,3,4 R.
Iaria,2 S. M. Mazzola,1 A. Marino,8,9 A. Anitra,2 E. Bozzo,7 A. Riggio,1,4 and
L. Burderi,1,3,4
1Dipartimento di Fisica, Università degli Studi di Cagliari, SP Monserrato-
Sestu, KM 0.7, Monserrato, 09042 Italy
2Università degli Studi di Palermo, Dipartimento di Fisica e Chimica - Emilio
Segrè, via Archirafi 36 - 90123 Palermo, Italy
3INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato, CA,
Italy
4INAF - Osservatorio Astronomico di Cagliari, via della Scienza 5, 09047
Selargius (CA), Italy
5National Space Institute, Technical University of Denmark, Elektrovej
327-328, 2800 Lyngby, Denmark
6Istituto Nazionale di Astrofisica, IASF Palermo, Via U. La Malfa 153, I-90146
Palermo, Italy
7ISDC, Department of Astronomy, University of Geneva, Chemin d’Écogia 16, 1290
Versoix, Switzerland
8Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans
s/n, E-08193 Barcelona, Spain
9Institut d’Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain
E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The Accreting Millisecond X-ray Pulsar IGR J17591–2342 is an LMXB system that
went into outburst in August 2018 and was monitored by the NICER observatory
and, in part, by other facilities. We aim to study how the spectral emission
of this source evolved during the outburst by exploiting the whole X-ray data
repository of simultaneous observations. The continuum emission of the
combined broad-band spectra is on average well described by an absorbed
Comptonisation component, in which black-body-distributed photons peaking at
(0.8$\pm$0.5) keV are scattered by a moderately optically thick corona
($\tau$=2.3$\pm$0.5) with a temperature of (34$\pm$9) keV. A black-body
component with a temperature of (0.8$\pm$0.2) keV and a radial size of
(3.3$\pm$1.5) km is required by some of the spectra and suggests that part of
the central emission, possibly a fraction of the neutron star surface, is not
efficiently scattered by the corona. The continuum at low energies is
characterised by significant residuals suggesting the presence of an O viii
absorption edge and of Ne ix emission lines. Moreover, broad Fe i and Fe xxv
K$\alpha$ emission lines are detected at different times of the outburst,
suggesting the presence of reflection in the system.
###### keywords:
X-rays: binaries – stars: neutron – stars: individual: IGR J17591–2342 – line:
formation – line: profiles
## 1 Introduction
Accreting Millisecond X-ray Pulsars (AMXPs) are Low Mass X-ray Binary systems
(LMXBs) hosting neutron stars (NS) that show coherent pulsations with periods
lower than 10 ms. Neutron stars in these systems are characterised by low
magnetic fields, generally between $10^{8}$–$10^{9}$ G (see, e.g., Di Salvo &
Sanna, 2020). Their relatively small spin periods are now established to be a
direct consequence of the mass transfer occurring via Roche lobe overflow from
a low mass (< 1 M⊙) companion star onto a slowly rotating NS. This property
makes AMXPs the progenitors of the rotation-powered millisecond pulsars
emitting over a large fraction of the electromagnetic spectrum, i.e. from the
radio to the gamma-ray band (Alpar et al., 1982). AMXPs usually show sporadic
outbursts during which the X-ray luminosity can attain values between
$10^{36}$ and $10^{37}$ erg/s. So far, 24 sources are included in this
subclass of objects (see Bult et al., 2022; Ng et al., 2021). Even though the
spectral evolution of these sources has rarely been monitored in detail during
a whole outburst, AMXPs are generally observed in hard spectral states with no
hard-to-soft transitions. For this reason AMXPs are usually referred to as
hard X-ray transients (Di Salvo & Sanna, 2020).
The AMXP system IGR J17591–2342 was discovered in outburst by the
INTernational Gamma-Ray Astrophysics Laboratory (INTEGRAL) in August 2018
(Ducci et al., 2018). Nowak et al. (2019) inferred the celestial coordinates
of the source from a pointed Chandra/HETG observation, i.e. 17h 59m 02.83s,
-23∘ 43’ 10.2" (J2000), with an error box of 0.6" representing the 90%
confidence level of the Chandra positional accuracy. Observations performed by
the Australia Telescope Compact Array (ATCA) detected the radio counterpart of
IGR J17591–2342, improving the previous coordinates to 17h 59m 02.86s $\pm$
0.04, -23∘ 43’ 08.3" $\pm$ 0.1" (J2000) (Russell et al., 2018).
Using NICER and NuSTAR observations, the timing analysis performed by Sanna et
al. (2018) allowed them to detect coherent pulsations at a period of 1.9 ms
and to constrain an orbital period of 8.8 hr from the Doppler modulation.
Moreover, the X-ray data show a peculiar spin-down behaviour that is
compatible with a magnetically threaded disc (Sanna et al., 2020). The results
reported by Sanna et al. (2018) led to a scenario in which the companion star
is a late spectral type star with an age between 8–12 Gyr and a mass between
0.85–0.92 M⊙, assuming a mass of 1.4 M⊙ for the NS. This scenario is
consistent with an inclination angle between 28∘–30∘.
Broad-band spectral analysis was previously performed on this source, but only
on average spectra obtained from pointed observations taken at different times
of the outburst. Sanna et al. (2018) analysed a combined averaged spectrum
consisting of Swift, INTEGRAL/ISGRI and NuSTAR FPMA and FPMB data. They
modelled the spectrum using an absorbed soft black-body plus a Comptonisation
component. Their work reports a hydrogen column density $N_{H}$ of
(3.6$\pm$1.1)$\times 10^{22}$ cm-2, and a Comptonisation component
characterised by a photon index $\Gamma$ of about 1.8, a corona temperature
$kT_{e}$ of 22${}^{+4}_{-3}$ keV, and a seed photon temperature (assuming a
black-body spectrum) of $kT_{seed}=0.79\pm 0.09$ keV. The soft black-body
direct emission is characterised by a temperature that is compatible with that
of the seed photons and in line with the emission from a region of a few
kilometres. These authors also detected a weak Gaussian line centred at an
energy compatible with the iron K$\alpha$ complex, which they interpreted as a
possible signature of disc reflection.
Nowak et al. (2019) modelled a 1–9 keV Chandra-NICER spectrum of the source,
taken at about 58353 MJD, using a model consisting of an absorbed black-body
and a Comptonisation component. They inferred a value of $N_{H}$ of
(2.9$\pm$0.5)$\times 10^{22}$ cm-2 and a black-body temperature of 0.06 keV,
by fixing the $kT_{e}$ value to that observed by Sanna et al. (2018). In
addition, based on a Si xiii absorption line detected in the Chandra spectrum,
they proposed the presence of an outflow with a velocity of about 2800 km s-1
in the system. They also found evidence of possible Ca lines in the HETGS
spectra and hypothesised that the NS could have formed via accretion-induced
collapse of a white dwarf in a rare, calcium-rich Type Ib supernova explosion.
Kuiper et al. (2020) studied the broad-band 0.3–150 keV averaged spectrum of
the source using data from the NuSTAR FPMA and FPMB modules, from the
XMM-Newton RGS, Epic-pn and Epic-MOS2 instruments, and from INTEGRAL/ISGRI.
They modelled the spectrum with an absorbed Comptonisation component, finding
a value of $N_{H}$ of (2.09$\pm$0.05)$\times 10^{22}$ cm-2, a seed photon
temperature of about 0.64 keV, a coronal temperature $kT_{e}$ of
(38.8$\pm$1.2) keV, and a coronal optical depth of $\tau=1.59\pm 0.04$. They
excluded the presence of a local absorber in the system, since their estimate
of $N_{H}$ is in line with the total Galactic absorption (i.e. 2.2$\times
10^{22}$ cm-2) expected from the optical reddening maps (Russell et al.,
2018). They also found evidence of an emission line in the iron K-$\alpha$
region. However, they considered the detection of this line spurious and
related to a blending of lines from different Fe ionisation stages, or
alternatively to uncertainties in the XMM Epic-pn response (Kuiper et al.,
2020). In the same work, these authors estimated an upper limit on the
distance to the source of $d=(7.6\pm 0.7)$ kpc from the analysis of a burst
showing hints of Photospheric Radius Expansion (PRE) in one INTEGRAL/JEM-X
observation.
In this work we perform a spectral study of IGR J17591–2342 based on the
entire NICER data set and on a large sample of the available observations in
the X-ray archive, with the aim of studying its spectral evolution in detail
during the whole outburst. The paper is structured as follows: in Section 2 we
describe the data selection and reduction, in Section 3 we report the data
analysis, in Section 4 we discuss the obtained results, and in Section 5 we
summarise these results and draw some conclusions.
## 2 Observations and data reduction
Figure 1: NICER light curve of IGR J17591–2342 during the 2018 outburst.
Superimposed we show the mid-observation times at which other observatories
pointed at the source, and the shaded areas represent the time windows of the
INTEGRAL observations.

Figure 2: The 3–6 keV (upper panel) and 6–10 keV (middle panel) NICER light
curves of IGR J17591–2342 during the 2018 outburst. In the bottom panel we
report the hardness ratio evaluated between these two energy bands.
A complete follow-up of the outburst of IGR J17591–2342 was performed by the
X-ray Timing Instrument (XTI, Gendreau et al. 2012; Gendreau et al. 2016) on
board the Neutron star Interior Composition Explorer (NICER) mission, hosted
on the International Space Station (ISS). This instrument collected data of
the source between 2018-08-14T23:59:42 (ObsID 1200310101) and
2018-10-17T05:32:22 (ObsID 1200310139). We reduced the NICER data using the
nicerl2 script
(https://heasarc.gsfc.nasa.gov/docs/nicer/analysis_threads/nicerl2/)
available under HEAsoft version 6.28. We processed the data using the latest
gain and calibration database files of version 20200722. We applied standard
filtering criteria based on the elevation angle from the Earth limb, the
pointing offset, the offset from the bright Earth, and the South Atlantic
Anomaly region. Following the above criteria, we selected the Good Time
Intervals using the nimaketime task. We extracted the source spectrum from the
filtered clean event file in the xselect environment, while we obtained the
background spectra corresponding to each observation using the nibackgen3C50
tool (https://heasarc.gsfc.nasa.gov/docs/nicer/toolsnicer_bkg_est_tools.html;
Remillard et al., 2021). For the spectral analysis, we considered the response
matrix and ancillary response file of version 20200722. We added a systematic
error of 1% to the obtained source spectra, as suggested by the NICER team
(see, e.g., Jaisawal et al., 2019).
From Figure 1 it is possible to follow the complete light curve of the source
during the outburst as observed by NICER in the 0.2–10 keV band. The first
observation of the source shows an average count-rate of about 17 c/s that
increased up to a first peak of 29 c/s at 58362 MJD. After this first maximum,
the count-rate decreased until it reached a value of about 17 c/s at 58367
MJD, when it started to increase again up to the absolute maximum of about 42
c/s reached at 58380 MJD. Later on, the source luminosity started to decrease,
down to a count-rate of 14 c/s reached on 58397 MJD. In addition, as
highlighted by the hardness ratio reported in Figure 2, evaluated between the
3–6 keV and 6–10 keV energy bands, the spectral state of IGR J17591–2342
remains quite constant, suggesting that no significant spectral changes
occurred during the outburst, in agreement with what is usually observed for
AMXPs (Di Salvo & Sanna, 2020).
The outburst of IGR J17591–2342 was also observed by INTEGRAL between
revolutions 1986 and 2009. In addition, further single pointed observations
with the Nuclear Spectroscopic Telescope Array (NuSTAR), Chandra, XMM-Newton
and ASTROSAT were performed during the first part of the outburst.
NuSTAR (Harrison et al., 2013) observed IGR J17591–2342 on 2018-08-13T22:36:09
(ObsID 90401331002) and on 2018-08-17T20:01:09 (ObsID 80301311002). The data
were reprocessed with the nupipeline routine, while the source and background
spectra were extracted with the nuproducts pipeline, using circular extraction
regions of 60" radius centred at the coordinates of the source and far away
from the source, respectively. The spectra were modelled in the 2–80 keV
energy band.
Chandra observed IGR J17591–2342 on 2018-08-23T17:40:05 for 20 ks using the
High Energy Transmission Grating spectrometer (HETG, Canizares et al., 2005).
The events were reprocessed with the chandra_repro pipeline of the official
Chandra analysis environment CIAO v. 4.13, with calibration files updated to
version 4.9.4. The first order High-
Energy Grating (HEG) and Medium-Energy Grating (MEG) spectra were combined
using the combine_spectra tool. The obtained spectra were grouped to have at
least 25 counts per energy bin and analysed in the 1.2–7.8 keV energy band.
The XMM-Newton observatory observed the source with both the Reflection
Grating Spectrometer (RGS, Den Herder et al., 2001) and the European Photon
Imaging Cameras (EPIC) on 2018-09-03T18:44:55, during a time window in which
the count-rate of the source momentarily decreased, as can be inferred from
Figure 1. The science data reduction was performed using the XMM-Newton
Science Analysis System (SAS) version 19.0.0. The (imaging) EPIC-pn instrument
(Strüder et al., 2001) operated in Timing Mode, while the MOS-1 and MOS-2
cameras (Turner et al., 2001) acquired data in Small Window and Timing
Uncompressed modes, respectively. The Epic-pn source spectra were extracted
from the RAWX coordinates between 31–44, while the background spectrum was
obtained from RAWX coordinates between 5 and 15. The MOS 1 data, as already
noticed by Kuiper et al. (2020), are strongly corrupted by pile-up effects and
for this reason they are not considered in our work. On the contrary, MOS 2
data are not affected by pile-up, since the instrument was operating in Timing
Uncompressed mode, which allows collecting data up to 35 mCrab without severe
pile-up distortions. The MOS2 source spectrum was extracted in the RAWX range
290–320, while the background spectrum was extracted from a box with a width
and a height of 8432.64 and 4285.44 in physical coordinates, respectively. The
analysed energy range is 2.2–10 keV. The RGS operated in the standard
Spectroscopy HighEventRate mode with Single Event Selection. No soft proton
background flares were detected during the observation. We then combined the
first order data of RGS1 and RGS2 using the rgscombine tool included in the
SAS package, grouping the obtained spectrum to have at least 25 counts per
energy bin. However, the counts of these data are considerably low below 1.2
keV, and for this reason we preferred not to consider them in our analysis,
since the energy range above 1 keV is well covered by the rest of the
considered space missions.
The ASTROSAT mission observed the source with the Large Area X-ray
Proportional Counter (LAXPC) instrument (Yadav et al., 2016; Antia et al.,
2017; Agrawal et al., 2017) on 2018-08-23T01:10:15 (ObsID 9000002320) and on
2018-08-27T00:00:00 (ObsID 9000002332), for net exposures of 30 ks and 37.7
ks, respectively. A thermonuclear type I X-ray burst occurred during ObsID
9000002320 and was removed before extracting the persistent spectrum of the
source. For both observations, LAXPC 30 was not working, while LAXPC 10 was
active on a low voltage gain setting (http://astrosat-ssc.iucaa.in/). Since
the source is faint, we extracted spectra from the top layer of each detector
to limit the background. However, the LAXPC 10 spectra appear to be affected
by the background and by the gain change, and for this reason they were not
included in the analysis. In addition, the LAXPC 20 spectrum extracted for
ObsID 9000002320 shows a significant mismatch with respect to the spectra of
the closest available observations from other missions, possibly introduced
by a bad calibration; for this reason, this observation was also excluded from
the analysis. A systematic error of 2% was added to the LAXPC 20 spectrum of
ObsID 9000002332 (see, e.g., Misra et al., 2017), which was also grouped to
have at least 25 counts per energy bin. The analysis was conducted in the 4–17
keV energy range.
INTEGRAL observed the outburst of the source during the period spanning from
58340 MJD to 58405 MJD, corresponding to satellite revolutions 1986 to 2016.
We considered data from the two JEM-X units (Lund et al., 2003) and from
IBIS/ISGRI (Ubertini et al., 2003; Laurent et al., 2003). We analysed the
ISGRI, JEM-X1 and JEM-X2 data with the Offline Scientific Analysis (OSA)
software version 11.0 distributed by the ISDC (Courvoisier et al., 2003).
Different source spectra were initially extracted for the two JEM-X units and
IBIS/ISGRI from each revolution. A grouping of 16 bins was used for the JEM-X
and IBIS/ISGRI spectral extraction, following the standard practice for
similar sources. We excluded science window (SCW) 50 in revolution 2001 due to
the presence of a type I X-ray burst (see also Kuiper et al., 2020). The JEM-X
spectra were analysed in the 3–20 keV energy range, while the IBIS/ISGRI
spectra were analysed in the 25–200 keV range.
Figure 3: Evolution of the main spectral parameters describing the continuum
emission of IGR J17591–2342 as a function of time. The associated errors are
reported at a statistical confidence level of 90%. In the upper panel the mean
count-rate collected during each NICER observation is reported. In each panel,
the star-shaped points represent fixed values of the parameters.

Figure 4: Evolution of the iron line spectral parameters and statistical
significance of the observed feature. In the plot of the line centroids
(second plot from the top) we also report, as a reference, the rest-frame
energies of the Fe i K$\alpha 1$, Fe xxv (resonance), and Fe xxvi Ly$\alpha 1$
transitions. In the bottom panel, we highlight in pink the detection area,
taking $\sigma$=2.5 as the lower limit for a weak detection. The dotted line
represents the 3$\sigma$ detection threshold.

Figure 5: Best fit model and associated residuals obtained for the broad-band
spectrum associated with the NICER ObsID 1200310106 (see Table 2 and Table 3).
In black the NICER spectrum; in red, green and blue the INTEGRAL JEM-X1,
JEM-X2 and ISGRI spectra of revolution 1992, respectively; and in cyan the
ASTROSAT LAXPC20 spectrum of ObsID 9000002332.
## 3 Spectral Analysis
The spectral analysis was entirely performed using the X-ray spectral fitting
package Xspec v. 12.11.1c (Arnaud, 1996). We chose to fit as many spectra as
possible in order to follow the evolution of the spectral parameters during
the outburst with the highest temporal resolution guaranteed by the available
data. For this reason, whenever possible we paired each NICER spectrum with
any quasi-simultaneous (i.e. taken within two days) data set from the other
observatories. As widely discussed in Sanna et al. (2018) and Kuiper et al.
(2020), after some preliminary tests on the available data, we noted that the
model which best describes the continuum is an absorbed Comptonisation model,
and we adopted it in the following analysis. Some of the performed fits,
however, revealed that in some cases a soft excess remained in the residuals;
this could be corrected with the addition of a black-body emission from the
source, as already reported in Sanna et al. (2018). We used the chemical
abundances of Wilms et al. (2000a)
and the cross sections reported in Verner et al. (1996). We modelled the
continuum spectral emission of IGR J17591–2342 adopting the Tuebingen-Boulder
ISM absorption model Tbabs, a black-body component bbodyrad, and the thermal
Comptonisation component nthcomp. The nthcomp model is described by the
asymptotic power-law photon index $\Gamma$, the electron temperature $kT_{e}$
of the hot electron corona, the seed photon temperature $kT_{seed}$, the
$inp\\_type$ parameter, which can assume values 0 or 1 for considering black-
body or disc-black-body distributions for the seed photons, respectively, and
the $redshift$ parameter. We assumed that the seed photons are distributed
accordingly to a black-body law by fixing the $inp\\_type$ parameter to 0.
Moreover, we adopted a value of redshift equal to zero for all the subsequent
analysis. The NICER spectra were fitted in the energy range 0.6–10 keV. On the
other hand, the NuSTAR FPMA and FPMB spectra of ObsID 80301311002 were fitted
in the range 2–60 keV, while those of ObsID 90401331002 in the range 2–80 keV.
The XMM-Newton spectra of ObsID 0795750101 were fitted in the range 3–10 keV
and 2.2–10 keV for the EPIC-pn and EPIC-MOS2 spectra, respectively. The
Chandra MEG and HEG spectra of ObsID 20173 were fitted in the range 1.2–6.8
keV and 1.4–7.8 keV, respectively, while the ASTROSAT/LAXPC 20 spectrum of
ObsID 9000002332 was fitted in the energy range 4–17 keV.
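As an illustration of the adopted setup, the following is a minimal PyXspec sketch of the Tbabs*(bbodyrad+nthcomp) model described above, assuming HEAsoft with PyXspec is installed; the spectrum file name is a placeholder, and the parameter indices (1–9) follow the component order Tbabs(nH), bbodyrad(kT, norm), nthcomp(Gamma, kT_e, kT_bb, inp_type, Redshift, norm).

```python
# Minimal PyXspec sketch of the continuum model adopted here:
# Tbabs*(bbodyrad+nthcomp). The spectrum file name is a placeholder.
from xspec import Spectrum, Model, Fit

s = Spectrum("nicer_grouped.pha")      # placeholder file
s.ignore("**-0.6,10.0-**")             # NICER band used here: 0.6-10 keV

m = Model("tbabs*(bbodyrad+nthcomp)")
m(1).values = 2.09                     # nH (1e22 cm^-2), Kuiper et al. (2020)
m(1).frozen = True
m(5).values = 38.8                     # nthcomp kT_e (keV), frozen when unconstrained
m(5).frozen = True
m(7).values = 0                        # inp_type = 0: black-body seed photons
m(8).values = 0.0                      # Redshift fixed to zero
m(8).frozen = True

Fit.query = "yes"                      # keep iterating without prompting
Fit.statMethod = "chi"
Fit.perform()
Fit.error("4 6")                       # 90% c.l. on Gamma and kT_seed
```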
We inspected with Xspec all the INTEGRAL data of the source, but for our
broad-band analysis we finally made use only of the spectra extracted during
revolutions 1992 to 2006, because they are reasonably close in time to the
available NICER data. During revolutions 1986 to 1989, the source was caught
by INTEGRAL during the earliest stages of the outburst evolution, and the
INTEGRAL data were characterised by a number of counts too low to perform any
meaningful spectral analysis. A similar conclusion applies to the data
collected during revolutions 2008–2016, corresponding to the rapid decay of
the source flux toward the end of the outburst. Depending on the source flux,
we limited our spectral analysis of the JEM-X data roughly to the interval
3–20 keV. During revolutions 1989, 1994, 1995, 2002, 2004 and 2006, the JEM-X
data did not have enough counts to be used in the spectral analysis, and thus
we do not mention these data any longer in the following sections. IBIS/ISGRI
data were used, following the OSA 11.0 recommendations, from an energy of 25
keV up to roughly 200 keV, above which the source fell below the detection
threshold of the instrument. The higher energy bound of the IBIS/ISGRI
spectrum varies across the different revolutions considered, depending on the
flux of the source, but remains in all cases well above 100 keV.
The NICER spectra, in combination with those of the other missions, generally
ensured a good coverage over a large energy range. However, some of the NICER
spectra could not be combined with other spectra, making it hard to constrain
the high-energy cut-off $kT_{e}$ of the Comptonisation component. In all these
cases, this parameter was fixed to the value obtained by Kuiper et al. (2020)
(i.e. $kT_{e}=38.8$ keV). In Figure 3 we show the evolution of the spectral
parameters obtained from the best fit model of each observation as a function
of time. In all the plots, the star-shaped points indicate parameters that
were frozen in the fit. This plot shows how the $kT_{e}$ parameter has been
constrained for many observations, with an average value during the whole
outburst of about 34 keV, which is in line with that inferred by Kuiper et al.
(2020). Some of the NICER spectra showed the presence of residuals in
absorption at about 0.87 keV, consistent with the presence of an O viii
absorption edge. To fit the low energy residuals, as a first step we tried to
replace the Tbabs ISM absorption component with the Tbfeo and Tbvarabs
components, which take into account variable abundances of the chemical
species in the ISM. However, even though both these models return perfectly
compatible values of $N_{H}$, they are unable to fit the observed residuals.
For this reason we described the observed feature manually using the edge
component in Xspec. The energy of the edge was kept frozen for all the fits.
Other localised residuals characterise some of the NICER spectra at about
0.922 keV, in line with the presence of Ne ix ions. This feature was modelled
with a Gaussian component in which the energy centroid was fixed to the
rest-frame energy of the aforementioned ion. Moreover, the spectral resolution
of the NICER/XTI instrument around 1 keV was not sufficient to constrain the
width of the modelled line. For this reason, we fixed the width parameter
$sigma$ to 0.085 keV (i.e. the NICER/XTI spectral resolution at 1 keV), while
the normalisation of the Gaussian component was left free to vary. At first,
the neutral column density $N_{H}$ in the Tbabs component was left free to
vary during the whole duration of the outburst, but we noticed a correlation
with the $kT_{seed}$ parameter of the Comptonisation component, with a
consequent scattering of the $N_{H}$ values. Therefore, we decided to keep
this parameter frozen at the value of (2.09$\pm$0.05)$\times 10^{22}$ cm-2 for
the rest of the analysis, in accordance with Kuiper et al. (2020).
The Comptonisation component is on average characterised by stable values of
$\Gamma$ and $kT_{e}$, equal to 1.9 (with a standard deviation of about 0.2)
and 34 keV (with a standard deviation of about 9 keV), respectively. On the
other hand, during the outburst the temperature of the seed photons, assumed
to be injected with a black-body-like spectrum, varies along with the
evolution of the outburst. The mean value of this parameter during the
outburst is 0.8 keV, with an associated standard deviation of 0.5 keV.
The bbodyrad component appears to be required by the fit only in some of the
observations. We plot the resulting black-body temperature $kT$ in the fifth
panel from the top of Figure 3. This parameter shows an almost constant trend
during the outburst, with a mean value of 0.8 keV and a standard deviation of
about 0.2 keV. The associated errors at the 90% confidence level strongly
suffer from the statistics of the fitted spectra. However, for all the
reported black-body components the significance of the detection is higher
than 3$\sigma$ (the magenta points have a significance of more than 5$\sigma$,
whilst the dark violet ones have a significance between 3$\sigma$ and
4$\sigma$).
Some of the observed combined spectra also showed significant residuals in the
energy range at which the Fe xxv line is expected, i.e. at about 6.5 keV. When
such residuals were detected, we modelled them with a Gaussian component. For
each of these observations we evaluated the significance of this local feature
by measuring how much the normalisation of this component deviates from the
continuum in units of $\sigma$. In the lower panel of Figure 4, we report the
significance of the observed iron line. In the upper and middle panels of the
same figure, we show how the parameters of this feature evolve in time during
the outburst. In particular, we can observe that, with the exception of the
second point, for which we have an upper limit on the energy centroid, the
rest of the lines have been well constrained, even though it was not possible
to obtain lower limits on the associated widths. These observations suggest
that the line is marginally detected and in agreement with a Fe i K$\alpha 1$
line in the first phase of the outburst. After this phase, the line starts to
be consistent with the presence of more ionised species such as Fe xxv. Since
this line tends to be generally broad, when its width could be constrained we
also tried to fit it using the relativistic model diskline (Fabian et al.,
1989) or the more recent shaddisc (La Placa et al., 2020). These models,
however, with the exception of the centroid energy, returned totally
unconstrained and generally not physically reasonable spectral parameters, due
to the low number of counts in the line profile.
As an example, we show in Figure 5 the best fit model obtained for the
broad-band spectrum associated with the NICER ObsID 1200310106.
## 4 Discussion
### 4.1 Spectral evolution
We have reported the spectral analysis of the whole set of NICER observations
of the 2018 outburst of IGR J17591–2342, integrating these observations with
all the available quasi-simultaneous observations stored in the X-ray data
archive. The NICER light curve during the outburst is characterised by a
double peak. The spectral model adopted to fit the spectra of the source
suggests that during the first peak, occurring at 58362 MJD, the source showed
an unabsorbed bolometric flux of (4.6$\pm$0.2)$\times 10^{-10}$ erg cm-2 s-1
in the range 0.1–100 keV, which increases up to 7.2${}^{+0.7}_{-0.5}\times
10^{-10}$ erg cm-2 s-1 in the same energy band at 58380 MJD, when the second
peak of the outburst occurred. The flux errors are reported at the 3$\sigma$
c.l., as derived from the cflux convolution model.
Comptonisation is the physical process providing the major contribution in
terms of flux: on average, this component contributes 95% of the total
unabsorbed flux of IGR J17591–2342. According to the obtained results, the
Comptonisation component appears to be characterised by an electron corona
with a temperature $kT_{e}$ of about 34 keV, which is in line with the value
of 38.8 keV reported by Kuiper et al. (2020). Unfortunately, due to the lack
of coverage at the higher energies in some phases of the outburst, we were
unable to constrain this parameter for all the observations performed by
NICER. However, we constrained it in 29% of the NICER observations, inferring
that at the beginning of the outburst the cloud was characterised by a
slightly lower temperature of about 24 keV.
The photons that are Comptonised in this corona have mainly a temperature of
about 0.8 keV, with the tendency to slightly increase during the peaks of the
outburst, always, however, at the limit of statistical compatibility at the
90% confidence level. This could be due to episodes of increased heating of
the neutron star surface/boundary layer, plausibly caused by occasional rises
in the mass-accretion rate. The asymptotic power law index $\Gamma$, on the
contrary, remains almost stable between 1.7 and 2.1.
The black-body component has been tested for each spectrum, and only 26% of
the best fit models require this component at a confidence level greater than
3$\sigma$. The equivalent radius of emission of the black-body component has
been obtained from the normalisation as $R_{bb}=\sqrt{N_{bb}}\,d_{10}$ km,
where $N_{bb}$ is the normalisation of the bbodyrad component and $d_{10}$ is
the distance to the source in units of 10 kpc. Assuming the distance to the
source obtained in this work (i.e., (7.2$\pm$0.8) kpc), we obtained the radii
reported in Figure 6.
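The conversion from the bbodyrad normalisation to a physical radius is straightforward; a minimal sketch with an illustrative normalisation value follows.

```python
# Minimal sketch: black-body emission radius from the bbodyrad normalisation,
# N_bb = (R_km / d_10)^2, hence R_km = sqrt(N_bb) * d_10. The normalisation
# value below is illustrative.
from math import sqrt

def bb_radius_km(norm_bb, distance_kpc):
    d10 = distance_kpc / 10.0  # distance in units of 10 kpc
    return sqrt(norm_bb) * d10

print(f"{bb_radius_km(21.0, 7.2):.1f} km")  # ~3.3 km for N_bb = 21 at 7.2 kpc
```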
Figure 6: Trend of the equivalent radius of emission of the black-body
component (upper panel) and of the Comptonisation seed photons (middle panel)
as a function of time. In the lower panel, the trend followed by the optical
depth of the Comptonising electron cloud is also reported. The arrows indicate
upper limits on the values of the parameters.
From the observed trend, it is possible to notice how the radius of the
emitting black-body stays mainly at a value of about $\overline{R_{bb}}$ =
(3.3$\pm$1.5) km, where the associated error coincides with the standard
deviation of the distribution of measurements. This component, peaking mainly
at $\overline{kT}$ = (0.8$\pm$0.2) keV, could arise from direct emission by a
fraction of the NS surface. It is interesting to notice that the size of this
region varies in accordance with the variation of the flux extrapolated in the
0.1–100 keV band during the outburst: for lower values of the flux it appears
smaller than when the flux is higher. One possible explanation is that when
the mass accretion rate increases, the region responsible for the emission
becomes larger, reaching a maximum in proximity of the two peaks of the
outburst, where the values are consistent with a radius of about 5 km, i.e.
half of the NS size, assuming a NS radius of 10 km. We then tested whether the
direct observation of the black-body emission from the NS might be related to
a low optical depth of the hot corona, using the relation of Zdziarski et al.
(1996):
$\Gamma=\left[\dfrac{9}{4}+\dfrac{1}{\tau\left(1+\dfrac{\tau}{3}\right)\left(\dfrac{kT_{e}}{m_{e}c^{2}}\right)}\right]^{1/2}-\dfrac{1}{2},$
(1)
with the spectral parameters obtained from the best fit model of each
observation. The evolution of this parameter as a function of the observing
time is reported in the lower panel of Figure 6. As shown by this trend, the
value of the optical depth remains almost stable during the outburst at a
value of about $\tau\sim$2.3, with a standard deviation of 0.5, at least for
the parameters that could be constrained. For this reason, it is possible that
the corona Comptonises the majority of the photons emitted by the NS surface,
leaving only a small fraction ($\sim$ 10%) of them not significantly
Comptonised. The latter could contribute to the observed direct black-body
component, peaking approximately at the same temperature as the seed photons,
as evidenced by the obtained results.
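In practice, Eq. (1) can be inverted analytically for $\tau$, since it reduces to a quadratic once $\Gamma$ and $kT_{e}$ are fixed. A minimal sketch, using representative best-fit values from this work:

```python
# Minimal sketch: optical depth tau from Eq. (1) of Zdziarski et al. (1996).
# With theta = kT_e / (m_e c^2) and A = (Gamma + 1/2)^2 - 9/4, Eq. (1) gives
# tau^2/3 + tau - 1/(A*theta) = 0, whose positive root is taken below.
from math import sqrt

M_E_C2_KEV = 511.0  # electron rest energy in keV

def optical_depth(gamma, kte_kev):
    theta = kte_kev / M_E_C2_KEV
    a = (gamma + 0.5) ** 2 - 2.25
    c = 1.0 / (a * theta)
    return 1.5 * (sqrt(1.0 + 4.0 * c / 3.0) - 1.0)

print(f"{optical_depth(1.9, 34.0):.2f}")  # ~2.4, consistent with tau ~ 2.3
```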
A direct check of this scenario could be provided by estimating the equivalent
radius of the region emitting the seed photons for the Comptonisation, which
we expect to be roughly similar to the radius obtained for the direct
black-body emission region. An estimate of this physical parameter can be
obtained using the relation of In ’t Zand et al. (1999), assuming a spherical
geometry of the corona,
$R_{0}=3\times 10^{4}\;d\left(\dfrac{f_{bol}}{1+y}\right)^{1/2}\left(kT_{seed}\right)^{-2},$
(2)
where $d$ is the distance to the source in kpc as inferred in this work,
$f_{bol}$ is the unabsorbed bolometric flux of the Comptonisation component in
erg cm-2 s-1, $kT_{seed}$ is the temperature of the seed photons in keV, and
$y=4kT_{e}\max[\tau,\tau^{2}]/(m_{e}c^{2})$ is the Compton parameter, in which
$kT_{e}$ is the electron temperature in keV. The radius $R_{0}$ obtained for
each observation is plotted as a function of the observing time in the middle
panel of Figure 6. These values, at least in the cases in which the statistics
of the data allowed us to constrain them, are scattered around a mean value of
$\overline{R_{0}}$ = (5.0$\pm$2.8) km, which is in agreement with the value
obtained for $\overline{R_{bb}}$.
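Eq. (2) is easy to evaluate numerically; the sketch below uses illustrative best-fit values from this work and reproduces a radius of about 5 km.

```python
# Minimal sketch of Eq. (2) (In 't Zand et al. 1999): equivalent radius (km)
# of the seed-photon emitting region, assuming a spherical corona.
from math import sqrt

M_E_C2_KEV = 511.0

def seed_radius_km(d_kpc, fbol_cgs, kt_seed_kev, kte_kev, tau):
    y = 4.0 * kte_kev * max(tau, tau**2) / M_E_C2_KEV  # Compton parameter
    return 3e4 * d_kpc * sqrt(fbol_cgs / (1.0 + y)) / kt_seed_kev**2

# Illustrative values: d = 7.2 kpc, f_bol ~ 5e-10 erg cm^-2 s^-1,
# kT_seed = 0.8 keV, kT_e = 34 keV, tau = 2.3
print(f"{seed_radius_km(7.2, 5e-10, 0.8, 34.0, 2.3):.1f} km")  # ~4.9 km
```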
### 4.2 Spectral lines
The spectral continuum of the source is characterised by the presence of
several local features. On the one hand, almost all of the analysed spectra
present an absorption edge at 0.871 keV, which suggests the presence of O viii
ions. Even though the energy of this feature needed to be fixed due to the
complexity of the residuals at the lowest energies in the NICER spectra, the
significance of this feature is always higher than 3$\sigma$ for all the
results reported in Table 2. The low energy residuals of some spectra are well
fitted by taking into account a Ne ix emission line (see Table 3). Given the
statistics of the available spectra and the spectral resolution of NICER/XTI,
which is not sufficient to constrain the line width, we fixed the parameter
$sigma$ of the Gaussian component to a value of 85 eV, the spectral resolution
of the instrument at 1 keV. Leaving this parameter free to vary during the fit
indeed results in a width of the order of the spectral resolution of NICER at
1 keV, or slightly higher, although unconstrained. This suggests two possible
scenarios: on the one hand, the lines could be produced approximately in the
same region of an accretion disc where they are relativistically broadened due
to the proximity to the NS. This scenario, however, is not supported by
evidence of any further black-body or multicolour-disc black-body component
needed to fit the spectrum and attributable to emission from the disc. On the
other hand, this line could be smeared only as a consequence of Compton
broadening, or as an effect of the insufficient spectral resolution of the XTI
instrument, unable to resolve a complex of lines at those energies. Testing
this latter scenario requires observations with high spectral resolution
instruments. Indeed, the low energy spectrum of the source is completely
dominated by the background flux below 1 keV in the case of the pointed
observations performed by the Chandra/HETG and the two MOS instruments on
board the XMM-Newton mission.
About 19% of the observations show significant evidence ($>3\sigma$) of Fe
emission lines. In Figure 4, we show the evolution of the parameters
describing the line profile, assumed to be Gaussian. The detection of this
line is significant only in the time window covering the first peak of the
outburst and part of the rise of the flux towards the main peak, i.e. during
the periods of higher statistics of the NICER spectra. Our analysis shows
that, in the reported cases, it is possible to constrain the centroid energy
of the lines, with the exception of the observations that occurred on 58357
MJD and on 58395 MJD, for which we find only upper limits of 6.3 and 6.7 keV,
respectively. The remaining sample of line profiles is consistent with the
rest-frame energy of the Fe xxv K$\alpha$ line, usually observed in bright
LMXB systems showing evidence of reflection. The observed width of these lines
is on average about 0.5 keV, with a standard deviation of about 0.5 keV. On
the basis of this result, the line profile tends to be broad. Kuiper et al.
(2020) detected a similar line profile in the average spectrum of the source.
The feature they observed is centred at about 6.7 keV and has a width of 0.69
keV, in line with our results. However, they considered this feature to be
characterised by a centroid energy too far off and a width too broad to be
physical. They proposed that this feature could be produced by a blend of Fe
lines from different ionisation stages, which cannot be distinguished
individually by the MOS due to its spectral resolution. Moreover, they
proposed that this detection could reflect uncertainties in the XMM-Newton
EPIC-pn response for observations taken in timing mode, also because there is
no detection in the Chandra/HETG data, which have a higher spectral
resolution. For these reasons they did not consider the detection as real. On
the contrary, we detected this line at different times during the outburst,
including on 58353 MJD, where we obtained a detection at about 3$\sigma$ from
a combined spectrum that also includes a Chandra/HETG spectrum. In accordance
with this evidence, and with the fact that other spectra during the outburst
show the same feature, we conclude that this feature is real.
The poor statistics offered by the available data on the iron line profile,
and the lack of detected lines in close observations, do not allow a deeper
investigation of the nature of this feature, which could be produced by
reflection. A simple modelling with a relativistic component such as diskline
or shaddisc returns unconstrained values for the physical parameters
describing the line profile. Further observations of the source in outburst,
performed with future space missions providing a higher effective area and
spectral resolution over a wider energy range, will be fundamental to
investigate the reflection component in IGR J17591–2342, for a deeper
comprehension of the ionisation state of the matter and of the geometry of the
system.
### 4.3 A new measurement of the distance
The analysis of PRE type I X-ray bursts is widely used in the literature to
infer the distance to the sources, this method often being the only possible
way to obtain such an estimate. However, this method suffers from systematic
uncertainties (see, e.g., Marino et al., 2019, and references therein). For
instance, the flux reached by PRE bursts of the same source generally scatters
around a mean value with variations of about 15% (see, e.g., Kuulkers et al.,
2003; Galloway et al., 2003, 2008). Moreover, the value used for the Eddington
luminosity of the source may not be accurate without details on, e.g., the
composition of the NS atmosphere and therefore on its opacity. The latest
value of the distance to the source is $d=(7.6\pm 0.7)$ kpc, obtained by
Kuiper et al. (2020) from the analysis of a type-I X-ray burst which was
suggested to reach the Eddington limit. However, without strong evidence of
PRE, the obtained distance could be an upper limit rather than an actual
estimate. Furthermore, the aforementioned systematic uncertainties that affect
the method motivate an alternative way to estimate the distance and possibly
confirm the measurement by Kuiper et al. (2020).
From the value of the hydrogen column density, we can derive an estimate of
the distance to the system to compare with the existing one by Kuiper et al.
(2020), by invoking the 3D extinction map of the radiation in the Ks band for
our Galaxy of Chen, B. Q. et al. (2013). The map provides a profile of the
radiation extinction in the direction of the Galactic bulge as a function of
the distance to the source. We used the profile at galactic coordinates
$l=0.00$, $b=1.00$, reported in Figure 7.
Figure 7: Expected profile of the extinction in the Ks band as a function of
the distance in the direction of IGR J17591–2342. The blue vertical lines
indicate the best fit value (navy blue line) of the distance to the source
inferred by Kuiper et al. (2020) and its relative error (lighter blue lines).
The green line represents the best fit parabolic function that fits the
profile in the range 5–10 kpc.
The value of NH is related to the visual extinction of the source radiation AV
through the relation of Güver & Özel (2009):
$N_{H}=(2.21\pm 0.09)\times 10^{21}A_{V}.$ (3)
The visual extinction is then related to the extinction of the radiation in
the Ks band (A${}_{K_{S}}$) through the relation of Nishiyama et al. (2008):
$A_{K_{S}}=(0.062\pm 0.005)\;A_{V}\;{\rm mag}.$ (4)
We fitted the profile of Figure 7 with a parabolic function in the range 5–10
kpc, i.e. in the region corresponding to the distance values inferred by
Kuiper et al. (2020), that is d = (7.6$\pm$0.7) kpc. Assuming the value of
$N_{H}$ found by Kuiper et al. (2020), the expected extinction is
A${}_{K_{s}}$=0.59$\pm$0.01 mag, corresponding to a distance of $d=(7.2\pm
0.8)$ kpc.
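The chain of Eqs. (3)–(4) is reproduced below as a minimal sketch; the adopted $N_{H}$ is that of Kuiper et al. (2020), and the final step (reading the distance off the fitted extinction profile) depends on the parabolic fit to the Chen, B. Q. et al. (2013) map.

```python
# Minimal sketch: from the hydrogen column density to the expected Ks-band
# extinction via Eqs. (3) and (4); the distance then follows from the fitted
# extinction-vs-distance profile of Chen, B. Q. et al. (2013).
N_H = 2.09e22                    # cm^-2, Kuiper et al. (2020)

A_V = N_H / 2.21e21              # Eq. (3), Guever & Oezel (2009)
A_Ks = 0.062 * A_V               # Eq. (4), Nishiyama et al. (2008)

print(f"A_V  = {A_V:.2f} mag")   # ~9.5 mag
print(f"A_Ks = {A_Ks:.2f} mag")  # ~0.59 mag, corresponding to d ~ 7.2 kpc
```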
The estimation of the distance through the hydrogen column density is strongly
affected by the uncertainties on this parameter. The accuracy in the
evaluation of the column density depends on the quality of the data and on the
energy range in which the fit is conducted: a small energy range and a low
resolution can lead to correlation effects with other parameters of the model.
The use of solar abundances as initial values for the parameters can also lead
to differences of around 5% in the estimate (see Wilms et al., 2000b).
Moreover, the accuracy of the extinction map itself depends on the quality of
the data used, on whether they are recent data of higher quality, and on the
analysed region of the sky, which can be affected by strong intrinsic
variations of the interstellar medium. Nonetheless, the two estimates of the
distance, by means of the PRE analysis of Kuiper et al. (2020) and through the
spectral analysis of this work, are compatible.
## 5 Conclusions
We analysed a large sample of the available observations of IGR J17591–2342 in
the X-ray archive with the aim of characterising the spectral emission of the
source and its evolution during the outburst. The source is well fitted by an
absorbed Comptonisation component that on average contributes 95% of the whole
budget of the unabsorbed emitted flux. No significant spectral changes were
found during the whole outburst. Our estimate of the distance, $d=(7.2\pm
0.8)$ kpc, is in line with the values reported in the literature.
The spectral continuum, especially in the time window in which the flux
reaches its highest values during the outburst, needs to be fitted with a
black-body component peaking at a temperature that is correlated with the
NICER count-rate. This component is characterised by a temperature of about
0.8 keV and appears to be emitted from a region with a radius of (3.3$\pm$1.5)
km, which could be compatible with a fraction of the NS surface, or possibly
with the boundary layer. No significant variations are observed in the
electron temperature of the Comptonising cloud, which mainly shows a
temperature of about 34 keV. The corona appears to be characterised by an
optical depth of about 2.3, which could in part explain the direct black-body
component, possibly arising from a fraction of photons emitted by a small
region of the NS surface that are not significantly scattered in the corona.
The spectral continuum is also characterised by a Ne ix emission line. This
line, however, seems to have a broad profile, compatible with or sometimes
broader than the spectral resolution of NICER at 1 keV. This could be produced
by relativistic effects, if this feature originates in the innermost parts of
the accretion disc (for which no evidence is detectable in the analysed data),
or could be an effect of the spectral resolution of NICER, unable to resolve a
complex of lines at those energies.
A broad iron emission line has been detected in the spectra of a number of
observations. The line energy appears to be correlated with the phase of the
outburst. In particular, in the lower flux regimes we observe a broad ($\sim
1$ keV) Fe i K$\alpha$ line, while in proximity of the peak of the outburst we
observe a Fe xxv K$\alpha$ line. Observations of future outbursts of IGR
J17591–2342 with instruments equipped with a larger effective area over a
wider energy range, as for example the enhanced X-ray Timing and Polarimetry
mission (eXTP, Zhang et al., 2019; In’t Zand et al., 2019), could provide
important constraints on the possible reflection component in this system, and
hence on the ionisation state of the matter and on the inclination angle of
the system, which cannot be inferred with the current statistics.
## Acknowledgements
The authors acknowledge financial contribution from the agreement ASI-INAF
n.2017-14-H.0 from INAF mainstream (PI: A. De Rosa), and from the HERMES
project financed by the Italian Space Agency (ASI) Agreement n. 2016/13 U.O
and from the ASI-INAF Accordo Attuativo HERMES Technologic Pathfinder n.
2018-10-H.1-2020. We also acknowledge support from the European Union Horizon
2020 Research and Innovation Framework Programme under grant agreement HERMES-
Scientific Pathfinder n. 821896. RI and TDS acknowledge the research grant
iPeska (PI: Andrea Possenti) funded under the INAF national call Prin-SKA/CTA
approved with the Presidential Decree 70/2016. RI acknowledges financial
contribution from the agreement ASI-INAF n.2017-14-H.0, from INAF mainstream
(PI: T. Belloni). A. Marino is supported by the H2020 ERC Consolidator Grant
"MAGNESIA" under grant agreement No. 817661 (PI: Rea) and National Spanish
grant PGC2018-095512-BI00. This work was also partially supported by the
program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M, and by the
PHAROS COST Action (No. CA16214).
## Data availability
The data utilized in this article are publicly available in the Heasarc Data
Archive at https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl. The
ASTROSAT data are publicly available in the ISRO Science Data Archive at
https://webapps.issdc.gov.in/astro_archive/archive/Search.jsp.
## References
* Agrawal et al. (2017) Agrawal P. C., et al., 2017, Journal of Astrophysics and Astronomy, 38, 30
* Alpar et al. (1982) Alpar M. A., Cheng A. F., Ruderman M. A., Shaham J., 1982, Nature, 300, 728
* Antia et al. (2017) Antia H. M., et al., 2017, ApJS, 231, 10
* Arnaud (1996) Arnaud K. A., 1996, in Jacoby G. H., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 101, Astronomical Data Analysis Software and Systems V. p. 17
* Bult et al. (2022) Bult P., et al., 2022, The discovery of the 528.6 Hz accreting millisecond X-ray pulsar MAXI J1816-195, doi:10.48550/ARXIV.2208.04721, https://arxiv.org/abs/2208.04721
* Canizares et al. (2005) Canizares C. R., et al., 2005, PASP, 117, 1144
* Chen, B. Q. et al. (2013) Chen, B. Q. Schultheis, M. Jiang, B. W. Gonzalez, O. A. Robin, A. C. Rejkuba, M. Minniti, D. 2013, A&A, 550, A42
* Courvoisier et al. (2003) Courvoisier T. J. L., et al., 2003, A&A, 411, L53
* Den Herder et al. (2001) Den Herder J. W., et al., 2001, A&A, 365, L7
* Di Salvo & Sanna (2020) Di Salvo T., Sanna A., 2020, arXiv e-prints, p. arXiv:2010.09005
* Ducci et al. (2018) Ducci L., Kuulkers E., Grinberg V., Paizis A., Sidoli L., Bozzo E., Ferrigno C., Savchenko V., 2018, The Astronomer’s Telegram, 11941, 1
* Fabian et al. (1989) Fabian A. C., Rees M. J., Stella L., White N. E., 1989, MNRAS, 238, 729
* Galloway et al. (2003) Galloway D. K., Psaltis D., Chakrabarty D., Muno M. P., 2003, ApJ, 590, 999
* Galloway et al. (2008) Galloway D. K., Muno M. P., Hartman J. M., Psaltis D., Chakrabarty D., 2008, ApJS, 179, 360
* Gendreau et al. (2012) Gendreau K. C., Arzoumanian Z., Okajima T., 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. p. 13, doi:10.1117/12.926396
* Gendreau et al. (2016) Gendreau K. C., et al., 2016, in Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray. p. 99051H, doi:10.1117/12.2231304
* Güver & Özel (2009) Güver T., Özel F., 2009, MNRAS, 400, 2050
* Harrison et al. (2013) Harrison F. A., et al., 2013, ApJ, 770, 103
* In ’t Zand et al. (1999) In ’t Zand J. J. M., et al., 1999, A&A, 345, 100
* In’t Zand et al. (2019) In’t Zand J. J. M., et al., 2019, Science China Physics, Mechanics, and Astronomy, 62, 29506
* Jaisawal et al. (2019) Jaisawal G. K., et al., 2019, ApJ, 885, 18
* Kuiper et al. (2020) Kuiper L., Tsygankov S. S., Falanga M., Mereminskiy I. A., Galloway D. K., Poutanen J., Li Z., 2020, A&A, 641, A37
* Kuulkers et al. (2003) Kuulkers E., den Hartog P. R., in’t Zand J. J. M., Verbunt F. W. M., Harris W. E., Cocchi M., 2003, A&A, 399, 663
* La Placa et al. (2020) La Placa R., Stella L., Papitto A., Bakala P., Di Salvo T., Falanga M., De Falco V., De Rosa A., 2020, ApJ, 893, 129
* Laurent et al. (2003) Laurent P., et al., 2003, A&A, 411, L185
* Lund et al. (2003) Lund N., et al., 2003, A&A, 411, L231
* Marino et al. (2019) Marino A., et al., 2019, MNRAS, 490, 2300
* Misra et al. (2017) Misra R., et al., 2017, The Astrophysical Journal, 835, 195
* Ng et al. (2021) Ng M., et al., 2021, ApJ, 908, L15
* Nishiyama et al. (2008) Nishiyama S., Nagata T., Tamura M., Kandori R., Hatano H., Sato S., Sugitani K., 2008, ApJ, 680, 1174
* Nowak et al. (2019) Nowak M. A., Paizis A., Jaisawal G. K., Chenevez J., Chaty S., Fortin F., Rodriguez J., Wilms J., 2019, ApJ, 874, 69
* Remillard et al. (2021) Remillard R. A., et al., 2021, arXiv e-prints, p. arXiv:2105.09901
* Russell et al. (2018) Russell T. D., Degenaar N., Wijnands R., van den Eijnden J., Gusinskaia N. V., Hessels J. W. T., Miller-Jones J. C. A., 2018, ApJ, 869, L16
* Sanna et al. (2018) Sanna A., et al., 2018, A&A, 617, L8
* Sanna et al. (2020) Sanna A., et al., 2020, MNRAS, 495, 1641
* Strüder et al. (2001) Strüder L., et al., 2001, A&A, 365, L18
* Turner et al. (2001) Turner M. J. L., et al., 2001, A&A, 365, L27
* Ubertini et al. (2003) Ubertini P., et al., 2003, A&A, 411, L131
* Verner et al. (1996) Verner D. A., Ferland G. J., Korista K. T., Yakovlev D. G., 1996, ApJ, 465, 487
* Wilms et al. (2000a) Wilms J., Nowak M. A., Boyd P., Pottschmidt K., Heindl W. A., Begelman M. C., 2000a, in AAS/High Energy Astrophysics Division #5. p. 1247
* Wilms et al. (2000b) Wilms J., Allen A., McCray R., 2000b, ApJ, 542, 914
* Yadav et al. (2016) Yadav J. S., et al., 2016, ApJ, 833, 27
* Zdziarski et al. (1996) Zdziarski A. A., Johnson W. N., Magdziarz P., 1996, MNRAS, 283, 193
* Zhang et al. (2019) Zhang S., et al., 2019, Science China Physics, Mechanics, and Astronomy, 62, 29502
## Appendix A Tables
Table 1: Observations considered in this work.
ObsID | Satellite | Start Time (UT) | Stop Time (UT)
---|---|---|---
90401331002 | NuSTAR | 2018-08-13T22:36:09 | 2018-08-14T14:26:09
1200310101 | NICER | 2018-08-14T23:59:42 | 2018-08-15T14:08:00
80301311002 | NuSTAR | 2018-08-17T20:01:09 | 2018-08-18T13:21:09
Rev. 1989 | INTEGRAL | 2018-08-17 10:45:21 | 2018-08-19 19:44:37
1200310102 | NICER | 2018-08-18T02:07:53 | 2018-08-18T03:56:20
1200310103 | NICER | 2018-08-23T00:58:20 | 2018-08-23T23:01:36
9000002320 | AstroSAT | 2018-08-23T01:10:15 | 2018-08-24T00:41:18
20173 | Chandra | 2018-08-23T17:40:05 | 2018-08-23T23:31:42
1200310104 | NICER | 2018-08-24T01:51:10 | 2018-08-24T14:39:23
Rev. 1992 | INTEGRAL | 2018-08-25 13:08:59 | 2018-08-27 19:11:30
1200310105 | NICER | 2018-08-27T00:49:37 | 2018-08-27T22:39:23
9000002332 | AstroSAT | 2018-08-27T00:05:56 | 2018-08-27T21:59:56
1200310106 | NICER | 2018-08-27T23:55:20 | 2018-08-28T01:44:44
1200310107 | NICER | 2018-08-30T15:18:00 | 2018-08-30T17:35:00
Rev. 1994 | INTEGRAL | 2018-08-30 20:48:49 | 2018-09-02 03:31:23
1200310108 | NICER | 2018-08-31T22:13:38 | 2018-08-31T22:55:20
1200310109 | NICER | 2018-08-31T23:47:15 | 2018-09-01T17:05:18
1200310110 | NICER | 2018-09-02T06:40:10 | 2018-09-02T23:58:36
Rev. 1995 | INTEGRAL | 2018-09-02 12:39:01 | 2018-09-04 19:22:19
795750101 | XMM-Newton | 2018-09-03T18:47:34.08 | 2018-09-04T03:58:54.86
1200310111 | NICER | 2018-09-03T01:12:30 | 2018-09-03T07:41:56
1200310112 | NICER | 2018-09-05T17:59:40 | 2018-09-05T21:55:00
1200310113 | NICER | 2018-09-06T00:24:00 | 2018-09-06T06:49:00
1200310114 | NICER | 2018-09-07T08:42:40 | 2018-09-07T09:26:40
1200310115 | NICER | 2018-09-08T01:42:11 | 2018-09-08T11:42:51
1200310116 | NICER | 2018-09-09T03:58:11 | 2018-09-09T22:59:33
Rev. 1998 | INTEGRAL | 2018-09-10 12:07:30 | 2018-09-12 18:48:26
1200310118 | NICER | 2018-09-11T00:44:36 | 2018-09-11T17:58:50
1200310119 | NICER | 2018-09-11T23:54:45 | 2018-09-12T21:49:14
1200310120 | NICER | 2018-09-13T05:15:23 | 2018-09-13T22:53:29
1200310121 | NICER | 2018-09-14T07:57:04 | 2018-09-14T17:04:00
1200310122 | NICER | 2018-09-15T00:54:21 | 2018-09-15T19:31:40
Rev. 2001 | INTEGRAL | 2018-09-18 11:33:28 | 2018-09-20 18:17:35
1200310124 | NICER | 2018-09-19T06:26:49 | 2018-09-19T22:06:40
1200310125 | NICER | 2018-09-20T05:35:18 | 2018-09-20T15:09:00
Rev. 2002 | INTEGRAL | 2018-09-21 04:47:37 | 2018-09-23 10:06:50
1200310126 | NICER | 2018-09-21T00:07:18 | 2018-09-21T11:14:40
1200310127 | NICER | 2018-09-23T07:45:04 | 2018-09-23T11:11:45
Rev. 2003 | INTEGRAL | 2018-09-23 19:12:26 | 2018-09-26 01:57:06
1200310128 | NICER | 2018-09-24T02:17:04 | 2018-09-24T02:41:20
1200310129 | NICER | 2018-09-25T01:27:04 | 2018-09-25T15:49:20
1200310130 | NICER | 2018-09-26T03:55:00 | 2018-09-26T04:11:20
Rev. 2004 | INTEGRAL | 2018-09-26 11:02:00 | 2018-09-28 17:47:47
1200310131 | NICER | 2018-09-28T00:30:47 | 2018-09-28T05:38:00
Rev. 2005 | INTEGRAL | 2018-09-29 04:14:04 | 2018-10-01 09:38:30
Rev. 2006 | INTEGRAL | 2018-10-01 18:42:44 | 2018-10-04 00:28:03
1200310132 | NICER | 2018-10-04T01:52:19 | 2018-10-04T22:15:00
1200310133 | NICER | 2018-10-05T18:10:40 | 2018-10-05T21:19:40
1200310134 | NICER | 2018-10-10T20:33:44 | 2018-10-10T20:48:40
1200310135 | NICER | 2018-10-11T01:11:02 | 2018-10-11T01:28:20
1200310136 | NICER | 2018-10-12T06:32:14 | 2018-10-12T06:56:18
1200310137 | NICER | 2018-10-13T13:24:10 | 2018-10-13T15:15:40
1200310138 | NICER | 2018-10-16T03:14:20 | 2018-10-16T05:06:20
1200310139 | NICER | 2018-10-17T05:32:22 | 2018-10-17T18:13:20
Table 2: Best-fit continuum model for each observation of IGR J17591–2342. The associated errors are reported at the 90% confidence level.
ObsID | MJD | NH (Tbabs, $\times 10^{22}$) | EdgeE (keV) | MaxTau | kT (keV, Bbodyrad) | Norm (Bbodyrad) | Gamma (nthComp) | $kT_{e}$ (keV) | kTseed (keV) | Norm (nthComp) | $\chi^{2}/dof$
---|---|---|---|---|---|---|---|---|---|---|---
1200310101 | 58345.0 | 2.09* | $0.871*$ | $2.0\pm 0.2$ | $1.10\pm 0.06$ | $0.8\pm 0.2$ | $1.76\pm 0.02$ | $22^{+4}_{-3}$ | $0.43\pm 0.03$ | $0.0153^{+0.0010}_{-0.0009}$ | $2444.3/2147$
1200310102 | 58348.09 | 2.09* | $0.871*$ | $2.5^{+0.6}_{-0.2}$ | – | – | $1.77\pm 0.02$ | $22^{+8}_{-4}$ | $1.03^{+0.06}_{-0.07}$ | $0.0019^{+0.0003}_{-0.0002}$ | $1541.3/1515$
1200310103 | 58353.04 | 2.09* | $0.871*$ | $2.0\pm 0.1$ | – | – | 1.9* | 38.8* | $0.58\pm 0.01$ | $0.0150\pm 0.0004$ | $1507.4/1641$
1200310104 | 58354.08 | 2.09* | $0.871*$ | $1.7\pm 0.2$ | – | – | $1.91\pm 0.05$ | 38.8* | $0.63\pm 0.02$ | $0.0139^{+0.0007}_{-0.0006}$ | $1346.2/1563$
1200310105 | 58357.03 | 2.09* | $0.871*$ | $2.2\pm 0.3$ | $1.02^{+0.08}_{-0.09}$ | $4.4^{+2.1}_{-1.6}$ | $1.65^{+0.07}_{-0.07}$ | 38.8* | $0.44^{+0.06}_{-0.07}$ | $0.016\pm 0.002$ | $760.8/718$
1200310106 | 58358.0 | 2.09* | $0.871*$ | $2.5^{+0.3}_{-0.7}$ | $1.15^{+0.09}_{-0.07}$ | $3\pm 1$ | $1.71^{+0.07}_{-0.06}$ | $40^{+57}_{-13}$ | $0.42^{+0.10}_{-0.13}$ | $0.019^{+0.009}_{-0.004}$ | $538.0/557$
1200310107 | 58360.64 | 2.09* | – | – | – | – | $1.82\pm 0.05$ | $31^{+55}_{-10}$ | $0.54\pm 0.02$ | $0.012\pm 0.002$ | $657.4/623$
1200310108 | 58361.93 | 2.09* | $0.871*$ | $2.1\pm 0.3$ | – | – | $1.84^{+0.07}_{-0.06}$ | $32^{+86}_{-11}$ | $0.55\pm 0.04$ | $0.022\pm 0.002$ | $587.1/540$
1200310109 | 58361.99 | 2.09* | $0.871*$ | $1.8\pm 0.2$ | – | – | $1.90\pm 0.04$ | $\geq 42$ | $0.60\pm 0.02$ | $0.0207^{+0.0009}_{-0.0008}$ | $798.7/769$
1200310110 | 58363.28 | 2.09* | $0.871*$ | $1.9\pm 0.2$ | $0.58^{+0.04}_{-0.05}$ | $58^{+10}_{-8}$ | $2.3^{+0.3}_{-0.2}$ | 38.8* | $1.1\pm 0.2$ | $0.0061^{+0.0028}_{-0.0015}$ | $865.2/745$
1200310111 | 58364.05 | 2.09* | $0.871*$ | $1.6\pm 0.2$ | – | – | $1.79\pm 0.02$ | 38.8* | $0.57\pm 0.02$ | $0.0164\pm 0.0008$ | $1190.3/840$
1200310112 | 58366.75 | 2.09* | $0.871*$ | $1.4\pm 0.6$ | $0.61^{+0.09}_{-0.10}$ | $36^{+21}_{-8}$ | 1.9* | 38.8* | $1.3^{+0.5}_{-0.4}$ | $\leq 0.0023$ | $533.0/489$
1200310113 | 58367.02 | 2.09* | $0.871*$ | $2.4\pm 0.4$ | – | – | $1.78^{+0.06}_{-0.05}$ | 38.8* | $0.45\pm 0.05$ | $0.017\pm 0.002$ | $561.6/535$
1200310114 | 58368.36 | 2.09* | $0.871*$ | $1.4\pm 0.3$ | – | – | 1.9* | 38.8* | $0.60\pm 0.03$ | $0.0132^{+0.0010}_{-0.0009}$ | $492.6/496$
1200310115 | 58369.07 | 2.09* | $0.871*$ | $1.9\pm 0.2$ | – | – | $1.87^{+0.06}_{-0.05}$ | 38.8* | $0.57\pm 0.03$ | $0.0148^{+0.0010}_{-0.0009}$ | $683.9/643$
1200310116 | 58370.17 | 2.09* | $0.871*$ | $2.0\pm 0.2$ | – | – | $1.93^{+0.06}_{-0.05}$ | $\geq 41$ | $0.62\pm 0.03$ | $0.0168^{+0.0013}_{-0.0009}$ | $735.3/718$
1200310118 | 58372.03 | 2.09* | $0.871*$ | $1.3\pm 0.3$ | $0.73^{+0.08}_{-0.09}$ | $45^{+8}_{-40}$ | $1.9\pm 0.2$ | 38.8* | $1.8^{+0.8}_{-0.5}$ | $0.005^{+0.014}_{-0.002}$ | $530.0/499$
1200310119 | 58373.0 | 2.09* | $0.871*$ | $2.0\pm 0.3$ | $0.65^{+0.06}_{-0.10}$ | $45^{+8}_{-40}$ | $1.8^{+0.5}_{-0.4}$ | $\geq 27$ | $1.2^{+0.4}_{-0.7}$ | $0.005^{+0.014}_{-0.002}$ | $719.1/648$
1200310120 | 58374.22 | 2.09* | $0.871*$ | $2.2\pm 0.3$ | – | – | $1.84\pm 0.06$ | 38.8* | $0.62\pm 0.03$ | $0.021\pm 0.001$ | $641.6/642$
1200310121 | 58375.33 | 2.09* | – | – | – | – | $2.19\pm 0.02$ | 38.8* | $0.87\pm 0.04$ | $0.0130\pm 0.005$ | $561.0/426$
1200310122 | 58376.04 | 2.09* | $0.871*$ | $2.4\pm 0.3$ | – | – | $1.81^{+0.08}_{-0.07}$ | 38.8* | $0.60\pm 0.04$ | $0.025\pm 0.002$ | $601.8/548$
1200310124 | 58380.27 | 2.09* | $0.871*$ | $2.2\pm 0.3$ | – | – | $1.90^{+0.09}_{-0.07}$ | $58^{+289}_{-23}$ | $0.64\pm 0.04$ | $0.028\pm 0.002$ | $511.7/554$
1200310125 | 58381.23 | 2.09* | $0.871*$ | $3.1\pm 0.5$ | – | – | $1.77^{+0.09}_{-0.07}$ | $33^{+20}_{-8}$ | $0.53\pm 0.6$ | $0.033^{+0.005}_{-0.004}$ | $401.3/420$
1200310126 | 58382.01 | 2.09* | $0.871*$ | $2.2\pm 0.2$ | – | – | $1.89^{+0.07}_{-0.06}$ | $40^{+54}_{-13}$ | $0.62\pm 0.03$ | $0.029\pm 0.002$ | $605.5/621$
1200310127 | 58384.32 | 2.09* | $0.871*$ | $2.1\pm 0.3$ | – | – | $1.91^{+0.09}_{-0.08}$ | $33^{+33}_{-10}$ | $0.63\pm 0.04$ | $0.025\pm 0.002$ | $555.6/561$
1200310128 | 58385.1 | 2.09* | $0.871*$ | $2.0\pm 0.3$ | – | – | $1.90^{+0.08}_{-0.07}$ | 38.8* | $0.61\pm 0.04$ | $0.023\pm 0.002$ | $525.1/523$
1200310129 | 58386.06 | 2.09* | $0.871*$ | $2.1\pm 0.3$ | – | – | $1.90\pm 0.06$ | 38.8* | $0.61^{+0.03}_{-0.04}$ | $0.0215^{+0.002}_{-0.001}$ | $688.0/630$
1200310130 | 58387.16 | 2.09* | $0.871*$ | $2.0\pm 0.4$ | $0.68^{+0.07}_{-0.08}$ | $49^{+14}_{-10}$ | $2.0\pm 0.1$ | 38.8* | $1.4\pm 0.4$ | $0.004^{+0.003}_{-0.001}$ | $468.7/465$
1200310131 | 58389.02 | 2.09* | $0.871*$ | $2.0\pm 0.3$ | – | – | $1.91\pm 0.06$ | 38.8* | $0.57\pm 0.04$ | $0.019^{+0.002}_{-0.001}$ | $619.8/613$
1200310132 | 58395.08 | 2.09* | $0.871*$ | $2.1\pm 0.4$ | – | – | $1.87^{+0.21}_{-0.07}$ | $\geq 45$ | $0.45\pm 0.05$ | $0.0168^{+0.003}_{-0.002}$ | $453.4/453$
1200310133 | 58396.76 | 2.09* | – | – | – | – | $2.0^{+0.3}_{-0.2}$ | 38.8* | $0.62\pm 0.07$ | $0.0075^{+0.0007}_{-0.0006}$ | $134.4/120$
Table 3: Best-fit parameters for the emission lines detected in the IGR J17591–2342 spectrum for each observation. The line energy of the Ne ix ion was fixed to the rest-frame energy, while the line width (sigma) was fixed to 0.085 keV for each ion transition (i.e. the spectral resolution of NICER/XTI at 1 keV). All associated errors are reported at the 68% confidence level.
ObsID | MJD | Ne ix Norm | Fe xxv LineE (keV) | Fe xxv Sigma (keV) | Fe xxv Norm
---|---|---|---|---|---
1200310101 | 58345.0 | $0.015\pm 0.002$ | – | – | –
1200310102 | 58348.09 | $0.008^{+0.003}_{-0.002}$ | – | – | –
1200310103 | 58353.04 | $0.021\pm 0.002$ | $6.6^{+0.3}_{-0.4}$ | $0.5\pm 0.3$ | $0.00015^{+0.00006}_{-0.00005}$
1200310104 | 58354.08 | $0.019^{+0.003}_{-0.002}$ | – | – | –
1200310105 | 58357.03 | $0.0108^{+0.0016}_{-0.0014}$ | $\leq 6.3$ | $1.5\pm 0.2$ | $0.0026^{+0.0007}_{-0.0008}$
1200310106 | 58358.0 | $0.024^{+0.007}_{-0.009}$ | $6.59\pm 0.08$ | $0.1*$ | $0.00018^{+0.00002}_{-0.00005}$
1200310107 | 58360.64 | $0.034^{+0.007}_{-0.006}$ | – | – | –
1200310108 | 58361.93 | $0.028^{+0.007}_{-0.006}$ | – | – | –
1200310109 | 58361.99 | $0.016\pm 0.002$ | – | – | –
1200310110 | 58363.28 | $0.016\pm 0.002$ | – | – | –
1200310111 | 58364.05 | $0.016^{+0.003}_{-0.002}$ | $6.7\pm 0.2$ | $0.8^{+0.3}_{-0.2}$ | $0.00019^{+0.00005}_{-0.00004}$
1200310112 | 58366.75 | $0.0068^{+0.0019}_{-0.0015}$ | – | – | –
1200310113 | 58367.02 | $0.017^{+0.005}_{-0.004}$ | – | – | –
1200310114 | 58368.36 | $0.016^{+0.004}_{-0.003}$ | $6.53^{+0.07}_{-0.06}$ | $0.02^{+0.10}_{-0.02}$ | $0.00009\pm 0.00004$
1200310115 | 58369.07 | $0.017\pm 0.003$ | – | – | –
1200310116 | 58370.17 | $0.019\pm 0.003$ | – | – | –
1200310118 | 58372.03 | $0.016^{+0.005}_{-0.004}$ | – | – | –
1200310119 | 58373.0 | $0.019^{+0.004}_{-0.003}$ | – | – | –
1200310120 | 58374.22 | $0.030^{+0.006}_{-0.005}$ | – | – | –
1200310121 | 58375.33 | $0.048^{+0.020}_{-0.014}$ | – | – | –
1200310122 | 58376.04 | $0.041^{+0.011}_{-0.009}$ | – | – | –
1200310124 | 58380.27 | $0.031^{+0.009}_{-0.007}$ | – | – | –
1200310125 | 58381.23 | $0.08^{+0.04}_{-0.03}$ | – | – | –
1200310126 | 58382.01 | $0.033^{+0.007}_{-0.006}$ | – | – | –
1200310127 | 58384.32 | $0.019^{+0.005}_{-0.004}$ | – | – | –
1200310128 | 58385.1 | $0.024^{+0.007}_{-0.005}$ | – | – | –
1200310129 | 58386.06 | $0.019^{+0.004}_{-0.003}$ | – | – | –
1200310130 | 58387.16 | $0.017^{+0.006}_{-0.005}$ | – | – | –
1200310131 | 58389.02 | $0.018^{+0.004}_{-0.003}$ | – | – | –
1200310132 | 58395.08 | $0.013^{+0.004}_{-0.003}$ | $6.63^{+0.06}_{-0.05}$ | $0.04331^{+0.09}_{-0.04}$ | $0.00013^{+0.00005}_{-0.00004}$
1200310133 | 58396.76 | – | – | – | –
School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul
Av. Ipiranga, 6681, 90619-900, Porto Alegre, RS, Brazil
Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
# Zero-shot performance of the Segment Anything Model (SAM) in 2D medical
imaging: A comprehensive evaluation and practical guidelines
Christian Mattjie, Luis Vinicius de Moura, Rafaela Cappelari Ravazio, Lucas Silveira Kupssinskü, Otávio Parraga, Marcelo Mussi Delucis, Rodrigo C. Barros
###### Abstract
Segmentation in medical imaging is a critical component for the diagnosis,
monitoring, and treatment of various diseases and medical conditions.
Presently, the medical segmentation landscape is dominated by numerous
specialized deep learning models, each fine-tuned for specific segmentation
tasks and image modalities. The recently-introduced Segment Anything Model
(SAM) employs the ViT neural architecture and harnesses a massive training
dataset to segment nearly any object; however, its suitability for the medical
domain has not yet been investigated. In this study, we explore the zero-shot
performance of SAM in medical imaging by implementing eight distinct prompt
strategies across six datasets from four imaging modalities, including X-ray,
ultrasound, dermatoscopy, and colonoscopy. Our findings reveal that SAM’s
zero-shot performance is not only comparable to, but in certain cases surpasses, the current state-of-the-art. Based on these results, we propose
practical guidelines that require minimal interaction while consistently
yielding robust outcomes across all assessed contexts. The source code, along
with a demonstration of the recommended guidelines, can be accessed at
https://github.com/Malta-Lab/SAM-zero-shot-in-Medical-Imaging.
###### Keywords:
Medical Imaging · Segmentation · Segment Anything Model · Zero-shot Learning · Deep Neural Networks.
## 1 Introduction
Medical imaging plays a pivotal role in the diagnosis, monitoring, and
treatment of a wide range of diseases and conditions [1]. Accurate
segmentation of these images is often critical in extracting valuable
information that can aid clinical decision-making. However, traditional
segmentation methods primarily rely on labor-intensive, manually-engineered
features and error-prone thresholding designed for specific scenarios,
resulting in limited generalizability to new images [2]. Large advancements in
medical image segmentation have been achieved with the advent of deep learning
(DL) techniques, owing to their ability to learn intrinsic features and
patterns from large datasets [3, 4, 5].
The DL revolution was ignited by the groundbreaking success of Convolutional
Neural Networks (CNNs) in computer vision applications [6]. Recently, a new
wave of innovative applications based on the Transformer architecture has
emerged [7]. Transformers enhance the training process by harnessing larger
datasets while providing a smaller inductive bias, thereby creating models that
can generalize to unseen distributions and even adapt to diverse tasks.
Nonetheless, medical image segmentation poses significant challenges for DL
due to the substantial cost associated with specialized professionals
annotating images, leading to the scarcity of available data. Furthermore,
there is limited evidence regarding the ability of DL models trained on
natural images to generalize to medical application settings.
The Segment Anything Model (SAM) has been recently introduced by Meta [8].
SAM, a state-of-the-art vision transformer (ViT), is capable of generating
segmentation masks for virtually any object. It introduces the concept of
prompting in image segmentation, whereby the model’s inference process is
guided by providing points inside the region of interest (ROI) or by drawing a
bounding box around it.
In this paper, we rigorously evaluate the zero-shot capabilities of SAM in
segmenting 2D medical images. We assess its performance across six datasets
encompassing four distinct imaging modalities: X-ray, ultrasound,
dermatoscopy, and colonoscopy, using various prompting strategies. Our
comprehensive evaluation reveals that SAM demonstrates promising results in
these medical imaging modalities, even in the presence of complex patterns such as hair over skin lesions. We also propose practical guidelines for physicians to
utilize SAM in medical image segmentation tasks. These guidelines suggest
starting with a bounding box prompt, selecting the optimal prediction from the
generated outputs, and refining the segmentation using point prompts when
necessary.
## 2 Related Work
### 2.1 Medical Image Segmentation
Medical image segmentation plays a pivotal role in medical imaging analysis,
focusing on the identification and delineation of structures or regions such
as organs, tissues, or lesions. Accurate segmentation is crucial for various
clinical applications, encompassing diagnosis, treatment, and monitoring of
disease progression. This enables essential tasks like measuring tissue volume
for tracking growth and outlining radiosensitive organs in radiotherapy
treatment.
In the current domain of medical image segmentation, specific methods are
tailored to the application, imaging modality, and body part under examination
[9, 10]. However, automatic segmentation remains a formidable challenge due to
the intricacy of medical images and data scarcity. The segmentation
algorithm’s output is influenced by multiple factors, including the partial
volume effect, intensity inhomogeneity, presence of artifacts, and
insufficient contrast between soft regions [11].
Deep learning techniques have garnered considerable attention in medical image
segmentation, owing to their capacity for capturing intricate patterns and
representations from large-scale datasets. Among the most prevalent DL
approaches for medical image segmentation are CNNs. Widely employed models for
medical image segmentation include U-Net [3] and its derivatives, which were
explicitly developed for biomedical image segmentation. U-Net utilizes a
symmetric encoder-decoder architecture, enabling the model to capture both
high-level contextual information and fine-grained details, resulting in
enhanced segmentation outcomes.
In recent years, novel state-of-the-art segmentation techniques have emerged,
such as training DL models on polar images [12], integrating textual
information with vision-language models [4], and employing attention
mechanisms with CNNs in ViTs [13].
### 2.2 Vision Transformer (ViT)
ViTs constitute a class of DL models that leverage the transformer
architecture [14]. These models process images by dividing them into fixed-
size, non-overlapping patches and linearly embedding these patches into a flat
sequence of tokens. Each token is subsequently passed through a series of
self-attention layers to learn relevant contextual relationships and spatial
information, enabling the model to discern semantically-rich patterns [7].
ViTs do not share some of the inductive biases inherent in CNNs, such as
locality and translation equivariance. A reduced inductive bias allows ViTs to
be more adaptable even though it necessitates more data for generalization.
The data demand may limit the application of ViTs in medical imaging, where
data is often scarce. Nevertheless, by capitalizing on pre-training and fine-
tuning strategies, ViTs are revolutionizing the computer vision landscape with
strong generalization performance [15, 16].
Recently, ViTs have demonstrated strong results in zero-shot learning [17, 18,
19]. This setting presents a challenge since the model must learn to
generalize for classes and contexts not encountered during training. In
medical imaging, ViT-based models have achieved state-of-the-art results [20,
21, 13], though very few studies address the zero-shot capabilities of the
learning models, and whether their performance in zero-shot settings is
reasonable or even competitive with fine-tuning [22, 13].
## 3 Methodology
### 3.1 Segment Anything Model (SAM)
SAM [8] is a state-of-the-art ViT model trained on the massive SA-1B dataset
(also introduced in [8]). This dataset comprises approximately 11 million
images and 1 billion segmentation masks, making it the largest publicly
available image segmentation dataset to date. The model’s high accuracy has
been demonstrated through its impressive capability of segmenting a wide
variety of objects and shapes, thereby validating its effectiveness in
segmenting virtually any object within a 2D image.
SAM can function in two distinct ways: by segmenting all objects present in an
input image or by utilizing prompts that explicitly specify the target region
for segmentation. These prompts can take the form of points identifying the
region of interest or regions that should be excluded. Additionally, a
bounding box may be provided to delineate the area containing the object of
interest. While initial results with SAM showcase strong segmentation quality
and zero-shot generalization to novel scenes and unseen objects, it is
important to note that the model’s training dataset lacks medical images.
Consequently, its generalizability to the medical domain remains an open
question.
To address potential issues arising from ambiguous prompts, SAM generates a
set of three masks, each with an accompanying score reflecting a different
interpretation of the intended region. The first mask in the output sequence
represents the smallest, most conservative interpretation of the intended
region according to the given prompt. As the sequence progresses, the
subsequent masks increase in size, with each mask encompassing the previous
one. The score assigned to each mask is an indicator of SAM’s confidence in
that particular prediction. This design enables SAM to accommodate a wider
range of potential segmentation outcomes, reflecting the model’s efforts to
account for the ambiguity in the target region’s size due to the prompt’s
limited information.
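As an illustration, the snippet below sketches how these three ranked masks can be obtained through the official segment-anything package; the image path and the point coordinates are placeholders rather than values from our experiments.

```python
# Minimal sketch of querying SAM's three candidate masks, assuming the
# official "segment-anything" package and a downloaded ViT-H checkpoint.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Placeholder input image (HxWx3, RGB, uint8).
image = cv2.cvtColor(cv2.imread("example.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt (label 1 = include this region).
point_coords = np.array([[320, 240]])
point_labels = np.array([1])

# With multimask_output=True, SAM returns three masks together with a
# confidence score per mask; as noted above, they grow from the most
# conservative to the most inclusive interpretation of the prompt.
masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
for i, (mask, score) in enumerate(zip(masks, scores), start=1):
    print(f"prediction {i}: area={mask.sum()} px, confidence={score:.3f}")
```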
In practical applications, especially within the medical imaging domain, it is
crucial to ensure that the model accurately identifies and segments pertinent
structures or regions of interest. Given this requirement, our study focused
on investigating input prompt strategies for guiding SAM’s segmentation
process. This decision stems from the inherent uncertainties associated with
the segment-everything approach, as the model’s comprehension of the segmented
objects cannot be guaranteed. By utilizing prompts, we aimed to improve SAM’s
segmentation capabilities in medical imaging tasks and provide a more reliable
and controlled evaluation of its performance. Furthermore, we did not consider
the confidence scores provided by SAM for each mask, as these scores reflect
the quality of the segmentation without accounting for the accuracy of the
target region relative to the intended object.
The ViT architecture employed by SAM consists of three distinct iterations,
each with unique trade-offs between computational requirements and model
performance: ViT Base (ViT-B), ViT Large (ViT-L), and ViT Huge (ViT-H). The
primary differences between these iterations lie in the model’s number of
layers and parameters, as illustrated in Table 1. As the number of layers and
parameters increases, the model becomes more powerful, enabling the capture of
more intricate aspects of the input images. However, larger models necessitate
more computing power, which may pose a drawback in certain situations.
Nevertheless, even the largest iteration of SAM remains relatively compact.
Architecture | Transformer Layers | Parameters | Size (MB)
---|---|---|---
ViT-B | 12 | 91M | 776
ViT-L | 24 | 308M | 1582
ViT-H | 32 | 636M | 2950
Table 1: Summary of SAM’s ViT architecture variations.
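All three variants are exposed through the same model registry, so switching model size is a one-line change. The sketch below assumes the officially released checkpoint file names; the local paths are placeholders.

```python
# Illustrative loader for the three SAM variants; checkpoint file names
# follow the public releases, but the local paths are assumptions.
from segment_anything import sam_model_registry

CHECKPOINTS = {
    "vit_b": "sam_vit_b_01ec64.pth",  # 12 layers, 91M parameters
    "vit_l": "sam_vit_l_0b3195.pth",  # 24 layers, 308M parameters
    "vit_h": "sam_vit_h_4b8939.pth",  # 32 layers, 636M parameters
}

def load_sam(variant: str):
    """Build the requested SAM variant from its checkpoint."""
    return sam_model_registry[variant](checkpoint=CHECKPOINTS[variant])

sam = load_sam("vit_b")  # the smallest variant fits on modest GPUs
```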
### 3.2 Datasets
For evaluating SAM, we used six datasets from four medical imaging modalities: X-ray, ultrasound, dermatoscopy, and colonoscopy. Our primary
objective is to assess the model’s performance and versatility when prompted
with various strategies, simulating a physician’s approach to segmenting
specific organs or ROIs in medical images. Fig 1 shows a sample from each
dataset.
* ISIC 2018 [23]: this publicly available dataset comprises $2,594$
dermatoscopic images from $2,056$ unique patients, showcasing skin lesions
with varying types, sizes, and colors. The images have resolutions ranging
from $640\times 480$ to approximately $6,700\times 4,400$ pixels and are
provided in JPEG format. Expert dermatologists generated accompanying
segmentation masks using a manual annotation tool, and a second expert
reviewed each mask for accuracy.
* HAM10000 [24]: this dataset contains $10,015$ dermatoscopic images of skin
lesions from $7,388$ unique patients, with varying types, sizes, and colors.
All images have a resolution of $640\times 450$ and are provided in JPEG
format. Recently, Tschandl et al. [25] supplied expert segmentation masks
for all images, with corresponding resolutions.
* Montgomery-Shenzhen [26, 27]: this dataset is a fusion of two publicly
available chest X-ray datasets collected from respective hospitals. It
comprises $800$ X-ray images, with $704$ accompanying lung segmentation masks
manually created by expert radiologists. The dataset is available in PNG
format.
* X-ray Images of Hip Joints [28]: this publicly available dataset contains
$140$ X-ray images of the lower legs, with an average resolution of $327\times
512$. Corresponding segmentation masks for the femur and ilium are provided
separately. The images and masks are available in NII format.
* CVC-ClinicDB [29]: this dataset consists of $612$ images from $31$ colonoscopy
sequences, with a resolution of $384\times 288$. The images are provided in
PNG format. Expert gastroenterologists have created segmentation masks for the
polyps, which are provided for all available images.
* Breast Ultrasound Images [30]: this dataset comprises $780$ ultrasound images
of the breast from $600$ patients, with an average size of $500\times 500$
pixels. The images are provided in PNG format and are categorized into normal,
benign, and malignant. Segmentation masks for tumors are supplied for both
benign and malignant cases.
Figure 1: Samples from each of the six datasets used in this study. A: ISIC,
B: HAM, C: CXR, D: HJXR, E: CVC, F: BUSI.
### 3.3 Prompt Strategies
In the context of interactive segmentation, a physician may guide the
procedure using various strategies, such as clicking within the region of
interest, clicking outside the region, or drawing a bounding box around the
target. To investigate the impact of these plausible prompting strategies on
our segmentation models, we conducted a series of experiments with the
following approaches:
* Central-point (CP): utilizing only the centroid of the ground-truth mask,
which is anticipated to be the most informative single-point prompt;
* Random-point (RP): eroding the ground-truth mask and subsequently selecting a
random point within it, representing minimal guidance;
* Distributed random-points (RP3 and RP5): eroding the ground-truth mask,
dividing it vertically into sections (three and five, respectively), and
selecting a random point within each section to provide a more distributed set
of prompts;
* Bounding-box (BB): prompting with the bounding box of the ground-truth mask,
offering a more explicit spatial constraint for segmentation; and
* Perturbed bounding-box (BBS5, BBS10, and BBS20): modifying the size and
position of the bounding box by $5$%, $10$%, and $20$% of the ground-truth
mask size, respectively, simulating variations in the accuracy of a
physician’s initial assessment.
For the multiple points strategy, we divided the mask into three and five
sections, and for the varied bounding box strategy, we randomly altered its
size and position up to 5%, 10%, and 20% of the ground-truth mask. Given these
variations, we ran a total of eight experiments per model/dataset, which are
shown in Fig 2: central-point (CP), random-point (RP), random-points-3 (RP3),
random-points-5 (RP5), bounding-box (BB), bounding-box-similar-5 (BBS5),
bounding-box-similar-10 (BBS10) and bounding-box-similar-20 (BBS20).
Figure 2: Example of all prompt strategies on a skin lesion image and mask.
(A): original image, (B): CP, (C): RP, (D): RP3, (E): RP5, (F): BB in green,
BBS5 in red, BBS10 in blue, and BBS20 in yellow. The sizes and positions shown correspond to the maximum variation of the BBS methods.
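To make the point and box constructions concrete, the sketch below gives one possible implementation of the CP, BB, and BBS prompts from a binary ground-truth mask (an HxW boolean array); the function names and the per-edge jitter scheme are our reading of the description above, not the authors' released code.

```python
import numpy as np

def central_point(mask: np.ndarray) -> np.ndarray:
    """CP: centroid of the ground-truth mask as a single (x, y) prompt."""
    ys, xs = np.nonzero(mask)
    return np.array([[xs.mean(), ys.mean()]])

def bounding_box(mask: np.ndarray) -> np.ndarray:
    """BB: tight box around the mask as (x_min, y_min, x_max, y_max)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])

def perturbed_box(mask: np.ndarray, frac: float, rng=None) -> np.ndarray:
    """BBS: shift each box edge by up to `frac` of the mask's extent
    (frac = 0.05, 0.10, or 0.20), emulating an imprecise annotation."""
    rng = np.random.default_rng() if rng is None else rng
    x0, y0, x1, y1 = bounding_box(mask)
    w, h = x1 - x0, y1 - y0
    jitter = rng.uniform(-frac, frac, size=4) * np.array([w, h, w, h])
    return np.array([x0, y0, x1, y1]) + jitter
```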
In the RP, RP3, and RP5 strategies, we apply an erosion morphological operator
to the ground-truth mask before selecting a random point within the resulting
region. This process ensures that the selected point is not situated near the
region of interest’s edges while preserving the element of randomness expected
in real-world scenarios. The erosion value was determined according to the
dataset: for the CXR and ISIC datasets, where the regions are larger, we used
a $30$-pixel radius; for the CVC dataset, which contains smaller images that
could be completely eroded, we employed a $1$-pixel radius; for all other
datasets with relatively small regions, we opted for a $10$-pixel radius.
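A sketch of the eroded random-point prompts under these assumptions follows; the scipy/scikit-image stack is our library choice, and splitting the eroded mask into vertical bands along the x-axis is one plausible reading of the section division described above.

```python
import numpy as np
from skimage.morphology import binary_erosion, disk

def random_points(mask: np.ndarray, n_points: int, radius: int, rng=None):
    """RP/RP3/RP5: erode the mask with a disk of the dataset-specific
    radius, split the surviving pixels into `n_points` vertical bands,
    and draw one random interior point per band."""
    rng = np.random.default_rng() if rng is None else rng
    eroded = binary_erosion(mask.astype(bool), disk(radius))
    ys, xs = np.nonzero(eroded)
    bands = np.array_split(np.argsort(xs), n_points)  # split along x
    points = []
    for band in bands:
        if len(band) == 0:  # degenerate band after heavy erosion
            continue
        i = rng.choice(band)
        points.append([xs[i], ys[i]])
    return np.array(points)
```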
The various prompting strategies applied to a skin lesion image and mask are
illustrated in Fig 2. This figure demonstrates the original image (A), and the
different prompt strategies: CP (B), RP (C), RP3 (D), RP5 (E), and BB (F) in
green, BBS5 in red, BBS10 in blue, and BBS20 in yellow. The size and position
shown for the BBS methods represent their maximum variation, while in our
experiments, they were altered randomly.
### 3.4 Preprocessing
Throughout our experimentation process, we encountered various challenges
stemming from the characteristics of the datasets under consideration. To
address these issues, we implemented a fill-holes technique aimed at
rectifying mask information, particularly in the context of the ISIC dataset,
wherein some masks solely outlined the relevant lesion. Moreover, in instances
where an image contained multiple masks (e.g., dual lungs or skin lesions), we
isolated the two most substantial regions and processed them independently,
employing the prompt strategies delineated in the previous section. This
approach ensured accurate and precise segmentation. A manual inspection of all
images was conducted to confirm the absence of any containing three distinct
and relevant regions. The model-generated predictions for both regions were
combined to form a single prediction.
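The clean-up just described admits a compact implementation; the sketch below assumes scipy and scikit-image, which are our library choices rather than the authors'.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.measure import label

def split_two_largest(mask: np.ndarray):
    """Fill holes in a binary mask, then return up to two binary masks,
    one per largest connected component (e.g. the two lungs)."""
    filled = binary_fill_holes(mask.astype(bool))
    labeled = label(filled)
    areas = np.bincount(labeled.ravel())[1:]   # skip background label 0
    largest = np.argsort(areas)[::-1][:2] + 1  # labels of the top two
    return [labeled == lab for lab in largest]
```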
In the case of the HJXR dataset, which uniquely contained images in a format
incompatible with SAM, we transformed the images from NII to PNG format,
normalizing their values within the range of $0$ to $255$. Given that masks
for the femur and ilium were available individually for each image in the dataset, we assessed the predictions separately as HJXR-F and HJXR-I.
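A minimal sketch of this conversion, assuming nibabel and Pillow and using placeholder file names, is:

```python
import numpy as np
import nibabel as nib
from PIL import Image

# Load the NII image and min-max normalize it to the 0-255 range.
volume = np.squeeze(nib.load("hip_xray.nii").get_fdata()).astype(np.float64)
scaled = (volume - volume.min()) / (volume.max() - volume.min()) * 255.0
Image.fromarray(scaled.astype(np.uint8)).save("hip_xray.png")
```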
### 3.5 Evaluation
The Dice Similarity Coefficient (DSC) serves as a widely recognized
statistical metric for gauging the accuracy of image segmentation. This
coefficient quantifies the similarity between two sets of data, typically
represented as binary arrays, by comparing a predicted segmentation mask to
the ground-truth mask. The DSC ranges from zero to one, with one signifying a perfect match and zero indicating a complete mismatch. The
utility of this metric lies in its ability to discern performance differences
between classifiers, rendering it an invaluable instrument for evaluating
segmentation algorithms. The DSC can be calculated as follows:
$\operatorname{DSC}\left(m\left(x_{i}\right),y_{i}\right)=\frac{2\left|m\left(x_{i}\right)\cap y_{i}\right|}{\left|m\left(x_{i}\right)\cap y_{i}\right|+\left|m\left(x_{i}\right)\cup y_{i}\right|},$ (1)
where $\left|m\left(x_{i}\right)\cap y_{i}\right|$ and
$\left|m\left(x_{i}\right)\cup y_{i}\right|$ refer to the area of overlap and
area of union, respectively.
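Since $|A\cap B|+|A\cup B|=|A|+|B|$, Eq. (1) coincides with the familiar $2|A\cap B|/(|A|+|B|)$ form. A direct NumPy translation, with the empty-versus-empty case set to a perfect score by convention, is:

```python
import numpy as np

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks (Eq. 1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # |A∩B| + |A∪B| = |A| + |B|; two empty masks count as a perfect match.
    return 2.0 * inter / (inter + union) if (inter + union) > 0 else 1.0
```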
## 4 Results and Discussion
In this section, we present the results of our comprehensive experiments
conducted on six datasets, employing eight prompting strategies, and utilizing
three variations of the SAM. The performance of these models is compared to
the current state-of-the-art (SOTA) methods, with certain zero-shot results of
SAM surpassing established benchmarks. We subsequently engage in a qualitative
discussion of the observed results, showcasing select challenging images to
elucidate our findings. Finally, we provide a practical implementation
guideline for physicians to effectively utilize the SAM, ensuring minimal
interaction and delivering robust outcomes.
Table 2 shows the DSC of the predictions for ViT-H, the largest SAM model,
with results for ViT-B and ViT-L shown in the Supplementary Material. The
terms 1st, 2nd, and 3rd correspond to the three predictions generated by SAM,
and the table presents the metrics when only one of these predictions is used
consistently for all images. Fig 3 showcases an example of these predictions
for the Chest X-Ray (CXR) dataset, employing both the RP5 and BBS10
strategies. The RP5 method provides better differentiation between
predictions, while the BBS10 approach demonstrates greater uniformity. This
observation could potentially be attributed to the bounding box, which
simultaneously indicates the target region for segmentation and the areas to
be excluded (outside the box).
Figure 3: The three predictions returned by SAM using the RP5 (A, B, C) and BBS10 (D, E, F) input methods for the CXR dataset. A physician may choose the one that best fits the region to be segmented.

Table 2: DSC of the predictions of the ViT-H model on six datasets using the eight proposed prompt strategies, considering the 1st, 2nd, and 3rd predictions.
Dataset | Pred | CP | RP | RP3 | RP5 | BB | BBS5 | BBS10 | BBS20
---|---|---|---|---|---|---|---|---|---
ISIC | 1st | $0.538$ | $0.531$ | $0.762$ | $0.774$ | $0.745$ | $0.737$ | $0.715$ | $0.603$
2nd | $0.718$ | $0.677$ | $0.769$ | $0.788$ | $0.845$ | $0.842$ | $0.833$ | $0.789$
3rd | $0.375$ | $0.363$ | $0.390$ | $0.483$ | 0.872 | $0.868$ | $0.860$ | $0.816$
HAM | 1st | $0.544$ | $0.527$ | $0.752$ | $0.765$ | $0.732$ | $0.724$ | $0.700$ | $0.589$
2nd | $0.729$ | $0.686$ | $0.768$ | $0.785$ | $0.838$ | $0.835$ | $0.824$ | $0.778$
3rd | $0.420$ | $0.406$ | $0.443$ | $0.541$ | 0.865 | $0.861$ | $0.851$ | $0.809$
CXR | 1st | $0.904$ | $0.863$ | $0.923$ | $0.927$ | $0.936$ | $0.934$ | $0.911$ | $0.686$
2nd | $0.758$ | $0.727$ | $0.766$ | $0.828$ | 0.942 | $0.939$ | $0.929$ | $0.826$
3rd | $0.471$ | $0.469$ | $0.482$ | $0.514$ | $0.935$ | $0.930$ | $0.913$ | $0.803$
HJXR-F | 1st | $0.876$ | $0.822$ | $0.941$ | $0.948$ | $0.924$ | $0.908$ | $0.848$ | $0.618$
2nd | $0.743$ | $0.767$ | $0.767$ | $0.776$ | 0.962 | $0.958$ | $0.904$ | $0.746$
3rd | $0.517$ | $0.543$ | $0.548$ | $0.599$ | $0.949$ | $0.945$ | $0.905$ | $0.723$
HJXR-I | 1st | $0.211$ | $0.742$ | $0.808$ | $0.828$ | 0.875 | $0.866$ | $0.734$ | $0.624$
2nd | $0.393$ | $0.479$ | $0.449$ | $0.491$ | $0.855$ | $0.849$ | $0.790$ | $0.620$
3rd | $0.294$ | $0.295$ | $0.316$ | $0.384$ | $0.800$ | $0.796$ | $0.758$ | $0.629$
CVC | 1st | $0.716$ | $0.763$ | $0.861$ | $0.880$ | $0.889$ | $0.881$ | $0.835$ | $0.702$
2nd | $0.554$ | $0.544$ | $0.642$ | $0.754$ | 0.926 | $0.924$ | $0.916$ | $0.844$
3rd | $0.232$ | $0.224$ | $0.224$ | $0.245$ | $0.924$ | $0.922$ | $0.918$ | $0.868$
BUSI | 1st | $0.583$ | $0.541$ | $0.736$ | $0.766$ | $0.754$ | $0.744$ | $0.713$ | $0.631$
2nd | $0.641$ | $0.616$ | $0.688$ | $0.735$ | $0.840$ | $0.837$ | $0.823$ | $0.768$
3rd | $0.192$ | $0.184$ | $0.196$ | $0.254$ | 0.863 | $0.859$ | $0.848$ | $0.800$
In a real-world clinical setting, a physician may opt to select the most
suitable prediction. To simulate this decision-making process, we assessed the highest DSC per image, irrespective of whether it came from the 1st, 2nd, or 3rd prediction. The results are presented in Table 3. This approach led to a modest improvement of approximately 1% compared to consistently using the 1st, 2nd, or 3rd prediction for all images, as shown in Fig 4. Even though the overall
enhancement is marginal, it holds significance for certain subjects and
necessitates minimal input from the physician.
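The oracle selection used for Table 3 can be sketched as follows, reusing the dsc() helper defined above; this is our reconstruction of the procedure, not the authors' code.

```python
import numpy as np

def best_of_three(masks: np.ndarray, gt: np.ndarray):
    """Among SAM's three predictions (shape (3, H, W)), keep the one with
    the highest DSC against the ground truth, simulating a physician who
    picks the most suitable mask per image."""
    scores = [dsc(mask, gt) for mask in masks]
    best = int(np.argmax(scores))
    return masks[best], scores[best]
```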
Table 3: DSC of predictions for all variations of SAM for six datasets using
the eight proposed prompt strategies. For each set of predictions, only the
one with the highest DSC was considered.
Dataset | Model | CP | RP | RP3 | RP5 | BB | BBS5 | BBS10 | BBS20
---|---|---|---|---|---|---|---|---|---
ISIC | ViT-H | 0.788 | 0.768 | 0.820 | 0.835 | 0.877 | 0.874 | 0.866 | 0.829
ViT-L | 0.783 | 0.768 | 0.811 | 0.818 | 0.876 | 0.872 | 0.864 | 0.819
ViT-B | 0.764 | 0.733 | 0.804 | 0.815 | 0.879 | 0.876 | 0.864 | 0.822
HAM | ViT-H | 0.782 | 0.764 | 0.812 | 0.824 | 0.870 | 0.866 | 0.857 | 0.820
ViT-L | 0.784 | 0.772 | 0.809 | 0.819 | 0.867 | 0.864 | 0.854 | 0.809
ViT-B | 0.745 | 0.706 | 0.785 | 0.796 | 0.872 | 0.867 | 0.855 | 0.810
CXR | ViT-H | 0.922 | 0.902 | 0.928 | 0.936 | 0.952 | 0.950 | 0.942 | 0.862
ViT-L | 0.929 | 0.917 | 0.932 | 0.930 | 0.954 | 0.952 | 0.943 | 0.849
ViT-B | 0.915 | 0.893 | 0.930 | 0.935 | 0.948 | 0.943 | 0.932 | 0.858
HJXR-F | ViT-H | 0.906 | 0.917 | 0.943 | 0.950 | 0.973 | 0.973 | 0.957 | 0.861
ViT-L | 0.910 | 0.916 | 0.939 | 0.948 | 0.973 | 0.973 | 0.956 | 0.880
ViT-B | 0.927 | 0.882 | 0.910 | 0.907 | 0.971 | 0.969 | 0.950 | 0.870
HJXR-I | ViT-H | 0.483 | 0.786 | 0.808 | 0.828 | 0.889 | 0.886 | 0.843 | 0.719
ViT-L | 0.478 | 0.841 | 0.865 | 0.860 | 0.894 | 0.889 | 0.839 | 0.726
ViT-B | 0.500 | 0.765 | 0.825 | 0.830 | 0.875 | 0.870 | 0.838 | 0.696
CVC | ViT-H | 0.838 | 0.854 | 0.884 | 0.898 | 0.940 | 0.938 | 0.934 | 0.889
ViT-L | 0.815 | 0.823 | 0.848 | 0.847 | 0.934 | 0.931 | 0.920 | 0.869
ViT-B | 0.739 | 0.749 | 0.783 | 0.784 | 0.932 | 0.930 | 0.921 | 0.851
BUSI | ViT-H | 0.732 | 0.706 | 0.791 | 0.816 | 0.870 | 0.868 | 0.855 | 0.813
ViT-L | 0.744 | 0.727 | 0.800 | 0.807 | 0.875 | 0.872 | 0.865 | 0.810
ViT-B | 0.734 | 0.701 | 0.804 | 0.818 | 0.886 | 0.884 | 0.874 | 0.831
Figure 4: Comparison of always using the 1st, 2nd, or 3rd prediction versus choosing the best one per image (max) with the BB strategy for all datasets.
The Bounding Box (BB) strategy consistently exhibited superior performance
across all datasets, as illustrated in Table 2 and Table 3. Even with
variations of $5$% or $10$% (BBS5, BBS10), this method outperforms all point
prompt strategies, while BBS20 achieved results comparable to RP5. This
observation underscores the robustness of the bounding box approach, even in
the presence of minor inaccuracies while delineating the desired segmentation
region.
Regarding point prompt methods (CP, RP, RP3, RP5), an increased number of
input points corresponds to enhanced model performance. However, these
techniques could not outperform the BB, BBS5, and BBS10 strategies. Moreover,
RP5 requires greater manual intervention, rendering it more labor-intensive
compared to employing a bounding box.
Our experiments do not incorporate additional prompt points that can be
introduced post-prediction to refine the segmentation. This fine-tuning
process can be applied to both encompass regions excluded from the prediction
and eliminate regions that should not be part of the segmentation. As a
result, physicians can achieve even more precise segmentation masks with
minimal additional effort.
We also highlight that the ViT-B model attained performance levels comparable
to the larger variants of SAM, occasionally even surpassing them. Furthermore,
owing to its modest GPU memory requirements, it can be readily utilized with
cost-effective hardware, making SAM’s application in medical imaging highly
accessible without significant cost.
### 4.1 Comparison with state-of-the-art (SOTA) segmentation models
We employ the intermediate-sized SAM (ViT-L) for a comparative analysis with
the current state-of-the-art (SOTA) models. Table 4 presents a performance
comparison of SAM with the BBS5 strategy (emulating a physician annotating with
minimal error, followed by selecting the most accurate among three
predictions) against SOTA models employed on each dataset. Notably, no
baseline models were found for evaluation in the HJXR dataset.
Table 4: Comparison of the results of the BBS5 strategy using the ViT-L model with the current state-of-the-art DL models.
Dataset | Model | DSC
---|---|---
ISIC | Rema-net [5] | 0.944
SAM ViT-L BBS5 | 0.872
HAM | Rema-net [5] | 0.936
SAM ViT-L BBS5 | 0.864
CXR | Attention U-Net [31] | 0.982
ReSE-Net [32] | 0.976
SAM ViT-L BBS5 | 0.952
CVC | FSA-Net [33] | 0.947
SAM ViT-L BBS5 | 0.931
BUSI | Podda et al. [34] | 0.826
SAM ViT-L BBS5 | 0.872
HJXR-F | SAM ViT-L BBS5 | 0.973
HJXR-I | SAM ViT-L BBS5 | 0.889
SAM achieved very strong results for a zero-shot (no training/fine-tuning)
approach in comparison to the SOTA. In the BUSI dataset, SAM surpassed the
SOTA by approximately 5%, sustaining its superior performance even when
employing the BBS20 strategy, which accommodates a substantial margin of error
in image annotation. In the CVC dataset, SAM’s performance was marginally
lower (less than 2%), while in the CXR dataset, the gap was a mere 3%.
Although no directly comparable studies exist, SAM exhibited a very high DSC
($0.973$) for femur segmentation. The segmentation of the ilium is a more intricate task due to its reduced contrast with adjacent regions. Taking that into account, the results for ilium segmentation can also be considered quite strong.
For the ISIC and HAM datasets, SAM was outperformed by $\approx 7$%. However, the unique characteristics of these datasets must be taken into account, and a more nuanced analysis is presented in the next section. Moreover, the
substantial volume of available data (over $10,000$ images) renders the
training of task-specific deep learning (DL) models more viable for those
tasks. In contrast, with smaller datasets like BUSI, training an end-to-end DL
model becomes strenuous due to the scarcity of data. In such scenarios,
employing a model like SAM proves to be the best option, as it benefits from
exposure to an extensive range of data across various domains.
### 4.2 Qualitative Analysis
The analysis of medical images presents a unique set of challenges due to the
complex and diverse nature of datasets. For instance, the CXR dataset, which
consists of chest X-rays and their corresponding segmentation masks, contains
inconsistencies in the segmented regions, as depicted in Fig 5. Some ground-
truth masks include the heart while others exclude it. Still, SAM can rapidly
rectify these discrepancies by allowing users to select the most appropriate
prediction, as demonstrated in Fig 3, or by refining input points to include
or exclude specific regions as needed.
Figure 5: Example of inconsistencies within the ground-truth region in the CXR
dataset.
The standard DICOM format for X-ray images typically features a $12$ or $16$
bit depth, enabling physicians to manipulate the window/level settings for
enhanced visualization of tissues and organs. We postulate that optimizing the
window/level parameters during conversion to JPEG or PNG formats could improve
tissue delineation and subsequently enhance SAM’s performance for this imaging
modality. Nevertheless, we did not assess this approach, given that the CXR
dataset is provided in PNG format, and the HJXR dataset was normalized and
converted to PNG using its maximum and minimum values.
For the ISIC dataset, which comprises skin lesion images, we identified
numerous instances of inaccurate ground-truth mask annotations, as illustrated
in Fig 6. These inaccuracies impacted the DSC results, as the masks generated
by SAM appear to exhibit higher precision compared to the original masks.
Moreover, the presence of body hair in the ISIC and HAM datasets significantly
influences the segmentation process, particularly when employing point prompt
strategies. For example, a hair intersecting a lesion may erroneously indicate
two distinct regions instead of one. To address this issue, bounding box
strategies can be implemented to provide sufficient information to the model.
However, SAM’s exclusion of hair from the segmentation negatively affects its
performance. Additionally, the skin lesion datasets present challenges due to
indistinct lesion boundaries, rendering accurate segmentation of skin lesions
a challenging task.
Figure 6: Example of inconsistencies in the ground-truth region in the ISIC
dataset.
Ultrasound images pose considerable difficulties for DL models, attributable
to their inhomogeneous intensities and low signal-to-noise ratio, which hinder
the accurate delineation of breast tumors in datasets such as BUSI.
Furthermore, the absorption and reflection of ultrasound can give rise to
artifacts in the image, exacerbating the segmentation task even for well-
optimized models. Despite these obstacles, SAM achieved strong results in this
dataset. However, it encountered challenges in accurately segmenting the
boundaries of breast tumors due to the inherent blurred edges in ultrasound
images.
### 4.3 Guidelines
In light of our empirical findings, we propose a robust and pragmatic
framework for utilizing the Segment Anything Model (SAM) in the realm of
medical imaging tasks. This methodology empowers physicians to capitalize on
the capabilities of SAM to attain precise segmentation outcomes, while
preserving their autonomy in overseeing the process. Our recommendation is to
employ the largest SAM variation that is feasible given the constraints of the
available hardware; nonetheless, any of the three model variants may be
utilized.
1. Initiate with a bounding box prompt: our experimental results consistently
indicate that among various prompting strategies, the bounding box technique
exhibits superior performance, even in the presence of minor variations. Thus,
we advocate that physicians start the segmentation procedure by supplying a
bounding box prompt encompassing the region of interest.
2. Evaluate the generated predictions: SAM generates a triplet of segmentation
masks in response to an input image and a bounding box, each signifying a
distinct interpretation of the intended region’s dimensions. Physicians are
advised to visually scrutinize and juxtapose the three produced masks against
the source image. If there is a suitable prediction, select it. If none of the
predictions correctly segment the intended region, proceed to the next step.
3. Refine the segmentation employing point prompts: in cases where none of the
initial predictions adequately segment the intended region, assess the best
prediction and identify the areas it incorrectly captures or omits in the
segmentation. Utilize input points to include (label $1$) or exclude (label
$0$) these areas. SAM will generate three new predictions. Repeat the process
of refining the segmentation using point prompts until an adequate segmentation is achieved, as in the sketch after this list.
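The sketch below illustrates one refinement round with the SamPredictor interface, reusing the predictor set up earlier; the box and point coordinates are placeholders, and feeding the previous low-resolution logits back through mask_input is the mechanism the API provides for this kind of iteration.

```python
import numpy as np

# Step 1: bounding box prompt drawn by the physician (placeholder values).
box = np.array([60, 40, 380, 300])
masks, scores, logits = predictor.predict(box=box, multimask_output=True)

# Steps 2-3: none of the masks is adequate, so add corrective points
# (label 1 = include this area, label 0 = exclude it) and warm-start
# from the best previous prediction's low-resolution logits.
best = int(np.argmax(scores))
point_coords = np.array([[200, 150], [90, 60]])
point_labels = np.array([1, 0])

masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    box=box,
    mask_input=logits[best][None, :, :],  # shape (1, 256, 256)
    multimask_output=True,
)
```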
Fig. 7 and Fig. 8 demonstrate the application of our proposed framework on
images from the BUSI and CVC datasets, including the bounding box prompt and
subsequent predictions. Since the intended regions were accurately segmented,
the physician merely has to select them.
Figure 7: Image from the BUSI dataset with bounding box input accompanied by SAM’s predictions. Both the 2nd and 3rd predictions exhibit accurate segmentation of the intended region.

Figure 8: Image from the CVC dataset with bounding box input accompanied by SAM’s predictions. Both the 2nd and 3rd predictions exhibit accurate segmentation of the intended region.
Fig. 9 presents the application of our framework on an image from the ISIC
dataset, followed by the bounding box prompt and predictions. This represents
a more intricate scenario, as discussed earlier. None of the predictions
provided satisfactory results; therefore, the physician must evaluate the best
one (2nd) and incorporate point prompts to guide the model. Fig 10 displays
the original bounding box input in conjunction with the point prompts and the
generated predictions. A significant improvement in segmentation is observable
in the 2nd prediction due to the additional point prompts.
Figure 9: Image from the ISIC dataset with bounding box input accompanied by SAM’s predictions. None of them are adequate and require further prompt points.

Figure 10: Image from the ISIC dataset with bounding box and point inputs accompanied by SAM’s predictions. The point prompts guide the model to remove these areas. The 2nd prediction reached an adequate segmentation.
This methodology ensures that the model’s output coheres with the physician’s expertise, culminating in accurate and dependable segmentation results across diverse clinical applications and imaging modalities. A demo of this framework is available at https://github.com/Malta-Lab/SAM-zero-shot-in-Medical-Imaging.
## 5 Conclusion and Future Work
In this study, we thoroughly evaluated the zero-shot performance of SAM by
employing eight distinct prompting strategies across six datasets from four
different 2D medical image modalities. Our comprehensive analysis shed light
on the advantages and limitations of these strategies in various scenarios for
the three SAM ViT sizes. Remarkably, SAM demonstrated exceptional performance
as a zero-shot approach, achieving competitive results in comparison to the
state-of-the-art segmentation methods specifically designed or fine-tuned for
a particular modality of medical imaging. Notably, SAM outperformed the
current best performance on the BUSI dataset by a substantial margin. Taken
together, our findings underscore the immense potential of SAM as a powerful
tool for low-effort medical image segmentation.
Drawing upon our results, we propose pragmatic guidelines that facilitate easy
implementation, necessitate minimal user interaction, and yield robust
outcomes in medical imaging segmentation with SAM. By incorporating the
bounding box method and refining the segmentation using point prompts, medical
practitioners can effectively harness SAM’s potential to attain accurate
results while maintaining control over the segmentation process. Furthermore,
given the comparable performance of the three SAM sizes, practitioners can
choose any of them based on their hardware resource constraints.
The segmentation results generated by SAM have the potential to exceed the
most stringent quality standards with minimal involvement from physicians. Our
findings highlight concerns regarding the quality of some manually-annotated
ground truth masks, as SAM outcomes appear to delineate the region of interest
more accurately in certain instances. This observation holds particular
significance for labeling new datasets, as it substantially reduces the time
and effort required for this laborious and tedious task. Consequently, SAM-
generated segmentation masks offer immense promise for streamlining data
annotation processes and enhancing workflow efficiency in the area of medical
image analysis.
Future research endeavors could focus on further augmenting SAM’s capabilities
in this domain, achieving even higher performance while preserving SAM’s
extensive refinement options. Additionally, investigating the potential of
adapting SAM for 3D medical imaging represents a valuable research direction,
as it would extend the model’s applicability to a broader spectrum of medical
imaging tasks.
## References
* Wang et al. [2017] Shuo Wang, Mu Zhou, Zaiyi Liu, Zhenyu Liu, Dongsheng Gu, Yali Zang, Di Dong, Olivier Gevaert, and Jie Tian. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. _Medical image analysis_ , 40:172–183, 2017.
* Pham et al. [2000] Dzung L Pham, Chenyang Xu, and Jerry L Prince. Current methods in medical image segmentation. _Annual review of biomedical engineering_ , 2(1):315–337, 2000.
* Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18_ , pages 234–241. Springer, 2015.
* Liu et al. [2023] Jie Liu, Yixiao Zhang, Jieneng Chen, Junfei Xiao, Yongyi Lu, Bennett A. Landman, Yixuan Yuan, Alan L. Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for organ segmentation and tumor detection. _CoRR_ , abs/2301.00785, 2023. doi: 10.48550/arXiv.2301.00785. URL https://doi.org/10.48550/arXiv.2301.00785.
* Yang et al. [2023] Litao Yang, Chao Fan, Hao Lin, and Yingying Qiu. Rema-net: An efficient multi-attention convolutional neural network for rapid skin lesion segmentation. _Computers in Biology and Medicine_ , page 106952, 2023.
* Litjens et al. [2017] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. _Medical image analysis_ , 42:60–88, 2017.
* Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020.
* Kirillov et al. [2023] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_ , 2023.
* Chowdhary and Acharjya [2020] Chiranji Lal Chowdhary and D Prasanna Acharjya. Segmentation and feature extraction in medical imaging: a systematic review. _Procedia Computer Science_ , 167:26–36, 2020.
* Sharma and Aggarwal [2010] Neeraj Sharma and Lalit M Aggarwal. Automated medical image segmentation techniques. _Journal of medical physics_ , 35(1):3–14, 2010.
* Shirly and Ramesh [2019] S Shirly and K Ramesh. Review on 2d and 3d mri image segmentation techniques. _Current Medical Imaging_ , 15(2):150–160, 2019.
* Bencevic et al. [2021] Marin Bencevic, Irena Galic, Marija Habijan, and Danilo Babin. Training on polar image transformations improves biomedical image segmentation. _IEEE Access_ , 9:133365–133375, 2021. doi: 10.1109/ACCESS.2021.3116265. URL https://doi.org/10.1109/ACCESS.2021.3116265.
* Henry et al. [2022] Emerald U. Henry, Onyeka Emebob, and Conrad Asotie Omonhinmin. Vision transformers in medical imaging: A review. _CoRR_ , abs/2211.10043, 2022. doi: 10.48550/arXiv.2211.10043. URL https://doi.org/10.48550/arXiv.2211.10043.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 30. Curran Associates, Inc., 2017.
* Touvron et al. [2021] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In _International conference on machine learning_ , pages 10347–10357. PMLR, 2021.
* Wang et al. [2021] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 568–578, 2021.
* Guo et al. [2022] Ziyu Guo, Renrui Zhang, Longtian Qiu, Xianzheng Ma, Xupeng Miao, Xuming He, and Bin Cui. Calip: Zero-shot enhancement of clip with parameter-free attention. _arXiv preprint arXiv:2209.14169_ , 2022.
* Pham et al. [2021] Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. _arXiv preprint arXiv:2111.10050_ , 2021.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pages 8748–8763. PMLR, 2021.
* Jang et al. [2022] Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, and Edward Choi. Significantly improving zero-shot x-ray pathology classification via fine-tuning pre-trained image-text encoders. _arXiv preprint arXiv:2212.07050_ , 2022.
* Wang et al. [2022] Zifeng Wang, Zhenbang Wu, Dinesh Agarwal, and Jimeng Sun. Medclip: Contrastive learning from unpaired medical images and text. _arXiv preprint arXiv:2210.10163_ , 2022.
* Guo and Fan [2022] Fu-Ming Guo and Yingfang Fan. Zero-shot and few-shot learning for lung cancer multi-label classification using vision transformer. _arXiv preprint arXiv:2205.15290_ , 2022.
* Codella et al. [2019] Noel C. F. Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen W. Dusza, David A. Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael A. Marchetti, Harald Kittler, and Allan Halpern. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). _CoRR_ , abs/1902.03368, 2019. URL http://arxiv.org/abs/1902.03368.
* Tschandl et al. [2018] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset: A large collection of multi-source dermatoscopic images of common pigmented skin lesions. _CoRR_ , abs/1803.10417, 2018. URL http://arxiv.org/abs/1803.10417.
* Tschandl et al. [2020] Philipp Tschandl, Christoph Rinner, Zoe Apalla, Giuseppe Argenziano, Noel Codella, Allan Halpern, Monika Janda, Aimilios Lallas, Caterina Longo, Josep Malvehy, John Paoli, Susana Puig, Cliff Rosendahl, H. Peter Soyer, Iris Zalaudek, and Harald Kittler. Human–computer collaboration for skin cancer recognition. _Nature Medicine_ , 26(8):1229–1234, June 2020\. doi: 10.1038/s41591-020-0942-0. URL https://doi.org/10.1038/s41591-020-0942-0.
* Jaeger et al. [2014] Stefan Jaeger, Alexandros Karargyris, Sema Candemir, Les R. Folio, Jenifer Siegelman, Fiona M. Callaghan, Zhiyun Xue, Kannappan Palaniappan, Rahul K. Singh, Sameer K. Antani, George R. Thoma, Yi-Xiang J. Wang, Pu-Xuan Lu, and Clement J. McDonald. Automatic tuberculosis screening using chest radiographs. _IEEE Trans. Medical Imaging_ , 33(2):233–245, 2014. doi: 10.1109/TMI.2013.2284099. URL https://doi.org/10.1109/TMI.2013.2284099.
* Candemir et al. [2014] Sema Candemir, Stefan Jaeger, Kannappan Palaniappan, Jonathan P. Musco, Rahul K. Singh, Zhiyun Xue, Alexandros Karargyris, Sameer K. Antani, George R. Thoma, and Clement J. McDonald. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. _IEEE Trans. Medical Imaging_ , 33(2):577–590, 2014. doi: 10.1109/TMI.2013.2290491. URL https://doi.org/10.1109/TMI.2013.2290491.
* Gut [2021] Daniel Gut. X-ray images of the hip joints, 2021. URL https://data.mendeley.com/datasets/zm6bxzhmfz/1.
* Bernal et al. [2015] Jorge Bernal, Francisco Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez de Miguel, and Fernando Vilariño. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. _Comput. Medical Imaging Graph._ , 43:99–111, 2015. doi: 10.1016/j.compmedimag.2015.02.007. URL https://doi.org/10.1016/j.compmedimag.2015.02.007.
* Al-Dhabyani et al. [2020] Walid Al-Dhabyani, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy. Dataset of breast ultrasound images. _Data in Brief_ , 28:104863, 2020. ISSN 2352-3409. doi: https://doi.org/10.1016/j.dib.2019.104863. URL https://www.sciencedirect.com/science/article/pii/S2352340919312181.
* Kim and Lee [2021] Minki Kim and Byoung-Dai Lee. Automatic lung segmentation on chest x-rays using self-attention deep neural network. _Sensors_ , 21(2):369, 2021.
* [32] Tarun Agrawal and Prakash Choudhary. Rese-net: Enhanced unet architecture for lung segmentation in chest radiography images. _Computational Intelligence_.
* Zhan et al. [2023] Bangcheng Zhan, Enmin Song, and Hong Liu. Fsa-net: Rethinking the attention mechanisms in medical image segmentation from releasing global suppressed information. _Computers in Biology and Medicine_ , page 106932, 2023.
* Podda et al. [2022] Alessandro Sebastian Podda, Riccardo Balia, Silvio Barra, Salvatore Carta, Gianni Fenu, and Leonardo Piano. Fully-automated deep learning pipeline for segmentation and classification of breast ultrasound images. _Journal of Computational Science_ , 63:101816, 2022.
## 6 Supplementary Material
Table 5: DSC of predictions for six datasets using the eight proposed prompt strategies for the three SAM ViT sizes.

| Dataset | Model | Pred | CP | RP | RP3 | RP5 | BB | BBS5 | BBS10 | BBS20 |
|---|---|---|---|---|---|---|---|---|---|---|
| ISIC | ViT-H | 1st | 0.538 | 0.531 | 0.762 | 0.774 | 0.745 | 0.737 | 0.715 | 0.603 |
| ISIC | ViT-H | 2nd | 0.718 | 0.677 | 0.769 | 0.788 | 0.845 | 0.842 | 0.833 | 0.789 |
| ISIC | ViT-H | 3rd | 0.375 | 0.363 | 0.390 | 0.483 | 0.872 | 0.868 | 0.860 | 0.816 |
| ISIC | ViT-L | 1st | 0.704 | 0.665 | 0.703 | 0.700 | 0.864 | 0.861 | 0.852 | 0.805 |
| ISIC | ViT-L | 2nd | 0.518 | 0.521 | 0.768 | 0.794 | 0.763 | 0.757 | 0.733 | 0.623 |
| ISIC | ViT-L | 3rd | 0.382 | 0.366 | 0.358 | 0.362 | 0.841 | 0.836 | 0.819 | 0.730 |
| ISIC | ViT-B | 1st | 0.366 | 0.355 | 0.354 | 0.375 | 0.870 | 0.866 | 0.855 | 0.810 |
| ISIC | ViT-B | 2nd | 0.665 | 0.618 | 0.692 | 0.695 | 0.825 | 0.823 | 0.807 | 0.751 |
| ISIC | ViT-B | 3rd | 0.504 | 0.490 | 0.766 | 0.790 | 0.640 | 0.631 | 0.601 | 0.496 |
| HAM | ViT-H | 1st | 0.544 | 0.527 | 0.752 | 0.765 | 0.732 | 0.724 | 0.700 | 0.589 |
| HAM | ViT-H | 2nd | 0.729 | 0.686 | 0.768 | 0.785 | 0.838 | 0.835 | 0.824 | 0.778 |
| HAM | ViT-H | 3rd | 0.420 | 0.406 | 0.443 | 0.541 | 0.865 | 0.861 | 0.851 | 0.809 |
| HAM | ViT-L | 1st | 0.731 | 0.689 | 0.723 | 0.721 | 0.859 | 0.856 | 0.846 | 0.799 |
| HAM | ViT-L | 2nd | 0.522 | 0.518 | 0.764 | 0.793 | 0.766 | 0.761 | 0.740 | 0.626 |
| HAM | ViT-L | 3rd | 0.435 | 0.413 | 0.406 | 0.408 | 0.830 | 0.824 | 0.805 | 0.699 |
| HAM | ViT-B | 1st | 0.414 | 0.403 | 0.401 | 0.425 | 0.863 | 0.859 | 0.846 | 0.799 |
| HAM | ViT-B | 2nd | 0.659 | 0.607 | 0.681 | 0.683 | 0.810 | 0.807 | 0.795 | 0.740 |
| HAM | ViT-B | 3rd | 0.478 | 0.431 | 0.749 | 0.772 | 0.619 | 0.610 | 0.578 | 0.466 |
| CXR | ViT-H | 1st | 0.904 | 0.863 | 0.923 | 0.927 | 0.936 | 0.934 | 0.911 | 0.686 |
| CXR | ViT-H | 2nd | 0.758 | 0.727 | 0.766 | 0.828 | 0.942 | 0.939 | 0.929 | 0.826 |
| CXR | ViT-H | 3rd | 0.471 | 0.469 | 0.482 | 0.514 | 0.935 | 0.930 | 0.913 | 0.803 |
| CXR | ViT-L | 1st | 0.834 | 0.814 | 0.786 | 0.776 | 0.932 | 0.929 | 0.916 | 0.805 |
| CXR | ViT-L | 2nd | 0.915 | 0.870 | 0.930 | 0.929 | 0.940 | 0.936 | 0.906 | 0.660 |
| CXR | ViT-L | 3rd | 0.472 | 0.471 | 0.468 | 0.474 | 0.945 | 0.942 | 0.928 | 0.758 |
| CXR | ViT-B | 1st | 0.459 | 0.459 | 0.467 | 0.497 | 0.916 | 0.910 | 0.894 | 0.817 |
| CXR | ViT-B | 2nd | 0.804 | 0.782 | 0.786 | 0.803 | 0.937 | 0.933 | 0.921 | 0.813 |
| CXR | ViT-B | 3rd | 0.882 | 0.813 | 0.928 | 0.932 | 0.916 | 0.898 | 0.818 | 0.524 |
| HJXR-F | ViT-H | 1st | 0.876 | 0.822 | 0.941 | 0.948 | 0.924 | 0.908 | 0.848 | 0.618 |
| HJXR-F | ViT-H | 2nd | 0.743 | 0.767 | 0.767 | 0.776 | 0.962 | 0.958 | 0.904 | 0.746 |
| HJXR-F | ViT-H | 3rd | 0.517 | 0.543 | 0.548 | 0.599 | 0.949 | 0.945 | 0.905 | 0.723 |
| HJXR-F | ViT-L | 1st | 0.773 | 0.800 | 0.788 | 0.791 | 0.972 | 0.969 | 0.951 | 0.843 |
| HJXR-F | ViT-L | 2nd | 0.874 | 0.804 | 0.927 | 0.944 | 0.925 | 0.922 | 0.844 | 0.685 |
| HJXR-F | ViT-L | 3rd | 0.516 | 0.540 | 0.540 | 0.619 | 0.961 | 0.944 | 0.818 | 0.448 |
| HJXR-F | ViT-B | 1st | 0.466 | 0.486 | 0.481 | 0.489 | 0.924 | 0.915 | 0.888 | 0.788 |
| HJXR-F | ViT-B | 2nd | 0.733 | 0.775 | 0.742 | 0.727 | 0.958 | 0.954 | 0.926 | 0.771 |
| HJXR-F | ViT-B | 3rd | 0.911 | 0.774 | 0.909 | 0.907 | 0.899 | 0.876 | 0.735 | 0.490 |
| HJXR-I | ViT-H | 1st | 0.211 | 0.742 | 0.808 | 0.828 | 0.875 | 0.866 | 0.734 | 0.624 |
| HJXR-I | ViT-H | 2nd | 0.393 | 0.479 | 0.449 | 0.491 | 0.855 | 0.849 | 0.790 | 0.620 |
| HJXR-I | ViT-H | 3rd | 0.294 | 0.295 | 0.316 | 0.384 | 0.800 | 0.796 | 0.758 | 0.629 |
| HJXR-I | ViT-L | 1st | 0.363 | 0.540 | 0.448 | 0.451 | 0.824 | 0.817 | 0.748 | 0.594 |
| HJXR-I | ViT-L | 2nd | 0.165 | 0.758 | 0.864 | 0.860 | 0.887 | 0.877 | 0.762 | 0.555 |
| HJXR-I | ViT-L | 3rd | 0.301 | 0.306 | 0.292 | 0.330 | 0.862 | 0.841 | 0.733 | 0.580 |
| HJXR-I | ViT-B | 1st | 0.259 | 0.303 | 0.328 | 0.368 | 0.772 | 0.767 | 0.734 | 0.591 |
| HJXR-I | ViT-B | 2nd | 0.403 | 0.502 | 0.467 | 0.478 | 0.849 | 0.843 | 0.802 | 0.615 |
| HJXR-I | ViT-B | 3rd | 0.314 | 0.717 | 0.823 | 0.830 | 0.838 | 0.838 | 0.779 | 0.622 |
| CVC | ViT-H | 1st | 0.716 | 0.763 | 0.861 | 0.880 | 0.889 | 0.881 | 0.835 | 0.702 |
| CVC | ViT-H | 2nd | 0.554 | 0.544 | 0.642 | 0.754 | 0.926 | 0.924 | 0.916 | 0.844 |
| CVC | ViT-H | 3rd | 0.232 | 0.224 | 0.224 | 0.245 | 0.924 | 0.922 | 0.918 | 0.868 |
| CVC | ViT-L | 1st | 0.498 | 0.482 | 0.508 | 0.522 | 0.920 | 0.918 | 0.906 | 0.853 |
| CVC | ViT-L | 2nd | 0.702 | 0.728 | 0.836 | 0.841 | 0.873 | 0.867 | 0.818 | 0.672 |
| CVC | ViT-L | 3rd | 0.229 | 0.222 | 0.217 | 0.223 | 0.909 | 0.904 | 0.870 | 0.773 |
| CVC | ViT-B | 1st | 0.234 | 0.225 | 0.222 | 0.226 | 0.920 | 0.916 | 0.906 | 0.833 |
| CVC | ViT-B | 2nd | 0.447 | 0.440 | 0.495 | 0.510 | 0.907 | 0.906 | 0.892 | 0.796 |
| CVC | ViT-B | 3rd | 0.644 | 0.688 | 0.778 | 0.783 | 0.821 | 0.810 | 0.758 | 0.585 |
| BUSI | ViT-H | 1st | 0.583 | 0.541 | 0.736 | 0.766 | 0.754 | 0.744 | 0.713 | 0.631 |
| BUSI | ViT-H | 2nd | 0.641 | 0.616 | 0.688 | 0.735 | 0.840 | 0.837 | 0.823 | 0.768 |
| BUSI | ViT-H | 3rd | 0.192 | 0.184 | 0.196 | 0.254 | 0.863 | 0.859 | 0.848 | 0.800 |
| BUSI | ViT-L | 1st | 0.656 | 0.649 | 0.674 | 0.663 | 0.866 | 0.862 | 0.855 | 0.794 |
| BUSI | ViT-L | 2nd | 0.567 | 0.536 | 0.748 | 0.779 | 0.782 | 0.777 | 0.754 | 0.649 |
| BUSI | ViT-L | 3rd | 0.228 | 0.205 | 0.202 | 0.252 | 0.849 | 0.847 | 0.830 | 0.741 |
| BUSI | ViT-B | 1st | 0.202 | 0.192 | 0.181 | 0.213 | 0.884 | 0.881 | 0.869 | 0.823 |
| BUSI | ViT-B | 2nd | 0.634 | 0.604 | 0.682 | 0.691 | 0.832 | 0.830 | 0.818 | 0.766 |
| BUSI | ViT-B | 3rd | 0.562 | 0.522 | 0.773 | 0.797 | 0.725 | 0.722 | 0.689 | 0.582 |
# OV-DQUO: Open-Vocabulary DETR with Denoising Text Query Training and Open-
World Unknown Objects Supervision
Junjie Wang1 Bin Chen1, 2, 3 Bin Kang2 Yulin Li1
Yichi Chen2 Weizhi Xian3 Huifeng Chang4
1 Harbin Institute of Technology, Shenzhen 2 University of Chinese Academy of
Sciences
3 Harbin Institute of Technology, Chongqing Research Institute 4 CECloud
Computing Technology Co., Ltd
###### Abstract
Open-Vocabulary Detection (OVD) aims to detect objects from novel categories
beyond the base categories on which the detector is trained. However, existing
open-vocabulary detectors trained on known category data tend to assign higher
confidence to trained categories and confuse novel categories with background.
To resolve this, we propose OV-DQUO, an Open-Vocabulary DETR with Denoising
text Query training and open-world Unknown Objects supervision. Specifically,
we introduce a wildcard matching method that enables the detector to learn
from pairs of unknown objects recognized by the open-world detector and text
embeddings with general semantics, mitigating the confidence bias between base
and novel categories. Additionally, we propose a denoising text query training
strategy that synthesizes additional noisy query-box pairs from open-world
unknown objects to train the detector through contrastive learning, enhancing
its ability to distinguish novel objects from the background. We conducted
extensive experiments on the challenging OV-COCO and OV-LVIS benchmarks,
achieving new state-of-the-art results of 45.6 AP50 and 39.3 mAP on novel
categories respectively, without the need for additional training data. Models
and code are released at https://github.com/xiaomoguhz/OV-DQUO
## 1 Introduction
Open-Vocabulary Detection [37] focuses on identifying objects from novel
categories not encountered during training. Recently, Vision-Language Models
(VLMs)[22, 28, 16] pretrained on large-scale image-text pairs, such as
CLIP[22], have demonstrated impressive performance in zero-shot image
classification, providing new avenues for open-vocabulary detection.
Figure 1: (a) Detector confidence bias is a primary reason for the suboptimal
detection performance on novel categories. (b) Existing methods use VLM and
RPN to generate pseudo region-text pairs from image-caption datasets. (c)
Instead, this work leverages the open-world detector to recognize _novel
unknown objects_ within the training set and learns to match them with
wildcard text embeddings.
ViLD [6] is the first work to distill VLMs’ classification knowledge into an
object detector by aligning the detector-generated region embeddings with the
corresponding features extracted from VLMs. Subsequent methods [32, 29, 34,
36, 15] have proposed more elaborately designed strategies to improve the
efficiency of knowledge distillation, such as BARON [32], which aligns bag-of-
regions embeddings with image features extracted by VLMs. However, the context
discrepancy limits the effectiveness of knowledge distillation [44].
RegionCLIP [42] is a representative method that utilizes VLMs for pseudo-
labeling by generating pseudo region-text pairs from caption datasets[26]
using RPN and CLIP to train open-vocabulary detectors. Later works [2, 41, 40]
have further extended the implementation of pseudo-labeling, such as SASDet
[41], which explores leveraging a self-training paradigm for pseudo-labeling.
Nevertheless, these methods suffer from pseudo-label noise.
All of the above methods employ indirect utilization of VLMs, thus not
unleashing their full potential. Existing state-of-the-art methods [33, 35,
14] typically deploy a frozen CLIP image encoder as the image backbone and
perform open-vocabulary detection by extracting region features within the
prediction box. Intuitively, the performance ceiling of such methods depends
directly on the classification ability of VLMs. Therefore, current works
mainly enhance VLM’s region recognition accuracy through fine-tuning [35] or
self-distillation [33]. Yet, these methods overlook the fact that detectors
trained on known category data tend to assign higher confidence to trained
categories and confuse novel categories with background.
To verify the impact of confidence bias on novel category detection, we first
analyze the confidence assigned by VLMs and detectors to base and novel
categories, as shown in Figure 1(a). It is evident that the detector assigns
significantly lower confidence to novel category objects (e.g., umbrella) than
to known categories (e.g., person). Furthermore, we observed a significant
performance gap when using VLM to classify Ground Truth (GT) boxes compared to
detector predictions. However, this gap narrows when we manually adjust the
prediction confidence of bounding boxes based on their Intersection over Union
(IoU) with GT boxes. The experimental results reveal that confidence bias is
one of the factors responsible for suboptimal performance in novel category
detection.
Based on the above findings, we propose OV-DQUO, an open-vocabulary detection
framework with denoising text query training and open-world unknown objects
supervision. Unlike existing methods that generate pseudo region-text pairs
(Figure 1(b)), our framework proposes a wildcard matching method and a
contrastive denoising training strategy to directly learn from open-world
unknown objects, mitigating performance degradation in novel category
detection caused by confidence bias.
As shown in Figure 1(c), to address the confidence bias between base and novel
categories, OV-DQUO first utilizes an open-world detector to recognize novel
unknown objects within the training set. It then queries these unknown objects
using text embeddings with general semantics (i.e., wildcard matching) and
enables the detector to regard them as query-box matches. Since the open-world
detector cannot identify all novel unknown objects, we designed a denoising
text query training strategy to address the detector’s confusion between novel
categories and the background. This method synthesizes additional query-box
pairs by perturbing bounding boxes of unknown objects and assigning noisy text
embeddings, enabling OV-DQUO to leverage contrastive learning to better
distinguish novel objects from the background. Finally, to mitigate the impact
of confidence bias on region proposal selection, we propose RoQIs Selection,
which integrates region-text similarity with confidence scores to select
region proposals, achieving a more balanced recall of base and novel category
objects. The main contributions of this paper can be summarized as follows:
* •
Inspired by the open-world detection task of recognizing novel unknown
objects, we propose an OV-DQUO framework, which mitigates the detector’s
confidence bias on novel category detection.
* •
We design a wildcard matching method, which enables the detector to learn from
pairs of text embeddings with general semantics and novel unknown objects
recognized by the open-world detector, thereby alleviating the confidence bias
between base and novel categories.
* •
We introduce the denoising text query training strategy, which allows a
detector to perform contrastive learning from synthetic noisy query-box pairs,
thus enhancing its ability to distinguish novel objects from the background.
* •
OV-DQUO consistently outperforms existing state-of-the-art methods on the OV-
COCO and OV-LVIS open-vocabulary detection benchmarks and demonstrates
excellent performance in cross-dataset detection on COCO and Objects365.
## 2 Related Works
Open-Vocabulary Detection (OVD) is a paradigm proposed by OVR-CNN [37], which
aims to train models to detect objects from arbitrary categories, including
those not seen during training. State-of-the-art methods [14, 35, 33] leverage
a frozen VLM image encoder as the backbone to extract features and perform
open-vocabulary detection. Compared to pseudo-labeling [1, 43, 42, 40, 21, 27]
and knowledge distillation-based methods [32, 29, 34, 36, 15], these
approaches directly benefit from the large-scale pretraining knowledge of VLMs
and better generalize to novel objects. F-VLM [14] pioneered the discovery
that VLMs retain region-sensitive features useful for object detection. It
freezes the VLM and uses it as a backbone for feature extraction and region
classification. CORA [35] also uses a frozen VLM but fine-tunes it with a
lightweight region prompt layer, enhancing region classification accuracy.
CLIPself [33] reveals that the ViT version of VLM performs better on image
crops than on dense features, and explores aligning dense features with image
crop features through self-distillation. However, we identify that these
methods suffer from a confidence bias issue, leading to suboptimal performance
in novel category detection.
Open-World Detection (OWD) is a paradigm proposed by ORE[10], which aims to
achieve two goals: (1) recognizing both known category objects and the unknown
objects not present in the training set, and (2) enabling incremental object
detection learning through newly introduced external knowledge. OW-DETR [8]
attempts to identify potential unknown objects based on feature map activation
scores, as foreground objects typically exhibit stronger activation responses
compared to the background. PROB [45] performs distribution modeling on the
model output logits to identify unknown objects and decouples the
identification of background, known objects, and unknown objects. Based on the
observation that foreground regions exhibit more variability while background
regions change monotonously, MEPU [5] employs Weibull modeling on the feature
reconstruction error of these regions and proposes the Reconstruction Error-
based Weibull (REW) model. REW assigns likelihood scores to region proposals
that potentially belong to unknown objects. These methods inspire us to
leverage open-world detectors to address the confidence bias issue in OVD.
## 3 Methodology
In this section, we present OV-DQUO, a novel OVD framework with denoising text
query training and open-world unknown objects supervision. An overview is
given in Figure 2. First, we briefly introduce the OVD setup. Then, we detail
the open-world pseudo-labeling pipeline and the corresponding wildcard
matching strategy, which is our key approach for mitigating the confidence
bias between known and novel categories (Sec. 3.1). Subsequently, we elaborate
on the denoising text query training strategy, which enhances the model's ability
distinguish novel objects from the background (Sec. 3.2). Finally, we
introduce the region of query interests selection module, which achieves a
more balanced recall of base and novel category objects (Sec. 3.3).
Task Formulation. In our study, we follow the classical open-vocabulary
problem setup as in OVR-CNN [37]. In this setup, only partial class
annotations of the dataset are available during the training process, commonly
referred to as base classes and denoted by the symbol
$\mathcal{C}^{\text{base}}$. During the inference stage, the model is required
to recognize objects from both the base classes and the novel classes (denoted
as $\mathcal{C}^{\text{novel}}$, where
$\mathcal{C}^{\text{base}}\cap\mathcal{C}^{\text{novel}}=\varnothing$) that
were not seen during training, while the names of the novel classes are
provided as clues during inference.
Figure 2: Overview of OV-DQUO. (a) Open-world pseudo labeling pipeline, which
iteratively trains the detector, generates unknown object proposals, estimates
and filters foreground probabilities, and updates the training set. (b)
Denoising text query training, which enables contrastive learning with
synthetic noisy query-box pairs from unknown objects. (c) RoQIs selection
module, which considers both objectness and region-text similarity for
selecting regions of query interest.
### 3.1 Open-World Pseudo Labeling & Wildcard Matching
Unknown object proposals from the external OLN. Existing works [42, 6, 43, 41,
1, 2] leverage RPNs to mine potential novel objects, but these RPNs are biased
towards the base classes they are trained on and perform poorly on novel
classes. Unlike these approaches, we leverage the Object Localization Network
(OLN) [11] to recognize novel unknown objects from the training set in the OV-
DQUO framework, as shown in Figure 2(a). OLN is an open-world detector trained
to estimate the objectness of each region purely based on how well the
location and shape of a region overlap with any ground-truth object (e.g.,
centerness and IoU). After training OLN with $\mathcal{C}^{\text{base}}$
annotations from the OVD benchmark, we apply it to the training set to run
inference and generate open-world unknown object proposals. Specifically,
given an input image $I\in\mathbb{R}^{3\times H\times W}$, OLN outputs a
series of tuples $\mathcal{R}=\\{r_{1},r_{2},\ldots,r_{n}\\}$, where each
$r_{i}=[b_{i},q_{i}]$. Here, $b_{i}$ represents the coordinates of an unknown
proposal, and $q_{i}$ denotes the localization quality estimations.
Foreground likelihood estimation for novel unknown objects. Reducing the
impact of noisy labels is a key challenge in pseudo-label learning. Inspired
by [5], we leverage a probability distribution, which we denote as the
Foreground Estimator (FE), to estimate the likelihood that a novel unknown
object $r_{i}$ belongs to a foreground region. FE is based on the Weibull
distribution and is modeled upon the feature reconstruction error of the
foreground and background regions. Specifically, we first train a feature
reconstruction network using images from the OVD benchmark in an unsupervised
setting. Then, we collect the feature reconstruction errors for foreground and
background regions based on the $\mathcal{C}^{\text{base}}$ annotations.
Subsequently, we apply maximum likelihood estimation on Equation 1 to model
the foreground and background Weibull distributions, denoted as
$\mathcal{D}_{\textit{fg}}$ and $\mathcal{D}_{\textit{bg}}$, respectively.
$\mathcal{D}(\eta|a,c)=ac\left[1-\exp\left(-\eta^{c}\right)\right]^{a-1}\exp\left(-\eta^{c}\right)\eta^{c-1}$
(1)
where symbols $a$ and $c$ represent the shape parameters of the distribution,
while $\eta$ represents the feature reconstruction error of the foreground or
background region. With $\mathcal{D}_{\textit{fg}}$ and
$\mathcal{D}_{\textit{bg}}$, we can estimate the foreground likelihood $w_{i}$
for each novel unknown object $r_{i}=[b_{i},q_{i}]$ in $\mathcal{R}$ using
Equation 2, resulting in
$\hat{\mathcal{R}}=\\{\hat{r}_{1},\hat{r}_{2},\ldots,\hat{r}_{n}\\}$, where
$\hat{r}_{i}=[b_{i},q_{i},w_{i}]$.
$w_{i}=\frac{\mathcal{D}_{\textit{fg}}\left(\eta(b_{i})\right)}{\mathcal{D}_{\textit{fg}}\left(\eta(b_{i})\right)+\mathcal{D}_{\textit{bg}}\left(\eta(b_{i})\right)},$
(2)
$\hat{\mathcal{R}}$ are used to update the training set annotations
$\mathcal{C}^{\text{base}}$ after being filtered by ground truth annotations
and some heuristic criteria. Once the training set is updated, it can be used
to retrain OLN. Subsequently, the entire process can be iterated to yield more
unknown objects, as shown in Figure 2(a). The visualization of unknown
proposals and their corresponding foreground likelihood estimations are
provided in Appendix A.3, and the details of the heuristic criteria can be
found in Appendix A.6.
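To make the estimator concrete, below is a minimal NumPy sketch of Eqs. (1)-(2); the `weibull_density` helper, the shape parameters, and the toy reconstruction errors are illustrative assumptions rather than values fitted in the paper.

```python
import numpy as np

def weibull_density(eta, a, c):
    """Density of Eq. (1): a*c*[1 - exp(-eta^c)]^(a-1) * exp(-eta^c) * eta^(c-1)."""
    eta = np.asarray(eta, dtype=np.float64)
    ec = np.exp(-eta ** c)
    return a * c * (1.0 - ec) ** (a - 1) * ec * eta ** (c - 1)

def foreground_likelihood(recon_err, fg_params, bg_params):
    """Eq. (2): w_i = D_fg(eta_i) / (D_fg(eta_i) + D_bg(eta_i))."""
    d_fg = weibull_density(recon_err, *fg_params)
    d_bg = weibull_density(recon_err, *bg_params)
    return d_fg / (d_fg + d_bg + 1e-12)  # small epsilon guards against 0/0

# Toy usage: the (a, c) pairs would come from maximum likelihood fits on the
# reconstruction errors of base-class foreground and background regions.
errors = np.array([0.3, 0.8, 1.5])  # reconstruction errors of three proposals
w = foreground_likelihood(errors, fg_params=(2.0, 1.5), bg_params=(1.2, 0.8))
```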
Learning from open-world unknown objects via wildcard matching. The additional
supervision signals provided by open-world detectors enable OV-DQUO to avoid
treating novel objects as background during training, thereby mitigating the
confidence bias between known and novel categories. However, applying an open-
vocabulary training framework to open-world pseudo-labels raises the following
challenge: open-world unknown objects lack category information.
Unlike existing works [42, 41, 2] that re-label each proposal to specific
categories using VLMs, we propose to match open-world unknown objects directly
using text embeddings with general semantics, thereby avoiding additional
label noise. Specifically, let $\mathcal{V}_{t}$ represent the text encoder of
the VLM. The query text for unknown objects is "a photo of a
$\\{\text{wildcard}\\}$", denoted as $T_{wc}$, where the wildcard can be terms
like "object" or "thing." The query text for base classes is "a photo of a
$\\{\mathcal{C}^{\text{base}}\\}$", denoted as $T_{base}$. In the learning
process of pseudo-labels, if a region proposal $p_{i}$ generated by the OV-
DQUO encoder has an IoU with any pseudo-label in $\hat{\mathcal{R}}$ greater
than the threshold $\tau$, we assign the proposal with wildcard query
embedding $\mathcal{V}_{t}(T_{wc})$; otherwise, we assign it the text
embeddings of the base category with the maximum similarity,
$\mathcal{V}_{t}(T_{base}^{*})$, as shown in the following equation:
$(m_{i},\hat{p}_{i})=\operatorname{Decoder}(q_{i},p_{i}),\ \text{where}\
q_{i}=\begin{cases}\mathcal{V}_{t}(T_{wc})&\text{if
}\operatorname{IoU}(p_{i},\hat{\mathcal{R}})>\tau,\\\
\mathcal{V}_{t}(T_{base}^{*})&\text{otherwise}.\end{cases}$ (3)
where $\hat{\mathcal{R}}$ represents the set of open-world pseudo-labels. The
decoder of OV-DQUO iteratively refines each query with its associated anchor
box $(q_{i},p_{i})$ into output $o_{i}=(m_{i},\hat{p}_{i})$, where $m_{i}$
denotes the probability that the input query embedding matches the category of
its corresponding bounding box, and $\hat{p}_{i}$ represents the predicted
box. To achieve text query conditional matching, OV-DQUO constrains each
ground-truth box to match predictions with the same category query embedding,
including the pseudo-labels. Specifically, given a prediction set
$\mathcal{O}^{wc}=\\{o_{1}^{wc},o_{2}^{wc},\ldots,o_{n}^{wc}\mid
q_{i}=\mathcal{V}_{t}(T_{wc})\\}$ that is conditioned on wildcard query
embedding, the class-aware Hungarian matching algorithm
$\mathcal{H}_{\text{cls}}$ yields the optimal permutation
$\mathcal{M}^{wc}=\\{(\hat{r}_{1},o_{1}^{wc}),(\hat{r}_{2},o_{2}^{wc}),\ldots,(\hat{r}_{k},o_{k}^{wc})\\}$
that minimizes the matching cost $\mathcal{L}_{\text{cost}}$ between the open-
world pseudo-labels set $\hat{\mathcal{R}}$ and the predicted set
$\mathcal{O}^{wc}$ as follows:
$\mathcal{M}^{wc}=\mathcal{H}_{\text{cls}}\left(\hat{\mathcal{R}},\
\mathcal{O}^{wc},\ \mathcal{L}_{\text{cost}}\right),\ \text{where}\
\mathcal{L}_{\text{cost}}=\mathcal{L}_{\text{focal
}}(m_{i}^{wc})+\mathcal{L}_{\text{bbox}}(\hat{p}_{i}^{wc},\hat{r}_{i})$ (4)
$\mathcal{L}_{\text{focal}}$ denotes the binary focal loss [19], while
$\mathcal{L}_{\text{bbox }}$ consists of L1 loss and GIoU loss [38]. With the
matching results, the loss for unknown objects and base annotations can be
expressed as follows:
$\displaystyle\mathcal{L}_{pseudo}=\sum_{o_{i}^{wc}\in\mathcal{M}^{wc}}w_{i}\mathcal{L}_{\text{focal}}\left(m_{i}^{wc}\right),\quad\mathcal{L}_{base}=\sum_{c\in\mathcal{C}^{\text{base}}}\sum_{o_{i}^{c}\in\mathcal{M}^{c}}\left(\mathcal{L}_{\text{focal}}\left(m_{i}^{c}\right)+\mathcal{L}_{\text{bbox}}\left(\hat{p}_{i}^{c},y_{i}^{c}\right)\right)$
(5)
where $o_{i}^{wc}=(m_{i}^{wc},\hat{p}_{i}^{wc})$ and
$o_{i}^{c}=(m_{i}^{c},\hat{p}_{i}^{c})$ are the predictions selected by the
Hungarian matching algorithm, whose query embeddings are
$\mathcal{V}_{t}(T_{wc})$ and $\mathcal{V}_{t}(T_{c})$, respectively.
$y_{i}^{c}$ represents a GT of base category $c$. $w_{i}$ is the foreground
probability estimation of unknown object $\hat{r}_{i}$. We only compute the
$\mathcal{L}_{\text{focal}}$ for unknown objects. Additionally, the
classification targets for predictions matched by $\mathcal{H}_{\text{cls}}$
are 1; otherwise, the target is 0. We omit them from Equation 5 for
simplicity.
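The assignment rule in Eq. (3) amounts to an IoU test against the pseudo-label set followed by a fallback to the most similar base-class prompt. The following PyTorch sketch illustrates this under our own assumptions about tensor shapes; the `assign_query_embeddings` helper is illustrative, not the released implementation.

```python
import torch
from torchvision.ops import box_iou

def assign_query_embeddings(proposals, pseudo_boxes, region_feats,
                            wc_embed, base_embeds, tau=0.5):
    """Eq. (3): proposals whose IoU with any open-world pseudo-label exceeds tau
    receive the wildcard text embedding V_t(T_wc); all others receive the
    base-class embedding with maximum region-text similarity, V_t(T_base^*).

    proposals:    (P, 4) xyxy boxes from the encoder
    pseudo_boxes: (U, 4) open-world pseudo-label boxes
    region_feats: (P, D) region visual embeddings, assumed L2-normalized
    wc_embed:     (D,)   embedding of "a photo of a {wildcard}"
    base_embeds:  (C, D) base-class prompt embeddings, assumed L2-normalized
    """
    iou = box_iou(proposals, pseudo_boxes)                      # (P, U)
    is_unknown = iou.max(dim=1).values > tau                    # IoU(p_i, R-hat) > tau
    best_base = (region_feats @ base_embeds.t()).argmax(dim=1)  # index of T_base^*
    queries = base_embeds[best_base]                            # default assignment
    queries[is_unknown] = wc_embed                              # wildcard override
    return queries, is_unknown
```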
### 3.2 Denoising Text Query Training
Since the open-world detector cannot recognize all potential novel objects and
provide supervision signals, we propose denoising text query training to
enhance a detector’s ability to distinguish novel objects from the background.
We achieve this by enabling OV-DQUO to perform contrastive learning from
synthetic noisy query-box pairs, as shown in Figure 2(b). Specifically, for a
given unknown object box $\hat{r}_{i}$, $2N$ noise proposals
$\tilde{\mathcal{R}}=\\{\tilde{r}_{1},\tilde{r}_{2},\ldots,\tilde{r}_{2N}\\}$
are generated based on its coordinates with two noise scales $\lambda_{1}$ and
$\lambda_{2}$, where $\lambda_{1}<\lambda_{2}$. Among these proposals, the
first $N-1$ region proposals have a smaller noise scale $\lambda_{1}$ and are
regarded as positive samples during training. In contrast, the remaining
proposals from $N$ to $2N-1$ have a larger noise scale $\lambda_{2}$ and are
treated as negative samples. For query embedding $q_{i}$, if a noisy region
proposal $\tilde{r}_{i}$ belongs to the positive samples, we query it with the
correct text embedding $\mathcal{V}_{t}(T_{wc})$. In contrast, for negative
samples, we randomly select a proportion $\rho$ of samples and assign
incorrect text embeddings of base categories $\mathcal{V}_{t}(T_{base})$,
where $\rho$ is a noise scale parameter. The whole process is as follows:
$\displaystyle\tilde{r}_{i}$
$\displaystyle=\begin{cases}\hat{r}_{i}+\lambda_{1}\cdot\epsilon(\hat{r}_{i}),&\text{if
}0\leq i<N,\\\
\hat{r}_{i}+\lambda_{2}\cdot\epsilon(\hat{r}_{i}),&\text{otherwise}.\end{cases}\quad
q_{i}$ $\displaystyle=\begin{cases}\mathcal{V}_{t}(T_{base}),&\text{if }N\leq
i<2N\text{ and }R(i)<\rho,\\\
\mathcal{V}_{t}(T_{wc}),&\text{otherwise}.\end{cases}$ (6)
where $R(i)\sim\text{Uniform}(0,1)$ is a random function, and $\epsilon$ is a
function that randomly computes an offset from the input box coordinates.
query training utilizes contrastive learning by treating accurate bounding
boxes with correct queries as positive samples, and bounding boxes that
partially cover objects as negative samples, regardless of the query. The
denoising part is performed simultaneously with the vanilla training part
while using the attention mask for isolation. The denoise training loss and
overall training objective for OV-DQUO can be expressed as follows:
$\displaystyle\mathcal{L}_{denoise}=\sum_{i=0}^{2N-1}w_{i}\mathcal{L}_{\text{focal}}\left(\tilde{m}_{i},\mathbb{I}_{(0\leq i<N)}\right),\
\text{where}\ \tilde{m}_{i}=\operatorname{Decoder}(q_{i},\tilde{r}_{i})$ (7)
$\displaystyle\mathcal{L}_{total}=\mathcal{L}_{pseudo}+\mathcal{L}_{base}+\mathcal{L}_{denoise}$
(8)
where $\mathbb{I}_{(0\leq i<N)}$ is the indicator function, which equals 1 if
$0\leq i<N$ and 0 otherwise. $\tilde{m}_{i}$ denotes the probability that query
embedding $q_{i}$ matches the content within bounding box $\tilde{r}_{i}$.
$\mathcal{L}_{pseudo}$ and $\mathcal{L}_{base}$ are vanilla pseudo-label
learning loss and base category loss mentioned above.
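A minimal sketch of the pair-synthesis step in Eq. (6) follows; the box-jitter routine standing in for $\epsilon(\cdot)$ and the default hyperparameter values are our assumptions, not those of the released code.

```python
import torch

def synthesize_noisy_pairs(box, wc_embed, base_embeds, N=4,
                           lam1=0.1, lam2=0.4, rho=0.5):
    """Eq. (6): build 2N noisy query-box pairs from one unknown-object box.
    Indices 0..N-1 are positives (small jitter lam1, wildcard query); indices
    N..2N-1 are negatives (large jitter lam2), a fraction rho of which are
    queried with a randomly chosen base-class embedding instead.
    """
    def jitter(b, lam, n):
        # Offset each coordinate by up to lam times the box size (our epsilon).
        w, h = b[2] - b[0], b[3] - b[1]
        scale = torch.stack([w, h, w, h])
        return b + lam * (2 * torch.rand(n, 4) - 1) * scale

    boxes = torch.cat([jitter(box, lam1, N), jitter(box, lam2, N)])  # (2N, 4)

    queries = wc_embed.expand(2 * N, -1).clone()          # default: wildcard query
    flip = torch.rand(N) < rho                            # R(i) < rho for negatives
    rand_cls = torch.randint(len(base_embeds), (int(flip.sum()),))
    queries[N:][flip] = base_embeds[rand_cls]             # wrong base-class queries
    targets = torch.cat([torch.ones(N), torch.zeros(N)])  # labels for L_focal
    return boxes, queries, targets
```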
### 3.3 Region of Query Interests Selection
Existing two-stage OVD methods select region proposals based on either class-
agnostic objectness[42, 33] or region-text similarity[20]. However, as we
mentioned, objectness tends to favor the known categories. Region-text
similarities exhibit less bias when leveraging a frozen VLM image encoder as
the backbone, but they are insensitive to localization quality. As shown in
Figure 2(c), we propose Region of Query Interests (RoQIs) selection, a novel
method that considers both objectness and region-text similarity for selecting
region proposals, achieving a more balanced recall of base and novel category
objects. Specifically, given the region proposals set $\mathcal{R}$ and
corresponding objectness score vector $O$, VLM feature map $\bm{\phi}$, and
category name text embedding $\mathbf{L}$, the region of query interests set
$\mathcal{R}^{*}$ for the next stage is generated as follows:
$\mathcal{R}^{*}=\operatorname{gather}(\mathcal{R},t,N),\ \text{where
}t=(\operatorname{max}(\operatorname{RoIAlign}(\mathcal{R},\bm{\phi})\cdot\mathbf{L}^{\top}))^{\bm{\alpha}}\cdot
O^{(1-\bm{\alpha})}$ (9)
where $\operatorname{gather}$ denotes the operation of selecting the top-$N$
regions from $\mathcal{R}$ according to $t$. RoIAlign [9] is used to obtain
region features within bounding boxes from the feature map $\bm{\phi}$.
$\operatorname{max}$ takes the maximum similarity between each region's visual
embedding and all text embeddings. $\bm{\alpha}$ is the weighted geometric
mean parameter.
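The selection rule of Eq. (9) can be sketched in a few lines of PyTorch; the `roqis_select` helper, the unit spatial scale, and the value of $\bm{\alpha}$ below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def roqis_select(boxes, objectness, feat_map, text_embeds, num_keep, alpha=0.35):
    """Eq. (9): rank proposals by t = (max region-text sim)^alpha * O^(1-alpha)
    and keep the top-N as regions of query interest.

    boxes:       (P, 4) xyxy proposals R;  objectness: (P,) scores O in [0, 1]
    feat_map:    (1, D, H, W) frozen-VLM feature map phi (spatial scale 1 assumed)
    text_embeds: (C, D) category text embeddings L, assumed L2-normalized
    """
    rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)       # prepend batch idx
    region = roi_align(feat_map, rois, output_size=(1, 1)).flatten(1)  # RoIAlign(R, phi)
    region = F.normalize(region, dim=1)
    sim = (region @ text_embeds.t()).max(dim=1).values.clamp(min=0.0)  # max over categories
    t = sim.pow(alpha) * objectness.pow(1.0 - alpha)                   # weighted geometric mean
    keep = t.topk(min(num_keep, len(t))).indices                       # gather(R, t, N)
    return boxes[keep], keep
```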
## 4 Experiments
### 4.1 Dataset & Training & Evaluation
OV-COCO benchmark. Following [37], we divide the 80 classes in the COCO
dataset [18] into 48 base classes and 17 novel classes. In this benchmark,
models are trained on the 48 base classes, which contain 107,761 images and
665,387 instances. Subsequently, the models are evaluated on the validation
set, which includes both the base and novel classes, containing 4,836 images
and 33,152 instances. For the OV-COCO benchmark, we use
$\text{AP}_{50}^{\text{Novel}}$ as our evaluation metric, which calculates the
mean average precision at an IoU of 50% for novel classes.
OV-LVIS benchmark. Following standard practice [42, 6], we remove categories
with rare tags in the LVIS [7] training set. Models are trained on 461 common
classes and 405 frequent classes, which contain 100,170 images and 1,264,883
instances. After training, the models are evaluated on the validation set,
which includes the common, frequent, and rare classes, containing 19,809
images and 244,707 instances. For the OV-LVIS benchmark, we use
$\text{mAP}_{r}$ as our evaluation metric, which calculates the box AP
averaged on IoUs from 0.5 to 0.95 for rare classes.
### 4.2 Implementation Details
Model Specifications. OV-DQUO is built on the closed-set detector DINO [39].
To adapt it for the open-vocabulary setting, we follow the previous
practice[36, 35] of modifying the decoder and letting it output matching
probabilities conditioned on the input query. OV-DQUO is configured to have
1,000 object queries, 6 encoder layers, and 6 decoder layers. In the OV-COCO
benchmark, we use CLIP of R50 and R50x4 [35] as the backbone networks. In the
OV-LVIS benchmark, we use the self-distilled CLIP of ViT-B/16 and ViT-L/14
[33] as the backbone network. For the text embedding of each category, following
previous works [35, 36, 33], we calculate the average representation of
each category under 80 prompt templates using the text encoder of the VLM,
including the wildcard. We employ an MLP layer to transform the text embedding
dimension of VLMs into 256.
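For concreteness, a sketch of the prompt-template averaging step is shown below, using the OpenAI `clip` package as a stand-in text encoder; the truncated template list and the helper name are illustrative assumptions, and the MLP projection to 256 dimensions is omitted.

```python
import torch
import clip  # stand-in for the VLM text encoder

# A small subset of the 80 prompt templates used in prior work.
TEMPLATES = ["a photo of a {}.", "a cropped photo of a {}.", "a photo of the small {}."]

@torch.no_grad()
def category_text_embeddings(class_names, model, device="cpu"):
    """Average each category's embedding over prompt templates, then re-normalize."""
    embeds = []
    for name in class_names:                   # base classes plus the wildcard
        tokens = clip.tokenize([t.format(name) for t in TEMPLATES]).to(device)
        e = model.encode_text(tokens).float()  # (T, D), one row per template
        e = e / e.norm(dim=-1, keepdim=True)
        mean = e.mean(dim=0)
        embeds.append(mean / mean.norm())
    return torch.stack(embeds)                 # (C, D)

# Usage sketch:
# model, _ = clip.load("RN50", device="cpu")
# text_embeds = category_text_embeddings(["person", "dog", "object"], model)
```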
Training & Hyperparameters. We train OV-DQUO using 8 GPUs with a batch size of
4 on each GPU, using the AdamW optimizer with a learning rate of
$1\mathrm{e}{-4}$ and a weight decay of $1\mathrm{e}{-4}$. To stabilize
training, we evaluate the exponential moving average (EMA) of the model
weights after training. The cost hyperparameters for class, bbox, and GIoU in the
Hungarian matching algorithm are set to 2.0, 5.0, and 2.0, respectively. More
details about the model settings and training parameters of OV-DQUO and the
open-world pseudo-labeling process can be found in Appendix A.5.
### 4.3 Benchmark Results
OV-COCO. Table 1 summarizes the main results of OV-DQUO on the OV-COCO
benchmark. To ensure a fair comparison, we detail the use of external training
resources, backbone size, and access to novel class names during training for
each method, as these factors vary across methods and significantly impact
performance. It can be seen that OV-DQUO consistently outperforms all state-
of-the-art methods in novel object detection, achieving the best results of
39.2/45.6 $\text{AP}_{50}^{\text{Novel}}$ with backbone networks of
RN50/R50x4, respectively. Note that CLIPSelf[33] is based on the EVA version
of CLIP[28], which is larger than our backbone and has stronger zero-shot
classification capabilities. However, OV-DQUO still outperforms CLIPSelf by
1.3 AP50 on novel categories.
Table 1: Comparison with state-of-the-art open-vocabulary object detection
methods on OV-COCO. Caption supervision indicates that the method learns from
extra image-text pairs, while CLIP supervision refers to transferring
knowledge from CLIP. The column ’Novel’ specifies whether a method requires
access to novel class names during training. †: implemented with the EVA
version of CLIP[28]. P-L, R-AT, and KD-based are classifications of methods,
denoting pseudo-labeling, region-aware training, and knowledge distillation-
based approaches, respectively, as defined in [44].
| Method | Taxonomy | Supervision | Backbone | Novel | AP$_{50}^{\text{Novel}}$ | AP$_{50}^{\text{Base}}$ | AP$_{50}^{\text{All}}$ |
|---|---|---|---|---|---|---|---|
| ViLD [6] | KD-based | CLIP | RN50 | ✓ | 27.6 | 59.9 | 51.3 |
| Detic [43] | P-L | Caption [23] | RN50 | ✗ | 27.8 | 47.1 | 42.0 |
| OV-DETR [36] | KD-based | CLIP | RN50 | ✓ | 29.4 | 61.0 | 52.7 |
| RegionCLIP [42] | P-L | Caption [26] | RN50 | ✗ | 31.4 | 57.1 | 50.4 |
| VLDet [17] | R-AT | Caption [3] | RN50 | ✗ | 32.0 | 50.6 | 45.8 |
| MEDet [2] | R-AT | Caption [3] | RN50 | ✗ | 32.6 | 54.0 | 49.4 |
| BARON-KD [32] | KD-based | CLIP | RN50 | ✗ | 34.0 | 60.4 | 53.5 |
| VL-PLM [40] | P-L | CLIP | RN50 | ✓ | 34.4 | 60.2 | 53.5 |
| CLIM [34] | KD-based | CLIP | RN50 | ✗ | 36.9 | - | - |
| SAS-Det [41] | P-L | CLIP | RN50x4 | ✓ | 37.4 | 58.5 | 53.0 |
| RegionCLIP [42] | P-L | Caption [26] | RN50x4 | ✗ | 39.3 | 61.6 | 55.7 |
| CORA [35] | R-AT | CLIP | RN50x4 | ✗ | 41.7 | 44.5 | 43.8 |
| PromptDet [27] | P-L | Caption [24] | ViT-B/16 | ✗ | 30.6 | 63.5 | 54.9 |
| RO-ViT [13] | R-AT | CLIP | ViT-L/16 | ✗ | 33.0 | - | 47.7 |
| CFM-ViT [12] | R-AT | CLIP | ViT-L/16 | ✗ | 34.1 | - | 46.0 |
| CLIPSelf [33] | KD-based | CLIP | ViT-B/16† (87M) | ✗ | 37.6 | 54.9 | 50.4 |
| CLIPSelf [33] | KD-based | CLIP | ViT-L/14† (304M) | ✗ | 44.3 | 64.1 | 59.0 |
| OV-DQUO (Ours) | P-L | CLIP | RN50 (38M) | ✗ | 39.2 | 41.8 | 41.1 |
| OV-DQUO (Ours) | P-L | CLIP | RN50x4 (87M) | ✗ | 45.6 | 49.0 | 48.1 |
Table 2: Comparison with state-of-the-art open-vocabulary object detection
methods on OV-LVIS.

| Method | Supervision | Backbone | $\text{mAP}_{r}$ |
|---|---|---|---|
| ViLD [6] | CLIP | RN50 | 16.3 |
| OV-DETR [36] | CLIP | RN50 | 17.4 |
| BARON-KD [32] | CLIP | RN50 | 22.6 |
| RegionCLIP [42] | Caption [26] | RN50x4 | 22.0 |
| CORA+ [35] | Caption [23] | RN50x4 | 28.1 |
| F-VLM [14] | CLIP | RN50x64 | 32.8 |
| CFM-ViT [12] | CLIP | ViT-L/14 | 33.9 |
| RO-ViT [13] | CLIP | ViT-H/16 | 34.1 |
| CLIPSelf [33] | CLIP | ViT-L/14 | 34.9 |
| CoDet [21] | Caption [26] | ViT-L/14 | 37.0 |
| OV-DQUO (Ours) | CLIP | ViT-B/16 | 29.7 |
| OV-DQUO (Ours) | CLIP | ViT-L/14 | 39.3 |

Table 3: Cross-dataset transfer detection from OV-LVIS to COCO and
Objects365. †: Detection-specialized pretraining with SoCo [31].

| Method | AP (COCO) | AP50 (COCO) | AP75 (COCO) | AP (O365) | AP50 (O365) | AP75 (O365) |
|---|---|---|---|---|---|---|
| Supervised [6] | 46.5 | 67.6 | 50.9 | 25.6 | 38.6 | 28.0 |
| ViLD [6] | 36.6 | 55.6 | 39.6 | 11.8 | 18.0 | 12.6 |
| DetPro† [4] | 34.9 | 53.8 | 37.4 | 12.1 | 18.8 | 12.9 |
| BARON [32] | 36.2 | 55.7 | 39.1 | 13.6 | 21.0 | 14.5 |
| RO-ViT [13] | - | - | - | 17.1 | 26.9 | 19.5 |
| F-VLM [14] | 37.9 | 59.6 | 41.2 | 16.2 | 25.3 | 17.5 |
| CoDet [21] | 39.1 | 57.0 | 42.3 | 14.2 | 20.5 | 15.3 |
| OV-DQUO (Ours) | 39.2 | 55.8 | 42.5 | 18.4 | 26.8 | 19.6 |
OV-LVIS. Table 2 summarizes the main results of OV-DQUO on the OV-LVIS
benchmark. Since the LVIS dataset encompasses considerably more categories than
COCO (1203 vs. 80), we replace the backbone with the ViT-B/16 and ViT-L/14
networks [33], which have stronger classification capabilities, in the OV-LVIS
experiments. It is worth noting that this does not lead to an unfair
comparison, as OV-DQUO still consistently outperforms all state-of-the-art
methods, including those using the same [33] (+4.4 $\text{mAP}_{r}$) or larger
backbones [14] (+5.8 $\text{mAP}_{r}$), or using external image-caption data
[21] (+2.3 $\text{mAP}_{r}$), achieving the best result of 39.3
$\text{mAP}_{r}$.
Transfer to Other Datasets. Since the open-vocabulary detector may encounter
data from different domains in open-world applications, we further evaluate
OV-DQUO under a cross-dataset setting. Table 3 summarizes the main results of
transferring OV-DQUO trained on OV-LVIS to the validation sets of COCO[18] and
Object365[25]. We do not finetune OV-DQUO but only replace the text query
embedding with the 80 categories in COCO and the 365 categories in Object365
during testing. Experiments show that OV-DQUO achieves competitive results on
COCO and outperforms the previous leading method [14] by 1.3 AP on Object365,
demonstrating robust cross-dataset generalization.
### 4.4 Ablation Study
Ablation Study on Main Components. As presented in Table 4, with the RN50x4
backbone, the vanilla OV-DQUO achieves 41.7 $\text{AP}_{50}$ on novel
categories (#1). Additional supervision from open-world unknown objects boosts
this to 43.3 $\text{AP}_{50}$ (#2). Furthermore, adding denoising text query
training brings an additional 1.7 $\text{AP}_{50}$ performance gain (#3),
demonstrating its effectiveness in improving discriminability between novel
categories and backgrounds. Finally, RoQIs selection contributes another 0.6
$\text{AP}_{50}$ to the novel categories (#5).
Effects of Matching Different Wildcards. As presented in Table 5, we explore
matching different wildcard text embeddings with open-world unknown objects.
In addition to "Object", we select several words that can represent general
foreground regions, such as "Salient Object", "Foreground Region", "Target",
and "Thing", and investigate their impact on performance. Experimental results
demonstrate that compared to intricate wildcards ("Foreground Region","Salient
Object"), simpler and more general wildcards ("Thing","Object") can achieve
better results.
Table 4: Ablation study on the main effective components of OV-DQUO.

| # | Open-World Supervision | Denoising Text Query Training | RoQIs Selection | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{Base}}$ | $\text{AP}_{50}^{\text{All}}$ |
|---|---|---|---|---|---|---|
| 1 | - | - | - | 41.7 | 48.1 | 46.4 |
| 2 | ✓ | ✗ | ✗ | 43.3 | 46.8 | 45.8 |
| 3 | ✓ | ✓ | ✗ | 45.0 | 49.0 | 47.9 |
| 4 | ✗ | ✗ | ✓ | 42.7 | 48.0 | 46.6 |
| 5 | ✓ | ✓ | ✓ | 45.6 | 49.0 | 48.1 |

Table 5: Ablation study on matching different wildcards with unknown objects.

| Wildcard | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{Base}}$ | $\text{AP}_{50}^{\text{All}}$ |
|---|---|---|---|
| "Salient Object" | 44.4 | 47.9 | 47.0 |
| "Foreground Region" | 44.1 | 47.7 | 46.7 |
| "Target" | 44.5 | 48.6 | 47.5 |
| "Thing" | 44.9 | 48.0 | 47.2 |
| "Object" | 45.0 | 48.9 | 47.9 |
Figure 3: Confidence score distributions of (a) the baseline detector [35] and (b) OV-DQUO.
Figure 4: Embedding distributions of (c) the baseline detector [35] and (d) OV-DQUO.
Visualization Analysis of OV-DQUO. We visualize the prediction results of OV-
DQUO and the baseline detector [35] in Figures 3 and 4, including their output
confidence distributions and output embedding t-SNE results. As shown in
Figure 3, compared to the baseline detector, OV-DQUO outputs a more balanced
confidence distribution between novel and base classes. Additionally, the
confidence distribution predicted by OV-DQUO for both base and novel classes
has less overlap with the background confidence distribution. As shown in
Figure 4, compared to the baseline detector, the embeddings of novel category
objects output by OV-DQUO exhibit better discriminability from background
embeddings. The comparison of the confidence distributions between OV-DQUO and
the baseline detector for each novel category can be found in Appendix A.1.
Wildcard Matching vs. Relabeling. We further compare wildcard matching with
existing relabeling methods [35, 42] to evaluate its superiority.
Specifically, we compare it with two methods: (1) relabeling each unknown
object with the most similar novel category; and (2) forcibly relabeling each
unknown object with the most similar base category. As presented in Table 7,
experiments show that pairing each open-world unknown object with a specific
category leads to suboptimal results. We believe that this outcome arises
because open-world unknown objects include many foreground objects that do not
belong to base or novel categories. Forcing these objects into specific
pairings introduces considerable noise during training. Conversely, matching
such foreground objects with wildcard text embeddings prevents model
misguidance.
Effects of Different Region Proposal Selection Strategies. We explore the
impact of different region proposal selection strategies on performance,
including objectness, region-text similarity, and RoQIs selection. As shown in
Table 7, selecting proposals based on objectness scores results in region
recall biased towards base categories. Besides, selecting proposals based on
region-text similarity tends to recall regions with low localization quality,
leading to performance degradation. Consequently, fusing objectness with
region-text similarity achieves the best results.
Table 6: Ablation study on wildcard matching and relabeling methods.

| Match Method | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{Base}}$ | $\text{AP}_{50}^{\text{All}}$ |
|---|---|---|---|
| Base classes Relabeling | 42.4 | 48.7 | 47.0 |
| Novel classes Relabeling | 42.9 | 47.4 | 46.2 |
| Wildcard Matching | 45.0 | 48.9 | 47.9 |

Table 7: Ablation study on different proposal selection strategies.

| Selection Strategy | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AR}_{50}^{\text{Base}}$ | $\text{AR}_{50}^{\text{Novel}}$ |
|---|---|---|---|
| Objectness Selection | 41.7 | 72.4 | 69.9 |
| Region-Text Similarity | 29.7 | 58.6 | 69.3 |
| RoQIs Selection | 42.7 | 72.1 | 70.5 |
Ablation Study on Hyperparameters. We explore the impact of different
hyperparameter settings in OV-DQUO on performance, including the number of
open-world pseudo-labeling iterations $t$, the weight $\bm{\gamma}$ for
scaling the foreground likelihood score, and the weight of the denoising loss
$\bm{\beta}$. Table 8 shows the ablation study on pseudo-labeling iteration
$t$. We calculated the recall for objects in the COCO training set after each
pseudo-labeling iteration as a reference. Experimental results indicate that
OV-DQUO achieves optimal results when $t$ equals 2. Although recall increases
with more iterations, the introduced noise starts to reduce the model's
performance on novel categories. Table 9 presents the ablation study on
scaling the foreground score. We use the power function
$(w_{i})^{\bm{\gamma}}$ to scale the foreground likelihood score for each
unknown object, where $\bm{\gamma}$ controls the degree of scaling. When
$\bm{\gamma}$ is set to 0, it serves as an ablation for the FE module. Results
show that setting $\bm{\gamma}$ to 0 significantly degrades performance due to
the release of pseudo-label noise. The best performance is achieved when
$\bm{\gamma}$ is set to 0.5. Table 10 presents the ablation study on the
weight of the denoising loss $\bm{\beta}$. Experimental results show that
changing the weight of the denoising loss does not significantly affect
performance. Moreover, the best results on novel categories are achieved when
the denoising loss weight equals the classification loss weight, i.e.,
$\bm{\beta}=2$.
Table 8: Ablation study on pseudo-labeling iterations.

| $t$ | $\text{AR}_{50}^{\text{All}}$ | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{All}}$ |
|---|---|---|---|
| - | 80.2 | 41.7 | 46.4 |
| 1 | 85.7 | 44.0 | 47.9 |
| 2 | 86.5 | 45.0 | 47.9 |
| 3 | 87.1 | 44.8 | 48.5 |

Table 9: Ablation study on scaling the foreground score.

| $\bm{\gamma}$ | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{Base}}$ | $\text{AP}_{50}^{\text{All}}$ |
|---|---|---|---|
| 0.0 | 43.0 | 47.4 | 46.2 |
| 0.5 | 45.0 | 48.9 | 47.9 |
| 1.0 | 44.4 | 48.3 | 47.3 |
| 2.0 | 44.1 | 47.7 | 46.7 |

Table 10: Ablation study on the denoising loss weight.

| $\bm{\beta}$ | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{Base}}$ | $\text{AP}_{50}^{\text{All}}$ |
|---|---|---|---|
| 1.0 | 44.8 | 48.3 | 47.4 |
| 2.0 | 45.0 | 48.9 | 47.9 |
| 3.0 | 44.4 | 48.9 | 47.7 |
| 4.0 | 44.4 | 48.6 | 47.5 |
## 5 Limitations and Conclusions
In this paper, we reveal that confidence bias constrains the novel category
detection of existing OVD methods. Inspired by open-world detection tasks that
identify unknown objects, we introduce an OV-DQUO framework to address this
bias, which achieves new state-of-the-art results on various OVD benchmarks.
While integrating OVD with OWD into a unified end-to-end framework is
promising, it remains under-explored here and reserved for future research.
## References
* Bangalath et al. [2022] Hanoona Bangalath, Muhammad Maaz, Muhammad Uzair Khattak, Salman H Khan, and Fahad Shahbaz Khan. Bridging the gap between object and image-level representations for open-vocabulary detection. _Advances in Neural Information Processing Systems_ , 35:33781–33794, 2022.
* Chen et al. [2022] Peixian Chen, Kekai Sheng, Mengdan Zhang, Mingbao Lin, Yunhang Shen, Shaohui Lin, Bo Ren, and Ke Li. Open vocabulary object detection with proposal mining and prediction equalization. _arXiv preprint arXiv:2206.11134_ , 2022.
* Chen et al. [2015] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. _arXiv preprint arXiv:1504.00325_ , 2015.
* Du et al. [2022] Yu Du, Fangyun Wei, Zihe Zhang, Miaojing Shi, Yue Gao, and Guoqi Li. Learning to prompt for open-vocabulary object detection with vision-language model. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 14084–14093, 2022.
* Fang et al. [2023] Ruohuan Fang, Guansong Pang, Lei Zhou, Xiao Bai, and Jin Zheng. Unsupervised recognition of unknown objects for open-world object detection. _arXiv preprint arXiv:2308.16527_ , 2023.
* Gu et al. [2021] Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Open-vocabulary object detection via vision and language knowledge distillation. _arXiv preprint arXiv:2104.13921_ , 2021.
* Gupta et al. [2019] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 5356–5364, 2019.
* Gupta et al. [2022] Akshita Gupta, Sanath Narayan, KJ Joseph, Salman Khan, Fahad Shahbaz Khan, and Mubarak Shah. Ow-detr: Open-world detection transformer. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 9235–9244, 2022.
* He et al. [2017] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In _Proceedings of the IEEE international conference on computer vision_ , pages 2961–2969, 2017.
* Joseph et al. [2021] KJ Joseph, Salman Khan, Fahad Shahbaz Khan, and Vineeth N Balasubramanian. Towards open world object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 5830–5840, 2021.
* Kim et al. [2022] Dahun Kim, Tsung-Yi Lin, Anelia Angelova, In So Kweon, and Weicheng Kuo. Learning open-world object proposals without learning to classify. _IEEE Robotics and Automation Letters_ , 7(2):5453–5460, 2022.
* Kim et al. [2023a] Dahun Kim, Anelia Angelova, and Weicheng Kuo. Contrastive feature masking open-vocabulary vision transformer. In _2023 IEEE/CVF International Conference on Computer Vision (ICCV)_ , pages 15556–15566, 2023a. doi: 10.1109/ICCV51070.2023.01430.
* Kim et al. [2023b] Dahun Kim, Anelia Angelova, and Weicheng Kuo. Region-aware pretraining for open-vocabulary object detection with vision transformers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11144–11154, 2023b.
* Kuo et al. [2022] Weicheng Kuo, Yin Cui, Xiuye Gu, AJ Piergiovanni, and Anelia Angelova. F-vlm: Open-vocabulary object detection upon frozen vision and language models. _arXiv preprint arXiv:2209.15639_ , 2022.
* Li et al. [2023a] Liangqi Li, Jiaxu Miao, Dahu Shi, Wenming Tan, Ye Ren, Yi Yang, and Shiliang Pu. Distilling detr with visual-linguistic knowledge for open-vocabulary object detection. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 6501–6510, 2023a.
* Li et al. [2023b] Yanghao Li, Haoqi Fan, Ronghang Hu, Christoph Feichtenhofer, and Kaiming He. Scaling language-image pre-training via masking. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 23390–23400, 2023b.
* Lin et al. [2022] Chuang Lin, Peize Sun, Yi Jiang, Ping Luo, Lizhen Qu, Gholamreza Haffari, Zehuan Yuan, and Jianfei Cai. Learning object-language alignments for open-vocabulary object detection. _arXiv preprint arXiv:2211.14843_ , 2022.
* Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13_ , pages 740–755. Springer, 2014.
* Lin et al. [2017] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In _Proceedings of the IEEE international conference on computer vision_ , pages 2980–2988, 2017.
* Liu et al. [2023] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. _arXiv preprint arXiv:2303.05499_ , 2023.
* Ma et al. [2023] Chuofan Ma, Yi Jiang, Xin Wen, Zehuan Yuan, and Xiaojuan Qi. Codet: Co-occurrence guided region-word alignment for open-vocabulary object detection. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, _Advances in Neural Information Processing Systems_ , volume 36, pages 71078–71094. Curran Associates, Inc., 2023.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 8748–8763. PMLR, 18–24 Jul 2021.
* Ridnik et al. [2021] Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. _arXiv preprint arXiv:2104.10972_ , 2021.
* Schuhmann et al. [2021] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. _arXiv preprint arXiv:2111.02114_ , 2021.
* Shao et al. [2019] Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A large-scale, high-quality dataset for object detection. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 8430–8439, 2019.
* Sharma et al. [2018] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2556–2565, 2018.
* Song and Bang [2023] Hwanjun Song and Jihwan Bang. Prompt-guided transformers for end-to-end open-vocabulary object detection. _arXiv preprint arXiv:2303.14386_ , 2023.
* Sun et al. [2023] Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao. Eva-clip: Improved training techniques for clip at scale. _arXiv preprint arXiv:2303.15389_ , 2023.
* Wang et al. [2023] Luting Wang, Yi Liu, Penghui Du, Zihan Ding, Yue Liao, Qiaosong Qi, Biaolong Chen, and Si Liu. Object-aware distillation pyramid for open-vocabulary object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11186–11196, 2023.
* Wang et al. [2022] Xinlong Wang, Zhiding Yu, Shalini De Mello, Jan Kautz, Anima Anandkumar, Chunhua Shen, and Jose M Alvarez. Freesolo: Learning to segment objects without annotations. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 14176–14186, 2022.
* Wei et al. [2021] Fangyun Wei, Yue Gao, Zhirong Wu, Han Hu, and Stephen Lin. Aligning pretraining for detection via object-level contrastive learning. _Advances in Neural Information Processing Systems_ , 34:22682–22694, 2021.
* Wu et al. [2023a] Size Wu, Wenwei Zhang, Sheng Jin, Wentao Liu, and Chen Change Loy. Aligning bag of regions for open-vocabulary object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 15254–15264, 2023a.
* Wu et al. [2023b] Size Wu, Wenwei Zhang, Lumin Xu, Sheng Jin, Xiangtai Li, Wentao Liu, and Chen Change Loy. Clipself: Vision transformer distills itself for open-vocabulary dense prediction. _arXiv preprint arXiv:2310.01403_ , 2023b.
* Wu et al. [2024] Size Wu, Wenwei Zhang, Lumin Xu, Sheng Jin, Wentao Liu, and Chen Change Loy. Clim: Contrastive language-image mosaic for region representation. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 38, pages 6117–6125, 2024.
* Wu et al. [2023c] Xiaoshi Wu, Feng Zhu, Rui Zhao, and Hongsheng Li. Cora: Adapting clip for open-vocabulary detection with region prompting and anchor pre-matching. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 7031–7040, 2023c.
* Zang et al. [2022] Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and Chen Change Loy. Open-vocabulary detr with conditional matching. In _European Conference on Computer Vision_ , pages 106–122. Springer, 2022.
* Zareian et al. [2021] Alireza Zareian, Kevin Dela Rosa, Derek Hao Hu, and Shih-Fu Chang. Open-vocabulary object detection using captions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 14393–14402, 2021.
* Zhai et al. [2020] Hongyu Zhai, Jian Cheng, and Mengyong Wang. Rethink the iou-based loss functions for bounding box regression. In _2020 IEEE 9th joint international information technology and artificial intelligence conference (ITAIC)_ , volume 9, pages 1522–1528. IEEE, 2020.
* Zhang et al. [2022] Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M Ni, and Heung-Yeung Shum. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. _arXiv preprint arXiv:2203.03605_ , 2022.
* Zhao et al. [2022] Shiyu Zhao, Zhixing Zhang, Samuel Schulter, Long Zhao, BG Vijay Kumar, Anastasis Stathopoulos, Manmohan Chandraker, and Dimitris N Metaxas. Exploiting unlabeled data with vision and language models for object detection. In _European Conference on Computer Vision_ , pages 159–175. Springer, 2022.
* Zhao et al. [2024] Shiyu Zhao, Samuel Schulter, Long Zhao, Zhixing Zhang, Vijay Kumar B. G, Yumin Suh, Manmohan Chandraker, and Dimitris N. Metaxas. Taming self-training for open-vocabulary object detection, 2024.
* Zhong et al. [2022] Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al. Regionclip: Region-based language-image pretraining. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 16793–16803, 2022.
* Zhou et al. [2022] Xingyi Zhou, Rohit Girdhar, Armand Joulin, Philipp Krähenbühl, and Ishan Misra. Detecting twenty-thousand classes using image-level supervision. In _European Conference on Computer Vision_ , pages 350–368. Springer, 2022.
* Zhu and Chen [2023] Chaoyang Zhu and Long Chen. A survey on open-vocabulary detection and segmentation: Past, present, and future. _arXiv preprint arXiv:2307.09220_ , 2023.
* Zohar et al. [2023] Orr Zohar, Kuan-Chieh Wang, and Serena Yeung. Prob: Probabilistic objectness for open world object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11444–11453, 2023.
## Appendix A: Supplemental Material
### A.1 Visualization of the Confidence Distributions of OV-DQUO and the Baseline Detector
Figure 5: Visualization of the confidence distributions of OV-DQUO and the baseline detector across twelve novel categories: (a) Airplane, (b) Bus, (c) Cake, (d) Cat, (e) Couch, (f) Cow, (g) Cup, (h) Dog, (i) Elephant, (j) Keyboard, (k) Knife, (l) Sink.
Figure 5 details the differences in confidence distribution between OV-DQUO and the baseline detector [35] when detecting novel categories. The data are derived from their predictions on the OV-COCO validation set. For novel categories such as airplanes, buses, cats, and dogs, the high-density region of the OV-DQUO confidence distribution lies between 0.6 and 0.8, in sharp contrast to the baseline detector; this indicates that OV-DQUO benefits from the additional supervision signals provided by the open-world detector. For novel categories such as keyboards, knives, and sinks, the high-density region of the OV-DQUO confidence distribution is around 0.4. These objects tend to be small and are typically not the salient objects within an image, which makes them difficult for the open-world detector to recognize. Even so, thanks to denoising text query training, their confidence still exceeds that of the baseline detector.
### A.2 Model Performance Analysis
Table 11 provides further details on using the VLM to classify ground-truth boxes, detector predictions, and detector predictions weighted by IoU confidence. Compared with existing methods, our method significantly improves the detection performance on novel categories and narrows the gap with the experimental group that uses IoU as the confidence. Simultaneously, we observe an improvement in the detection performance on known categories. We attribute this to the model learning from open-world pseudo-labels and denoising training, which enhances its ability to distinguish foreground objects from the background. However, there is still a gap between our method and the group that uses IoU as confidence. We believe that false positive detections caused by the similarity between category text embeddings are the primary reason for this gap, and we will explore this issue in future work.
Table 11: Performance analysis on the OV-COCO validation set with backbone networks RN50 and RN50x4.

| Method | Backbone | $\text{AP}_{50}^{\text{Novel}}$ | $\text{AP}_{50}^{\text{Base}}$ |
| --- | --- | --- | --- |
| Ground Truth | RN50 | 65.1 | 70.0 |
| IoU Confidence | RN50 | 52.5 | 58.6 |
| CORA [35] | RN50 | 35.1 | 35.5 |
| OV-DQUO | RN50 | 39.2 | 41.8 |
| Ground Truth | RN50x4 | 74.1 | 76.0 |
| IoU Confidence | RN50x4 | 59.1 | 63.7 |
| CORA [35] | RN50x4 | 41.7 | 44.5 |
| OV-DQUO | RN50x4 | 45.6 | 49.0 |
### A.3 Visualization of Open-World Object Proposals
Figure 6: Visualization of open-world pseudo-labels. The first row shows the
base category annotations from the OV-COCO training set, with missing novel
category objects marked by dashed boxes for each image. The second row
displays the open-world object proposals generated by OLN. The third row
presents the foreground likelihood estimation results from FE for each unknown
object proposal.
OV-DQUO mitigates the confidence bias issue between base and novel categories
by learning from open-world unknown objects. Additionally, to avoid pseudo-
label noise misleading the OV-DQUO training process, we follow the OWD method
and use a foreground estimator to assign weights to each open-world unknown
object. In Figure 6, we visualize these open-world unknown objects along with
their corresponding foreground likelihood scores. The visualization results
show that the open-world detector can identify most of the novel category objects. We also observe that the detector output includes some non-object areas, such as distant trees and buildings.
Furthermore, it can be seen that the foreground estimator is able to assign
discriminative weights to foreground objects and non-object regions, which is
key to avoiding model degradation.
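For intuition, the following is a minimal sketch of how such foreground-likelihood weights could enter the training loss; the function name, tensor shapes, and the normalization choice are our own illustrative assumptions, not code from OV-DQUO.

```python
import torch

# Hypothetical sketch: down-weight the losses of open-world pseudo-labels by
# the foreground likelihood assigned by the foreground estimator (FE).
def weighted_pseudo_label_loss(per_box_loss: torch.Tensor,
                               fg_scores: torch.Tensor) -> torch.Tensor:
    # per_box_loss: (N,) loss of each matched pseudo-box;
    # fg_scores: (N,) foreground likelihoods from the FE, in [0, 1].
    return (fg_scores * per_box_loss).sum() / fg_scores.sum().clamp(min=1e-6)
```

In this form, confidently foreground proposals dominate the gradient while likely non-object regions (distant trees, buildings) contribute little, matching the role of the FE described above.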
### A.4 Visualization of Detection Results
We show the detection results of OV-DQUO on the OV-COCO and OV-LVIS validation sets in Figure 7 and Figure 8, respectively. On the OV-COCO dataset, OV-DQUO correctly detects novel categories including couch, dog, bus, cow, and scissors. On the OV-LVIS dataset, it detects rare categories such as salad plate, fedora hat, and gas mask. In Figure 9, we also present the results of applying the LVIS-trained OV-DQUO to the Objects365 dataset. OV-DQUO trained on OV-LVIS accurately identifies a broad spectrum of object concepts specified in the Objects365 dataset, showcasing remarkable generalization ability.
Figure 7: Visualization of detection results on OV-COCO. Red boxes are for novel categories, while blue boxes are for base categories.
Figure 8: Visualization of detection results on OV-LVIS. Red boxes are for rare categories, while blue boxes are for common and frequent categories.
Figure 9: Visualization of transfer detection results on the Objects365 [25] dataset.
### A.5 Details of OV-DQUO Hyper-Parameter Configuration
Detailed settings for OV-DQUO. Following previous work [35], we set the exponential moving average factor to 0.99996. The hyperparameters for the matching cost are identical to the corresponding loss coefficients. During inference, the temperature $\tau$ of the classification logits is set to 0.01, and we multiply the logits of novel classes by a factor of 3.0.
There are slight differences in specific parameter settings between our
experiments on the OV-COCO and OV-LVIS datasets. These differences include the
number of training epochs, image processing resolution, and the application of
repeat factor sampling, among other parameters. Detailed configurations are
provided in Table 12.
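As a concrete illustration of this inference-time scoring, the sketch below combines the temperature and the novel-class factor; the function and variable names are illustrative, and whether the factor is applied to the raw logits or to the final scores is our assumption rather than a detail stated here.

```python
import torch
import torch.nn.functional as F

def classify_regions(region_feats, text_embeds, novel_mask,
                     tau=0.01, novel_factor=3.0):
    """Score regions against class text embeddings (hypothetical sketch).

    region_feats: (R, D) region embeddings; text_embeds: (C, D) class
    embeddings; novel_mask: (C,) boolean mask marking novel classes.
    """
    region_feats = F.normalize(region_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = region_feats @ text_embeds.T / tau   # temperature tau = 0.01
    logits[:, novel_mask] *= novel_factor         # boost novel classes by 3.0
    return logits.softmax(dim=-1)
```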
Detailed settings for open-world pseudo-labeling. Following previous work [5], we train OLN using 8 GPUs with a batch size of 2 per GPU. The models are initialized with SoCo weights [31] and trained for 70,000 iterations using the SGD optimizer with a learning rate of $2\times 10^{-2}$. The FE is trained for 3,000 iterations with a learning rate of $2\times 10^{-7}$ and a total batch size of 16. The training of OLN and the FE adheres to the settings of OV-COCO and
OV-LVIS, where annotations for novel classes and rare categories are removed.
Following [5], we use the region proposals generated by FreeSoLo [30] as the
initial unknown object annotations.
Table 12: Experimental configurations of OV-DQUO for the OV-COCO and OV-LVIS experiments.

| Configuration | OV-COCO | OV-LVIS |
| --- | --- | --- |
| Training epochs | 30 | 35 |
| Repeat factor sampling | No | Yes |
| Image resolution | 1333 $\times$ 800 | 1024 $\times$ 1024 / 896 $\times$ 896 |
| Text embedding dimensions | 1024 / 640 | 512 / 768 |
| Multi-scale features | ResNet (C3, C4) | ViT (5, 7, 11) / (10, 14, 23) |
| Sample categories | No | 100 |
| Pseudo-label iterations | 2 | 3 |
### A.6 Criterion Details for Filtering Open-World Object Proposals
In this section, we detail the process of filtering the open-world object proposals generated by the open-world detector; a minimal code sketch follows the list. The specific steps are as follows:

* Perform non-maximum suppression based on localization quality with a threshold of 0.3.
* Ensure that the box size exceeds 2000 pixels.
* Maintain an aspect ratio between 0.25 and 4.0.
* Ensure that the Intersection over Union with base category objects is less than 0.3.
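The sketch below implements these four criteria with NumPy; the thresholds are taken from the list above, while the helper names, the greedy NMS implementation, and the interpretation of "box size" as box area are our own illustrative assumptions.

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between (N, 4) and (M, 4) arrays of [x1, y1, x2, y2] boxes."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])  # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])  # bottom-right of intersection
    wh = np.clip(rb - lt, 0.0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def filter_open_world_proposals(boxes, loc_scores, base_boxes):
    """Apply the four filtering criteria to (N, 4) proposals with (N,) scores."""
    # 1) Greedy NMS on localization quality with IoU threshold 0.3.
    order = np.argsort(-loc_scores)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        ious = box_iou(boxes[i:i + 1], boxes[order[1:]])[0]
        order = order[1:][ious <= 0.3]
    boxes, loc_scores = boxes[keep], loc_scores[keep]

    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    mask = w * h > 2000                          # 2) box area above 2000 pixels
    mask &= (w / h >= 0.25) & (w / h <= 4.0)     # 3) aspect ratio in [0.25, 4.0]
    if len(base_boxes) > 0:                      # 4) IoU with base objects < 0.3
        mask &= box_iou(boxes, base_boxes).max(axis=1) < 0.3
    return boxes[mask], loc_scores[mask]
```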
|
#### Continuous Uniform
By definition
$\displaystyle\psi_{P}^{*}(y)=\sup\left\\{y\theta-\log\left(M_{P}[\theta]\right):\theta\in\real\right\\},$
where for $a<b$ we have that
$\displaystyle
M_{P}[\theta]=\begin{cases}\frac{\exp(b\theta)-\exp(a\theta)}{(b-a)\theta},&\theta\neq
0,\\\ 1,&\theta=0.\end{cases}$
Since $M_{P}[\theta]$ is continuous at zero, without loss of generality we obtain
$\displaystyle\begin{array}[]{rl}\psi_{P}^{*}(y)&=\sup\left\\{y\theta-\log\left(\frac{\exp(b\theta)-\exp(a\theta)}{(b-a)\theta}\right):\theta\in\real\right\\}\\\
&=\sup\left\\{(y-b)\theta-\log\left(\frac{1-\exp(-(b-a)\theta)}{(b-a)\theta}\right):\theta\in\real\right\\}\\\
&=\sup\left\\{(y-a)\theta-\log\left(\frac{\exp((b-a)\theta)-1}{(b-a)\theta}\right):\theta\in\real\right\\}\\\
&=\sup\left\\{(y-\mu)\theta-\log\left(\frac{\exp((b-\mu)\theta)-\exp((a-\mu)\theta)}{(b-a)\theta}\right):\theta\in\real\right\\}.\end{array}$
(A.13)
where $\mu=(a+b)/2$. If $y\geq b$ then from the second formulation above we can conclude that $\psi_{P}^{*}(y)=\infty$ by taking $\theta\rightarrow\infty$. Similarly, if $y\leq a$, then from the third formulation above we can conclude that $\psi_{P}^{*}(y)=\infty$ by taking $\theta\rightarrow-\infty$. If $y=\mu$ then the last formulation of (A.13) can be written as
$\displaystyle\sup\left\\{-\log\left(\frac{\exp(\gamma\theta)-\exp(-\gamma\theta)}{2\gamma\theta}\right):\theta\in\real\right\\}=-\log\left(\inf\left\\{\phi(\theta):\theta\in\real\right\\}\right),$
where $\gamma:=(b-a)/2>0$ and
$\displaystyle\phi(\theta):=\begin{cases}\frac{\exp(\gamma\theta)-\exp(-\gamma\theta)}{2\gamma\theta},&\theta\neq
0,\\\ 1,&\theta=0.\end{cases}$
By using L’Hôpital’s rule and some straightforward arguments, it is easy to
verify that
$\displaystyle\lim\limits_{|\theta|\rightarrow+\infty}\phi(\theta)=+\infty,\quad\lim\limits_{|\theta|\rightarrow
0}\phi(\theta)=1\quad\text{and}\quad\phi(\theta)=\phi(-\theta).$
Thus, $\phi$ is continuous at zero (which justifies its definition), coercive
and symmetric. Since the log-normalizer function
$\psi_{P}(\theta)=\log\left(M_{P}[\theta]\right)$ is strictly convex we can
conclude that if a solution exists it must be unique. The coercivity of $\phi$
implies that a solution exists, and due to the symmetry of $\phi$ we can
conclude that it must be zero. To summarize, in this case,
$\psi_{P}^{*}(\mu)=0$ (with $\theta=0$). If $y\neq\mu$ such that $a<y<b$ then
the optimal solution to (A.13) is nonzero and by the first-order optimality
conditions it must satisfy
$\displaystyle
y-\frac{b\exp(b\theta)-a\exp(a\theta)}{\exp(b\theta)-\exp(a\theta)}+\frac{1}{\theta}=0.$
(A.14)
Therefore, using (A.13) we can summarize that for
$y\in(a,b)=\mathrm{dom}\,{\psi}_{P}^{*}$:
$\displaystyle\psi_{P}^{*}(y)=\begin{cases}0,&y=\mu,\\\
(y-\mu)\theta-\log\left(\frac{\exp((b-\mu)\theta)-\exp((a-\mu)\theta)}{(b-a)\theta}\right),&y\neq\mu,\end{cases}$
where $\theta$ is the root of (A.14).
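As a sanity check, $\psi_{P}^{*}(y)$ can be evaluated numerically by solving (A.14) with an off-the-shelf root finder. The following sketch is our own illustration: the algebraic rewriting via `expm1` and the bracketing endpoints are justified by the sign analysis above, but are not part of the text.

```python
import numpy as np
from scipy.optimize import brentq

def uniform_rate(y, a=0.0, b=1.0):
    """Cramer rate function psi_P^*(y) for Uniform(a, b), via (A.13)-(A.14)."""
    mu, c = 0.5 * (a + b), b - a
    if y <= a or y >= b:
        return np.inf
    if np.isclose(y, mu):
        return 0.0
    # First-order condition (A.14); note that
    # (b e^{bt} - a e^{at}) / (e^{bt} - e^{at}) = b + c / expm1(c t),
    # which avoids catastrophic cancellation near t = 0.
    foc = lambda t: y - b - c / np.expm1(c * t) + 1.0 / t
    # foc(0+) ~ y - mu, while foc(2/(b-y)) < 0 (mirrored for y < mu),
    # so each bracket below contains the unique nonzero root.
    if y > mu:
        theta = brentq(foc, 1e-9, 2.0 / (b - y))
    else:
        theta = brentq(foc, -2.0 / (y - a), -1e-9)
    log_mgf = np.log((np.exp(b * theta) - np.exp(a * theta)) / (c * theta))
    return y * theta - log_mgf

print(uniform_rate(0.75))  # positive, and it diverges as y approaches b
```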
#### Logistic
The moment generating function for the Logistic distribution with location and scale parameters $\mu$ and $s>0$, respectively, is given by
$\displaystyle M_{P}[\theta]=\exp(\mu\theta)B(1-s\theta,1+s\theta),\qquad s\theta\in(-1,1),$
where $B(\cdot,\cdot)$ stands for the _Beta function_
$\displaystyle B(\alpha,\beta)=\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-1}dt.$
The beta function and the closely related _gamma function_
$\displaystyle\Gamma(\alpha)=\int_{0}^{\infty}t^{\alpha-1}\exp(-t)dt,\qquad\alpha>0,$
share the following well-known relation
$\displaystyle
B(\alpha,\beta)=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}.$
(A.15)
The gamma function is an extension of the factorial: for a positive integer $\alpha$ it holds that $\Gamma(\alpha)=(\alpha-1)!$. In the following, we will use the well-known functional equations
$\displaystyle B(\alpha+1,\beta)=B(\alpha,\beta)\frac{\alpha}{\alpha+\beta},$
(A.16)
and
$\displaystyle
B(\alpha,1-\alpha)=\Gamma(1-\alpha)\Gamma(\alpha)=\frac{\pi}{\sin(\pi\alpha)},\qquad\alpha\notin\mathbb{Z}.$
(A.17)
The latter is known as Euler’s reflection formula or Euler’s functional equation. Further details and proofs of both (A.16) and (A.17) can be found, for example, in [2].
Since $s\theta\in(-1,1)$, the above relations imply that for any $\theta\neq 0$
$\displaystyle\phi_{s}(\theta):=B(1-s\theta,1+s\theta)\overset{\text{(A.16)}}{=}B(-s\theta,1+s\theta)\frac{-s\theta}{-s\theta+1+s\theta}\overset{\text{(A.17)}}{=}\frac{-\pi s\theta}{\sin(-\pi s\theta)}.$
For $\theta=0$ we can verify by (A.15) that
$\displaystyle\phi_{s}(0)=B(1,1)=1.$
Thus, we can summarize
$\displaystyle\phi_{s}(\theta)=B(1-s\theta,1+s\theta)=\begin{cases}1,&s\theta=0,\\\
\frac{-\pi s\theta}{\sin(-\pi
s\theta)},&s\theta\in(-1,1)\setminus\\{0\\}.\end{cases}$ (A.18)
Using L’Hôpital’s rule we can verify that $\phi_{s}$ is continuous at $\theta=0$. Since $|\pi s\theta|\geq|\sin(\pi s\theta)|$ for all $s\theta\in(-1,1)$, and the numerator and denominator in (A.18) share the same sign, we can conclude that $\phi_{s}(\theta)\geq 1$ for all $s\theta\in(-1,1)$, with equality ($\phi_{s}(\theta)=1$) if and only if $s\theta=0$. Taking $|s\theta|\rightarrow 1$ it is evident that $\phi_{s}(\theta)\rightarrow\infty$. In addition, for any $\theta\neq 0$ the derivative of $\phi_{s}$ is given by
$\displaystyle\phi_{s}^{\prime}(\theta)=-\pi s\left[\frac{\sin(-\pi
s\theta)+\pi s\theta\cos(-\pi s\theta)}{\sin^{2}(-\pi s\theta)}\right],$
and consequently
$\displaystyle\frac{\phi_{s}^{\prime}(\theta)}{\phi_{s}(\theta)}=\frac{\sin(-\pi
s\theta)+\pi s\theta\cos(-\pi s\theta)}{\theta\sin(-\pi s\theta)}.$ (A.19)
We are now ready to evaluate Cramér’s rate function that corresponds to the
logistic distribution.
$\displaystyle\begin{array}[]{rl}\psi_{P}^{*}(y)&=\sup\left\\{y\theta-\log\left(M_{P}[\theta]\right):\theta\in\real\right\\}\\\
&=\sup\left\\{(y-\mu)\theta-\log\left(\phi_{s}(\theta)\right):\theta\in\real\right\\}.\end{array}$
(A.22)
If $y=\mu$ then the discussion that follows equation (A.18) implies that
$\sup\\{-\log(\phi_{s}(\theta)):\theta\in\real\\}\leq 0$ where the upper bound
is attained for $\theta=0$ (since $\phi_{s}(\theta)\geq 1$ and
$\phi_{s}(0)=1$). Thus, we can conclude that $\psi_{P}^{*}(\mu)=0$. If
$y\neq\mu$ then the optimal solution to (A.22) satisfies $\theta\neq 0$.
Since, in addition, for $|s\theta|\rightarrow 1$ we have that
$\phi_{s}(\theta)\rightarrow\infty$, and consequently,
$-\log(\phi_{s}(\theta))\rightarrow-\infty$, an optimal solution to (A.22) for
the case $y\neq\mu$ must satisfy the first-order optimality conditions
$\displaystyle
0=y-\mu-\frac{\phi_{s}^{\prime}(\theta)}{\phi_{s}(\theta)}=y-\mu-\frac{1}{\theta}-\frac{\pi
s}{\tan{(-\pi s\theta)}},$ (A.23)
where the above follows from (A.19). To summarize,
$\displaystyle\psi_{P}^{*}(y)=\begin{cases}0,&y=\mu,\\\
(y-\mu)\theta-\log\left(B(1-s\theta,1+s\theta)\right),&y\neq\mu,\end{cases}$
where $\theta\in\real$ is the nonzero root of (A.23).
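Analogously to the uniform case, (A.23) can be solved numerically over $s\theta\in(-1,1)$. The sketch below is our own illustration; the bracket endpoints are placed slightly inside the interval to avoid the poles of the tangent, and may need tightening for $y$ extremely close to $\mu$.

```python
import numpy as np
from scipy.optimize import brentq

def logistic_rate(y, mu=0.0, s=1.0):
    """Cramer rate function psi_P^*(y) for Logistic(mu, s), via (A.18)-(A.23)."""
    if np.isclose(y, mu):
        return 0.0
    # phi_s from (A.18); only evaluated at the nonzero root below.
    phi = lambda t: (-np.pi * s * t) / np.sin(-np.pi * s * t)
    # First-order condition (A.23), using tan(-x) = -tan(x).
    foc = lambda t: y - mu - 1.0 / t + np.pi * s / np.tan(np.pi * s * t)
    eps = 1e-6  # keep the bracket strictly inside s*theta in (-1, 1)
    if y > mu:
        theta = brentq(foc, eps, 1.0 / s - eps)
    else:
        theta = brentq(foc, -1.0 / s + eps, -eps)
    return (y - mu) * theta - np.log(phi(theta))

print(logistic_rate(1.0))  # > 0, while logistic_rate(mu) = 0 by construction
```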
|
# The impact of the uncertainties in the 12C($\alpha$, $\gamma$)16O reaction
rate on the evolution of low– to intermediate–mass stars
Ben T. Pepper,$^{1}$ A. G. Istrate,$^{2}$ A. D. Romero$^{1}$ and S. O. Kepler$^{1}$
$^{1}$Physics Institute, Universidade Federal do Rio Grande do Sul, 91501-900 Porto-Alegre, RS, Brazil
$^{2}$Department of Astrophysics/IMAPP, Radboud University, P O Box 9010, NL-6500 GL Nijmegen, The Netherlands
E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
One of the largest uncertainties in stellar evolutionary computations is the
accuracy of the considered reaction rates. The 12C($\alpha$, $\gamma$)16O
reaction is particularly important for the study of low- and intermediate-mass
stars as it determines the final C/O ratio in the core which influences the
white dwarf cooling evolution. Thus, there is a need to study how current computations of white dwarfs and their progenitors may be affected by the uncertainties of the 12C($\alpha$, $\gamma$)16O reaction rates. In this work we compute full evolutionary sequences using the MESA
code with initial masses in the range of $0.90\leq M_{i}/M_{\odot}\leq 3.05$.
We consider different adopted reaction rates, obtained from the literature, as
well as the extreme limits within their uncertainties. As expected, we find
that prior to the core helium burning stage, there are no changes to the
evolution of the stars. However, the subsequent stages are all affected by the
uncertainties of the considered reaction rate. In particular, we find
differences to the convective core mass during the core helium burning stage
which may affect pulsation properties of subdwarfs, the number of thermal
pulses during the asymptotic giant branch and trends between final oxygen
abundance in the core and the progenitor masses of the remnant white dwarfs.
###### keywords:
nuclear reactions – stars: abundances – stars: evolution
## 1 Introduction
Single stellar evolution is fuelled by nuclear reactions that occur within the
stellar interior (Bethe, 1939; Hoyle, 1946, 1954; Burbidge et al., 1957).
These reactions not only release energy which allows the star to support
itself against gravitational collapse and remain in hydrostatic equilibrium,
but also change the composition of the star: this is known as nucleosynthesis
(Eddington, 1920; Hoyle, 1954; Burbidge et al., 1957). The study of these
nuclear reactions is where nuclear physics and astronomy come hand-in-hand; an
understanding of what happens at the fundamental level provides a better
knowledge of how stars evolve and influence their environment. Particularly,
improved estimations of the often uncertain reaction rate data, including formulae fitted to such data, will improve the accuracy of stellar evolution codes and our understanding of stellar evolution (Caughlan & Fowler, 1988; Angulo et al., 1999; Katsuma, 2012; Xu et al., 2013; An et al., 2016). Such estimations are hereafter referred to as ’reaction rates’.
The 12C($\alpha$, $\gamma$)16O reaction during the central helium burning
stage is considered to be the most important mechanism for defining the white
dwarf (WD) core composition (Salaris & Cassisi, 2005; D’Antona & Mazzitelli,
1990; De Gerónimo et al., 2017; Deboer et al., 2019). However, the reaction
rate for this reaction has an extremely large uncertainty (Fowler et al.,
1967; Caughlan & Fowler, 1988; Kunz et al., 2002; An et al., 2016; Deboer et
al., 2017, 2019). The main entrance channel for the ${}^{12}\text{C}+\alpha$
mechanism ($E_{\alpha_{0}}=7.16\,\text{MeV}$) does not have a resonance
channel close to this threshold, the closest occurring at
$E_{x}=9.59\,\text{MeV}$. Instead, the low energy cross-section is largely
influenced by the $1^{-1}$ ($E_{x}=7.12\,\text{MeV}$) and $2^{+}$
($E_{x}=6.92\,\text{MeV}$) subthreshold states (see Figure 2 of Deboer et al.,
2017, for details). The primary influence of these two nearby subthreshold
states and the addition of possible resonant transitions in the wings of the
broad channel at $E_{x}=9.59\,\text{MeV}$ makes the nuclear cross-section
extremely difficult to estimate (see Fowler et al., 1967; Kunz et al., 2002;
An et al., 2016; Deboer et al., 2017, 2019; Aliotta et al., 2021).
During the core helium burning (CHB) stage, carbon is produced from the fusion
of three helium nuclei via the triple-$\alpha$ process (Salpeter, 1952;
Kippenhahn & Weigert, 1990; Salaris & Cassisi, 2005; Prialnik, 2009). As the
abundance of helium in the core depletes, the probability of carbon
interacting with helium to produce oxygen [via 12C($\alpha$, $\gamma$)16O] is
larger than that of the triple-$\alpha$ process at late times during the core
helium burning stage (Salaris & Cassisi, 2005). Thus, the 12C($\alpha$,
$\gamma$)16O reaction is of great importance and is vital to model the carbon-
oxygen (C/O) abundance in the inner chemical profiles for all stellar masses,
but particularly low– and intermediate–mass stars (Woosley & Weaver, 1995;
Weaver & Woosley, 1993; Wallerstein et al., 1997).
The C/O abundance, and therefore the 12C($\alpha$, $\gamma$)16O reaction, is important in many areas of stellar evolution, such as the pulsation properties of ZZ Ceti stars (De Gerónimo et al., 2015, 2017).
Differences between the considered 12C($\alpha$, $\gamma$)16O reaction rate
will also affect the duration of the core helium burning stage (Deboer et al.,
2017). In addition, the 12C($\alpha$, $\gamma$)16O reaction impacts supernova
explosions as the outcome is related to the composition of the final WD (e.g.
Iben & Tutukov, 1984; Wu et al., 2020) and third dredge-up episodes (TDUs)
during the asymptotic giant branch (AGB) stage (Frost & Lattanzio, 1996;
Karakas et al., 2002; Marigo, 2002; Karakas & Lattanzio, 2003; Marigo, 2007;
Cristallo et al., 2009; Weiss & Ferguson, 2009; Ventura & Marigo, 2009;
Kalirai et al., 2014; Matteucci, 2021). Furthermore, thermonuclear explosions of C/O WDs impact the ignition of Type Ia supernovae, events that are important for constraining cosmological parameters (Riess et al., 1998; Perlmutter et al., 1999). The enrichment of the outer layers of AGB stars from dredge-up and
the mass-loss affects the chemical evolution of galaxies (Matteucci, 2012;
Boothroyd & Sackmann, 1988; Kobayashi et al., 2020; Ventura et al., 2020;
Cristallo et al., 2015; Matteucci, 2021). Additionally, the 12C($\alpha$,
$\gamma$)16O reaction governs whether a star will form a neutron star or black
hole (Brown et al., 2001; Heger et al., 2002; Tur et al., 2007; West et al.,
2013; Sukhbold & Adams, 2020). Gravitational wave detections from black hole
mergers can also be used to constrain the 12C($\alpha$, $\gamma$)16O reaction
rate by determining the mass of the black hole and the fraction of carbon and
oxygen that remains (see Farmer et al., 2020, for details).
De Gerónimo et al. (2015) and De Gerónimo et al. (2017) consider three different reaction rates: an adopted rate from Angulo et al. (1999) and the high and low rates from Kunz et al. (2002). They apply these alternative rates from the CHB to the thermally pulsing asymptotic giant branch (TP-AGB) phase, focusing solely on how the pulsational properties of ZZ Ceti stars are affected, rather than on all evolutionary stages as we attempt in this work.
In this work, we use stellar evolutionary models as tools to study the impact
of the 12C($\alpha$, $\gamma$)16O reaction rate uncertainties on the stellar
structure and evolution of low– and intermediate–mass stars. The paper is
organised as follows. Section 2 describes the input physics and numerical tool
used to compute the evolutionary sequences, as well as a deeper discussion of
the considered 12C($\alpha$, $\gamma$)16O reaction rates used in this work. In
section 3 we present and discuss our results. We summarise our work in section 4, where we present our conclusions and indicate areas where this work may have an impact.
## 2 Numerical Tools
### 2.1 MESA Input Physics
In this work we employ the Modules for Experiments in Stellar Astrophysics
(MESA) code version-r15140 (see Paxton et al. (2011); Paxton et al. (2013,
2015, 2018), for details). We compute the full evolutionary sequence from the
zero age main sequence (ZAMS) through both core hydrogen and helium burning
stages, leading to the AGB and the white dwarf (WD) stage. The computation
stops when the stellar model reaches a luminosity of $\log(L/L_{\odot})=-3$ on
the WD cooling track. This stopping condition is applied such that the
sequences have experienced their evolution through the DAV instability strip
(Fontaine & Brassard, 2008; Winget & Kepler, 2008; Althaus et al., 2010). This
allows for asteroseismology of ZZ Ceti stars to be performed in the future.
The final WD masses obtained in this work lie in the range $0.513\leq M_{f}/M_{\odot}\leq 0.691$. The initial mass range considered in
this work is selected such that all sequences evolve into a carbon–oxygen WD
(examples of works which consider/include a similar mass range are Renedo et
al., 2010; Romero et al., 2015; De Gerónimo et al., 2017; Marigo et al.,
2020).
We compute a total of 246 sequences, with an initial metallicity of
$Z_{i}=0.01$ and 41 initial masses in the range of $0.90\leq
M_{i}/M_{\odot}\leq 3.05$. For each initial mass, we compute the full
evolution considering 6 different formulae for the 12C($\alpha$, $\gamma$)16O
reaction rate. The 6 reaction rates are adapted from Angulo et al. (1999) and
An et al. (2016). Each source comprises 3 reaction rates: the adopted rate,
the low and high limiting values, given by the reported uncertainties of the
respective rate (see Section 2.2). The rates taken from Angulo et al. (1999)
are part of the NACRE compilation and have been used extensively in other
computations (Renedo et al., 2010; Romero et al., 2015; De Gerónimo et al.,
2017). The reaction rates from An et al. (2016) are less recognised, but boast
a lower uncertainty on their reported adopted reaction rate. More detail on
these rates and their significance can be found in Section 2.2.
We use the reaction network ’basic.net’, which comprises 33 individual reactions, including the full p-p chain, the CNO cycle, and the 3$\alpha$ chain up to 24Mg, which contains the 12C($\alpha$, $\gamma$)16O reaction. This network tracks 8 individual isotopes (1H, 3He, 4He, 12C, 14N, 16O, 20Ne, 24Mg) in addition to elementary particles and $\alpha$ particles.
In our computations, we consider the default radiative opacity tables within
MESA. These are from Ferguson et al. (2005) (for $2.7\leq\textrm{log}T\leq
4.5$) and from the OPAL project (for $3.75\leq\textrm{log}T\leq 8.7$)
(Iglesias & Rogers, 1993, 1996). Furthermore, we consider OPAL Type 2 tables
as they allow for varying amounts of C and O, which are needed for helium
burning and beyond (Iglesias & Rogers, 1996; Paxton et al., 2011).
We adopt the standard mixing length free parameter as $\alpha=2.0118$. This
value is adopted from the work of Guzik et al. (2016) who found this value to
be a good approximation for sequences that consider the solar metallicity when
using the opacity tables from the OPAL project. To derive this value, Guzik et
al. (2016) compared calculated non-adiabatic solar oscillation frequencies and
solar interior sound speeds to observed frequencies and helioseismic
inferences. However, it should be noted that Guzik et al. (2016) consider an
initial metallicity of $Z_{i}=0.015$, rather than the value we consider in
this work ($Z_{i}=0.01$). Such a difference would alter the value of the
$\alpha$ parameter if a similar analysis was performed with this initial
metallicity consideration. Convective mixing is treated as a time-dependent
diffusion process, with the diffusion coefficient given as,
$D_{\mathrm{EM}}=D_{0}\exp\left(-2z/(fH_{P})\right)$ (1)
where $H_{P}$ is the pressure scale-height at the convective boundary, $D_{0}$
is the diffusion coefficient of the unstable regions that are near the
convective boundary, and $z$ is the geometric distance from the convective
boundary. $f$ is an adjustable free parameter that controls the efficiency of
mixing by setting the size of the overshooting region (Herwig et al., 1997;
Herwig, 2000). We take the value of $f=0.016$ for all regions of the model for
this work, following the same consideration of overshooting as Herwig (2000);
Weiss & Ferguson (2009); De Gerónimo et al. (2017). This treatment of the
convective boundaries was also adopted by other authors for single stellar
evolution computations (Weiss & Ferguson, 2009; Romero et al., 2015; De
Gerónimo et al., 2017).
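To make equation (1) concrete, the short sketch below evaluates the exponential decay of the overshooting diffusion coefficient; only $f=0.016$ is taken from the text, while $D_{0}$, $H_{P}$ and the sampled distances are illustrative placeholders.

```python
import numpy as np

def d_overshoot(z, d0, hp, f=0.016):
    """Exponential overshooting diffusion coefficient of Eq. (1).

    z: distance from the convective boundary; hp: pressure scale height
    (same units as z); d0: diffusion coefficient near the boundary.
    """
    return d0 * np.exp(-2.0 * z / (f * hp))

# D_EM falls by a factor of e over a distance f*H_P/2, so the overshooting
# region is geometrically thin for f = 0.016 (placeholder values below).
hp, d0 = 1.0e9, 1.0e14  # cm and cm^2 s^-1, purely illustrative
for z in np.linspace(0.0, 0.05 * hp, 5):
    print(f"z/H_P = {z / hp:.3f}   D_EM = {d_overshoot(z, d0, hp):.3e}")
```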
The presence of dredge-up episodes during the core helium burning stage is
relevant for the final composition of WDs (Prada Moroni & Straniero, 2002;
Straniero et al., 2003; Renedo et al., 2010). Although overshooting was considered at the boundary of the convective H-rich envelope during the TP-AGB, the third dredge-up episodes did not occur.
Therefore, the evolution of the hydrogen–exhausted core (which is hereafter
simply referred to as "the helium core mass") and the final mass of the
sequences for those which should experience some third dredge-up episodes will
be affected (see Section 3.2). We define the "helium core mass" as the region
from the centre until the local abundance of hydrogen is greater than
$10^{-6}$. Additional models were computed to assess the impact of the third
dredge-up on the core mass growth during the thermal pulses (see Section 3.2
and Appendix B for details).
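As an illustration of this definition, the sketch below extracts the helium core mass from a MESA profile; it assumes the py_mesa_reader package and the standard profile columns `mass` and `h1`, and the file name is a placeholder.

```python
import numpy as np
import mesa_reader as mr  # py_mesa_reader, assumed to be installed

# Helium core mass = mass coordinate of the innermost zone (moving outward
# from the centre) where the local hydrogen abundance first exceeds 1e-6.
prof = mr.MesaData('LOGS/profile42.data')  # placeholder profile file
mass = prof.mass[::-1]  # profiles are ordered surface-first; flip to centre-out
x_h1 = prof.h1[::-1]
idx = np.argmax(x_h1 > 1e-6)  # first centre-out zone with X_H > 1e-6
print(f"helium core mass: {mass[idx]:.3f} Msun")
```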
For regions that are stable against convection according to the Ledoux criterion but have an inversion of the mean molecular weight, we employ thermohaline mixing. In MESA this is treated as a diffusion process, as above, with a diffusion coefficient derived from the stability analyses of Ulrich (1972) and Kippenhahn et al. (1980). For the efficiency parameter of thermohaline mixing, we consider $\alpha_{th}=1.0$ (see Equation 14 of Paxton et al., 2013, for details). Thermohaline mixing was considered in order to smooth a
C/O core, during the early-AGB.
Towards the end of the core helium burning stage, when the central He
abundance is lower than $\sim$10%, breathing pulse–like instabilities may
appear. However, these events are attributed to adopted algorithms rather than
to the physics of convection (see Straniero et al., 2003; Romero et al., 2015;
Constantino et al., 2016, 2017, for details). To suppress the breathing
pulses, when the central abundance of He drops below 0.13, we neglect
convection until the central abundance of helium decreases below $10^{-6}$,
similar to the prescription used by Renedo et al. (2010) and Romero et al.
(2015). Without this prescription, the final carbon-to-oxygen (C/O) ratios can
vary rapidly (up to $\pm 0.1$) with small increments of initial mass
($0.05\,M_{\odot}$).
During the main sequence (MS), red–giant branch (RGB) and core helium burning
stages, the mass-loss due to stellar winds follows the rate based on the
Reimers formula (see Reimers, 1975). The asymptotic giant branch and
subsequent evolution follow a rate based on the Bloecker formula instead (see
Bloecker, 1995). We set our scale factors to be $\eta_{R}=0.5$ and
$\eta_{B}=0.2$ for the Reimers and Bloecker formulae, respectively. These
values are chosen as they reproduce a WD with a similar final mass to that
found by Renedo et al. (2010) for $M_{i}=1.00M_{\odot}$ with $Z_{i}=0.01$.
A grey atmosphere is employed for the entire evolution of all sequences, which
utilises the grey Eddington $\tau$ relation. We consider the equations of
state ELM EOS and DT2 EOS, which are derived from the HELM EOS (Timmes &
Swesty, 2000) and the SCVH tables (Saumon et al., 1995), respectively.
Once the star leaves the AGB, we employ an element diffusion process from the
work of Burgers (1969). We refer to element diffusion as the physical
mechanism for mixing chemicals that includes gravitational settling, thermal
diffusion and chemical diffusion. Gravitational settling leads to denser
element diffusing towards the core, while lighter elements float towards the
surface. Thermal diffusion acts in the same direction as gravitational
settling, although to a lesser extent, bringing highly charged and more
massive species to the central regions of the star. Chemical diffusion,
however, works against this general direction (see Iben & MacDonald, 1985;
Thoul et al., 1994, for details). In addition to the aforementioned processes,
MESA includes radiative accelerations (Hu et al., 2011) into their element
diffusion prescription. These radiative forces are negligible in hot regions,
as well as being computationally demanding. Hence, we do not consider the
effects of radiative levitation. Our element diffusion process is applied to
the following isotopes: 1H, 3He, 4He, 12C, 14N, 16O, 20Ne, 24Mg.
### 2.2 The 12C($\alpha$, $\gamma$)16O Reaction
Here we discuss a brief, yet relevant, history of 12C($\alpha$, $\gamma$)16O
reaction rate evaluations. We lead this into further detail for the
12C($\alpha$, $\gamma$)16O reaction rate prescriptions from Angulo et al.
(1999) and An et al. (2016), discussing their differences to the previous
determinations from the literature.
Fowler et al. (1967) organised the first compilation of reaction rate cross-
sections that included the 12C($\alpha$, $\gamma$)16O reaction. At the time,
many resonant factors were neglected and were updated by Caughlan & Fowler
(1988). However, it is believed that some resonances were still neglected and
the treatment of the S-factor in this work produced values that are too small
and require a scale factor of $\sim$2 to produce a realistic S-factor (Angulo
et al., 1999; Kunz et al., 2002; Heil et al., 2008; An et al., 2016; Deboer et
al., 2017, 2019).
Building upon the works of Fowler et al. (1967) and Caughlan & Fowler (1988), and associated works in between, Angulo et al. (1999) provided a strong
basis for the 12C($\alpha$, $\gamma$)16O reaction within the NACRE
compilation. Angulo et al. (1999) provided the reaction rates for 86 different
reactions, including 12C($\alpha$, $\gamma$)16O. For the S-factor
calculations, Angulo et al. (1999) considered the values for non-resonant
energies. For narrow resonances, however, they fit the resulting cross-section
using a Breit-Wigner model. When the effects of different resonant energies
overlap, they use a multi-resonance fit, shown in equation 29 of Angulo et al.
(1999). Angulo et al. (1999) state that their analysis is numerical for the
majority, although they do provide an analytical approach for each reaction,
for completeness. They find that their numerical approach yields a higher
accuracy for their calculated reaction rates. The quoted S-factor value from Angulo et al. (1999) for a stellar energy of 300 keV is $S(300\,\mathrm{keV})=199\pm 64\,\mathrm{keV\,b}$, resulting in a reaction rate ($RR$) of $RR(300\,\mathrm{keV})=(9.11^{+3.69}_{-3.67})\times 10^{-15}\,\mathrm{cm^{3}\,mol^{-1}\,s^{-1}}$. A stellar energy of $E=300\,\mathrm{keV}$ is often chosen as the energy at which to compare the S-factors across different works, as it is associated with the ignition of core helium burning. In this work, we consider the adopted rate of Angulo et al. (1999) (NACRE_A) and the highest and lowest reaction rates within the uncertainties (NACRE_H and NACRE_L, respectively). Hereafter, we refer to the collective 12C($\alpha$, $\gamma$)16O reaction rates from Angulo et al. (1999) as ’NACRE’.
An et al. (2016) point out that the resonance parameters used by Kunz et al.
(2002), which were taken from Tilley et al. (1993), neglect the ground state
transitions from the works of Brochard et al. (1975) and Ophel et al. (1976).
This results in a larger value for the expected reaction rate at helium
burning temperatures. Instead, An et al. (2016) use the reduced R-matrix and
S-factor derived by An et al. (2015) to estimate the reaction rate, which
accounted for all transitions.
In their computations, An et al. (2015) and An et al. (2016) found a significant reduction in the uncertainty of their S-factors when compared to that of Angulo et al. (1999), $S(300\,\mathrm{keV})=162.7\pm 7.3\,\mathrm{keV\,b}$. The reaction rate for the same energy is $RR(300\,\mathrm{keV})=(7.83\pm 0.35)\times 10^{-15}\,\mathrm{cm^{3}\,mol^{-1}\,s^{-1}}$. We consider the adopted rate from An et al. (2016) (An_A) and the highest and lowest reaction rates within the uncertainties (An_H and An_L, respectively). However, the S-factor calculation of An et al. (2015) seems to neglect external contributions for ground-state energy levels, making this approximation not valid for high-precision analysis (Deboer et al., 2017). Therefore, we treat the uncertainties of the 12C($\alpha$, $\gamma$)16O reaction rate from An et al. (2016) as illustrative differences, motivated by the urgent need for more precise 12C($\alpha$, $\gamma$)16O reaction rate determinations claimed by Kunz et al. (2002) and Tur et al. (2010). Some works further claim that the uncertainty must be less than 10% to be on par with non-nuclear physical uncertainties (see Woosley et al., 2003; Deboer et al., 2017, for details).
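Using only the numbers quoted above, the contrast between the two compilations at 300 keV can be made explicit with a short back-of-the-envelope computation (an illustration, not new data):

```python
# Adopted 300 keV reaction rates and their uncertainties quoted above,
# in units of 1e-15 cm^3 mol^-1 s^-1 (NACRE's upper error bar is used;
# the lower one is nearly identical).
nacre_a, nacre_err = 9.11, 3.69
an_a, an_err = 7.83, 0.35

print(f"An_A / NACRE_A       = {an_a / nacre_a:.2f}")       # ~0.86
print(f"NACRE relative error = {nacre_err / nacre_a:.1%}")  # ~40%
print(f"An relative error    = {an_err / an_a:.1%}")        # ~4.5%
```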
Figure 1 shows a comparison between the adopted reaction rates from An et al.
(2016), NACRE and all of their associated uncertainties. In this figure, we
depict, for each rate, the ratio between the rate and the value for NACRE_A, as
a function of temperature. For an analysis including other works, see Figure 4
of An et al. (2016). As can be seen from Figure 1, the An_A, An_H and An_L reaction rates are lower than NACRE_A for most temperatures within the shaded region characteristic of core helium burning temperatures. We therefore expect to have a larger C/O
ratio in the core after the central helium burning stage for the sequences
which consider the rate from An et al. (2016) when compared to those sequences
which consider NACRE_A. It can also be seen in Figure 1 that the range between
NACRE_H and NACRE_L includes all the other prescriptions within the region of
helium burning temperatures, which will lead to the largest differences in the
C/O ratio after the core helium burning stage. At higher temperatures (greater
than those considered to be helium burning temperatures) the reaction rate
from An et al. (2016) is larger than that from NACRE. These temperatures are
not reached in the sequences computed within this work.
Figure 1: Ratios of each reaction rate considered when compared to the adopted
NACRE rate for the 12C($\alpha$, $\gamma$)16O reaction, as a function of
temperature, where $T_{9}=T/10^{9}$. The beige shaded region defines the
temperatures where helium burning occurs. During the core helium burning stage
is also where the 12C($\alpha$, $\gamma$)16O reaction is most prominent. The
light-orange dotted and red dashed lines represent the NACRE_L and NACRE_H
considerations, respectively. The solid blue line defines the adopted rate
from An et al. (2016) with the An_L and An_H rates being depicted as light-
blue dotted and dark-blue dashed lines, respectively.
## 3 Results and Discussions
In this section we describe in detail the effects that the uncertainties of
the 12C($\alpha$, $\gamma$)16O reaction rate have on the inner structure and
evolution for low- and intermediate-mass single stars. As expected, during the
pre-main sequence, main sequence (MS) and red-giant branch (RGB), we find no
differences to the evolution since the 12C($\alpha$, $\gamma$)16O reaction
only becomes important, and increasingly more dominant, during the CHB as the
central helium abundance decreases (Salaris & Cassisi, 2005; Spruit, 2015;
Deboer et al., 2019). Thus, we report no difference between the different
12C($\alpha$, $\gamma$)16O reaction rates at the time of, or shortly after,
the helium-flash or a non-degenerate helium ignition. We only show the results
from the CHB, AGB and WD stages where we expect some differences to occur due
to the uncertainties and separate literature sources of the 12C($\alpha$,
$\gamma$)16O reaction rate. We consider each evolutionary stage separately in
chronological order.
### 3.1 The Core Helium Burning Phase
Figure 2 shows the carbon–to–oxygen (C/O) ratio for each star at the end of
CHB, as a function of initial mass. As expected due to the large uncertainties
of the reaction rate from NACRE, the smallest and largest C/O ratios come from
the NACRE_H and NACRE_L rates, respectively. Note that when all reaction rates
from An et al. (2016) are considered, the values for the C/O ratios are
between the values corresponding to NACRE_A and NACRE_L.
We find that the C/O ratio at the end of the CHB decreases for all considered
reaction rates around an initial mass of $M_{i}=1.90\,M_{\odot}$. This mass
corresponds to the minimum mass for which helium burning starts in non-
degenerate conditions, and will be referred to as the transition mass. The C/O
ratio increases again for higher initial masses (between $2.20\leq
M_{i}/M_{\odot}\leq 2.45$). We find that the initial mass where the increase
of the C/O ratio occurs is dependent on the considered 12C($\alpha$,
$\gamma$)16O reaction rate, such that higher reaction rates have a wider
initial mass range for the decreased C/O ratio and lower reaction rates have a
narrower initial mass range. For example, the NACRE_H has the widest range
($1.90\leq M_{i}/M_{\odot}\leq 2.45$) whereas the NACRE_L has the narrowest
range ($1.90\leq M_{i}/M_{\odot}\leq 2.20$). Furthermore, we find no
difference to the initial mass range between the adopted rate from An et al.
(2016) and the An_H and An_L rates. We also note that the decrease in the C/O ratio is more pronounced for less efficient reaction rates (see Figure 2 for details).
Figure 2: Central C/O ratio at the end of the CHB as a function of initial
mass. The red points represent the reaction rates considered by NACRE and the
blue points are those considered by An et al. (2016). Additionally, squares
represent the respective adopted rates while darker-coloured triangles and
lighter-coloured upside-down triangles represent the high and low limit
uncertainties, respectively.
Figure 3 shows the time spent in the CHB as a function of initial mass for the
High and Low reaction rate formulas for NACRE (left panel) and An et al.
(2016) (right panel). We consider the difference in the CHB age from the
values obtained using the respective adopted reaction rate for each panel.
Considering the NACRE rates (left panel of Figure 3), we find that the CHB
lifetime can be up to 12 Myr shorter (longer) from the adopted rate if we
consider NACRE_L (NACRE_H) reaction rate, which is roughly a 7% difference. On
the other hand, the differences between the An et al. (2016) rates are much lower (right panel of Figure 3), up to 4 Myr, translating to a difference of 4%. Such changes to the CHB lifetimes due to the limits of the uncertainties on
the 12C($\alpha$, $\gamma$)16O reaction rate are not negligible, particularly
for the rate taken from NACRE. Constantino et al. (2016) found that the
difference in the ratio of HB–to–AGB stars in a sample of 48 globular
clusters could be explained by the differences in the CHB duration due to the
uncertainties in the 12C($\alpha$, $\gamma$)16O reaction rate.
Figure 3: Differences to the duration of the CHB stage due to associated
reaction rate uncertainties as a function of initial mass. The differences are
calculated between each limit of the reaction rate due to their uncertainties
and the adopted rate of each case. The left panel shows the differences of the
uncertainties of the rate calculated by NACRE and the right panel shows the
same from the rate of An et al. (2016). Darker-coloured triangles and lighter-
coloured upside-down triangles represent the high and low limit uncertainties,
respectively.
The top panel of Figure 4 shows the CHB history of the convective mass. The
convective mass is defined as the mass-coordinate of the core convective
boundary, such that convection occurs between this mass-coordinate and the
centre. Additionally, the bottom panel of Figure 4 shows the luminosities of
the 3$\alpha$ process and the 12C($\alpha$, $\gamma$)16O reaction (the latter
will be referred to as C$\alpha$ luminosity), for the NACRE reaction rates. As
expected, the C$\alpha$ luminosity increases when the more efficient reaction
rates are considered. Furthermore, the contribution from the 3$\alpha$ process
decreases for higher reaction rates due to the helium reservoir being depleted
faster by the more efficient 12C($\alpha$, $\gamma$)16O reaction.
Mixing episodes due to the convective core during the CHB extend from the C/O core to the He-rich layers above, so we define the convective mass as the mass of the convective core. Figure 4 also shows that higher reaction rates produce
more mixing episodes which are characterised by sudden increases of the
convective mass. These enhanced convective episodes bring fresh helium from
the helium region above the C/O core which not only increases the duration of
the CHB but also increases the abundance of oxygen in the core (Ghasemi et
al., 2017; Guo & Li, 2018).
Convective mixing episodes induce a chemical discontinuity between the fully
mixed core and the radiative layer, increasing the opacity beyond the
convective boundary. In a class of CHB pulsating stars, sdB stars (see Heber,
2009, for an in depth discussion), g-modes propagate from the surface all the
way until the boundary of the convective core (Ghasemi et al., 2017). Since we
find significant differences to the size of the convective core and number of
mixing episodes between the NACRE adopted reaction rate and its uncertainties
for the 12C($\alpha$, $\gamma$)16O reaction rate, the precision of
astereoseismology for these objects is limited and must be considered in the
calculations of the pulsation period spectrum. However for the adopted rate
taken from An et al. (2016), the high and low limits (An_H and An_L,
respectively) do not produce a significant change to the convective core mass
and the total number of mixing episodes and would therefore produce a more
precise study of the g-mode pulsations (see Figure 13 in Appendix A, for an
example of the same case that considers the reaction rates from An et al.
(2016)). The implications for asteroseismology from the treatment to mixing
during the CHB has been studied by Constantino et al. (2015) who found that
changes to the composition and He-burning reaction rates do not significantly
change the period spacing of pulsations for pulsators during the CHB stage.
However, the period values could be more sensitive to the changes in the
chemical profile.
Figure 4: History of the convective mass (top panel), 3$\alpha$ luminosity and
the luminosity of the 12C($\alpha$, $\gamma$)16O reaction during the CHB
(bottom panel). The history is given in terms of the CHB duration. This plot
in particular considers all NACRE prescriptions for the 12C($\alpha$,
$\gamma$)16O reaction rate for an initial mass of $M_{i}=2.45\,M_{\odot}$.
Blue lines represent NACRE_H, orange-brown depicts NACRE_A and dark-brown
shows NACRE_L. Furthermore, the solid line represents the convective mass,
dotted lines show the luminosity of the 12C($\alpha$, $\gamma$)16O reaction
and dot-dash lines portray the 3$\alpha$ luminosity.
The total energy produced by the 12C($\alpha$, $\gamma$)16O reaction during
the CHB is presented in Figure 5. The values shown in Figure 5 are moving
averages. We compute the total energy by integrating the C$\alpha$ luminosity
with respect to time for the CHB duration. Figure 5 shows the ratio between
the different reaction rates and the NACRE_A (top panel) and An_A (bottom
panel) reaction rates, as a function of initial mass. If we consider the
reaction rates from An et al. (2016), the differences are generally smaller than 10%; the largest difference occurs for the sequence with an initial mass of $2.85\,M_{\odot}$ that considers An_H. In most cases, the differences are
no larger than 5% (70.7% of the sequences for An_H and 82.9% of the sequences
for An_L).
We find larger differences between the limits of 12C($\alpha$, $\gamma$)16O
NACRE rates when compared to the NACRE_A formula, as shown in the top panel of
Figure 5. In this case we also compare the adopted reaction rate from An et
al. (2016). If we consider how NACRE_H differs from NACRE_A, we find that the
energy production for the majority of the sequences is more than 10% greater than that of the NACRE_A case, with a few sequences exceeding a difference of 20%. For NACRE_L, the carbon energy produced differs by more than 30% from the NACRE_A rate. The extra energy produced by the high rates, when compared to the adopted rates, increases the temperature gradient further, allowing convection to continue and causing the extra mixing episodes shown in Figure 4 (Kippenhahn &
Weigert, 1990; Prialnik, 2009).
Considering the adopted rate from An et al. (2016), the absolute value of the difference in carbon energy produced is roughly the same whether An_H or An_L is selected. This is not the case for the NACRE rates. A limiting factor for the amount of energy produced is the abundance of available helium. This limit matters more for the NACRE_H case, since the lack of available helium inhibits further reactions. NACRE_L always produces less carbon energy and so is not limited by the helium abundance or lack thereof. The smaller uncertainties of the rates taken from An et al. (2016) are not large enough to produce such an effect.
Figure 5: Ratios of the total energy produced by the 12C($\alpha$,
$\gamma$)16O reaction as a function of initial mass. Values are presented in
the form of moving averages. The energy produced is calculated by integrating the C$\alpha$ luminosity shown in Figure 4 with respect to time. The ratios in the top panel are in terms of the NACRE_A rate and the
ratios in the bottom panel are made in terms of An_A. The red points represent
the reaction rates considered by NACRE and the blue points are those
considered by An et al. (2016). Additionally, squares represent the respective
adopted rates while darker-coloured triangles and lighter-coloured upside-down
triangles represent the high and low limit uncertainties, respectively.
The CHB stage is where the 12C($\alpha$, $\gamma$)16O reaction is the most
active. In particular, we find that the largest differences due to the
considered 12C($\alpha$, $\gamma$)16O reaction rate appear in the final C/O
ratio, CHB duration, energy generation rate and the number of experienced
mixing episodes. The primary reason for these changes is the change in energy generation, which affects the convection efficiency in this phase. Furthermore, we find that the differences
between the An_H and An_L rates from the An_A rate are generally
insignificant, unlike those of the NACRE uncertainties which are intrinsically
larger. A final point to add is that, in future works, the use of overshooting
parameters specifically designed for the CHB would be interesting. Works such
as Spruit (2015) claim to keep the convective boundaries stable, removing the need for the manual breathing pulse suppression performed in this work, whilst keeping "stable" convection active throughout the evolution (Spruit, 2015;
Constantino et al., 2017).
### 3.2 The Asymptotic Giant Branch Phase
During the AGB the energy production is given by two shell sources, the
hydrogen-shell at the base of the hydrogen-rich envelope and the He-shell on
top of the C/O core. Hydrogen burning occurs through the CNO cycle, while He-
burning is through the 3$\alpha$ process. Towards the end of the AGB, the He-
burning shell will become thin enough to trigger unstable burning, and the
thermal pulses (TPs) begin (e.g. Kippenhahn & Weigert, 1990; Iben, 1991).
During the interpulse period between the TPs, the outer convection zone may be
deep enough to bring the products of He-shell burning to the surface, this is
known as the third dredge-up (TDU) (Wallerstein et al., 1997; Busso et al.,
1999; Herwig, 2005; Karakas & Lattanzio, 2014).
Well-known consequences of TDUs are a reduction of the helium core mass and
changes to the surface composition, leading to the formation of C-stars (Frost
& Lattanzio, 1996; Busso et al., 1999; Karakas et al., 2002; Weiss & Ferguson,
2009; Romero et al., 2015; Marigo et al., 2020). The extent of the reduction
of the helium core mass from TDU episodes is parameterised by the dredge-up
efficiency parameter, $\lambda_{d}$, defined as the fraction of helium core mass lost during the TDU episode over the helium core mass growth since the last TDU (see Karakas et al., 2002; Marigo et al., 2013, for details). The 12C($\alpha$, $\gamma$)16O reaction
during this stage is essentially inactive. There may be some fusion reactions
between 12C and alpha particles at the edge of the C/O core but they are,
however, insignificant. Thus, any difference between the sequences during the
AGB is due to the effect that the 12C($\alpha$, $\gamma$)16O reaction rate has
during the CHB.
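For reference, the dredge-up efficiency parameter introduced above can be written explicitly, following the definition in Karakas et al. (2002), as

$\lambda_{d}=\Delta M_{\text{dredge}}/\Delta M_{\text{H}},$

where $\Delta M_{\text{dredge}}$ is the helium core mass dredged up during a TDU episode and $\Delta M_{\text{H}}$ is the helium core mass growth since the previous TDU.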
Figure 6 shows the helium core mass at the first TP of each sequence as a
function of initial mass. A minimum value occurs for an initial mass
$M_{i}=1.90\,M_{\odot}$, which is the transition point as described in Section
3.1. The same result was found in the work of Kalirai et al. (2014), whose
initial models come from those produced in Bressan et al. (2012). However,
their transition point occurs at $M_{i}=2.00\,M_{\odot}$ due to the larger
initial metallicity affecting the mass at which core helium burning ignites
under degenerate conditions (Bertelli et al., 1986; Romero et al., 2015). We find
that there is no significant difference to the helium core mass at the first
TP as a result of different 12C($\alpha$, $\gamma$)16O reaction rates for
masses lower than the transition point. Above this mass, the maximum
difference between the NACRE rates is $\sim 0.01M_{\odot}$, with NACRE_L
producing lower helium core masses and NACRE_H producing larger helium core
masses. This is due to the difference in energy outputs between the adopted
rate, NACRE_A, and the NACRE_H/NACRE_L rates. Higher reaction rates during the
CHB increase the temperature throughout the star, which favours the CNO cycle
(Boeltzig et al., 2016), allowing the helium core mass to develop further than
in sequences that consider lower reaction rates. There are no significant
differences in the helium core mass at the first TP between the adopted rate
from An et al. (2016) and An_H/An_L for any of the considered initial masses.
Figure 6: Helium core mass at the start of the first TP as a function of
initial mass. All of the considered reaction rates and their uncertainties are
shown within this figure. We find a minimum to the helium core mass for the
same initial mass, which corresponds to the transition mass where core helium
burning begins on a non-degenerate core rather than an electron-degenerate
core ($M_{i}=1.90\,M_{\odot}$). Within the uncertainties, we find differences up to
0.01 M⊙ for masses larger than $M_{i}=1.90\,M_{\odot}$. The red points
represent the reaction rates considered by NACRE and the blue points are those
from An et al. (2016). Additionally, squares represent the respective adopted
rates while darker-coloured triangles and lighter-coloured upside-down
triangles represent the high and low limit uncertainties, respectively.
Figure 7 shows the growth of the helium core mass during the TP-AGB as a
function of initial mass for each considered reaction rate. We find that the
dramatic increase of core growth (defined as helium core mass growth $\geq
10\%$; Kalirai et al., 2014) occurs in the range $1.70\leq M_{i}/M_{\odot}\leq
2.60$, with a maximum increase of 19% occurring at $M_{i}\approx
2.00\,M_{\odot}$. This result is in agreement with that of Bird & Pinsonneault
(2011) and is similar to that of Kalirai et al. (2014), who find a helium core
growth up to 30%. This discrepancy between their work and ours is due not
only to a different initial metallicity, but also to their use of a less
efficient mass-loss scheme for stages prior to the AGB (Reimers law with
$\eta_{R}=0.2$ (Bressan et al., 2012)). Thus, the models used by Kalirai et
al. (2014) have a larger mass of hydrogen fuel to produce a larger final mass
(see Table 1 for our values of this variable and Bird & Pinsonneault (2011)
for an in-depth discussion of the hydrogen fuel variable). Furthermore,
possible differences to the energy produced in the H-rich envelope during the
TP-AGB may affect the rate of the helium core growth (see Forestini &
Charbonnel, 1997; Marigo et al., 2013; Kalirai et al., 2014, for details).
Considering only the difference in helium core mass growth between the NACRE_A
rate and its NACRE_H/NACRE_L limits, we find that NACRE_L has a larger core
growth and NACRE_H a smaller one. The increased core growth during the AGB
for the NACRE_L sequences is due to the smaller helium core mass at the first
TP (see Figure 6) and, consequently, the larger amount of fuel available to
sustain He-shell burning, particularly for initial masses above the transition
point, where the core growth differences are greater (see Table 1).
Additionally, during the TP-AGB,
we find differences in the energy generation from the CNO cycle between the
NACRE_H/NACRE_L limits in comparison with the NACRE_A. The energy generation
can be up to 25% lower (higher) when the NACRE_H (NACRE_L) reaction rate is
considered.
Figure 7: Percentage growth of the helium core mass during the AGB as a
function of initial mass. Growth is calculated as the difference between the
final mass of the core and the helium core mass described in Figure 6. We find
that the largest growth occurs for initial masses $\approx 2.00\,M_{\odot}$,
peaking at 19%. Above initial masses of $M_{i}=2.90\,M_{\odot}$, it appears
that the growth begins to plateau around 8-9%. The red points represent the
reaction rates considered by NACRE and the blue points are those from An et
al. (2016). Additionally, squares represent the respective adopted rates while
darker-coloured triangles and lighter-coloured upside-down triangles represent
the high and low limit uncertainties, respectively.
Figure 8 shows the number of thermal pulses as a function of initial mass for
each considered reaction rate. Moreover, it shows that sequences with lower
reaction rates experience more TPs than those with higher rates. This is
related to the larger amount of available hydrogen, which aids the outward
growth of the helium core through a greater number of unstable He-shell
burning episodes, i.e. TPs. We do
not find any M-star to C-star transitions (see Marigo et al., 2020, for
example) as convective overshooting about the boundary between the helium core
and the He–exhausted core was disregarded during the TP-AGB, inhibiting the
TDU (Herwig, 2000; Romero et al., 2015). However, overshooting still occurred
at the boundary of the H–rich core. We define the "He–exhausted core" as the
region from the centre until the local abundance of helium is greater than
$10^{-6}$.
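As a small illustration of how such a boundary can be located in practice (a minimal sketch with our own, hypothetical names, not the actual procedure used in our MESA-based computations), one can scan the abundance profile outwards from the centre:

```python
def he_exhausted_core_mass(mass_coord, y_he, threshold=1e-6):
    """Mass of the He-exhausted core: the outermost mass coordinate,
    moving out from the centre, before the local helium abundance
    exceeds `threshold`.

    `mass_coord` and `y_he` are centre-to-surface profiles of the mass
    coordinate (in solar masses) and the helium mass fraction.
    """
    core_mass = 0.0
    for m, y in zip(mass_coord, y_he):
        if y > threshold:
            break
        core_mass = m
    return core_mass

# Illustrative profile only (not model output):
m = [0.0, 0.1, 0.2, 0.3, 0.4]
y = [1e-9, 1e-8, 1e-7, 1e-3, 0.3]
print(he_exhausted_core_mass(m, y))  # 0.2
```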
Thermal pulses are strongly dependent on the mass–loss rate, helium core mass
and initial metallicity (Karakas et al., 2002; Cristallo et al., 2009; Weiss &
Ferguson, 2009; Renedo et al., 2010; Romero et al., 2015; De Gerónimo et al.,
2017). We find that the number of thermal pulses in our computations is lower
than that from the works of Weiss & Ferguson (2009); Renedo et al. (2010) and
Romero et al. (2015) for a given initial mass, a similar treatment of
convection and a similar helium core mass at the beginning of the TP-AGB
phase. Differences in the number of TPs could be related to the different
mass–loss schemes adopted during the RGB stage. In this work we consider the mass–loss
prescription from Bloecker (1995) while the works of Weiss & Ferguson (2009);
Renedo et al. (2010) and Romero et al. (2015) consider a mass–loss scheme that
produces a "super wind" stage towards the last TPs, making it more efficient
in these last TPs but less so in the early TP-AGB (see Vassiliadis & Wood,
1993; van Loon et al., 2005, for details). However, the trend in the number of
experienced TPs as a function of initial mass obtained in our work agrees with
other works (see Weiss & Ferguson, 2009; Renedo et al., 2010; Romero et al.,
2015).
To assess the effect of the TDU during the TP-AGB, we computed additional
sequences, allowing convective overshooting to occur at all fully– or
semi–convective boundaries, with $f=0.016$ (see Appendix B for details on its
effect). For sequences that consider the NACRE_A prescription, TDU episodes
occur for initial masses $M_{i}\geq 2.40\;M_{\odot}$, with the dredge-up
efficiency parameter ($\lambda_{d}$) showing values of
$\lambda_{d}=0.033-0.124$ that increase with increasing initial mass. The
abundance of carbon and oxygen at the surface does increase during each TDU in
these additional models, but the C/O ratio is still lower than 1, meaning that
our models show an oxygen-dominated surface.
parameter may be necessary to produce C–stars (see Herwig et al., 1997;
Karakas et al., 2002; Weiss & Ferguson, 2009; Romero et al., 2015; Marigo et
al., 2020, for examples of C-star transitions). For sequences where convective
overshooting was considered across all boundaries during the AGB, we find a
decrease in the final helium core mass of up to 0.63%. This value is much
lower than the 15% decrease found by Karakas et al. (2002); Romero et al. (2015).
The sequences that have initial masses $M_{i}<2.40\;M_{\odot}$ do not show any
third dredge–up episodes; as such, we do not expect any difference in the
growth of the helium core or the final mass. For those sequences with initial
masses $M_{i}\geq 2.40\;M_{\odot}$, a more detailed study of the convective
boundaries during the TP-AGB is required to thoroughly analyse why we find
such weak dredge–up efficiency parameters.
In the case of NACRE_H and NACRE_L, we find that TDU episodes occur for the
same initial mass range as that of the NACRE_A sequences ($2.40\leq
M_{i}/M_{\odot}\leq 3.05$). Additionally, the dredge-up efficiency parameters
are also similar to those of the NACRE_A sequences, with
$\lambda_{d}=0.040-0.123$. From the results gathered in this work, we find
that the uncertainties of current 12C($\alpha$, $\gamma$)16O reaction rates
are not significant in modelling the TDU.
The 12C($\alpha$, $\gamma$)16O reaction is negligible during the TP-AGB.
Instead, the main energy sources are the 3$\alpha$ reaction series and the CNO
cycle within the H-rich envelope (Herwig, 2005; Karakas & Lattanzio, 2014).
Thus, we do not find any significant change to the peak TP luminosity nor the
depth of each TDU, since the changes in core mass at the beginning of the
TP-AGB resulting from the uncertainties of the 12C($\alpha$, $\gamma$)16O
reaction rate are negligible, as shown in Figure 6 (see Wallerstein et al.,
1997; Wagenhuber & Groenewegen, 1998; Busso et al., 1999; Herwig, 2005;
Karakas & Lattanzio, 2014, for details). However, the uncertainty of the
overshooting efficiency raises a greater uncertainty in the surface
composition during the AGB; as such, we leave a detailed discussion for a
future work that considers the overshooting efficiency in more detail (Abia et
al., 2002; Herwig, 2005; Cristallo et al., 2009; Ventura & Marigo, 2009;
Karakas & Lattanzio, 2014).
Figure 8: Number of TPs experienced as a function of initial mass. Each reaction rate consideration and its uncertainties are shown. We find that the number of TPs peaks at initial masses $\approx 2.00\,M_{\odot}$, in line with the largest core growth, as in Figure 7. We also show that lower reaction rates for the 12C($\alpha$, $\gamma$)16O reaction produce more TPs. The red points represent the reaction rates considered by NACRE and the blue points are those from An et al. (2016). Additionally, squares represent the respective adopted rates while darker-coloured triangles and lighter-coloured upside-down triangles represent the high and low limit uncertainties, respectively.

$M_{i}/M_{\odot}$ | $\Delta M_{\text{growth}}/M_{\odot}$: NACRE_H | NACRE_A | NACRE_L | An_A | $M_{\text{fuel}}/M_{\odot}$: NACRE_H | NACRE_A | NACRE_L | An_A
---|---|---|---|---|---|---|---|---
1.00 | 0.009 | 0.010 | 0.009 | 0.009 | 0.007 | 0.008 | 0.007 | 0.008
1.50 | 0.030 | 0.027 | 0.031 | 0.031 | 0.024 | 0.022 | 0.026 | 0.025
1.60 | 0.037 | 0.038 | 0.039 | 0.038 | 0.030 | 0.031 | 0.031 | 0.031
2.00 | 0.085 | 0.091 | 0.089 | 0.091 | 0.069 | 0.073 | 0.072 | 0.073
2.90 | 0.043 | 0.044 | 0.050 | 0.048 | 0.035 | 0.035 | 0.040 | 0.039
Table 1: Values showing the TP-AGB helium core mass growth and fuel mass. We
report the values from the following reaction rate considerations: NACRE_H,
NACRE_A, NACRE_L and An_A. We do not report the values from the uncertainties
of the rate taken from An et al. (2016) since they are negligible when
compared to their adopted rate.
### 3.3 The White Dwarf Final Cooling Track
Figure 9 shows the initial-to-final mass relation (IFMR) for all sequences
produced in this work. We find that there is no significant difference in the
final mass for any given initial mass due to the 12C($\alpha$, $\gamma$)16O
reaction rate. Considering the largest difference in the reaction rates,
between NACRE_H and NACRE_L, the largest difference in the final mass for a
given initial mass is less than $0.01\,M_{\odot}$ ($<2\%$).
In the interest of pursuing a global IFMR, we compare our IFMR to those of
other works at a similar metallicity. We consider the IFMRs from the works
of Weidemann (2000); Salaris et al. (2009) and Renedo et al. (2010). We find a
similar trend with the work of Weidemann (2000), both of which consider the
same mass-loss scheme from Bloecker (1995) for the AGB phase. The IFMRs from
the works of Salaris et al. (2009) and Renedo et al. (2010) consider the
mass–loss scheme from Vassiliadis & Wood (1993) for the AGB and show a much
steeper gradient in their IFMRs. However, the core masses between this work
and the works of Weidemann (2000); Salaris et al. (2009) and Renedo et al.
(2010) are similar at the first TP. Thus, it is reasonable to assume that the
difference is due to their considered mass-loss scheme for the IFMR
determination.
Given the third-order polynomial character of the IFMR computed in this work,
we fit a function to the NACRE_A final masses to produce a general relation
from our results. This allows for a comparison to other IFMRs, and final
masses for other initial masses can easily be estimated, if desired. The
following fit reproduces the IFMR of NACRE_A well, with an R-squared value of
$R^{2}=0.9995$:
$M_{f}/M_{\odot}=0.02047(M_{i}/M_{\odot})^{3}-0.1051(M_{i}/M_{\odot})^{2}+0.2323(M_{i}/M_{\odot})+0.3783$ (2)
where $M_{f}$ is the final mass and $M_{i}$ is the initial mass. The non-
linear relationship described by Equation 2 is caused by the mass-loss rate
adopted on the AGB. The Bloecker (1995) scheme in particular has a large
dependency on luminosity. It would be interesting to see how our IFMR holds up
against observational data, as well as its dependency on metallicity, an
important dependence as discussed in Romero et al. (2015).
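For convenience, Equation 2 is straightforward to evaluate numerically; the following minimal Python sketch (the function name is ours and purely illustrative) reproduces the fit, which is only meaningful within the initial-mass range modelled in this work:

```python
def ifmr_nacre_a(m_i):
    """Final WD mass (in M_sun) from initial mass (in M_sun), following
    the third-order NACRE_A fit of Equation 2; valid only within the
    initial-mass range covered by the computed sequences."""
    return 0.02047 * m_i**3 - 0.1051 * m_i**2 + 0.2323 * m_i + 0.3783

# Example: estimate final masses for a few of the initial masses studied.
for m_i in (1.0, 1.5, 2.0, 2.9):
    print(f"M_i = {m_i:.2f} M_sun -> M_f = {ifmr_nacre_a(m_i):.3f} M_sun")
```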
Figure 9: Initial-to-final mass relation of all sequences calculated as part
of this work. Also shown are other IFMRs from the works of Weidemann (2000);
Salaris et al. (2009); Renedo et al. (2010) (yellow stars, purple dashed line
and black squares, respectively) for a comparison of their trends. The red
points represent the reaction rates considered by NACRE, and the blue points
are those from An et al. (2016). Additionally, squares represent the
respective adopted rates while darker-coloured triangles and lighter-coloured
upside-down triangles represent the high and low limit uncertainties,
respectively. We find that the slope of the IFMR has a strong dependency on
the mass-loss scheme considered during the AGB, with the scheme from
Vassiliadis & Wood (1993) producing a steeper gradient and that from
Bloecker (1995) a shallower one.
In Figure 10 we show, in panel a), the final ages of a WD that has cooled to
an effective temperature of $T_{\textrm{eff}}=10\,000$K (log scale) as a
function of initial mass for all the sequences computed in this work. The
differences in the final ages due to the High/Low limits of each considered
12C($\alpha$, $\gamma$)16O reaction rate are in general negligible, with
variations of the order $\sim 0.01$ Gyr for both the NACRE and An et al.
(2016) 12C($\alpha$, $\gamma$)16O reaction rates. The variations in the
reported final ages due to the uncertainties of the 12C($\alpha$, $\gamma$)16O
reaction rate are an order of magnitude lower than the age uncertainties of
the populations studied in the works of Hansen et al. (2013); Forbes et al.
(2015); Campos et al. (2016). As such, the impact that the 12C($\alpha$,
$\gamma$)16O reaction rate has on the final ages of WD models is currently
negligible compared to the greater uncertainty of ageing stellar populations.
Panels c) and d) of Figure 10 show the moving average for the time spent on
the cooling track for the NACRE and An et al. (2016) 12C($\alpha$,
$\gamma$)16O reaction rates, respectively. We define this quantity as the time
taken for a star on the final cooling track to cool from its maximum
effective temperature to an effective temperature of
$T_{\text{eff}}=10\,000$K. During the final cooling track, the differences in
the duration due to the reaction rates between the Adopted and High/Low limits
generally differ up to $0.030$ Gyr for those of NACRE and up to $0.015$ Gyr
for An et al. (2016). The general trend is in agreement with past discussions
of the effect of the 12C($\alpha$, $\gamma$)16O reaction rate on the cooling
time during this stage of evolution, such that more oxygen-rich cores produce
shorter cooling times. This is due to the gravitational energy release during
stratification occurring at earlier times for more oxygen-rich cores. As a
consequence, the WD is left with a lower thermal content to feed the surface
luminosity at later times. The larger the luminosity at which the
stratification occurs, the shorter the resulting cooling times will be
(D’Antona & Mazzitelli, 1990; Prada Moroni & Straniero, 2002; Salaris et al.,
2010). Furthermore, for the High/Low limits of the NACRE rate, we find that
NACRE_L produces a greater absolute difference than that of NACRE_H. This is
due to the availability of helium during the CHB as discussed in Section 3.1.
Figure 10: Panel a) shows the final age (log scale) of the star on the final
cooling track with an effective temperature $T_{\text{eff}}=10\,000$K. Panel
b) shows the time spent on the cooling track, defined as the time taken for a
WD on the final cooling track to cool from its maximum effective temperature
to an effective temperature of $T_{\text{eff}}=10\,000$K. Panel c) and d) show
the moving average for the difference of cooling times between the High/Low
limits and the Adopted rate for the NACRE and An et al. (2016) 12C($\alpha$,
$\gamma$)16O reaction rate, respectively. All panels are represented as
functions of initial mass. The NACRE reaction rates are shown as different
shades of red and those from An et al. (2016) are depicted by shades of blue.
Furthermore, squares represent the respective adopted rates while darker-
coloured triangles and lighter-coloured upside-down triangles represent the
high and low limit uncertainties, respectively. In general, we find that the
uncertainties of the 12C($\alpha$, $\gamma$)16O reaction rate have a
negligible effect on the final ages of the stars at this point, whereas the
cooling time can differ by up to 8%.
After the settling and diffusion processes described in Section 2, the final
oxygen abundances within the core of the sequences are presented in Figure 11,
as a function of initial mass. We find trends in the oxygen mass fraction at
this stage similar to those found at the end of the CHB, although there are
slight increases in the oxygen mass fraction due to the aforementioned
diffusion processes (Unglaub & Bues, 2000). Additionally, diffusion affects
the C/O ratio throughout the star up to the surface and not just in the core
(see Herwig, 2000; Straniero et al., 2003, for details).
The onset of crystallisation begins when the core cools to a certain
temperature, $T_{c}$ (Segretain et al., 1994; Horowitz et al., 2010). This
temperature is dependent on the internal composition of the star. Through
observations of the globular cluster NGC 6397, Winget et al. (2009) report
that the crystallisation of the WD core is similar to that of a pure carbon
core. According to the phase diagram produced in Horowitz et al. (2010) and
their limits for the maximum crystallisation temperature, this would require a
limit to the oxygen mass fraction of $X_{\text{O}}\leq 0.64$, which in turn
requires the S-factor at 300 keV to have an upper limit of
$S(300\,\text{keV})\leq 170\,\text{keV b}$. Considering the relationship
between oxygen mass fraction and initial mass presented in Figure 11, we find
that NACRE_H and NACRE_A produce central oxygen abundances that are too large
for a crystallisation process similar to that found by Horowitz et al. (2010).
Meanwhile, the rates of An et al. (2016) agree not only with the oxygen mass
fraction limit presented by Horowitz et al. (2010), but also with their
derived S-factor for an energy of 300 keV. Thus, we find that sequences dedicated to
studying crystallisation using the method presented by Horowitz et al. (2010)
should consider a lower reaction rate than that from NACRE for the
12C($\alpha$, $\gamma$)16O reaction to keep their analysis consistent with the
input physics that they use.
Figure 11: Central oxygen mass fraction for the final WD as a function of
initial mass. We show each calculated sequence. The trends for each considered
reaction rate are similar to those found in Figure 2. There has been a slight
increase in the central oxygen abundance since the CHB due to diffusion
processes in the star. Additionally, squares represent the respective adopted
rates while darker-coloured triangles and lighter-coloured upside-down
triangles represent the high and low limit uncertainties, respectively.
Figure 12 shows the abundance profiles of white dwarf models with a stellar
mass of $M_{*}=0.548M_{\odot}$, $T_{\textrm{eff}}=20\,000$K and an initial
mass of $M_{i}=1.30\,M_{\odot}$. Sequences that consider a reaction rate from
NACRE are shown in the top panel and those from An et al. (2016) are
represented in the bottom panel. All sequences finish with similar structure
to those shown in Figure 12. The profiles depict a DA white dwarf
configuration, with a hydrogen-rich envelope, a helium buffer and a C/O core.
We hereafter refer to the point where the abundance of carbon reaches its
maximum as the carbon peak.
We show that the interior of the star has a consistent trend whereby the
carbon peak is higher for lower reaction rates, an outcome of a less efficient
reaction that leaves behind a larger abundance of carbon. Furthermore,
the position of the carbon peak changes with the reaction rates, moving away
from the centre as the reaction rate increases. We find in general that
differences between An_A and the An_H/An_L reaction rates do not affect this
region drastically (bottom panel), unlike that of the NACRE 12C($\alpha$,
$\gamma$)16O reaction rate considerations (top panel).
The abundance profile and composition gradients in these central regions that
lie within the range of $1<-\textrm{log}_{10}(1-M_{r}/M_{*})<2$ affect the
peaks in the Brunt-Väisälä frequency, which disturbs the period spectrum
structure (see Córsico & Althaus, 2006; Romero et al., 2012b, for more
details). This is an outcome of the pulsation modes that are trapped in this
region through the mode-trapping mechanism. We confirm that uncertainties of
the 12C($\alpha$, $\gamma$)16O reaction rate may affect the pulsation period
spectrum. Another region where the Brunt-Väisälä frequency is affected is in
the He/H transition region. In particular, the position of the He/H transition
will impact the period spectrum (Romero et al., 2012a, 2013).
Figure 12: In both panels we show the abundance profiles of sequences
considering an initial mass of $M_{i}=1.30\,M_{\odot}$. The top panel
represents the adopted NACRE rate and its uncertainties, and the bottom panel
shows the same for the An et al. (2016) rates. The line styles for each rate
are shown in the legend in the bottom panel and the colours for each element
are shown in the legend in the top panel. A colour version is available
online.
## 4 Conclusions
In this work we analyse the impact that the limits of the 12C($\alpha$,
$\gamma$)16O reaction rate has on the inner structure and evolutionary
properties of low- and intermediate-mass stars. We consider the 12C($\alpha$,
$\gamma$)16O reaction rates from NACRE (Angulo et al., 1999) and An et al.
(2016). We have computed stellar sequences from the ZAMS until the remnant
white dwarf reaches a luminosity of $\text{log}(L/L_{\odot})=-3$. We applied
similar starting parameters for different ensembles of reaction rates where we
consider the adopted rate along with the upper and lower limits within the
uncertainties of each source. We summarise our main results below.
1. 1.
The C/O ratio of the core in the final model of each sequence is affected by
the 12C($\alpha$, $\gamma$)16O reaction rate as expected, with lower C/O
ratios for larger reaction rates. We find that the decreased C/O ratio for
initial masses greater than the transition mass increases again at higher
masses. The mass at which this increase occurs depends on the considered
12C($\alpha$, $\gamma$)16O reaction rate, such that it occurs at higher
masses if higher reaction rates are considered. This is due to an increased
number of mixing episodes, caused by larger energy outputs increasing the
convective efficiency, which brings fresh helium to the core during the CHB.
Note that significant differences between the adopted rate and high/low limits
occur only for the rates taken from NACRE, which have a much larger
uncertainty than those from An et al. (2016).
2. 2.
The CHB lifetime depends on the considered reaction rate: a higher reaction
rate produces a longer lifetime. We deem this to be a consequence of the
number of mixing episodes extending the core helium burning lifetime, although
further research would be beneficial to confirm this. Between the adopted rate
and high/low limits, we find a difference of up to 12 Myr for the NACRE rates
and up to 4 Myr for those from An et al. (2016).
3. 3.
The helium core mass at the beginning of the first TP is independent of the
considered 12C($\alpha$, $\gamma$)16O reaction rate up to and including the
transition mass. Above this mass, we find a maximum difference of $\approx
0.01M_{\odot}$ between NACRE_H and NACRE_L, with lower reaction rates
producing a lower helium core mass. Additionally, our minimum helium core mass
at this point occurs at our transition mass.
4. 4.
Growth of the helium core mass between the first TP and the final mass reaches
a maximum of 19%, with growths greater than 10% occurring in the mass range
$1.70\leq M_{i}/M_{\odot}\leq 2.60$ which is in agreement with Bird &
Pinsonneault (2011) and Kalirai et al. (2014). The largest growths occur for
the lower reaction rates due to the larger amount of hydrogen remaining after
the CHB. There are no significant differences between the rates taken from An
et al. (2016) because their limits are smaller relative to their adopted rate
than those from NACRE.
5. 5.
The number of TPs during the TP-AGB is dependent on the considered
12C($\alpha$, $\gamma$)16O reaction rate. We find that lower reaction rates
increase the number of TPs due to a larger hydrogen fuel aiding the outward
growth of the helium core mass by fuelling the unstable He-shell with a
greater supply of fresh helium.
6. 6.
TDU episodes occur for sequences in the initial mass range of $2.40\leq
M_{i}/M_{\odot}\leq 3.05$ with dredge-up efficiency parameters
$\lambda_{d}=0.033-0.124$. This mass range is independent of the considered
12C($\alpha$, $\gamma$)16O reaction rate. Additionally, the differences in
$\lambda_{d}$ between the considered 12C($\alpha$, $\gamma$)16O reaction rate
uncertainties are not significant. Furthermore, the depth of each TDU is
independent of the 12C($\alpha$, $\gamma$)16O reaction rate.
7. 7.
The IFMR produced in this work has a similar trend to that of Weidemann
(2000), who also consider a similar mass-loss prescription during the AGB. The
IFMRs of Renedo et al. (2010) and Salaris et al. (2009) show a much steeper
gradient and they consider the Vassiliadis & Wood (1993) mass-loss
prescription during the AGB.
8. 8.
We find that the final ages of the sequences are in general independent of the
considered reaction rate. However, during the final cooling track, we find
differences up to 10% between the adopted rates and high/low limits. This is
true for both those rates taken from NACRE and An et al. (2016). This
difference in the cooling time agrees with the works of Prada Moroni &
Straniero (2002); Salaris et al. (2010); Isern et al. (2013).
9. 9.
The final C/O ratio in the core shows a similar trend to that at the end of
the CHB. The oxygen abundance increases slightly due to the diffusion
processes. The final oxygen mass fractions for the NACRE_A and NACRE_H
sequences are greater than the values derived by Horowitz et al. (2010) for
crystallisation of a C/O core. The reaction rates from An et al. (2016) agree
closely with the derived values of Horowitz et al. (2010). As such, future
works should consider a lower reaction rate than that of NACRE when
considering the crystallisation process of Horowitz et al. (2010).
10. 10.
The inner structure of the star is affected by the uncertainties within the
considered reaction rates, particularly those from NACRE. The position and
height of the carbon peak is significantly affected by the difference between
the adopted rate and high/low limits of the reaction rate for the NACRE
considerations. This may affect the modes in which pulsations can occur within
the ZZ Ceti instability strip (Córsico & Althaus, 2006; Romero et al., 2012a).
Although we analyse the possible evolutionary stages where more accurate
12C($\alpha$, $\gamma$)16O reaction rates are needed, a deeper analysis of
some effects is still required; for instance, a quantification of how the
pulsation modes of sdB and ZZ Ceti stars are affected. Furthermore, we
conclude that a lower reaction rate than that of NACRE_A is favourable for the
Horowitz et al. (2010) considerations of crystallisation; however, this must
be further analysed as well. Limiting the uncertainties of the 12C($\alpha$,
$\gamma$)16O reaction rate to 10% of the adopted rate, as in An et al. (2016),
yields a much better consistency of stellar parameters.
## Acknowledgements
BTP, ADR and SOK acknowledge support by CNPq and PRONEX-FAPERGS/CNPq. This
study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de
Nível Superior - Brasil (CAPES) - Finance Code 001. This research has made use
of NASA’s Astrophysics Data System. AGI acknowledges support from the
Netherlands Organisation for Scientific Research (NWO). We also thank the
developers of the MESA software, which was used extensively in this work.
Finally, we thank the anonymous referee for their input, which helped make
this a more complete work.
## Data Availability
The data is available upon request to the corresponding author.
## References
* Abia et al. (2002) Abia C., et al., 2002, ApJ, 579, 817
* Aliotta et al. (2021) Aliotta M., et al., 2021, arXiv e-prints, p. arXiv:2109.14418
* Althaus et al. (2010) Althaus L. G., Córsico A. H., Isern J., García-Berro E., 2010, A&ARv, 18, 471
* An et al. (2015) An Z.-D., et al., 2015, Phys. Rev. C, 92, 045802
* An et al. (2016) An Z.-D., Ma Y.-G., Fan G.-T., Li Y.-J., Chen Z.-P., Sun Y.-Y., 2016, ApJ, 817, L5
* Angulo et al. (1999) Angulo C., et al., 1999, Nuclear Phys. A, 656, 3
* Bertelli et al. (1986) Bertelli G., Bressan A., Chiosi C., Angerer K., 1986, A&AS, 66, 191
* Bethe (1939) Bethe H. A., 1939, Physical Review, 55, 434
* Bird & Pinsonneault (2011) Bird J. C., Pinsonneault M. H., 2011, ApJ, 733, 81
* Bloecker (1995) Bloecker T., 1995, A&A, 297, 727
* Boeltzig et al. (2016) Boeltzig A., et al., 2016, European Physical Journal A, 52, 75
* Boothroyd & Sackmann (1988) Boothroyd A. I., Sackmann I. J., 1988, ApJ, 328, 653
* Bressan et al. (2012) Bressan A., Marigo P., Girardi L., Salasnich B., Dal Cero C., Rubele S., Nanni A., 2012, MNRAS, 427, 127
* Brochard et al. (1975) Brochard F., Chevallier P., Disdier D., Rauch V., Scheibling F., 1975, J. Phys. France, 36, 113
* Brown et al. (2001) Brown G. E., Heger A., Langer N., Lee C.-H., Wellstein S., Bethe H. A., 2001, New Astron., 6, 457
* Burbidge et al. (1957) Burbidge E. M., Burbidge G. R., Fowler W. A., Hoyle F., 1957, Reviews of Modern Physics, 29, 547
* Burgers (1969) Burgers J. M., 1969, Flow Equations for Composite Gases
* Busso et al. (1999) Busso M., Gallino R., Wasserburg G. J., 1999, ARA&A, 37, 239
* Campos et al. (2016) Campos F., et al., 2016, MNRAS, 456, 3729
* Caughlan & Fowler (1988) Caughlan G. R., Fowler W. A., 1988, Atomic Data and Nuclear Data Tables, 40, 283
* Constantino et al. (2015) Constantino T., Campbell S. W., Christensen-Dalsgaard J., Lattanzio J. C., Stello D., 2015, MNRAS, 452, 123
* Constantino et al. (2016) Constantino T., Campbell S. W., Lattanzio J. C., van Duijneveldt A., 2016, MNRAS, 456, 3866
* Constantino et al. (2017) Constantino T., Campbell S. W., Lattanzio J. C., 2017, MNRAS, 472, 4900
* Córsico & Althaus (2006) Córsico A. H., Althaus L. G., 2006, A&A, 454, 863
* Cristallo et al. (2009) Cristallo S., Straniero O., Gallino R., Piersanti L., Domínguez I., Lederer M. T., 2009, ApJ, 696, 797
* Cristallo et al. (2015) Cristallo S., Straniero O., Piersanti L., Gobrecht D., 2015, ApJS, 219, 40
* D’Antona & Mazzitelli (1990) D’Antona F., Mazzitelli I., 1990, ARA&A, 28, 139
* De Gerónimo et al. (2015) De Gerónimo F. C., Córsico A. H., Althaus L. G., Romero A. D., 2015, The Impact of the Uncertainties in the 12C($\alpha$,$\gamma$)16O Reaction Rate on the Asteroseismology of ZZ Ceti Stars: First Results. p. 225
* De Gerónimo et al. (2017) De Gerónimo F. C., Althaus L. G., Córsico A. H., Romero A. D., Kepler S. O., 2017, A&A, 599, A21
* Deboer et al. (2017) Deboer R. J., et al., 2017, in APS Division of Nuclear Physics Meeting Abstracts. p. CC.006
* Deboer et al. (2019) Deboer R., Brune C., Wiescher M., 2019, in APS Division of Nuclear Physics Meeting Abstracts. p. FG.003
* Eddington (1920) Eddington A. S., 1920, The Observatory, 43, 341
* Farmer et al. (2020) Farmer R., Renzo M., de Mink S. E., Fishbach M., Justham S., 2020, ApJ, 902, L36
* Ferguson et al. (2005) Ferguson J. W., Alexander D. R., Allard F., Barman T., Bodnarik J. G., Hauschildt P. H., Heffner-Wong A., Tamanai A., 2005, ApJ, 623, 585
* Fontaine & Brassard (2008) Fontaine G., Brassard P., 2008, PASP, 120, 1043
* Forbes et al. (2015) Forbes D. A., Pastorello N., Romanowsky A. J., Usher C., Brodie J. P., Strader J., 2015, MNRAS, 452, 1045
* Forestini & Charbonnel (1997) Forestini M., Charbonnel C., 1997, A&AS, 123, 241
* Fowler et al. (1967) Fowler W. A., Caughlan G. R., Zimmerman B. A., 1967, ARA&A, 5, 525
* Frost & Lattanzio (1996) Frost C. A., Lattanzio J. C., 1996, ApJ, 473, 383
* Ghasemi et al. (2017) Ghasemi H., Moravveji E., Aerts C., Safari H., Vučković M., 2017, MNRAS, 465, 1518
* Guo & Li (2018) Guo J.-J., Li Y., 2018, MNRAS, 478, 3290
* Guzik et al. (2016) Guzik J. A., Fontes C. J., Walczak P., Wood S. R., Mussack K., Farag E., 2016, IAU Focus Meeting, 29B, 532
* Hansen et al. (2013) Hansen B. M. S., et al., 2013, Nature, 500, 51
* Heber (2009) Heber U., 2009, ARA&A, 47, 211
* Heger et al. (2002) Heger A., Woosley S. E., Rauscher T., Hoffman R. D., Boyes M. M., 2002, New Astron. Rev., 46, 463
* Heil et al. (2008) Heil M., et al., 2008, Phys. Rev. C, 78, 025803
* Herwig (2000) Herwig F., 2000, A&A, 360, 952
* Herwig (2005) Herwig F., 2005, ARA&A, 43, 435
* Herwig et al. (1997) Herwig F., Bloecker T., Schoenberner D., El Eid M., 1997, A&A, 324, L81
* Herwig et al. (1999) Herwig F., Blöcker T., Langer N., Driebe T., 1999, A&A, 349, L5
* Horowitz et al. (2010) Horowitz C. J., Schneider A. S., Berry D. K., 2010, Phys. Rev. Lett., 104, 231101
* Hoyle (1946) Hoyle F., 1946, MNRAS, 106, 343
* Hoyle (1954) Hoyle F., 1954, ApJS, 1, 121
* Hu et al. (2011) Hu H., Tout C. A., Glebbeek E., Dupret M. A., 2011, MNRAS, 418, 195
* Iben (1991) Iben I. J., 1991, in Michaud G., Tutukov A. V., eds, Vol. 145, Evolution of Stars: the Photospheric Abundance Connection. p. 257
* Iben & MacDonald (1985) Iben I. J., MacDonald J., 1985, ApJ, 296, 540
* Iben & Tutukov (1984) Iben I. J., Tutukov A. V., 1984, ApJS, 54, 335
* Iglesias & Rogers (1993) Iglesias C. A., Rogers F. J., 1993, ApJ, 412, 752
* Iglesias & Rogers (1996) Iglesias C. A., Rogers F. J., 1996, ApJ, 464, 943
* Isern et al. (2013) Isern J., Artigas A., García-Berro E., 2013, in European Physical Journal Web of Conferences. p. 05002 (arXiv:1212.0806), doi:10.1051/epjconf/20134305002
* Kalirai et al. (2014) Kalirai J. S., Marigo P., Tremblay P.-E., 2014, ApJ, 782, 17
* Karakas & Lattanzio (2003) Karakas A. I., Lattanzio J. C., 2003, Publ. Astron. Soc. Australia, 20, 279
* Karakas & Lattanzio (2014) Karakas A. I., Lattanzio J. C., 2014, Publ. Astron. Soc. Australia, 31, e030
* Karakas et al. (2002) Karakas A. I., Lattanzio J. C., Pols O. R., 2002, Publ. Astron. Soc. Australia, 19, 515
* Katsuma (2012) Katsuma M., 2012, ApJ, 745, 192
* Kippenhahn & Weigert (1990) Kippenhahn R., Weigert A., 1990, Stellar Structure and Evolution
* Kippenhahn et al. (1980) Kippenhahn R., Ruschenplatt G., Thomas H. C., 1980, A&A, 91, 175
* Kobayashi et al. (2020) Kobayashi C., Karakas A. I., Lugaro M., 2020, ApJ, 900, 179
* Kunz et al. (2002) Kunz R., Fey M., Jaeger M., Mayer A., Hammer J. W., Staudt G., Harissopulos S., Paradellis T., 2002, ApJ, 567, 643
* Marigo (2002) Marigo P., 2002, A&A, 387, 507
* Marigo (2007) Marigo P., 2007, A&A, 467, 1139
* Marigo et al. (2013) Marigo P., Bressan A., Nanni A., Girardi L., Pumo M. L., 2013, MNRAS, 434, 488
* Marigo et al. (2020) Marigo P., et al., 2020, Nature Astronomy
* Matteucci (2012) Matteucci F., 2012, Chemical Evolution of Galaxies, doi:10.1007/978-3-642-22491-1.
* Matteucci (2021) Matteucci F., 2021, A&ARv, 29, 5
* Ophel et al. (1976) Ophel T. R., Frawley A. D., Treacy P. B., Bray K. H., 1976, Nuclear Physics A, 273, 397
* Paxton et al. (2011) Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton B., et al., 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton B., et al., 2015, ApJS, 220, 15
* Paxton et al. (2018) Paxton B., et al., 2018, ApJS, 234, 34
* Perlmutter et al. (1999) Perlmutter S., Turner M. S., White M., 1999, Phys. Rev. Lett., 83, 670
* Prada Moroni & Straniero (2002) Prada Moroni P. G., Straniero O., 2002, ApJ, 581, 585
* Prialnik (2009) Prialnik D., 2009, An Introduction to the Theory of Stellar Structure and Evolution
* Reimers (1975) Reimers D., 1975, Circumstellar envelopes and mass loss of red giant stars. pp 229–256
* Renedo et al. (2010) Renedo I., Althaus L. G., Miller Bertolami M. M., Romero A. D., Córsico A. H., Rohrmann R. D., García-Berro E., 2010, ApJ, 717, 183
* Riess et al. (1998) Riess A. G., et al., 1998, AJ, 116, 1009
* Romero et al. (2012a) Romero A. D., Córsico A. H., Althaus L. G., Miller Bertolami M. M., 2012a, arXiv e-prints, p. arXiv:1204.6101
* Romero et al. (2012b) Romero A. D., Córsico A. H., Althaus L. G., Kepler S. O., Castanheira B. G., Miller Bertolami M. M., 2012b, MNRAS, 420, 1462
* Romero et al. (2013) Romero A. D., Kepler S. O., Córsico A. H., Althaus L. G., Fraga L., 2013, ApJ, 779, 58
* Romero et al. (2015) Romero A. D., Campos F., Kepler S. O., 2015, MNRAS, 450, 3708
* Salaris & Cassisi (2005) Salaris M., Cassisi S., 2005, Evolution of Stars and Stellar Populations
* Salaris et al. (2009) Salaris M., Serenelli A., Weiss A., Miller Bertolami M., 2009, ApJ, 692, 1013
* Salaris et al. (2010) Salaris M., Cassisi S., Pietrinferni A., Kowalski P. M., Isern J., 2010, ApJ, 716, 1241
* Salpeter (1952) Salpeter E. E., 1952, ApJ, 115, 326
* Saumon et al. (1995) Saumon D., Chabrier G., van Horn H. M., 1995, ApJS, 99, 713
* Segretain et al. (1994) Segretain L., Chabrier G., Hernanz M., Garcia-Berro E., Isern J., Mochkovitch R., 1994, ApJ, 434, 641
* Spruit (2015) Spruit H. C., 2015, A&A, 582, L2
* Straniero et al. (2003) Straniero O., Domínguez I., Imbriani G., Piersanti L., 2003, ApJ, 583, 878
* Sukhbold & Adams (2020) Sukhbold T., Adams S., 2020, MNRAS, 492, 2578
* Thoul et al. (1994) Thoul A. A., Bahcall J. N., Loeb A., 1994, ApJ, 421, 828
* Tilley et al. (1993) Tilley D. R., Weller H. R., Cheves C. M., 1993, Nuclear Physics A, 564, 1
* Timmes & Swesty (2000) Timmes F. X., Swesty F. D., 2000, ApJS, 126, 501
* Tur et al. (2007) Tur C., Heger A., Austin S. M., 2007, ApJ, 671, 821
* Tur et al. (2010) Tur C., Heger A., Austin S. M., 2010, ApJ, 718, 357
* Ulrich (1972) Ulrich R. K., 1972, ApJ, 172, 165
* Unglaub & Bues (2000) Unglaub K., Bues I., 2000, A&A, 359, 1042
* Vassiliadis & Wood (1993) Vassiliadis E., Wood P. R., 1993, ApJ, 413, 641
* Ventura & Marigo (2009) Ventura P., Marigo P., 2009, MNRAS, 399, L54
* Ventura et al. (2020) Ventura P., Dell’Agli F., Lugaro M., Romano D., Tailo M., Yagüe A., 2020, A&A, 641, A103
* Wagenhuber & Groenewegen (1998) Wagenhuber J., Groenewegen M. A. T., 1998, A&A, 340, 183
* Wallerstein et al. (1997) Wallerstein G., et al., 1997, Reviews of Modern Physics, 69, 995
* Weaver & Woosley (1993) Weaver T., Woosley S., 1993, Physics Reports, 227, 65
* Weidemann (2000) Weidemann V., 2000, A&A, 363, 647
* Weiss & Ferguson (2009) Weiss A., Ferguson J. W., 2009, A&A, 508, 1343
* West et al. (2013) West C., Heger A., Austin S. M., 2013, ApJ, 769, 2
* Winget & Kepler (2008) Winget D. E., Kepler S. O., 2008, ARA&A, 46, 157
* Winget et al. (2009) Winget D. E., Kepler S. O., Campos F., Montgomery M. H., Girardi L., Bergeron P., Williams K., 2009, ApJ, 693, L6
* Woosley & Weaver (1995) Woosley S. E., Weaver T. A., 1995, ApJS, 101, 181
* Woosley et al. (2003) Woosley S. E., Heger A., Rauscher T., Hoffman R. D., 2003, Nuclear Phys. A, 718, 3
* Wu et al. (2020) Wu C., Wang B., Wang X., Maeda K., Mazzali P., 2020, MNRAS, 495, 1445
* Xu et al. (2013) Xu Y., Takahashi K., Goriely S., Arnould M., Ohta M., Utsunomiya H., 2013, Nuclear Physics A, 918, 61
* van Loon et al. (2005) van Loon J. T., Cioni M. R. L., Zijlstra A. A., Loup C., 2005, A&A, 438, 273
## Appendix A Convection during CHB for An rates
Figure 13 shows the CHB history of the convective mass and the luminosities of
the 3$\alpha$ process and the 12C($\alpha$, $\gamma$)16O reaction, for the
reaction rates taken from An et al. (2016). This figure is analogous to that
of Figure 4, which shows the same for the NACRE rates. We provide this figure
to show that we do not find any significant difference in the number of
mixing episodes or in the luminosities from the 3$\alpha$ process and the
12C($\alpha$, $\gamma$)16O reaction. Thus, the high/low limits for the
12C($\alpha$, $\gamma$)16O reaction rate from An et al. (2016) do not affect
the CHB in terms of energy production, mixing episodes or CHB duration. This
is not the case for NACRE, as discussed in Section 3.1.
Figure 13: History of the convective mass (top panel), 3$\alpha$ luminosity
and the luminosity of the 12C($\alpha$, $\gamma$)16O reaction during the CHB
(bottom panel). The history is given in terms of the CHB duration. This plot
in particular considers all An et al. (2016) prescriptions for the
12C($\alpha$, $\gamma$)16O reaction rate for an initial mass of
$M_{i}=2.45\,M_{\odot}$. Blue lines represent An_H, orange-brown depicts An_A
and dark-brown shows An_L. Furthermore, the solid line represents the
convective mass, dotted lines show the luminosity of the 12C($\alpha$,
$\gamma$)16O reaction and dot-dash lines portray the 3$\alpha$ luminosity.
## Appendix B Additional AGB Models
Figure 14 shows the Kippenhahn diagram for the case of $M_{i}=3.05\,M_{\odot}$
during the TP-AGB in the original NACRE_A models. We represent the mass co-
ordinate on the first y–axis and the surface C/O ratio on the second y–axis.
Both values are plotted against the age of the sequence. These models did not
consider convective overshooting around the border of the He–exhausted core.
Green slashed areas show convective regions, red back slashed areas represent
semi-convective regions and the purple regions are where overshooting occurs.
The purple dotted line shows the history of the He–exhausted core mass and the
blue dotted line represents the history of the helium core mass. The colour
bar measures the energy generation rate from nuclear reactions. The solid
orange line represents the C/O ratio at the surface. It can be seen that the
overshooting occurs close to the envelope boundary and there is no
overshooting about the semi-convective region of the He–exhausted core. As a
result of this, we do not observe TDU episodes in the original models. We can
be sure that there are no TDU episodes because the helium core mass does not
decrease and the surface C/O ratio remains constant, both of which would
change if TDUs were experienced (Frost & Lattanzio, 1996; Herwig et al., 1999;
Karakas et al., 2002; Weiss & Ferguson, 2009; Romero et al., 2015; De Gerónimo
et al., 2017; Marigo et al., 2020).
Figure 15 shows the same as Figure 14 but allows for convective overshooting
at each boundary. We find that, with the new prescription, convection and
overshooting extend throughout the helium buffer. For this reason, material
can be "dredged up" from the core to the surface. This results in the helium
core and He–exhausted core masses changing with each convective episode, an
outcome of TDU episodes (Frost & Lattanzio, 1996; Herwig et al., 1999;
Karakas et al., 2002; Weiss & Ferguson, 2009; Romero et al., 2015; De Gerónimo
et al., 2017; Marigo et al., 2020). Furthermore, we find an increase in the
surface C/O ratio with each TDU as material travels from the stellar interior
to the surface. The surface C/O ratio, however, remains less than 1. This
indicates a larger overshooting parameter is required for M–star to C–star
transitions.
Figure 14: Kippenhahn diagram during the TP-AGB for the case of
$M_{i}=3.05\,M_{\odot}$ of the original models. We represent the mass
co-ordinate on the first y–axis and the surface C/O ratio on the second
y–axis. Both values are plotted against the age of the sequence. This model
did not consider convective overshooting at the boundary of the He–exhausted
core, which inhibited the TDU. The colour bar measures the energy generation
rate from nuclear reactions. The blue dotted line represents the helium core
mass while the purple dotted line represents the He–exhausted core. Green
slashed regions show convection and the red back-slashed regions represent
where regions of the star are semi-convective. Finally, purple areas are where
overshooting occurs.
Figure 15: Kippenhahn diagram during the TP-AGB for the case of
$M_{i}=3.05\,M_{\odot}$ of the new models. We represent the mass co-ordinate
on the first y–axis and the surface C/O ratio on the second y–axis. Both
values are plotted against the age of the sequence. The colour bar measures
the energy generation rate from nuclear reactions. This model considered
convective overshooting at all convective boundaries, allowing for TDUs to
occur. The blue dotted line represents the helium core mass while the purple
dotted line represents the He–exhausted core. Green slashed regions show
convection and the red back-slashed regions represent where regions of the
star are semi-convective. Finally, purple areas are where overshooting occurs.
# Teaching Design by Contract using Snap!
1st Marieke Huisman, Formal Methods and Tools, University of Twente, Enschede, The Netherlands, <EMAIL_ADDRESS>
2nd Raúl E. Monti, Formal Methods and Tools, University of Twente, Enschede, The Netherlands, <EMAIL_ADDRESS>
###### Abstract
With the progress in deductive program verification research, new tools and
techniques have become available to support design-by-contract reasoning about
non-trivial programs written in widely-used programming languages. However,
deductive program verification remains an activity for experts, with ample
experience in programming, specification and verification. We would like to
change this situation, by developing program verification techniques that are
available to a larger audience. In this paper, we present how we developed
prototypal program verification support for Snap!. Snap! is a visual
programming language, aiming in particular at high school students. We added
specification language constructs in a similar visual style, designed to make
the intended semantics clear from the look and feel of the specification
constructs. We provide support both for static and dynamic verification of
Snap! programs. Special attention is given to the error messaging, to make
this as intuitive as possible.
###### Index Terms:
verification, software, education
## I Introduction
Research in deductive program verification has made substantial progress over
the last years: tools and techniques have been developed to reason about non-
trivial programs written in widely-used programming languages, the level of
automation has substantially increased, and bugs in widely-used libraries have
been found [1, 2, 3]. However, the use of deductive verification techniques
remains the field of expert users, and substantial programming knowledge is
necessary to appreciate the benefits of these techniques.
We believe that it is important to make deductive program verification
techniques accessible also to novice programmers. Therefore, we have to teach
the Design-by-Contract [4] (DbC) approach, which requires the programmer to
explicitly specify the assumptions and responsibilities of code in a modular
way, in parallel with actually teaching programming, i.e. DbC should be taught
as an integral part of the process leading from design to implementation. In
this paper, we make the Design-by-Contract idea accessible to high school
students, in combination with appropriate tool support, which is currently
unavailable.
Concretely, this paper presents a Design-by-Contract approach for Snap! [5].
Snap! is a visual programming language targeting high school students. The
design of Snap! is inspired by Scratch, another widely-used visual programming
language. Compared to Scratch, Snap! has some more advanced programming
features. In particular, Snap! provides the possibility to create parametrised
reusable blocks, basically modelling user-defined functions. Also the look and
feel of Snap! targets high school students, whereas Scratch aims at an even
younger age group. Snap! has been successfully integrated in high school
curricula, by its integration in the _Beauty and Joy of Computing_ course [6].
This course combines programming skills with a training in abstract
computational thinking.
The first step to support Design-by-Contract for Snap! is to define a suitable
specification language. The visual specification language that we propose in
this paper is built as a seamless extension of Snap!, i.e. we propose a number
of new specification blocks and natural modifications of existing ones. These
variations capture the main ingredients for the Design-by-Contract approach,
such as pre- and postconditions. Moreover, we also provide blocks to add
assertions and loop invariants in a program and we extend the standard
expression pallets of Snap! with some common expressions to ease
specifications. The choice of specification constructs is inspired by existing
specification languages for Design-by-Contract, such as JML [7], choosing the
most frequently used constructs with a clear and intuitive meaning. Moreover,
all verification blocks are carefully designed to reflect the intended
semantics of the specifications in a visual way.
A main concern for a programmer, after writing the specification of the
intended behaviour of their programs, should be to validate that these
programs behave according to their specification. Therefore, we provide two
kinds of tool support: (i) runtime assertion checking [8], which checks that
specifications are not violated during a particular program execution,
and (ii) static checking (or deductive verification) [9], which verifies that
all possible program executions respect its specifications. The runtime
assertion checker is built as an extension of the standard Snap! execution
mechanism. The deductive verification support is built by providing a
translation from a Snap! program into Boogie [10].
Another important aspect to take into account for a good learning experience
is the error messaging that indicates when a specification is violated. We
have integrated these messages in Snap!’s standard error reporting system, again
sticking to the look and feel of standard Snap!. Moreover, we have put in
effort to make the error messages as clear as possible, so that also a
relatively novice programmer can understand why the implementation deviates
from the specification.
## II Background
### II-A Snap!
Snap! is a visual programming language. It has been designed to introduce
children (but also adults) to programming in an intuitive way. At the same
time, it is also a platform for the serious study of computer science [11]. Snap!
actually re-implements and extends Scratch [12]. Programming in Snap! is done
by dragging and dropping blocks into the coding area. Blocks represent common
program constructs such as variable declarations, control flow statements
(branching and loops), function calls and assignments. Snapping blocks
together, the user builds a script and visualises its behaviour by means of
turtle graphics visualisations, called sprites. Sprites can change shape,
move, show bubbled text, play music, etc. For all these effects, dedicated
blocks are available.
Figure 1: The Snap! working area.
The Snap! interface divides the working area into three parts: the palette
area, the scripting area, and the stage area, indicated by labels 1, 2 and 3,
respectively, in Fig. 1. On the left, the various programming blocks are
organised into palettes that describe their natural use. For instance, the
_Variables_ palette contains blocks for declaring and manipulating variables.
In Snap!, variables are dynamically typed. Blocks are dragged and dropped from
the palettes into the scripting area, located at the centre of the working
area, where the Snap! program is constructed. Blocks can be arranged by
snapping them together, or by inserting them as arguments of other blocks.
Blocks can only be used as arguments if their shapes match the shape of the
argument slots in the target block. These shapes actually provide a hint about
the expected evaluation type of a block, for instance, rounded slots for
numbers and diamond slots for booleans.
The behaviour of the script is shown with turtle graphics drawings in the
stage area located in the rightmost part of the screen.
In addition, at the bottom of the palette area, there is a “Make a block”
button. This allows the user to define their own _Build Your Own Block_
(BYOB) blocks. When pressed, a new floating “Block Editor” window pops out
with a new coding area, in which the behaviour of the personalised block can
be defined (similar to how a script is made in the scripting area). Label $4$
in Fig. 1 shows a BYOB block being edited. Once defined, the BYOB block
becomes available to be used just as any other predefined block.
### II-B Program Verification
The basis of the Design-by-Contract approach [13] is that the behaviour of all
program components is defined as a contract. For example, a function contract
specifies the conditions under which a function may be called (the function’s
_precondition_), and it specifies the guarantees that the function provides to
its caller (the function’s _postcondition_). There exist several specification
languages that have their roots in this Design-by-Contract approach. For
example the Eiffel programming language has built-in support for pre- and
postconditions [14], and for Java, the behavioural interface language JML [15]
is widely used. As is common for such languages, we use the keyword _requires_
to indicate a precondition, and the keyword _ensures_ to indicate a
postcondition.
If a program behaviour is specified using contracts, various techniques can be
used to validate whether an implementation respects the contract.
Dynamic verification validates an implementation w.r.t. a specification at
runtime. This means that, whenever during program execution a specification is
reached, it will be checked for this particular execution that the property
specified indeed holds. In particular, this means that whenever a function
will be called, its precondition will be checked, and whenever the function
returns, its postcondition will be checked. An advantage of this approach is
that it is easy and fast to use it: one just runs a program and checks if the
execution does not violate the specifications. A disadvantage is that it only
provides guarantees about a concrete execution.
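To make this concrete in textual form, the following is a minimal Python sketch of such a runtime checking scheme (our own illustration; the decorator, function and parameter names are hypothetical and not part of the Snap! tool described here). Mirroring the design described later in Section III, each contract is the conjunction of several boolean predicates:

```python
from functools import wraps

def contract(requires=(), ensures=()):
    """Wrap a function with runtime pre-/postcondition checks.

    `requires` and `ensures` are tuples of boolean predicates; the
    effective condition is their conjunction, so empty tuples default
    to true, like unfilled specification slots.
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for pre in requires:   # precondition: checked at call time
                assert pre(*args, **kwargs), f"precondition of {func.__name__} violated"
            result = func(*args, **kwargs)
            for post in ensures:   # postcondition: checked on return
                assert post(result, *args, **kwargs), f"postcondition of {func.__name__} violated"
            return result
        return wrapper
    return decorator

@contract(requires=(lambda x: x >= 0,),
          ensures=(lambda r, x: r * r <= x < (r + 1) * (r + 1),))
def isqrt(x):
    """Integer square root, kept deliberately simple."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))  # 3; isqrt(-1) would fail its precondition check
```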
In contrast, static verification aims at verifying that all possible
behaviours of a function respect its contract. This is done by applying Hoare
logic proof rules [16] or using Dijkstra’s predicate transformer semantics
[17]. Applying these rules results in a set of first-order proof obligations;
if these proof obligations can be proven it means that the code satisfies its
specification. An advantage of this approach is that it guarantees correctness
of all possible behaviours. A disadvantage is that it is often
labour-intensive: many additional annotations, such as loop invariants, are
needed to guide the prover.
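For a concrete textbook illustration of how such proof obligations arise,
consider verifying the Hoare triple $\{x\geq 0\}\; x:=x+1\; \{x>0\}$.
Dijkstra’s rule for assignment gives the weakest precondition
$wp(x:=x+1,\,x>0)=(x+1>0)$, so the proof obligation handed to the prover is
the first-order formula $x\geq 0\implies x+1>0$, which is discharged
automatically; for loops, the analogous obligations involve the user-supplied
invariant.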
## III Visual Program Specifications
This section discusses how to add visual specification constructs to Snap!.
Our goal was to do this in such a way that (1) the intended semantics of the
specification construct is clear from the way it is visualised, and (2) it
smoothly integrates with the existing programming constructs in Snap!
Often, Design-by-Contract specifications are added as special comments in the
code. For example, in JML a function contract is written in a special comment,
tagged with an @-symbol, immediately preceding the function declaration. The
tag ensures that the comment can be recognised as part of the specification.
There also exist languages where for example pre- and postconditions are part
of the language (e.g., Eiffel [18], Spec# [19]). We felt that for our goal,
specifications should be integrated in a natural way in the language, rather
than using comments. Therefore, we introduce variations of the existing block
structures, to which we add suitable slots for the specifications. This
section discusses how we added pre- and postconditions, and in-code
specifications such as asserts and loop invariants to Snap!. In addition, to
have a sufficiently expressive property specification language, we also
propose an extension of the expression constructs.
### III-A Visual Pre- and Postconditions
To specify pre- and postconditions for a BYOB script, we provide a variation
of the initial hat block with a slot for a precondition at the start of the
block, and a slot for a postcondition at the end of the block (Fig. 2).
This shape is inspired by the c-shaped style of other Snap! blocks, such as
blocks for loops. The main advantage is that it visualises at which points in
the execution, the pre- and the postconditions are expected to hold. In
addition, it also graphically identifies which code is actually verified.
Moreover, the shapes are already familiar to the Snap! programmer. If the
slots are not filled, the default pre- and postcondition true is used. Notice
that the pre- and postcondition slots consist of multiple boolean-argument
slots, and we define the property to be the conjunction of the evaluation of
each of these slots. This is similar to how Snap! extends a list or adds
arguments to the header of a BYOB.
Figure 2: Hat block extended with contracts.
### III-B Visual Assertions and Loop Invariants
For static verification, pre- and postconditions are often not sufficient, and
we need additional in-code specifications to guide the prover, such as
assertions, which specify properties that should hold at a particular point in
the program, and loop invariants. Moreover, assertions can also be convenient
for run-time assertion checking to make it explicit that a property holds at a
particular point in the program.
#### Visual Assertions
To specify assertions, both the property specified and the location within the
code are relevant. To allow the specification of assertions at arbitrary
places in a script, we define a special assertion block similar to all other
control blocks.
#### Visual Loop Invariants
Loop invariants are necessary for static verification [20]. A loop invariant
should hold at the beginning and end of every loop iteration. To account for
this, we provide a (multi-argument boolean) slot to specify the loop invariant
in the traditional Snap! c-shaped loop block. This slot is located just after
the header where the loop conditions are defined. In addition, the c-shaped
loop block repeats the word _invariant_ at the bottom of the block (see Fig.
3) to visually indicate that the invariant is checked after each iteration.
Figure 3: Visual loop invariants.
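As an illustration of this semantics, the following Python sketch (ours, not
Snap! code) checks an invariant before the first iteration and again after
every iteration of the loop.

```python
# Sketch of how a loop invariant is checked dynamically: the invariant
# must hold on loop entry and after each iteration.
def sum_first_n(n):
    i, total = 0, 0
    invariant = lambda: total == i * (i - 1) // 2  # sum of 0..i-1
    assert invariant()                  # holds on loop entry
    while i < n:
        total += i
        i += 1
        assert invariant()              # re-checked after each iteration
    return total

assert sum_first_n(5) == 10             # 0 + 1 + 2 + 3 + 4
```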
### III-C Visual Expressions
In addition, we have introduced some specification-only keywords, as commonly
found in Design-by-Contract languages.
* •
An _old_ expression is used in postconditions to indicate that a
variable/expression should be evaluated in the pre-state of the function. To
support this, we introduced an operator block with a slot for a variable
name.
* •
A _result_ expression refers to the return value of a function inside its
postcondition. We support this by introducing a constant operator that allows
one to specify a property about the result value of a reporter BYOB.
We also introduce syntax to ease the definition of complex Boolean
expressions by means of additional logical operator blocks, as well as syntax
to write more advanced Boolean expressions, introducing support for
quantified expressions (see Fig. 4).
Figure 4: A global quantification expression block.
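The following Python sketch (ours; the helper snapshot is hypothetical)
illustrates the intended semantics of _old_, _result_, and a universally
quantified postcondition.

```python
# Sketch of the specification-only expressions: old(...) snapshots the
# pre-state, 'result' is the return value, and the quantifier ranges
# over all list indices.
import copy

def increment_all(lst):
    old_lst = copy.deepcopy(lst)     # old(lst): evaluated in the pre-state
    for k in range(len(lst)):
        lst[k] += 1
    result = lst
    # postcondition: forall k. result[k] == old(lst)[k] + 1
    assert all(result[k] == old_lst[k] + 1 for k in range(len(result)))
    return result

print(increment_all([1, 2, 3]))      # [2, 3, 4]
```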
## IV Graphical approach to verification result reporting
Another important point to consider is how to report on the outcome of the
verification: (1) presenting the verdict of a passed verification, and (2) in
case of failure, giving a concrete and understandable explanation for the
failure. The latter is especially important in our case, as we are using the
technique with inexperienced users.
In order to signal a contract violation, or any assertion invalidated during
dynamic verification, we use Snap!’s pop-up notification windows. These
windows have the advantage that a failing block can be printed inside them
even when the failing script is not currently visible to the user. This
allows us to be very precise about the error, even when the BYOB body is not
currently visible.
In order to signal errors while compiling to Boogie, such as the use of
dynamic typing or nested lists in the Snap! BYOB code, we use Snap!’s speech
bubbles, which can emerge at specific points in the script while describing
the cause of failure. This has the advantage that the failing block can
easily be singled out by the location of the bubble, while the cause of
failure is described by the text inside the bubble. We find this option less
invasive than a pop-up window but still as precise, and we can be sure that
the blocks involved will be visible, since static verification is triggered
from the BYOB editor window (see Fig. 5). Notice that we currently do not
report the results of static verification within Snap!, since our extension
only returns compiled Boogie code, which has to be verified with Boogie
separately.
Figure 5: Static verification compilation notification.
## V Tool support
We have developed our ideas into a prototypal extension to Snap! which can be
found at https://git.snt.utwente.nl/montire/verifiedsnap/. This repository
also contains a set of running examples to showcase the new support for
verification. These are available in the _lessons_ folder under the root
directory along with an exercise sheet named _exercises.pdf_. The extension
uses the same technology as the original Snap! and can be run by simply
opening the _snap.html_ file in most common web browsers that support
JavaScript.
Our extension supports both dynamic and static verification of BYOB blocks.
Dynamic verification is automatically triggered when executing BYOB blocks in
the usual way. For static verification, a dedicated button located at the top
right corner of the BYOB editor window allows to trigger the compilation of
the BYOB code into an intended equivalent Boogie code. The compiled code can
be then downloaded and verified with Boogie. Boogie can be run locally or on
the cloud at https://rise4fun.com/Boogie/. _Dynamic verification_ has been
fully integrated into the normal execution flow of a Snap! program, and thus
there is no real restrictions on the characteristics of the BYOB that can be
dynamically verified. For _Static verification_ , we have restricted data
types to be Integers, Booleans and List of Integers. Moreover, we do not
support dynamic typing of variables. Finally, we only focus on compiling an
interesting subset of Snap! blocks for the sake of teaching Design-by-
Contract.
## VI Conclusions
This paper presented a prototypal program verification extension to Snap!. The
extension is intended to support the teaching of Design-by-Contract in the
later years of high school. We paid considerable attention to the didactic
aspects of our tool: the look and feel of the extension should remain
familiar to Snap! users, the syntax and structure of the new blocks should
give a clear intuition about their semantics, and the error reporting should
be precise and expressive.
Our extension allows one to analyse BYOB blocks both by runtime assertion
checking and by static verification. Runtime assertion checking is fully
integrated into Snap! and there is no limitation on the kind of blocks that
can be analysed. Static verification compiles the Snap! code into equivalent
Boogie code, and the verification needs to be run outside of Snap!. Moreover,
we place some restrictions on the kind of BYOB blocks we can compile, in
order to keep the complexity of the prototype low. As future work we would
like to lift these restrictions as much as possible, by integrating the
remaining Snap! blocks into the compilation and by allowing other data types
to be used. We would also like to integrate the verification results into
Snap!, translating Boogie messages back to the Snap! world, to help students
interpret them.
We would like to carry out an empirical study on our proposed approach. This
will require the development of a concrete study plan and its evaluation in a
Dutch classroom.
Computer science curricula that use block programming are widely and freely
available [21, 22, 23, 24, 25]. Nevertheless, they rarely include topics
around the design and verification of code. The words ‘test’ or ‘testing’ are
also rare in these curricula and, where mentioned, they are not sufficiently
motivated. The drawbacks of teaching coding with blocks without paying
attention to design or correctness have already been analysed [26, 27]. We
have not found any work on teaching these concepts in schools, nor
implementations of block programming that support teaching them.
Marieke Huisman is a professor in Software Reliability at the University of
Twente. She obtained her PhD in 2001 from the Radboud University Nijmegen.
Afterwards she worked at INRIA Sophia Antipolis, and since 2008 at the
University of Twente. Her research interests are in the verification of
concurrent software, as implemented in the VerCors program verifier. She is
in particular interested in making verification usable in a practical
setting, and she works for example on annotation generation and support for
different programming languages.
Raúl E. Monti received his PhD in 2018 from the Universidad Nacional de
Córdoba. He is currently a postdoc at the University of Twente. His research
interests involve the development and practical application of formal
foundations and tools for the analysis and verification of software and
hardware systems by means of model checking and deductive verification. His
work involves interacting with industry to apply his research in the
verification of industrial (embedded) systems and software.
## References
* [1] S. De Gouw, J. Rot, F. De Boer, R. Bubel, and R. Hähnle, “OpenJDK’s java.utils.collection.sort() is broken: The good, the bad and the worst case,” in _Proc. 27th Intl. Conf. on Computer Aided Verification (CAV), San Francisco_, ser. LNCS, D. Kroening and C. Pasareanu, Eds., vol. 9206. Springer, Jul. 2015, pp. 273–289.
* [2] W. Oortwijn, M. Huisman, S. J. Joosten, and J. van de Pol, “Automated verification of parallel nested DFS,” in _International Conference on Tools and Algorithms for the Construction and Analysis of Systems_. Springer, 2020, pp. 247–265.
* [3] M. Safari, W. Oortwijn, S. Joosten, and M. Huisman, “Formal verification of parallel prefix sum,” in _NASA Formal Methods_ , R. Lee, S. Jha, and A. Mavridou, Eds. Cham: Springer International Publishing, 2020, pp. 170–186. [Online]. Available: https://doi.org/10.1007/978-3-030-55754-6_10
* [4] B. Meyer, “Applying ‘design by contract’,” _Computer_, vol. 25, no. 10, pp. 40–51, 1992. [Online]. Available: https://doi.org/10.1109/2.161279
* [5] B. Harvey, D. D. Garcia, T. Barnes, N. Titterton, D. Armendariz, L. Segars, E. Lemon, S. Morris, and J. Paley, “Snap!(build your own blocks),” in _Proceeding of the 44th ACM technical symposium on Computer science education_ , 2013, pp. 759–759.
* [6] D. Garcia, B. Harvey, and T. Barnes, “The beauty and joy of computing,” _Inroads_ , vol. 6, no. 4, pp. 71–79, 2015. [Online]. Available: https://doi.org/10.1145/2835184
* [7] G. T. Leavens, A. L. Baker, and C. Ruby, “JML: A notation for detailed design,” in _Behavioral Specifications of Businesses and Systems_ , H. Kilov, B. Rumpe, and I. Simmonds, Eds. Boston, MA: Springer US, 1999, pp. 175–188.
* [8] Y. Cheon, “A runtime assertion checker for the Java Modeling Language,” Ph.D. dissertation, Department of Computer Science, Iowa State University, Ames, 2003, technical Report 03-09.
* [9] K. R. M. Leino, “Towards reliable modular programs,” 1995.
* [10] M. Barnett, B. E. Chang, R. DeLine, B. Jacobs, and K. R. M. Leino, “Boogie: A modular reusable verifier for object-oriented programs,” in _Formal Methods for Components and Objects, 4th International Symposium, FMCO 2005, Amsterdam, The Netherlands, November 1-4, 2005, Revised Lectures_ , ser. Lecture Notes in Computer Science, F. S. de Boer, M. M. Bonsangue, S. Graf, and W. P. de Roever, Eds., vol. 4111. Springer, 2005, pp. 364–387. [Online]. Available: https://doi.org/10.1007/11804192_17
* [11] B. Harvey and J. Mönig, “Snap! reference manual,” http://snap.berkeley.edu/SnapManual.pdf, 2017.
* [12] M. Resnick, J. Maloney, A. Monroy-Hernández, N. Rusk, E. Eastmond, K. Brennan, A. Millner, E. Rosenbaum, J. S. Silver, B. Silverman, and Y. B. Kafai, “Scratch: programming for all,” _Commun. ACM_ , vol. 52, no. 11, pp. 60–67, 2009. [Online]. Available: https://doi.org/10.1145/1592761.1592779
* [13] B. Meyer, J.-M. Nerson, and M. Matsuo, “Eiffel: object-oriented design for software engineering,” in _European Software Engineering Conference_. Springer, 1987, pp. 221–229.
* [14] B. Meyer, “Eiffel: A language and environment for software engineering,” _Journal of Systems and Software_ , vol. 8, no. 3, pp. 199–246, 1988.
* [15] G. T. Leavens, Y. Cheon, C. Clifton, C. Ruby, and D. R. Cok, “How the design of JML accommodates both runtime assertion checking and formal verification,” _Science of Computer Programming_, vol. 55, no. 1-3, pp. 185–208, 2005.
* [16] C. Hoare, “An axiomatic basis for computer programming,” _Communications of the ACM_ , vol. 12, no. 10, pp. 576–580, 1969.
* [17] E. Dijkstra, _A Discipline of Programming_. Prentice-Hall, 1976.
* [18] B. Meyer, _Eiffel: The Language_. Prentice-Hall, 1991. [Online]. Available: http://www.eiffel.com/doc/#etl
* [19] M. Barnett, K. R. M. Leino, and W. Schulte, “The Spec# programming system: An overview,” in _Construction and Analysis of Safe, Secure and Interoperable Smart Devices: Proceedings of the International Workshop CASSIS 2004_ , ser. LNCS, G. Barthe, L. Burdy, M. Huisman, J.-L. Lanet, and T. Muntean, Eds., vol. 3362. Springer, 2005, pp. 151–171.
* [20] T. Türk, “Local reasoning about while-loops,” in _VSTTE 2010. Workshop Proceedings_ , R. Joshi, T. Margaria, P. Müller, D. Naumann, and H. Yang, Eds. ETH Zürich, 2010, pp. 29–39.
* [21] “The Beauty and Joy of Computing. An AP CS Principles Course,” https://bjc.edc.org/. Accessed October 2020.
* [22] “CS First,” https://csfirst.withgoogle.com/s/en/home. Accessed October 2020.
* [23] “The Creative Computing Curriculum,” http://creativecomputing.gse.harvard.edu/guide/. Accessed October 2020.
* [24] “An Introduction to Programming. A Pencil Code Teacher’s Manual,” https://manual.pencilcode.net/. Accessed October 2020.
* [25] P. Factorovich and F. Sawady, _Actividades para aprender a Program. AR: Segundo ciclo de la educación primaria y primero de la secundaria_. Miller Ed Buenos Aires, 2015.
* [26] O. Meerbaum-Salant, M. Armoni, and M. Ben-Ari, “Habits of programming in scratch,” in _Proceedings of the 16th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education, ITiCSE 2011, Darmstadt, Germany, June 27-29, 2011_ , G. Rößling, T. L. Naps, and C. Spannagel, Eds. ACM, 2011, pp. 168–172. [Online]. Available: https://doi.org/10.1145/1999747.1999796
* [27] E. Aivaloglou and F. Hermans, “How kids code and how we know: An exploratory study on the scratch repository,” in _Proceedings of the 2016 ACM Conference on International Computing Education Research_ , 2016, pp. 53–61.
# A low-loss ferrite circulator as a tunable chiral quantum system
Ying-Ying Wang Department of Physics, University of Massachusetts-Amherst,
Amherst, MA, USA Sean van Geldern Department of Physics, University of
Massachusetts-Amherst, Amherst, MA, USA Thomas Connolly Present address:
Department of Applied Physics, Yale University, New Haven, CT, USA Department
of Physics, University of Massachusetts-Amherst, Amherst, MA, USA Yu-Xin Wang
Pritzker School of Molecular Engineering, University of Chicago, Chicago, IL,
USA Alexander Shilcusky Department of Physics, University of Massachusetts-
Amherst, Amherst, MA, USA Alexander McDonald Pritzker School of Molecular
Engineering, University of Chicago, Chicago, IL, USA Department of Physics,
University of Chicago, Chicago, IL, USA Aashish A. Clerk Pritzker School of
Molecular Engineering, University of Chicago, Chicago, IL, USA Chen Wang
<EMAIL_ADDRESS>Department of Physics, University of Massachusetts-Amherst,
Amherst, MA, USA
###### Abstract
Ferrite microwave circulators allow one to control the directional flow of
microwave signals and noise, and thus play a crucial role in present-day
superconducting quantum technology. They are typically viewed as black boxes,
with their internal structure neither specified nor used as a quantum
resource. In this work, we demonstrate a low-loss waveguide circulator
constructed with single-crystalline yttrium iron garnet (YIG) in a 3D cavity,
and analyze
it as a multi-mode hybrid quantum system with coupled photonic and magnonic
excitations. We show the coherent coupling of its chiral internal modes with
integrated superconducting niobium cavities, and how this enables tunable non-
reciprocal interactions between the intra-cavity photons. We also probe
experimentally the effective non-Hermitian dynamics of this system and its
effective non-reciprocal eigenmodes. The device platform provides a test bed
for implementing non-reciprocal interactions in open-system circuit QED.
## I Introduction
Microwave circulators, typically composed of a transmission line Y-junction
with ferrite materials Kord _et al._ (2020), are ubiquitous in
superconducting circuit QED experiments Devoret and Schoelkopf (2013). They
provide a crucial link in the readout chain of superconducting quantum
processors, by directing the signal traffic while protecting the qubits and
resonators from thermal noise Krantz _et al._ (2019). They also enable the
interactions between distinct quantum circuit modules to be non-reciprocal
Kurpiers _et al._ (2018); Axline _et al._ (2018), a feature which is
important for eliminating long-distance cross-talk in modular quantum
computation architectures. Despite their importance, microwave circulators are
generally treated as broadband black-box devices in experiments. Formulating a
more microscopic quantum description is often challenging, as their internal
modes involving the magnetic spin excitations (magnons) are generally too
lossy and complex to be analyzed using canonical circuit quantization Vool and
Devoret (2017).
On the other hand, there has been growing interest in studying and
manipulating magnon excitations of ferromagnetic/ferrimagnetic materials in
the quantum regime Lachance-Quirion _et al._ (2019); Awschalom _et al._
(2021). In particular, the ferromagnetic resonance (FMR) mode of yttrium iron
garnet (YIG), a ferrimagnetic insulator with usage in commercial circulators,
has shown sufficiently high quality factor and coupling cooperativity with
microwave cavities to function as a quantum oscillator mode in strong-coupling
circuit QED Huebl _et al._ (2013); Tabuchi _et al._ (2014); Zhang _et al._
(2014). Notably, coherent coupling of magnons with a superconducting qubit
Tabuchi _et al._ (2015) and single-shot detection of a single magnon
Lachance-Quirion _et al._ (2020) have been demonstrated using a millimeter-
sized single-crystalline YIG sphere in a 3D cavity. Furthermore, there is a
plausible pathway towards planar superconducting-magnonic devices Hou and Liu
(2019); Golovchanskiy _et al._ (2021) to connect circuit QED with spintronics
technologies by advancing fabrication techniques of low-damping YIG films
Heyroth _et al._ (2019).
It would be interesting to harness these recent advances in the study of
quantum magnonics to revisit the design of microwave circulators, potentially
leading to new kinds of non-reciprocal devices in circuit QED. Our work
describes a first step in this direction: we demonstrate a tunable
non-reciprocal device based on a waveguide circulator loaded with
single-crystalline YIG, which explicitly makes use of well-characterized hybrid
polariton modes. Such modes are the normal modes of coupled magnon-photon
systems Huebl _et al._ (2013); Tabuchi _et al._ (2014); Zhang _et al._
(2014); Boventer _et al._ (2018); Zhang _et al._ (2017), and have an
intrinsic chirality that is set by the magnetic field Anderson _et al._
(2016); Owens _et al._ (2018); Zhang _et al._ (2020). While our device
follows the same basic working principles underpinning textbook circulators
Kord _et al._ (2020); Fay and Comstock (1965), detailed understanding of the
internal modes allows us to incorporate the physical source of non-reciprocity
in the full description of a larger system including two external
superconducting cavities, using a non-Hermitian effective Hamiltonian.
While our device can be configured to operate as a traditional circulator for
its non-reciprocal transmission of travelling waves, the main focus of our
study is to use the device for mediating tunable non-reciprocal interaction
between localized long-lived quantum modes. Such non-reciprocal mode-mode
couplings result in distinct signatures in the eigenvalues and eigenvectors of
the non-Hermitian system Hamiltonian, which is relevant to the more general
study of non-Hermitian dynamics in contexts ranging from classical optics to
quantum condensed matter. Anomalous properties of the eigenvalues and
eigenvectors of a non-Hermitian Hamiltonian have given rise to a number of
striking phenomena such as the existence of exceptional points Heiss (2012);
Özdemir _et al._ (2019) and the non-Hermitian skin effect Hatano and Nelson
(1997); Yao and Wang (2018); McDonald _et al._ (2018), but direct
experimental access to the underlying eigenmodes is often difficult. In this
study, we provide comprehensive characterization of the eigenmode structure,
which is a step towards effective Hamiltonian engineering of non-reciprocal
non-Hermitian systems.
The most tantalizing uses of non-reciprocity in quantum systems (such as
entanglement stabilization using directional interactions in chiral quantum
optics setups Stannigel _et al._ (2012); Lodahl _et al._ (2017)) require
extremely high quality devices. In particular, they must approach the pristine
limit where undesirable internal loss rates are negligible compared to the
non-reciprocal coupling rates. While many experiments have focused on new
avenues of achieving non-reciprocity Chapman _et al._ (2017); Lecocq _et
al._ (2017); Sliwa _et al._ (2015); Ruesink _et al._ (2016); Fang _et al._
(2017); Wang _et al._ (2019); Xu _et al._ (2019), this loss-to-coupling
ratio, which can be understood as the quantum efficiency of the non-reciprocal
interactions, has been typically limited to approximately 10% ($\sim$ 0.5 dB)
or more, which is comparable to the linear insertion loss of typical
commercial circulators as measured in modular circuit QED experiments Kurpiers
_et al._ (2018); Axline _et al._ (2018). This performance lags far behind the
quality of unitary operations between reciprocally coupled quantum components
(i.e. two-qubit gate infidelity $<1$%). Our approach provides a route for
transcending this limitation on the quantum efficiency of non-reciprocal
interactions.
The results of our study have implications in several areas: (1) In the
context of quantum magnonics, we present the first study of polariton modes
with a partially magnetized ferrite material, which features a high quality
factor and low operating field, both of which are crucial for constructing
superconducting-magnonic devices. (2) In the context of modular
superconducting quantum computing, we demonstrate the first circulator with
internal loss well below 1% of the coupling bandwidth, which would enable
high-fidelity directional quantum state transfer. (3) For the general non-
Hermitian physics, we demonstrate an experimental probe of the non-reciprocal
eigenvector composition of a non-Hermitian system. Combining these advances,
we have established an experimental platform that meets the conditions for
future study of nonlinear non-reciprocal interactions with superconducting
qubits.
## II Experimental Setup
Figure 1: Device and measurement setup. (a) A YIG cylinder (black) is placed
at the center of the intersection of three rounded-rectangular waveguides
placed 120 degrees away from each other. The light grey region is vacuum
inside an oxygen-free copper enclosure. The device can be assembled in two
different configurations: First, a drum-head shaped transition pin can be
attached at the end of each waveguide section to form an impedance matched
waveguide-to-SMA transition (IMT). Alternatively, a short weakly-coupled probe
(WCP) can be attached to each waveguide section to explore the internal modes
of the device. (b) The device is mounted to a mezzanine plate that is
thermalized to the mixing chamber of a dilution refrigerator, and is
positioned at the center of a superconducting solenoid magnet which operates
at 4K. The device is connected to three input cables (with attenuators as
marked) and two output amplifier lines (with directional couplers splitting
signals) for $S$-parameter measurements using a vector network analyzer (VNA).
Our experimental setup is shown in Fig. 1(a). Three rounded-rectangular
waveguides, each with a cross section of 21.0 mm $\times$ 4.0 mm, placed 120
degrees away from each other, intersect to form the body of the circulator. A
$\phi$-5.58 mm $\times$ 5.0 mm single-crystalline YIG cylinder is placed at
the center of the Y-junction, with external magnetic fields applied along its
height (the $z$ axis and the [111] orientation of the YIG crystal). At the end
of the three waveguide sections, we can either attach impedance-matched
waveguide-to-SMA transitions (IMT) to perform standard characterization of the
circulator (as in Section IV), or attach weakly-coupled probe pins (WCP) to
explore the internal modes of this YIG-loaded Y-shaped cavity (as in Section
III). The use of reconfigurable probes in the same waveguide package allows us
to infer the operation condition and the performance of the circulator from
the properties of the internal modes. Furthermore, the copper waveguide
sections can be replaced by superconducting niobium cavities, with details to
be described in Section V and Fig. 5. This modular substitution introduces
additional external high Q modes to the system, and understanding the
resulting Hamiltonian and the hybridized mode structure of the full system
will be a first step towards the study of pristine non-reciprocal interactions
in circuit QED.
The device package is thermalized to the mixing chamber plate ($\sim$20 mK) of
a Bluefors LD-250 dilution refrigerator inside the $\phi$-100 mm bore of a 1 T
superconducting magnet that applies magnetic field along the $z$ axis [Fig.
1(b)]. A vector network analyzer is used to measure the complex microwave
transmission coefficients $S_{ij}$ (from Port $j$ to Port $i$, where
$i,j=1,2,3$) of the device in series with a chain of attenuators, filters and
amplifiers as in typical circuit QED experiments. A magnetic shield made of a
steel sheet is placed outside the bottom half of the refrigerator, and all
data is acquired under the persistent mode of the superconducting magnet to
minimize magnetic-field fluctuations.
## III Internal mode structure
Figure 2: Internal mode spectrum of the device. (a) VNA transmission
measurement $S_{21}$ of multi-mode photon-magnon hybrid system formed in the
waveguide circulator package with WCP. The blue (red) dashed line plots the
frequency of the clockwise (counterclockwise) mode from a simplified two-mode
model of photon-magnon avoided crossing with $g/2\pi$ = 1.3 GHz (2.1 GHz) to
compare with an observed spectral line. (b) The right panel shows a finer
sweep of $S_{21}$ in the low-field regime. The mode frequencies differ
slightly from (a) since the data was acquired after some modifications to the
device packaging (a piece of Teflon spacer at the top of the YIG cylinder was
removed). The left panel shows the electromagnetic mode structures of the
eigenmode solutions from our HFSS simulation for the WCP with good frequency
agreement to the experimental data (see Fig. 8 in Appendix). The color scale
from red to blue represents electric field strength from high to low in log
scale. The pair of modes around 11 GHz are connected to the circulating modes
of the loaded circulator and (c) shows their linewidths.
We begin by discussing the internal mode structure of the device, as probed by
$S_{21}$ as a function of applied magnetic field $B$ when the device is
installed with WCP [Fig. 2(a)]. A series of electromagnetic modes (relatively
field-independent) are observed to undergo large avoided crossings with a
cluster of magnon modes of the YIG crystal, forming photon-magnon polariton
modes. The magnon mode most strongly coupled to photons is known to correspond
to near-uniform precession of YIG spins, or the Kittel mode of FMR, whose
frequency increases linearly with magnetic field:
$\omega_{m}=\gamma[B+\mu_{0}(N_{x,y}-N_{z})M_{s}]\approx\gamma B$, as marked
by the dashed line in Fig. 2(a). Here $\gamma$ is the gyromagnetic ratio, and
the (volume-averaged) demagnetizing factors $N_{x,y,z}$ in magnetic saturated
state are very close to $1/3$ for the aspect ratio of our YIG cylinder Chen
_et al._ (1991). These avoided crossings are similar to previous experiments
showing strong photon-magnon coupling Tabuchi _et al._ (2014); Zhang _et
al._ (2014), but due to the much larger size of the YIG in our experiment, a
large cluster of higher-order magnetostatic modes, most of which have slightly
higher frequency than the Kittel mode Fletcher _et al._ (1960); Klingler _et
al._ (2017), also coherently interacts with the microwave photons,
contributing to the complex transmission spectra in the vicinity of the
crossings.
Nevertheless, to have a coarse estimate of the photon-magnon coupling
strength, it is convenient to model each observed spectral line far away from
the crossing region as a bare electromagnetic mode with frequency
$\omega_{c}/2\pi$ hybridized with a single combined magnon mode. The implied
coupling strengths $g/2\pi$ (in the cavity QED convention) are about 1.2 GHz
and 2.1 GHz for the two modes of particular interest to this study [blue and
red in Fig. 2(a)], placing the mode hybridization in the ultrastrong coupling
regime (see e.g. Marković _et al._ (2018)) with
$g/(\omega_{c}+\omega_{m})\sim 10\%$. Even at $B=0$, with a photon-magnon
detuning of $\Delta=\omega_{c}-\omega_{m}\approx 2\pi\cdot 10$ GHz, the
participation of magnon excitations in the photon-branch of the polariton
modes remains quite substantial.
Using finite-element simulations (Ansys HFSS, Appendix A), we identify that
the five polariton modes in the frequency range of 8-12 GHz at $B=0$ include
two nearly-degenerate mode pairs with two-fold symmetry and another mode with
three-fold symmetry. Electric field distributions of each of the modes are
illustrated in Fig. 2(b). Each degenerate mode pair can be understood using a
basis of standing-wave modes polarized along the $x$ or $y$ direction. The
application of a magnetic field lifts this $x$-$y$ degeneracy, as the mode
pair forms clockwise and counterclockwise rotating eigenmodes with a frequency
splitting Owens _et al._ (2018); Zhang _et al._ (2020); Anderson _et al._
(2016).
Prior use of these chiral polariton mode pairs have been in the magnetically
saturated regime Anderson _et al._ (2016); Owens _et al._ (2018); Zhang _et
al._ (2020). Here we focus on the low-field regime ($|B|<0.05$ T) where the
approximately linear increase of frequency splitting between the mode pair
reflects increasing magnetization of YIG under increasing applied magnetic
field. After implementing demagnetization training cycles to suppress a
relatively small hysteretic effect throughout our experiments, we expect an
approximately linear magnetization curve ($M$-$H$) for YIG until it approaches
magnetic saturation. In the limit of high permeability $\mu\gg\mu_{0}$ (with
$\mu_{0}$ being the vacuum permeability), we have $M=B/{\mu_{0}N_{z}}$ (note
that $B$ is the applied magnetic field strength) and $N_{z}\approx 0.285$ is
the $z$-direction demagnetizing factor when the YIG is significantly below
magnetic saturation Chen _et al._ (1991). Saturation magnetization
$M_{s}=2440$ Oe Solt (1962) of YIG is approached on the scale of
$B\sim\mu_{0}N_{z}M_{s}\approx 70$ mT, which agrees with the changing
curvature of the mode-splitting spectra.
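As a quick numerical sanity check of this scale (a sketch using only the
values quoted above):

```python
# Saturation-field scale B ~ mu0 * Nz * Ms for this YIG cylinder.
mu0_Ms = 0.244            # tesla, from Ms = 2440 Oe (2440 G) quoted above
Nz = 0.285                # z-direction demagnetizing factor below saturation
print(mu0_Ms * Nz * 1e3)  # ~= 69.5 mT, i.e. the ~70 mT scale in the text
```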
On the other hand, in the completely demagnetized state ($M=0$) at zero field,
the system is expected to satisfy macroscopic time-reversal symmetry. As
supported by numerical simulations, the $x$-$y$ mode pairs should be in
principle exactly degenerate since both the Y-junction geometry and the [111]
YIG crystal has 3-fold rotational symmetry around the $z$ axis. However,
appreciable zero-field splitting is observed experimentally. We attribute this
splitting to some anisotropy in the $x$-$y$ plane breaking this symmetry and
allowing a preferred magnetization axis of the YIG at zero field. Possible
explanations for this anisotropy are small visible damage to our YIG crystal
on one edge, or imperfections in eccentricity and alignment. If the
magnetic domains of unsaturated YIG preferentially align with one in-plane
axis compared to its orthogonal axis within the $x$-$y$ plane, this anisotropy
would result in a relative frequency shift between the standing-wave modes
along the in-plane easy and hard axes. This anisotropy-induced frequency shift
$\pm\beta$ for the $x$ and $y$ modes can be modeled in numerical simulations
employing a permeability tensor of unsaturated ferromagnets Schlömann (1970);
Green and Sandy (1974) with certain anisotropic assumption, which can
plausibly explain the data (Appendix A). As $B$ increases, we expect $\beta$
to decay towards 0 when the magnetic domains are increasingly aligned towards
the $z$ direction, thus making any $x$-$y$ plane energetic preference of
negligible effect. We model this decay with a thermodynamic toy model
(Appendix B) whose details do not affect the conclusions of this study.
For the rest of this article, we will focus on the pair of polariton modes
near 11 GHz in Fig. 2(b), and refer to them as “the circulator modes” for
reasons that will become apparent. We can model their frequencies in the
partially magnetized regime ($|B|<50$ mT) using a phenomenological model
accounting for the degeneracy-lifting anisotropy and the field-dependent
magnetization of YIG. Let the zero-field frequencies of the $x$ and $y$ modes
be $\omega_{x}=\omega_{y}$ if the device had perfect 3-fold symmetry, $\beta$
and $\theta/2$ be the magnitude of anisotropy caused degeneracy-lifting and
the direction of the in-plane anisotropy axis (relative to the $x$ axis), and
off-diagonal imaginary coupling term $\pm ikB$ be the magnetic field induced
degeneracy-lifting, linearly increasing with a real coefficient $k$. We use
the following Hamiltonian to characterize the pair of circulator modes in the
basis of $x$ and $y$ mode amplitudes:
$H/\hbar=\begin{pmatrix}\omega_{x}+\beta\cos{\theta}+mB^{2}&\beta\sin{\theta}+ikB\\\
\beta\sin{\theta}–ikB&\omega_{y}-\beta\cos{\theta}+mB^{2}\end{pmatrix}$ (1)
This effective model of the polariton modes has absorbed the magnon
contributions in the regime where they have been adiabatically eliminated. The
formation of clockwise and counterclockwise eigenmodes is due to magnon-
mediated interactions modeled by $\pm ikB$. The level repulsion from the far-
detuned magnon modes is approximated by a small quadratic shift in frequency
$mB^{2}$. The quadratic dependence was empirically chosen because the sum of
the mode frequencies over field displayed a roughly quadratic relationship
with $B$ over the plotted field range. By fitting the mode spectrum in Fig.
2(b), we obtain $\omega_{x}/2\pi=\omega_{y}/2\pi=11.054$ GHz, $k/2\pi=9.82$
GHz/T, $m/2\pi=50$ GHz/T$^{2}$, and $\beta/2\pi=139$ MHz.
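As an illustration, the following Python sketch diagonalizes Eq. (1) with
these fitted parameters to obtain the field-dependent mode splitting; for
simplicity it holds $\beta$ constant and sets $\theta=0$, whereas in the
experiment $\beta$ decays with $|B|$ and $\theta$ is finite.

```python
# Sketch: eigenfrequencies of the two-mode Hamiltonian of Eq. (1) vs B,
# using the fitted parameters quoted above (beta decay neglected).
import numpy as np

w_xy, k, m, beta = 11.054, 9.82, 50.0, 0.139  # GHz, GHz/T, GHz/T^2, GHz
theta = 0.0                                   # anisotropy axis (assumed)

def eigenfreqs(B):
    H = np.array([[w_xy + beta*np.cos(theta) + m*B**2,
                   beta*np.sin(theta) + 1j*k*B],
                  [beta*np.sin(theta) - 1j*k*B,
                   w_xy - beta*np.cos(theta) + m*B**2]])
    return np.linalg.eigvalsh(H)              # H is Hermitian; ascending

for B in (0.0, 0.01, 0.02):                   # tesla
    lo, hi = eigenfreqs(B)
    print(f"B = {B*1e3:4.0f} mT: splitting = {(hi - lo)*1e3:6.1f} MHz")
```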
Figure 3: Illustration of the circulator working principle and low-temperature
characterization of the non-reciprocity. In the circulator package with IMT,
the frequency splitting of clockwise and counterclockwise rotating modes as
shown in (a) can be tuned such that the phase of the modes are $\pi/6$ and
$-\pi/6$ as shown in (b). This then produces a node at the upper port, thereby
preventing any signal from leaving there at all times where $\omega t=0$ and
$\omega t=\pi/4$ are shown pictorially in (c). (d, e) Measured microwave
transmission (d) $|S_{12}|$ and (e) $|S_{21}|$ spectra as a function of
magnetic field B. (f-i) The isolation performance, (f)
$\mathcal{I}_{12}=|S_{12}/S_{21}|$,(h) $\mathcal{I}_{21}=|S_{21}/S_{12}|$, (g)
$\mathcal{I}_{23}=|S_{23}/S_{32}|$, (i) $\mathcal{I}_{13}=|S_{13}/S_{31}|$.
$S_{21}$ is obtained by measuring the $S_{12}$ at –$B$, which provides a self-
calibrated way to determine the isolation of the circulator.
It is well-known that the FMR modes of partially magnetized ferrimagnetic
insulators, where the magnetic domains are not aligned in equilibrium, have
large damping. Therefore, one may expect broad linewidths for photon-magnon
polariton modes below magnetic saturation. Indeed, we have observed linewidths
exceeding 100 MHz for another polariton mode at 5 GHz at $B<$ 50 mT (not
shown). Surprisingly, the polariton modes at higher frequency display narrow
linewidths, $\kappa_{i}\approx 2$ MHz for the pair of circulator modes [Fig.
2(c)], which corresponds to quality factors on par with some circuit QED
elements such as the readout resonators. The narrow internal linewidth of the
circulator modes is crucial for constructing a low-loss circulator and
eventually achieving high quantum efficiency of non-reciprocal interactions in
circuit QED. It is primarily aided by the use of single crystalline YIG and
the relatively low magnon participation in the circulator modes compared to
commercial circulators. The observed $\kappa_{i}$ may be limited by either the
spin relaxation in YIG or the Ohmic loss in copper. The former remains to be
investigated in this partially magnetized regime, and the latter may be
further reduced through better surface treatment or the use of superconducting
materials in low-field regions of the waveguide package.
## IV Circulator characterization
The device acts as a circulator when each waveguide section is terminated
with the IMT rather than the WCP, with an applied magnetic field in the
$\hat{z}$ direction. In this configuration, the linewidths of all internal
modes are substantially broadened, forming a transmission continuum in the
measurement, as shown in Fig. 3(d,e) for $|S_{12}|$ and $|S_{21}|$.
Nevertheless, the operating condition of the circulator can be conceptually
understood as having a pair of counter-propagating internal modes whose
magnetic-field-induced splitting ($\delta$) satisfies the relationship
$\delta=2\kappa_{c}/\sqrt{3}$ with respect to their half linewidths
($\kappa_{c}$) Fay and Comstock (1965). As illustrated
in Fig. 3(a-c), when driven at a frequency in the middle of the two
resonances, the two circulator modes are excited with equal amplitude and a
phase shift of $\pm\frac{\pi}{6}$ relative to the drive. The resultant
standing wave pattern forms a node at the isolation port of the circulator.
This condition can be satisfied by choosing the correct combination of
frequency and magnetic field.
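This phase condition can be verified numerically; the sketch below models
each circulator mode as a single-pole (Lorentzian) response with
half-linewidth $\kappa_{c}$ and confirms that a drive midway between modes
split by $\delta=2\kappa_{c}/\sqrt{3}$ acquires phases of exactly $\pm\pi/6$.

```python
# Sketch: verify the +-pi/6 phase condition at the circulator working point.
import numpy as np

kappa_c = 1.0                                # half linewidth (arbitrary units)
delta = 2 * kappa_c / np.sqrt(3)             # mode splitting at working point

resp_cw  = 1 / (kappa_c - 1j * delta / 2)    # mode detuned by -delta/2
resp_ccw = 1 / (kappa_c + 1j * delta / 2)    # mode detuned by +delta/2

print(np.angle(resp_cw) / np.pi,             # +1/6
      np.angle(resp_ccw) / np.pi)            # -1/6
```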
We characterize the non-reciprocity of the circulator by the isolation ratio
$\mathcal{I}_{12}=|S_{12}/S_{21}|$, which may be computed from Fig. 3(d,e).
However, since $S_{12}$ and $S_{21}$ are measured through different cables and
amplifier chains [Fig. 1(b)], it is challenging to calibrate their absolute
values precisely. A much better self-calibrated technique to extract the
isolation ratio in our system is to use the Onsager-Casimir relation Casimir
(1945), $S_{21}(B)=S_{12}(-B)$, resulting from microscopic time-reversal
symmetry. Therefore, we use $\mathcal{I}_{12}=|S_{12}(B)/S_{12}(-B)|$ to
determine the isolation ratio of the circulator as shown in Fig. 3(f), with
the (field-independent) contribution from same transmission chain cancelled
out. The result indicates the circulator working condition is met for the pair
of counter-propagating modes at $\sim$11.2 GHz with external field $\sim$0.022
T. We see $\geq$20 dB of isolation over a bandwidth of about 250 MHz, with
maximum isolation of at least 35 dB.
The same analysis on $S_{21}$ data yields the same isolation property [Fig.
3(h)] as expected. Similarly, $\mathcal{I}_{23}$ and $\mathcal{I}_{13}$ are
measured as in Fig. 3(g) and (i), each showing a slightly different working
field and frequency (possibly due to imperfections of the device geometry) but
similar isolation magnitude and bandwidth. These data are measured at an
estimated circulating photon number on the order of 10’s, but when we lower
the power to below single photon level, the isolation property does not show
notable changes.
An important motivation of our work is to ultimately implement pristine non-
reciprocal interactions between superconducting qubits or cavities. It is
crucial to minimize the ratio between the undesirable internal dissipation
($\kappa_{i}$) and the external bath coupling ($\kappa_{c}$) that enables non-
reciprocity. In the case of a circulator, this ratio sets the limit for the
circulator’s microwave insertion loss $\mathcal{L}_{21}$ Kord _et al._
(2020); Fay and Comstock (1965):
$\mathcal{L}_{21}=1-|S_{21}|^{2}\geq 1-|S_{21}|^{2}-|S_{11}|^{2}-|S_{31}|^{2}\approx\frac{\kappa_{i}}{\kappa_{c}}$ (2)
Figure 4: Characterization of the internal loss of the circulator at room
temperature. (a) Linewidths of the pair of circulator modes measured at room
temperature. (b) Transmission $S_{21}$ and reflection $S_{11}$ near the
maximum isolation regime of the circulator, measured at $B=24.8$ mT (top
panel) and the internal loss of the circulator calculated from it (bottom
panel) at room temperature.
This lower limit is obtained in principle when the circulator has perfect
impedance matching ($S_{11}=0$) and isolation ratio ($S_{31}=0$). Typical
commercial ferrite circulators used in circuit QED experiments have shown
insertion loss around 10% Kurpiers _et al._ (2018); Axline _et al._ (2018),
which is dominated by internal loss. Experimental Josephson circulators so far
have also reported insertion loss of -0.5 dB (11%) or higher Chapman _et al._
(2017); Lecocq _et al._ (2017). The lowest quoted insertion loss for a
commercially-listed waveguide circulator is -0.1 dB (or 2.2%) but that is
untested in the quantum regime. In order for the quantum efficiency of a non-
reciprocal two-qubit interaction channel to match the fidelity of state-of-
the-art two-qubit operations, the insertion loss would need to be improved to
the sub-percent level.
To the best of our knowledge, it is an open challenge to calibrate the
insertion loss of a microwave component in a dilution refrigerator with a
precision better than 1%. Even using specialized Thru-Reflect-Line calibration
components and well-characterized cryogenic switches, the resultant precision
would still be limited to about 0.1 dB (or 2.3%) Ranzani _et al._ (2013). In
order to infer the loss of our circulator at 20 mK, we measure its $S$
parameters at room temperature after a careful calibration procedure that uses
attenuators in series to suppress standing waves. We find a conservative
upper bound for the room-temperature internal loss of $\leq
1-|S_{21}|^{2}-|S_{11}|^{2}\approx 2\%$, as shown in Fig. 4(b). Assuming
$\kappa_{c}$ does not change as a function of temperature, comparing the
intrinsic linewidth of the circulator mode pair at room temperature versus 20
mK would inform the internal loss at 20 mK. The intrinsic linewidths, measured
in WCP, are 4.1 and 6.3 MHz at room temperature [Fig. 4(a)] and 1.8 and 2.2
MHz at low temperature [Fig. 2(c)], indicating that $\kappa_{c}\gtrsim 260$
MHz and $\kappa_{i}/\kappa_{c}\lesssim 0.8\%$. If we instead use the relation
of $\kappa_{c}=\sqrt{3}\delta/2$, which yields $\kappa_{c}$ in the range of
430 MHz to 550 MHz (and data in Section V would further suggest $\kappa_{c}$
at the high end of this range), or $\kappa_{i}/\kappa_{c}\approx 0.4\%$.
Further improvement of the circulator bandwidth and the coupling ratio can be
achieved by applying impedance transformation techniques to increase
$\kappa_{c}$ Helszajn (2008).
Translating this small internal loss ratio to a sub-percent insertion loss for
a circulator as a peripheral transmission-line device would further require
excellent impedance matching. However, we emphasize that this requirement is
not fundamental if the circulator is modeled as part of the quantum system
itself mediating interactions between other quantum resonance modes. Unlike
most ferrite circulators, our device operates in the regime of partial
magnetization for YIG. It only requires a moderate external magnetic field
that is significantly below the critical field of a variety of superconducting
materials. This allows for 3D integration of superconducting niobium cavities
and shielded transmon qubits for studying circuit QED with non-reciprocal
interactions. In the following section, we demonstrate direct coupling of two
external superconducting cavity modes with the circulator modes and analyze
the resultant non-reciprocal hybrid system as a whole.
## V Tuning non-reciprocity of eigenmode structure
Figure 5: Waveguide circulator-cavity integration. (a) The photo image, (b) a
schematic top-down view, and (c) a diagrammatic illustration of the effective
Hamiltonian (see Eq. (4), for clarity the $\beta$ and $mB^{2}$ terms have been
neglected in the illustration) of our integrated non-reciprocal device. It is
composed of a Cu waveguide Y-junction loaded with a YIG cylinder, two Nb
cavity segments with weakly-coupled drive ports (Port 1 and 2), and an output
port with IMT (Port 3). For each cavity, the sidewall closest to the copper
Y-junction is formed by a standalone niobium plate in the assembly [enclosed
in blue in (a)], which contains a 5 mm-diameter aperture to create an
evanescent coupling between the superconducting cavity mode and the circulator
modes. One of the cavities is loaded with a transmon qubit [marked as $\times$
in (b)] which stays unused in its ground state in this study.
We integrate superconducting cavities with the ferrite device by replacing the
rectangular waveguide extensions with superconducting 3D cavities made of
niobium [Fig. 5(a)]. Two cavities, attached at Port 1 and 2, are tuned to have
resonance frequencies close to each other, $\omega_{1}\approx\omega_{2}\sim
10.8$ GHz, both of which are within the bandwidth of the circulator. Each
cavity is coupled to the central Y-junction via a coupling aperture. As a
result, the circulator modes will mediate an interaction between these two
external cavities. Crucially, this circulator-mediated interaction can have
both coherent and dissipative aspects, and can be non-reciprocal. The degree
and the direction of non-reciprocity of the coupling can be tuned via the
external magnetic field. Note that Port 3 remains impedance-matched to a
transmission line. This is also essential: it serves as the dominant
dissipative bath that is necessary for achieving non-reciprocal inter-mode
interactions Metelmann and Clerk (2015).
To probe the hybridized mode structure of the composite system, we measure
$S_{31}$ from a weakly-coupled drive port on Cavity 1 to Port 3. The measured
amplitude of $S_{31}$ as a function of magnetic field and frequency is shown
in Fig. 6(a). There are a total of four bare oscillator modes in the vicinity
(within 0.5 GHz) of the frequency range of interest: two superconducting
cavity modes and two internal circulator modes. Since the loaded circulator
modes with very broad linewidths ($>100$ MHz) are difficult to observe in the
presence of the standing-wave background of the coaxial cables, this
spectroscopy measurement primarily reveals the eigenmodes that are localized
in the external superconducting cavities. Indeed, at $|B|>0.03$ T, the
spectrum shows two sharp resonances which we identify as the bare cavity modes
to a good approximation. At lower fields, the cavities appear to more strongly
hybridize with each other and with the lossy circulator modes, but the
spectrum can nonetheless be captured relatively well by the sum of two
Lorentzian modes $a$ and $b$:
$S_{31}=\frac{A_{a}e^{i\phi_{a}}}{{-i(\omega-\omega_{a})-\kappa_{a}/2}}+\frac{A_{b}e^{i\phi_{b}}}{{-i(\omega-\omega_{b})-\kappa_{b}/2}}$
(3)
By fitting the spectrum to Eq. (3), we can extract the linewidth
($\kappa_{i}$), frequency ($\omega_{i}$) and amplitude ($A_{i}$) of the two
Lorentzians at each magnetic field, as plotted in Fig. 6(c-e).
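A minimal sketch of such a fit (on synthetic data; parameter names and values
are ours, not those of the actual analysis code) could look as follows. Note
that only the relative phase $\phi_{a}-\phi_{b}$ is constrained by the
magnitude data.

```python
# Sketch: extract (omega_n, kappa_n, A_n) by fitting |S31| to the
# two-Lorentzian model of Eq. (3). Synthetic data; illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def s31_mag(w, Aa, pa, wa, ka, Ab, pb, wb, kb):
    s = (Aa*np.exp(1j*pa) / (-1j*(w - wa) - ka/2)
         + Ab*np.exp(1j*pb) / (-1j*(w - wb) - kb/2))
    return np.abs(s)

w = np.linspace(10.7, 10.9, 801)                        # GHz
true = (0.5, 0.3, 10.804, 0.004, 0.8, -1.0, 10.810, 0.006)
rng = np.random.default_rng(0)
data = s31_mag(w, *true) + rng.normal(0, 0.01, w.size)  # add noise

p0 = (0.4, 0.0, 10.80, 0.005, 0.9, -0.5, 10.81, 0.005)  # initial guess
popt, _ = curve_fit(s31_mag, w, data, p0=p0)
print("fitted omega_a, kappa_a:", popt[2], popt[3])
```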
The magnetic field dependence of the two prominent Lorentzians can be
connected to the eigenmode solutions of an effective Hamiltonian model of the
system. We describe the system using the following 4$\times$4 non-Hermitian
matrix, written in the basis of the amplitudes of the two cavity modes and the
two circulator modes:
$H_{\mathrm{eff}}/\hbar=\begin{pmatrix}\omega_{1}-i\frac{\kappa_{1}}{2}&0&g_{y}&g_{x}\\ 0&\omega_{2}-i\frac{\kappa_{2}}{2}&g_{y}&-g_{x}\\ g_{y}&g_{y}&\omega_{y}-\beta\cos\theta+mB^{2}-i\frac{\kappa_{3}}{2}&\beta\sin\theta-ikB\\ g_{x}&-g_{x}&\beta\sin\theta+ikB&\omega_{x}+\beta\cos\theta+mB^{2}\end{pmatrix}$ (4)
The two niobium cavities have bare frequencies $\omega_{1}$, $\omega_{2}$, and
input coupling rates of $\kappa_{1}$ and $\kappa_{2}$. The bottom right block
of Eq. (4) describes the two circulator modes, with their anisotropy
dependence and imaginary coupling due to magnon hybridization following the
same description as in Eq. (1). The zero-field frequencies of the two
circulator modes $\omega_{x}$, $\omega_{y}$ are no longer equal since the
device is no longer 3-fold symmetric. The $y$-mode with frequency $\omega_{y}$
is symmetric with respect to the $y$ axis, and therefore has an equal and in-
phase coupling rate $g_{y}$ with the two cavities. It rapidly leaks to the
waveguide output Port 3, with a decay rate
$\kappa_{3}\gg\kappa_{1},\kappa_{2},g_{x},g_{y}$. $\kappa_{3}$ is related to
$\kappa_{c}$ of the loaded circulator as in Section IV by
$\kappa_{3}=4\kappa_{c}/3$. The $x$-mode is anti-symmetric with respect to the
$y$ axis, preventing it from coupling to the output port. This also leads to
a $180^{\circ}$ phase difference in cavity coupling, as accounted for by the
negative sign on two of the $g_{x}$ parameters.
Figure 6: Spectroscopy of the hybridized non-reciprocal modes of a circulator-
cavity system. (a) VNA transmission measurement and (b) model prediction of
$|S_{31}|$ frequency spectrum over external magnetic field $B$. Remaining
panels show magnetic field dependence of the system’s eigenmodes and
wavefunctions: (c) eigenmode linewidths $\kappa_{n}/2\pi$, (d) eigenmode
frequencies $\omega_{n}/2\pi$, (e) amplitude parameter $A_{n}$ (c.f. Eq. (3)),
and (f) amplitude ratio (c.f. Eq. (8)) of experimental data (dots) from the
two-mode Lorentzian fit (Eq. (3)) and theory predictions (dashed lines). The
symmetry of $\kappa_{n}$ and $\omega_{n}$ (i.e., the complex eigen-energies of
the hybrid system) with respect to $B$ exemplifies the microscopic time-
reversal symmetry of the non-Hermitian system. The non-reciprocity is
reflected in the difference in $A_{n}$ at $\pm B$, which reveals the asymmetry
in the left/right eigenvector structure (c.f. Eq. (8)). The effective
Hamiltonian parameters from the fit are: $\omega_{1}/2\pi=10.8104$ GHz,
$\omega_{2}/2\pi=10.8040$ GHz, $\omega_{x}/2\pi=10.707$ GHz,
$\omega_{y}/2\pi=10.813$ GHz, $\theta=37.7^{\circ}$, $\kappa_{3}/2\pi=730$
MHz, $g_{x}/2\pi=(9.0+0.011\beta)$ MHz, $g_{y}/2\pi=(5.0+0.006\beta)$ MHz with
$\beta/2\pi=139$ MHz at $B=0$ and decays with $|B|$.
This effective non-Hermitian Hamiltonian can be diagonalized as:
$H_{\mathrm{eff}}=\sum_{n}\hbar\omega_{n}\ket{n_{R}}\bra{n_{L}}$ (5)
where $n=a,b,c,d$ are the eigenmode indices of the system, $\omega_{n}$ the
complex eigen-frequencies, and $\ket{n_{R}}$ and $\ket{n_{L}}$ the right and
left eigenvectors of the non-Hermitian Hamiltonian, defined as:
$H_{\mathrm{eff}}\ket{n_{R}}=\hbar\omega_{n}\ket{n_{R}}$ and
$H_{\mathrm{eff}}^{\dagger}\ket{n_{L}}=\hbar\omega_{n}^{*}\ket{n_{L}}$. The
scattering matrix element $S_{ij}$ from Port $j$ to Port $i$ can generally be
derived from the input-output theory relation:
$S_{ij}=\delta_{ij}-i\sqrt{\kappa_{i}\kappa_{j}}G_{ij}^{R}(\omega)$, where the
$4\times 4$ retarded matrix Green’s function is defined as:
$G^{R}(\omega)=(\omega-H_{\mathrm{eff}})^{-1}$, and $\kappa_{i}$ and
$\kappa_{j}$ are the output and input coupling rates, respectively. Applying
this formalism to the $S_{31}$ measurement of our device, we arrive at the
following Lorentzian spectral decomposition to describe the spectrum:
$S_{31}(\omega)=\sum_{n}\frac{-i\sqrt{\kappa_{1}\kappa_{3}}\bra{y}\ket{n_{R}}\bra{n_{L}}\ket{1}}{\omega-\omega_{n}}$
(6)
where the real and imaginary parts of the eigen-frequency $\omega_{n}$
correspond to the observed Lorentzian frequencies and half linewidths,
respectively. The amplitudes of the Lorentzians are proportional to the
product of the left eigenvector overlap with the bare cavity mode $\ket{1}$
and the right eigenvector overlap with the output circulator mode $\ket{y}$.
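Eq. (6) can be evaluated numerically. The sketch below builds
$H_{\mathrm{eff}}$ from the fitted values quoted in the caption of Fig. 6,
with placeholder $\kappa_{1}$, $\kappa_{2}$, a constant $\beta$, and the
$mB^{2}$ shift omitted (these are not all quoted in the text), and extracts
the complex frequency and Lorentzian amplitude of each eigenmode.

```python
# Numerical sketch of Eqs. (4)-(6): right/left eigenvectors of H_eff and
# the complex Lorentzian amplitudes entering the S31 decomposition.
import numpy as np

B = 0.02                                   # tesla
beta = 0.139                               # GHz (zero-field value, no decay)
th = np.deg2rad(37.7)
w1, w2, wx, wy = 10.8104, 10.8040, 10.707, 10.813   # GHz
k1 = k2 = 1e-4                             # GHz, weakly coupled (assumed)
k3, kB = 0.730, 9.82                       # GHz and GHz/T (k from Eq. (1))
gx = 9.0e-3 + 0.011*beta                   # GHz
gy = 5.0e-3 + 0.006*beta                   # GHz

H = np.array([
    [w1 - 1j*k1/2, 0.0,          gy,                             gx],
    [0.0,          w2 - 1j*k2/2, gy,                            -gx],
    [gy,           gy,           wy - beta*np.cos(th) - 1j*k3/2,
                                 beta*np.sin(th) - 1j*kB*B],
    [gx,          -gx,           beta*np.sin(th) + 1j*kB*B,
                                 wx + beta*np.cos(th)],
])

evals, R = np.linalg.eig(H)                # columns: right eigenvectors
L = np.linalg.inv(R).conj().T              # biorthogonal left eigenvectors
for n in range(4):
    amp = np.sqrt(k1*k3) * R[2, n] * np.conj(L[0, n])   # <y|nR><nL|1>
    print(f"mode {n}: f = {evals[n].real:.4f} GHz, "
          f"kappa/2pi = {-2*evals[n].imag*1e3:6.1f} MHz, |A| = {abs(amp):.2e}")
```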
By fitting the extracted Lorentzian parameters of the two prominent eigenmodes
in Fig. 6(c-e) to the predictions of the $4\times 4$ Hamiltonian model across
all fields (Eq. 4), we can determine all the free Hamiltonian parameters in
this model. This includes $\kappa_{3}/2\pi=730$ MHz, implying $\kappa_{c}/2\pi=550$ MHz
for the loaded circulator, consistent with (and at the high end of) the
estimates in Section IV. Somewhat surprisingly, the experimental data strongly
suggests that the cavity-circulator coupling rates $g_{x}$ and $g_{y}$ must be
magnetic field dependent. (For example, it heavily constrains that
$g_{x}/2\pi>16$ MHz near $B$ = 0 and $g_{x}/2\pi<12$ MHz at $|B|>30$ mT.) We
attribute this varying coupling to the change in the electromagnetic field
distribution of the $x$- and $y$-modes around the coupling aperture due to
the $x$-$y$ anisotropy of YIG. Assuming $g_{x}$ and $g_{y}$ contain a
contribution proportional to $\beta(B)$ with the same decay shape over applied
field, the effective Hamiltonian model fits the Lorentzian parameters quite
well and also reproduces the overall transmission spectrum [Fig. 6(b)].
The eigenmode features of the system can be understood intuitively by
considering first the inter-mixing of the $x$, $y$ circulator modes (i.e.
diagonalization of the lower right block of $H_{\mathrm{eff}}$) and then their
mixing with the two cavity modes. At $B=0$, the circulator modes are
relatively close in frequency to the bare cavities, resulting in strong four-
mode hybridization and substantial linewidth-broadening and frequency shift to
Mode $a$. As $B$ increases, the block-diagonalized circulator modes split
further in frequency in response to increasing magnetization of YIG (analogous
to the unloaded internal mode spectrum in Fig. 2(b)), and become more detuned
from the bare cavities, so the cavity-circulator hybridization is continuously
reduced. This is reflected in the eventual flattening of the frequency and
linewidth of the observed Lorentzians at high fields.
In our device, opposite magnetic fields produce opposite directions of non-
reciprocity, hence the transmission spectra observed at $\pm B$ in Fig. 6(a)
are markedly different. Interestingly, the extracted data in Fig. 6(c,d) shows
that the underlying eigenmode frequency and linewidths at $\pm B$ are equal,
unchanged under the mapping $\mathcal{P}:B\mapsto-B$. This is no
coincidence, but is rather the direct consequence of microscopic symmetry
requirements. Recall again that the Onsager-Casimir relation Casimir (1945)
requires that the full scattering matrix $S$ satisfy
$S(-B)=S^{\mathrm{T}}(B)$. As $S$ is however directly determined by our non-
Hermitian Hamiltonian, this necessarily implies that
$H^{\phantom{T}}_{\mathrm{eff}}(-B)=H_{\mathrm{eff}}^{\mathrm{T}}(B)$. This in
turn implies that the complex eigenvalues of $H_{\mathrm{eff}}$ are unchanged
under $\mathcal{P}$. Note that the operation $\mathcal{P}$ is not just a
simple time-reversal operation, as it does not involve transforming loss to
gain (and vice versa). This property of $H_{\mathrm{eff}}$ can be easily seen
to hold for our specific model in Eq. (4). Nonetheless, we emphasize that our
experimental observation here of eigenvalue invariance under the mapping
$\mathcal{P}$ is a demonstration of a general physical property; it is by no
means contingent on the specifics of our model.
While the eigenvalues of $H_{\mathrm{eff}}$ do not directly reflect the non-
reciprocal physics of our system, the same is not true of its eigenvectors. As
it involves matrix transposition, the operation $\mathcal{P}$ exchanges the
left and right eigenvectors of the effective Hamiltonian:
$\ket{n_{R}(B)}=\ket{n_{L}(-B)}^{*}$. A defining feature of a non-reciprocal
Hamiltonian is that the left and right eigenvectors generally differ in their
spatial structures (i.e. they look very different when expressed in a basis of
bare modes):
$R_{i,n}=\frac{|\bra{n_{L}}\ket{i}|}{|\bra{i}\ket{n_{R}}|}\neq 1$ (7)
As has been discussed elsewhere McDonald and Clerk (2020); Schomerus (2020),
the $R_{i,n}$ characterizes a fundamental asymmetry in the response of our
system. The numerator characterizes the susceptibility of the eigenmode $n$ to
a perturbation or excitation entering from bare mode $i$. In contrast, the
denominator tells us the amplitude on bare mode $i$ that would result given
that the system eigenmode $n$ is excited. In a Hermitian system these
quantities are necessarily identical, expressing a fundamental kind of
reciprocity between susceptibility and response. In our non-Hermitian system,
the non-unity ratio here reflects the effective non-reciprocity of the inter-
mode interactions.
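As a minimal numerical illustration of Eq. (7) (a sketch; normalizing each left and right eigenvector to unit length is our convention for fixing the biorthogonal gauge freedom, consistent with Eqs. (24)-(25) below):

```python
import numpy as np

def nonreciprocity_ratios(H):
    """R[n, i] = |<n_L|i>| / |<i|n_R>| of Eq. (7), with each left and right
    eigenvector normalized to unit length."""
    w, V = np.linalg.eig(H)             # right eigenvectors (columns)
    wl, U = np.linalg.eig(H.conj().T)   # left eigenvectors: H^dag |n_L> = w* |n_L>
    V = V[:, np.argsort(w)]             # pair modes by matching eigenvalues
    U = U[:, np.argsort(wl.conj())]
    V = V / np.linalg.norm(V, axis=0)
    U = U / np.linalg.norm(U, axis=0)
    return np.abs(U.conj().T) / np.abs(V.T)

print(nonreciprocity_ratios(np.array([[0.0, 1.0], [1.0, 0.5]])))  # Hermitian: all ~1
print(nonreciprocity_ratios(np.array([[0.0, 2.0], [0.5, 0.5]])))  # |H12| != |H21|: != 1
```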
This non-reciprocal eigenvector structure is experimentally verified by the
asymmetry of the Lorentzian amplitudes with respect to $B$ in Fig. 6(e),
$\frac{A_{n}(B)}{A_{n}(-B)}=\bigg{|}\frac{\bra{n_{L}}\ket{1}}{\bra{1}\ket{n_{R}}}\frac{\bra{y}\ket{n_{R}}}{\bra{n_{L}}\ket{y}}\bigg{|}=\frac{R_{1,n}}{R_{y,n}}$
(8)
We plot this ratio in Fig. 6(f). For our device, a calculation based on Eq.
(4) shows that $R_{y,n}\approx 1$ for most of the field range (near zero field
and $|B|\gtrsim 15$ mT), allowing Fig. 6(f) to be understood as a measurement
of the non-reciprocity ratio $R_{1,n}$ in this field range, showing the role
of Cavity 1 in the two prominent eigenmodes of the system. In particular, the
most pronounced asymmetry is observed near the optimal working point of the
circulator ($B=\pm 28$ mT) for Mode $b$, which can leak through Cavity 1 but cannot be excited from Cavity 1 or vice versa, as expected for a mode
dominated by photons in Cavity 2.
Figure 7: Mediated non-reciprocal coupling rates between the external
superconducting cavities. Red curves show the off-diagonal coupling terms in
the effective two-mode Hamiltonian (c.f. Eq.(9)), $\lvert H_{12}\rvert$ and
$\lvert H_{21}\rvert$, as a function of magnetic field, and blue shows
$r=\sqrt{\lvert H_{21}\rvert/\lvert H_{12}\rvert}$.
It is also interesting to discuss our system in the context of general systems
exhibiting non-reciprocal interactions between constituent parts. Such systems
are commonly described by phenomenological non-Hermitian Hamiltonian matrices
$H$, whose matrix elements in a local basis encode interactions with
directionality: $|H_{ij}|\neq|H_{ji}|$. A prominent example is the Hatano-
Nelson model Hatano and Nelson (1997) of asymmetric tunneling on a lattice. In
our case, we have a microscopically-motivated model that is fully consistent
with the requirements of microscopic reversibility, but which encodes non-
reciprocity. As shown in appendix C, one can adiabatically eliminate the
internal circulator modes from our system to obtain an effective two-mode non-
Hermitian Hamiltonian that describes the external cavity modes and their
circulator-mediated interaction:
$\displaystyle
H^{\prime}_{\mathrm{eff}}/\hbar=\begin{pmatrix}\omega_{1,\mathrm{eff}}-i\frac{\kappa_{1,\mathrm{eff}}}{2}&H_{12}\\\
H_{21}&\omega_{2,\mathrm{eff}}-i\frac{\kappa_{2,\mathrm{eff}}}{2}\\\
\end{pmatrix}$ (9)
The field-tunable non-reciprocity can be seen in the asymmetry of the off-
diagonal coupling values $H_{12}$ and $H_{21}$ based on the model, as plotted
in Fig. 7. We note that the scale of $H_{12}$ and $H_{21}$ of a few MHz (which
can be increased by using a larger coupling hole) is much larger than the
achievable internal loss of the superconducting cavities, making the non-
reciprocal coupling the dominant interaction.
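A minimal sketch of this reduction (the Schur-complement projection at a fixed evaluation frequency is our simplification of the adiabatic elimination carried out in Appendix C, and the basis ordering is an assumption):

```python
import numpy as np

def eliminate_circulator(H4, w_eval):
    """Project the 4x4 effective Hamiltonian onto the two cavity modes via a
    Schur complement at frequency w_eval, a standard form of adiabatic
    elimination. Assumed basis: [cavity1, cavity2, circ_x, circ_y]."""
    A, B = H4[:2, :2], H4[:2, 2:]   # cavity block, cavity->circulator coupling
    C, D = H4[2:, :2], H4[2:, 2:]   # circulator->cavity coupling, circulator block
    return A + B @ np.linalg.inv(w_eval * np.eye(2) - D) @ C

# With the field-dependent model of Eq. (4) as H4 and w_eval near the cavity
# frequency, the directionality appears as |Hp[0, 1]| != |Hp[1, 0]|, and
# r = sqrt(|Hp[1, 0]| / |Hp[0, 1]|) reproduces the blue curve of Fig. 7:
# Hp = eliminate_circulator(H4, w_eval)
```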
Furthermore, this reduced Hamiltonian and its eigenvectors can be mapped to a
reciprocal Hamiltonian and associated eigenvectors using a similarity
transformation $S(r)$, where $\sqrt{\lvert H_{21}\rvert/\lvert
H_{12}\rvert}\equiv r$, as outlined in Appendix C. The similarity transform
effectively localizes the mode participation on the lattice site in the
direction of stronger coupling more than would be expected in the reciprocal
case, causing the amplitude ratios ($R_{i,n}$) to deviate from 1, which can be
viewed as a consequence of the non-Hermitian skin effect on a two site Hatano-
Nelson lattice Yao and Wang (2018). Furthermore, the similarity transformation
can be used to explain the qualitative behavior of the disparate amplitude
ratios seen in Fig. 6(f). In particular, for $B\gtrsim 20$ mT, we have
$R_{1,a}\approx 1$ and $R_{1,b}\approx r^{2}$ (see Appendix C for details).
## VI Outlook
In this work, we have revisited the working principles of a Y-junction ferrite
circulator Fay and Comstock (1965), a microwave engineering classic from the
1960’s, in an entirely new context of hybrid quantum systems and non-Hermitian Hamiltonians. The use of reconfigurable probes and single-crystalline YIG in a
low-loss waveguide package allows us to connect the properties of the photon-
magnon polaritons to the circulator performance. We have further leveraged our
direct access to the internal modes and our ability to tune their coupling in
situ to construct a multi-mode chiral system and unambiguously reveal its non-
reciprocal eigenvector structure. An understanding of the circulator modes and of the non-reciprocal eigenvector structure of this multi-mode chiral system provides a foundation for engineering arbitrary target non-Hermitian Hamiltonians: the model constructed here serves as a template to which any circuit QED element can be coupled in order to understand how it would integrate into the non-reciprocal dynamics, as was done here with the superconducting cavities.
Looking forward, our device architecture provides a versatile testbed for
studying non-reciprocal interactions in circuit QED by integration of
superconducting qubits. This is enabled by two of its highlighted properties:
the low internal loss of the circulator modes ($<$1% of the demonstrated
coupling rates, compatible with potential high-fidelity operations), and the
relatively low-field operation of the circulator ($\sim$25 mT, below
ferrimagnetic saturation). The latter allows niobium waveguides or cavities to
conveniently act as magnetic shields for superconducting qubits. We have
preliminarily tested that the coherence times of a transmon qubit housed in
one of the niobium cavities are unaffected by in-situ application of a global
magnetic field up to at least 0.1 T. We expect a transmon housed in a niobium
waveguide would receive a similar level of protection from the magnetic field.
Direct non-reciprocal coupling of superconducting qubits would open a new
frontier in the study of non-reciprocal dynamics currently dominated by linear
systems Sounas and Alù (2017); Fang _et al._ (2017); Xu _et al._ (2019);
Ruesink _et al._ (2016). The physics of a $N$-mode linear non-reciprocal
system can always be described efficiently by a $N\times N$ non-Hermitian
Hamiltonian matrix (exemplified by our application of such a model) and its
dynamics are always in the classical correspondence limit. Direct
participation of multiple nonlinear modes (such as superconducting qubits) in
non-reciprocal coupling, as envisioned in chiral quantum optics Lodahl _et
al._ (2017), would lead to novel forms of entanglement stabilization and many-
body phases Stannigel _et al._ (2012); Ramos _et al._ (2014). Our system
presents another potential platform to implement this regime in circuit QED in
addition to those proposed using dynamic control Guimond _et al._ (2020);
Gheeraert _et al._ (2020). Strong coupling of Josephson circuits with low-
loss non-reciprocal elements can even produce degenerate and protected ground
states for robust encoding of qubits Rymarz _et al._ (2021).
###### Acknowledgements.
We thank Juliang Li and Dario Rosenstock for experimental assistance. This
research was supported by U.S. Army Research Office under grants
W911-NF-17-1-0469 and W911-NF-19-1-0380.
## Appendix A Numerical simulation of the ferrite device
Finite element analysis software that supports magnetodynamic simulations,
such as Ansys HFSS, can be used to simulate our circulator system with both
driven mode and eigenmode solutions. Eigenmode analysis can solve for the
frequency and field distributions of our device’s eigenmodes, while driven
mode analysis reports the S-parameters over frequency. Here we discuss
eigenmode simulations, but driven mode analysis can be carried out similarly.
It is well known that when the applied field is large and magnetization is
saturated along $z$ axis, one can generalize to the whole ferrite the
equations of motion derived from the torque experienced by an electron dipole
moment under the presence of an applied field. This approach, augmented by the
small signal approximation of the Landau-Lifshitz equation of motion, yields
the textbook Polder (relative) permeability tensor:
$[\mu]_{z}=\begin{pmatrix}\mu_{r}&i\kappa&0\\\ -i\kappa&\mu_{r}&0\\\ 0&0&1\\\
\end{pmatrix}$ (10)
where $\mu_{r}=1+\frac{\omega_{0}\omega_{m}}{\omega_{0}^{2}-\omega^{2}}$,
$\kappa=\frac{\omega\omega_{m}}{\omega_{0}^{2}-\omega^{2}}$, with
$\omega_{0}=\gamma\mu_{0}H_{0}$ and $\omega_{m}=\gamma\mu_{0}M_{s}$ being the
internal field strength and saturation magnetization converted to frequencies,
respectively.
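A reference implementation of this tensor (a sketch; the gyromagnetic ratio value and SI conventions are our assumptions):

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
GAMMA = 2 * np.pi * 28e9    # electron gyromagnetic ratio [rad/s per T] (assumed)

def polder_tensor(omega, H0, Ms):
    """Saturated-ferrite relative permeability tensor of Eq. (10).
    omega in rad/s; internal field H0 and saturation magnetization Ms in A/m."""
    w0 = GAMMA * MU0 * H0
    wm = GAMMA * MU0 * Ms
    mu_r = 1 + w0 * wm / (w0**2 - omega**2)
    kappa = omega * wm / (w0**2 - omega**2)
    return np.array([[mu_r,        1j * kappa, 0],
                     [-1j * kappa, mu_r,       0],
                     [0,           0,          1]], dtype=complex)
```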
The Polder permeability tensor is implemented in HFSS by default to solve for
the interaction of a saturated ferrite with an AC microwave field. However, in
our experiment we operate the circulator at a low bias field, where the
ferrite is not fully saturated. We adopt a permeability tensor model proposed
by Sandy and Green Green and Sandy (1974) for a partially-magnetized ferrite:
$\displaystyle[\mu]_{z}=\begin{pmatrix}\mu_{p}&i\kappa_{p}&0\\\
-i\kappa_{p}&\mu_{p}&0\\\ 0&0&\mu_{z}\\\ \end{pmatrix}$ (11)
where
$\displaystyle\mu_{p}$
$\displaystyle=\mu_{d}+(1-\mu_{d})\bigg{(}\frac{|M_{p}|}{M_{s}}\bigg{)}^{3/2}$
(12) $\displaystyle\kappa_{p}$
$\displaystyle=\kappa\bigg{(}\frac{M_{p}}{M_{s}}\bigg{)}$ (13)
$\displaystyle\mu_{z}$
$\displaystyle=\mu_{d}^{\big{(}1-\frac{|M_{p}|}{M_{s}}\big{)}^{5/2}}$ (14)
$\displaystyle\mu_{d}$
$\displaystyle=\frac{1}{3}+\frac{2}{3}\sqrt{1-\bigg{(}\frac{\omega_{m}}{\omega}\bigg{)}^{2}}$
(15)
with $M_{p}$ being the net magnetization of the partially magnetized ferrite.
This model contains functional forms for $\mu_{p}$ and $\mu_{z}$ that are
purely empirical. However, the expressions for $\kappa_{p}$, which dictates
the chiral splitting of the circulator modes, and $\mu_{d}$, which represents
the permeability in fully demagnetized state, are well motivated Schlömann
(1970).
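In code, the model reads as follows (a sketch; taking the internal-field frequency $\omega_{0}\approx 0$ below saturation is our simplifying assumption, and the expressions require $\omega>\omega_{m}$ for $\mu_{d}$ to be real):

```python
import numpy as np

GAMMA_MU0 = 2 * np.pi * 28e9 * 4e-7 * np.pi  # gamma*mu0 [rad/s per A/m] (assumed)

def sandy_green_tensor(omega, Mp, Ms):
    """Partially-magnetized permeability tensor of Eqs. (11)-(15).
    Net magnetization Mp and saturation magnetization Ms in A/m; omega in rad/s."""
    wm = GAMMA_MU0 * Ms
    kappa = -wm / omega                  # Polder kappa of Eq. (10) with w0 -> 0
    p = abs(Mp) / Ms
    mu_d = 1/3 + 2/3 * np.sqrt(1 - (wm / omega)**2)   # Eq. (15)
    mu_p = mu_d + (1 - mu_d) * p**1.5                 # Eq. (12)
    kappa_p = kappa * (Mp / Ms)                       # Eq. (13)
    mu_z = mu_d ** ((1 - p)**2.5)                     # Eq. (14)
    return np.array([[mu_p,          1j * kappa_p, 0],
                     [-1j * kappa_p, mu_p,         0],
                     [0,             0,            mu_z]], dtype=complex)
```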
Figure 8: Eigenfrequencies of the device from finite-element simulations. For
the circulator device with WCP, HFSS eigenmode simulations give the mode frequencies over different magnetic fields (dots), which agree semi-quantitatively with the experimental data.
This model is implemented in simulations by defining materials with the
customized permeability tensor as given above. Whereas HFSS does not by
default support eigenmode simulations for a ferrite under a DC bias field,
manually defining the permeability tensor components allows us to simulate the
circulator’s eigenmode structure at any magnetization (bias field) as shown in
Fig. 8.
The simulation results agree semi-quantitatively with the experimental data in Fig. 2(a), using the linear relationship between applied magnetic field and magnetization $M=\mu_{0}B/N_{z}$ mentioned earlier, including a dielectric
resonance mode with steep magnetic field dependence that is visible in the
experimental data.
Relating to the anisotropy mentioned earlier in Section III, the simulation treats the YIG cylinder as completely isotropic, leading to degenerate Modes x and y at zero field. To account for the anisotropy, we introduce a general
energetic preference along the $x$ axis, thus making the domains of the
unsaturated YIG preferentially align along the $x$ axis and breaking the
rotational symmetry.
When all domains are oriented along the $z$ axis with net magnetization of
zero, the permeability is calculated to be
$[\mu]_{z}=\begin{pmatrix}\mu_{\mathrm{eff}}&0&0\\\ 0&\mu_{\mathrm{eff}}&0\\\
0&0&1\\\ \end{pmatrix}$ (16)
where
$\mu_{\mathrm{eff}}=\sqrt{\frac{\omega^{2}-\omega_{m}^{2}}{\omega^{2}}}$. To
get the permeability matrices for domains aligned along the $x$ and $y$ axes ($[\mu]_{x}$, $[\mu]_{y}$), one can apply a change of coordinates to Eq. (16).
The matrix for completely random domain orientations would be an equal average
of the three permeability matrices, $[\mu]_{x},[\mu]_{y},[\mu]_{z}$ Schlömann
(1970). Applying a weighted average to the matrices will then allow for
representation of an energetic preference, as shown for a preference along the
$x$ axis:
$[\mu]=(\frac{1}{3}+\delta)[\mu]_{x}+(\frac{1}{3}-\delta)[\mu]_{y}+\frac{1}{3}[\mu]_{z}$
(17)
Using $\delta=0.1$ in Eq. (17) gives 260 MHz of splitting between Mode x and Mode y, which is in good agreement with the experimental results of Fig. 2(b).
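A sketch of this construction (with zero net magnetization the tensors are diagonal, so the change of coordinates reduces to permuting the diagonal entries):

```python
import numpy as np

def domain_avg_permeability(omega, wm, delta=0.1):
    """Weighted-average permeability of Eq. (17) for unsaturated YIG with an
    energetic preference along x. omega, wm in rad/s (omega > wm assumed)."""
    mu_eff = np.sqrt((omega**2 - wm**2) / omega**2)
    mu_z = np.diag([mu_eff, mu_eff, 1.0])   # domains along +/- z, Eq. (16)
    mu_x = np.diag([1.0, mu_eff, mu_eff])   # permuted for domains along x
    mu_y = np.diag([mu_eff, 1.0, mu_eff])   # permuted for domains along y
    return (1/3 + delta) * mu_x + (1/3 - delta) * mu_y + (1/3) * mu_z
```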
## Appendix B Modeling YIG anisotropy in system Hamiltonian
As mentioned earlier, there is a clear broken rotational symmetry in the
$x$-$y$ plane apparent from the splitting in Fig. 2(b). Since the exact origin
of the anisotropy is unknown, we will treat it as a general energetic favoring
in the x-y plane. As the $\hat{z}$ bias field is increased, the magnetization
will align more along $\hat{z}$, making $x$-$y$ plane preferences less
impactful. Based off this understanding, we wanted a simple functional form to
describe how the effect of this anisotropy decays with an increase in bias
field strength that we could use to describe the decay of $\beta$. Since we
just want the general form of how the effect of an energetic preference decays
over field, the actual form of the energetic preference in the $x$-$y$ plane
is not important. We chose to use a toy model of a magnetic domain with a
simple Hamiltonian ($H_{\mathrm{an}}$) with a simple energetic preference
given by $K$ along the $x$ axis and a total net magnetic moment $M$:
$H_{\mathrm{an}}=-BM\cos(\theta)-K\sin^{2}(\theta)\cos^{2}(\phi)$ (18)
To see how the effect of this anisotropy changes as we vary the magnetic field
$B$, we utilized classical Boltzmann statistics. We define a partition
function:
$Z=\int_{0}^{\pi}\int_{0}^{2\pi}e^{-H_{\mathrm{an}}(\theta,\phi)/(k_{b}T)}\,\sin(\theta)d\theta
d\phi\ $ (19)
so we can calculate the expectation values of the magnetic moment components using Eq. (20) for $A=M_{x},M_{y},M_{z}$, with
$M_{x}=M\sin(\theta)\cos(\phi),M_{y}=M\sin(\theta)\sin(\phi),M_{z}=M\cos(\theta)$.
$\langle
A^{2}\rangle=\int_{0}^{\pi}\int_{0}^{2\pi}A^{2}e^{-H_{\mathrm{an}}(\theta,\phi)/(k_{b}T)}\,\sin(\theta)d\theta
d\phi\ /Z$ (20)
We calculate these expectation values numerically, and find that the
difference of $\langle M_{x}^{2}\rangle-\langle M_{y}^{2}\rangle$ follows
approximately a $\sech(B)$ function. This motivates us to use this simple
functional form to model the anisotropy-induced term $\beta$:
$\displaystyle\beta(B)$ $\displaystyle=\beta_{0}\sech(B/B_{0})$ (21)
The scaling factor $B_{0}$ was fit to the $S_{31}$ spectrum, giving a value of 18.5 mT. While this is a rather crude phenomenological treatment of the anisotropy, the detuning of the circulator modes becomes large enough that there is little hybridization with the cavities at relatively small magnetic fields ($\sim$20 mT), so the exact dependence on magnetic field matters little for understanding the non-reciprocal dynamics of the cavities.
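The numerical average described above can be reproduced in a few lines (a sketch on a simple quadrature grid; all parameter values are illustrative and in arbitrary units):

```python
import numpy as np

def anisotropy_contrast(B_values, M=1.0, K=1.0, kT=1.0, n=201):
    """Boltzmann averages <Mx^2> - <My^2> for the toy Hamiltonian of Eq. (18),
    per Eqs. (19)-(20), evaluated on a theta-phi grid."""
    th = np.linspace(0.0, np.pi, n)
    ph = np.linspace(0.0, 2 * np.pi, n)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    out = []
    for B in B_values:
        H_an = -B * M * np.cos(TH) - K * np.sin(TH)**2 * np.cos(PH)**2
        w = np.exp(-H_an / kT) * np.sin(TH)      # Boltzmann weight x Jacobian
        Mx2 = (M * np.sin(TH) * np.cos(PH))**2
        My2 = (M * np.sin(TH) * np.sin(PH))**2
        avg = lambda A: (A * w).sum() / w.sum()  # grid spacings cancel in the ratio
        out.append(avg(Mx2) - avg(My2))
    return np.array(out)

contrast = anisotropy_contrast(np.linspace(0.0, 5.0, 11))
# contrast is positive at B = 0 and decays with |B|, roughly like sech(B/B0)
```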
## Appendix C Two-mode Hamiltonian and gauge symmetry
We aim to elucidate the non-reciprocity from the Hamiltonian given in Eq. (4)
by reducing it to the form written in Eq. (9). In order to do this, we
adiabatically integrate out the two circulator modes to reduce the Hamiltonian
to a simple $2\times 2$ matrix ($H^{\prime}_{\mathrm{eff}})$ involving only
the two cavity modes. The adiabatic elimination is justified due to the large
loss rate on the hybridized circulator modes, making their relevant time
scales much faster than the time scale set by the coupling parameters to the
cavities. The form of the effective Hamiltonian is written out in Eq.
(9). Due to the complicated dependence on the four mode model parameters, we
have written simple frequency and loss terms on the diagonal entries and
simple non-reciprocal couplings on the off diagonal entries where their
explicit values change as a function of magnetic field. The coupling terms
($H_{12},H_{21}$) along with $r=\sqrt{\lvert H_{21}\rvert/\lvert
H_{12}\rvert}$ are plotted in Fig. 7. The non-reciprocal nature of the system
then becomes immediately apparent as the $H_{21}$ and $H_{12}$ Hamiltonian
terms are different outside of 0 field, showing a clear directionality in the
interaction. We can map this Hamiltonian to a reciprocal one using the
similarity transformation outlined in Eq. (22), with the transformation matrix written in Eq. (23). This new reciprocal Hamiltonian is symmetric under flipping the sign of the magnetic field,
$H_{\mathrm{rec}}(B)=H_{\mathrm{rec}}(-B)$.
$H^{\prime}_{\mathrm{eff}}\rightarrow SH^{\prime}_{\mathrm{eff}}S^{-1}\equiv
H_{\mathrm{rec}}$ (22) $S=\begin{pmatrix}r^{1/2}&0\\\ 0&r^{-1/2}\\\
\end{pmatrix}$ (23)
This means that plotting the ratio $R_{i,n,\mathrm{rec}}$ from $H_{\mathrm{rec}}$ will always yield 1 for all $B$ values. One can also map the eigenvectors of the original system ($\ket{\psi_{i}}$) to those of the reciprocal system ($\ket{\psi_{i,\mathrm{rec}}}$) by $\ket{\psi_{Ri,\mathrm{rec}}}=S\ket{\psi_{Ri}}$, $\ket{\psi_{Li,\mathrm{rec}}}=S^{-1}\ket{\psi_{Li}}$. Starting from the ratio $R_{i,n,\mathrm{rec}}=1$ using the eigenvectors of $H_{\mathrm{rec}}$, it is then apparent that transforming the eigenvectors back to those of $H^{\prime}_{\mathrm{eff}}$ allows one to simply calculate $R_{i,n}$. To illustrate this, we start with the explicit change in components from the transformation of the right and left eigenvectors, as written in Eqs. (24, 25); we can then substitute these into the earlier expression for the ratio $R_{i,n}$ and see how the ratio deviates from the reciprocal case of 1, as done in Eq. (26) with $i=1$ as an example.
$\displaystyle\ket{\psi_{R,\mathrm{rec}}}=\begin{pmatrix}x\\\ y\\\
\end{pmatrix}\xrightarrow[\text{transform}]{\text{similarity}}\ket{\psi_{R}}=\frac{1}{\sqrt{\frac{|x|^{2}}{r}+|y|^{2}r}}\begin{pmatrix}\frac{1}{\sqrt{r}}x\\\
\sqrt{r}y\\\ \end{pmatrix}$ (24)
$\ket{\psi_{L,\mathrm{rec}}}=\begin{pmatrix}x^{*}\\\ y^{*}\\\
\end{pmatrix}\xrightarrow[\text{transform}]{\text{similarity}}\ket{\psi_{L}}=\frac{1}{\sqrt{|x|^{2}r+\frac{|y|^{2}}{r}}}\begin{pmatrix}\sqrt{r}x^{*}\\\
\frac{1}{\sqrt{r}}y^{*}\\\ \end{pmatrix}$ (25)
$R_{1,n}=\frac{|\bra{n_{L}}\ket{1}|}{|\bra{1}\ket{n_{R}}|}=\frac{|r^{1/2}x\sqrt{|x|^{2}r^{-1}+|y|^{2}r|}}{{|r^{-1/2}x\sqrt{|x|^{2}r+|y|^{2}r^{-1}}|}}$
(26)
It is important to note two simplifying limits of Eq. (26), which the reader may verify: for $x/y\gg r$, $R_{1,n}\approx 1$, and for $y/x\gg r$, $R_{1,n}\approx r^{2}$.
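These limits are easy to check numerically (a toy sketch with illustrative numbers, not the fitted model):

```python
import numpy as np

# Detuned two-mode Hamiltonian with directional coupling (|H21| != |H12|)
Hp = np.array([[5.0, 0.2],
               [0.8, 8.0]])
r = np.sqrt(abs(Hp[1, 0]) / abs(Hp[0, 1]))   # r = 2 for these numbers

S = np.diag([np.sqrt(r), 1.0 / np.sqrt(r)])  # Eq. (23)
H_rec = S @ Hp @ np.linalg.inv(S)            # Eq. (22)
assert np.isclose(H_rec[0, 1], H_rec[1, 0])  # reciprocal after the transform

# R_{1,n} from unit-norm left/right eigenvectors; for the real matrix Hp the
# left eigenvectors are the eigenvectors of Hp^T (np.linalg.eig normalizes)
wR, VR = np.linalg.eig(Hp)
wL, VL = np.linalg.eig(Hp.T)
VR = VR[:, np.argsort(wR)]
VL = VL[:, np.argsort(wL)]
print(np.abs(VL[0, :]) / np.abs(VR[0, :]))   # ~[1.0, r**2]: Mode a, Mode b
```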
One can use this similarity transformation to understand the qualitative
behavior of the disparate amplitude ratios seen in Fig. 6(f). As mentioned
earlier, the amplitude ratio in this case can be roughly approximated as
$R_{1,n}$ at $|B|\gtrsim 15$ mT, so we can focus primarily on this ratio to understand the behavior in this field range. At larger fields ($B\gtrsim 20$ mT), Modes $a$ and $b$ are dominated by participation in the bare cavity modes, so we can approximate these modes by using the eigenmode values from $H^{\prime}_{\mathrm{eff}}$ for the cavity mode components and zeros for the circulator mode components. Under this approximation, we can evaluate the inner products in $R_{1,n}$ just from the components of the eigenmodes of $H^{\prime}_{\mathrm{eff}}$. Mode $a$ is largely dominated by the Cavity 1 component with little circulator participation, with $|\bra{a_{\mathrm{rec}}}\ket{1}|/|\bra{a_{\mathrm{rec}}}\ket{2}|\gg r$ for all values of $r$; thus we can invoke the first limit of Eq. (26) to find the ratio $R_{1,a}\approx 1$, which is what is seen in Fig. 6(f). The same argument can be made for Mode $b$, but in the opposite limit of $|\bra{b_{\mathrm{rec}}}\ket{2}|/|\bra{b_{\mathrm{rec}}}\ket{1}|\gg r$, leading to the other limit of Eq. (26) and making the ratio $R_{1,b}\approx r^{2}$, which can be seen by comparing Fig. 6(f) with Fig. 7.
## References
* Kord _et al._ (2020) A. Kord, D. L. Sounas, and A. Alù, Proceedings of the IEEE 108, 1728 (2020).
* Devoret and Schoelkopf (2013) M. H. Devoret and R. J. Schoelkopf, Science 339, 1169 (2013).
* Krantz _et al._ (2019) P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, Applied Physics Reviews 6, 021318 (2019).
* Kurpiers _et al._ (2018) P. Kurpiers, P. Magnard, T. Walter, B. Royer, M. Pechal, J. Heinsoo, Y. Salathé, A. Akin, S. Storz, J.-C. Besse, S. Gasparinetti, A. Blais, and A. Wallraff, Nature 558, 264 (2018).
* Axline _et al._ (2018) C. J. Axline, L. D. Burkhart, W. Pfaff, M. Zhang, K. Chou, P. Campagne-Ibarcq, P. Reinhold, L. Frunzio, S. M. Girvin, L. Jiang, M. H. Devoret, and R. J. Schoelkopf, Nature Physics 14, 705 (2018).
* Vool and Devoret (2017) U. Vool and M. Devoret, International Journal of Circuit Theory and Applications 45, 897 (2017).
* Lachance-Quirion _et al._ (2019) D. Lachance-Quirion, Y. Tabuchi, A. Gloppe, K. Usami, and Y. Nakamura, Applied Physics Express 12, 070101 (2019).
* Awschalom _et al._ (2021) D. D. Awschalom, C. H. R. Du, R. He, F. J. Heremans, A. Hoffmann, J. T. Hou, H. Kurebayashi, Y. Li, L. Liu, V. Novosad, J. Sklenar, S. E. Sullivan, D. Sun, H. Tang, V. Tiberkevich, C. Trevillian, A. W. Tsen, L. R. Weiss, W. Zhang, X. Zhang, L. Zhao, and C. W. Zollitsch, arXiv:2102.03222 (2021).
* Huebl _et al._ (2013) H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, Physical Review Letters 111, 127003 (2013).
* Tabuchi _et al._ (2014) Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Physical Review Letters 113, 083603 (2014).
* Zhang _et al._ (2014) X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Physical Review Letters 113, 156401 (2014).
* Tabuchi _et al._ (2015) Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Science 349, 405 (2015).
* Lachance-Quirion _et al._ (2020) D. Lachance-Quirion, S. P. Wolski, Y. Tabuchi, S. Kono, K. Usami, and Y. Nakamura, Science 367, 425 (2020).
* Hou and Liu (2019) J. T. Hou and L. Liu, Physical Review Letters 123, 107702 (2019).
* Golovchanskiy _et al._ (2021) I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, M. Weides, V. V. Ryazanov, A. A. Golubov, A. V. Ustinov, and M. Y. Kupriyanov, Science Advances 7, eabe8638 (2021).
* Heyroth _et al._ (2019) F. Heyroth, C. Hauser, P. Trempler, P. Geyer, F. Syrowatka, R. Dreyer, S. G. Ebbinghaus, G. Woltersdorf, and G. Schmidt, Physical Review Applied 12, 054031 (2019).
* Boventer _et al._ (2018) I. Boventer, M. Pfirrmann, J. Krause, Y. Schön, M. Klaui, and M. Weides, Physical Review B 97, 184420 (2018).
* Zhang _et al._ (2017) D. Zhang, X.-Q. Luo, Y.-P. Wang, T.-F. Li, and J. Q. You, Nature Communications 8, 1368 (2017).
* Anderson _et al._ (2016) B. M. Anderson, R. Ma, C. Owens, D. I. Schuster, and J. Simon, Physical Review X 6, 041043 (2016).
* Owens _et al._ (2018) C. Owens, A. LaChapelle, B. Saxberg, B. M. Anderson, R. Ma, J. Simon, and D. I. Schuster, Physical Review A 97, 013818 (2018).
* Zhang _et al._ (2020) X. Zhang, A. Galda, X. Han, D. Jin, and V. M. Vinokur, Physical Review Applied 13, 044039 (2020).
* Fay and Comstock (1965) C. E. Fay and R. L. Comstock, IEEE Transactions on Microwave Theory and Techniques 13, 15 (1965).
* Heiss (2012) W. D. Heiss, Journal of Physics A: Mathematical and Theoretical 45, 444016 (2012).
* Özdemir _et al._ (2019) S. K. Özdemir, S. Rotter, F. Nori, and L. Yang, Nature Materials 18, 783 (2019).
* Hatano and Nelson (1997) N. Hatano and D. R. Nelson, Physical Review B 56, 8651 (1997).
* Yao and Wang (2018) S. Yao and Z. Wang, Physical Review Letters 121, 086803 (2018).
* McDonald _et al._ (2018) A. McDonald, T. Pereg-Barnea, and A. A. Clerk, Physical Review X 8, 041031 (2018).
* Stannigel _et al._ (2012) K. Stannigel, P. Rabl, and P. Zoller, New Journal of Physics 14, 063014 (2012).
* Lodahl _et al._ (2017) P. Lodahl, S. Mahmoodian, S. Stobbe, A. Rauschenbeutel, P. Schneeweiss, J. Volz, H. Pichler, and P. Zoller, Nature 541, 473 (2017).
* Chapman _et al._ (2017) B. J. Chapman, E. I. Rosenthal, J. Kerckhoff, B. A. Moores, L. R. Vale, J. A. B. Mates, G. C. Hilton, K. Lalumière, A. Blais, and K. W. Lehnert, Physical Review X 7, 041043 (2017).
* Lecocq _et al._ (2017) F. Lecocq, L. Ranzani, G. A. Peterson, K. Cicak, R. W. Simmonds, J. D. Teufel, and J. Aumentado, Physical Review Applied 7, 024028 (2017).
* Sliwa _et al._ (2015) K. M. Sliwa, M. Hatridge, A. Narla, S. Shankar, L. Frunzio, R. J. Schoelkopf, and M. H. Devoret, Physical Review X 5, 041020 (2015).
* Ruesink _et al._ (2016) F. Ruesink, M.-A. Miri, A. Alù, and E. Verhagen, Nature Communications 7, 13662 (2016).
* Fang _et al._ (2017) K. Fang, J. Luo, A. Metelmann, M. H. Matheny, F. Marquardt, A. A. Clerk, and O. Painter, Nature Physics 13, 465 (2017).
* Wang _et al._ (2019) Y.-P. Wang, J. W. Rao, Y. Yang, P.-C. Xu, Y. S. Gui, B. M. Yao, J. Q. You, and C.-M. Hu, Physical Review Letters 123, 127202 (2019).
* Xu _et al._ (2019) H. Xu, L. Jiang, A. A. Clerk, and J. G. E. Harris, Nature 568, 65 (2019).
* Chen _et al._ (1991) D.-X. Chen, J. Brug, and R. Goldfarb, IEEE Transactions on Magnetics 27, 3601 (1991).
* Fletcher _et al._ (1960) R. C. Fletcher, R. C. LeCraw, and E. G. Spencer, Physical Review 117, 955 (1960).
* Klingler _et al._ (2017) S. Klingler, H. Maier-Flaig, C. Dubs, O. Surzhenko, R. Gross, H. Huebl, S. T. B. Goennenwein, and M. Weiler, Applied Physics Letters 110, 092409 (2017).
* Marković _et al._ (2018) D. Marković, S. Jezouin, Q. Ficheux, S. Fedortchenko, S. Felicetti, T. Coudreau, P. Milman, Z. Leghtas, and B. Huard, Physical Review Letters 121, 040505 (2018).
* Solt (1962) I. H. Solt, Journal of Applied Physics 33, 1189 (1962).
* Schlömann (1970) E. Schlömann, Journal of Applied Physics 41, 204 (1970).
* Green and Sandy (1974) J. Green and F. Sandy, IEEE Transactions on Microwave Theory and Techniques 22, 641 (1974).
* Casimir (1945) H. B. G. Casimir, Reviews of Modern Physics 17, 343 (1945).
* Ranzani _et al._ (2013) L. Ranzani, L. Spietz, Z. Popovic, and J. Aumentado, Review of Scientific Instruments 84, 034704 (2013).
* Helszajn (2008) J. Helszajn, _The Stripline Circulator: Theory and Practice_ , 1st ed. (Wiley-IEEE Press, Hoboken, NJ, 2008).
* Metelmann and Clerk (2015) A. Metelmann and A. A. Clerk, Physical Review X 5, 021025 (2015).
* McDonald and Clerk (2020) A. McDonald and A. A. Clerk, Nature Communications 11, 5382 (2020).
* Schomerus (2020) H. Schomerus, Physical Review Research 2, 013058 (2020).
* Sounas and Alù (2017) D. L. Sounas and A. Alù, Nature Photonics 11, 774 (2017).
* Ramos _et al._ (2014) T. Ramos, H. Pichler, A. J. Daley, and P. Zoller, Physical Review Letters 113, 237203 (2014).
* Guimond _et al._ (2020) P.-O. Guimond, B. Vermersch, M. L. Juan, A. Sharafiev, G. Kirchmair, and P. Zoller, npj Quantum Information 6, 1 (2020).
* Gheeraert _et al._ (2020) N. Gheeraert, S. Kono, and Y. Nakamura, Physical Review A 102, 053720 (2020).
* Rymarz _et al._ (2021) M. Rymarz, S. Bosco, A. Ciani, and D. P. DiVincenzo, Physical Review X 11, 011032 (2021).
# Direct Nash Optimization:
Teaching Language Models to Self-Improve with General Preferences
Corby Rosset Ching-An Cheng Arindam Mitra Michael Santacroce
Ahmed Awadallah Tengyang Xie
Microsoft Research. Correspondence to <EMAIL_ADDRESS>
###### Abstract
This paper studies post-training large language models (LLMs) using preference
feedback from a powerful oracle to help a model iteratively improve over
itself. The typical approach for post-training LLMs involves Reinforcement
Learning from Human Feedback (RLHF), which traditionally separates reward
learning and subsequent policy optimization. However, such a reward
maximization approach is limited by the nature of “point-wise” rewards (such
as that of the Bradley-Terry model), which fails to express complex
intransitive or cyclic preference relations. While advances on RLHF show
reward learning and policy optimization can be merged into a single
contrastive objective for stability, they still remain tethered to the
reward maximization framework. Recently, a new wave of research sidesteps the
reward maximization presumptions in favor of directly optimizing over “pair-
wise” or general preferences. In this paper, we introduce _Direct Nash
Optimization_ (DNO), a _provable_ and _scalable_ algorithm that marries the
_simplicity_ and _stability_ of contrastive learning with _theoretical
generality_ from optimizing general preferences. Because DNO is a _batched on-
policy_ algorithm using a regression-based objective, its implementation is
straightforward and efficient. Moreover, DNO enjoys _monotonic improvement_
across iterations which helps it improve even over a strong teacher (such as
GPT-4). In our experiments, a resulting 7B parameter Orca-2.5 model aligned by
DNO achieves the state-of-the-art win-rate against GPT-4-Turbo of 33% on
AlpacaEval 2.0 (even after controlling for response length), an absolute gain
of 26% ($7\%\\!\to\\!33\%$) over the initializing model. It outperforms models
with far more parameters, including Mistral Large, Self-Rewarding LM (70B
parameters), and older versions of GPT-4. Our ablation studies analyze
critical design decisions surrounding the choice of preference pairs, and the
use of LLMs-as-preference-annotators. These results underscore the promise of
DNO in the LLMs post-training, as well as offer actionable insights for the AI
research community.
## 1 Introduction
Figure 1: Direct Nash Optimization achieves state-of-the-art results for a 7B
parameter large language model, being the first to surpass 30% in both raw
win-rate and length-controlled (LC) win-rate against GPT-4-Turbo. Win Rate and
LC Win Rate have $0.93$ to $0.98$ correlation with ChatBot Arena scores.
The field of artificial intelligence is evolving towards advanced models that
can understand, reason, follow complex instructions, and create nuanced
content, while aligning with human values and preferences. Large Language
Models (LLMs) (e.g., Brown et al., 2020; Ouyang et al., 2022; Touvron et al.,
2023; OpenAI et al., 2023) have demonstrated remarkable capabilities in
generating human-like text, answering questions, and coding, yet they still
face challenges in tasks that require a high degree of reliability, safety,
and ethical alignment. To address these challenges, fine-tuning LLMs using
Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017;
Bai et al., 2022a; Ouyang et al., 2022) has demonstrated strong potential for
making LLMs more helpful by aligning them with human values.
The RLHF framework has been long studied in the context of preference-based
reinforcement learning (RL) or RL from human preferences (e.g., Knox and
Stone, 2008; Akrour et al., 2012; Griffith et al., 2013; Wirth et al., 2017;
Christiano et al., 2017). The conventional methods for RLHF typically assume
that the preference is determined by a scalar reward function through some
model, such as the frequently used Bradley-Terry (BT) model (Bradley and
Terry, 1952). (We use “reward model” to denote a framework that translates preferences into rewards, e.g., Bradley-Terry, while “reward function” is a (possibly learned) function that outputs reward scalars.) RLHF then optimizes
toward the preference in a two-step procedure: reward learning, and policy
optimization (through RL) to maximize the learned reward. Under certain
conditions, the two-step procedure can be streamlined into a single-step
contrastive learning approach (Rafailov et al., 2023), eliminating the need
for explicit reward learning. Algorithms of this kind (e.g., Rafailov et al.,
2023, DPO) leverage the insight that a policy can be expressed equivalently by
an “internal reward function” that the policy is optimal to, so they reduce
the RLHF problem to regressing the policy’s internal reward function to that
of the preference model. These algorithms are originally offline, and boast
enhanced stability and ease of optimization. Nonetheless, two-step RLHF
algorithms and their single-step contrastive variants still fundamentally rely
on the reward maximization framework, wherein reward-based preferences are
governed by, e.g., the BT model.
The reward maximization framing poses a major limitation. Reward functions,
defined to output a scalar score $r(x,y)$ for a _single_ response $y$ to input
$x$, cannot express general preferences $y\succ y^{\prime}\mid x$ between a
_pair_ of outputs in all cases, e.g., intransitive or cyclic preferences (Elo,
1978). Hence, LLMs trained under reward maximization cannot always align with
human preference. Furthermore, recent works show that even in settings where
preferences can be perfectly expressed under the reward-based BT models,
optimizing towards rewards yields problematic behaviors; we refer the reader
to Bertrand et al. (2023); Azar et al. (2023); Munos et al. (2023) for more
details. Lastly, reward functions in practice can quickly become “stale” as
the distribution of the policy shifts under training (Ross et al., 2011; Cheng
et al., 2023; Azar et al., 2023; Munos et al., 2023) – leaving them vulnerable
to “reward hacking” (Amodei et al., 2016).
In response to these weaknesses, an appealing line of work on RLHF proposes to
directly optimize the _general preference function_ itself, instantiated as
some oracle. These studies re-frame RLHF as finding a Nash equilibrium of a
two-player game with “payoffs” from a regularized (Munos et al., 2023) or un-
regularized (Swamy et al., 2024) general preference function. To solve it,
they further approximate such Nash equilibrium using single-player algorithms
by leveraging the symmetry of the preference function. Then, they instead
define the reward of a response as the expected win-rate against the policy’s
own behavior, as judged by the preference function, e.g.,
$r(x,y)={\mathbb{E}}_{y^{\prime}\sim\pi(\cdot\mid x)}\left[\mathcal{P}(y\succ
y^{\prime}\mid x)\right]$. Hence, rewards are maximized by responses that are
preferred over the policy’s expected response, and a Nash equilibrium is
achieved when both players deploy a $\pi^{\star}$ that is preferred over any
competing policy. However, these proposed single-player algorithms are
primarily (purely) on-policy, and they sometimes require a separately
estimated preference function or a time-varying reward function. How to scale
these algorithms up faithfully is still under-investigated.
We are motivated to overcome two separate challenges: the limited expressivity
of reward-based RLHF, and the lack of clarity on how to scale up optimizing
with respect to general preferences. Recent advances in reward-based
optimization e.g., DPO, already have efficient and scalable implementations –
we seek a similarly efficient solution under the framework of general
preferences.
We propose a provable and scalable RLHF algorithm – Direct Nash Optimization
(DNO) (Algorithm 1) that achieves the best of both worlds, combining the
scalability of contrastive objectives with the theoretical soundness of
general preference optimization. DNO is designed as a _batched on-policy_
algorithm with a regression-based learning objective; this design choice makes
DNO stable and scalable, striking a balance between deployment efficiency and
adaptability.
We summarize at a high level the key ingredients and insights of DNO below.
1. 1.
To address the issue that reward functions cannot express general preferences,
we leverage recent insights that the notion of reward ought to be expressed as _expected_ win-rates with regard to a general preference function (e.g., for a fixed $y\mid x$, the expected win-rate of $y$ against the policy itself is ${\mathbb{E}}_{y^{\prime}\sim\pi(\cdot\mid x)}\left[\mathcal{P}(y\succ y^{\prime}\mid x)\right]$).
2. 2.
To address the issue found in previous work that optimizing this more general
objective with online algorithms is sample-inefficient or unstable, we
decompose the learning procedure into a sequence of “batched on-policy”
iterations, wherein each step instead optimizes a simple regression objective.
3. 3.
The regression objective (we choose binary cross-entropy) aligns the “internal
reward function” of the policy to the expected win-rate compared with itself
(as defined in line 3 of Algorithm 1). By sampling outputs from the current policy
to use for training (i.e., “self-play”), this procedure incentivizes self-
improving behavior.
4. 4.
Our framework is general enough to admit off-policy samples into training,
importantly, those from a more powerful teacher (See choice of $\mu_{1}$ and
$\mu_{2}$ in Algorithm 1).
5. 5.
Furthermore, to ensure stability and computational efficiency, we propose a
filtering scheme such that the reward regression is only performed on
preference pairs with a sufficiently large margin (for theoretical
explanation, see Section 4; in practice, see Section 5.2).
6. 6.
DNO repeats this procedure for multiple iterations to let the policy optimize
toward the general preference. Since each step involves a regression problem
it can be easily implemented at scale.
Theoretically, we prove DNO converges to the intended Nash equilibrium on
average, and that it can improve monotonically across iterations (see Section
3.1). Furthermore, our finite-sample analysis shows that approximation error
at any iteration between the learned policy and the target is tightly bounded
(Theorem 1).
On the practical side, we provide a scalable implementation of DNO (Algorithm
2): an iterative self-improving algorithm with contrastive updates, which
approximates Algorithm 1 under several critical design choices. Those choices
include: sampling multiple online outputs from the policy being trained, using
GPT-4 as the preference oracle, comparing on-policy samples to GPT-4’s own
(teacher) outputs, and training only on pairs with “large margin” (for
theoretical explanation, see Section 4; in practice, see Section 5.2).
The primary distinction of our work from the related works Nash-MD (Munos et al., 2023) and SPO (Swamy et al., 2024) is that both exhibit sample-efficiency issues (two-timescale updates or sample-inefficient RL steps), and
both use purely on-policy samples. We resolve the efficiency issue with a
sample-efficient objective that works in practice, and DNO is more flexible to
incorporate off-policy samples from e.g., a powerful teacher.
Most importantly, DNO works in practice – we provide comprehensive empirical
evaluations, resulting in state-of-the-art performance:
* •
The resulting 7B parameter Orca-2.5 model, aligned using the practical
implementation of DNO (Algorithm 2), achieves the state-of-the-art win-rate of
any 7B model, exceeding $33\%$ against GPT-4-Turbo on AlpacaEval 2.0, even after controlling for length. This is an over $26\%$ absolute gain
($7\%\\!\to\\!33\%$) compared to the initialized model. It outperforms several
recent advanced closed-source models, including Mistral Large and GPT-4-0613,
as well as open-source models with far more ($10\times$) parameters, such as
Self-Rewarding LM (Yuan et al., 2024) which has 70B parameters.
* •
Our thorough ablation studies in Section 5.2 examine critical design
touchpoints surrounding choice of loss function (supervised finetuning or
contrastive), training paradigm (with or without on-policy samples),
preference annotator quality (large margin or not), and training pair
construction (self-play, teacher-vs-student, etc). Our findings highlight that
carefully-crafted methods encoded in Algorithm 2 lead to substantial gains.
* •
We show some examples of outputs across iterations which demonstrate qualitative improvements, such as better addressing nuanced issues and presumptuous questions (Example 1), better organization and clarity while refraining from making misleading statements (Example 2), and higher information density in answers (Example 3).
We hope that the results presented herein will provide clarity to the
community regarding the use of AI feedback for post-training LLMs.
## 2 Preliminaries
This section provides an overview of the RL from human feedback (RLHF)
pipeline. We do not differentiate between RLHF and RLAIF (e.g., Bai et al.,
2022b; Lee et al., 2023), as the distinction is outside our scope of
discussion. Thus, we will uniformly refer to both concepts as RLHF. However,
we want to make a clear delineation between two subtle differences: RLHF
maximizing point-wise reward functions, and RLHF optimizing general
preferences. It should be noted that this discussion is more broadly
applicable in scope to general contextual bandits setup as well.
Throughout this paper, we use $x\in\mathcal{X}$ to denote the input (i.e. the
prompt) received by the LLM from a space $\mathcal{X}$. In this paper, we do
not consider the distribution shift over the prompts, following the standard
contextual bandits setup of RLHF (e.g., Ouyang et al., 2022; Rafailov et al.,
2023), and we use $\rho$ to denote the distribution of the prompts. We use
$y\in\mathcal{Y}$ to denote the response from the LLM given the prompt $x$
(this corresponds to action in the contextual bandits setup). We also use
$\pi:\mathcal{X}\to\Delta(\mathcal{Y})$ to denote the policy, which is an LLM
here, and $\Pi$ is the policy class.
Our discussion throughout this paper will also regularly involve the following
three learning paradigms, which are originally introduced and commonly used in
the RL literature:
1. (1)
_Offline_ : The learning algorithm operates without any active data
collection, e.g., sampling from the current policy. The algorithm relies
solely on an offline dataset for training.
2. (2)
_Purely on-policy_ : technically, online on-policy. In this setup, learning
takes place by sampling outputs from the latest policy and immediately
updating it based on the newly collected data. No data reuse or additional
offline data is considered.
3. (3)
_Batched on-policy_ : This setup is the middle ground between the offline and purely on-policy setups, striking a balance between deployment efficiency and adaptability. It involves iterative online data collection and can use other offline data. Its distinctive feature is that the data collection in each iteration occurs in a batched fashion (e.g., akin to a dataset scale, much larger than the size of a typical mini-batch), and the amount of policy change can be more significant (e.g., running gradient steps over multiple epochs of a dataset, as opposed to tens of updates). (We acknowledge an abuse of terminology: our algorithm is not entirely online, as it only contains batched data collection. It is also not strictly on-policy because it uses examples from other policies, like a teacher. While “offline” or “off-policy” may be technically more relevant, they might lead to misunderstanding among readers and detract from the emphasis we want to place on collecting samples from the current policy, which constitute the majority of our training data.)
### 2.1 RLHF Based on Reward Models
One typical approach to conducting RLHF is a two-step procedure through a
reward function (Christiano et al., 2017). Suppose a preference dataset
$\mathcal{D}_{\mathsf{pref}}\coloneqq\\{(x,y^{+},y^{-})\\}$ is given, where
$(y^{+},y^{-})\sim\pi_{\mathsf{ref}}(\cdot\mid x)$, $\pi_{\mathsf{ref}}$ is
some reference policy such as the policy obtained after supervised fine-tuning
(SFT), and a preference $y^{+}\succ y^{-}\mid x$ is labeled by some human or
AI annotator. In RLHF with reward functions, the preference is assumed to be
generated based on some latent reward $r^{\star}$. The first step is to learn
a reward function $r\in\mathcal{R}$ under some reward model assumption, where
$\mathcal{R}$ is the reward class. A number of reward model assumptions have
been studied, and the Bradley-Terry (BT) model (Bradley and Terry, 1952) is
the most commonly used one. The BT model assumes the probability of
$y^{+}\succ y^{-}\mid x$ satisfies
$\displaystyle\mathcal{P}(y^{+}\succ y^{-}\mid
x)\coloneqq\frac{\exp(r^{\star}(x,y^{+}))}{\exp(r^{\star}(x,y^{+}))+\exp(r^{\star}(x,y^{-}))}.$
This leads to the maximum-likelihood reward learning objective:
$\displaystyle\widehat{r}\leftarrow\mathop{\mathrm{argmax}}_{r\in\mathcal{R}}{\mathbb{E}}_{(x,y^{+},y^{-})\sim\mathcal{D}}\left[\log\sigma(r(x,y^{+})-r(x,y^{-}))\right],$
(1)
where $\sigma(\cdot)\coloneqq\frac{\exp(\cdot)}{1+\exp(\cdot)}$ is the sigmoid
function. After that, the LLM is finetuned using the learned ${\widehat{r}}$
with RL,
$\displaystyle{\widehat{\pi}}\leftarrow\mathop{\mathrm{argmax}}_{\pi\in\Pi}\mathbb{E}_{x\sim\mathcal{D}_{\mathsf{pref}},y\sim\pi(\cdot\mid
x)}\left[{\widehat{r}}(x,y)-\beta\log\frac{\pi(x,y)}{\pi_{\mathsf{ref}}(x,y)}\right],$
(2)
where the KL penalty term,
$\mathbb{E}_{x\sim\mathcal{D}_{\mathsf{pref}},y\sim\pi(\cdot\mid
x)}\left[\beta\log\frac{\pi(x,y)}{\pi_{\mathsf{ref}}(x,y)}\right]$, is used to
mitigate overoptimization of the reward model (Ouyang et al., 2022),
controlled by the coefficient $\beta$. For the purposes of our discussion, we will refer to the objective above as the “PPO objective”, and to this two-step learning procedure as “PPO”.
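For concreteness, the maximum-likelihood objective of Eq. 1 corresponds to the following loss (a minimal Python/NumPy sketch, assuming reward-model scores for each pair in a batch have already been computed):

```python
import numpy as np

def bt_reward_loss(r_pos, r_neg):
    """Negative log-likelihood of Eq. 1 under the Bradley-Terry model.
    r_pos / r_neg: arrays of scores r(x, y+) and r(x, y-) over a batch.
    Uses -log(sigma(z)) = log(1 + exp(-z)) for numerical stability."""
    return np.mean(np.logaddexp(0.0, -(r_pos - r_neg)))
```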
#### DPO
Direct Preference Optimization (DPO) is proposed by Rafailov et al. (2023) as
an alternative RLHF approach for combining the two-step procedure of PPO into
a single objective. It utilizes the closed form solution ${\widehat{\pi}}$ in
Eq. 2, so that solving ${\widehat{\pi}}$ directly from Eq. 1 becomes possible
via
$\displaystyle{\widehat{\pi}}\leftarrow\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y^{+},y^{-})\sim\mathcal{D}_{\mathsf{pref}}}\log\left[\sigma\left(\beta\log\frac{\pi(y^{+}\mid
x)}{\pi_{\mathsf{ref}}(y^{+}\mid x)}-\beta\log\frac{\pi(y^{-}\mid
x)}{\pi_{\mathsf{ref}}(y^{-}\mid x)}\right)\right].$
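In code, the DPO objective is a logistic loss on the difference of policy-versus-reference log-ratios (a sketch, assuming summed sequence log-probabilities are precomputed for each response):

```python
import numpy as np

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    """Negative DPO objective over a batch of (x, y+, y-) triples.
    Each argument is an array of summed log-probabilities log pi(y | x)."""
    z = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
    return np.mean(np.logaddexp(0.0, -z))   # mean of -log(sigma(z))
```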
### 2.2 RLHF with General Preferences
We now introduce the setup for directly optimizing a general preference
function, as well as provide an overview of existing solutions to achieve this
goal (mostly by leveraging the symmetry of the preferences), especially those
proposed by Munos et al. (2023); Swamy et al. (2024).
Here we assume that the learner is given query access to a general preference
function $\mathcal{P}(y\succ y^{\prime}\mid x)\in[0,1]$, for any
$(x,y,y^{\prime})\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Y}$. This
function indicates the probability that action $y$ is preferred over
$y^{\prime}$ given the context $x$. In practice, this setup can be viewed as
the theoretical mode of RLAIF (e.g., Bai et al., 2022b; Yuan et al., 2024),
human-in-the-loop RLHF (e.g., Ouyang et al., 2022), or distillation fine-
tuning (e.g., Tunstall et al., 2023).
One common difficulty in optimizing a general preference function is its
_intransitivity_ , e.g., it is possible that $\mathcal{P}(a\succ
b)=\mathcal{P}(b\succ c)=\mathcal{P}(c\succ a)=1$, for some options $(a,b,c)$
(details see, e.g., Bertrand et al., 2023; Munos et al., 2023; Swamy et al.,
2024). Therefore, the learning goal of optimizing general preferences can be
the Nash equilibrium of the two-player zero-sum game with the payoffs as the
general preference function $\mathcal{P}$. The formal definition of such Nash
equilibrium is defined by the _Minimax Winner_ , $\mathsf{MW}$ (see, e.g.,
Kreweras, 1965; Simpson, 1969; Kramer, 1973; Fishburn, 1984), or the _von
Neumann Winner_ (see, e.g., Dudík et al., 2015),
$\displaystyle\mathsf{MW}(\mathcal{P})\coloneqq\mathop{\mathrm{argmax}}_{\pi\in\Pi}\mathop{\mathrm{argmin}}_{\pi^{\prime}\in\Pi}\mathcal{P}(\pi\succ\pi^{\prime})=\left(\mathop{\mathrm{argmax}}_{\pi\in\Pi}\min_{\pi^{\prime}\in\Pi}\mathcal{P}(\pi\succ\pi^{\prime}),\mathop{\mathrm{argmin}}_{\pi^{\prime}\in\Pi}\max_{\pi\in\Pi}\mathcal{P}(\pi\succ\pi^{\prime})\right),$
(3)
where
$\displaystyle\mathcal{P}(\pi\succ\pi^{\prime})\coloneqq{\mathbb{E}}_{x\sim\rho,y\sim\pi(\cdot\mid
x),y^{\prime}\sim\pi^{\prime}(\cdot\mid x)}\left[\mathcal{P}(y\succ
y^{\prime}\mid x)\right].$
#### SPO
To approximate the Nash equilibrium as defined in Eq. 3, Swamy et al. (2024)
proposed a single-player algorithm, SPO. This algorithm applies results from
no-regret algorithms (e.g., Freund and Schapire, 1997). The SPO algorithm is
executed essentially using the following two-step iterative process: for each
$t=1,2,\dotsc,T$,
(i)
$\displaystyle~{}r_{t}(x,y)\leftarrow{\mathbb{E}}_{y^{\prime}\sim\pi_{t}(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid
x)\right],~{}~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$ (ii)
$\displaystyle~{}\pi_{t+1}(\cdot\mid
x)\leftarrow\frac{1}{Z_{t}(x)}\pi_{t}(\cdot\mid
x)\exp\left(\frac{r_{t}(x,\cdot)}{\eta}\right),~{}~{}\forall x\in\mathcal{X},$
(4)
where $\eta$ is the learning rate, $\pi_{1}$ is the uniform policy, i.e.,
$\pi_{1}(\cdot\mid x)\leftarrow{\sf unif}(\mathcal{Y}),~{}\forall
x\in\mathcal{X}$, and $Z_{t}(x)\coloneqq\sum_{y\in\mathcal{Y}}\pi_{t}(y\mid
x)\exp\left(\frac{r_{t}(x,y)}{\eta}\right)$ is the partition function for
iteration $t$.
Using the no-regret update of soft policy iteration, as shown in Eq. 4, Swamy
et al. (2024) proved that the uniform mixture of $\pi_{1:T}$ from SPO is an
approximation of the Nash equilibrium of $\mathsf{MW}(\mathcal{P})$, as
defined in Eq. 3.
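A tabular sketch makes the dynamics concrete (the preference matrix below encodes an intransitive rock-paper-scissors game, which no Bradley-Terry reward can express; the step size and horizon are illustrative):

```python
import numpy as np

# P[i, j] = P(action i beats action j): an intransitive preference
P = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])
eta, T = 10.0, 2000
pi = np.array([0.6, 0.3, 0.1])       # deliberately non-uniform start
avg = np.zeros(3)
for t in range(T):
    r = P @ pi                       # r_t(y) = E_{y'~pi_t}[P(y > y')]
    pi = pi * np.exp(r / eta)        # soft policy iteration step of Eq. 4
    pi = pi / pi.sum()
    avg += pi / T
print(avg)   # -> approximately [1/3, 1/3, 1/3], the minimax winner
```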
#### Nash-MD
Munos et al. (2023) proposed Nash-MD to approximate the Nash equilibrium of a
KL-regularized preference function,
$\displaystyle\mathcal{P}^{\pi,\pi^{\prime}}_{\tau}(y\succ y^{\prime}\mid x)\coloneqq$ $\displaystyle~{}\mathcal{P}(y\succ y^{\prime}\mid x)-\tau\log\frac{\pi(y\mid x)}{\pi_{\mathsf{ref}}(y\mid x)}+\tau\log\frac{\pi^{\prime}(y^{\prime}\mid x)}{\pi_{\mathsf{ref}}(y^{\prime}\mid x)},$ (5)
$\displaystyle\mathcal{P}_{\tau}(\pi\succ\pi^{\prime})\coloneqq$
$\displaystyle~{}{\mathbb{E}}_{x\sim\rho,y\sim\pi(\cdot\mid
x),y^{\prime}\sim\pi^{\prime}(\cdot\mid
x)}\left[\mathcal{P}^{\pi,\pi^{\prime}}_{\tau}(y\succ y^{\prime}\mid
x)\right]$ (6) $\displaystyle=$
$\displaystyle~{}\mathcal{P}(\pi\succ\pi^{\prime})-\tau{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi(\cdot\mid
x)~{}\|~{}\pi_{\mathsf{ref}}(\cdot\mid
x))\right]+\tau{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi^{\prime}(\cdot\mid
x)~{}\|~{}\pi_{\mathsf{ref}}(\cdot\mid x))\right].$
Following this, Munos et al. (2023) demonstrate that the Nash Equilibrium of
$\mathsf{MW}(\mathcal{P}_{\tau})$ can be approximated using a mirror descent
(Nemirovskij and Yudin, 1983; Bubeck, 2015; Lattimore and Szepesvári, 2020)
inspired algorithm, Nash-MD, which has a last-iteration guarantee. The Nash-MD
algorithm can be viewed as a two-step iterative process: for each
$t=1,2,\dotsc,T$,
(i)
$\displaystyle~{}r_{t}(x,y)\leftarrow{\mathbb{E}}_{y^{\prime}\sim\pi^{\tau}_{t}(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid
x)\right],~{}~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$ (ii)
$\displaystyle~{}\pi_{t+1}(\cdot\mid
x)\leftarrow\frac{1}{Z_{t}(x)}\pi_{t}^{\tau}(\cdot\mid
x)\exp\left(\frac{r_{t}(x,\cdot)}{\eta}\right),~{}~{}\forall x\in\mathcal{X},$
where $\eta$ is the learning rate, $\pi_{t}^{\tau}$ is the geometric mixture
between $\pi_{t}$ and $\pi_{\mathsf{ref}}$,
$\displaystyle\pi_{t}^{\tau}(y\mid x)\coloneqq\frac{\pi_{t}(y\mid x)^{1-\nicefrac{{\tau}}{{\eta}}}\pi_{\mathsf{ref}}(y\mid x)^{\nicefrac{{\tau}}{{\eta}}}}{\sum_{y^{\prime}\in\mathcal{Y}}\pi_{t}(y^{\prime}\mid x)^{1-\nicefrac{{\tau}}{{\eta}}}\pi_{\mathsf{ref}}(y^{\prime}\mid x)^{\nicefrac{{\tau}}{{\eta}}}},~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y},$
(7)
and $Z_{t}(x)\coloneqq\sum_{y\in\mathcal{Y}}\pi_{t}^{\tau}(y\mid
x)\exp\left(\frac{r_{t}(x,y)}{\eta}\right)$ is the partition function for
iteration $t$.
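The geometric mixture of Eq. 7 is straightforward to compute over a finite action set (a sketch):

```python
import numpy as np

def geometric_mixture(pi_t, pi_ref, tau_over_eta):
    """pi_t^tau of Eq. 7: an elementwise geometric interpolation between the
    current policy and the reference policy, renormalized."""
    m = pi_t**(1.0 - tau_over_eta) * pi_ref**tau_over_eta
    return m / m.sum()

print(geometric_mixture(np.array([0.7, 0.2, 0.1]), np.full(3, 1/3), 0.5))
```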
## 3 Direct Nash Optimization
While the no-regret update of soft policy iteration used in SPO and Nash-MD
has inspired many standard (deep) reinforcement learning algorithms (e.g.,
NPG, Kakade, 2001; TRPO, Schulman et al., 2015; PPO, Schulman et al., 2017; SAC, Haarnoja et al., 2018), its faithful implementation still usually
involves the two-timescale update. This could potentially lead to complex
hyperparameter tuning and unstable performance. In this section, we propose a
direct and iterative algorithm, Direct Nash Optimization (Algorithm 1), to
approximate the Nash equilibrium of $\mathsf{MW}(\mathcal{P})$. This algorithm
is primarily inspired by SPO. It can be readily adapted to Nash-MD for
approximating the Nash equilibrium of $\mathsf{MW}(\mathcal{P}_{\tau})$ with
the last-iteration guarantee, and we will discuss this in Appendix A.
Algorithm 1 Direct Nash Optimization (DNO)
input: General preference function $\mathcal{P}$, learning rate $\eta$, number
of iterations $T$, prompt distribution $\rho$.
1:Initialize $\pi_{1}\leftarrow{\sf unif}(\mathcal{Y})$.
2:for iteration $t=1,2,\dotsc,T$ do
3: Compute $r_{t}(x,y)\leftarrow{\mathbb{E}}_{y^{\prime}\sim\pi_{t}(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid x)\right]$,
$\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$.
4: Obtain $\pi_{t+1}$ by,
$\begin{gathered}\pi_{t+1}\leftarrow\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\bigg{\\{}\sigma\left(r_{t}(x,y_{1})-r_{t}(x,y_{2})\right)\log\left[\sigma\left(\eta\log\frac{\pi(y_{1}\mid
x)}{\pi_{t}(y_{1}\mid x)}-\eta\log\frac{\pi(y_{2}\mid x)}{\pi_{t}(y_{2}\mid
x)}\right)\right]\\\ \hskip
120.0pt+\sigma\left(r_{t}(x,y_{2})-r_{t}(x,y_{1})\right)\log\left[\sigma\left(\eta\log\frac{\pi(y_{2}\mid
x)}{\pi_{t}(y_{2}\mid x)}-\eta\log\frac{\pi(y_{1}\mid x)}{\pi_{t}(y_{1}\mid
x)}\right)\right]\bigg{\\}},\end{gathered}$ (8) where $\mathcal{D}_{t}$ is
generated by $x\sim\rho,y_{1}\sim\mu_{1,t}(\cdot\mid
x),y_{2}\sim\mu_{2,t}(\cdot\mid x)$; $\mu_{1,t}$ and $\mu_{2,t}$ can be either
off-policy (e.g., pre-defined) or on-policy (based on $\pi_{t}$).
5:end for
6:return $\bar{\pi}={\sf unif}(\pi_{1:T})$.
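A sketch of the two computational pieces of each iteration follows; `pref_oracle` and `pi_t_sampler` are hypothetical callables standing in for the preference function $\mathcal{P}$ and for sampling from $\pi_{t}$, and the log-probabilities are assumed to be summed over tokens:

```python
import numpy as np

def winrate_reward(x, y, pi_t_sampler, pref_oracle, n_samples=8):
    """Monte-Carlo estimate of r_t(x, y) = E_{y'~pi_t}[P(y > y' | x)]
    (line 3 of Algorithm 1)."""
    return np.mean([pref_oracle(x, y, pi_t_sampler(x)) for _ in range(n_samples)])

def dno_regression_loss(r1, r2, logp1, logp2, logp1_t, logp2_t, eta=0.5):
    """Negative objective of Eq. 8 over a batch: a soft binary cross-entropy
    between the policy's implicit reward gap and the preference-based rewards.
    logp* are log-probs of y1, y2 under pi (being optimized) and under pi_t."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = eta * ((logp1 - logp1_t) - (logp2 - logp2_t))
    log_sig_pos = -np.logaddexp(0.0, -z)   # log sigma(z)
    log_sig_neg = -np.logaddexp(0.0, z)    # log sigma(-z)
    return -np.mean(sigmoid(r1 - r2) * log_sig_pos
                    + sigmoid(r2 - r1) * log_sig_neg)
```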
### 3.1 Derivation of Algorithm 1
Most practical algorithms inspired by soft policy iteration, including the original practical version of SPO, adopt the following approach: “pushing” $\pi$ towards the subsequent learning goal in each iteration (we will refer to it as the soft policy iteration target throughout the paper):
$$\pi^{\star}_{t+1}(\cdot\mid x)\coloneqq\frac{1}{Z_{t}(x)}\pi_{t}(\cdot\mid x)\exp\left(\frac{r_{t}(x,\cdot)}{\eta}\right),$$ (9)
where $Z_{t}(x)=\sum_{y\in\mathcal{Y}}\pi_{t}(y\mid x)\exp\left(\frac{r_{t}(x,y)}{\eta}\right)$ is the partition function. It can be realized by minimizing a distance metric between $\pi$ and $\pi^{\star}_{t+1}$. For example, the PPO algorithm for RLHF (e.g., Christiano et al., 2017; Ouyang et al., 2022) essentially minimizes the reverse KL divergence as follows,
$$(\pi_{t+1}^{\text{PPO}}\leftarrow)~\mathop{\mathrm{argmin}}_{\pi\in\Pi}{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi(\cdot\mid x)~\|~\pi^{\star}_{t+1}(\cdot\mid x))\right]$$
$$=\mathop{\mathrm{argmax}}_{\pi\in\Pi}\mathbb{E}_{x\sim\rho,y\sim\pi(\cdot\mid x)}\left[\eta\log\frac{\pi^{\star}_{t+1}(y\mid x)}{\pi_{t}(y\mid x)}-\eta\log\frac{\pi(y\mid x)}{\pi_{t}(y\mid x)}\right]$$
$$=\mathop{\mathrm{argmax}}_{\pi\in\Pi}\mathbb{E}_{x\sim\rho,y\sim\pi(\cdot\mid x)}\left[r_{t}(x,y)-\eta\log Z_{t}(x)-\eta\log\frac{\pi(y\mid x)}{\pi_{t}(y\mid x)}\right]$$
$$=\mathop{\mathrm{argmax}}_{\pi\in\Pi}\mathbb{E}_{x\sim\rho,y\sim\pi(\cdot\mid x)}\left[r_{t}(x,y)-\eta\log\frac{\pi(y\mid x)}{\pi_{t}(y\mid x)}\right].\quad(\Leftrightarrow\text{ PPO objective, as }Z_{t}\text{ is independent of }\pi)$$
However, implementing the above approach typically necessitates _on-policy_ sampling from the current policy $\pi$. Ignoring the $Z_{t}(x)$ term can also lead to high variance in the empirical gradient estimation. This is a persistent issue in actor-critic style algorithms, which usually calls for an additional baseline (see, e.g., Mnih et al., 2016, for details) that also requires on-policy estimation. When $r_{t}$ also varies over iterations, as in SPO or Nash-MD, we then need to update the policy, baseline, and reward online simultaneously. These challenges have hindered the scalability of existing algorithms based on learning the Nash equilibrium of general preference functions.
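To illustrate the on-policy requirement, consider the following schematic sketch (ours; `policy`, `policy_t`, and `reward_fn` are hypothetical interfaces, and this is not a full PPO implementation):

```python
import torch

def pushing_objective_value(policy, policy_t, reward_fn, x, eta, n_samples=8):
    # The outer expectation is over y ~ pi, so every estimate requires
    # fresh generations from the *current* policy.
    ys = [policy.sample(x) for _ in range(n_samples)]
    logp = torch.stack([policy.log_prob(x, y) for y in ys])
    logp_t = torch.stack([policy_t.log_prob(x, y) for y in ys])
    r = torch.tensor([reward_fn(x, y) for y in ys])
    # Monte Carlo value of E_{y~pi}[ r_t(x,y) - eta * log(pi/pi_t) ].
    # Note: a correct policy-gradient estimator of this objective also needs
    # a score-function term and a baseline, which is where the high variance
    # discussed above comes from.
    return (r - eta * (logp - logp_t)).mean()
```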
#### Regressing “internal rewards” towards preference-based rewards
Different from the approaches above, which mostly focus on “pushing” $\pi\to\pi^{\star}_{t+1}$, we now consider the following mechanism: regressing $r_{\pi,t}\to r_{t}$, where $r_{\pi,t}$ is the internal reward function of a given $\pi$ at iteration $t$:
$$r_{\pi,t}(x,y)\coloneqq\eta\log\frac{\pi(y\mid x)}{\pi_{t}(y\mid x)}+\eta\log Z_{t}(x).$$ (10)
This can be interpreted as a reparameterization trick, where $\pi$ is exactly the soft policy iteration target (Eq. 9) induced by $\pi_{t}$ and the defined $r_{\pi,t}$. Therefore, regressing this specifically parameterized $r_{\pi,t}$ to $r_{t}$ allows us to directly optimize the soft policy iteration target with respect to $r_{\pi,t}$ and $\pi_{t}$. This idea is inspired by techniques from inverse RL (e.g., Guided Cost Learning, Finn et al., 2016a,b) as well as recent advances in RLHF (DPO, Rafailov et al., 2023). To avoid the issues arising from the partition function $Z_{t}(x)$, we consider learning from the $(x,y_{1},y_{2})$ tuple, where $y_{1}$ and $y_{2}$
are both responses to the textual input $x$. Note that, due to the offline nature of the regressive objective, the sampling distribution of $y_{1}$ and $y_{2}$ does not affect the learning objective (i.e., $r_{\pi,t}\to r_{t}$), although it may affect the sample complexity for coverage reasons, as we will discuss later; in contrast, pushing $\pi\to\pi_{t+1}^{\star}$ requires sampling $y$ on-policy, as previously discussed. Therefore, given an arbitrary $(x,y_{1},y_{2})$ tuple, we regress the “prediction” ${\widehat{z}}$ to the “goal” $z$ (both defined below), using the binary logarithmic/cross-entropy loss to measure the prediction error (see, e.g., Foster and Krishnamurthy, 2021),
$${\widehat{z}}\coloneqq\sigma\left(r_{\pi,t}(x,y_{1})-r_{\pi,t}(x,y_{2})\right)=\sigma\bigg(\underbrace{\eta\log\frac{\pi(y_{1}\mid x)}{\pi_{t}(y_{1}\mid x)}-\eta\log\frac{\pi(y_{2}\mid x)}{\pi_{t}(y_{2}\mid x)}}_{\eqqcolon\Delta_{\pi,t}(x,y_{1},y_{2})}\bigg),\quad z\coloneqq\sigma\big(\underbrace{r_{t}(x,y_{1})-r_{t}(x,y_{2})}_{\eqqcolon\Delta_{t}^{\star}(x,y_{1},y_{2})}\big);$$ (11)
$$\ell_{\pi,t}(x,y_{1},y_{2})\coloneqq z\log(1/{\widehat{z}})+(1-z)\log(1/(1-{\widehat{z}}))=-\sigma\big(\Delta_{t}^{\star}(x,y_{1},y_{2})\big)\log\left[\sigma\big(\Delta_{\pi,t}(x,y_{1},y_{2})\big)\right]-\sigma\big(\Delta_{t}^{\star}(x,y_{2},y_{1})\big)\log\left[\sigma\big(\Delta_{\pi,t}(x,y_{2},y_{1})\big)\right].$$
Therefore, we obtain the following objective to learn $\pi_{t+1}$,
$$\mathop{\mathrm{argmin}}_{\pi\in\Pi}\mathcal{L}_{\mathcal{D}_{t}}(\pi;\pi_{t})\coloneqq\mathop{\mathrm{argmin}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\left[\ell_{\pi,t}(x,y_{1},y_{2})\right]$$
$$=\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\Big[\sigma\big(\Delta_{t}^{\star}(x,y_{1},y_{2})\big)\log\left[\sigma\big(\Delta_{\pi,t}(x,y_{1},y_{2})\big)\right]+\sigma\big(\Delta_{t}^{\star}(x,y_{2},y_{1})\big)\log\left[\sigma\big(\Delta_{\pi,t}(x,y_{2},y_{1})\big)\right]\Big]$$
$$=\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\Bigg[\sigma\left(r_{t}(x,y_{1})-r_{t}(x,y_{2})\right)\log\left[\sigma\left(\eta\log\frac{\pi(y_{1}\mid x)}{\pi_{t}(y_{1}\mid x)}-\eta\log\frac{\pi(y_{2}\mid x)}{\pi_{t}(y_{2}\mid x)}\right)\right]+\sigma\left(r_{t}(x,y_{2})-r_{t}(x,y_{1})\right)\log\left[\sigma\left(\eta\log\frac{\pi(y_{2}\mid x)}{\pi_{t}(y_{2}\mid x)}-\eta\log\frac{\pi(y_{1}\mid x)}{\pi_{t}(y_{1}\mid x)}\right)\right]\Bigg].$$ (12)
Here, $\mathcal{D}_{t}$ is generated by
$x\sim\rho,y_{1}\sim\mu_{1,t}(\cdot\mid x),y_{2}\sim\mu_{2,t}(\cdot\mid x)$
with some policies $\mu_{1,t}$ and $\mu_{2,t}$. Note that $\mu_{1,t}$ and $\mu_{2,t}$ for each $t\in[T]$ are part of our algorithm's design decisions. We will provide choices for them in Section 3.2 to promote sample efficiency, informed by our finite-sample analysis.
#### Monotonic improvement from the _batched on-policy_ updates
One key distinction between DNO and existing algorithms for learning the Nash equilibrium (such as SPO and Nash-MD) is that those algorithms aim to approach the Nash equilibrium in a purely on-policy manner, which can be unstable and may need to incorporate two-timescale updates (which change the reward function used in the inner problem more frequently). DNO, on the other hand, is a batched on-policy algorithm with single-timescale updates.
From a purely theoretical perspective, DNO may require many iterations to ensure the convergence of $\bar{\pi}$ to the Nash equilibrium, which could be costly. Additionally, DNO only converges on average, and deploying the uniform mixture policy $\bar{\pi}$ is unrealistic in practice (note that, as inspired by Munos et al. (2023), DNO could be extended to regularized preferences with last-iteration convergence, as discussed in Appendix A). However, from a practical perspective, we can leverage the following two desirable properties of the LLM scenario to eliminate these concerns and ensure _monotonic improvement_ over the DNO iterations:
Firstly, the soft policy iteration target (Eq. 9) is the analytical maximizer of the objective $\ell_{t}(\pi)\coloneqq\mathcal{P}(\pi\succ\pi_{t})-\eta{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi(\cdot\mid x)~\|~\pi_{t}(\cdot\mid x))\right]$, i.e., $\pi_{t+1}^{\star}=\mathop{\mathrm{argmax}}_{\pi}\ell_{t}(\pi)$. Observe that $\ell_{t}(\pi_{t})=0.5$ (since $\mathcal{P}(\pi_{t}\succ\pi_{t})=0.5$ and the KL term vanishes) and ${\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi(\cdot\mid x)~\|~\pi_{t}(\cdot\mid x))\right]\geq 0$. It follows that
$$0.5\leq\ell_{t}(\pi_{t+1}^{\star})=\mathcal{P}(\pi_{t+1}^{\star}\succ\pi_{t})-\eta{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi_{t+1}^{\star}(\cdot\mid x)~\|~\pi_{t}(\cdot\mid x))\right]\Longrightarrow\mathcal{P}(\pi_{t+1}^{\star}\succ\pi_{t})\geq 0.5+\eta{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi_{t+1}^{\star}(\cdot\mid x)~\|~\pi_{t}(\cdot\mid x))\right].$$
This means $\pi_{t+1}^{\star}$ is guaranteed to be preferred over $\pi_{t}$ with respect to the preference $\mathcal{P}$, with a computable lower bound on the amount of improvement, $\eta{\mathbb{E}}_{x\sim\rho}\left[D_{\mathrm{KL}}(\pi_{t+1}^{\star}(\cdot\mid x)~\|~\pi_{t}(\cdot\mid x))\right]$. Therefore, if the $\pi_{t+1}$ learned from line 4 of Algorithm 1 is an accurate enough approximation of $\pi_{t+1}^{\star}$ (as shown in Section 3.2), we can expect the policy to improve monotonically over DNO iterations. Note that this monotonic improvement guarantee is _exclusive_ to our design choice of _batched on-policy_ updates in DNO, because the alternatives are either undefined or unstable: it is unclear how to perform iterative updates offline, and one gradient update from a purely online algorithm may not accurately approximate the soft policy iteration target $\pi_{t+1}^{\star}$. Secondly, in practice, we usually have validation data available, which allows us to deploy the best policy over $\pi_{1:(T+1)}$.
### 3.2 Theoretical Analysis
One of our major proposals is to use a regression-based objective to approximate the explicit soft policy iteration; in this section, we show via finite-sample analysis that the approximation error of this regression is tightly bounded. The following theorem characterizes how well the solution of the regression-based objective (defined in Eq. 12 or line 4 of Algorithm 1) can approximate the soft policy iteration (Eq. 9) in terms of the total variation metric at each iteration.
###### Theorem 1 (informal).
Fix an arbitrary iteration $t\in[T]$. Suppose $\pi_{t+1}$ is obtained from line 4 of Algorithm 1, and $\pi_{t+1}^{\star}$ is defined in Eq. 9. Then, under mild assumptions (realizability and boundedness, formally introduced in Appendix B), we have
$${\mathbb{E}}_{x\sim\rho}\left[\left(D_{\mathrm{TV}}(\pi_{t+1}(\cdot\mid x),\pi_{t+1}^{\star}(\cdot\mid x))\right)^{2}\right]\leq\mathcal{O}\left(\frac{\mathfrak{C}_{t}R_{\max}^{2}\log(\nicefrac{{|\Pi|}}{{\delta}})}{N}\right),$$
where the concentrability coefficient $\mathfrak{C}_{t}$ is defined as
$$\mathfrak{C}_{t}\coloneqq\frac{{\mathbb{E}}_{x\sim\rho,\,y_{1}\sim\pi_{t+1}^{\star}(\cdot\mid x),\,y_{2}\sim\pi_{t+1}(\cdot\mid x)}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid x)}{\pi_{t+1}(y_{1}\mid x)}-\log\frac{\pi_{t+1}^{\star}(y_{2}\mid x)}{\pi_{t+1}(y_{2}\mid x)}\right)^{2}\right]}{{\mathbb{E}}_{x\sim\rho,\,y_{1}\sim\mu_{1,t}(\cdot\mid x),\,y_{2}\sim\mu_{2,t}(\cdot\mid x)}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid x)}{\pi_{t+1}(y_{1}\mid x)}-\log\frac{\pi_{t+1}^{\star}(y_{2}\mid x)}{\pi_{t+1}(y_{2}\mid x)}\right)^{2}\right]}.$$
If $\pi_{t}=\pi_{t}^{\star}$ for all $t\in[T]$, the reader can refer to Swamy et al. (2024, Section 3) for the convergence of $\bar{\pi}$ (returned by Algorithm 1) to the Nash equilibrium. We expect the total variation difference between $\pi_{t}$ and $\pi_{t}^{\star}$ provided by Theorem 1 to appear as additive error on top of the guarantees from Swamy et al. (2024).
Note that we present the concentrability coefficient $\mathfrak{C}_{t}$ as data-dependent, with $\pi_{t+1}$ (learned from data) as part of its definition. We intend for such a data-dependent $\mathfrak{C}_{t}$ to guide the design choices of $\mu_{1,t}$ and $\mu_{2,t}$ for the purpose of sample efficiency.
The formal statement and detailed proof of Theorem 1, without involving $\pi_{t+1}$, are deferred to Appendix B. Although $\mathfrak{C}_{t}$ shares a similar expression with the concentrability coefficient in offline reinforcement learning (e.g., Chen and Jiang, 2019; Xie et al., 2021), the policies $\mu_{1,t}$ and $\mu_{2,t}$ are flexible here due to the generative nature of large language models. This flexibility allows for additional intervention, enhancing sample efficiency.
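To make the definition of $\mathfrak{C}_{t}$ concrete, here is a toy, exact computation on a small finite response set (a sketch of ours; the distributions are illustrative and unrelated to any experiment):

```python
import numpy as np

def concentrability(pi_star, pi_next, mu1, mu2):
    """Exact C_t for a single prompt over a finite response set (all shape (n,))."""
    g = np.log(pi_star) - np.log(pi_next)       # per-response log-ratio
    diff_sq = (g[:, None] - g[None, :]) ** 2    # squared log-ratio differences
    num = np.einsum('i,j,ij->', pi_star, pi_next, diff_sq)
    den = np.einsum('i,j,ij->', mu1, mu2, diff_sq)
    return num / den

pi_star = np.array([0.6, 0.3, 0.1])
pi_next = np.array([0.5, 0.3, 0.2])
uniform = np.full(3, 1 / 3)
print(concentrability(pi_star, pi_next, uniform, uniform))
# Choosing (mu1, mu2) closer to (pi_star, pi_next) drives the ratio toward 1.
```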
Note that the value of $\mathfrak{C}_{t}$ can always be bounded by $\mathfrak{C}_{t}\leq\max_{(x,y)\in\mathcal{X}\times\mathcal{Y}}\frac{\pi_{t+1}^{\star}(y\mid x)\pi_{t+1}(y\mid x)}{\mu_{1,t}(y\mid x)\mu_{2,t}(y\mid x)}$ in the worst case. However, as $\pi_{t+1}$ is likely to be restricted within a certain region, for instance because fine-tuning will not significantly alter the behavior of the language model, we anticipate that this coefficient will not depend on the per-$(x,y)$ worst case. On the other hand, as a direct observation, the ideal selection of $\mu_{1,t}$ and $\mu_{2,t}$ should be close to the soft policy iteration target $\pi_{t+1}^{\star}$ (assuming $\pi_{t+1}^{\star}$ and $\pi_{t+1}$ are close). Interestingly, this theoretical observation coincides with recent empirical results: Liu et al. (2024b) suggest that using statistical rejection sampling to sample from the soft policy iteration target (which is almost equivalent to sampling $y_{1}$ and $y_{2}$ from $\pi_{t+1}^{\star}$) can benefit preference tuning. However, in our case, if we use similar statistical rejection sampling techniques on $\pi_{t}$ to sample from $\pi_{t+1}^{\star}$ (and $\pi_{t+1}$), the cost of rejection sampling is likely to be comparable to the concentrability coefficient $\mathfrak{C}_{t}$ obtained by choosing $\mu_{1,t}$ and $\mu_{2,t}$ to be $\pi_{t}$ (see, e.g., Owen, 2013). This suggests that both $\pi_{t}$ and $\pi_{t+1}^{\star}$ (via rejection sampling) are comparable options for $\mu_{1,t}$ and $\mu_{2,t}$ in terms of sample efficiency. On the other hand, as we will demonstrate in the next section, since $r_{t}$ is defined based on $\pi_{t}$ (as shown in line 3 of Algorithm 1), choosing $\mu_{1,t}$ and $\mu_{2,t}$ to be $\pi_{t}$ adapts naturally to such a reward $r_{t}$.
Another interesting observation is that, despite Eq. 12 sharing a similar form with Bradley-Terry style reward modeling via MLE, the target distributions used to measure distribution shift are quite different. This disparity is due to the different objectives: fitting soft policy iteration versus reward estimation. For Bradley-Terry style reward modeling using MLE, the desired distributions of $y_{1}$ and $y_{2}$ should be two distinct distributions (see, e.g., Zhan et al., 2024; Xiong et al., 2023). However, in our case, where the learning goal is to fit the soft policy iteration, we may prefer $y_{1}$ and $y_{2}$ from two (near) on-policy distributions as discussed above, as long as we expect the learned $\pi_{t+1}$ to be accurate enough. To the best of our knowledge, this is the first theoretical result illustrating the importance of on-policy sampling beyond policy-optimization style algorithms for RLHF.
## 4 Practical Algorithm – Iterative Contrastive Self-Improvement
In this section, we shift our focus to the algorithmic design of the
practically scalable version of DNO, following the principles discussed in the
last section. A primary challenge encountered in the implementation of the
conceptual algorithm DNO (Algorithm 1) stems from the necessity to compute the
expectation with respect to the preference function $\mathcal{P}$ under the
current policy $\pi_{t}$. Perhaps surprisingly, as we will show, all we need
is a properly implemented iterative DPO-like contrastive learning algorithm.
Algorithm 2 DNO-Prct: Practical Implementation of DNO via Iterative
Contrastive Self-Improvement
input: General preference function $\mathcal{P}$, learning rate
${\widetilde{\eta}}$, iterations $T$, reference policy $\pi_{\mathsf{ref}}$,
prompt distribution $\rho$.
1:Initialize $\pi_{1}\leftarrow\pi_{\mathsf{ref}}$.
2:for iteration $t=1,2,\dotsc,T$ do
3: Construct $\mathcal{D}_{t}=\{(x,y^{\mathsf{gold}})\}$, where $x\sim\rho$ and $y^{\mathsf{gold}}\sim\pi_{\mathsf{gold}}(\cdot\mid x)$.
4: Sample _batched on-policy_ responses: sample $K$ outputs per prompt using the current $\pi_{t}$: $\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K}\}\sim\pi_{t}(\cdot\mid x)$, $\forall x\in\mathcal{D}_{t}$.
5: Rank responses: for each $x\in\mathcal{D}_{t}$, rank the corresponding $\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K},y^{\mathsf{gold}}\}$ using the pairwise win rate obtained by sampling from the general preference function $\mathcal{P}$.
6: Filter preference pairs: construct $\mathcal{D}_{t+1}=\{(x,y_{t}^{+},y_{t}^{-})\}$ for all $x\in\mathcal{D}_{t}$, where $(y_{t}^{+},y_{t}^{-})$ are large-margin pairs (based on the win-rate ranking) among the responses for $x$ from the previous step.
7: Contrastive learning: obtain $\pi_{t+1}$ by
$$\pi_{t+1}\leftarrow\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{t}^{+},y_{t}^{-})\sim\mathcal{D}_{t+1}}\log\left[\sigma\left({\widetilde{\eta}}\log\frac{\pi(y_{t}^{+}\mid x)}{\pi_{t}(y_{t}^{+}\mid x)}-{\widetilde{\eta}}\log\frac{\pi(y_{t}^{-}\mid x)}{\pi_{t}(y_{t}^{-}\mid x)}\right)\right].$$ (13)
8:end for
9:return best of $\pi_{1:(T+1)}$ on the validation data.
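For orientation, the following sketch (ours) outlines one DNO-Prct iteration end-to-end; `generate`, `win_rate`, and `contrastive_update` are hypothetical stand-ins for the sampling, annotation, and Eq. (13) training components described in Section 5:

```python
def dno_prct_iteration(policy_t, prompts, gold, K, margin, eta_tilde):
    pairs = []
    for x, y_gold in zip(prompts, gold):
        # Step 4: batched on-policy samples, plus the gold (teacher) response.
        candidates = [policy_t.generate(x) for _ in range(K)] + [y_gold]
        # Step 5: score each candidate by its pairwise win rate under P.
        scored = [(y, win_rate(x, y, candidates)) for y in candidates]
        # Step 6: keep only large-margin (y+, y-) pairs.
        for y_pos, s_pos in scored:
            for y_neg, s_neg in scored:
                if s_pos - s_neg >= margin:
                    pairs.append((x, y_pos, y_neg))
    # Step 7: DPO-like contrastive update of Eq. (13) against the frozen pi_t.
    return contrastive_update(policy_t, pairs, eta_tilde)
```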
We present the practical implementation of DNO in Algorithm 2 (DNO-Prct), a batched on-policy algorithm that performs self-improvement iteratively via contrastive learning. One key consideration in our algorithmic design is that we only need to use the reward function $r_{t}$ implicitly, through specifically designed on-policy sampling, data filtering, and pair construction. While these design choices make DNO-Prct look similar to simply running DPO iteratively, there are significant reasons behind them, as we discuss below.
#### Batched on-policy sampling
The use of batched on-policy sampling in line 4 of Algorithm 2 is crucial to avoid explicit use of $r_{t}$ (defined as ${\mathbb{E}}_{y^{\prime}\sim\pi_{t}(\cdot\mid x)}\left[\mathcal{P}(y\succ y^{\prime}\mid x)\right]$ in line 3 of Algorithm 1). This means we essentially choose $\mu_{1,t}$ and $\mu_{2,t}$ in DNO to be $\pi_{t}$ in DNO-Prct, though we are free to let them vary slightly as a mixture with other policies, e.g., from a stronger teacher. Specifically, it is unrealistic to assume in practice that we can access the exact value of $\mathcal{P}(y\succ y^{\prime}\mid x)$ given an $(x,y,y^{\prime})$ tuple. Based on the definition of $r_{t}$ and the fact that $\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K}\}$ are sampled from $\pi_{t}$, DNO-Prct essentially uses the following sample-based approach to estimate $r_{t}$:
$$r_{t}(x,y)\approx\frac{1}{K}\sum_{y^{\prime}\in\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K},y^{\mathsf{gold}}\}\setminus y}\mathds{1}_{\mathcal{P}}(\text{Is $y$ better than $y^{\prime}$ on $x$?}),$$
for any $x$ and $y\in\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K},y^{\mathsf{gold}}\}$, where $\mathds{1}_{\mathcal{P}}$ denotes one sample from $\mathcal{P}$ with output in $\{0,1\}$. This is implemented in line 5 of Algorithm 2, and its precise implementation is discussed in Section 5. On the other hand, as discussed in the last section, batched on-policy sampling from $\pi_{t}$ is an appropriate option for sample efficiency when we use Eq. 13 to approximate the soft policy iteration (see Theorem 1 and its discussion). A minimal sketch of this estimator follows.
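Here, `pref_sample(x, y, y2)` is a hypothetical single Bernoulli draw from $\mathcal{P}(y\succ y_{2}\mid x)$ (e.g., one judgment from the annotator):

```python
def estimate_r_t(x, y, candidates, pref_sample):
    """Fraction of pairwise wins of y against the other K candidates."""
    others = [y2 for y2 in candidates if y2 is not y]
    wins = sum(pref_sample(x, y, y2) for y2 in others)  # each draw is 0 or 1
    return wins / len(others)
```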
#### Preference pair construction
Another key design choice in Algorithm 2 is that Eq. 13 uses a purely contrastive loss, whereas Eq. 8 of Algorithm 1 also contains the regression target $\sigma\left(r_{t}(x,y)-r_{t}(x,y^{\prime})\right)$ (for a given $(x,y,y^{\prime})$ tuple), which is not necessarily in $\{0,1\}$. As discussed above, it is unrealistic to expect access to the exact value of $\mathcal{P}(y\succ y^{\prime}\mid x)$, so it is also unlikely that we can obtain an accurate value of the regression target $\sigma(r_{t}(x,y)-r_{t}(x,y^{\prime}))$. Thus, we add a data filtering step to address this issue, as in line 6 of Algorithm 2. Ideally, we want each selected $(x,y^{+},y^{-})$ tuple to satisfy $\sigma(r_{t}(x,y_{t}^{+})-r_{t}(x,y_{t}^{-}))\approx 1$, so that Eq. 8 can be approximated by Eq. 13. However, this would require $r_{t}(x,y_{t}^{+})-r_{t}(x,y_{t}^{-})\to\infty$, while we know $r_{t}(x,y)\in[0,1]$, $\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$.
From the derivation of DNO in Section 3, it is clear that scaling up $r_{t}$ and $\eta$ by the same absolute constant $c$ does not affect the soft policy iteration target of Eq. 9, but it slightly changes the DNO objective (Eq. 8 in Algorithm 1) via $r_{t}\to c\cdot r_{t}$ and $\eta\to c\cdot\eta\eqqcolon{\widetilde{\eta}}$. This scaling strategy helps us sidestep the problem of bounded $r_{t}$, and in this sense, we may expect the proper ${\widetilde{\eta}}$ in DNO-Prct to be relatively larger than, e.g., $\eta$ in Algorithm 1. However, an enlarged ${\widetilde{\eta}}$ in Eq. 8 worsens the sample complexity suggested by Theorem 1 (for details, refer to its proof in Appendix B, especially the derivation of Eq. 18). So, to prevent the proper ${\widetilde{\eta}}$ from becoming too large, we only use large-margin pairs, as in line 6 of Algorithm 2, to make sure $r_{t}(x,y_{t}^{+})-r_{t}(x,y_{t}^{-})$ is not too small. This decision is also supported empirically by techniques like RLCD (Yang et al., 2023) and Axiomatic Preference Models (Rosset et al., 2023), which highlight the importance of large margins or clear directional differences between positive and negative LLM responses when training preference models.
#### Relationship between DNO-Prct and DPO
The reader may discern that DNO-Prct (Algorithm 2)—the practical
implementation of DNO—can be described as an iterative version of the DPO
algorithm. Such similarity is by design, intended to harness the simplicity
and effectiveness of DPO (Rafailov et al., 2023) and build on empirical
advancements from recent work that applies DPO iteratively (e.g., Yuan et al.,
2024; Tran et al., 2024). Our experiments point to the importance of several design choices that help accommodate general preferences, such as rankings derived from pairwise win rates. More interestingly, our findings point to a surprising connection: _“a meticulously designed iterative DPO algorithm” could approach the Nash equilibrium of any given general preference._
Our general algorithmic framework—DNO (Algorithm 1)—is broader and
fundamentally different from iterative DPO. For example, the DNO framework
could also be directly extended to the regularized preference case (as
discussed in Appendix A) or equipped with other advanced sample techniques
(e.g., Liu et al., 2024b, RSO) as suggested by Theorem 1 for sample
efficiency. On the other hand, although soft policy iteration (or KL-regularized reward optimization) is used in both DNO and DPO, it arises for fundamentally different reasons. For DNO, the KL-regularization originates from online learning: no-regret learning through mirror descent (Nemirovskij and Yudin, 1983) or follow-the-regularized-leader (FTRL) (Kalai and Vempala, 2005; Cesa-Bianchi and Lugosi, 2006; Shalev-Shwartz et al., 2012; Hazan et al., 2016). For DPO and PPO, the KL-regularization is an approximation of the total variation penalty that ensures monotonic improvement of the policy (Kakade and Langford, 2002; Schulman et al., 2015). This approach was later simplified by Schulman et al. (2017, PPO) and recently used for post-training LLMs (Ouyang et al., 2022).
## 5 Experiments
Figure 2: Comparison of various post-training techniques, showing that Direct Nash Optimization (DNO) is the most effective. All methods with colored error bands are 1) implemented by ourselves, 2) initialized with a 7B-parameter Orca-2.5 LLM, and 3) “batched on-policy” (except SFT and Offline DPO, which are trained in epochs), all else being equal.
Algorithm 2 is chosen for its efficiency and simplicity from an implementation standpoint (in this section, we use DNO to denote Algorithm 2, i.e., DNO-Prct, for simplicity). Once the input dataset $\{x_{i}\in\mathcal{X}\}$ is chosen, each iteration of DNO proceeds in three phases: sampling outputs from the current policy, annotating outputs for preference-pair generation, and training the next policy on the new training pairs. Iteration 0 is defined to start by sampling from the initial SFT model to produce training data for iteration 1.
### 5.1 Experimental Setup
Data: We mainly use UltraFeedback (Cui et al., 2023), which consists of 60k prompts, several models' outputs to those prompts, and preference annotations from GPT-4-Turbo. This dataset thus provides a source of offline preferences. For our iterative experiments, we split this dataset into three non-overlapping partitions of the inputs, to be used for separate iterations of batched on-policy learning. For each input, we also collect the GPT-4-Turbo output, if it was not already present in the original dataset, to be reserved as $y^{\mathsf{gold}}$.
Every experiment except one in this study uses UltraFeedback alone. The exception is one “scaled up” experiment with about 10x more data, sourced from a mixture of datasets including Anthropic HH (Bai et al., 2022a), UltraChat (Ding et al., 2023), MetaMathQA (Yu et al., 2023), EvolInstruct (Xu et al., 2023a), UltraFeedback (Cui et al., 2023), and Orca-2 (Mitra et al., 2023). Note that we only use the input prompts from these datasets and collect a GPT-4-Turbo response for each of these 600k input prompts.
Sampling from the Policy: At the end of training, we sample 5 outputs from the resulting student policy using top-$p$ (nucleus) sampling with $p=0.95$ and temperature 0.7. Several works have shown the benefit of sampling and comparing multiple diverse outputs from the policy (Yuan et al., 2023a; Mitra et al., 2024; Liu et al., 2024b; Dong et al., 2023; Wang et al., 2022). We implement a simple defect-detection system which flags any sample with a high amount of repeated n-grams as an automatic negative; a minimal version of this check is sketched below.
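This sketch assumes whitespace tokenization; the n-gram size and threshold are illustrative, not our exact settings:

```python
from collections import Counter

def has_repetition_defect(text, n=4, max_repeat_frac=0.2):
    tokens = text.split()
    if len(tokens) < n + 1:
        return False
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    # Fraction of n-gram positions whose n-gram occurs more than once.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams) > max_repeat_frac
```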
Preference Annotation: We use GPT-4-Turbo “as a judge” to label preferences among the 5 policy samples and 1 gold sample (which is also from GPT-4-Turbo), as shown in Fig. 3. The prompt contains a few minor modifications from that used in Yuan et al. (2024). It implements an additive scoring framework on a 6-point scale, where a score of 6 represents the highest-quality answer along dimensions such as “correctness”, “expert knowledge”, and “conciseness”. By following this rubric, GPT-4 acting as an annotator represents a best-effort general preference model, because it compares multiple candidate responses side-by-side in the context window and stratifies them along meaningful dimensions of quality.
| Technique | Epoch or Iter | AlpacaEval 2.0: Len-control. Win Rate | AlpacaEval 2.0: Win Rate vs. GPT-4 | Avg. len (chars) | MT-Bench: 1st Turn | MT-Bench: 2nd Turn | MT-Bench: Avg |
|---|---|---|---|---|---|---|---|
| Orca-2.5 SFT | Epoch 1 | 10.76 | 6.99 | 1174 | 7.72 | 6.02 | 6.88 |
| Orca-2.5 SFT on Positives | Epoch 4 | 11.62 | 7.96 | 1420 | 7.62 | 6.23 | 6.92 |
| Offline DPO (ours) | Epoch 4 | 19.49 | 18.22 | 1884 | 7.69 | 7.08 | 7.38 |
| Self-Rewarding 70B | Iter 3 | - | 20.44 | 2552 | - | - | 7.25 |
| SPIN (ours) | Iter 3 | 16.18 | 16.13 | 1922 | 7.58 | 7.53 | 7.55 |
| DNO-Restrictive | Iter 3 | 21.61 | 19.21 | 1795 | 7.59 | 7.35 | 7.46 |
| DNO-Lookahead | Epoch 1 | 18.58 | 18.28 | 1907 | 8.09 | 7.32 | 7.70 |
| DNO | Iter 3 | 22.59 | 24.97 | 2228 | 7.62 | 7.35 | 7.48 |

Table 1: AlpacaEval 2.0 and MT-Bench results in our controlled setting after training on UltraFeedback.
Training Pair Construction: Adhering to line 6 of Algorithm 2 implies that not all pairs are suitable for training. Firstly, we enforce that the positives are high quality in an absolute sense; secondly, that the negatives are directionally worse by a large margin. On the 6-point annotation scale, only samples that score a 5 or 6 are allowed to be positives. From the positives that meet this criterion, if any, we then construct all pairs such that the negative scores at least 2 points lower. If the positive happens to be from the student, we relax this constraint to a 1-point margin, since the GPT-4-Turbo teacher outputs rarely receive a score below 5 (as shown by the average teacher score in Table 2). The following sketch illustrates these rules.
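A sketch of these rules (ours; the `source` field and scoring interface are illustrative):

```python
def build_pairs(x, scored, student_sources=("student",)):
    """scored: list of (response, score, source) for one prompt x, on the 6-point scale."""
    pairs = []
    for y_pos, s_pos, src in scored:
        if s_pos < 5:               # positives must be high quality in absolute terms
            continue
        margin = 1 if src in student_sources else 2   # relaxed margin for student positives
        for y_neg, s_neg, _ in scored:
            if s_pos - s_neg >= margin:
                pairs.append((x, y_pos, y_neg))
    return pairs
```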
Additionally, we are motivated to preserve the preference behavior from previous iterations, so that new policies do not inadvertently regress to past bad behavior. To enforce this, we incorporate an exponentially decaying proportion of prior iterations' training pairs into the current iteration, i.e., we sample at most 30% of training pairs from iteration $t-1$, 15% from $t-2$, and so on (sketched below). We do not re-inference outputs for those inputs from the most recent policy. Recall that each iteration's inputs are non-overlapping with the splits for other iterations.
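A sketch of this replay scheme (the sampling helper is illustrative; the 30%, decay-by-half schedule follows the description above):

```python
import random

def mix_with_replay(new_pairs, history, base_frac=0.30):
    """history: previous iterations' pair lists, most recent first."""
    mixed = list(new_pairs)
    frac = base_frac
    for old_pairs in history:
        k = min(len(old_pairs), int(frac * len(new_pairs)))
        mixed.extend(random.sample(old_pairs, k))  # at most 30%, then 15%, ...
        frac /= 2
    random.shuffle(mixed)
    return mixed
```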
Training: To prevent overfitting, we train our batched on-policy methods for at most one epoch on newly constructed pairs. Our effective batch size is fixed to 64 for all experiments. Our learning rate, beta, and alpha are found with brief hyperparameter searches. For most experiments, the learning rate is 5e-5, beta is either 0.1 or 0.05, and alpha is 0.005. We found that at higher iterations, the learning rate needs to be lowered. In SFT (supervised fine-tuning) experiments, our learning rate is 5e-6, and we mask out the loss on the input tokens. We use the open-source TRL library's implementation to run our experiments.
Evaluation: Our primary goal is to train a policy that is comparable to the most powerful state-of-the-art language models. Hence, AlpacaEval 2.0 (Dubois et al., 2023) is an appropriate benchmark because it computes win rate against GPT-4-Turbo in a head-to-head fashion on a dataset of 805 input prompts that is shown to correlate with human preferences (0.93 Spearman correlation with Chatbot Arena). While it is known that auto-eval methods also correlate with spurious features such as length, a newer version of AlpacaEval 2.0 corrects for this with a length-controlled win rate that has an even higher Spearman correlation (0.98) with Chatbot Arena (https://github.com/tatsu-lab/alpaca_eval).
We also evaluate on MT-Bench (Zheng et al., 2023), which asks the LLM-as-a-judge to first explain its reasoning before providing a scalar score from 1-10 for the candidate response to a bank of 80 questions. One crucial difference between AlpacaEval 2.0 and MT-Bench is that the former asks GPT-4-Turbo to predict which of two side-by-side responses humans would prefer, weighted by the logits to represent its uncertainty, whereas MT-Bench asks the model to first generate a justification and then output a score from 1-10, but it neither defines the ratings (e.g., how a 7 differs from a 5) nor accounts for uncertainty in the logits of the score.
We also evaluate on the OpenLLM leaderboard (Beeching et al., 2023), which measures reasoning ability on downstream NLP tasks like coding and question answering by evaluating the accuracy of the multiple-choice answer option with the highest logit. Since our training data is primarily instruction-following, and our models are not trained to output just the sole answer option, this benchmark is not the primary target of this study; nonetheless, DNO on instruction-tuning tasks ought to show no regression on reasoning tasks.
| | Inputs | Student length (words) | Best-of-n student win-rate | Avg. # student wins | Avg. student score | Avg. teacher score | $T\succ S$ | $S\succ T$ | $S\succ S$ |
|---|---|---|---|---|---|---|---|---|---|
| DNO-Restrictive Iter 0 | 19.6k | 162 ± 190 | 15.9% | 0.486 | 3.46 | 4.99 | 42.4k | 0 | 0 |
| DNO-Restrictive Iter 1 | 19.9k | 359 ± 350 | 34.2% | 1.11 | 4.86 | 4.77 | 17.5k | 0 | 0 |
| DNO-Restrictive Iter 2 | 19.8k | 256 ± 207 | 35.0% | 1.31 | 5.21 | 4.87 | 9.9k | 0 | 0 |
| DNO Iter 0 | 19.6k | 162 ± 190 | 15.9% | 0.486 | 3.46 | 4.99 | 30.7k | 4.2k | 25.9k |
| DNO Iter 1 | 19.9k | 671 ± 546 | 34.6% | 1.22 | 4.61 | 4.62 | 20.3k | 19.4k | 62.9k |
| DNO Iter 2 | 19.8k | 361 ± 251 | 43.6% | 1.90 | 5.25 | 4.59 | 7.1k | 32.4k | 10.9k |

Table 2: The dynamics of how sampled outputs from a previous iteration's policy compare to their teacher, and how many new training pairs they give rise to in the next iteration (the first six data columns are annotations of the training data; the last three count the new training pairs). The crucial point is that DNO constructs new pairs where the student is compared to the teacher, whereas DNO-Restrictive, SPIN, and IPO-MD do not.
### 5.2 Results and Analysis
We run several head-to-head experiments that control for hyperparameters and input data. We often refer to the policy being trained as the “student” and to GPT-4 as the “teacher”; GPT-4 is also used as an annotator when prompted.
SFT Baselines: The first baseline is Orca-2.5 itself, a mistralai/Mistral-7B-v0.1 raw pretrained model fine-tuned on a new collection of Orca-2 data (Mitra et al., 2023). This model was fine-tuned for three epochs and achieves the scores shown at the top of Table 4. All other experiments in this study are initialized with Epoch 1 of Orca-2.5. This is the solid horizontal line in Fig. 2.
The second baseline is continued SFT of Orca-2.5 towards the positives in UltraFeedback (masking out the loss over the input prompts). If the original positive in that dataset was not from GPT-4-Turbo, we replace it with one that is. This is the red line in Fig. 2. It is clear that even offline contrastive training methods are more beneficial than additional SFT, showing that the difference between the positive and negative output provides more valuable training signal than the positive in isolation.
Large Margin Filtering of Training Pairs: We ran a simple experiment of Offline DPO for one epoch on UltraFeedback data. In the control, we trained on all 63k preference pairs in the original dataset, whereas in the treatment we kept only the 42k pairs that met a large-margin requirement, enforcing that the positive's score exceeded that of the negative by at least 1.0 (out of 10) according to their GPT-4-Turbo annotator. All else was equal. Even though the treatment was trained for fewer steps on less data, it achieved an AlpacaEval 2.0 win rate of 11.60 vs. 9.60 for the control, showing that fewer, higher-quality preference pairs are better than a larger quantity of noisy pairs (not shown in the tables).
On-Policy is Better than Off-Policy: One of the critical questions in this study is whether to sample “on-policy” outputs from the current student for use in training pairs, or whether “off-policy” outputs collected from models other than the student will suffice. We ran 4 epochs of Offline DPO on UltraFeedback (filtered for large margin), and as shown in Table 1, on-policy methods, especially DNO, surpass the off-policy DPO, even though the off-policy model was trained for 4 epochs while the on-policy models were granted only three iterations. Recall that each iteration of batched on-policy training sees only a third of the UltraFeedback input data, whereas an epoch of Offline DPO sees the entire dataset.
| | ARC-C (25-shot) | GSM8K (5-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) | WinoGrande (5-shot) | Avg |
|---|---|---|---|---|---|---|---|
| Orca-2.5 Epoch 1 | 0.609 | 0.635 | 0.818 | 0.614 | 0.489 | 0.738 | 0.652 |
| Orca-2.5 Epoch 3 | 0.624 | 0.641 | 0.826 | 0.624 | 0.506 | 0.746 | 0.661 |
| SPIN (ours) Iter 3 | 0.668 | 0.448 | 0.862 | 0.623 | 0.601 | 0.759 | 0.660 |
| DNO Iter 1 | 0.657 | 0.572 | 0.834 | 0.623 | 0.568 | 0.755 | 0.668 |
| DNO Iter 2 | 0.663 | 0.562 | 0.845 | 0.624 | 0.580 | 0.753 | 0.671 |
| DNO Iter 3 | 0.672 | 0.542 | 0.852 | 0.622 | 0.606 | 0.753 | 0.675 |

Table 3: Results on OpenLLM Leaderboard reasoning tasks, on which we do not expect any regression.
Higher Quality Annotators: In our study, we use GPT-4-Turbo to provide the annotations for preference pairs. The Self-Rewarding Language Model, by contrast, uses the Llama-2-70B (Touvron et al., 2023) model, trained to also give feedback, as the annotator, which in their study starts with a 65% agreement rate with human-labeled preferences and improves to 80% in the last iteration (Yuan et al., 2024). While it was not reported how well GPT-4-Turbo's annotations agree with their held-out human labels, we believe that starting with a higher-quality annotator leads to higher-quality policies. Since both studies use UltraFeedback data, and our annotation prompt is based on theirs, we believe the comparison is valid.
We observe that DNO initialized with a 7B base model outperforms the 70B-parameter Self-Rewarding model over the same number of training iterations (24.97 vs. 20.44 win rate on AlpacaEval 2.0, and 7.46 vs. 7.25 on MT-Bench), at least in part due to the higher-quality preference annotations. See the dark blue band versus the gray line in Fig. 2 and the corresponding row in Table 1. Moreover, unlike the Self-Rewarding LM, we saw a slight gain rather than a drop in reasoning benchmarks like ARC-Challenge (Clark et al., 2018) and HellaSwag (Zellers et al., 2019). Granted, the OpenLLM evaluation predicts the answer via the max logit over the multiple-choice options, which is not congruous with how these techniques are trained.
Training Pair Construction: One of the most critical implementation questions in this study is how to construct training pairs that help the student policy exceed a strong teacher like GPT-4-Turbo. One approach, Self-Play Fine-Tuning (SPIN), removes the preference-annotation step and automatically assigns the teacher output as the positive and all student samples as negatives (Chen et al., 2024). We find in our re-implementation of SPIN that this is detrimental, presumably because the automatic assignment can produce noisy training pairs in cases where the student is actually preferred. The resulting win rate of SPIN is only 16.13 after three epochs of iterative training, compared to 24.97 for DNO, as shown in Table 1, all else being equal. Similar results hold for the OpenLLM results in Table 3.
In a second experiment, which we denote DNO-Restrictive, we annotate all preference pairs with GPT-4-Turbo as usual, but only admit training pairs where the teacher's output is the preferred one. The difference between DNO and DNO-Restrictive is illustrated in Table 2: DNO-Restrictive creates zero student-vs-teacher and student-vs-student pairs. The same restriction holds for SPIN, except SPIN would admit a greater quantity of noisy teacher-vs-student examples even when they are dis-preferred: Table 2 shows that after Iteration 2 of DNO-Restrictive, only 9.9k instances exist of the teacher being preferred over the student, whereas SPIN would have automatically created about 100k (5 samples $\times$ 20k inputs).
While DNO-Restrictive is slightly better (19.21 win rate) than SPIN, it still does not give the student a chance to compare its behavior to a powerful teacher. The absence of this signal is a major oversight: the last row of Table 2 shows that by Iter 3, over 64% of the DNO training data (32k pairs) are cases where the student is in fact preferred over the teacher, a number which increases with iteration. We conclude it is imperative to “allow the student to become the teacher”, i.e., to learn from comparisons where its own outputs are preferred over those of a more powerful teacher.
One curious phenomenon in Table 2 is that, while the teacher outputs are fixed ahead of time, the annotator gives slightly lower scores to the teacher as the student improves; we are not sure whether this is an innocuous artifact of preference annotation, or symptomatic of a deeper problem. Also, the total quantity of new “large margin” training pairs (not counting those sampled from previous iterations) in DNO tends to decrease as the policy improves across iterations, but we do not have enough data to quantify how this relates to a change in quality.
Lookahead to Future Iterations: As a curiosity, we experimented with whether a model could benefit from knowing which training pairs it would generate if it could look into the future. We tested this by running three iterations of DNO, accumulating all the preference pairs across iterations, combining and shuffling them, and then re-starting training from the initial model. In essence, this turns the batched online DNO into an offline learning algorithm we denote DNO-Lookahead. We trained for one epoch on the three iterations' worth of preference data. The AlpacaEval 2.0 win rate deteriorated more than we expected (24.97 to 18.18); yet, even more surprisingly, the MT-Bench numbers improved significantly (7.48 to 7.70). While the reasons for the relatively low correlation between MT-Bench and AlpacaEval 2.0 are not entirely clear, it is important to consider the disparity in dataset sizes: since MT-Bench consists of merely 80 examples, whereas AlpacaEval 2.0 contains 10x more, we regard the findings from AlpacaEval 2.0 as statistically more reliable.
DNO Scales with More Data: One of the reasons we split UltraFeedback into three non-overlapping partitions is to avoid overfitting. Another strategy to avoid overfitting is to collect more data, so we increased the instruction data by a factor of 10 based on publicly available datasets. We split this large mixture of datasets into six non-overlapping partitions of roughly 100k inputs each (and inference GPT-4-Turbo outputs for all inputs), and show that DNO-More-Data scales well in this expanded regime (see the purple line in Fig. 2 and the last rows of Table 4).
We make some notes on the behavior of this experiment: because each iteration builds on the outputs of the previous iteration, any anomalies or errors in critical components such as preference annotation will propagate, and the only way to combat them is to “roll back” to the iteration that introduced them. This can result in wasted time and cost, both of which are already very high, as shown in Appendix C. We suspect that the “depth” of iterations matters more than the “width”, or number of samples within each iteration, and furthermore, that having an equal number of inputs per iteration may not be optimal, but we did not test this thoroughly. From an efficiency standpoint, although this algorithm is “batched”, some optimizations can be made, such as starting to annotate sampled policy outputs as soon as they are ready instead of waiting for all inference jobs to finish.
“Exploding” Lengths: It is known that contrastive LLM training techniques, especially DPO, lead to longer outputs from the model, which is widely suspected to be a form of “reward hacking”. Curiously, Table 2 shows that the largest jump comes after the first round of contrastive training (Iteration 1), where lengths explode by at least a factor of 2 over the initializing SFT model, before inching down again in the next iteration. We interpret this “length spike” as wasted computation optimizing towards a spurious signal; we wish we were better equipped to control this phenomenon.
| Technique | Epoch or Iter | AlpacaEval 2.0: Len-control. Win Rate | AlpacaEval 2.0: Win Rate vs. GPT-4 | Avg. len (chars) | MT-Bench: 1st Turn | MT-Bench: 2nd Turn | MT-Bench: Avg |
|---|---|---|---|---|---|---|---|
| Orca-2.5 SFT | Epoch 1 | 10.76 | 6.99 | 1174 | 7.72 | 6.02 | 6.88 |
| Orca-2.5 SFT | Epoch 2 | 15.29 | 7.88 | 1060 | 7.56 | 6.38 | 6.98 |
| Orca-2.5 SFT | Epoch 3 | 15.90 | 8.17 | 1058 | 7.53 | 6.73 | 7.13 |
| DNO-More-Data | Iter 1 | 8.96 | 10.67 | 2795 | 7.00 | 6.06 | 6.53 |
| DNO-More-Data | Iter 2 | 14.61 | 16.94 | 2782 | 7.62 | 7.23 | 7.42 |
| DNO-More-Data | Iter 3 | 21.81 | 26.74 | 2539 | 7.74 | 6.66 | 7.21 |
| DNO-More-Data | Iter 4 | 22.93 | 29.08 | 3033 | 7.54 | 6.92 | 7.24 |
| DNO-More-Data | Iter 5 | 32.06 | 34.98 | 2856 | 7.10 | 6.39 | 6.75 |
| DNO-More-Data | Iter 6 | 33.05 | 34.38 | 2683 | 7.28 | 6.65 | 6.97 |

Table 4: DNO-More-Data is trained on 10x more instruction data than DNO. It is still initialized with Epoch 1 of Orca-2.5 SFT, so the delta it provides in AlpacaEval 2.0 win rate is 27.39 absolute (22.29 length-controlled).
## 6 Related Work
We divide the space of related work by whether or not the techniques use SFT or contrastive losses, and whether they operate in offline or online update settings.
Online RLHF algorithms: RLHF pioneered the alignment of language models with human preferences (Christiano et al., 2017; Stiennon et al., 2020), but it is unstable to train and memory-intensive, requiring the parameterized policy model, reward model, and advantage model to all be on device during training.
Reward-model Augmented SFT: Since the introduction of RLHF, several emergent
techniques apply reward models in various ways, such as to filter training
data or rank responses. Reward rAnked Finetuning (RAFT) (Dong et al., 2023)
and RRHF (Yuan et al., 2023b) offer the conceptually simplest solution for
offline preference learning, which is to sample multiple outputs from a
policy, rank them with a reward model, and then finetune on the best sampled
output using SFT. This resembles the iterative behavior-cloning technique
DAgger (Ross et al., 2011).
Offline Contrastive Preference Learning: There exist several loss functions
for contrastive preference learning, first introduced in the offline setting,
namely Direct Preference Optimization (Rafailov et al., 2023, DPO) and
Calibrated Sequence Likelihood Estimation a.k.a. SLiC (Zhao et al., 2023).
Azar et al. (2023) make it clear that point-wise reward estimates are no
substitute for pair-wise preferences, and that a policy can easily overfit to
deterministic preferences without proper regularization. They derive a more
general objective for RLHF, IPO, to directly optimize offline preference
probabilities.
Statistical Rejection Sampling Optimization (RSO) generates multiple samples
from an initial model, ranks them to create training pairs, and optimizes them
under a unified framework encompassing DPO and SLiC (Liu et al., 2024b).
Inspired by the learning-to-rank literature, Listwise preference optimization
(LIPO) extends pair-wise preference learning to list-wise (Liu et al., 2024a).
Preference Ranking Optimization (PRO) also learns towards list-wise
preferences (Song et al., 2024). The KTO algorithm takes a different approach from DPO: it does not assume that a pair of good-vs-bad outputs exists for the same input, but rather that a pool of good outputs and a pool of bad outputs exist across inputs, and it optimizes an “unpaired” loss (Ethayarajh et al., 2024).
Iterative Reward-based Finetuning: Reinforced Self-Training (ReST) is one of
the first methods to explore iterative self-improving training strategies
framed as a two-stage “Grow” step that samples from the current policy, and a
“Improve” step that uses a reward model to filter ever-higher quality samples
that are then used to improve the policy with offline RL (Gulcehre et al.,
2023). A follow-up work explores the use of AI feedback rather than reward
ranking (Singh et al., 2023).
On-policy Contrastive Learning: Self-Rewarding Language Models (Yuan et al.,
2024) is in practice very similar to DNO. They study the benefits of batched
iteratively training on preferences derived from a recent policy’s sampled
outputs, but in their work, they use the policy itself as the annotator, which
starts off being able to provide only weak preference signals. Self-Play Fine-Tuning (Chen et al., 2024), a.k.a. SPIN, and Adversarial Preference Optimization (Cheng et al., 2023), a.k.a. APO, are both iterative LLM training techniques that are compatible with contrastive losses, but they make the very limiting assumption that the teacher is better than the student (without regard to any annotator feedback).
The Cringe Loss (Adolphs et al., 2022) is a token-level loss function that contrasts the correct next token with a hard-negative token from the vocabulary that has a high logit weight but is still incorrect. The Pairwise Cringe Loss (Xu et al., 2023b) applies the cringe loss to an iterative self-improving style of training.
On-Policy General Preference Optimization: Wang et al. (2023) consider finding
the von Neumann winner of general preferences via multi-agent RL from the
theoretical perspective. Nash-MD optimizes a policy towards the Nash
equilibrium of a generalized preference model using policy gradients, showing
that by sampling from a mixture of policies, one can converge to the Nash
equilibrium in the last iteration (Munos et al., 2023). Self-play Preference
Optimization (SPO) is another online two-player mini-max game that converges
to a Nash equilibrium with no-regret guarantees (Swamy et al., 2024). However,
these techniques are not as data efficient as contrastive losses and are
difficult to implement faithfully without cumbersome two-timescale updates
(Munos et al., 2023). A concurrent improvement, IPO-MD, mitigates these
difficulties by using purely on-policy IPO updates and is empirically
evaluated on an article summarization task (Calandriello et al., 2024). Guo et
al. (2024) also propose to eliminate rewards in online AI-feedback (OAIF) by
using another LLM to annotate which of two online-sampled outputs from the
current policy is preferred. However, all the above studies only consider training pairs constructed between self-play “student vs. student” samples, or between the student and the initial $\pi_{\mathsf{ref}}$. That is, there is no concept of a more powerful “teacher” to compare against in their training pairs. We showed in Table 2 that omitting these “student vs. teacher” preferences may hinder performance.
## 7 Conclusion
In this paper, we achieve the dual goals of post-training LLMs against a more general class of preference models and providing a practical, scalable implementation with finite-sample analysis. Our strong empirical results are based on the insight that optimizing general preference functions can be reduced to finding the Nash equilibrium of a two-player game whose payoff is the preference, which can further be solved by a single-play algorithm. Most techniques that optimize this objective use soft policy iteration, which is difficult to implement faithfully and may require unstable on-policy and two-timescale updates. Our contribution, Direct Nash Optimization, addresses these challenges by approximating soft policy iteration updates with a regression-based contrastive objective in a batched manner, which is a much more stable and forgiving learning objective; we establish a concentration bound of $\widetilde{\mathcal{O}}(\nicefrac{{1}}{{N}})$ on the squared total variation error between the learned policy and its soft-policy-iteration target at any given iteration $t$. Theoretically, DNO converges to the Nash equilibrium on average, and in practice it enjoys monotonic improvement across iterations. Training a 7B-parameter LLM with DNO achieves state-of-the-art performance on AlpacaEval 2.0, exceeding both Mistral Large and older versions of GPT-4. We illuminate many of the practical design choices that will aid future development of iterative self-improving algorithms.
## References
* Adolphs et al. [2022] Leonard Adolphs, Tianyu Gao, Jing Xu, Kurt Shuster, Sainbayar Sukhbaatar, and Jason Weston. The cringe loss: Learning what language not to model. _arXiv preprint arXiv:2211.05826_ , 2022.
* Akrour et al. [2012] Riad Akrour, Marc Schoenauer, and Michèle Sebag. April: Active preference learning-based reinforcement learning. In _Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2012, Bristol, UK, September 24-28, 2012. Proceedings, Part II 23_ , pages 116–131. Springer, 2012.
* Amodei et al. [2016] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. _arXiv preprint arXiv:1606.06565_ , 2016.
* Azar et al. [2023] Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. A general theoretical paradigm to understand learning from human preferences. _arXiv preprint arXiv:2310.12036_ , 2023.
* Bai et al. [2022a] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_ , 2022a.
* Bai et al. [2022b] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional ai: Harmlessness from ai feedback. _arXiv preprint arXiv:2212.08073_ , 2022b.
* Beeching et al. [2023] Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. Open LLM Leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, 2023.
* Bertrand et al. [2023] Quentin Bertrand, Wojciech Marian Czarnecki, and Gauthier Gidel. On the limitations of the elo, real-world games are transitive, not additive. In _International Conference on Artificial Intelligence and Statistics_ , pages 2905–2921. PMLR, 2023.
* Bradley and Terry [1952] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_ , 39(3/4):324–345, 1952.
* Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020.
* Bubeck [2015] Sébastien Bubeck. Convex optimization: Algorithms and complexity. _Foundations and Trends® in Machine Learning_ , 8(3-4):231–357, 2015.
* Calandriello et al. [2024] Daniele Calandriello, Daniel Guo, Remi Munos, Mark Rowland, Yunhao Tang, Bernardo Avila Pires, Pierre Harvey Richemond, Charline Le Lan, Michal Valko, Tianqi Liu, et al. Human alignment of large language models through online preference optimisation. _arXiv preprint arXiv:2403.08635_ , 2024.
* Cesa-Bianchi and Lugosi [2006] Nicolo Cesa-Bianchi and Gábor Lugosi. _Prediction, learning, and games_. Cambridge university press, 2006.
* Chen and Jiang [2019] Jinglin Chen and Nan Jiang. Information-theoretic considerations in batch reinforcement learning. In _International Conference on Machine Learning_ , pages 1042–1051. PMLR, 2019.
* Chen et al. [2024] Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning converts weak language models to strong language models. _arXiv preprint arXiv:2401.01335_ , 2024.
* Cheng et al. [2023] Pengyu Cheng, Yifan Yang, Jian Li, Yong Dai, and Nan Du. Adversarial preference optimization. _arXiv preprint arXiv:2311.08045_ , 2023.
* Christiano et al. [2017] Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. _Advances in neural information processing systems_ , 30, 2017.
* Clark et al. [2018] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. _arXiv:1803.05457v1_ , 2018.
* Cui et al. [2023] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Ultrafeedback: Boosting language models with high-quality feedback. _arXiv preprint arXiv:2310.01377_ , 2023.
* Ding et al. [2023] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. _arXiv preprint arXiv:2305.14233_ , 2023.
* Dong et al. [2023] Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, SHUM KaShun, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. _Transactions on Machine Learning Research_ , 2023.
* Dubois et al. [2023] Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S. Liang, and Tatsunori B. Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. _Advances in Neural Information Processing Systems_ , 36, 2023.
* Dudík et al. [2015] Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins, and Masrour Zoghi. Contextual dueling bandits. In _Conference on Learning Theory_ , pages 563–587. PMLR, 2015.
* Elo [1978] Arpad E. Elo. _The rating of chessplayers, past and present_. Arco Pub., New York, 1978. ISBN 0668047216 9780668047210. URL http://www.amazon.com/Rating-Chess-Players-Past-Present/dp/0668047216.
* Ethayarajh et al. [2024] Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model alignment as prospect theoretic optimization. _arXiv preprint arXiv:2402.01306_ , 2024.
* Finn et al. [2016a] Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. _arXiv preprint arXiv:1611.03852_ , 2016a.
* Finn et al. [2016b] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In _International conference on machine learning_ , pages 49–58. PMLR, 2016b.
* Fishburn [1984] Peter C. Fishburn. Probabilistic social choice based on simple voting comparisons. _The Review of Economic Studies_ , 51(4):683–692, 1984.
* Foster and Krishnamurthy [2021] Dylan J. Foster and Akshay Krishnamurthy. Efficient first-order contextual bandits: Prediction, allocation, and triangular discrimination. _Advances in Neural Information Processing Systems_ , 34:18907–18919, 2021.
* Freund and Schapire [1997] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. _Journal of computer and system sciences_ , 55(1):119–139, 1997.
* Griffith et al. [2013] Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L. Isbell, and Andrea L. Thomaz. Policy shaping: Integrating human feedback with reinforcement learning. _Advances in neural information processing systems_ , 26, 2013.
* Gulcehre et al. [2023] Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. Reinforced self-training (rest) for language modeling. _arXiv preprint arXiv:2308.08998_ , 2023.
* Guo et al. [2024] Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, and Mathieu Blondel. Direct language model alignment from online ai feedback. _arXiv preprint arXiv:2402.04792_ , 2024.
* Haarnoja et al. [2018] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _International conference on machine learning_ , pages 1861–1870. PMLR, 2018.
* Hazan et al. [2016] Elad Hazan et al. Introduction to online convex optimization. _Foundations and Trends® in Optimization_ , 2(3-4):157–325, 2016.
* Kakade and Langford [2002] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In _Proceedings of the Nineteenth International Conference on Machine Learning_ , pages 267–274, 2002.
* Kakade [2001] Sham M. Kakade. A natural policy gradient. _Advances in neural information processing systems_ , 14, 2001.
* Kalai and Vempala [2005] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. _Journal of Computer and System Sciences_ , 71(3):291–307, 2005.
* Knox and Stone [2008] W Bradley Knox and Peter Stone. Tamer: Training an agent manually via evaluative reinforcement. In _2008 7th IEEE international conference on development and learning_ , pages 292–297. IEEE, 2008.
* Kramer [1973] Gerald H. Kramer. On a class of equilibrium conditions for majority rule. _Econometrica: Journal of the Econometric Society_ , pages 285–297, 1973.
* Kreweras [1965] Germain Kreweras. Aggregation of preference orderings. In _Mathematics and Social Sciences I: Proceedings of the seminars of Menthon-Saint-Bernard, France (1–27 July 1960) and of Gösing, Austria (3–27 July 1962)_ , pages 73–79, 1965.
* Lattimore and Szepesvári [2020] Tor Lattimore and Csaba Szepesvári. _Bandit algorithms_. Cambridge University Press, 2020.
* Lee et al. [2023] Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. _arXiv preprint arXiv:2309.00267_ , 2023.
* Liu et al. [2024a] Tianqi Liu, Zhen Qin, Junru Wu, Jiaming Shen, Misha Khalman, Rishabh Joshi, Yao Zhao, Mohammad Saleh, Simon Baumgartner, Jialu Liu, Peter J. Liu, and Xuanhui Wang. Lipo: Listwise preference optimization through learning-to-rank. _arXiv preprint arXiv:2402.01878_ , 2024a.
* Liu et al. [2024b] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. In _The Twelfth International Conference on Learning Representations_ , 2024b.
* Mitra et al. [2023] Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah. Orca 2: Teaching small language models how to reason. _arXiv preprint arXiv:2311.11045_ , 2023.
* Mitra et al. [2024] Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math: Unlocking the potential of slms in grade school math. _arXiv preprint arXiv:2402.14830_ , 2024.
* Mnih et al. [2016] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In _International conference on machine learning_ , pages 1928–1937. PMLR, 2016.
* Munos et al. [2023] Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J. Mankowitz, Doina Precup, and Bilal Piot. Nash learning from human feedback. _arXiv preprint arXiv:2312.00886_ , 2023.
* Nemirovskij and Yudin [1983] Arkadij Semenovič Nemirovskij and David Borisovich Yudin. _Problem complexity and method efficiency in optimization_. Wiley-Interscience, 1983.
* OpenAI et al. [2023] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. 
Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. Gpt-4 technical report. _arXiv preprint arXiv:2303.08774_ , 2023.
* Ouyang et al. [2022] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. _Advances in neural information processing systems_ , 35:27730–27744, 2022.
* Owen [2013] Art B. Owen. _Monte Carlo theory, methods and examples_. https://artowen.su.domains/mc/, 2013.
* Rafailov et al. [2023] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. _Advances in Neural Information Processing Systems_ , 36, 2023.
* Ross et al. [2011] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In _Proceedings of the fourteenth international conference on artificial intelligence and statistics_ , pages 627–635. JMLR Workshop and Conference Proceedings, 2011.
* Rosset et al. [2023] Corby Rosset, Guoqing Zheng, Victor Dibia, Ahmed Awadallah, and Paul Bennett. Axiomatic preference modeling for longform question answering. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 11445–11475, 2023.
* Schulman et al. [2015] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In _International conference on machine learning_ , pages 1889–1897. PMLR, 2015.
* Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_ , 2017.
* Shalev-Shwartz et al. [2012] Shai Shalev-Shwartz et al. Online learning and online convex optimization. _Foundations and Trends® in Machine Learning_ , 4(2):107–194, 2012.
* Simpson [1969] Paul B. Simpson. On defining areas of voter choice: Professor tullock on stable voting. _The Quarterly Journal of Economics_ , 83(3):478–490, 1969.
* Singh et al. [2023] Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, and Noah Fiedel. Beyond human data: Scaling self-training for problem-solving with language models. _arXiv preprint arXiv:2312.06585_ , 2023.
* Song et al. [2024] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 38, pages 18990–18998, 2024.
* Stiennon et al. [2020] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. Learning to summarize with human feedback. _Advances in Neural Information Processing Systems_ , 33:3008–3021, 2020.
* Swamy et al. [2024] Gokul Swamy, Christoph Dann, Rahul Kidambi, Zhiwei Steven Wu, and Alekh Agarwal. A minimaximalist approach to reinforcement learning from human feedback. _arXiv preprint arXiv:2401.04056_ , 2024.
* Touvron et al. [2023] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ , 2023.
* Tran et al. [2024] Hoang Tran, Chris Glaze, and Braden Hancock. snorkelai/snorkel-mistral-pairrm-dpo, 2024. https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO.
* Tunstall et al. [2023] Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. Zephyr: Direct distillation of lm alignment. _arXiv preprint arXiv:2310.16944_ , 2023.
* Wang et al. [2022] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. _arXiv preprint arXiv:2203.11171_ , 2022.
* Wang et al. [2023] Yuanhao Wang, Qinghua Liu, and Chi Jin. Is rlhf more difficult than standard rl? _arXiv preprint arXiv:2306.14111_ , 2023.
* Wirth et al. [2017] Christian Wirth, Riad Akrour, Gerhard Neumann, and Johannes Fürnkranz. A survey of preference-based reinforcement learning methods. _Journal of Machine Learning Research_ , 18(136):1–46, 2017.
* Xie et al. [2021] Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. _Advances in neural information processing systems_ , 34:6683–6694, 2021.
* Xie et al. [2023] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, and Sham M. Kakade. The role of coverage in online reinforcement learning. In _The Eleventh International Conference on Learning Representations_ , 2023.
* Xiong et al. [2023] Wei Xiong, Hanze Dong, Chenlu Ye, Han Zhong, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint. _arXiv preprint arXiv:2312.11456_ , 2023.
* Xu et al. [2023a] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions, 2023a.
* Xu et al. [2023b] Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, and Jason Weston. Some things are more cringe than others: Preference optimization with the pairwise cringe loss. _arXiv preprint arXiv:2312.16682_ , 2023b.
* Yang et al. [2023] Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. Rlcd: Reinforcement learning from contrast distillation for language model alignment. _arXiv preprint arXiv:2307.12950_ , 2023.
* Yu et al. [2023] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models, 2023.
* Yuan et al. [2024] Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. Self-rewarding language models. _arXiv preprint arXiv:2401.10020_ , 2024.
* Yuan et al. [2023a] Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. _arXiv preprint arXiv:2308.01825_ , 2023a.
* Yuan et al. [2023b] Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears. _arXiv preprint arXiv:2304.05302_ , 2023b.
* Zellers et al. [2019] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? _arXiv preprint arXiv:1905.07830_ , 2019.
* Zhan et al. [2024] Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. Provable offline preference-based reinforcement learning. In _The Twelfth International Conference on Learning Representations_ , 2024.
* Zhao et al. [2023] Yao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J. Liu. Calibrating sequence likelihood improves conditional language generation. In _The Eleventh International Conference on Learning Representations_ , 2023.
* Zheng et al. [2023] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. _Advances in Neural Information Processing Systems_ , 36, 2023.
Appendix
## Appendix A Extension to Regularized Preferences
In this section, we discuss how to extend the DNO framework to the case of
regularized preferences (defined in Eq. 5),
$\displaystyle\mathcal{P}^{\pi,\pi^{\prime}}_{\tau}(y\succ y^{\prime}\mid
x)=\mathcal{P}(y\succ y^{\prime}\mid x)-\tau\log\frac{\pi(y\mid
x)}{\pi_{\mathsf{ref}}(y\mid x)}+\tau\log\frac{\pi^{\prime}(y^{\prime}\mid
x)}{\pi_{\mathsf{ref}}(y^{\prime}\mid x)},$
which was first introduced and solved by Munos et al. [2023] via the Nash-MD
algorithm discussed earlier.
One can notice that the only difference between SPO and Nash-MD is that SPO
uses the last-iteration policy $\pi_{t}$ both for constructing the reward $r_{t}$
and for performing the soft policy iteration update, whereas Nash-MD uses the
smoothed version $\pi_{t}^{\tau}$ (first defined in Eq. 7),
$\displaystyle\pi_{t}^{\tau}(y\mid
x)\coloneqq\frac{\pi_{t}(y\mid
x)^{1-\nicefrac{{\tau}}{{\eta}}}\,\pi_{\mathsf{ref}}(y\mid
x)^{\nicefrac{{\tau}}{{\eta}}}}{\sum_{y^{\prime}\in\mathcal{Y}}\pi_{t}(y^{\prime}\mid
x)^{1-\nicefrac{{\tau}}{{\eta}}}\,\pi_{\mathsf{ref}}(y^{\prime}\mid
x)^{\nicefrac{{\tau}}{{\eta}}}},~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y},$
(14)
for both. This is what allows Nash-MD to obtain a last-iteration guarantee.
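To make the smoothing in Eq. 14 concrete, the following minimal sketch computes the geometric mixture for a tabular (categorical) policy over a finite response set; the function name and toy distributions are our own illustrative assumptions, not part of the algorithm's specification.

```python
import numpy as np

def smoothed_policy(pi_t, pi_ref, tau, eta):
    """Geometric mixture pi_t^tau of Eq. 14 for a fixed prompt x.
    pi_t and pi_ref are 1-D arrays of probabilities over the same
    finite response set; tau/eta controls the pull toward pi_ref."""
    w = tau / eta
    unnormalized = pi_t ** (1.0 - w) * pi_ref ** w
    return unnormalized / unnormalized.sum()

# As tau/eta -> 1 the mixture collapses to pi_ref; as tau/eta -> 0 it is pi_t.
pi_t = np.array([0.7, 0.2, 0.1])
pi_ref = np.full(3, 1 / 3)
print(smoothed_policy(pi_t, pi_ref, tau=0.5, eta=1.0))
```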
On the other hand, due to the symmetry of regularized preferences, if we
consider the on-average convergence case, SPO can likely be adapted in a
simpler way as follows: for each $t=1,2,\dotsc,T$,
(i) $\displaystyle~{}r_{t}(x,y)\leftarrow{\mathbb{E}}_{y^{\prime}\sim\pi_{t}(\cdot\mid x)}\left[\mathcal{P}(y\succ y^{\prime}\mid x)\right],~{}~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y};$
(ii) $\displaystyle~{}\pi_{t+1}(\cdot\mid x)\leftarrow\frac{1}{Z_{t}(x)}\pi_{t}^{\tau}(\cdot\mid x)\exp\left(\frac{r_{t}(x,\cdot)}{\eta}\right),~{}~{}\forall x\in\mathcal{X},$
where $Z_{t}(x)\coloneqq\sum_{y\in\mathcal{Y}}\pi_{t}^{\tau}(y\mid
x)\exp\left(\frac{r_{t}(x,y)}{\eta}\right)$ is the partition function for
iteration $t$. Here, the smoothed policy $\pi_{t}^{\tau}$ is only used in the
soft policy iteration step, and this coincides with the OMD algorithm from
Munos et al. [2023].
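As a sanity check, one iteration of steps (i)–(ii) can be written out for the tabular case as follows; the preference matrix `P` and all names are illustrative assumptions for a single prompt, not the paper's implementation.

```python
import numpy as np

def spo_regularized_step(pi_t, pi_ref, P, tau, eta):
    """One tabular iteration of steps (i)-(ii) above for a single prompt.
    P[i, j] = P(y_i > y_j | x) is a preference matrix; pi_t and pi_ref are
    categorical policies over the same responses."""
    # (i) reward of each response = expected win rate against pi_t
    r_t = P @ pi_t
    # smoothed policy pi_t^tau of Eq. 14
    w = tau / eta
    pi_smooth = pi_t ** (1.0 - w) * pi_ref ** w
    pi_smooth /= pi_smooth.sum()
    # (ii) soft policy iteration update against the smoothed policy
    unnormalized = pi_smooth * np.exp(r_t / eta)
    return unnormalized / unnormalized.sum()
```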
Algorithm 3 DNO (Regularized Preferences Version)
input: General preference function $\mathcal{P}$, learning rate $\eta$,
coefficient of KL-regularization $\tau$, number of iterations $T$, prompt
distribution $\rho$.
1:Initialize $\pi_{1}\leftarrow{\sf unif}(\mathcal{A})$.
2:for iteration $t=1,2,\dotsc,T$ do
3: Compute $r_{t}(x,y)$ by,
4: Option I:$\triangleright$ for on-average convergence
5: $r_{t}(x,y)\leftarrow{\mathbb{E}}_{y^{\prime}\sim\pi_{t}(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid
x)\right],~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$.
6: Option II:$\triangleright$ for last-iteration convergence
7: $r_{t}(x,y)\leftarrow{\mathbb{E}}_{y^{\prime}\sim\pi_{t}^{\tau}(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid
x)\right],~{}\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$, where
$\pi_{t}^{\tau}$ is defined in Eq. 14.
8: Obtain $\pi_{t+1}$ by,
$\begin{gathered}\pi_{t+1}\leftarrow\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\bigg{\\{}\sigma\left(r_{t}(x,y_{1})-r_{t}(x,y_{2})\right)\log\left[\sigma\left(\eta\log\frac{\pi(y_{1}\mid
x)}{\widetilde{\pi}_{t}^{\tau}(y_{1}\mid x)}-\eta\log\frac{\pi(y_{2}\mid
x)}{\widetilde{\pi}_{t}^{\tau}(y_{2}\mid x)}\right)\right]\\\ \hskip
120.0pt+\sigma\left(r_{t}(x,y_{2})-r_{t}(x,y_{1})\right)\log\left[\sigma\left(\eta\log\frac{\pi(y_{2}\mid
x)}{\widetilde{\pi}_{t}^{\tau}(y_{2}\mid x)}-\eta\log\frac{\pi(y_{1}\mid
x)}{\widetilde{\pi}_{t}^{\tau}(y_{1}\mid
x)}\right)\right]\bigg{\\}},\end{gathered}$ where $\mathcal{D}_{t}$ is
generated by $x\sim\rho,y_{1}\sim\mu_{1,t}(\cdot\mid
x),y_{2}\sim\mu_{2,t}(\cdot\mid x)$ with some policies $\mu_{1,t}$ and
$\mu_{2,t}$, and $\widetilde{\pi}_{t}^{\tau}(y\mid x)\coloneqq\pi_{t}(y\mid
x)^{1-\nicefrac{{\tau}}{{\eta}}}\pi_{\mathsf{ref}}(y\mid
x)^{\nicefrac{{\tau}}{{\eta}}},\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$
(the unnormalized version of $\pi_{t}^{\tau}(y\mid x)$ defined in Eq. 14).
9:end for
10:return $\bar{\pi}={\sf unif}(\pi_{1:T})$.
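As a rough illustration of the contrastive objective in line 8 of Algorithm 3, the sketch below evaluates a Monte Carlo estimate of the (negated) objective from precomputed log-probabilities; all array names and shapes are our own assumptions for a batch of sampled triples $(x, y_1, y_2)$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dno_reg_loss(logp_pi, logp_tilde, r, eta):
    """Negated Monte Carlo estimate of the line-8 objective. Column k
    holds quantities for response y_k: logp_pi = log pi(y_k | x),
    logp_tilde = log of the unnormalized smoothed reference
    tilde{pi}_t^tau(y_k | x), r = r_t(x, y_k)."""
    margin = eta * ((logp_pi[:, 0] - logp_tilde[:, 0])
                    - (logp_pi[:, 1] - logp_tilde[:, 1]))
    w = sigmoid(r[:, 0] - r[:, 1])  # soft label: how often y1 beats y2
    # sigma(r2 - r1) = 1 - w, and log sigma(-margin) mirrors swapping y1, y2
    return -np.mean(w * np.log(sigmoid(margin))
                    + (1.0 - w) * np.log(sigmoid(-margin)))
```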
Algorithm 4 DNO-Prct (Regularized Preferences Version)
input: General preference function $\mathcal{P}$, learning rate
${\widetilde{\eta}}$, coefficient of KL-regularization $\tau$, number of
iterations $T$, reference policy $\pi_{\mathsf{ref}}$, seed dataset
$\mathcal{D}_{0}=\\{(x,y^{\mathsf{gold}})\\}$ where $x\sim\rho$ and
$y^{\mathsf{gold}}\sim\pi_{\mathsf{gold}}(\cdot\mid x)$.
1:Initialize $\pi_{1}\leftarrow\pi_{\mathsf{ref}}$.
2:for iteration $t=1,2,\dotsc,T$ do
3: Sample _batched on-policy_ responses:
4: Option I: $\triangleright$ for on-average convergence
5: Sample $K$ outputs per prompt using the current $\pi_{t}$:
$\\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K}\\}\sim\pi_{t}(\cdot\mid x)$, $\forall
x\in\mathcal{D}_{0}$.
6: Option II: $\triangleright$ for last-iteration convergence
7: Sample $K$ outputs per prompt using the smoothed current policy
$\pi_{t}^{\tau}$:
$\\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K}\\}\sim\pi_{t}^{\tau}(\cdot\mid x)$,
$\forall x\in\mathcal{D}_{0}$, where $\pi_{t}^{\tau}$ is defined in Eq. 14
with $\eta$ replaced by ${\widetilde{\eta}}$.
8: Rank responses: For each $x\in\mathcal{D}_{0}$, rank the corresponding
$\\{y_{t}^{1},y_{t}^{2},\dotsc,y_{t}^{K},y^{\mathsf{gold}}\\}$ using the pair-
wise win-rate by sampling from the general preference function $\mathcal{P}$.
9: Filter and construct preference pairs: Construct
$\mathcal{D}_{t}=\\{(x,y_{t}^{+},y_{t}^{-})\\}$, for all
$x\in\mathcal{D}_{0}$, and $(y_{t}^{+},y_{t}^{-})$ are large-margin pairs
(based on the win-rate rank) within the responses for $x$ from the previous
step.
10: Contrastive learning: Obtain $\pi_{t+1}$ by,
$\displaystyle\pi_{t+1}\leftarrow\mathop{\mathrm{argmax}}_{\pi\in\Pi}{\mathbb{E}}_{(x,y_{t}^{+},y_{t}^{-})\sim\mathcal{D}_{t}}\log\left[\sigma\left({\widetilde{\eta}}\log\frac{\pi(y_{t}^{+}\mid
x)}{\widetilde{\pi}_{t}^{\tau}(y_{t}^{+}\mid
x)}-{\widetilde{\eta}}\log\frac{\pi(y_{t}^{-}\mid
x)}{\widetilde{\pi}_{t}^{\tau}(y_{t}^{-}\mid x)}\right)\right],$ where
$\widetilde{\pi}_{t}^{\tau}(y\mid x)\coloneqq\pi_{t}(y\mid
x)^{1-\nicefrac{{\tau}}{{{\widetilde{\eta}}}}}\pi_{\mathsf{ref}}(y\mid
x)^{\nicefrac{{\tau}}{{{\widetilde{\eta}}}}},\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$
(the unnormalized version of $\pi_{t}^{\tau}(y\mid x)$ defined in Eq. 14,
after accommodating $\eta\to{\widetilde{\eta}}$).
11:end for
12:return best of $\pi_{1:(T+1)}$ on the validation data.
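To make lines 3–9 of Algorithm 4 concrete, here is a hedged sketch of the batched sampling, ranking, and large-margin filtering step (Option I); `pi_t.sample`, the preference oracle `P.win_rate`, and the `margin` threshold are hypothetical interfaces standing in for an LLM sampler and an annotator, not the paper's implementation.

```python
def build_preference_pairs(seed_data, pi_t, P, K, margin):
    """Sketch of lines 3-9 of Algorithm 4 (Option I): sample K on-policy
    responses per prompt, rank candidates by average pairwise win rate,
    and keep only large-margin (best, worst) pairs."""
    dataset = []
    for x, y_gold in seed_data:
        candidates = [pi_t.sample(x) for _ in range(K)] + [y_gold]
        # average win rate of each candidate against all other candidates
        scores = [
            sum(P.win_rate(x, y, y2) for y2 in candidates if y2 is not y)
            / (len(candidates) - 1)
            for y in candidates
        ]
        ranked = sorted(zip(scores, candidates), key=lambda t: t[0])
        (s_min, y_minus), (s_max, y_plus) = ranked[0], ranked[-1]
        if s_max - s_min >= margin:  # large-margin filter
            dataset.append((x, y_plus, y_minus))
    return dataset
```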
Based on the discussion above, we can then obtain the extension of DNO to
regularized preferences in Algorithm 3, and its practical implementation in
Algorithm 4. Note that, similar to Nash-MD, the last-iteration option in both
Algorithm 3 and Algorithm 4 requires sampling from the smoothed policy
$\pi_{t}^{\tau}$ (the mixture between $\pi_{t}$ and $\pi_{\mathsf{ref}}$
defined in Eq. 14). One way to address this is to sample from the token-level
mixture of $\pi_{t}$ and $\pi_{\mathsf{ref}}$ instead, as suggested by
Munos et al. [2023].
## Appendix B Detailed Proofs
In this section, we provide detailed proofs for our theoretical results. Note
that the definitions and assumptions presented here heavily adopt ideas
related to version spaces and concentrability from the reinforcement learning
theory literature [esp., Xie et al., 2021, 2023]. Nevertheless, the
descriptions provided herein are intentionally simplified to elucidate the
core insights into the algorithmic design; a full and exhaustive theoretical
analysis falls outside the primary scope of this paper. We now make the
following definitions and assumptions.
###### Definition 1 (Feasible solution space).
For each iteration $t\in[T]$, we define $\Pi_{t}\subseteq\Pi$ as the feasible
solution space for iteration $t$. The $\pi_{t}$ obtained by Algorithm 1
always belongs to $\Pi_{t}$, regardless of the randomness of the data sampling
procedure in Algorithm 1.
Here, Definition 1 follows a similar spirit as the version space in RL theory
literature, where $\Pi_{t}$ only contains policies that have a small empirical
loss, which can be further converted to a small population loss under standard
concentration procedures.
###### Definition 2 (Concentrability coefficient over the feasible solution
space).
For all $t\in[T]$, suppose $\Pi_{t}$ is defined in Definition 1, and
$\mu_{1,t}$ and $\mu_{2,t}$ are some given data-generating policies. Now, for any
$t\in[T]$, we define $\mathfrak{C}_{t}$ to be the concentrability coefficient
at iteration $t$ over its feasible solution space, where
$\displaystyle\frac{{\mathbb{E}}_{x\sim\rho,y_{1}\sim\pi_{t+1}^{\star}(\cdot\mid
x),y_{2}\sim\pi_{t+1}(\cdot\mid
x)}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid x)}{\pi_{t+1}(y_{1}\mid
x)}-\log\frac{\pi_{t+1}^{\star}(y_{2}\mid x)}{\pi_{t+1}(y_{2}\mid
x)}\right)^{2}\right]}{{\mathbb{E}}_{x\sim\rho,y_{1}\sim\mu_{1,t}(\cdot\mid
x),y_{2}\sim\mu_{2,t}(\cdot\mid
x)}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid x)}{\pi_{t+1}(y_{1}\mid
x)}-\log\frac{\pi_{t+1}^{\star}(y_{2}\mid x)}{\pi_{t+1}(y_{2}\mid
x)}\right)^{2}\right]}\leq\mathfrak{C}_{t},$
for any $\pi_{t+1}\in\Pi_{t+1}$ and any
$\pi_{t+1}^{\star}\in\left\\{\frac{1}{Z_{\pi}(x)}\pi(\cdot\mid
x)\exp\left(\frac{r_{\pi}(x,\cdot)}{\eta}\right):\pi\in\Pi_{t}\right\\}$; and
here we use the definition of
$r_{\pi}(x,y)\coloneqq{\mathbb{E}}_{y^{\prime}\sim\pi(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid
x)\right],\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$, and
$Z_{\pi}(x)=\sum_{y\in\mathcal{Y}}\pi(y\mid
x)\exp\left(\frac{r_{\pi}(x,y)}{\eta}\right),\forall x\in\mathcal{X}$.
Definition 2 can be viewed as a natural extension of concentrability from the
(offline) reinforcement learning literature to our setup.
###### Assumption 1 (Realizability over the feasible solution space).
For any $\pi\in\Pi_{t}$, where $\Pi_{t}$ is defined in Definition 1 for all
$t\in[T]$, we assume that the policy class $\Pi$ contains the following soft
policy iteration update:
$\displaystyle\pi^{\sf SPI}(\cdot\mid
x)\coloneqq\frac{1}{Z_{\pi}(x)}\pi(\cdot\mid
x)\exp\left(\frac{r_{\pi}(x,\cdot)}{\eta}\right),$
where $r_{\pi}(x,y)\coloneqq{\mathbb{E}}_{y^{\prime}\sim\pi(\cdot\mid
x)}\left[\mathcal{P}(y\succ y^{\prime}\mid
x)\right],\forall(x,y)\in\mathcal{X}\times\mathcal{Y}$, and
$Z_{\pi}(x)=\sum_{y\in\mathcal{Y}}\pi(y\mid
x)\exp\left(\frac{r_{\pi}(x,y)}{\eta}\right),\forall x\in\mathcal{X}$ is the
partition function.
###### Assumption 2 (Boundedness over the feasible solution space).
Suppose $\Pi_{t}$ is defined in Definition 1 for all $t\in[T]$, then we assume
$\log\frac{\pi(y\mid x)}{\pi_{t}(y\mid x)}\in[-R_{\max},R_{\max}]$ for all
$\pi\in\Pi$, $\pi_{t}\in\Pi_{t}$, and $(x,y)\in\mathcal{X}\times\mathcal{Y}$.
Assumption 2 may appear somewhat unconventional, as it explicitly assumes
boundedness of the log-probability ratios. Nonetheless, it is important to
note that the value of $\log\frac{\pi(y\mid x)}{\pi_{t}(y\mid x)}$ is directly
measurable and controllable in practice, unlike in common use cases such as
maximum likelihood problems.
###### Theorem 2 (Formal Version of Theorem 1).
Suppose Assumptions 1 and 2 hold, and fix an arbitrary iteration $t\in[T]$.
Suppose $\pi_{t+1}$ is obtained from line 4 of Algorithm 1, and
$\pi_{t+1}^{\star}$ is defined in Eq. 9. Then, we have
$\displaystyle{\mathbb{E}}_{x\sim\rho}\left[\left(D_{\mathrm{TV}}(\pi_{t+1}(\cdot\mid
x),\pi_{t+1}^{\star}(\cdot\mid
x))\right)^{2}\right]\leq\mathcal{O}\left(\frac{\mathfrak{C}_{t}R_{\max}^{2}\log(\nicefrac{{|\Pi|}}{{\delta}})}{N}\right),$
where $\mathfrak{C}_{t}$ is defined in Definition 2.
###### Proof of Theorem 2.
We will now present the proof using the following two-step procedure.
Step 1: From regression with log loss to squared error bound. By standard
results on regression with the logarithmic loss, we know
${\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\Big{[}\sigma\big{(}\Delta_{t}^{\star}(x,y_{1},y_{2})\big{)}\log\left[\sigma\big{(}\Delta_{\pi_{t+1},t}(x,y_{1},y_{2})\big{)}\right]+\sigma\big{(}\Delta_{t}^{\star}(x,y_{2},y_{1})\big{)}\log\left[\sigma\big{(}\Delta_{\pi_{t+1},t}(x,y_{2},y_{1})\big{)}\right]\Big{]}\lesssim\frac{\log(\nicefrac{{|\Pi|}}{{\delta}})}{N}.$
(15)
Note that similar results could also apply beyond finite $\Pi$. For
simplicity, we omit the detailed discussion in our paper. For more in-depth
discussions about regression with the logarithmic loss, the reader can refer
to, e.g., Foster and Krishnamurthy [2021].
Next, by Pinsker's inequality, we have for any $z,{\widehat{z}}\in[0,1]$,
$\displaystyle\frac{(z-{\widehat{z}})^{2}}{2}\leq
z\log\left(\frac{z}{{\widehat{z}}}\right)+(1-z)\log\left(\frac{1-z}{1-{\widehat{z}}}\right),$
where the right-hand side is the KL divergence between $\mathrm{Bernoulli}(z)$ and $\mathrm{Bernoulli}({\widehat{z}})$.
Substituting $z$ and ${\widehat{z}}$ according to Eq. 11 and combining with Eq. 15, we obtain
$\displaystyle{\mathbb{E}}_{(x,y_{1},y_{2})\sim\mathcal{D}_{t}}\left[\big{(}\sigma\left(r_{t}(x,y_{1})-r_{t}(x,y_{2})\right)-\sigma\left(r_{\pi_{t+1},t}(x,y_{1})-r_{\pi_{t+1},t}(x,y_{2})\right)\big{)}^{2}\right]\lesssim\frac{\log(\nicefrac{{|\Pi|}}{{\delta}})}{N},$
(16)
where $a\lesssim b$ means $a\leq c\cdot b$ for some absolute constant $c$.
Then, by the standard concentration for squared loss, e.g., Lemma A.4 of Xie
et al. [2021] with $\gamma=0$, Eq. 16 implies
$\displaystyle{\mathbb{E}}_{(x,y_{1},y_{2})\sim\rho\times\mu_{1:2,t}}\left[\big{(}\sigma\left(r_{t}(x,y_{1})-r_{t}(x,y_{2})\right)-\sigma\left(r_{\pi_{t+1},t}(x,y_{1})-r_{\pi_{t+1},t}(x,y_{2})\right)\big{)}^{2}\right]\lesssim\frac{\log(\nicefrac{{|\Pi|}}{{\delta}})}{N},$
(17)
where we use “$\times$” as the shorthand of joint distribution for the sake of
simplicity, for example, $(x,y_{1},y_{2})\sim\rho\times\mu_{1:2,t}$ is
shorthand for $x\sim\rho,y_{1}\sim\mu_{1,t}(\cdot\mid
x),y_{2}\sim\mu_{2,t}(\cdot\mid x)$.
By the definition of $r_{t}$ in line 3 of Algorithm 1, we know $r_{t}(x,y)\in[0,1]$
for all $(x,y)\in\mathcal{X}\times\mathcal{Y}$. Thus, by a variant of the mean
value theorem, we know
$\displaystyle~{}\big{|}r_{t}(x,y_{1})-r_{t}(x,y_{2})-r_{\pi_{t+1},t}(x,y_{1})+r_{\pi_{t+1},t}(x,y_{2})\big{|}$
(18) $\displaystyle\leq$ $\displaystyle~{}\frac{\eta
R_{\max}}{1-\sigma(1)}\big{|}\sigma\left(r_{t}(x,y_{1})-r_{t}(x,y_{2})\right)-\sigma\left(r_{\pi_{t+1},t}(x,y_{1})-r_{\pi_{t+1},t}(x,y_{2})\right)\big{|},$
for any $(x,y_{1},y_{2})\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Y}$,
where $R_{\max}$ is introduced in Assumption 2. To see this, let
$a\coloneqq r_{t}(x,y_{1})-r_{t}(x,y_{2})\in[-1,1]$ and $b\coloneqq
r_{\pi_{t+1},t}(x,y_{1})-r_{\pi_{t+1},t}(x,y_{2})\in[-\eta R_{\max},\eta
R_{\max}]$; one can then directly verify that the slope we need to bound,
$\nicefrac{{\big{|}a-b\big{|}}}{{\big{|}\sigma\left(a\right)-\sigma\left(b\right)\big{|}}}$,
attains its maximum at $a=1$ and $b=\eta R_{\max}$.
Combining Eqs. 17 and 18, we obtain
$\displaystyle{\mathbb{E}}_{(x,y_{1},y_{2})\sim\rho\times\mu_{1:2,t}}\left[\big{(}r_{t}(x,y_{1})-r_{t}(x,y_{2})-r_{\pi_{t+1},t}(x,y_{1})+r_{\pi_{t+1},t}(x,y_{2})\big{)}^{2}\right]\lesssim\frac{\eta^{2}R_{\max}^{2}\log(\nicefrac{{|\Pi|}}{{\delta}})}{N}.$
(19)
Step 2: Concentration in the policy space. We now reason about the
concentration of $\pi_{t+1}\to\pi_{t+1}^{\star}$ from Eq. 19, where
$\pi_{t+1}^{\star}$ is defined in Eq. 9 and $\pi_{t+1}$ is the policy
corresponding to the learned $r_{\pi_{t+1},t}$. By the definition of
$r_{\pi,t}$ in Eq. 10, we have
$\displaystyle~{}r_{t}(x,y_{1})-r_{t}(x,y_{2})-r_{\pi_{t+1},t}(x,y_{1})+r_{\pi_{t+1},t}(x,y_{2})$
$\displaystyle=$
$\displaystyle~{}r_{t}(x,y_{1})-r_{t}(x,y_{2})-\eta\log\frac{\pi_{t+1}(y_{1}\mid
x)}{\pi_{t}(y_{1}\mid x)}+\eta\log\frac{\pi_{t+1}(y_{2}\mid
x)}{\pi_{t}(y_{2}\mid x)}$ $\displaystyle=$
$\displaystyle~{}\eta\log\frac{\pi_{t+1}^{\star}(y_{1}\mid
x)}{\pi_{t+1}(y_{1}\mid x)}-\eta\log\frac{\pi_{t+1}^{\star}(y_{2}\mid
x)}{\pi_{t+1}(y_{2}\mid x)}.$
This implies
$\displaystyle{\mathbb{E}}_{(x,y_{1},y_{2})\sim\rho\times\mu_{1:2,t}}\left[\left(\eta\log\frac{\pi_{t+1}^{\star}(y_{1}\mid
x)}{\pi_{t+1}(y_{1}\mid x)}-\eta\log\frac{\pi_{t+1}^{\star}(y_{2}\mid
x)}{\pi_{t+1}(y_{2}\mid
x)}\right)^{2}\right]\lesssim\frac{\eta^{2}R_{\max}^{2}\log(\nicefrac{{|\Pi|}}{{\delta}})}{N}$
$\displaystyle\Longrightarrow{\mathbb{E}}_{(x,y_{1},y_{2})\sim\rho\times\pi_{t+1}^{\star}\times\pi_{t+1}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid
x)}{\pi_{t+1}(y_{1}\mid x)}-\log\frac{\pi_{t+1}^{\star}(y_{2}\mid
x)}{\pi_{t+1}(y_{2}\mid
x)}\right)^{2}\right]\lesssim\frac{\mathfrak{C}_{t}R_{\max}^{2}\log(\nicefrac{{|\Pi|}}{{\delta}})}{N},$
(20)
where the last step follows from the definition of $\mathfrak{C}_{t}$
(Definition 2).
On the other hand, we have
$\displaystyle~{}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\rho\times\pi_{t+1}^{\star}\times\pi_{t+1}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid
x)}{\pi_{t+1}(y_{1}\mid x)}-\log\frac{\pi_{t+1}^{\star}(y_{2}\mid
x)}{\pi_{t+1}(y_{2}\mid x)}\right)^{2}\right]$ $\displaystyle=$
$\displaystyle~{}{\mathbb{E}}_{(x,y_{1},y_{2})\sim\rho\times\pi_{t+1}^{\star}\times\pi_{t+1}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid
x)}{\pi_{t+1}(y_{1}\mid
x)}\right)^{2}+\left(\log\frac{\pi_{t+1}^{\star}(y_{2}\mid
x)}{\pi_{t+1}(y_{2}\mid
x)}\right)^{2}-2\left(\log\frac{\pi_{t+1}^{\star}(y_{1}\mid
x)}{\pi_{t+1}(y_{1}\mid
x)}\right)\cdot\left(\log\frac{\pi_{t+1}^{\star}(y_{2}\mid
x)}{\pi_{t+1}(y_{2}\mid x)}\right)\right]$ $\displaystyle=$
$\displaystyle~{}{\mathbb{E}}_{(x,y)\sim\rho\times\pi_{t+1}^{\star}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y\mid
x)}{\pi_{t+1}(y\mid
x)}\right)^{2}\right]+{\mathbb{E}}_{(x,y)\sim\rho\times\pi_{t+1}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y\mid
x)}{\pi_{t+1}(y\mid x)}\right)^{2}\right]$
$\displaystyle~{}+2{\mathbb{E}}_{x\sim\rho}\Bigg{[}\underbrace{{\mathbb{E}}_{y\sim\pi_{t+1}^{\star}(\cdot\mid
x)}\left[\log\frac{\pi_{t+1}^{\star}(y\mid x)}{\pi_{t+1}(y\mid
x)}\right]}_{=D_{\mathrm{KL}}(\pi^{\star}_{t+1}(\cdot\mid
x)~{}\|~{}\pi_{t+1}(\cdot\mid
x))}\cdot\underbrace{{\mathbb{E}}_{y\sim\pi_{t+1}(\cdot\mid
x)}\left[\log\frac{\pi_{t+1}(y\mid x)}{\pi_{t+1}^{\star}(y\mid
x)}\right]}_{=D_{\mathrm{KL}}(\pi_{t+1}(\cdot\mid
x)~{}\|~{}\pi^{\star}_{t+1}(\cdot\mid x))}\Bigg{]}$
$\displaystyle\geq$
$\displaystyle~{}{\mathbb{E}}_{(x,y)\sim\rho\times\pi_{t+1}^{\star}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y\mid
x)}{\pi_{t+1}(y\mid
x)}\right)^{2}\right]+{\mathbb{E}}_{(x,y)\sim\rho\times\pi_{t+1}}\left[\left(\log\frac{\pi_{t+1}^{\star}(y\mid
x)}{\pi_{t+1}(y\mid x)}\right)^{2}\right].$ (21)
Next, we fix an arbitrary ${\widetilde{x}}\in\mathcal{X}$, and we have
$\displaystyle~{}{\mathbb{E}}_{y\sim\pi_{t+1}^{\star}(\cdot\mid{\widetilde{x}})}\left[\left(\log\frac{\pi_{t+1}^{\star}(y\mid{\widetilde{x}})}{\pi_{t+1}(y\mid{\widetilde{x}})}\right)^{2}\right]+{\mathbb{E}}_{y\sim\pi_{t+1}(\cdot\mid{\widetilde{x}})}\left[\left(\log\frac{\pi_{t+1}^{\star}(y\mid{\widetilde{x}})}{\pi_{t+1}(y\mid{\widetilde{x}})}\right)^{2}\right]$
$\displaystyle\geq$
$\displaystyle~{}\left({\mathbb{E}}_{y\sim\pi_{t+1}^{\star}(\cdot\mid{\widetilde{x}})}\left[\left|\log\frac{\pi_{t+1}^{\star}(y\mid{\widetilde{x}})}{\pi_{t+1}(y\mid{\widetilde{x}})}\right|\right]\right)^{2}+\left({\mathbb{E}}_{y\sim\pi_{t+1}(\cdot\mid{\widetilde{x}})}\left[\left|\log\frac{\pi_{t+1}^{\star}(y\mid{\widetilde{x}})}{\pi_{t+1}(y\mid{\widetilde{x}})}\right|\right]\right)^{2}$
(by Jensen’s inequality)
$\displaystyle\gtrsim$
$\displaystyle~{}\left({\mathbb{E}}_{y\sim\pi_{t+1}^{\star}(\cdot\mid{\widetilde{x}})}\left[\left|\log\frac{\pi_{t+1}^{\star}(y\mid{\widetilde{x}})}{\pi_{t+1}(y\mid{\widetilde{x}})}\right|\right]+{\mathbb{E}}_{y\sim\pi_{t+1}(\cdot\mid{\widetilde{x}})}\left[\left|\log\frac{\pi_{t+1}^{\star}(y\mid{\widetilde{x}})}{\pi_{t+1}(y\mid{\widetilde{x}})}\right|\right]\right)^{2},$
(22)
where $a\gtrsim b$ means $a\geq c\cdot b$ for some absolute constant $c$.
We now recall the definition of $f$-divergence:
$D_{f}(p,q)\coloneqq{\mathbb{E}}_{y\sim q}[f(\nicefrac{{p(y)}}{{q(y)}})]$ for
two distributions $p$ and $q$, where $f:\mathbb{R}^{+}\to\mathbb{R}$ is convex
with $f(1)=0$. Thus, we can notice that,
$\displaystyle{\mathbb{E}}_{y\sim
p}\left[\left|\log\frac{p(y)}{q(y)}\right|\right]+{\mathbb{E}}_{y\sim
q}\left[\left|\log\frac{p(y)}{q(y)}\right|\right]=D_{f_{1}}(p,q),\quad\text{where}~{}f_{1}(u)\coloneqq(1+u)\cdot\left|\log(u)\right|,~{}u\in\mathbb{R}^{+}.$
(23)
# Efficient Long-Range Entanglement using Dynamic Circuits
Elisa Bäumer <EMAIL_ADDRESS> IBM Quantum, IBM Research – Zurich, 8803 Rüschlikon, Switzerland;
Vinay Tripathi, IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA, and Department of Physics & Astronomy, University of Southern California, Los Angeles, California 90089, USA;
Derek S. Wang, IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA;
Patrick Rall, IBM Quantum, MIT-IBM Watson AI Lab, Cambridge, MA 02142, USA;
Edward H. Chen, IBM Quantum, Almaden Research Center, San Jose, CA 95120, USA, and IBM Quantum, Research Triangle Park, NC 27709, USA;
Swarnadeep Majumder, IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA;
Alireza Seif, IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA;
Zlatko K. Minev, IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
###### Abstract
Quantum simulation traditionally relies on unitary dynamics, inherently
imposing efficiency constraints on the generation of intricate entangled
states. In principle, these limitations can be superseded by non-unitary,
dynamic circuits. These circuits exploit measurements alongside conditional
feed-forward operations, providing a promising approach for long-range
entangling gates, higher effective connectivity of near-term hardware, and
more efficient state preparations. Here, we explore the utility of shallow
dynamic circuits for creating long-range entanglement on large-scale quantum
devices. Specifically, we study two tasks: CNOT gate teleportation between up
to 101 qubits by feeding forward 99 mid-circuit measurement outcomes, and the
preparation of Greenberger–Horne–Zeilinger (GHZ) states with genuine
entanglement. In the former, we observe that dynamic circuits can outperform
their unitary counterparts. In the latter, by tallying instructions of
compiled quantum circuits, we provide an error budget detailing the obstacles
that must be addressed to unlock the full potential of dynamic circuits.
Looking forward, we expect dynamic circuits to be useful for generating long-
range entanglement in the near term on large-scale quantum devices.
## I Introduction
Quantum systems present two distinct modes of evolution: deterministic unitary
evolution, and stochastic evolution as the consequence of quantum
measurements. To date, quantum computations predominantly utilize unitary
evolution to generate complex quantum states for information processing and
simulation. However, due to inevitable errors in current quantum devices [1],
the computational reach of this approach is constrained by the depth of the
quantum circuits that can realistically be implemented on noisy devices. The
introduction of non-unitary dynamic circuits, or adaptive circuits, may be
able to overcome some of these limitations by employing mid-circuit
measurements and feed-forward operations. Such conditional operations are a
necessary ingredient for quantum error correction (see, e.g., Ref. [2]). In
the near term, dynamic circuits present a promising avenue for generating
long-range entanglement, a task at the heart of quantum algorithms. This
includes both implementation of long-range entangling gates that, due to local
connectivity among the qubits in many quantum platforms, can require deep
unitary quantum circuits, and preparation of many-qubit entangled [3, 4] and
topologically ordered quantum states [5, 6, 7, 8, 9, 10, 11, 12, 13].
From a physical standpoint, the entanglement needs to propagate across the
entire range between the qubits. Given that entanglement cannot spread
faster than the information light cone [14, 15], entangling two qubits that
are a distance $n$ apart requires a minimum two-qubit gate depth that scales
as $\mathcal{O}(n)$, and even when assuming all-to-all connectivity, the
generation of entanglement over $n$ qubits necessitates a minimum two-qubit
gate depth of $\mathcal{O}(\log n)$. Thus, the task becomes challenging when
applying only unitary gates. Using dynamic circuits, the spread of information
can be mostly conducted by classical calculations, which can be faster and
with a higher fidelity than the quantum gates, and long-range entanglement can
be created in a shallow quantum circuit [16, 17, 18], i.e. the depth of
quantum gates is constant for any $n$.
While dynamic circuits have been explored in small-scale experiments [19, 20,
21, 22, 23], only recently have there been experimental capabilities on large-
scale quantum devices. However, most demonstrations (with the exception of
e.g. Refs. [24, 25, 26]) have utilized post-selection [27] or post-processing
[28, 29] instead of feed-forward to prepare entangled states. Such approaches
enable the study of properties of the state prepared in isolation, but have
limited applicability when the state preparation is part of a larger quantum
information processing task.
Here, we explore the utility of shallow dynamic circuits for creating long-
range entanglement on large-scale superconducting quantum devices. In Section
II, we demonstrate an advantage with dynamic circuits by teleporting a long-
range entangling CNOT gate over up to 101 locally connected superconducting
qubits. We also discuss how this approach can be generalized to more complex
gates, such as the three-qubit Toffoli gate. Then, in Section III, we prepare
a long-range entangled state, the GHZ state [3], with a dynamic circuit. We
show that—with a composite error mitigation stack customized for the hardware
implementation of dynamic circuits—we can prepare genuinely entangled GHZ
states but fall short of state-of-the-art system sizes achieved with unitary
gates due to hardware limitations. We predict conditions under which dynamic
circuits should be advantageous over unitary circuits based on our error
budget calculation.
Figure 1: Teleporting a CNOT gate for long-range entanglement. (a) Left:
Circuit for a long-range CNOT gate spanning a 1D chain of $n$-qubits subject
to nearest-neighbor connections only. Middle: Equivalent unitary decomposition
into implementable CNOT gates; circuit depth $\mathcal{O}(n)$. Right:
Equivalent circuit employing measurements with feed-forward operations;
circuit depth $\mathcal{O}(1)$. If the post-measurement state is unused, feed-
forward operations can be handled in post-processing, eliminating the need for
their experimental implementation. Yellow regions indicate the idle time
during CNOT gates on other qubits as well as during measurement and feed-
forward (which is denoted by duration $\mu$). (b) Error model inputs for
unitary, measurement-based, and dynamic-circuit CNOT protocols comprise the
total number of: non-zero idle-block times, CNOT gates, and additional
measurements. (c) Experimental results, where dynamic circuits offer improved
fidelity for CNOT gate teleportation across a qubit chain $\gtrsim$ 10 qubits.
(d) Map of a 127-qubit heavy-hexagonal processor, ibm_sherbrooke, overlaid
with system configurations for long-range gate teleportation across a locally
connected bus. To establish an effective all-to-all connectivity, we show one
possible strategy of dividing the qubits into system (purple and orange) and
sacrificial ancilla (turquoise and blue for extra connections) qubits. To
parallelize gate execution with increased connectivity, orange qubits can be
used as ancillas. We show how a particular long-range CNOT can be implemented
through an ancilla bus marked as turquoise spins.
## II Gate teleportation
The limited connectivity between qubits in many quantum computing
platforms can result in the compilation of non-local operations into
deep and error-prone unitary circuits. A potential solution is the use of
shallow dynamic circuits. The crucial ingredient for such protocols is a long-
range CNOT gate from the first to the $n$th qubit, as shown on the left in Fig.
1(a). In the following, we demonstrate a regime under which dynamic circuits
enable higher-fidelity long-range CNOT gates via gate teleportation. We first
describe the dynamic circuit and compare to its equivalent unitary
counterpart. We argue, using a simple error budget, that there exists a regime
in which the dynamic circuit implementation has an advantage over the unitary
one, see Fig. 1(b). Then, using up to $101$ qubits on a superconducting
processor, we demonstrate a crossover in the fidelity of CNOT gate
teleportation, where dynamic circuits perform better for entangling qubits
over longer ranges; see Fig. 1(c). This gate teleportation scheme enables an
effective all-to-all connectivity in devices with a more limited connectivity,
such as those on a heavy-hexagonal lattice. By using some of the qubits as
ancillas for measurement and classical feed-forward operations, the ancilla
qubits form a bus that connects all system qubits with each other. Therefore,
by sacrificing some of the qubits in a large device with limited connectivity,
we gain effective access to an all-to-all connected device with fewer qubits;
see Fig 1(d). Finally, we show that these ideas can be generalized to
teleporting multi-qubit gates, such as the Toffoli or CCZ gate.
### II.1 CNOT
We describe the dynamic circuit for CNOT gate teleportation, shown on the
right in Fig. 1(a) and derived in Appendix A.1. Importantly, this dynamic
circuit can be straightforwardly extended for any number of qubits $n$ (where
$n$ is the number of ancillas) such that the depth remains constant for any
initial states $\ket{\varphi_{1}}$ $\left(\ket{\varphi_{2}}\right)$ of the
control (target) qubit. We expect the error to be dominated by the $n$ mid-
circuit measurements, $n+1$ CNOT gates parallelized over 2 gate layers, and
idle time mostly over the classical feed-forward time. Note that in this
particular realization, each of the $n$ ancilla qubits between the two system
qubits must be in the state $\ket{0}$. Therefore, during the course of the
gate teleportation, the ancillas cannot also be used as memory qubits, further
motivating the division of qubits into system and sacrificial ancilla qubits
in Fig. 1(d).
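For intuition, the single-ancilla ($n=1$) version of this teleported CNOT can be written as a dynamic circuit in a few lines; the sketch below uses Qiskit's `if_test` feed-forward and is our illustrative reduction of the protocol, not the exact $n$-ancilla circuit of Fig. 1(a).

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

ctrl = QuantumRegister(1, "c")   # control qubit
anc = QuantumRegister(1, "a")    # sacrificial ancilla, prepared in |0>
tgt = QuantumRegister(1, "t")    # target qubit
m = ClassicalRegister(1, "m")
qc = QuantumCircuit(ctrl, anc, tgt, m)

qc.cx(ctrl, anc)           # copy the control's parity onto the ancilla
qc.cx(anc, tgt)            # apply that parity to the target
qc.h(anc)                  # measure the ancilla in the X basis...
qc.measure(anc, m)
with qc.if_test((m, 1)):   # ...and feed forward a Z byproduct correction
    qc.z(ctrl)
```

Measuring the ancilla in the X basis leaves the control and target exactly as if a CNOT had acted between them, up to a Z byproduct on the control that the conditional feed-forward removes.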
We also present an equivalent, low-error unitary counterpart in the middle of
Fig. 1(a). (In Appendix B, we propose several different unitary
implementations of the long-range CNOT gate. Based on experimental results, as
well as the noise model described in Appendix E that gives rise to the error
budget described in Appendix B.2, we select this one.) In this unitary
realization, the system qubits are connected by a bus of ancilla qubits that
are initialized in and returned to the $|0\rangle$ state, just as in its
dynamic counterpart. In our particular compilation, throughout the execution
of the circuit, qubits that are not in the $|\phi_{1}\rangle$ or
$|\phi_{2}\rangle$ state are in the $|0\rangle$ state. Doing so minimizes both
decoherence and cross-talk errors intrinsic to our superconducting qubit
design. Therefore, relative to the dynamic version, there is no error due to
idle time or mid-circuit measurements, although there are $\sim$4 times more
CNOT gates.
A summary of the error budgets for the dynamic and unitary circuits is in Fig.
1(b). Based on this table, we expect that dynamic circuits should be
advantageous over unitary circuits if the additional $n$ mid-circuit
measurements in the dynamic circuit introduce less error than the $3n$ extra
CNOT gates in the unitary circuit, assuming $n$ is large enough such that the
idling error $\mu$ incurred during measurement and classical feed-forward in
the dynamic circuit is relatively small. Importantly, we should note that
these error analyses only consider the gate error on the two respective
qubits, but not the error introduced on other qubits, which we expect to be
much larger in the unitary case due to the linear depth. Thus, the constant-
depth dynamic circuit might be even more advantageous than what we can see
from the gate fidelity.
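As a back-of-the-envelope illustration of this crossover argument, one can tally the budgets of Fig. 1(b) as first-order (summed) infidelities with assumed, not measured, per-component error rates; the numbers below are our own placeholders chosen only so that the crossover lands near $n\approx 10$, as observed experimentally.

```python
# Assumed per-component error rates (illustrative placeholders only).
eps_cx, eps_meas, eps_idle = 7e-3, 1.5e-2, 8e-2

def infidelity_dynamic(n):
    # n + 1 CNOTs, n mid-circuit measurements, fixed feed-forward idle penalty
    return (n + 1) * eps_cx + n * eps_meas + eps_idle

def infidelity_unitary(n):
    # roughly 4x the CNOT count of the dynamic circuit, no measurements/idling
    return 4 * (n + 1) * eps_cx

for n in (5, 10, 20, 40):
    print(n, infidelity_dynamic(n) < infidelity_unitary(n))
```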
To determine the experimental gate fidelity, let our ideal unitary channel be
$\mathcal{U}(\rho)\coloneqq U\rho U^{\dagger}$ and its noisy version be
$\tilde{\mathcal{U}}(\rho)\coloneqq\mathcal{U}(\Lambda(\rho))$, where
$\Lambda$ is the effective gate noise channel and $\rho$ is a quantum state.
The average gate fidelity of the noisy gate is
$\mathcal{F}_{\mathrm{avg}}\left(\mathcal{U},\tilde{\mathcal{U}}\right)=\int\mathrm{d}\psi\,\operatorname{Tr}\left[\mathcal{U}\left(\rho_{\psi}\right)\tilde{\mathcal{U}}\left(\rho_{\psi}\right)\right]$,
where the Haar average is taken over the pure states
$\rho_{\psi}=\left|\psi\vphantom{\psi}\right\rangle\left\langle\vphantom{\psi}\psi\right|$.
This fidelity can be faithfully estimated from Pauli measurements on the
system, using Monte Carlo process certification [30, 31], as detailed in
Appendix C.2.
The results from a superconducting quantum processor are shown in Fig. 1(c).
As expected, for a small number of qubits $n\lesssim 10$ the unitary
implementation yields the best fidelities. However, for increasing $n$ it
converges much faster to the fidelity of a random gate (0.25) than the dynamic
circuits implementation, which converges to a value slightly below 0.4. These
align well with the error budget analysis in Appendix B.2 and the noise model
predictions depicted in Appendix E. Note that, in the limit of large $n$, the
fidelities of the measurement-based scheme are limited by the $Z$ and $X$
corrections on $\ket{\phi_{1}}$ and $\ket{\phi_{2}}$ (see Fig. 1(a)). A
straightforward derivation using this noise model shows that the minimum
possible process fidelity due to only incorrect $Z$ and $X$ corrections
(without the fixed infidelity from the idle time and CNOT gates) is 0.25,
which converts to a gate fidelity of 0.4.
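For completeness, this conversion follows from the standard relation between average gate fidelity and process fidelity for a $d$-dimensional channel (here two qubits, $d=4$):

$\mathcal{F}_{\mathrm{avg}}=\frac{d\,\mathcal{F}_{\mathrm{pro}}+1}{d+1}=\frac{4\times 0.25+1}{4+1}=0.4.$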
The measurement-based protocol with post-processing performs slightly better
than the dynamic circuits as the former does not incur errors from the
classical feed-forward, allowing us to isolate the impact of classical feed-
forward from other errors, such as the $n+1$ intermediate CNOT gates and mid-
circuit measurements. Note, however, that the post-processing approach is
generally not scalable if further circuit operations follow the teleported
CNOT, due to the need to simulate large system sizes; this further emphasizes
the advantage of dynamic circuits, as errors rooted in classical feed-forward
are reduced. Overall, we find that CNOT gates over large distances are more
efficiently executed with dynamic circuits than unitary ones.
### II.2 Toffoli or CCZ
Dynamic circuits can also be applied to more efficiently compile multi-qubit
gates. As an example, we describe how the CCZ, or Toffoli gate up to two
single-qubit Hadamard gates, can be implemented by optimizing multiple
teleported CNOT gates. Compilation of the unitary circuit on a 1D chain of
$n+3$ qubits using CNOT gates naïvely requires a two-qubit gate depth of
$\mathcal{O}\left(n\right)$. Using dynamic circuits, we can implement this
long-range entangling gate in shallow depth. Naïvely, one could successively
implement each CNOT gate of the typical Toffoli decomposition (shown at the
top of Fig. 2(a)) using the gate teleportation described previously. However,
involving an ancillary qubit between the three system qubits to merge the
teleported gates, as shown at the bottom of Fig. 2(a), allows for a more
efficient implementation with the dynamic circuit; see Fig. 2(b). In total,
this formulation requires $n+1$ measurements, $n+6$ CNOT gates, and 5 feed-
forward operations divided across two sequential steps. Notably, as most
qubits are projectively measured early in the circuit, the idling error should
be low. Thus, we expect this shallow implementation with dynamic circuits to
be advantageous over its unitary counterpart, especially for large $n$.
Figure 2: CCZ with (a) a unitary circuit and (b) a dynamic circuit over long
ranges.
## III State preparation: GHZ
Figure 3: Preparing long-range entangled states. (a) Illustration of a GHZ
state with chosen qubit spins (spheres) in a superposition of “all up” and
“all down” polarizations (arrows), overlaid on a quantum processor. (b)
Circuits to prepare an $n$-qubit GHZ state using either a unitary (left) or
dynamic (right) circuit. For a 1D qubit chain, the depth of the unitary
(resp., dynamic) circuit scales as $\mathcal{O}(n)$ (resp., $\mathcal{O}(1)$).
If the final state is not directly used, the feed-forward operations can be
implemented in classical post-processing on the output bits (classically
controlled X gates and resets can be omitted). Yellow regions indicate the
idle time during CNOT gates on other qubits as well as during measurement and
feed-forward (which is denoted by duration $\mu$). (c) Error model inputs for
the GHZ preparation circuits. The model incorporates the noisy components of
the circuits: non-zero idle circuit periods (yellow), number of CNOT gates
(pink), and the number of mid-circuit measurements (green). These parameters
are used to derive an error model that yields a lower-bound on the protocol
fidelity, shown in the following panel. (d) Fidelity of preparing the GHZ
state on quantum hardware using unitary, measurement-based post-processing, or
dynamic circuits in the absence or presence of dynamical decoupling (DD). Data
shown with dots. Theory curves based on the error model parameters of panel
(c) shown in dashed lines.
Dynamic circuits can also be used to prepare long-range entangled states. A
prototypical example is the GHZ state [3], shown schematically in Fig. 3(a).
While it can be created using only Clifford gates and thus can be simulated
efficiently on a classical computer [32], it becomes classically intractable to
simulate when followed by a sufficient number of non-Clifford gates in a larger
algorithm, or when used as a crucial ingredient in, e.g., the efficient
compilation of multi-qubit gates [33, 34].
Here, we show that GHZ states with long-range entanglement can be prepared
with dynamic circuits. Although we do not see a clear advantage of dynamic
circuits over unitary ones in this case, we provide a detailed description of
the challenges that must be addressed to realize such an advantage.
For the preparation of a GHZ state on a 1D $n$-qubit chain, Fig. 3 shows the
equivalence between the unitary circuit (left) and the dynamic circuit (right).
(For a detailed derivation, see Appendix A.2.) Notably, the unitary equivalent
has a two-qubit gate depth that scales as $\mathcal{O}\left(n\right)$ with
quadratically increasing idle time and $n-1$ total CNOT gates, while the depth
of the dynamic circuits remains constant with linearly increasing idle time,
$3n/2-1$ total CNOT gates, and $n/2-1$ mid-circuit measurements (see Fig.
3(c)). The dynamic circuit incurs less idle time and a smaller two-qubit gate
depth at the cost of more CNOT gates and mid-circuit measurements.
Therefore, we expect dynamic circuits to be advantageous for large system
sizes $n$ and low errors in mid-circuit measurement. For a more detailed
analysis of the error budget, see Appendix D.1.
We explore whether current large-scale superconducting quantum devices enable
an advantage with dynamic circuits for preparation of the entangled GHZ state.
To efficiently verify the preparation of a quantum state $\sigma$, we use the
Monte Carlo state certification that samples from Pauli operators with non-
zero expectation values, as implemented in Ref. [27] and described in detail
in Appendix C.1. As the $n$-qubit GHZ state is a stabilizer state, we can
randomly sample $m$ of the $2^{n}$ stabilizers $\\{S_{i}\\}_{i=1..2^{n}}$ and
approximate the fidelity by $F=\frac{1}{m}\sum_{k=1}^{m}\langle
S_{k}\rangle_{\sigma}+\mathcal{O}\left(\frac{1}{\sqrt{m}}\right)$.
The experimental results of GHZ state preparation with unitary and dynamic
circuits are shown in Fig. 3(d). They all include measurement error mitigation
on the final measurements [35]. On the left, we show the results without
dynamical decoupling. In the unitary case, we observe genuine multipartite
entanglement, defined as state fidelity $F>0.5$ [36], within a confidence
interval of $95\%$ up to 7 qubits with a rapid decay in fidelity with
increasing system size due to coherent errors in two-qubit gates and ZZ
crosstalk errors during idling time [37]. In the dynamic case, we observe
genuine entanglement up to 6 qubits. Here, we do not find a crossover point
after which dynamic circuits have an advantage over unitary circuits. We
attribute the performance of dynamic circuits to several factors, including
the fact that the current implementation results in an average classical
feedforward time that scales with the number of potential mid-circuit
measurement bitstring outcomes, which itself grows exponentially with system
size. By reducing the error induced by idle time during classical feedforward,
we expect dynamic circuits to surpass unitary circuits at $\gtrsim$10 qubits.
This can be seen by studying the post-processing case, which is equivalent to
the dynamic circuit implementation except that the classical logic is executed
in post-processing rather than during execution of the quantum circuit itself.
We expect the exponential scaling of the classical feed-forward time to be
reduced to linear or constant scaling in the near term.
On the right of Fig. 3(d), we show the results using dynamical decoupling (DD)
[38, 39]. We observe improved fidelities for both the unitary and dynamic
circuit cases, but not for the post-processing case as there is little error
induced by idle times to quench with dynamical decoupling in the first place.
For the unitary case, we observe genuine multipartite entanglement up to 17
qubits, more than twice as many compared to the unmitigated unitary case. This
result is close to the state of the art on superconducting quantum processors
and is limited by the fact that we do not leverage the 2D connectivity of the
device, as in Ref. [40]. While the fidelities are improved with DD for dynamic
circuits, the improvement is less dramatic. We attribute this difference to
two reasons: First, the unitary circuit has a quadratic idling error term in
contrast to a leading linear term for dynamic circuits, resulting in
comparatively smaller improvement for dynamic circuits with dynamical
decoupling. Second, with the current controls, we are not able to apply DD
pulses during the classical feedforward time, which is the main source of
idling error in the dynamic circuit. As in the unmitigated case, we observe a
rapid decay of the fidelity with increasing system size. This can again be
partially attributed to the exponential growth of the classical feed-forward time.
In the near future, we expect to reduce this scaling to linear, in which case
we expect drastically improved performance and genuine entanglement up to
$\sim$15 qubits. Even then, however, we do not expect to observe an advantage of
dynamic circuits over unitary ones for the preparation of GHZ states. To realize
an advantage with dynamic circuits, we require a scenario where the
quadratically scaling idle error of the unitary circuit dominates over
sufficiently small CNOT and mid-circuit measurement error; see Appendix D.2
for a more detailed analysis. We anticipate these conditions can be realized
through a combination of hardware improvements and the extension of error
mitigation techniques, such as probabilistic error cancellation [41, 42],
toward mid-circuit measurements.
## IV Conclusion and Outlook
Dynamic circuits are a promising feature toward overcoming connectivity
limitations of large-scale noisy quantum hardware. Here, we demonstrate their
potential for efficiently generating long-range entanglement with two useful
tasks: teleporting entangling gates over long ranges to enable effective all-
to-all connectivity, and state preparation with the GHZ state as an example.
For CNOT gate teleportation, we show a regime in which dynamic circuits result
in higher fidelities on up to 101 qubits of a large-scale superconducting
quantum processor. We leave incorporating this more efficient implementation
of long-range entangling gates as a subroutine in another quantum algorithm to
future work; potential studies can include simulating many-body systems with
non-local interactions. As we demonstrate theoretically, gate teleportation
schemes can be extended beyond CNOT gates to multi-qubit ones, such as the CCZ
gate. Its experimental implementation is also a promising project for the
future. For state preparation, based on both unmitigated and mitigated
hardware experiments, we expect to see the value of dynamic circuits once the
classical post-processing becomes more efficient and the mid-circuit
measurement errors can be reduced. We plan to revisit the experiments as soon as
these capabilities are available. We anticipate that further experiments with
dynamic circuits and the development of noise models describing them will be
vital contributions toward efficient circuit compilation, measurement-based
quantum computation, and fault-tolerant quantum computation.
## V Acknowledgements
We thank Diego Ristè, Daniel Egger, and Alexander Ivrii for valuable
discussions and feedback. We thank Emily Pritchett, Maika Takita, Abhinav
Kandala, and Sarah Sheldon for their help. We also thank Thomas Alexander,
Marius Hillenbrand, and Reza Jokar for their support with implementing dynamic
circuits.
## References
* Preskill [2018] J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2, 79 (2018).
* Terhal [2015] B. M. Terhal, Quantum error correction for quantum memories, Rev. Mod. Phys. 87, 307 (2015).
* Greenberger _et al._ [1989] D. M. Greenberger, M. A. Horne, and A. Zeilinger, Going beyond bell’s theorem, in _Bell’s Theorem, Quantum Theory and Conceptions of the Universe_, edited by M. Kafatos (Springer Netherlands, Dordrecht, 1989) pp. 69–72.
* Raussendorf _et al._ [2005] R. Raussendorf, S. Bravyi, and J. Harrington, Long-range quantum entanglement in noisy cluster states, Phys. Rev. A 71, 062313 (2005).
* Dennis _et al._ [2002] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Topological quantum memory, J. Math. Phys. 43, 4452 (2002).
* Piroli _et al._ [2021] L. Piroli, G. Styliaris, and J. I. Cirac, Quantum circuits assisted by local operations and classical communication: Transformations and phases of matter, Phys. Rev. Lett. 127, 220503 (2021).
* Lu _et al._ [2022] T.-C. Lu, L. A. Lessa, I. H. Kim, and T. H. Hsieh, Measurement as a shortcut to long-range entangled quantum matter, PRX Quantum 3, 040337 (2022).
* Verresen _et al._ [2021] R. Verresen, N. Tantivasadakarn, and A. Vishwanath, Efficiently preparing schrödinger’s cat, fractons and non-abelian topological order in quantum devices, arXiv:2112.03061 (2021).
* Tantivasadakarn _et al._ [2021] N. Tantivasadakarn, R. Thorngren, A. Vishwanath, and R. Verresen, Long-range entanglement from measuring symmetry-protected topological phases, arXiv:2112.01519 (2021).
* Tantivasadakarn _et al._ [2022] N. Tantivasadakarn, R. Verresen, and A. Vishwanath, The shortest route to non-abelian topological order on a quantum processor, arXiv:2209.03964 (2022).
* Tantivasadakarn _et al._ [2023] N. Tantivasadakarn, A. Vishwanath, and R. Verresen, Hierarchy of topological order from finite-depth unitaries, measurement, and feedforward, PRX Quantum 4, 020339 (2023).
* Bravyi _et al._ [2022] S. Bravyi, I. Kim, A. Kliesch, and R. Koenig, Adaptive constant-depth circuits for manipulating non-abelian anyons, arXiv:2205.01933 (2022).
* Lu _et al._ [2023] T.-C. Lu, Z. Zhang, S. Vijay, and T. H. Hsieh, Mixed-state long-range order and criticality from measurement and feedback, PRX Quantum 4, 030318 (2023).
* Lieb and Robinson [1972] E. H. Lieb and D. W. Robinson, The finite group velocity of quantum spin systems, Commun. Math. Phys. 28, 251 (1972).
* Bravyi _et al._ [2006] S. Bravyi, M. B. Hastings, and F. Verstraete, Lieb-robinson bounds and the generation of correlations and topological quantum order, Phys. Rev. Lett. 97, 050401 (2006).
* Raussendorf _et al._ [2003] R. Raussendorf, D. E. Browne, and H. J. Briegel, Measurement-based quantum computation on cluster states, Phys. Rev. A 68, 022312 (2003).
* Jozsa [2005] R. Jozsa, An introduction to measurement based quantum computation, arXiv:0508124 (2005).
* Beverland _et al._ [2022] M. Beverland, V. Kliuchnikov, and E. Schoute, Surface code compilation via edge-disjoint paths, PRX Quantum 3, 020342 (2022).
* Barrett _et al._ [2004] M. D. Barrett, J. Chiaverini, T. Schaetz, J. Britton, W. M. Itano, J. D. Jost, E. Knill, C. Langer, D. Leibfried, R. Ozeri, and D. J. Wineland, Deterministic quantum teleportation of atomic qubits, Nature 429, 737 (2004).
* Pfaff _et al._ [2012] W. Pfaff, T. H. Taminiau, L. Robledo, H. Bernien, M. Markham, D. J. Twitchen, and R. Hanson, Demonstration of entanglement-by-measurement of solid-state qubits, Nat. Phys. 9, 29 (2012).
* Ristè _et al._ [2013] D. Ristè, M. Dukalski, C. A. Watson, G. de Lange, M. J. Tiggelman, Y. M. Blanter, K. W. Lehnert, R. N. Schouten, and L. DiCarlo, Deterministic entanglement of superconducting qubits by parity measurement and feedback, Nature 502, 350 (2013).
* Wan _et al._ [2019] Y. Wan, D. Kienzler, S. D. Erickson, K. H. Mayer, T. R. Tan, J. J. Wu, H. M. Vasconcelos, S. Glancy, E. Knill, D. J. Wineland, A. C. Wilson, and D. Leibfried, Quantum gate teleportation between separated qubits in a trapped-ion processor, Science 364, 875 (2019).
* Córcoles _et al._ [2021] A. D. Córcoles, M. Takita, K. Inoue, S. Lekuch, Z. K. Minev, J. M. Chow, and J. M. Gambetta, Exploiting dynamic quantum circuits in a quantum algorithm with superconducting qubits, Phys. Rev. Lett. 127, 100501 (2021).
* Foss-Feig _et al._ [2023] M. Foss-Feig, A. Tikku, T.-C. Lu, K. Mayer, M. Iqbal, T. M. Gatterman, J. A. Gerber, K. Gilmore, D. Gresh, A. Hankin, N. Hewitt, C. V. Horst, M. Matheny, T. Mengle, B. Neyenhuis, H. Dreyer, D. Hayes, T. H. Hsieh, and I. H. Kim, Experimental demonstration of the advantage of adaptive quantum circuits, arXiv:2302.03029 (2023).
* Iqbal _et al._ [2023] M. Iqbal, N. Tantivasadakarn, T. M. Gatterman, J. A. Gerber, K. Gilmore, D. Gresh, A. Hankin, N. Hewitt, C. V. Horst, M. Matheny, T. Mengle, B. Neyenhuis, A. Vishwanath, M. Foss-Feig, R. Verresen, and H. Dreyer, Topological order from measurements and feed-forward on a trapped ion quantum computer, arXiv:2302.01917 (2023).
* Moses _et al._ [2023] S. Moses, C. Baldwin, M. Allman, R. Ancona, L. Ascarrunz, C. Barnes, J. Bartolotta, B. Bjork, P. Blanchard, M. Bohn, _et al._ , A race track trapped-ion quantum processor, arXiv:2305.03828 (2023).
* Cao _et al._ [2023] S. Cao, B. Wu, F. Chen, M. Gong, Y. Wu, Y. Ye, C. Zha, H. Qian, C. Ying, S. Guo, Q. Zhu, H.-L. Huang, Y. Zhao, S. Li, S. Wang, J. Yu, D. Fan, D. Wu, H. Su, H. Deng, H. Rong, Y. Li, K. Zhang, T.-H. Chung, F. Liang, J. Lin, Y. Xu, L. Sun, C. Guo, N. Li, Y.-H. Huo, C.-Z. Peng, C.-Y. Lu, X. Yuan, X. Zhu, and J.-W. Pan, Generation of genuine entanglement up to 51 superconducting qubits, Nature 619, 738 (2023).
* Smith _et al._ [2023] K. C. Smith, E. Crane, N. Wiebe, and S. Girvin, Deterministic constant-depth preparation of the aklt state on a quantum processor using fusion measurements, PRX Quantum 4, 020315 (2023).
* Chen _et al._ [2023] E. H. Chen, G.-Y. Zhu, R. Verresen, A. Seif, E. Bäumer, D. Layden, N. Tantivasadakarn, G. Zhu, S. Sheldon, A. Vishwanath, S. Trebst, and A. Kandala, Realizing the nishimori transition across the error threshold for constant-depth quantum circuits, _to appear_ (2023).
* Flammia and Liu [2011] S. T. Flammia and Y.-K. Liu, Direct fidelity estimation from few pauli measurements, Phys. Rev. Lett. 106, 230501 (2011).
* da Silva _et al._ [2011] M. P. da Silva, O. Landon-Cardinal, and D. Poulin, Practical characterization of quantum devices without tomography, Phys. Rev. Lett. 107, 210404 (2011).
* Gottesman [1998] D. Gottesman, The heisenberg representation of quantum computers, arXiv:9807006 (1998).
* Hoyer and Spalek [2005] P. Hoyer and R. Spalek, Quantum fan-out is powerful, Theory Comput. 1, 81 (2005).
* Yang and Rall [2023] W. Yang and P. Rall, Harnessing the power of long-range entanglement for clifford circuit synthesis, arXiv:2302.06537 (2023).
* Nation and Treinish [2023] P. D. Nation and M. Treinish, Suppressing quantum circuit errors due to system variability, PRX Quantum 4, 010327 (2023).
* Leibfried _et al._ [2005] D. Leibfried, E. Knill, S. Seidelin, J. Britton, R. B. Blakestad, J. Chiaverini, D. B. Hume, W. M. Itano, J. D. Jost, C. Langer, R. Ozeri, R. Reichle, and D. J. Wineland, Creation of a six-atom ‘schrödinger cat’ state, Nature 438, 639 (2005).
* Tripathi _et al._ [2022] V. Tripathi, H. Chen, M. Khezri, K.-W. Yip, E. Levenson-Falk, and D. A. Lidar, Suppression of crosstalk in superconducting qubits using dynamical decoupling, Phys. Rev. Appl. 18, 024068 (2022).
* Viola _et al._ [1999] L. Viola, E. Knill, and S. Lloyd, Dynamical decoupling of open quantum systems, Phys. Rev. Lett. 82, 2417 (1999).
* Jurcevic _et al._ [2021] P. Jurcevic, A. Javadi-Abhari, L. S. Bishop, I. Lauer, D. F. Bogorin, M. Brink, L. Capelluto, O. Günlük, T. Itoko, N. Kanazawa, A. Kandala, G. A. Keefe, K. Krsulich, W. Landers, E. P. Lewandowski, D. T. McClure, G. Nannicini, A. Narasgond, H. M. Nayfeh, E. Pritchett, M. B. Rothwell, S. Srinivasan, N. Sundaresan, C. Wang, K. X. Wei, C. J. Wood, J.-B. Yau, E. J. Zhang, O. E. Dial, J. M. Chow, and J. M. Gambetta, Demonstration of quantum volume 64 on a superconducting quantum computing system, Quantum Sci. Technol. 6, 025020 (2021).
* Mooney _et al._ [2021] G. J. Mooney, G. A. L. White, C. D. Hill, and L. C. L. Hollenberg, Generation and verification of 27-qubit greenberger-horne-zeilinger states in a superconducting quantum computer, J. Phys. Commun. 5, 095004 (2021).
* Temme _et al._ [2017] K. Temme, S. Bravyi, and J. M. Gambetta, Error mitigation for short-depth quantum circuits, Phys. Rev. Lett. 119, 180509 (2017).
* van den Berg _et al._ [2023] E. van den Berg, Z. K. Minev, A. Kandala, and K. Temme, Probabilistic error cancellation with sparse pauli–lindblad models on noisy quantum processors, Nat. Phys. 19, 1116 (2023).
* Jamiołkowski [1972] A. Jamiołkowski, Linear transformations which preserve trace and positive semidefiniteness of operators, Rep. Math. Phys. 3, 275 (1972).
* Horodecki _et al._ [1999] M. Horodecki, P. Horodecki, and R. Horodecki, General teleportation channel, singlet fraction, and quasidistillation, Phys. Rev. A 60, 1888 (1999).
## Appendix A Circuit Derivations
Figure 4: Useful circuit identities that are used in the illustrative
derivation of the CNOT gate teleportation and GHZ state preparation.
In the following we show the circuit equivalences of the CNOT gate
teleportation (Fig. 1(a)) and the GHZ state preparation (Fig. 3(b)). We do not
claim any novelty with this “proof”; we merely want to show the reader how to
derive these equivalences in an illustrative way. Before doing so, let us
collect some features that we will use:
* •
The Bell state $\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ can be illustrated as a
so-called “cup”, as shown in Fig. 4(a). We can move gates along wires,
including along the cup, as in Fig. 4(b).
* •
Principle of deferred measurement: a controlled gate followed by a measurement
of the control qubit yields the same result as first performing the
measurement and then applying a classically-controlled gate, as in Fig. 4(c).
* •
While CNOT gates commute when they share the same control qubit or the same
target qubit, we get an extra gate when the control of one is the target of the
other, as shown in Fig. 4(d).
### A.1 Long-Range CNOT
Figure 5: Graphical derivation for reducing a long-range CNOT gate into gate
teleportation executed with measurements and feed-forward operations, i.e., a
dynamic circuit. Roman numerals indicate sequential step numbers described in
main text.
In Fig. 5 we illustrate a derivation of the CNOT gate teleportation, as
exemplified for $7$ qubits, which can be straightforwardly extended to an
arbitrary number of qubits. In the following, we provide explanations for each
step of the derivation, labeled by roman numerals in the figure:
1. (i)
In the first step, we observe that entangling, measuring, and resetting the
ancilla qubits does not affect the circuit.
2. (ii)
We insert CNOT gates that would cancel each other. From now on, we omit
writing down the reset of the ancilla qubits following the measurement.
3. (iii)
We move the pink CNOT gates along the Bell states to the respective qubits
above. Also, we add Hadamard gates to flip the direction of the orange CNOT
gates (except for the one at the bottom). Note that we can omit the Hadamard
gates right before the measurements, as they no longer affect the other
qubits.
4. (iv)
By moving the bottom orange CNOT “up” along the Bell state and passing a pink
CNOT, we get the extra purple CNOT gate.
5. (v)
Moving the new purple CNOT “up” along the Bell state, an extra gate appears
that cancels with the initial long-range CNOT gate when pushed to the left
(it is then controlled on state $\ket{0}$, so it can be omitted as well).
6. (vi)
Now we make use of the principle of deferred measurement.
7. (vii)
In a final step we merge the classically-conditioned gates. The orange
$\oplus$ correspond to XOR gates, i.e., addition mod 2. We also represent the
initial Bell states again by their circuit representation.
### A.2 GHZ state preparation
Figure 6: Graphical derivation for the preparation of a GHZ state by
converting its canonical but deep unitary circuit into a constant-depth
circuit utilizing measurement and feed-forward operations—a dynamic circuit.
Roman numerals indicate sequential step numbers described in main text.
In Fig. 6 we illustrate a derivation of the GHZ state preparation, exemplified
for $7$ qubits, which can be straightforwardly extended to an arbitrary number
of qubits. In the following, we provide explanations for each
step of the derivation, labeled by roman numerals in the figure:
1. (i)
Pushing every second CNOT gate to the very right introduces the extra pink
CNOT gates.
2. (ii)
We can omit CNOT gates that are conditioned on state $\ket{0}$.
3. (iii)
As every second qubit is only involved at the very end, we can use those
qubits earlier in the circuit and reset them afterwards.
4. (iv)
A Bell state followed by a CNOT gate results in two uncorrelated qubits in
states $\ket{+}$ and $\ket{0}$.
5. (v)
We move the pink CNOT gates along the Bell states to the respective qubits
above (they commute with the other CNOTs they are “passing”).
6. (vi)
Pushing the pink CNOT gates to the left through the purple CNOT gates
introduces the extra orange CNOT gates.
7. (vii)
We make use of the principle of deferred measurement.
8. (viii)
In a final step we merge the classically-conditioned gates. As the classical
calculation can be done extremely fast compared to quantum gates, we draw it
as a vertical line. The orange $\oplus$ correspond to XOR gates, i.e., addition
mod 2. We also represent the initial Bell states again by their circuit
representation.
## Appendix B CNOT circuits
### B.1 Unitary variants
In order to compare the dynamic circuits implementation to a solely unitary
one, let us first consider different unitary strategies that might be more or
less powerful in different regimes:
Strategy I: Ancilla-based implementation
We can consider a similar setting as for dynamic circuits, where we place the
system qubits in a way that they are connected by a bus of empty ancilla
qubits. In this case, we need to swap the system qubits towards each other and
back, so that the ancillas are empty in the end again. The swaps can be
simplified since the ancillas are empty in the beginning. We distinguish the
following variants:
* •
Circuit Ia: To minimize the number of CNOT gates, we could swap the control
qubit all the way to the target qubit and back, which results in the circuit
depicted in Fig. 7. Many gates cancel, so given $n$ ancilla qubits, the number
of CNOT gates is $2n+1$. However, the idle time of the qubits while they are
not in state $\ket{0}$ equals $n^{2}+2n$ times the CNOT gate time.
* •
Circuit Ib: In order to decrease the idle time, we can instead swap both the
control and the target qubit halfway towards each other and back, as
illustrated in Fig. 7. In that case, fewer gates cancel, so for $n$ ancilla
qubits we get $3n+1$ CNOT gates, but the idle time reduces to
$\frac{n^{2}}{4}+n$ times the CNOT gate time.
* •
Circuit Ic: To reduce the idle time even further, it can be beneficial not to
cancel the CNOT gates of circuit Ib, but to keep them in order to bring the
swapped qubits back to state $\ket{0}$, as shown in Fig. 7. In that case, we
have essentially no idle time (as qubits in state $\ket{0}$ are not prone to
idling errors), though the number of CNOT gates increases to $4n+1$.
Strategy II: SWAP-based implementation without ancillas
This is what happens if we simply feed our circuit to the transpiler.
Here we do not use any ancilla qubits, but only system qubits and apply swaps
to move them around. The qubits can be at a different location in the end, so
we do not need to swap back. The corresponding circuit is shown in Fig. 7. In
this case we require $3\tilde{n}+1$ CNOT gates and the idle time is
$\frac{3}{2}\tilde{n}^{2}-2\tilde{n}$ times the CNOT gate time. However, it is
important to note that the number of qubits $\tilde{n}$ lying between the two
qubits of interest is on average much smaller than the number of ancillas
between two system qubits in the first strategy. Considering the connectivity
illustrated in Fig. 1 (c), the relation is approximately $n=2\tilde{n}+3$.
Figure 7: Comparison of the different unitary implementations of a long-range
CNOT gate. While the circuits in panels (Ia), (Ib), and (Ic) realize ancilla-
based implementations, the circuit of panel (II) realizes a SWAP-based
implementation without ancillas. The shaded regions indicate idle periods that
accumulate errors.
### B.2 Error budget
Let us now compare the regimes in which we expect the different
implementations to be most useful to demonstrate the benefit of dynamic
circuits. In Appendix E we derive a simple noise model that allows us to
compute the combined effect of different sources of decoherence as a single
Pauli noise rate:
$\displaystyle\lambda_{\mathrm{tot}}=t_{\mathrm{idle}}\lambda_{\mathrm{idle}}+N_{\mathrm{CNOT}}\lambda_{\mathrm{CNOT}}+N_{\mathrm{meas}}\lambda_{\mathrm{meas}}\;.$ (1)
In Lemma 1 we show that the final process fidelity is loosely lower-bounded by
$e^{-\lambda_{\mathrm{tot}}}$. The quantity $\lambda_{\mathrm{tot}}$ combines
the following noise sources:
* •
The total amount of time $t_{\mathrm{idle}}$ that qubits spend idle within the
circuit, and a conversion factor $\lambda_{\mathrm{idle}}$ that quantifies the
strength of decoherence. $t_{\mathrm{idle}}$ is expressed in multiples of the
CNOT gate time (i.e. $t_{\mathrm{idle}}=3$ for 3 CNOT gate times). The time
for a mid-circuit measurement, including the additional time waiting for
feedback, is defined as $\mu$ times the time for a CNOT gate.
* •
The total number of CNOT gates $N_{\mathrm{CNOT}}$ and an average Pauli noise
rate $\lambda_{\mathrm{CNOT}}$ per CNOT.
* •
The total number of mid-circuit measurements $N_{\mathrm{meas}}$ and an
average Pauli noise rate $\lambda_{\mathrm{meas}}$ per measurement.
In Table 1, we have summarized the error budget for each of the cases.
Case | $t_{\mathrm{idle}}$ | $N_{\mathrm{CNOT}}$ | $N_{\mathrm{meas}}$ | Two-qubit gate depth
---|---|---|---|---
Unitary Ia) | $n^{2}+2n$ | $2n+1$ | $0$ | $2n+1$
Unitary Ib) | $\frac{n^{2}}{4}+n$ | $3n+1$ | $0$ | $2n+1$
Unitary Ic) | $0$ | $4n+1$ | $0$ | $2n+1$
Unitary II) | $\frac{3}{4}\tilde{n}^{2}-\frac{3}{2}\tilde{n}$ | $3\tilde{n}+1$ | $0$ | $\frac{3}{2}\tilde{n}+1$
Unitary II) with normed $n$ | $\approx\frac{3}{16}n^{2}-\frac{15}{8}n+\frac{45}{16}$ | $\approx\frac{3}{2}n-2$ | $0$ | $\approx\frac{3}{4}n-\frac{5}{4}$
Dynamic circuits | $2\mu+2$ | $n+1$ | $n$ | $2+\mu$, or $O(1)$
Table 1: Comparison of the error budget of the unitary and the dynamic
circuits implementations in terms of idle time, number of CNOT gates, number
of mid-circuit measurements, and two-qubit gate depth. Note that, as the
number of involved qubits $\tilde{n}$ needed for the unitary implementation
II) is in general much smaller, we rescale it for the error budget using the
relation $n\approx 2\tilde{n}+3$.
Comparing the different unitary cases, it becomes clear that for large $n$ the
unitary implementation Ic) will be the best, as all other implementations have
an idling-time error that scales quadratically. This might be slightly
counter-intuitive, as it tells us that even without measurement and feed-
forward, it can still be beneficial to use ancilla qubits and thereby increase
the distances. For small $n$ we need to keep in mind that for the SWAP-based
implementation (unitary II) the number of involved qubits $\tilde{n}$ is
smaller than the number of qubits $n$ needed for the same task in the ancilla-
based implementation. Taking the rescaled errors into account, unitary II
would be the most promising implementation for small $n$. In addition to the
CNOT errors and idling errors, for dynamic circuits we also need to consider
the error from the additional measurements, as well as a constant term $\mu$
that comes from the idling error during measurement and feed-forward.
Given this rough error analysis in Table 1, we can infer that for large $n$
dynamic circuits will be beneficial if the measurement of $n$ qubits
introduces less error than $3n$ CNOT gates, that is, when
$\lambda_{\mathrm{meas}}<3\lambda_{\mathrm{CNOT}}$. A sketch of how the
fidelities for the different cases decrease with $n$ is illustrated in Fig. 8.
Note that these error analyses only take into account the error on the
involved qubits. Considering also that there are potentially many other
qubits, say $m$ of them, “waiting” for this operation to be performed would
add another idling error of $m\cdot(2n+1)$. So the fact that dynamic circuits
can perform entangling gates between arbitrary qubits in constant depth,
instead of the linear depth required with only unitary operations, speeds up
the whole algorithm and therefore might be much more powerful than what we can
see in the error on the respective qubits.
Figure 8: Comparison of the process fidelities of the different unitary
implementations as well as the dynamic circuits implementation considering the
error budget indicated in Table 1. In this figure we use
$\mu=(t_{\text{meas}}+t_{\text{feed-forward}})/t_{\text{cnot}}\approx 3.65$,
$\lambda_{\mathrm{idle}}=0.03$, $\lambda_{\mathrm{CNOT}}=0.02$ and
$\lambda_{\mathrm{meas}}=0.03$.
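To make this error budget easy to evaluate, the following minimal Python sketch
(our own naming, not part of any library) computes the Table 1 budgets and the
corresponding lower bound $e^{-\lambda_{\mathrm{tot}}}$ from Eq. (1) and
Lemma 1, using the illustrative parameters of Fig. 8:

```python
import numpy as np

# Illustrative model parameters taken from the caption of Fig. 8.
mu = 3.65        # (t_meas + t_feed-forward) / t_CNOT
lam_idle = 0.03  # Pauli noise rate per CNOT-gate-time of idling
lam_cnot = 0.02  # Pauli noise rate per CNOT gate
lam_meas = 0.03  # Pauli noise rate per mid-circuit measurement

def fidelity_bound(t_idle, n_cnot, n_meas):
    """Lower bound exp(-lambda_tot) on the process fidelity, Eq. (1) + Lemma 1."""
    lam_tot = t_idle * lam_idle + n_cnot * lam_cnot + n_meas * lam_meas
    return np.exp(-lam_tot)

def budgets(n):
    """Error budgets of Table 1 as (t_idle, N_CNOT, N_meas) per implementation."""
    return {
        "unitary Ia": (n**2 + 2 * n, 2 * n + 1, 0),
        "unitary Ib": (n**2 / 4 + n, 3 * n + 1, 0),
        "unitary Ic": (0, 4 * n + 1, 0),
        "dynamic": (2 * mu + 2, n + 1, n),
    }

for n in (5, 20, 50):
    print(n, {k: round(fidelity_bound(*v), 3) for k, v in budgets(n).items()})
```

Only circuit Ic) and the dynamic circuit avoid the quadratic idling term; with
these parameters the dynamic-circuit bound decays slowest in $n$, consistent
with Fig. 8.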
## Appendix C Estimation of the state and gate fidelities using Monte Carlo
sampling
### C.1 State fidelity
In order to determine the fidelity of the experimentally prepared quantum
state, denoted as $\sigma$, we employ the Monte Carlo state certification
method, which was introduced in Refs. [31, 30]. We first briefly review the
notion of fidelity between two quantum states.
#### Quantum state fidelity.
Let us introduce the Uhlmann-Jozsa state fidelity between two general quantum
states $\rho$ and $\sigma$. These objects are elements of the space of valid
density operators associated with the system Hilbert space, $\mathcal{H}$,
i.e., $\rho,\sigma\in D\left(\mathcal{H}\right)$. Assuming one of them is a
pure state
$\sigma=\left|\phi\vphantom{\phi}\right\rangle\left\langle\vphantom{\phi}\phi\right|$,
we can simplify the general expression as shown in the following:
$\displaystyle F\left(\rho,\sigma\right)$
$\displaystyle\coloneqq\left[\operatorname{Tr}\left(\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)\right]^{2}$
(2) $\displaystyle=\left\langle\phi\middle|\rho\middle|\phi\right\rangle$ (3)
$\displaystyle=\operatorname{Tr}\left[\rho\sigma\right]\;.$ (4)
If $\rho$ is also a pure state
$\rho=\left|\psi\vphantom{\psi}\right\rangle\left\langle\vphantom{\psi}\psi\right|$,
the expression reduces to a simple overlap
$F\left(\rho,\sigma\right)=\left|\left\langle\psi\middle|\phi\right\rangle\right|^{2}$.
We note that some authors define the square root of this as the fidelity.
#### Pauli decomposition.
To connect to experimental measurements, let us decompose the quantum states in
the standard Pauli basis. The set of all Pauli operators on $n$ qubits
$\left\\{I,X,Y,Z\right\\}^{\otimes n}$ forms an orthogonal Hermitian operator
basis. The inner product in operator space $L\left(\mathcal{H}\right)$ between
two Pauli operators $P_{i},P_{j}\in\mathcal{L}\left(\mathcal{H}\right)$ is
$\left\langle
P_{i},P_{j}\right\rangle=\operatorname{Tr}\left(P_{i}P_{j}\right)=d\delta_{ij}$,
where the dimension of the pure state Hilbert space
$d\coloneqq\dim\mathcal{H}=2^{n}$. In terms of this basis, any quantum state
$\rho\in D\left(\mathcal{H}\right)$, can be decomposed into
$\rho=\sum_{i=0}^{4^{n}-1}\frac{\left\langle
P_{i},\rho\right\rangle}{\left\langle
P_{i},P_{i}\right\rangle}P_{i}=\frac{1}{d}\sum_{i=0}^{4^{n}-1}\rho_{i}P_{i}\;,\quad\mathrm{with}\quad\rho_{i}\coloneqq\left\langle
P_{i},\rho\right\rangle=\operatorname{Tr}\left(P_{i}\rho\right)\;,$
where the Pauli expectation value of the state with respect to the $i$-th
Pauli is $\rho_{i}$—an easily measurable quantity. We can similarly define the
expectation values of the Pauli $P_{i}$ with respect to the prepared state
$\sigma$ and the desired state $\rho$ as $\sigma_{i}:=\langle
P_{i}\rangle_{\sigma}=\text{Tr}\left(\sigma P_{i}\right)$ and
$\rho_{i}:=\langle P_{i}\rangle_{\rho}=\text{Tr}\left(\rho P_{i}\right)$,
respectively.
#### Fidelity in terms of Pauli expectation values.
The state fidelity between the measured $\sigma$ and ideally expected pure
$\rho$ state, see Eq. (2), in terms of the Pauli decomposition of each is
$\displaystyle
F(\rho,\sigma)=\text{Tr}\left[\rho\sigma\right]=\sum_{i}\frac{\rho_{i}\sigma_{i}}{d}=\sum_{i}\frac{\rho_{i}^{2}}{d}\frac{\sigma_{i}}{\rho_{i}}\;,$
(5)
where $\sigma_{i}$ is an experimentally measured expectation value and
$\rho_{i}$ is a theoretically calculated one. Given this, we can now define
the relevance distribution $r(P_{i}):=\frac{\rho_{i}^{2}}{d}$, such that
$F(\rho,\sigma)=\sum_{i:\rho_{i}\neq 0}r(P_{i})\frac{\sigma_{i}}{\rho_{i}}$.
#### Random sampling of expectation values.
When sampling $m$ random operators $\\{P_{k}\\}_{k=1..m}$ according to the
relevance distribution $r(P_{k})$ and determining their expectation values
$\sigma_{k}$, the estimated fidelity
$\tilde{F}:=\frac{1}{m}\sum_{k=1}^{m}\frac{\sigma_{k}}{\rho_{k}}$ approximates the actual
fidelity $F$ with an uncertainty that decreases as $\frac{1}{\sqrt{m}}$. Note
that there is also an uncertainty in estimating each $\sigma_{k}$, where for
an additive precision $\epsilon$ roughly $(\epsilon\rho_{k})^{-2}$ shots are
required.
#### Random sampling of GHZ stabilizers.
As the GHZ state is a stabilizer state, for each $n$ there are exactly $2^{n}$
Pauli operators $P_{i}$ with non-zero expectation value, each equal to $\pm 1$. Note that
some stabilizers of the GHZ state have a minus sign, e.g. $-YYX$. For the
$n$-qubit GHZ state, by defining the set of stabilizers
$\\{S_{i}\\}_{i=1..2^{n}}$, we can express the fidelity in terms of only
expectation values on the stabilizers
$\displaystyle F(\rho,\sigma)$
$\displaystyle=\frac{1}{2^{n}}\sum_{i=1}^{2^{n}}\langle
S_{i}\rangle_{\sigma}\;.$ (6)
This expression can be approximated by randomly sampling $m$ of the $2^{n}$
stabilizers, defining the unbiased estimator
$\tilde{F}=\frac{1}{m}\sum_{k=1}^{m}\langle
S_{k}\rangle_{\sigma}=F+\mathcal{O}\left(\frac{1}{\sqrt{m}}\right)$, which
converges with the number of random samples chosen to the ideal fidelity.
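As a concrete illustration, the following self-contained numpy sketch
implements this estimator by brute force for small $n$ (dense matrices), on a
toy prepared state consisting of the ideal GHZ state mixed with global
depolarizing noise; the state and all parameters are hypothetical and serve
only to check that the sampled estimate agrees with the exact fidelity:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 4
# Stabilizer generators of the n-qubit GHZ state: X on all qubits and Z_i Z_{i+1}.
generators = [kron_all([X] * n)]
for i in range(n - 1):
    ops = [I2] * n
    ops[i], ops[i + 1] = Z, Z
    generators.append(kron_all(ops))

ghz = np.zeros(2**n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Toy "prepared" state sigma: ideal GHZ mixed with global depolarizing noise.
q = 0.8
sigma = q * np.outer(ghz, ghz.conj()) + (1 - q) * np.eye(2**n) / 2**n

rng = np.random.default_rng(0)
m = 200
samples = []
for _ in range(m):
    # A uniformly random stabilizer is a random product of the generators.
    S = np.eye(2**n, dtype=complex)
    for g in generators:
        if rng.integers(2):
            S = S @ g
    samples.append(np.trace(S @ sigma).real)  # <S>_sigma

print("estimated F:", np.mean(samples))
print("exact F    :", (ghz.conj() @ sigma @ ghz).real)
```

Multiplying the generator matrices directly automatically produces the correct
signs of the stabilizers (e.g., $-YYX$ for $n=3$), so no separate phase
bookkeeping is needed.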
### C.2 Average gate fidelity
Similarly to the state fidelity, we use the Monte Carlo process certification
following [31] to determine the average gate fidelity of our noisy CNOT gate.
#### Average gate fidelity.
Consider the case in which we want to implement an ideal gate
$\mathcal{U}(\rho)\coloneqq U\rho U^{\dagger}$. However, instead we can
implement only a noisy gate
$\tilde{\mathcal{U}}(\rho)\coloneqq\mathcal{U}(\Lambda(\rho))$, where
$\Lambda$ is some effective noise channel and $\rho$ is a quantum state. What
is the gate fidelity of noisy $\tilde{\mathcal{U}}$ relative to the ideal
$\mathcal{U}$. For a single given pure state
$\rho=\left|\phi\vphantom{\phi}\right\rangle\left\langle\vphantom{\phi}\phi\right|$,
the state fidelity of the output of the ideal and noisy channels is
$\displaystyle F\left(\mathcal{U},\tilde{\mathcal{U}};\rho\right)$
$\displaystyle=\left[\operatorname{Tr}\left[\sqrt{\sqrt{\mathcal{U}\left(\rho\right)}\tilde{\mathcal{U}}\left(\rho\right)\sqrt{\mathcal{U}\left(\rho\right)}}\right]\right]^{2}$
(7)
$\displaystyle=\operatorname{Tr}\left[\mathcal{U}\left(\rho\right)\tilde{\mathcal{U}}\left(\rho\right)\right]$
(8)
$\displaystyle=\operatorname{Tr}\left[\rho\Lambda\left(\rho\right)\right]\;,$
(9)
which can be used to obtain the average gate fidelity devised by a uniform
Haar average over the fidelity of the ideal and noisy output states, with
$\rho_{\psi}=\left|\psi\vphantom{\psi}\right\rangle\left\langle\vphantom{\psi}\psi\right|$,
$\displaystyle\mathcal{F}_{\mathrm{avg}}\left(\mathcal{U},\tilde{\mathcal{U}}\right)$
$\displaystyle=\int\mathrm{d}\psi\,F\left(\mathcal{U},\tilde{\mathcal{U}};\rho_{\psi}\right)$
(10)
$\displaystyle=\int\mathrm{d}\psi\,\operatorname{Tr}\left[\mathcal{U}\left(\rho_{\psi}\right)\tilde{\mathcal{U}}\left(\rho_{\psi}\right)\right]$
(11)
$\displaystyle=\operatorname{Tr}\left[\int\mathrm{d}\psi\,\left|\psi\vphantom{\psi}\right\rangle\left\langle\vphantom{\psi}\psi\right|\Lambda\left(\left|\psi\vphantom{\psi}\right\rangle\left\langle\vphantom{\psi}\psi\right|\right)\right]\;.$
(12)
To estimate
$\mathcal{F}_{\mathrm{avg}}\left(\mathcal{U},\tilde{\mathcal{U}}\right)$, we
will use the process (or entanglement) fidelity as a more experimentally-
accessible quantity.
#### Process fidelity.
Compared to the gate fidelity, the process fidelity is more readily estimated.
It can in turn serve as a direct proxy to the gate fidelity. To make the
connection, recall that the Choi-Jamiolkowski isomorphism [43] maps every
quantum operation $\Lambda$ on a $d$-dimensional space to a density operator
$\rho_{\Lambda}=(\mathbb{I}\otimes\Lambda)\ket{\phi}\bra{\phi}$, where
$\ket{\phi}=\frac{1}{\sqrt{d}}\sum_{i=1}^{d}\ket{i}\otimes\ket{i}$. For a
noise-free, ideal unitary channel $\mathcal{U}$ and its experimental, noisy
implementation $\tilde{\mathcal{U}}$, the process fidelity
$\mathcal{F}_{\mathrm{proc}}$ is the state fidelity of the respective Choi
states $\rho_{\mathcal{U}}$ and $\rho_{\tilde{\mathcal{U}}}$:
$\displaystyle\mathcal{F}_{\mathrm{proc}}({{\mathcal{U}}},{\tilde{\mathcal{U}}}):=F(\rho_{{\mathcal{U}}},\rho_{\tilde{\mathcal{U}}})\;.$
(13)
From this fidelity, the gate fidelity can be extracted using the following
relation derived in [44]:
$\displaystyle\mathcal{F}_{\mathrm{gate}}({{\mathcal{U}}},{\tilde{\mathcal{U}}})=\frac{d\mathcal{F}_{\mathrm{proc}}(\rho_{{\mathcal{U}}},\rho_{\tilde{\mathcal{U}}})+1}{d+1}\;.$
(14)
#### Estimating the process fidelity.
As described in Ref. [31], instead of a direct implementation of
$(\mathbb{I}\otimes\tilde{\mathcal{U}})\ket{\phi}\bra{\phi}$ followed by
measuring random Pauli operators on all qubits, we follow the more practical
approach, where $\tilde{\mathcal{U}}$ is applied to the complex conjugate of a
random product of eigenstates of local Pauli operators $P_{i}\otimes P_{j}$,
followed by a measurement of random Pauli operators $P_{k}\otimes P_{l}$. This
leads to the same expectation values
$\displaystyle\rho_{ijkl}:=\text{Tr}\left[(P_{i}\otimes P_{j}\otimes
P_{k}\otimes
P_{l})(\mathbb{I}\otimes\mathcal{U})\ket{\phi}\bra{\phi}\right]=\text{Tr}\left[(P_{k}\otimes
P_{l})\mathcal{U}(P_{i}\otimes P_{j})^{\ast}\right]/d\;.$ (15)
The operators are then sampled according to the relevance distribution
$\displaystyle r_{ijkl}:=r(P_{i}P_{j}P_{k}P_{l})=\frac{\rho_{ijkl}^{2}}{d}\;.$
(16)
For $\Lambda(\rho)=\mathrm{CNOT}\rho\mathrm{CNOT}^{\dagger}$, there are only
$16$ combinations of Pauli operators with a non-zero expectation value
$\rho_{ijkl}$: $\rho_{ijkl}=-\frac{1}{2}$ for
$P_{i}P_{j}P_{k}P_{l}\in\\{YYXZ,XZYY\\}$ and $\rho_{ijkl}=\frac{1}{2}$ for the
remaining 14. Thus, the
relevance distribution is uniform amongst those with $r=\frac{1}{16}$ and we
can just take the average expectation value of those $16$ operators.
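These sixteen combinations and their signs can be verified by brute force. The
sketch below conjugates the complex conjugate of each input Pauli pair through
the CNOT (which is self-inverse) and prints the unique non-zero output pair
with its sign, following Eq. (15) up to its overall normalization convention:

```python
import numpy as np
from itertools import product

paulis = {"I": np.eye(2, dtype=complex),
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.diag([1.0 + 0j, -1.0])}
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

for (a, Pi), (b, Pj) in product(paulis.items(), repeat=2):
    M = CNOT @ np.kron(Pi, Pj).conj() @ CNOT      # CNOT is self-inverse
    for (c, Pk), (d, Pl) in product(paulis.items(), repeat=2):
        val = np.trace(np.kron(Pk, Pl) @ M).real / 4
        if abs(val) > 1e-9:
            print(a + b + c + d, val)
```

The output consists of exactly 16 lines, with value $-1$ only for $YYXZ$ and
$XZYY$.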
## Appendix D Error analysis for GHZ states
### D.1 Error budget
As in Appendix B.2, we leverage (1) to estimate the total noise
$\lambda_{\mathrm{tot}}$ of a quantum circuit as motivated by the model
discussed in Appendix E. There, it is derived that
$e^{-\lambda_{\mathrm{tot}}}$ gives a lower bound on the _process_ fidelity of
the circuit. For GHZ states, however, we are interested in the _state_
fidelity, so the bound from Lemma 1 no longer applies in a rigorous sense.
Nevertheless, we find that the same model can still provide useful intuition
if we accept that
the model parameters $\lambda_{\mathrm{CNOT}},\lambda_{\mathrm{meas}}$ no
longer have a direct interpretation in terms of worst-case Pauli-Lindblad
noise or a combination of amplitude- and phase-damping noise respectively. See
Appendix E for details.
For the unitary approach we require $n-1$ CNOT gates to entangle $n$ qubits. For
simplicity we assume (and implement) only a one-dimensional connectivity chain
in our protocols and the following numbers correspond to an even number $n$
(only constant terms change when considering odd $n$). To minimize the idling
time, we start in the middle and apply CNOT gates simultaneously towards both
ends. This leads to an idle time of $\frac{n^{2}}{4}-\frac{3}{2}n+2$ times the
CNOT gate time, as displayed in Table 2. In the dynamic circuits approach we
require $\frac{3}{2}n-2$ CNOT gates in total, while the idling time is
$\frac{\mu}{2}n+1$ times the CNOT gate time, where $\mu$ corresponds to the
measurement and feed-forward time (as a multiple of the CNOT gate time).
However, here we also need to consider the errors of the additional
$\frac{n}{2}-1$ measurements. As the error coming from the CNOT gates and the
measurements is usually substantially larger than the error from the idling
time, we expect that for small $n$ the standard unitary preparation performs best.
However, as the idling time there scales as $\mathcal{O}\left(n^{2}\right)$ in
contrast to all errors in the measurement-based approach scaling only as
$\mathcal{O}\left(n\right)$, we expect a crossover for large $n$, where the
implementation with dynamic circuits will become more beneficial. The error
budget is summarized in Table 2.
Case | $t_{\mathrm{idle}}$ | $N_{\mathrm{CNOT}}$ | $N_{\mathrm{meas}}$ | Two-qubit gate depth
---|---|---|---|---
Unitary | $n^{2}/4-3n/2+2$ | $n-1$ | $0$ | $n-1$
Dynamic circuits | $1+\mu n/2$ | $3n/2-2$ | $n/2-1$ | $3+\mu$, or $O(1)$
Table 2: Comparison of the error budget of the unitary and the dynamic
circuits implementations in terms of idle time, number of CNOT gates, number
of mid-circuit measurements, and two-qubit gate depth.
### D.2 Expected cross-over for lower mid-circuit measurement errors
Here we use the values of $t_{\mathrm{idle}},N_{\mathrm{CNOT}},$ and
$N_{\mathrm{meas}}$ shown in Table 2 to predict how many qubits are required
to see a cross-over, i.e., the point where the performance of dynamic circuits
becomes higher than that of the unitary counterpart, and what the state
fidelity at the cross-over would be, as a function of the mid-circuit
measurement errors. Note that in this noise model we assume that we can
eliminate all ZZ errors by applying dynamical decoupling. We keep the idling
error constant at $\lambda_{\mathrm{idle}}=0.001$ and consider different CNOT
errors $\lambda_{\mathrm{CNOT}}\in\\{0.001,0.01,0.02\\}$. We can reach a
fidelity $>0.5$ for a CNOT error of $\lambda_{\mathrm{CNOT}}=0.01$ with mid-
circuit measurement errors $\lambda_{\mathrm{meas}}\lesssim 0.003$, and for a
CNOT error $\lambda_{\mathrm{CNOT}}=0.001$ with mid-circuit measurement errors
$\lambda_{\mathrm{meas}}\lesssim 0.012$ (see Fig. 9).
Figure 9: Noise-model predictions that indicate how many qubits are required
to see a cross-over and what the corresponding fidelity would be as a function
of the mid-circuit measurement errors.
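A minimal Python sketch of this prediction, assuming the Table 2 budgets, the
Lemma 1 bound as a proxy for the state fidelity (in the spirit of Appendix
D.1), and illustrative parameter values (with $\mu$ borrowed from Fig. 8),
could look as follows:

```python
import numpy as np

def lam_tot_unitary(n, lam_idle, lam_cnot):
    # Table 2 (even n): t_idle = n^2/4 - 3n/2 + 2, N_CNOT = n - 1.
    return (n**2 / 4 - 1.5 * n + 2) * lam_idle + (n - 1) * lam_cnot

def lam_tot_dynamic(n, mu, lam_idle, lam_cnot, lam_meas):
    # Table 2: t_idle = 1 + mu*n/2, N_CNOT = 3n/2 - 2, N_meas = n/2 - 1.
    return ((1 + mu * n / 2) * lam_idle
            + (1.5 * n - 2) * lam_cnot
            + (n / 2 - 1) * lam_meas)

def crossover(mu, lam_idle, lam_cnot, lam_meas, n_max=400):
    """Smallest even n at which the dynamic bound exceeds the unitary one."""
    for n in range(4, n_max, 2):
        f_dyn = np.exp(-lam_tot_dynamic(n, mu, lam_idle, lam_cnot, lam_meas))
        if f_dyn > np.exp(-lam_tot_unitary(n, lam_idle, lam_cnot)):
            return n, f_dyn
    return None

print(crossover(mu=3.65, lam_idle=0.001, lam_cnot=0.01, lam_meas=0.003))
```

With these values the sketch reports a cross-over near $n\approx 38$ at a
bound just above $0.5$, in line with the regime described above.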
## Appendix E Pauli-Lindblad Noise Model
In this section we present a simple framework for computing lower bounds on
fidelities using the Pauli-Lindblad noise model discussed in [42]. Pauli-
Lindblad noise channels have several nice properties that we can use to
simplify calculations, and also allow us to reduce estimates of the noise
properties of our hardware to relatively few parameters.
Normally, Pauli-Lindblad noise is the workhorse of probabilistic error
cancellation, an error mitigation scheme that leverages a characterization of
the noise in order to trade systematic uncertainty for statistical
uncertainty. Here, however, we are more interested in using Pauli-Lindblad
noise as a tool for capturing the behavior of fidelity as a function of
circuit size with an appropriate balance of rigor and simplicity.
As such, our central goal in this section is to develop mathematical tools
that allow us to develop a Pauli-Lindblad representation of various noise
sources, such as decoherence and gate noise, and to find a method to combine all of this
noise into a fidelity for the entire process. In particular, we aim to give a
justification for modeling noise via the quantity $\lambda_{\mathrm{tot}}$ as
in (1). This is achieved by Lemma 1, which states that
$e^{-\lambda_{\mathrm{tot}}}$ gives a lower bound on the process fidelity.
We leave the majority of our mathematical exposition without proof for the
sake of brevity, but present the proof of Lemma 1 at the end of this section.
#### Pauli-Lindblad noise.
Pauli-Lindblad noise is a quantum channel defined as follows. Let
$\mathcal{P}$ be the $n$-qubit Pauli group modulo phase, and consider some
$P\in\mathcal{P}$. Then for some noise rate $\lambda\in\mathbb{R}^{+}$ the
noise channel $\Gamma^{\lambda}_{P}$ is given by:
$\displaystyle\Gamma_{P}^{\lambda}(\rho)=(1-\omega)\rho+\omega P\rho
P^{\dagger}\quad\mathrm{where}\quad\omega:=\frac{1-e^{-2\lambda}}{2}\;.$ (17)
This is essentially applying $P$ with probability $\omega$. Pauli noise
channels also have a representation as time evolution with respect to a simple
Lindbladian: for $P\in\mathcal{P}$, let $\mathcal{L}_{P}(\rho):=P\rho P-\rho$.
This way $\Gamma_{P}^{\lambda}=e^{\lambda\mathcal{L}_{P}}$.
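As a quick sanity check of these definitions, the following numpy sketch (our
own code) implements Eq. (17) directly and verifies that composing two
$Z$-noise channels adds their rates:

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def gamma(P, lam):
    """Pauli-Lindblad channel of Eq. (17), acting on a density matrix."""
    w = (1 - np.exp(-2 * lam)) / 2
    return lambda rho: (1 - w) * rho + w * P @ rho @ P.conj().T

rho_plus = np.full((2, 2), 0.5)       # |+><+| is sensitive to Z noise

lhs = gamma(Z, 0.2)(gamma(Z, 0.3)(rho_plus))
rhs = gamma(Z, 0.5)(rho_plus)         # rates add for identical Paulis
print(np.allclose(lhs, rhs))          # True
```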
The main justification for why we can restrict to Pauli noise channels is
twirling. Conjugating an arbitrary noise channel by a random Pauli matrix
yields a channel that is always expressible as a product of Pauli noise.
Although our experiments do not feature twirling, even for untwirled circuits
we expect the Pauli-Lindblad noise to capture the first-order noise behavior.
Another reason why we expect our noise model to only capture the behavior to
first-order is that we assume the noise rates are the same for all qubits. All
CNOT gates and idle times are assumed to contribute the same amount of noise.
This is not a realistic representation of our hardware: in actuality,
different qubits have different coherence times, and gate qualities also vary.
When we consider circuits on many qubits, we expect these differences to
average out.
Let $\Lambda$ be a quantum channel. Then let $\tilde{\Lambda}$ be its Pauli-
twirled version given by:
$\displaystyle\tilde{\Lambda}(\rho):=\frac{1}{|\mathcal{P}|}\sum_{P\in\mathcal{P}}P\Lambda(P\rho P)P\;.$ (18)
For $Q\in\mathcal{P}$, twirled channels $\tilde{\Lambda}$ satisfy
$\tilde{\Lambda}(Q)=c_{Q}Q$ for some coefficients $c_{Q}$. For every
$\tilde{\Lambda}$ there exist noise rates $\lambda_{P}$ for
$P\in\mathcal{P}/\\{I\\}$ such that
$\tilde{\Lambda}=\prod_{P}\Gamma_{P}^{\lambda_{P}}$. These noise rates
satisfy:
$\displaystyle c_{Q}=e^{-2\sum_{P}(\lambda_{P}\cdot 1_{PQ=-QP})}\;.$ (19)
A central convenience of Pauli noise channels is that they do not interfere
with each other when propagated: Pauli noise channels commute
$\Gamma^{\lambda_{P}}_{P}\Gamma^{\lambda_{Q}}_{Q}=\Gamma^{\lambda_{Q}}_{Q}\Gamma^{\lambda_{P}}_{P}$,
and the noise rates add when the Pauli is the same:
$\Gamma^{\lambda_{1}}_{P}\Gamma^{\lambda_{2}}_{P}=\Gamma^{\lambda_{1}+\lambda_{2}}_{P}\;.$
#### Combining noise channels into a single fidelity.
Say we are trying to compute the overall amount of noise in a particular
quantum circuit that has been appropriately twirled. Gates and idle time of
the qubits all contribute some amount of Pauli noise. We propagate all of the
Pauli noise to the end of the circuit, thereby removing any noise that does
not affect certain mid-circuit measurements. Finally, we must tally up the
noise Paulis on the resulting quantum state.
One metric for measuring the error on the final state is trace distance, or
diamond norm if we are considering a channel. For a single Pauli noise source,
we have the simple relation that for any $P$ we have
$\left|\Gamma^{\lambda}_{P}-I\right|_{\diamond}=1-e^{-2\lambda}$. To
generalize this to multiple Paulis, a simple approach could be to just apply
the triangle inequality to all of the different Paulis. But it turns out we
can do much better using the following bound on the process fidelity:
###### Lemma 1.
Consider a channel $\Lambda=\prod_{P}\Gamma_{P}^{\lambda_{P}}$ for some rates
$\lambda_{P}$. Then
$\mathcal{F}_{\mathrm{proc}}(\Lambda,\mathcal{I})\geq\exp(-\sum_{P}\lambda_{P})$.
This bound is still pretty loose, but it is very simple and does better than
adding up diamond norms. This can be seen by, for example, looking at the
channel $\prod_{i=1}^{N}\Gamma_{P_{i}}^{c/N}$. Lemma 1 gives
$\mathcal{F}_{\mathrm{proc}}\geq\exp(-c)$, while adding up diamond norms and
converting them to a fidelity bound gives $\mathcal{F}_{\mathrm{proc}}\geq
1-\frac{1}{2}N(1-e^{-2c/N})$. The latter is looser for $N\geq 2$ and for any
$c$.
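For instance, with the hypothetical values $c=0.5$ and $N=10$:

```python
import numpy as np

c, N = 0.5, 10
lemma1 = np.exp(-c)                               # ~0.607
diamond = 1 - 0.5 * N * (1 - np.exp(-2 * c / N))  # ~0.524
print(lemma1 > diamond)                           # True: Lemma 1 is tighter
```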
Lemma 1 also has the key advantage that it makes computation of the overall
noise rate very simple: just add up all the noise rates. This allows us to
simply tally the total idle time and count the number of CNOTs to obtain the
total amount of noise, as in Appendix B.2.
An issue with using Lemma 1 is that it becomes increasingly loose in the limit
of large $\sum_{P}\lambda_{P}$. The quantity $\exp(-\sum_{P}\lambda_{P})$
vanishes in this limit, but in general we have
$\mathcal{F}_{\mathrm{proc}}(\Lambda,\Lambda^{\prime})\geq 1/d$ for all
$\Lambda,\Lambda^{\prime}$. When we only have one source of Pauli noise
$\Gamma_{P}^{\lambda}$ then not even the lower limit of $1/d$ can be reached
as $\lambda\to\infty$. Unfortunately, we see no way of overcoming this
limitation while preserving the mathematical elegance of this tool: we would
like to simply consider the quantity $\sum_{P}\lambda_{P}$. The reason for
this shortcoming is that we do not account for cancellations between Pauli
errors; we discuss the details in the proof at the end of this section.
Another limitation of this analysis is that it completely ignores crosstalk.
Every gate is assumed to behave independently. Assuming independent errors
corresponds to a worst-case analysis analogous to the union bound, so we would
expect the bounds resulting from Lemma 1 to still roughly capture average
error from crosstalk by accounting for it as $T_{2}$ dephasing noise, an error
that we include when modeling experiments without dynamical decoupling.
#### Propagating noise to the end of the circuit.
Next, we discuss how to move all the noise sources to the end of the circuit.
This is particularly easy since we are considering Clifford circuits. Once all
the noise is in one place, we can use Lemma 1 to combine it into a single
fidelity.
With $\mathcal{U}\cdot\coloneqq U\cdot U^{\dagger}$ as before, elementary
calculation shows that
$\mathcal{U}\Gamma^{\lambda}_{P}=\Gamma^{\lambda}_{\mathcal{U}(P)}\mathcal{U}$,
so Pauli-Lindblad noise propagated through a unitary Clifford circuit is still
Pauli-Lindblad noise.
Our circuits also feature several adaptive gates, propagation through which
can be achieved as follows. Let $\Lambda_{\mathrm{disc}}$ be the channel that
traces out the first of two qubits. Then
$\Lambda_{\mathrm{disc}}\Gamma^{\lambda}_{P\otimes
Q}=\Gamma^{\lambda}_{Q}\Lambda_{\mathrm{disc}}$. Similarly, let
$\Lambda_{\mathrm{corr},P}$ be the channel that measures the first qubit and
applies a correction $P$ onto the second qubit. If $P$ and $Q$ commute, then
$\Lambda_{\mathrm{corr},P}\Gamma^{\lambda}_{Q\otimes
R}=\Gamma^{\lambda}_{R}\Lambda_{\mathrm{corr},P}$. Otherwise,
$\Lambda_{\mathrm{corr},P}\Gamma^{\lambda}_{Q\otimes
R}=\Gamma^{\lambda}_{PR}\Lambda_{\mathrm{corr},P}$.
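As an illustration of the propagation rule
$\mathcal{U}\Gamma^{\lambda}_{P}=\Gamma^{\lambda}_{\mathcal{U}(P)}\mathcal{U}$,
the following numpy sketch checks it for $U=\mathrm{CNOT}$ and $P=X\otimes I$
on a randomly generated state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def gamma(P, lam):
    w = (1 - np.exp(-2 * lam)) / 2
    return lambda rho: (1 - w) * rho + w * P @ rho @ P.T  # real Paulis suffice

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
rho = A @ A.T / np.trace(A @ A.T)           # random two-qubit density matrix

P = np.kron(X, I2)                          # X on the control qubit
UP = CNOT @ P @ CNOT                        # CNOT maps it to X (tensor) X

lhs = CNOT @ gamma(P, 0.3)(rho) @ CNOT.T    # apply U after the noise
rhs = gamma(UP, 0.3)(CNOT @ rho @ CNOT.T)   # propagated noise after U
print(np.allclose(lhs, rhs))                # True
```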
Now that we have established how to move noise to the end of the circuit and
to tally it into a bound on the fidelity, all that remains is to show how to
bring various noise sources into Pauli-Lindblad form.
#### Decoherence noise.
We begin with decoherence noise that affects idling qubits. We consider
depolarizing, dephasing, and amplitude damping noise.
Conveniently, depolarizing and dephasing noise are already Pauli noise
channels. A depolarizing channel $\Lambda_{\mathrm{dep},q}$ replaces the input
$\rho$ with the maximally mixed state with probability $1-q$:
$\displaystyle\Lambda_{\mathrm{dep},q}(\rho)=q\rho+(1-q)\frac{I}{2^{n}}\;.$
(20)
We derive that
$\Lambda_{\mathrm{dep},q}=\prod_{P\in\mathcal{P}_{n}/\\{I\\}}\Gamma^{\lambda}_{P}$
with $q=\exp(-4^{n}\lambda)$.
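This relation can be checked numerically for $n=1$: composing the three
nontrivial single-qubit Pauli channels at equal rate $\lambda$ reproduces a
depolarizing channel with $q=e^{-4\lambda}$, as the following sketch (with an
arbitrary test state) verifies:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def gamma(P, lam):
    w = (1 - np.exp(-2 * lam)) / 2
    return lambda rho: (1 - w) * rho + w * P @ rho @ P.conj().T

lam = 0.1
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])  # arbitrary qubit state

out = rho
for P in (X, Y, Z):                  # product over all nontrivial Paulis
    out = gamma(P, lam)(out)

q = np.exp(-4 * lam)                 # q = exp(-4^n * lam) for n = 1
dep = q * rho + (1 - q) * np.eye(2) / 2
print(np.allclose(out, dep))         # True
```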
The phase damping process is given by the Lindbladian with
$L_{0}=\ket{0}\bra{0}$ and $L_{1}=\ket{1}\bra{1}$:
$\displaystyle\mathcal{L}_{\mathrm{ph}}(\rho)=\sum_{i\in\\{0,1\\}}L_{i}\rho
L_{i}^{\dagger}-\frac{1}{2}\left\\{L^{\dagger}_{i}L_{i},\rho\right\\}\;.$ (21)
Since $\mathcal{L}_{\mathrm{ph}}(\rho)=\frac{1}{2}\left(Z\rho Z-\rho\right)$, it
satisfies $e^{\lambda\mathcal{L}_{\mathrm{ph}}}=\Gamma^{\lambda/2}_{Z}$. We
can easily compute $\lambda$ from a phase damping experiment: since
$\bra{+}e^{\lambda\mathcal{L}_{\mathrm{ph}}}(\ket{+}\bra{+})\ket{+}=\frac{1}{2}(1+e^{-\lambda})$
we have $\lambda=t/T_{2}$.
The amplitude damping channel is not a Pauli-Lindblad channel, and must be
twirled in order to bring into Pauli-Lindblad form. The amplitude damping
process $\mathcal{L}_{\mathrm{damp}}$ is given by $L=\ket{0}\bra{1}$ with:
$\displaystyle\mathcal{L}_{\mathrm{damp}}(\rho)=L\rho
L^{\dagger}-\frac{1}{2}\left\\{L^{\dagger}L,\rho\right\\}\;.$ (22)
If we let
$\Lambda^{\lambda}_{\mathrm{damp}}:=e^{\lambda\mathcal{L}_{\mathrm{damp}}}$
then we have
$\tilde{\Lambda}^{\lambda}_{\mathrm{damp}}=\Gamma^{\lambda/4}_{X}\Gamma^{\lambda/4}_{Y}$.
Similarly, $\lambda$ can be obtained from an amplitude damping experiment:
since
$\bra{1}\Lambda^{\lambda}_{\mathrm{damp}}(\ket{1}\bra{1})\ket{1}=e^{-\lambda}$
we straightforwardly have $\lambda=t/T_{1}$.
If we have both dephasing and amplitude damping noise, we can combine the two
together as follows. For some $T_{1},T_{2}$, consider the combined noise
channel
$\Lambda^{t}_{\mathrm{noise}}=\exp\left(\frac{t}{T_{1}}\mathcal{L}_{\mathrm{damp}}+\frac{t}{T_{2}}\mathcal{L}_{\mathrm{ph}}\right)$.
Then:
$\displaystyle\tilde{\Lambda}^{t}_{\mathrm{noise}}=\Gamma_{X}^{\frac{t}{4T_{1}}}\Gamma_{Y}^{\frac{t}{4T_{1}}}\Gamma_{Z}^{\frac{t}{2T_{2}}}\;.$
(23)
This follows from the fact that $\mathcal{L}_{\mathrm{damp}}$ and
$\mathcal{L}_{\mathrm{ph}}$ commute.
#### Noise from unitary gates.
In principle we could perform experiments, as in [42], to determine the exact
Pauli rates for each unitary, as is necessary for probabilistic error
cancellation. However, two-qubit gates like the CNOT gate have fifteen noise
parameters corresponding to the $4^{2}-1$ nontrivial two-qubit Pauli
operators. For our purposes we would prefer to model CNOT noise using just a
single number.
One approach could be to just assume that the CNOT noise is simply
depolarizing noise. In this case, all fifteen Pauli noise rates are equal and
can be connected to the process fidelity. Say we aim to implement an ideal
unitary $U$, but our hardware can only implement
$\bar{\mathcal{U}}=\mathcal{U}\Lambda_{\mathrm{dep},q}$ up to a known fidelity
$F(\mathcal{U},\bar{\mathcal{U}})$. Then
$q=(4^{n}F(\mathcal{U},\bar{\mathcal{U}})-1)/(4^{n}-1).$
However, it turns out that spreading the error uniformly over all the
Paulis is rather cumbersome, because it requires propagating every possible
Pauli error. A more tractable approach is to consider only the worst-case
Pauli error. In that case, for any unitary $U$ and $P\in\mathcal{P}$, we have
$F(\mathcal{U},\mathcal{U}\Gamma_{P}^{\lambda})=(1+e^{-2\lambda})/2$.
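For reference, a minimal sketch of this conversion in Python (the fidelity value is an illustrative placeholder, not a measured quantity):

```python
import math

def worst_case_rate(f_proc: float) -> float:
    """Invert F = (1 + exp(-2*lam)) / 2 for the worst-case Pauli rate lam."""
    return -0.5 * math.log(2.0 * f_proc - 1.0)

print(worst_case_rate(0.995))  # ~= 0.005
```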
#### Conclusions.
We have derived a rigorous justification for a rather simple strategy for
deriving theoretical predictions of noisy superconducting quantum hardware.
Expressions for noise as a function of circuit size can be derived simply by
counting the amount of idle time, the number of CNOT gates, and the number of
mid-circuit measurements. The model has very few parameters: the Pauli-Lindblad
noise rates corresponding to each of these operations (sometimes per unit
time). These different noise rates are added up and converted to a
fidelity via Lemma 1.
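As a concrete illustration, here is a minimal sketch of this bookkeeping; all parameter values and operation counts are hypothetical placeholders, not measured hardware data:

```python
import math

# Illustrative (not measured) model parameters.
T1, T2 = 100e-6, 80e-6   # relaxation / dephasing times [s]
LAM_CNOT = 5e-3          # worst-case Pauli rate per CNOT
LAM_MEAS = 1e-2          # worst-case Pauli rate per mid-circuit measurement

def fidelity_bound(idle_times, n_cnot, n_meas):
    """Lemma 1 lower bound: F_proc >= exp(-sum of Pauli-Lindblad rates).

    Each per-qubit idle duration t contributes the rates of Eq. (23):
    t/(4*T1) for X, t/(4*T1) for Y, and t/(2*T2) for Z.
    """
    lam = sum(t / (4 * T1) + t / (4 * T1) + t / (2 * T2) for t in idle_times)
    lam += n_cnot * LAM_CNOT + n_meas * LAM_MEAS
    return math.exp(-lam)

print(fidelity_bound(idle_times=[2e-6] * 5, n_cnot=12, n_meas=3))
```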
The advantage of a rigorous derivation is that we can directly see the ways in
which this model fails to tightly capture the actual error. A central issue is
that Lemma 1 does not take into account cancellation between various noise
sources, causing the fidelity to approach zero in the limit of high rate. This
is despite the fact that the worst possible process fidelity is nonzero.
Another oversimplification is that we do not capture the fact that not all
possible Pauli noise rates can affect a given observable. We also cannot
capture correlations between errors, as may be the case with crosstalk, and
instead take a worst-case approach reminiscent of the union-bound. All of
these reasons indicate that this model should produce relatively loose lower
bounds.
###### Proof of Lemma 1.
Say $\Lambda(\rho)=\sum_{P,Q}c_{P,Q}P\rho Q$. Then
$\mathcal{F}_{\mathrm{proc}}(I,\Lambda)=\mathcal{F}_{\mathrm{proc}}(I,\tilde{\Lambda})=c_{I,I}$.
The proof proceeds with two loose lower bounds that notably fail to capture
cancellations between different error sources. Given
$\Lambda=\prod_{P}\Gamma_{P}^{\lambda_{P}}$, recall that
$\Gamma_{P}^{\lambda_{P}}(\rho)=(1-\omega_{P})\rho+\omega_{P}P\rho
P^{\dagger}$. Expanding out $\Lambda$, we see that:
$\displaystyle
c_{I,I}\geq\prod_{P}(1-\omega_{P})=\prod_{P}\frac{1+e^{-2\lambda_{P}}}{2}\;.$
(24)
Next, observing that $(1+e^{-2x})/2\geq e^{-x}$ for $x>0$:
$\displaystyle c_{I,I}\geq\prod_{P}e^{-\lambda_{P}}=\exp\left(-\sum_{P}\lambda_{P}\right)\;.$
(25)
∎
#### Convergence to 0.4.
In the main text, we remarked that the fidelities of the measurement-based
CNOT experiments converge to a value slightly below 0.4, as is observed in
Figure 1 (c). As discussed, this is due to the structure of the measurement-
based circuit in Figure 1 (a). While the circuit also experiences infidelity
on the top and bottom qubits due to idle time and some CNOTs, the only
infidelity that actually scales with $n$ is due to incorrect $Z$ and $X$
corrections on the top and bottom qubits respectively.
We can model this noise as
$\Gamma_{ZI}^{\lambda_{ZI}}\Gamma_{IX}^{\lambda_{IX}}$ in the limit of large
$\lambda_{ZI},\lambda_{IX}$, in which case $\omega_{ZI},\omega_{IX}$ approach
$1/2$. We proceed as in (24). Since these Pauli errors cannot cancel, the
calculation is exact.
$\displaystyle\mathcal{F}_{\mathrm{proc}}(I,\Gamma_{ZI}^{\lambda_{ZI}}\Gamma_{IX}^{\lambda_{IX}})=c_{I,I}=(1-\omega_{ZI})(1-\omega_{IX})=1/4\;.$
(26)
This converts to
$\mathcal{F}_{\mathrm{gate}}(I,\Gamma_{ZI}^{\lambda_{ZI}}\Gamma_{IX}^{\lambda_{IX}})=(4\mathcal{F}_{\mathrm{proc}}(I,\Gamma_{ZI}^{\lambda_{ZI}}\Gamma_{IX}^{\lambda_{IX}})+1)/(4+1)=0.4$.
1: Language Technologies Research Center, KCIS, IIIT Hyderabad
2: TCS Research, Hyderabad, India
# LimGen: Probing the LLMs for Generating Suggestive Limitations of Research
Papers
Abdur Rahman Bin Md Faizullah¹*, Ashok Urlana²*, Rahul Mishra¹
###### Abstract
Examining limitations is a crucial step in the scholarly research reviewing
process, revealing aspects where a study might lack decisiveness or require
enhancement. This aids readers in considering broader implications for further
research. In this article, we present a novel and challenging task of
Suggestive Limitation Generation (SLG) for research papers. We compile a
dataset called LimGen, encompassing 4068 research papers and their associated
limitations from the ACL anthology. We investigate several approaches to
harness large language models (LLMs) for producing suggestive limitations, by
thoroughly examining the related challenges, practical insights, and potential
opportunities. Our LimGen dataset and code can be accessed at
https://github.com/armbf/LimGen.
###### Keywords:
Limitations LLM Constrained Text Generation.
* Equal contribution.
## 1 Introduction
The process of reviewing research articles lies at the core of the scientific
publication process. It serves as a pivotal stage wherein experts meticulously
scrutinize the content, methodology, and findings of a study before it is
disseminated to the broader scientific community. This rigorous evaluation and
critique of the strengths and limitations of the research work ensures the
integrity, reproducibility, and credibility of the research being presented.
Identifying the potential limitations of the research papers is a very tedious
and time-consuming process that involves delving into the intricacies of the
research methodology, understanding the theoretical framework, and grasping
the nuances of the subject matter. This endeavor demands not only a broad
knowledge base but also keen attention to detail and analytical thinking.
Table 1: Comparison of actual vs LLM-generated suggestive limitations (best
viewed in color). Magenta indicates the matching content between the actual
and LLM-generated limitations, whereas Blue represents the novel limitations.

Actual Limitation: Contamination is a significant problem in the era of
pretrained language models trained on large web corpora… Additionally, this
data’s distribution is fixed to pre-college Physics, Chemistry, and
Mathematics. There are more gradations and difficulty levels at which the
model can be evaluated, which have not been tested as part of our analysis.

Llama2-DPR: The limitations of the approach are: The dataset is limited to
only three subjects (Physics, Chemistry, and Mathematics) and does not cover
other important subjects like Biology, Geology, and Computer Science. The
dataset does not include any real-world applications or scenarios, limiting
the generalization of the model’s capabilities. The evaluation metric used is
based on the number of correct answers, which does not fully capture the
model’s reasoning abilities. The dataset does not include any adversarial
examples or challenging cases to test the model’s robustness and adaptability.
This paper presents a novel task of Suggestive Limitation Generation (SLG) for
research papers, which aims to generate a diverse array of potential
limitations specific to each paper, providing reviewers with valuable insights
to facilitate their assessment process. The task of SLG poses a greater
challenge compared to text summarization, question-answering (QnA), and open-
ended text generation tasks. It demands not only reasoning abilities but also
the capacity to discern and incorporate associations from a corpus of
previously encountered papers during the fine-tuning process. For example, in
Table 1, we show the actual limitation block from a research paper in row 1
and the corresponding generated limitations in row 2. Illustrated in magenta,
the proposed model demonstrates the capability to generate limitations akin to
those originally outlined in the paper (the dataset is limited to only three
subjects: Physics, Chemistry, and Mathematics). Additionally, it showcases the
ability to propose novel, valid limitations (depicted in blue) related to
adversarial examples, a facet not included by the authors in the original limitations.
The key idea behind this approach is to capitalize on cues regarding the
similarities or differences among research papers and learn to recommend
comparable limitations for a paper, especially when its underlying methodology
closely aligns with that of a set of other papers. To this end, we create a
dataset called LimGen, comprising 4068 research papers and corresponding
limitations from the ACL anthology. Subsequently, we probe many state-of-the-
art large language models (LLMs) in a multitude of experimental setups to
generate the suggestive limitations of the research papers. We conduct a
thorough evaluation by utilizing automatic (ROUGE-based), human (manual
expert-based), and model-assisted (LLM-based) approaches.
The key contributions of this work are: 1) To the best of our knowledge, we
are the first to propose the task of Suggestive Limitation Generation (SLG)
for research papers. 2) We release an SLG dataset, LimGen, consisting of 4068
papers and corresponding limitations. 3) We propose and experiment with
several schemes to utilize LLMs for SLG. 4) We perform thorough evaluations
using automated, human, and LLM-driven methodologies.
## 2 Related work
Scientific document understanding poses a persistent challenge, primarily
attributable to its structure, diverse content modalities (such as tables and
figures), and the incorporation of citations within the text. Recently,
large-scale scientific document summarization datasets have been automatically
collected from public repositories [3, 7]. Several works on scientific
documents address tasks such as abstract generation [12], summarizing the
contributions outlined in a paper [17, 10], scientific paper summarization
[2], and formulating multi-perspective summaries by leveraging reviews of the
research papers [4, 24].
Moreover, a few works delve into other forms of supervision for scientific
document understanding, including citations [18, 25], author-written
highlights [5], transcripts from conference presentations of the research
papers [14] and annotations [19]. In our work, we collected the research
papers and corresponding limitations from the ACL anthology. In contrast to
existing works, we attempt to generate suggestive limitations of the research
papers using LLMs. To the best of our knowledge, this is the first work
towards generating limitations of the research papers.
## 3 LimGen Dataset
### 3.1 Dataset collection
We obtain the dataset from the ACL Anthology111https://aclanthology.org/ website.
We take advantage of the recently mandated inclusion of a ‘Limitations’
section in submissions to Computational Linguistics-related venues. We scrape
the proceedings of the EMNLP, ACL, and EACL venues from 2022 and 2023. After
obtaining the papers, the initial step involves using
scipdf_parser222https://github.com/titipata/scipdf_parser to parse the PDFs.
The parser segregates the content of the paper into section-wise information.
From the extracted sections, to create the ‘source’ text for the SLG task, we
discard the Abstract, Introduction, Related Works, Acknowledgements,
Conclusion, Ethics Statement, Appendix, References, and Limitations sections,
and preserve the main contribution of the paper in the form of methodology,
experiments, results, and discussions. We use the corresponding actual
‘limitations’ section as the reference limitations. In total, we utilize 4068
peer-reviewed short and long papers from three different ACL venues. A sketch
of this preprocessing step is shown below.
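A minimal sketch of the parsing and section-filtering step, assuming the scipdf_parser API and a placeholder file path:

```python
import scipdf  # https://github.com/titipata/scipdf_parser

DROP = {"abstract", "introduction", "related work", "related works",
        "acknowledgements", "conclusion", "ethics statement",
        "appendix", "references", "limitations"}

article = scipdf.parse_pdf_to_dict("paper.pdf")  # placeholder path
# 'Source' text: everything except the discarded sections.
source = "\n\n".join(
    sec["text"] for sec in article["sections"]
    if sec["heading"].strip().lower() not in DROP
)
# Reference: the actual 'Limitations' section, if present.
reference = next((sec["text"] for sec in article["sections"]
                  if sec["heading"].strip().lower() == "limitations"), "")
```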
As an initial exploratory analysis of the proposed LimGen dataset, we compute
several key statistics. Notably, the average length of the research papers
stands at approximately 5000 tokens and 187 sentences, whereas the limitation
sections average around 230 tokens and 9 sentences. The length of the papers
poses a challenge for large language models with limited context windows. For
detailed statistics corresponding to each conference, please refer to Table 2.
Table 2: LimGen Dataset Statistics (number of research papers: 4068).

| Venue | #Papers | Statistic | Value |
|---|---|---|---|
| ACL 2022 | 1750 | #Avg words per paper | 5122 |
| EMNLP 2022 | 1227 | #Avg sentences per paper | 188 |
| EACL 2022 | 456 | #Avg words per limitation | 230 |
| EMNLP 2023 | 635 | #Avg sentences per limitation | 10 |
Table 3: Manual analysis of 60 research papers.

| | Yes | No | Partial |
|---|---|---|---|
| Relevance | 37 | 3 | 20 |
| Deduce Limitation | 12 | 13 | 35 |
| Future work or Limitation | 15 | 13 | 32 |

Limitations related to:

| Methodology | Experimental setup | Dataset | Evaluation |
|---|---|---|---|
| 41 | 17 | 22 | 7 |
### 3.2 Nature of the limitations
To understand the nature of the limitations in research papers, we conduct a
manual analysis of 60 papers. We maintain diversity (short and long papers
from diverse venues) in paper selection to capture the stylistic variations of
the limitations present in the research papers.
The analysis aims to understand: 1) the relevance of the underlying limitation
to the research paper; 2) whether the given limitation can be deduced by
reading the paper; 3) whether the mentioned limitation represents a real
constraint or instead suggests potential avenues for future research; and 4)
how the limitation relates to specific sections of the paper, including
methodology, dataset, evaluation, or experimental setup. We present the
outcome of our manual analysis in Table 3.
We note that within our sample, numerous papers regard their future prospects
as limitations. The extent of the limitation section ranges from a few
sentences to substantially lengthy paragraphs. Additionally, our observations
indicate that in over 50% of the papers, discerning the stated limitation
directly from the text is not straightforward. Furthermore, a significant
portion of these limitations are predominantly associated with the methodology
section of the paper. There are a few instances where the limitations cover
more than one section of the research paper.
## 4 Benchmark Experiments
### 4.1 Task formulation
This section introduces the Suggestive Limitation Generation (SLG) task
formulation. To produce the limitations of the papers, we approach the task as
a Seq2Seq problem. Precisely, we craft a model designed to take a scientific
paper $R$ as input and systematically produce a structured limitation block
$L=l_{1:n}$, where $l_{1:n}$ denotes the $n$ limitations of $R$, generated
sequentially.
### 4.2 Methodology
This section describes the various approaches to generate the suggestive
limitations. We explore two suitable text generation paradigms to generate the
limitations of the research papers. Firstly, we consider the limitation
generation as a summarization task and utilize the summarization-specific pre-
trained models including BART333https://huggingface.co/facebook/bart-large-cnn
[15] and PEGASUS444https://huggingface.co/google/pegasus-large [26] to
generate the suggestive limitations. While summarization typically involves
limited or constrained generation entropy, the SLG task demands a higher
degree of generation entropy: unlike summarization, where the model’s task is
to condense information, in SLG the model must infer and recommend
limitations from the source content, drawing on the understanding it acquires
during fine-tuning. Hence, the generative scheme becomes pivotal. To this end, as the
second paradigm, we utilize popular Large Language Models (LLMs) namely
T5555https://huggingface.co/google-t5/t5-base [20], Cerebras-GPT [9] and Llama
2 [22]. To experiment with both of these paradigms, we utilize the following
three schemes.
Figure 1: General architecture diagram for the suggestive limitation generation.
Figure 2: Architecture diagram for DPR fine-tuning.
Figure 3: Architecture diagram for chain modeling.
#### 4.2.1 Non-truncated research paper
In this scheme, we employ the entire research paper and its associated
limitations to experiment with both summarization-specific and generative
models, as illustrated in Figure 1. We fine-tune the summarization models such
as BART and PEGASUS, and also utilize generative models including T5, Llama 2,
and Cerebras-GPT to perform zero-shot prompting and fine-tuning. To experiment
with the generative models in this scheme, we use the prompt depicted in Table
4. However, this approach is constrained by the models’ max input token limit,
resulting in a lack of comprehensive context for the research paper.
Table 4: Prompt for fine-tuning with the non-truncated research papers.

Prompt: Generate limitations or shortcomings for the following scientific paper
{Paper Text}
Limitations:
End-of-Prompt
#### 4.2.2 Dense Passage Retrieval (DPR)
To address the length constraint in the Non-truncated research paper scheme,
we employ the DPR approach. This approach processes only the relevant passages
for each sentence present in the limitation section. To obtain the relevant
passages, we utilize the following three-stage approach.
1. Paragraph and sentence processing: we segment the research papers into
paragraphs and obtain the sentences of each paragraph by using
Spacy666https://spacy.io/api/tokenizer.
2. Tokenization and paragraph management: we tokenize these passages using
BertTokenizer [8] to keep each passage within the max input token limit of
1024 for Llama 2 and 2048 for Cerebras. We merge smaller passages for
optimization and split larger passages to adhere to the max input token limit.
3. Limitation sentence extraction and encoding: we encode the limitation
sentences of each document and the passages using SentenceTransformer [21]
(all-MiniLM-L6-v2777https://www.sbert.net/). We compute the cosine similarity
between each sentence in the limitations and all sentences within each
passage, discard the passages with a similarity score of less than 0.5, and
ultimately identify the top three passages with the highest cosine similarity
for each limitation sentence (a sketch of this retrieval step follows the list).
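A minimal sketch of this retrieval step, assuming sentence-segmented passages from steps 1-2 (variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def top_passages(limitation_sentence, passages, k=3, threshold=0.5):
    """Top-k passages for one limitation sentence, by max sentence similarity."""
    query = encoder.encode(limitation_sentence, convert_to_tensor=True)
    scored = []
    for p in passages:
        sent_emb = encoder.encode(p["sentences"], convert_to_tensor=True)
        best = float(util.cos_sim(query, sent_emb).max())
        if best >= threshold:  # discard weakly related passages
            scored.append((best, p["text"]))
    return [text for _, text in sorted(scored, reverse=True)[:k]]
```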
Table 5: Prompt for DPR-based fine-tuning.

Prompt: Generate limitations or shortcomings for the following passage from a scientific paper
passage: {DPR paragraph}
A brief technical summary of the scientific paper for context: {Summary of the paper}
Limitations:
End-of-Prompt
We fine-tune the model by employing individual limitation sentences as target
outputs, with the input comprising the top three passages retrieved through
DPR (Dense Passage Retrieval). These passages are specifically obtained using
the respective limitation sentence as the query. The prompt for the DPR-based
fine-tuning is depicted in Table 5. The fine-tuned model is utilized to
generate limitations for each passage extracted from the paper. These
limitations are then compiled to form the comprehensive set of limitations
associated with the research paper as shown in Figure 2. Both the
summarization and generative models utilize the DPR-retrieved passages for the
experiments. While generally effective, this approach may produce irrelevant
limitations due to the lack of a broader context of the entire research paper.
To tackle this challenge, we investigate the application of chain modeling
techniques as a prospective solution.
#### 4.2.3 Chain Modeling
To overcome the constraints posed by the lack of contextual information in the
DPR approach, we implement the chain modeling approach inspired by
LangChain888https://github.com/LangChain-ai/LangChain. The chain modeling
consists of two stages. In stage 1 (limitation generation), we generate the
limitations for all passages of the research paper obtained by following the
steps 1 and 2 of the DPR pipeline (see Section 4.2.2). However, when
processing individual passages, the model lacks the comprehensive context
provided by the entire research paper. To overcome this, we include the
summary of the research paper along with each passage in the prompt. The
prompt used to obtain the summary is shown in Table 6. In stage 2
(refinement), we refine and distill all the generated limitations, eliminate
obvious duplicates, and standardize the language, as depicted in Figure 3. The
prompts for both stages are shown in Table 7.
The chain modeling approach utilizes two distinct fine-tuned models. The
former (LLM1 in Figure 3) is fine-tuned on DPR passages together with the
summary of the research paper as input and a limitation as output, using the
prompt in Table 5. The latter (LLM2 in Figure 3) is fine-tuned on the full
non-truncated paper as input and the corresponding paper limitations as
output, using the prompt in Table 4. This approach is indicated as
Llama2-DPR-Distilled in Table 9. In another approach, the full non-truncated
fine-tuned Llama 2 model serves as both LLM1 and LLM2, which is indicated as
Llama2-FT-Distilled (FT refers to Full-text) in Table 9. We also implement a
chain modeling variant where the result of the previous iteration is passed to
the current iteration along with the passage; however, we observe that when
the model hallucinates or produces repetitions, all the subsequent steps are
affected. The corresponding results are reported in Table 9 under
Llama2-Continuous. A sketch of the two-stage procedure is given below.
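A minimal sketch of the two-stage generate-then-distill loop; the `llm1_generate`/`llm2_generate` callables are hypothetical wrappers around the two fine-tuned models, and the prompt strings follow Tables 5 and 7:

```python
def chain_limitations(llm1_generate, llm2_generate, passages, summary):
    """Two-stage chain: per-passage generation (Table 5 prompt),
    then distillation into a consolidated list (Table 7 prompt).

    llm1_generate / llm2_generate: hypothetical prompt -> completion
    callables wrapping the fine-tuned models (e.g. served with vLLM).
    """
    # Stage 1: one or two limitations per passage, with the paper
    # summary supplied as global context.
    raw = [
        llm1_generate(
            "Generate limitations or shortcomings for the following "
            "passage from a scientific paper\n"
            f"passage: {p}\n"
            "A brief technical summary of the scientific paper for "
            f"context: {summary}\n"
            "Limitations:"
        )
        for p in passages
    ]
    # Stage 2: distill into a consolidated, de-duplicated list.
    joined = "\n".join(raw)
    return llm2_generate(
        "The following is a set of limitations:\n"
        f"{joined}\n"
        "Take these and distill them into a final, consolidated list "
        "of limitations:"
    )
```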
Table 6: Prompts for generating the summary of the research paper.

Initial Prompt: Write a concise technical summary of the following: {first passage}
CONCISE TECHNICAL SUMMARY: {Summary of the paper}
End-of-Prompt

Iteration Prompt: Your job is to produce a final technical summary of a research paper. We have provided a summary generated by you up to a certain point: {summary from the previous step}
We have the opportunity to refine the existing summary (only if needed) with some more context below.
{next passage}
Given the new context, refine the original technical summary. Keep the technical summary less than 350 words. If the context isn’t useful in the technical research context, return the original summary. Do not ask any questions in the response.
Refined Technical Summary:
End-of-Prompt
### 4.3 Experimental Setup
To conduct experiments with generative models with DPR and chain modeling
approaches, we use the Cerebras-GPT [9] 1.3B and Llama 2 [22] 7B versions. Due
to hardware constraints, we utilize the smaller versions of the LLMs and
perform the fine-tuning with the LoRA [11] configuration with 8-bit
quantization. We utilize the XTuring code
base999https://github.com/stochasticai/xTuring for the fine-tuning. For the
DPR approach, we limit the number of top passages passed to the model to 2 for
Llama 2 and 3 for Cerbras-GPT. Moreover, we utilize four Nvidia GeForce RTX
2080 Ti GPUs (11GB). Due to increased memory requirements for finetuning Llama
2 models for chain modeling with non-truncated research papers and processing
the DPR dataset with summaries as context, we temporarily upgrade to 4 NVIDIA
GeForce RTX 3090 GPUs. We use 80-10-10 split of the LimGen dataset for the
creation of train, valid, and test datasets.
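For illustration, a minimal sketch of an equivalent 8-bit LoRA configuration using the Hugging Face peft/transformers stack (our actual runs use the xTuring code base; the hyperparameter values below are illustrative, not the exact ones used):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,   # illustrative values
    target_modules=["q_proj", "v_proj"],     # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # LoRA trains <1% of the weights
```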
##### Specifics for Chain Modelling:
The chain modeling approach requires more computation due to an increase in
the input context length and the need for higher inference speed to process
all the passages in a paper. To accommodate these requirements, we fine-tune
the Llama 2 model using Axolotl101010https://github.com/OpenAccess-AI-
Collective/axolotl with LoRA and FlashAttention [6]. Further, we use AWQ via
AutoAWQ111111https://github.com/casper-hansen/AutoAWQ with zero-point, group
size of 128, 4 bits, and GEMM version on vLLM [13] for efficient inference.
Table 7: Prompts for the chain modeling approach.

Iteration Prompt: Your job is to take in a passage from a research paper and a concise summary of that research and identify one or two main limitations from the given passage using the summary as context.
Paper Passage: {passage}
Paper Summary: {summary}
Limitations:
End-of-Prompt

Distill Prompt: The following is a set of limitations: {list of generated limitations}
Take these and distill them into a final, consolidated list of limitations:
End-of-Prompt
## 5 Experimental Results and Analysis
Summarization and generative approaches. The results of the experiments with
summarization and generative models are listed in Table 8. Although the
objectives of generating a summary and a suggestive limitation may appear
similar, our experiments reveal that models trained specifically for
summarization do not effectively generate insightful limitations, often merely
extracting sentences from the texts. In the case of generative models, we
observe a significant decline in output quality when the ‘Limitations:’
keyword is excluded from the prompt. In the generation approach, the Llama 2
and Cerebras models demonstrate proficiency in limitation generation, but
their effectiveness is hindered by the truncation of the input paper due to
token length limitations. Moreover, they fail to generate limitations across
different sections of the research paper due to the limited context length.
Dense Passage Retrieval. To overcome the limited-context issue, we utilize
the relevant passages obtained from the DPR approach to fine-tune the model.
This model then iteratively takes passages and generates limitations for each
passage, which are collated to obtain the set of limitations for a paper. This
approach performs generally well when a limitation can be derived from a
passage. However, despite the DPR approach being effective in highlighting
relevant limitations, it inadvertently points to less pertinent,
paragraph-level limitations due to the lack of overall context of the paper.
Chain modeling. In this approach, we experiment with fine-tuning the models by
providing the summary of the entire paper along with the passages, to address
the missing-context issue. After all the limitations are generated from the
input passages and summary, the refinement stage discards all the duplicates.
Table 8: Limitation generation experimental results; BS refers to BERTScore. The first R-1/R-2/R-L/BS column group is without DPR data, the second with DPR data.

| Model | Approach | R-1 | R-2 | R-L | BS | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|---|---|---|---|---|
| BART-large | Fine-tuning | 30.8 | 4.5 | 15.8 | 82.8 | 10.7 | 0.6 | 8.3 | 82.8 |
| PEGASUS-large | Fine-tuning | 20.2 | 3.1 | 12.7 | 82.3 | 16.7 | 7.1 | 14.2 | 84.7 |
| T5-base | Fine-tuning | 27.7 | 4.3 | 16.4 | 83.6 | 18.8 | 7.6 | 16.2 | 85.9 |
| Llama-2-7b | Zero-shot | 21.3 | 3.3 | 12.1 | 81.9 | 16.7 | 5.2 | 8.9 | 82.4 |
| Cerebras GPT2.7B | Zero-shot | 17.6 | 2.1 | 12.1 | 78.9 | 19.8 | 4.8 | 10.3 | 80.5 |
| Llama-2-7b | Fine-tuning | 21.4 | 3.1 | 12.7 | 81.1 | 34.8 | 11.0 | 17.7 | 83.5 |
| Cerebras GPT2.7B | Fine-tuning | 16.9 | 1.9 | 12.0 | 79.1 | 32.4 | 9.6 | 15.9 | 83.4 |
Table 9: Limitation generation experimental results for chain modeling. The model utilized for distillation/refinement consistently involves Llama2 fine-tuned on the full paper. The ‘Fine-tuning’ column indicates the type of dataset used for fine-tuning.

| Chain Modeling | Fine-tuning | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|---|
| Llama2-Continuous | Full paper | 24.3 | 6.2 | 15.6 | 82.9 |
| Llama2-DPR | DPR dataset | 33.5 | 9.9 | 16.3 | 83.2 |
| Llama2-FT-Distilled | Full paper | 28.5 | 5.8 | 15.2 | 83.5 |
| Llama2-DPR-Distilled | DPR dataset | 30.4 | 7.6 | 16.2 | 83.3 |
### 5.1 Evaluation
To assess the performance of the proposed models, we adopt three types of
evaluation strategies: 1) automatic evaluation, 2) LLM-based evaluation, and
3) human evaluation.
Automatic evaluation. We conduct the automatic evaluation by using the
standard evaluation metrics such as ROUGE [16] and BERTScore [27]. We use the
RoBERTa121212https://huggingface.co/FacebookAI/roberta-large large model for
the BERTScore calculation.
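A minimal sketch of this automatic evaluation (list contents and variable names are illustrative):

```python
from rouge_score import rouge_scorer
from bert_score import score as bert_score

def evaluate(generated: list[str], references: list[str]):
    """ROUGE-1/2/L plus BERTScore (RoBERTa-large) for generated limitations."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                      use_stemmer=True)
    rouge = [scorer.score(ref, gen)
             for ref, gen in zip(references, generated)]
    # BERTScore with the RoBERTa-large model, as in our setup.
    _, _, f1 = bert_score(generated, references,
                          model_type="roberta-large", lang="en")
    return rouge, f1.mean().item()
```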
Table 10: Prompt for GPT-4-based evaluation of the generated limitations.

Evaluation Prompt: For the below sets of limitations created for the above research paper, tell me if they are actually limitations and if each limitation set is a proper limitation of the paper even though it might not be mentioned as a limitation in the original paper. Rate them with respect to the original limitations section in the above paper and with respect to the paper itself. Also give each set a score of 1 to 5 on the above qualities, with 5 indicating very good limitations for the above paper. Defend the score.
{Limitations generated by each model}
End-of-Prompt
LLM-based Evaluation. We perform the LLM-based evaluation by utilizing GPT-4
[1] to evaluate the quality of the generated limitations. We use a zero-shot
prompting strategy to obtain the evaluation scores from GPT-4, instructing it
to assign a score between 1 (worst) and 5 (best). As shown in Table 11, the
‘Llama2-FT-Distilled’ approach outperforms the remaining models, and T5-base
obtains the lowest score. This indicates that summarization-specific models
fail to generate limitations of research papers due to the inherent nature of
their training objective. The prompt used for the LLM-based evaluation is
shown in Table 10. We notice that GPT-4 does not thoroughly analyze the
limitations: it tends to assign high scores to general limitations of the
model or approach, even if they may not be accurate within the context of the
provided research paper. When prompted again to verify the limitations once
more, it might fail to identify incorrect limitations if they are presented in
a manner that implies they pertain to some aspect discussed in the paper. When
a specific limitation is singled out and prompted for re-evaluation, GPT-4
shows improved performance but struggles until the actual issue with the
limitation is pointed out explicitly.
Human Evaluation. We perform a human evaluation of 50 research papers
(selected at random from the test set) and the corresponding generated
limitations. The limitations are generated by four different systems:
Llama2-DPR, Llama2, T5-base, and Llama2-FT-Distilled. We asked the evaluators
to rate Yes, No, or Partial for each of the following questions. Q1: Does the
limitation generated by the model make sense? Q2: Is there any overlap between
the gold and generated limitation? Q3: Is the generated limitation an actual
limitation (it can instead be a summary or prospective work)? Q4: Does the
generated limitation contain any hallucinations, repetitive sentences, or
grammatical errors?
The results of the human evaluation are detailed in Table 12. Our findings and
manual evaluations suggest a superior performance of chain modeling using
Llama2 trained on the full paper dataset for both limitation generation and
reduction step (Llama2-FT-Distilled). This approach yields fewer but proper,
coherent limitations, effectively reducing non-limitation and speculative
content. The enhancement in the quality of generated limitations can be
credited to the comprehensive summary provided to the model, enabling it to
grasp the entirety of the paper’s context. Also, the distillation step of the
chain modeling adds coherence and structure to the results. The DPR dataset-
trained model (Llama2-DPR-Distilled) closely trails behind, likely owing to
its proficiency in generating highly pertinent limitations. However, it
occasionally produces generic limitations rather than focused ones, such as
“The model employs Micro-F1 scores as the primary evaluation metric, yet other
metrics might be more suitable depending on the particular task or
application”.
Table 11: Automatic evaluation using GPT-4.

| | T5-base | Llama2 | Llama2-DPR | Llama2-FT-Distilled |
|---|---|---|---|---|
| Score | 2.71 | 3.60 | 3.12 | 4.10 |
Table 12: Human evaluation of LLM-generated limitations; for Q1-Q3, higher ‘Yes’ counts are preferred, whereas for Q4 higher ‘No’ counts are desired. Each cell gives Yes/No/Partial counts.

| | Llama2-DPR | Llama2 | T5-base | Llama2-FT-Distilled |
|---|---|---|---|---|
| Q1 | 10/24/16 | 19/12/19 | 10/22/18 | 35/04/11 |
| Q2 | 06/36/08 | 05/31/14 | 02/34/14 | 14/15/21 |
| Q3 | 08/24/18 | 22/13/15 | 10/21/19 | 31/04/15 |
| Q4 | 18/26/07 | 11/30/09 | 27/12/11 | 10/31/09 |
Error analysis. As part of human evaluation, Q4 helps to obtain insights for
the error analysis of the models. We notice that the ‘Llama2-FT-Distilled’
model generates better limitations compared to other models. The limitations
generated by ‘Llama2-DPR’ and ‘T5-base’ do not make sense and contain a lot of
noisy information or produce the summary of the research paper rather than
limitations. Moreover, the T5-base model is highly prone to directly copying
several phrases from the research paper. Llama2-DPR considers local
limitations, referring to limitations mentioned in a passage that refers to
another paper or approach but are not the limitations of the current paper. We
have also observed that more than 50% of limitations generated from T5-base
are prone to hallucination or contain repetitive phrases. Despite generating
lengthy limitations, most of the ‘Llama2-FT-Distilled’ model-generated
limitations make sense.
## 6 Challenges and Future work
In this section, we explore the key challenges and potential opportunities
associated with the SLG task.
Complexity of the SLG task. As illustrated in Table 3, it is often difficult
to infer the actual limitations solely from the content of the papers due to
the lack of detailed context surrounding these limitations. Consequently,
predicting such nuanced limitations poses a challenge for the model. Our
experiments reveal that both summarization and open-text generation strategies
struggle to generate these intricate limitations. Therefore, in our
experiments, we place emphasis on the fine-tuning phase, with the expectation
that the model will learn to discern similarities and differences across
research papers, enabling it to infer nuanced limitations more effectively.
Furthermore, the concept of similarity among papers can be explicitly modeled
by incorporating auxiliary information, such as citations.
Evaluation metrics suitability. Lexical overlap-based metrics such as ROUGE
operate on n-gram matching, yet many generated limitations feature valid novel
sentences and phrasings compared to those mentioned in the research papers.
This disparity makes lexical matching-based metrics imperfect for evaluating
the SLG task. As shown in Tables 8 and 9, we find no discernible variation in
the BERTScore values across the models. However, our human evaluation reveals
considerable variation in the quality of the limitations generated by
different models. Developing a tailored evaluation metric for the SLG task
stands out as the most promising path forward.
Multi-modal content. We do not explore the utilization of non-textual elements
such as images and tables present in the research paper to generate the
limitations. Images such as architecture diagrams and plots, along with tables
like result tables and ablation tables, can provide supplementary context,
especially for grasping nuanced potential limitations.
Coherence and relevance. We observe that despite the apparent superior
performance of DPR-based models, they fail to generate coherent limitations.
The best Llama2-FT-Distilled model also generates a few speculative
limitations and has difficulty in generating highly relevant limitations for
every paper.
Open-ended generation of LLMs. As illustrated in Table 12, around 20% of the
limitations generated with the aid of LLMs are susceptible to issues such as
hallucination, repetitions, and grammatically incorrect sentence structures.
Controlling these issues would lead to the generation of more faithful limitations.
Controllability. Recent advancements in controllable generation emphasize the
ability to address the specific intentions of users [23]. However, we note
that LLMs occasionally struggle to consistently generate specific structured
limitations. At times, the models produce limitations as bullet points, while
in other instances, they generate them as paragraphs.
## 7 Limitations
In this work, our focus is solely on generating limitations from the textual
content of research papers. Nevertheless, incorporating non-textual elements
like tables and images could enhance the generation of more contextually
relevant limitations. Due to compute constraints, we experimented with the
Llama 2 7B and Cerebras-GPT 1.3B models. The automated evaluation metrics used
are insufficient for the complexity of this task, so we
supplemented our evaluation process with manual evaluations. Since the LimGen
dataset is sourced from the ACL anthology, it may exhibit a bias towards
generating limitations specific to computational linguistics; as a result, its
applicability to other domains like physics, chemistry, and mathematical
research studies could be limited. We list the limitations generated by our
best model for this paper in Table 13. We observe that, despite the broad
coverage of the limitations, the model generates a few sub-standard
limitations.
Table 13: Limitations generated by our best-performing model for our paper
(best viewed in color). Magenta indicates the appropriate limitations, whereas
Blue represents sub-standard limitations.

* Hallucination: The model may generate limitations that are not present in the input text, which can lead to inaccurate or irrelevant information in the generated limitations.
* Repetition: The model may repeat similar phrases or sentences multiple times in the generated limitations, which can result in a lack of diversity in the generated limitations.
* Limited training data: The model may struggle to generate limitations that are not present in the training data, which can affect the quality of the output.
* Inadequate evaluation metrics: The authors do not use adequate evaluation metrics to assess the performance of their approach, which can affect the validity of their results.
* Lack of consideration of the research question: The authors do not consider the research question when generating limitations, which can lead to inaccurate or irrelevant limitations.
* Lack of consideration of the methodology: The authors do not consider the methodology used in the research paper when generating limitations, which can lead to inaccurate or irrelevant limitations.
## 8 Conclusions
In this paper, we introduce the novel task of Suggestive Limitation Generation
(SLG) for research papers, aiming to provide reviewers with potential
limitations of the underlying work, thereby assisting in the review process.
We compile a dataset of 4068 research papers and corresponding limitations
from the ACL anthology. We propose an LLM-based baseline for SLG tasks and
conduct several ablation studies. Subsequently, we perform a thorough
evaluation of the proposed models with automatic, LLM-based, and human
evaluation schemes. Moving forward, our plans involve incorporating images and
tabular content in addition to text for the SLG task.
## 9 Ethics Statement
In the creation of the LimGen dataset, we did not collect any personal data.
Instead, we rely on the publicly accessible dataset from the ACL anthology.
This research did not involve any user data. In the release of LimGen, all
personal information pertaining to the authors of the research papers will be
excluded.
## References
* [1] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
* [2] Cachola, I., Lo, K., Cohan, A., Weld, D.S.: Tldr: Extreme summarization of scientific documents. In: Findings of the Association for Computational Linguistics: EMNLP 2020. pp. 4766–4777 (2020)
* [3] Cohan, A., Dernoncourt, F., Kim, D.S., Bui, T., Kim, S., Chang, W., Goharian, N.: A discourse-aware attention model for abstractive summarization of long documents. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). pp. 615–621 (2018)
* [4] Cohan, A., Feigenblat, G., Ghosal, T., Shmueli-Scheuer, M.: Overview of the first shared task on multi perspective scientific document summarization (mup). In: Proceedings of the Third Workshop on Scholarly Document Processing. pp. 263–267 (2022)
* [5] Collins, E., Augenstein, I., Riedel, S.: A supervised approach to extractive summarisation of scientific papers. In: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). pp. 195–205 (2017)
* [6] Dao, T., Fu, D., Ermon, S., Rudra, A., Ré, C.: Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems 35, 16344–16359 (2022)
* [7] Delort, J.Y., Alfonseca, E.: Dualsum: a topic-model based approach for update summarization. In: Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. pp. 214–223 (2012)
* [8] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). pp. 4171–4186 (2019)
* [9] Dey, N., Gosal, G., Zhiming, Chen, Khachane, H., Marshall, W., Pathria, R., Tom, M., Hestness, J.: Cerebras-gpt: Open compute-optimal language models trained on the cerebras wafer-scale cluster (2023)
* [10] Hayashi, H., Kryściński, W., McCann, B., Rajani, N., Xiong, C.: What’s new? summarizing contributions in scientific literature. In: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. pp. 1019–1031 (2023)
* [11] Hu, E.J., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al.: Lora: Low-rank adaptation of large language models. In: International Conference on Learning Representations (2021)
* [12] Kumarasinghe, D., de Silva, N.: Automatic generation of abstracts for research papers. In: Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022). pp. 221–229 (2022)
* [13] Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C.H., Gonzalez, J., Zhang, H., Stoica, I.: Efficient memory management for large language model serving with pagedattention. In: Proceedings of the 29th Symposium on Operating Systems Principles. pp. 611–626 (2023)
* [14] Lev, G., Shmueli-Scheuer, M., Herzig, J., Jerbi, A., Konopnicki, D.: Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pp. 2125–2131 (2019)
* [15] Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L.: Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. pp. 7871–7880 (2020)
* [16] Lin, C.Y.: ROUGE: A package for automatic evaluation of summaries. In: Text Summarization Branches Out. pp. 74–81. Association for Computational Linguistics, Barcelona, Spain (Jul 2004)
* [17] Liu, M.H., Yen, A.Z., Huang, H.H., Chen, H.H.: Contributionsum: Generating disentangled contributions for scientific papers. In: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. pp. 5351–5355 (2023)
* [18] Mao, Y., Zhong, M., Han, J.: CiteSum: Citation text-guided scientific extreme summarization and domain adaptation with limited supervision. In: Goldberg, Y., Kozareva, Z., Zhang, Y. (eds.) Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. pp. 10922–10935. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (Dec 2022)
* [19] Meng, R., Thaker, K., Zhang, L., Dong, Y., Yuan, X., Wang, T., He, D.: Bringing structure into summaries: a faceted summarization dataset for long scientific documents. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). pp. 1080–1089 (2021)
* [20] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21(1), 5485–5551 (2020)
* [21] Reimers, N., Gurevych, I.: Sentence-bert: Sentence embeddings using siamese bert-networks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 3982–3992 (2019)
* [22] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al.: Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023)
* [23] Urlana, A., Mishra, P., Roy, T., Mishra, R.: Controllable text summarization: Unraveling challenges, approaches, and prospects–a survey. arXiv preprint arXiv:2311.09212 (2023)
* [24] Urlana, A., Surange, N., Shrivastava, M.: Ltrc@ mup 2022: Multi-perspective scientific document summarization using pre-trained generation models. In: Proceedings of the Third Workshop on Scholarly Document Processing. pp. 279–284 (2022)
* [25] Yasunaga, M., Kasai, J., Zhang, R., Fabbri, A.R., Li, I., Friedman, D., Radev, D.R.: Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33, pp. 7386–7393 (2019)
* [26] Zhang, J., Zhao, Y., Saleh, M., Liu, P.: Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In: International Conference on Machine Learning. pp. 11328–11339. PMLR (2020)
* [27] Zhang, T., Kishore, V., Wu, F., Weinberger, K.Q., Artzi, Y.: Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675 (2019)
# Unisolvence of random Kansa collocation by Thin-Plate Splines for the
Poisson equation
F. Dell’Accio (University of Calabria, Rende (CS), Italy), A. Sommariva, M. Vianello
###### Abstract
Existence of sufficient conditions for unisolvence of Kansa unsymmetric
collocation for PDEs is still an open problem. In this paper we make a first
step in this direction, proving that unsymmetric collocation matrices with
Thin-Plate Splines for the 2D Poisson equation are almost surely nonsingular,
when the discretization points are chosen randomly on domains with analytic
boundary.
## 1 Introduction
Kansa unsymmetric collocation, originally proposed in the mid ’80s [11], has
become over the years a popular meshless method for the discretization of
boundary value problems for PDEs. Despite its wide and successful adoption for
the numerical solution of a variety of physical and engineering problems (cf.
e.g. [4] with the references therein), a sound theoretical foundation
concerning unisolvence of the corresponding linear systems is still missing.
Indeed, it was shown by Hon and Schaback [10] that there exist point
configurations that lead to singularity of the collocation matrices, though
these are very special and “rare”cases. For this reason greedy and other
approaches have been developed to overcome the theoretical problem and ensure
invertibility, cf. e.g. [14, 18]. On the other hand, in the textbook [8] one
can read : “Since the numerical experiments by Hon and Schaback show that
Kansa’s method cannot be well-posed for arbitrary center locations, it is now
an open question to find sufficient conditions on the center locations that
guarantee invertibility of the Kansa matrix”, and the situation does not seem
to have changed so far.
In this paper we make a first step in this direction, proving that unsymmetric
collocation matrices with Thin-Plate Splines (without polynomial addition) for
the 2D Poisson equation are almost surely nonsingular, when the discretization
points are chosen randomly on domains with analytic boundary. Though TPS are
not the most adopted option for Kansa collocation, they have been often used
in the meshless literature, cf. e.g. [4, 5, 20] with the references therein.
One of their most relevant features is that they are scale invariant, thus
avoiding the delicate matter of the scaling choice with scale dependent RBF,
which is still an active research topic, cf. e.g. [2, 13]. On the other hand,
the fact that TPS without polynomial addition can guarantee unisolvence in the
interpolation framework has been recently recognized experimentally in [17]
and theoretically in [1, 6].
As we shall see, one of the key aspects is that Thin-Plate Splines
$\phi(\|P-A\|_{2})$, which correspond to the radial functions
$\phi(r)=r^{2\nu}\log(r)\;,\;\;\nu\in\mathbb{N}\;,$ (1)
are real analytic functions off their center $A$, due to analyticity of the
univariate functions $\log(\cdot)$ and $\sqrt{\cdot}$ in $\mathbb{R}^{+}$.
Analyticity, together with the presence of a singularity at the center, will be
the key ingredient of our unisolvence result by random collocation.
## 2 Unisolvence of random Kansa collocation
Consider the Poisson equation with Dirichlet boundary conditions (cf. e.g.
[7])
$\left\{\begin{array}{ll}\Delta u(P)=f(P)\;, & P\in\Omega\\ u(P)=g(P)\;, & P\in\partial\Omega=\gamma([a,b])\;,\end{array}\right.$ (2)
where we assume that $\Omega\subset\mathbb{R}^{2}$ is a domain with analytic
boundary (a bounded connected open set whose boundary is an analytic curve),
namely a curve $\gamma:[a,b]\to\mathbb{R}^{2}\;,\;\gamma(a)=\gamma(b)$, that
is analytic and regular (i.e. $\gamma^{\prime}(t)\neq(0,0)$ for every
$t\in[a,b]$).
In Kansa collocation (see e.g. [8, 10, 11, 14, 18, 19]) one determines a
function
$u_{N}(P)=\sum_{j=1}^{n}{c_{j}\,\phi_{j}(P)}+\sum_{k=1}^{m}{d_{k}\,\psi_{k}(P)}\;,\;\;N=n+m\;,$
(3)
where
$\phi_{j}(P)=\phi(\|P-P_{j}\|_{2})\;,\;\;\{P_{1},\dots,P_{n}\}\subset\Omega\;,$ (4)
$\psi_{k}(P)=\phi(\|P-Q_{k}\|_{2})\;,\;\;\{Q_{1},\dots,Q_{m}\}\subset\partial\Omega\;,$ (5)
such that
$\left\{\begin{array}{ll}\Delta u_{N}(P_{i})=f(P_{i})\;, & i=1,\ldots,n\\ u_{N}(Q_{h})=g(Q_{h})\;, & h=1,\ldots,m\;.\end{array}\right.$ (6)
The following facts will be used below. Defining $\phi_{A}(P)=\phi(\|P-A\|)$,
we have $\phi_{A}(B)=\phi_{B}(A)$ and $\Delta\phi_{A}(B)=\Delta\phi_{B}(A)$.
In fact, the Laplacian in polar coordinates centered at $A$ (cf. e.g. [7,
Ch.2]) is the radial function
$\Delta\phi_{A}=\frac{\partial^{2}\phi}{\partial r^{2}}+\frac{1}{r}\frac{\partial\phi}{\partial r}=4\nu r^{2(\nu-1)}(\nu\log(r)+1)\;.$ (7)
Moreover, $\phi_{A}(A)=0$ and $\Delta\phi_{A}(A)=0$ for $\nu\geq 2$, since
$\Delta\phi\to 0$ as $r\to 0$.
Kansa collocation can be rewritten in matrix form as
$\left(\begin{array}{cc}\Delta\Phi&\Delta\Psi\\ \Phi&\Psi\end{array}\right)\left(\begin{array}{c}\mathbf{c}\\ \mathbf{d}\end{array}\right)=\left(\begin{array}{c}\mathbf{f}\\ \mathbf{g}\end{array}\right)$ (8)
where the block matrix is
$K_{N}=K_{N}(\{P_{i}\},\{Q_{h}\})=\left(\begin{array}{cc}\Delta\Phi&\Delta\Psi\\ \Phi&\Psi\end{array}\right)$
$=\left(\begin{array}{cccccc}0&\cdots&\Delta\phi_{n}(P_{1})&\Delta\psi_{1}(P_{1})&\cdots&\Delta\psi_{m}(P_{1})\\ \vdots&\ddots&\vdots&\vdots&&\vdots\\ \Delta\phi_{1}(P_{n})&\cdots&0&\Delta\psi_{1}(P_{n})&\cdots&\Delta\psi_{m}(P_{n})\\ \phi_{1}(Q_{1})&\cdots&\phi_{n}(Q_{1})&0&\cdots&\psi_{m}(Q_{1})\\ \vdots&&\vdots&\vdots&\ddots&\vdots\\ \phi_{1}(Q_{m})&\cdots&\phi_{n}(Q_{m})&\psi_{1}(Q_{m})&\cdots&0\end{array}\right)$
and $\mathbf{f}=\{f(P_{i})\}_{i=1,\ldots,n}$, $\mathbf{g}=\{g(Q_{h})\}_{h=1,\ldots,m}$.
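For illustration, a minimal sketch of assembling $K_{N}$ for random points; the disk domain and point counts are placeholders chosen for the example:

```python
import numpy as np

nu = 2  # TPS exponent in (1); Delta(phi) vanishes at the center for nu >= 2

def phi(r):
    # TPS radial function r^{2 nu} log r, extended by 0 at r = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        v = r ** (2 * nu) * np.log(r)
    return np.where(r > 0, v, 0.0)

def lap_phi(r):
    # Laplacian (7): 4 nu r^{2(nu-1)} (nu log r + 1), extended by 0 at r = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        v = 4 * nu * r ** (2 * (nu - 1)) * (nu * np.log(r) + 1.0)
    return np.where(r > 0, v, 0.0)

rng = np.random.default_rng(0)
n, m = 30, 20
# Toy Omega: the unit disk; uniform interior and boundary points.
w, a = rng.random(n), 2 * np.pi * rng.random(n)
P = np.column_stack((np.sqrt(w) * np.cos(a), np.sqrt(w) * np.sin(a)))
t = 2 * np.pi * rng.random(m)
Q = np.column_stack((np.cos(t), np.sin(t)))

centers = np.vstack((P, Q))
RP = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)
RQ = np.linalg.norm(Q[:, None, :] - centers[None, :, :], axis=2)
K = np.vstack((lap_phi(RP), phi(RQ)))  # Kansa matrix of (8)
print(K.shape, np.linalg.matrix_rank(K))  # a.s. full rank N = n + m
```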
We can now state and prove our main result.
###### Theorem 1
Let $K_{N}$ be the TPS Kansa collocation matrix defined above, with $N=n+m\geq
2$, where $\{P_{i}\}$ is a sequence of independent uniformly distributed
random points in $\Omega$, and $\{Q_{h}\}$ a sequence of independent
uniformly distributed points on $\partial\Omega$. Namely,
$\{Q_{h}\}=\{\gamma(t_{h})\}$ with $\{t_{h}\}$ a sequence of independent
identically distributed random abscissas in $(a,b)$ with respect to the
arclength density $\|\gamma^{\prime}(t)\|_{2}/L$, $L=\mbox{length}(\gamma([a,b]))$.
Then for every $N\geq 2$ the matrix $K_{N}$ is a.s. (almost surely)
nonsingular.
Proof. The proof proceeds by complete induction on $N$. For the induction
base, we prove that $\mbox{det}(K_{N})$ is a.s. nonzero for $N=2$, that is for
$n=2$ and $m=0$, or $n=0$ and $m=2$, or $n=1$ and $m=1$. In the first case,
$\mbox{det}(K_{2})=-\Delta\phi_{2}(P_{1})\Delta\phi_{1}(P_{2})=-(\Delta\phi_{1}(P_{2}))^{2}$
$=-16\nu^{2}\|P_{2}-P_{1}\|_{2}^{4\nu-4}\left(\nu\log(\|P_{2}-P_{1}\|_{2})+1\right)^{2}$
which vanishes iff $P_{2}=P_{1}$ (an event with null probability) or $P_{2}$
falls on (the intersection with $\Omega$ of) the curve
$\nu\log(\|P-P_{1}\|_{2})+1=0$, that is on the circle
$\|P-P_{1}\|_{2}^{2}=\exp(-2/\nu)\;.$
But this event has null probability, since any algebraic curve is a null set
in $\mathbb{R}^{2}$.
In the second case,
$\mbox{det}(K_{2})=-\psi_{2}(Q_{1})\psi_{1}(Q_{2})=-\psi_{1}^{2}(Q_{2})=-\psi_{1}^{2}(\gamma(t_{2}))\;.$
Now, given $Q_{1}=\gamma(t_{1})$, the function
$\lambda(t)=\psi_{1}^{2}(\gamma(t))$ is analytic in $(a,t_{1})$ and in
$(t_{1},b)$, and not identically zero there (it is strictly positive for $t$
close to $t_{1}$, $t\neq t_{1}$). Then $\psi_{1}^{2}(\gamma(t_{2}))$ is zero
iff $t_{2}=t_{1}$ (an event that has null probability), or $t_{2}$ falls in
the zero set of $\lambda$ in $(a,t_{1})$ or $(t_{1},b)$. Again this event has
null probability, since the zero set of a nonzero univariate analytic function
on an open interval is a null set (cf. [12, 15]).
As for the third case, assume that $Q_{1}$ is chosen on the boundary (randomly
or not) and that $P_{1}$ is chosen randomly in the interior. Since
$\mbox{det}(K_{2})=-\phi_{1}(Q_{1})\Delta\psi_{1}(P_{1})$
$=-4\nu\phi_{1}(Q_{1})\|P_{1}-Q_{1}\|_{2}^{2\nu-2}\left(\nu\log(\|P_{1}-Q_{1}\|_{2})+1\right)\;,$
and $\phi_{1}(Q_{1})\neq 0$ being $P_{1}\neq Q_{1}$, the determinant vanishes
if and only if $P_{1}$ falls on (the intersection with $\Omega$ of) the curve
$\nu\log(\|P-Q_{1}\|_{2})+1=0$, that is on the circle
$\|P-Q_{1}\|_{2}^{2}=\exp(-2/\nu)$
and again this event has null probability.
For the inductive step, we consider separately the case where a boundary point
is added, for which we define the matrix
$U(P)=\left(\begin{array}{ccccccc}0&\cdots&\Delta\phi_{n}(P_{1})&\Delta\psi_{1}(P_{1})&\cdots&\Delta\psi_{m}(P_{1})&\Delta\phi_{1}(P)\\ \vdots&\ddots&\vdots&\vdots&&\vdots&\vdots\\ \Delta\phi_{1}(P_{n})&\cdots&0&\Delta\psi_{1}(P_{n})&\cdots&\Delta\psi_{m}(P_{n})&\Delta\phi_{n}(P)\\ \phi_{1}(Q_{1})&\cdots&\phi_{n}(Q_{1})&0&\cdots&\psi_{m}(Q_{1})&\psi_{1}(P)\\ \vdots&&\vdots&\vdots&\ddots&\vdots&\vdots\\ \phi_{1}(Q_{m})&\cdots&\phi_{n}(Q_{m})&\psi_{1}(Q_{m})&\cdots&0&\psi_{m}(P)\\ \phi_{1}(P)&\cdots&\phi_{n}(P)&\psi_{1}(P)&\cdots&\psi_{m}(P)&0\end{array}\right)$
Observe that in this case $K_{N+1}=U(Q_{m+1})$. Indeed,
$\psi_{k}(Q_{h})=\psi_{h}(Q_{k})$ and
$\Delta\phi_{i}(Q_{m+1})=\Delta\psi_{m+1}(P_{i})$.
Differently, if an interior point is added, we define the matrix
$V(P)=\left(\begin{array}{ccccccc}0&\cdots&\Delta\phi_{n}(P_{1})&\Delta\phi_{1}(P)&\Delta\psi_{1}(P_{1})&\cdots&\Delta\psi_{m}(P_{1})\\ \vdots&\ddots&\vdots&\vdots&\vdots&&\vdots\\ \Delta\phi_{1}(P_{n})&\cdots&0&\Delta\phi_{n}(P)&\Delta\psi_{1}(P_{n})&\cdots&\Delta\psi_{m}(P_{n})\\ \Delta\phi_{1}(P)&\cdots&\Delta\phi_{n}(P)&0&\Delta\psi_{1}(P)&\cdots&\Delta\psi_{m}(P)\\ \phi_{1}(Q_{1})&\cdots&\phi_{n}(Q_{1})&\psi_{1}(P)&0&\cdots&\psi_{m}(Q_{1})\\ \vdots&&\vdots&\vdots&\vdots&\ddots&\vdots\\ \phi_{1}(Q_{m})&\cdots&\phi_{n}(Q_{m})&\psi_{m}(P)&\psi_{1}(Q_{m})&\cdots&0\end{array}\right)$
Observe that in this case $K_{N+1}=V(P_{n+1})$ since
$\psi_{k}(P_{n+1})=\phi_{n+1}(Q_{k})$ and
$\Delta\phi_{j}(P_{i})=\Delta\phi_{i}(P_{j})$.
Concerning the determinants, applying the Laplace expansion along the last
row of $U(P)$, we see that for every $\ell$, $1\leq\ell\leq m$, we get the
representation
$F(P)=\mbox{det}(U(P))=\delta_{N-1}\psi_{\ell}^{2}(P)+A(P)\psi_{\ell}(P)+B(P)$
(9)
where
$|\delta_{N-1}|=|\mbox{det}(K_{N-1}(\\{P_{i}\\},\\{Q_{h}\\}_{h\neq\ell}))|$
$A\in\mbox{span}\\{\phi_{j},\Delta\phi_{j},\psi_{k}\,;\,1\leq j\leq
n\,,\,1\leq k\leq m\,,\,k\neq\ell\\}$
$B\in\mbox{span}\\{\phi_{i}\Delta\phi_{j},\psi_{k}\phi_{i},\psi_{k}\Delta\phi_{i},\psi_{k}\psi_{h}\,;\,1\leq
i,j\leq n\,,\,1\leq k,h\leq m\,,\,k,h\neq\ell\\}\;.$
Similarly, expanding $\mbox{det}(V(P))$ along the $(n+1)$-th row we have
$G(P)=\mbox{det}(V(P))=-\mbox{det}(K_{N-1})(\Delta\phi_{n}(P))^{2}+C(P)\Delta\phi_{n}(P)+D(P)$
(10)
where
$C\in\mbox{span}\\{\Delta\phi_{j},\psi_{k},\Delta\psi_{k}\,;\,1\leq j\leq
n-1\,,\,1\leq k\leq m\\}$
$D\in\mbox{span}\\{\Delta\phi_{i}\Delta\phi_{j},\Delta\phi_{i}\Delta\psi_{h},\psi_{k}\Delta\phi_{i},\psi_{k}\Delta\psi_{h}\,;\,1\leq
i,j\leq n-1\,,\,1\leq k,h\leq m\\}\;.$
First, we prove that $G$ is not identically zero in $\Omega$ if
$\mbox{det}(K_{N-1})\neq 0$ (the latter a.s. holds by inductive hypothesis).
Let $P(t)=P_{n}+t(1,0)$, $t\in\mathbb{R}$, and $r(t)=\|P(t)-P_{n}\|_{2}=|t|$.
If $G\equiv 0$ then $G(P(t))\equiv 0$ in a neighborhood of $t=0$. Then we would
locally have
$u^{2}(t)=c(t)u(t)+d(t)\;,\;\;u(t)=\Delta\phi_{n}(P(t))\;,$ (11)
where $c(t)=C(P(t))/\mbox{det}(K_{N-1})$ and
$d(t)=D(P(t))/\mbox{det}(K_{N-1})$. Notice that both $c$ and $d$ are analytic
in a neighborhood of $t=0$, since $C$ and $D$ are analytic in a neighborhood
of $P_{n}$. By (11) and (7) we get
$u(t)=4\nu t^{2(\nu-1)}\left(\nu\log(|t|)+1\right)\;.$ (12)
Clearly $c$ cannot be identically zero there, otherwise $u^{2}$ would be
analytic at $t=0$ and thus would have an algebraic order of infinitesimal as
$t\to 0$, whereas by (12) we have $u^{2}(t)\sim
16\nu^{4}t^{4(\nu-1)}\log^{2}(|t|)$. Hence taking the Maclaurin expansion of
$c$ we get $c(t)\sim c_{s}t^{s}$ as $t\to 0$ for some $s\geq 0$, the order of
the first nonvanishing derivative at $t=0$. Now, $u^{2}(t)\sim
16\nu^{4}t^{4(\nu-1)}\log^{2}(|t|)$, whereas by $u^{2}\equiv cu+d$ we would
have $u^{2}(t)\sim 4\nu^{2}c_{s}t^{s+2(\nu-1)}\log(|t|)+d_{p}t^{p}$, where
either $d(0)\neq 0$ and $p=0$, or $d(0)=0$ and $p>0$ (the order of the first
nonvanishing derivative at $t=0$). Then we get a contradiction, since $u^{2}$
cannot have two distinct limits or orders of infinitesimal at the same point.
Moreover, $G$ is clearly continuous in $\Omega$ and analytic in
$\Omega\setminus\\{P_{1},\dots,P_{n}\\}$, since all the functions involved in
its definition (10) are analytic up to their own center. Consequently, if
$\mbox{det}(K_{N-1})\neq 0$ by continuity $G$ is not identically zero also in
$\Omega\setminus\\{P_{1},\dots,P_{n}\\}$.
Then, $\mbox{det}(K_{N+1})=\mbox{det}(V(P_{n+1}))=G(P_{n+1})$ is a.s. nonzero,
since the zero set of a not identically zero real analytic function on an open
connected set in $\mathbb{R}^{d}$ is a null set (cf. [15] for an elementary
proof). More precisely, denoting by $Z_{G}$ the zero set of $G$ in $\Omega$,
we have that
$Z_{G}=(Z_{G}\cap\\{P_{1},\dots,P_{n}\\})\cup(Z_{G}\cap(\Omega\setminus\\{P_{1},\dots,P_{n}\\}))\;.$
Hence $Z_{G}$ is a null set if $G\not\equiv 0$, because the first intersection
is a finite set, and the second is the zero set of a not identically zero real
analytic function. Considering the probability of the corresponding events and
recalling that $\mbox{det}(K_{N-1})\neq 0$ (which a.s. holds) implies
$G\not\equiv 0$, we can then write
$\mbox{prob}\\{\mbox{det}(K_{N+1})=0\\}=\mbox{prob}\\{G(P_{n+1})=0\\}$
$=\mbox{prob}\\{G\equiv 0\\}+\mbox{prob}\\{G\not\equiv 0\;\&\;P_{n+1}\in
Z_{G}\\}=0+0=0\;,$
and this branch of the inductive step is completed.
We turn now to the branch of the inductive step where a boundary point is
added. In this case we consider the function $F$ in (9) restricted to the
boundary, that is $F(P(t))$ with $P(t)=\gamma(t)$, $t\in(a,b)$, which for
every fixed $\ell\in\\{1,\dots,m\\}$ has the representation
$F(\gamma(t))=\mbox{det}(U(\gamma(t)))=\delta_{N-1}v^{2}(t)+A(\gamma(t))v(t)+B(\gamma(t))$
where
$v(t)=\psi_{\ell}(\gamma(t))=r_{\ell}^{2\nu}(t)\log(r_{\ell}(t))\;,\;\;r_{\ell}(t)=\|\gamma(t)-Q_{\ell}\|_{2}$
(13)
with $Q_{\ell}=\gamma(t_{\ell})$, $\;t_{\ell}\in(a,b)$. We claim that if
$\delta_{N-1}\neq 0$ (which a.s. holds by inductive hypothesis),
$F\circ\gamma$ cannot be identically zero in either of the two connected
components of $(a,b)\setminus\\{t_{1},\dots,t_{m}\\}$ (i.e., the subintervals)
having $t_{\ell}$ as an endpoint. Otherwise, we would have in a left or in a
right neighborhood of $t_{\ell}$
$v^{2}(t)=\alpha(t)v(t)+\beta(t)\;,$ (14)
where $\alpha(t)=A(\gamma(t))/\delta_{N-1}$ and
$\beta(t)=B(\gamma(t))/\delta_{N-1}$ are both analytic in a full neighborhood
of $t_{\ell}$. Notice that, since $\gamma^{\prime}(t_{\ell})\neq(0,0)$ (the
curve is regular),
$r_{\ell}(t)\sim\|\gamma^{\prime}(t_{\ell})\|_{2}|t-t_{\ell}|$ which by (13)
gives $v(t)\sim\|\gamma^{\prime}(t_{\ell})\|_{2}^{2\nu}(t-t_{\ell})^{2\nu}\log(|t-t_{\ell}|)$ and
$v^{2}(t)\sim\|\gamma^{\prime}(t_{\ell})\|_{2}^{4\nu}(t-t_{\ell})^{4\nu}\log^{2}(|t-t_{\ell}|)$
as $t\to t_{\ell}$. Now $\alpha$ cannot be identically zero in any left or
right neighborhood, otherwise $v^{2}\equiv\beta$ there and would have an
algebraic order of infinitesimal at $t_{\ell}$. Hence taking the Taylor
expansion of $\alpha$ we get $\alpha(t)\sim\alpha_{s}(t-t_{\ell})^{s}$ as
$t\to t_{\ell}$ for some $s\geq 0$, the order of the first nonvanishing
derivative at $t=t_{\ell}$. On the other hand, by $v^{2}\equiv\alpha v+\beta$
locally, we would have
$v^{2}(t)\sim\|\gamma^{\prime}(t_{\ell})\|_{2}^{2\nu}\alpha_{s}(t-t_{\ell})^{s+2\nu}\log(|t-t_{\ell}|)+\beta_{p}(t-t_{\ell})^{p}$,
where either $\beta(t_{\ell})\neq 0$ and $p=0$, or $\beta(t_{\ell})=0$ and
$p>0$ (the order of the first nonvanishing derivative at $t=t_{\ell}$). Again
we get a contradiction, since $v^{2}$ cannot have two distinct limits or
orders of infinitesimal at the same point.
The result is that $F\circ\gamma$ is a.s. not identically zero in any
connected component of $(a,b)\setminus\\{t_{1},\dots,t_{m}\\}$. Then,
$\mbox{det}(K_{N+1})=\mbox{det}(U(Q_{m+1}))=F(\gamma(t_{m+1}))$ is a.s.
nonzero. In fact, observe that $F\circ\gamma$ is analytic in
$(a,b)\setminus\\{t_{1},\dots,t_{m}\\}$, since $F$ is analytic in
$\mathbb{R}^{2}\setminus(\\{Q_{1},\dots,Q_{m}\\}\cup\\{P_{1},\dots,P_{n}\\})$.
Moreover, denoting by $Z_{F\circ\gamma}$ the zero set of $F\circ\gamma$ in
$(a,b)$, we have that
$Z_{F\circ\gamma}=(Z_{F\circ\gamma}\cap\\{t_{1},\dots,t_{m}\\})\cup(Z_{F\circ\gamma}\cap((a,b)\setminus\\{t_{1},\dots,t_{m}\\}))\;.$
Hence $Z_{F\circ\gamma}$ is a null set if ${F\circ\gamma}\not\equiv 0$,
because the first intersection is a finite set, and the second is the
componentwise finite union of the zero sets of a not identically zero real
analytic function on each connected component. Considering the probability of
the corresponding events and recalling that $\mbox{det}(K_{N-1})\neq 0$ (which
a.s. holds) implies $F\circ\gamma\not\equiv 0$, we can then write
$\mbox{prob}\\{\mbox{det}(K_{N+1})=0\\}=\mbox{prob}\\{F(Q_{m+1})=0\\}$
$=\mbox{prob}\\{F\circ\gamma\equiv 0\\}+\mbox{prob}\\{F\circ\gamma\not\equiv
0\;\&\;t_{m+1}\in Z_{F\circ\gamma}\\}=0+0=0\;,$
and also the boundary branch of the inductive step is completed. $\square$
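To make the statement concrete, the following minimal numerical sketch assembles the Kansa collocation matrix $K_{N}$ for $\phi(r)=r^{2\nu}\log(r)$ on the unit disk, with random interior points $P_{i}$ and random boundary points $Q_{h}$, and checks its numerical nonsingularity. The choice of domain, $\nu=2$, and the point counts are illustrative assumptions, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 2           # spline exponent (assumed >= 2, so Delta(phi) vanishes at the center)
n, m = 20, 10    # numbers of interior and boundary collocation points

def tps(r):      # phi(r) = r^(2 nu) log r, with value 0 at r = 0
    out = np.zeros_like(r); mask = r > 0
    out[mask] = r[mask]**(2*nu)*np.log(r[mask])
    return out

def lap_tps(r):  # Laplacian of phi: 4 nu r^(2 nu - 2) (nu log r + 1)
    out = np.zeros_like(r); mask = r > 0
    out[mask] = 4*nu*r[mask]**(2*nu - 2)*(nu*np.log(r[mask]) + 1)
    return out

# Random interior points of the unit disk (rejection sampling) ...
P = rng.uniform(-1, 1, (8*n, 2))
P = P[np.hypot(P[:, 0], P[:, 1]) < 1][:n]
# ... and boundary points Q = gamma(t), uniform in arclength on the unit circle.
t = rng.uniform(0, 2*np.pi, m)
Q = np.column_stack([np.cos(t), np.sin(t)])

X = np.vstack([P, Q])                          # collocation points = centers
R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
K = np.vstack([lap_tps(R[:n, :]),              # PDE rows: Laplacian at the P_i
               tps(R[n:, :])])                 # boundary rows: values at the Q_h
sign, logdet = np.linalg.slogdet(K)
print("N =", n + m, " rank =", np.linalg.matrix_rank(K), " log|det| =", logdet)
```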
### 2.1 Remarks on possible extensions
The result of Theorem 1 is a first step towards a theory of Kansa collocation
unisolvence, and could be extended in several directions within the random
framework. The first extension comes immediately from the fact that a null set
has also measure zero for any continuous measure with density (that is,
absolutely continuous with respect to the Lebesgue measure). We can state
indeed the following
###### Theorem 2
The assertion of Theorem 1 holds true if the points $\\{P_{i}\\}$ are
independent and identically distributed with respect to any continuous
probability measure with density on $\Omega$, say $\sigma\in L^{1}_{+}(\Omega)$,
and the abscissas $\\{t_{h}\\}$ are independent and identically distributed
with respect to any continuous probability measure with density on $(a,b)$,
say $w\in L^{1}_{+}(a,b)$.
This extension could be interesting whenever it is known that the solution has
steep gradients or other regions where it is useful to increase the
discretization density. Concerning the implementation of random sampling with
respect to continuous probability densities, we recall the well-known
“acceptance-rejection method”, cf. e.g. [3, 9, 16] with the references
therein.
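As an illustration of the sampling step behind Theorem 2, the sketch below implements acceptance-rejection for a density $w$ on $(a,b)$ with a uniform proposal; the density and the bound $M$ are hypothetical examples chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.0, 1.0

def w(t):                 # example density on (0, 1), integrating to 1
    return 2.0*t

M = 2.0                   # bound such that w(t) <= M * (uniform density on (a, b))

def sample_w(size):
    out = []
    while len(out) < size:
        t = rng.uniform(a, b)                 # proposal draw
        if rng.uniform(0.0, 1.0) <= w(t)/M:   # accept with probability w(t)/M
            out.append(t)
    return np.array(out)

t_h = sample_w(1000)      # abscissas for boundary points Q_h = gamma(t_h)
```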
More difficult, but worthy of further investigation, are:
* •
extension to $\Omega\subset\mathbb{R}^{d}$, $d\geq 3$;
* •
extension to other RBFs that are analytic up to the center, e.g., Radial Powers;
* •
extension to piecewise analytic boundaries;
* •
extension to other differential operators and/or boundary conditions.
The latter in particular could be challenging, since the operators involved in
the equation and in the boundary conditions may not be radial.
Acknowledgements.
Work partially supported by the DOR funds of the University of Padova, and by
the INdAM-GNCS 2024 Projects “Kernel and polynomial methods for approximation
and integration: theory and application software”.
This research has been accomplished within the RITA “Research ITalian network
on Approximation” and the SIMAI Activity Group ANA&A, and the UMI Group TAA
“Approximation Theory and Applications”.
## References
* [1] L.P. Bos, A. Sommariva, M. Vianello, A note on polynomial-free unisolvence of polyharmonic splines at random points. https://arxiv.org/abs/2312.13710
* [2] R. Cavoretto, A. De Rossi, Adaptive procedures for meshfree RBF unsymmetric and symmetric collocation methods, Appl. Math. Comput. 382 (2020), 125354.
* [3] A. Chalkis, C. Katsamaki, J. Tonelli-Cueto, On the Error of Random Sampling: Uniformly Distributed Random Points on Parametric Curves, ISSAC ’22: Proceedings of the 2022 Intern. Symp. on Symb. and Alg. Comput., July 2022, pp. 273–282. https://arxiv.org/pdf/2203.02832.pdf
* [4] W. Chen, Z.-J. Fu, C.S. Chen, Different Formulations of the Kansa Method: Domain Discretization, in W. Chen et al., Recent Advances in Radial Basis Function Collocation Methods, SpringerBriefs in Applied Sciences and Technology, 2014.
* [5] P.P. Chinchapatnam, K. Djidjeli, P.B. Nair, Unsymmetric and symmetric meshless schemes for the unsteady convection–diffusion equation, Comput. Methods Appl. Mech. Engrg. 195 (2006), 2432–2453.
* [6] F. Dell’Accio, A. Sommariva, M. Vianello, Random sampling and unisolvent interpolation by almost everywhere analytic functions, Appl. Math. Lett. 145 (2023).
* [7] L.C. Evans, Partial Differential Equations, Graduate Studies in Mathematics 19, AMS, 1998.
* [8] G.E. Fasshauer, Meshfree Approximation Methods with Matlab, Interdisciplinary Mathematical Sciences, Vol. 6, World Scientific, 2007.
* [9] B.D. Flury, Acceptance-Rejection Sampling Made Easy, SIAM Rev. 32 (1990), 474–476.
* [10] Y.C. Hon, R. Schaback, On unsymmetric collocation by radial basis functions, Appl. Math. Comput. 119 (2001), 177–186.
* [11] E.J. Kansa, Application of Hardy’s multiquadric interpolation to hydrodynamics, in Proc. 1986 Simul. Conf., Vol. 4, pp. 111–117.
* [12] S.G. Krantz, H.R. Parks, A Primer of Real Analytic Functions, Second Edition, Birkhäuser, Boston, 2002.
* [13] E. Larsson, R. Schaback, Scaling of radial basis functions, IMA Journal of Numerical Analysis (2023), drad035.
* [14] L. Ling, R. Opfer, R. Schaback, Results on meshless collocation techniques, Eng. Anal. Bound. Elem. 30 (2006), 247–253.
* [15] B.S. Mityagin, The Zero Set of a Real Analytic Function, Math. Notes 107 (2020), 529–530.
* [16] N. Nguyen, G. Ökten, The acceptance-rejection method for low-discrepancy sequences, Monte Carlo Methods Appl. 22 (2016), 133–148.
* [17] A. Pasioti, On the Constrained Solution of RBF Surface Approximation, MDPI Mathematics 10 (2022), 2582.
* [18] R. Schaback, H. Wendland, Kernel techniques: From machine learning to meshless methods, Acta Numer. 15 (2006) 543–639.
* [19] H. Wendland, Scattered Data Approximation, Cambridge Monogr. Appl. Comput. Math., vol. 17, Cambridge Univ. Press, Cambridge, 2005.
* [20] M. Zerroukat, K. Djidjeli, A. Charafi, Explicit and implicit meshless methods for linear advection–diffusion-type partial differential equations, Int. J. Numer. Methods Eng. 48 (2000), 19–35.
# Conversion efficiency in Kerr microresonator optical parametric oscillators:
From three modes to many modes
Jordan R. Stone, Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, and National Institute of Standards and Technology, Gaithersburg, MD 20899
Gregory Moille, Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, and National Institute of Standards and Technology, Gaithersburg, MD 20899
Xiyuan Lu, National Institute of Standards and Technology, Gaithersburg, MD 20899, and Institute for Research in Electronics and Applied Physics and Maryland NanoCenter, University of Maryland, College Park, MD 20742, USA
Kartik Srinivasan, Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, and National Institute of Standards and Technology, Gaithersburg, MD 20899
###### Abstract
We study optical parametric oscillations in Kerr-nonlinear microresonators,
revealing an intricate solution space – parameterized by the pump-to-signal
conversion efficiency – that arises from an interplay of nonlinear processes.
Using a three-mode approximation, we derive an efficiency-maximizing relation
between pump power and frequency mismatch. To move beyond a three-mode
approximation, a necessity for geometries such as integrated microring
resonators, we numerically simulate the Lugiato-Lefever Equation that accounts
for the full spectrum of nonlinearly-coupled resonator modes. We observe and
characterize two nonlinear phenomena linked to parametric oscillations in
multi-mode resonators: Mode competition and cross phase modulation-induced
modulation instability. Both processes may impact conversion efficiency.
Finally, we show how to increase the conversion efficiency by tuning the
microresonator loss rates. Our analysis will guide microresonator designs that
aim for high conversion efficiency and output power.
## I Introduction
Integrated photonics offers scalable options for generating, processing, and
routing optical signals within classical and quantum networks Liang and Bowers
(2010); Agrell _et al._ (2016); Sipahigil _et al._ (2016); Elshaari _et
al._ (2020); Jin _et al._ (2021). In general, optical processors apply linear
and/or nonlinear operations to light. A notable case is the optical
microresonator, whose small size and large quality factor ($Q$) work to
intensify circulating light and promote efficient nonlinear interactions
Vahala (2003); Yang _et al._ (2018). Indeed, microresonators host a veritable
zoo of nonlinear eigenstates, including soliton frequency combs Kippenberg
_et al._ (2018), Raman frequency combs Liu _et al._ (2018), Hz-linewidth
lasers based on stimulated Brillouin scattering Loh _et al._ (2015),
$\chi^{(2)}$ and $\chi^{(3)}$-type parametric oscillators Bruch _et al._
(2019); Lu _et al._ (2019), and more for applications in communications,
timekeeping, and sensing Spencer _et al._ (2018); Marin-Palomo _et al._
(2017); Lai _et al._ (2020); Newman _et al._ (2019).
Many of the experiments cited above were motivated by a high demand for
coherent light sources on a chip. One important type of coherent source is the
optical parametric oscillator (OPO), which is often employed to reach
wavelengths not directly accessible by conventional laser gain Vodopyanov _et
al._ (2000); Mieth _et al._ (2014). Optical parametric oscillations occur in
$\chi^{(3)}$-nonlinear media when vacuum fluctuations are amplified by
stimulated four wave mixing (FWM), if the FWM gain exceeds the resonator
losses Boyd (2020). Degenerately-pumped OPOs are a special case in which two
frequency-degenerate pump photons are converted into one higher-frequency
signal photon and one lower-frequency idler photon. In principle, a
degenerately-pumped OPO can generate coherent light within the frequency range
$\textrm{DC}$ to $2\omega_{\rm{p}}$, where $\omega_{\rm{p}}$ is the pump laser
frequency Lu and Srinivasan (2021). Hence, a chip-scale, degenerately-pumped
OPO could offer superior scalability, higher efficiencies, and broader
spectral coverage than alternatives. It would be readily implemented in
miniaturized technologies, from optical clocks in which the OPO could be tuned
to clock-type or cooling-type atomic transitions, to quantum processors in
which the OPO could be tuned to qubit frequencies.
Figure 1: Introduction to the microresonator-based, degenerately-pumped
optical parametric oscillator ($\mu$OPO). (a) Schematic of a microring
resonator coupled to an access waveguide that carries the input and output
fields. (b) Energy diagram for the degenerate four wave mixing (FWM) process
that drives parametric oscillation. (c) Depictions of mode spectra, nonlinear
couplings, and frequency shifts in both a three-mode approximation (TMA) and
multi-moded model. Dashed lines correspond to zero frequency mismatch. Red
arrows indicate mode frequency shifts induced by Kerr nonlinearity. The TMA
considers FWM between only the pump, signal, and idler modes, while multi-
moded models account for the nonlinear couplings (orange, hollow arrows)
between all mode sets conducive to FWM.
Recently, several experiments have been reported that advance the
microresonator-based, degenerately-pumped OPO ($\mu$OPO) and make real headway
towards a chip-scale, wavelength-by-design light source. Achievements include
sub-milliwatt oscillation thresholds Lu _et al._ (2019), octave-spanning and
tunable spectra Sayson _et al._ (2019); Tang _et al._ (2020), visible-light
generation spanning red to green Lu _et al._ (2020); Domeneguetti _et al._
(2021), and a $\mu$OPO that uses a 2D photonic crystal cavity Marty _et al._
(2021). Nonetheless, the reported pump-to-signal conversion efficiencies are
typically (except in instances involving narrow spectral bandwidths) $<0.1$% –
a nonstarter for applications Lu _et al._ (2020); Sayson _et al._ (2019);
Domeneguetti _et al._ (2021); Tang _et al._ (2020). Indeed, while the demand
for efficiency calls for a deeper understanding of the underlying nonlinear
physics, experiments have so far relied on a simplified theoretical framework
for $\mu$OPOs. For instance, frequency matching is considered in either a
cold-cavity limit Lu _et al._ (2019), or otherwise only accounts for a
populated pump mode Sayson _et al._ (2019). A more accurate description of
frequency matching should account for the exact distribution of intraresonator
photons. Moreover, analyses have relied on a three-mode approximation (TMA),
in which only the pump, signal, and idler modes interact through Kerr
nonlinearity Sayson _et al._ (2019); Hansson _et al._ (2013). Of course,
real microresonators comprise a more complex spectrum of modes that are
nonlinearly coupled together. As a result, there is presently a gap between
theoretical and experimental progress revolving around $\mu$OPOs; ultimately,
there is little theoretical basis on which to design microresonators to meet
end-user demands.
Here, we construct a generalized $\mu$OPO solution space; thereby, we reveal
connections between the $\mu$OPO state and experimental parameters, and we
identify processes that limit conversion efficiency. We adopt a model based on
the Lugiato-Lefever Equation (LLE) Coen _et al._ (2013); Chembo and Menyuk
(2013) and support our main numerical results with theoretical analyses. In
the next section, we explain our modeling and present simulation results using
a TMA. Then, we expand the model to include a spectrum of nonlinearly-coupled
resonator modes. We demonstrate two nonlinear phenomena that cannot be
explained within a TMA. In the first, a mode competition takes place between
multiple signal and idler mode pairs. In the second, modulation instability
induced by cross-phase modulation constrains the $\mu$OPO conversion
efficiency. Finally, we propose two strategies for increasing the $\mu$OPO
conversion efficiency and output power. Surprisingly, when comparing
microresonators with different loss rates but identical geometries, we find
that the resonator with greater losses will, in some cases, promote higher
efficiency.
## II Modeling the $\mu$OPO: The Lugiato-Lefever Equation, three-mode
approximation, and dispersion
To study $\mu$OPOs, we consider a microring resonator coupled to an optical
access waveguide and pumped by a continuous-wave (CW) laser, as depicted in
Fig. 1a. This structure supports whispering gallery modes, in which azimuthal
modes are grouped into families sharing a transverse spatial mode profile.
Modes within a family are spaced (in the frequency domain) by a free-spectral
range (FSR) that is inversely proportional to the ring circumference, $L$. In
our model, we consider a single mode family and denote its resonant
frequencies as $\omega_{\rm{\mu}}$, where $\mu$ is the azimuthal mode number
shifted to make $\omega_{\rm{0}}$ the frequency of the pumped mode. The pump
laser has frequency $\omega_{\rm{p}}$ and waveguide power $P_{\rm{in}}$, and
the intraresonator field, $a$, obeys the Lugiato-Lefever Equation (LLE) Chembo
and Menyuk (2013):
$\frac{da}{dt}=\sqrt{\frac{\kappa_{\rm{c}}(0)}{\hbar\omega_{\rm{p}}}P_{\rm{in}}}-\left(\frac{\kappa_{i}}{2}+i\frac{\kappa(0)}{2}\alpha-
ig_{0}|a|^{2}\right)a-i\mathcal{D}(\mu)\tilde{a},$ (1)
where $|a|^{2}$ gives the intraresonator energy in units of photon number,
$\kappa_{\rm{c}}(\mu)$ is the mode-dependent coupling rate to the access
waveguide, $\kappa_{\rm{i}}$ is the mode-independent intrinsic loss rate,
$\kappa(\mu)=\kappa_{\rm{i}}+\kappa_{\rm{c}}(\mu)$ is the mode-dependent total
loss rate, $\alpha=\frac{\omega_{\rm{0}}-\omega_{\rm{p}}}{\kappa(0)/2}$ is the
normalized pump-resonator frequency detuning,
$g_{\rm{0}}=\frac{n_{\rm{2}}c\hbar\omega_{\rm{0}}^{2}}{n^{2}V}$ is the
nonlinear gain per photon, $n_{\rm{2}}$ is the Kerr index, $c$ is the speed of
light in vacuum, $n$ is the refractive index, $V$ is the mode volume, and
$\mathcal{D}(\mu)=\omega_{\rm{\mu}}-(\omega_{\rm{0}}+\mu
D_{\rm{1}})+i\kappa_{\rm{c}}(\mu)$, where $D_{\rm{1}}=2\pi\times\textrm{FSR}$;
$\tilde{a}$ indicates that operations to $a$ are performed in the frequency
domain. Notably, the integrated dispersion, $D_{\rm{int}}$, is contained in
Eq. 1 as $D_{\rm{int}}=Re(\mathcal{D})/\kappa(0)$. For concreteness, we use
$n_{\rm{2}}=2.4\times 10^{-19}$ m2/W and $n=1.9$, which are typical values for
silicon nitride (SiN) microrings, and $\omega_{\rm{p}}\approx 2\pi\times 384$
THz ($\approx$ 780 nm wavelength). Unless otherwise stated, we use
$\kappa_{\rm{i}}=2\pi\times 200$ MHz and $\kappa_{\rm{c}}=\kappa_{\rm{i}}$
(i.e., critical coupling).
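For readers who wish to reproduce such simulations, the sketch below integrates Eq. 1 with a basic split-step Fourier scheme. The Kerr coefficient $g_{0}$, the dispersion profile, and the fast-time normalization are assumed, representative values; the sketch is meant to show the integration scheme, not to reproduce the figures in this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
hbar = 1.054571817e-34
w0, Pin, alpha = 2*np.pi*384e12, 20e-3, 2.0    # pump frequency, power, detuning
ki = 2*np.pi*200e6; kc = ki; k0 = ki + kc      # loss rates (critical coupling)
g0 = 2*np.pi*20.0                              # per-photon Kerr shift (assumed)
Nm = 256                                       # number of retained modes
mu = np.fft.fftfreq(Nm, 1.0/Nm)                # azimuthal mode numbers
Dint = k0*(-0.25*mu**2 + 1.0e-4*mu**4)         # toy dispersion: normal near the
                                               # pump, frequency matched near |mu|=50
F = np.sqrt(kc*Pin/(hbar*w0))                  # pump drive amplitude
dt = 0.05/k0                                   # integration step (s)
L = -k0/2 - 1j*(k0*alpha/2 + Dint)             # linear operator (frequency domain)
a = 1e-2*(rng.standard_normal(Nm) + 1j*rng.standard_normal(Nm))  # noise seed

for _ in range(50_000):                        # split-step Fourier integration
    a = np.fft.ifft(np.exp(L*dt)*np.fft.fft(a))   # loss, detuning, dispersion
    a = a*np.exp(1j*g0*np.abs(a)**2*dt) + F*dt    # Kerr nonlinearity and pump

photons = np.abs(np.fft.fft(a)/Nm)**2          # approximate photons per mode
```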
Figure 2: Study of the $\mu$OPO conversion efficiency using a three-mode
approximation. (a) Conversion efficiency, $CE$ (gray), and effective mismatch,
$\Delta_{s}$ (green), versus the pump-resonator frequency detuning, $\alpha$,
for a pump laser power $P_{\rm{in}}=20~{}\rm{mW}$. From top to bottom, the
mismatch values are $\delta=1.25,2,4$, and $7$, respectively. Dashed sections
indicate unstable or oscillatory solutions. (b) A generalized $CE$ map showing
the relationship between maximum $CE$, normalized pump power, $X$, and
$\delta$. The white dashed line is expressed as $X=8\delta$ and closely
follows the contour of highest maximum $CE$. (c) $CE$ (gray) and effective
pump-resonator detuning, $\Delta_{\rm{p}}$ (green), versus $\alpha$. (d)
$\Delta_{\rm{p}}$ versus $\delta$ at values of $\alpha$ that maximize $CE$ for
$P_{\rm{in}}=15$ mW (red, $X\approx 5$) and $P_{\rm{in}}=25$ mW (orange,
$X\approx 8$).
Figure 1c introduces some key concepts and illustrates differences between the
TMA and the multi-mode LLE (here, 'multi-mode' refers to the inclusion of many
longitudinal modes from the same spatial mode family). The main difference is
that a multi-mode model accounts for many more modes that can be coupled
together by the Kerr nonlinearity. In contrast, a TMA only considers nonlinear
interactions (e.g., FWM) between the pump, signal, and idler modes.
Importantly, both models account for imperfect frequency matching. In general,
phase-matched mode pairs (i.e., with azimuthal numbers $\pm\mu$) are not
frequency matched – the associated FWM process does not conserve energy.
Imperfect frequency matching is often quantified by the frequency mismatch
parameter
$\delta_{\rm{\mu}}=\frac{\omega_{\mu}+\omega_{-\mu}-2\omega_{\rm{0}}}{\kappa(0)}.$
(2)
Throughout, we use $\delta_{\rm{\mu}}$ to refer to the dispersive mismatch
spectrum and $\delta$ to refer to the value of $\delta_{\rm{\mu(-\mu)}}$ at
the targeted signal (idler) mode. A degenerate FWM process only conserves
energy and momentum if $\delta$ is compensated by nonlinear frequency shifts
of $\omega_{\rm{p(s,i)}}$, which we denote as $N_{\rm{p(s,i)}}$ for the pump
(signal, idler) mode. Nonlinear shifts arise from self- and cross-phase
modulation (SPM and XPM, respectively) and are related to the intraresonator
intensity spectrum. Indeed, an occupied resonator necessarily implies
$N_{p(s,i)}<0$; therefore, $\delta=0$ is not conducive to parametric
oscillation.
Using a TMA, we investigate how changes in $\delta$ impact the $\mu$OPO, and
then we introduce other modes into our simulations. To ensure the consistency
of our numerical methods, we use the split-step Fourier method to simulate Eq.
1 for both the TMA and multi-mode LLE. Our primary goal is to understand the
efficiency with which pump photons are converted into signal or idler photons.
Accordingly, we define the conversion efficiency in units of photon flux as
$CE=\frac{\omega_{\rm{p}}}{\omega_{\rm{s(i)}}}\frac{P_{\rm{s(i)}}}{P_{\rm{in}}},$
(3)
where $\omega_{\rm{s(i)}}$ is the signal (idler) frequency and $P_{\rm{s(i)}}$
is the signal (idler) output power in the access waveguide. Importantly, the
underlying symmetry of the FWM process implies that the signal and idler
fields have equal $CE$ values and that $N_{\rm{s}}=N_{\rm{i}}$ for a
critically-coupled resonator. Throughout, we present $CE$ for the signal; the
same results apply to the idler.
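A minimal sketch of the bookkeeping behind Eq. 3, assuming the signal-mode photon number $|\tilde{a}_{\rm{s}}|^{2}$ is available from a simulation; all numerical values are illustrative.

```python
import numpy as np

hbar = 1.054571817e-34

def conversion_efficiency(photons_s, w_p, w_s, kc_s, P_in):
    P_s = kc_s*hbar*w_s*photons_s         # out-coupled signal power (W)
    return (w_p/w_s)*P_s/P_in             # photon-flux conversion efficiency

CE = conversion_efficiency(photons_s=5e6, w_p=2*np.pi*384e12,
                           w_s=2*np.pi*500e12, kc_s=2*np.pi*200e6, P_in=20e-3)
print(f"CE = {100*CE:.1f} %")
```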
In Fig. 2, we present simulation results obtained using a TMA. In a single
simulation, $P_{\rm{in}}$ and $\delta$ are held fixed; we vary $\alpha$ to
simulate a $\omega_{\rm{p}}$ scan across resonance from blue to red detuning,
as shown in Fig. 2a. During the simulation, we record various data, including
the $CE$, $N_{\rm{p(s,i)}}$, and optical spectra (see Supplemental Material
for details). In Fig. 2a, each panel corresponds to $P_{\rm{in}}=20$ mW, but
$\delta$ is increased from top to bottom. Our results indicate a highest
obtainable $CE$ of $12.5$ % for critically-coupled resonators, in agreement
with Ref. Sayson _et al._ (2019). Additionally, each panel depicts the
effective signal detuning, defined as
$\Delta_{\rm{s}}=\delta+\frac{\alpha}{2}-N_{\rm{s}}$. Notably, when $CE>0$,
$\Delta_{\rm{s}}\approx 0$, which indicates that dispersion is perfectly
compensated by detuning and nonlinearity in the $\mu$OPO state.
To fully characterize the relationship between $CE$, $P_{\rm{in}}$, and
$\delta$, we construct the universal $CE$ map shown in Fig. 2b. The parameter
space is defined by $P_{\rm{in}}$ and $\delta$; in Fig. 2b, we normalize
$P_{\rm{in}}$ as
$X=\frac{P_{\rm{in}}}{P_{\rm{th}}},$ (4)
where
$P_{\rm{th}}=\frac{\hbar\omega_{\rm{0}}\kappa^{3}(0)}{8g_{\rm{0}}\kappa_{\rm{c}}(0)}$
is the oscillation threshold power Kippenberg _et al._ (2004). Each data
pixel in the $CE$ map represents the maximum $CE$ value taken from the
corresponding LLE simulation, as indicated by the dashed line connecting Figs.
2a and 2b. The $CE$ map has a few notable features. First, we do not observe
parametric oscillation for any values of $P_{\rm{in}}$ when $\delta\leq 0$.
This indicates that nonlinear frequency shifts of $\omega_{\rm{\mu}}$ inhibit
frequency matching even when $X\approx 1$. Second, $CE$ contours follow clear
trends through the parameter space. In particular, to maintain $CE$ at larger
values of $X$, $\delta$ must be increased, apparently to compensate for larger
$N_{s(p,i)}$. Remarkably, we can derive an analytical expression for the
contour of highest $CE$. We provide the derivation in the Supplemental
Material; here, we present the final result, $X=8\delta$, and indicate it with
the white dashed line in Fig. 2b. Clearly, to maximize $CE$ for a given
$P_{\rm{in}}$, the microresonator dispersion (i.e., $\delta$) should be
designed appropriately.
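The design rule can be evaluated directly; the sketch below computes $P_{\rm{th}}$, the normalized power $X$, and the mismatch on the highest-$CE$ contour for assumed loss and Kerr parameters.

```python
import numpy as np

hbar = 1.054571817e-34
w0 = 2*np.pi*384e12
ki = 2*np.pi*200e6; kc = ki; k0 = ki + kc      # loss rates (critical coupling)
g0 = 2*np.pi*20.0                              # per-photon Kerr shift (assumed)

P_th = hbar*w0*k0**3/(8*g0*kc)                 # oscillation threshold power (W)
P_in = 20e-3
X = P_in/P_th                                  # normalized pump power (Eq. 4)
print(f"P_th = {1e3*P_th:.1f} mW, X = {X:.1f}, delta on best contour = {X/8:.2f}")
```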
Figure 3: Depictions of microresonator dispersion. (a) $\delta_{\rm{\mu}}$
(green circles), and integrated dispersion, $D_{\rm{int}}$ (purple stripe),
for the TE1 mode family calculated using finite-element method eigenfrequency
simulations. The microresonator has ring radius $RR=15$ $\mu$m, ring width
$RW=800$ nm, height $H=600$ nm, and pump frequency $\omega_{\rm{p}}=2\pi\times
388$ THz. Inset: Illustration of the test device cross section. (b) Group
velocity refractive index, $n_{\rm{g}}$, versus $\omega/2\pi$ for the same
device as in (a). The dashed lines illustrate how a turning point in
$\delta_{\rm{\mu}}$ occurs when $n_{\rm{g}}(-\mu)=n_{\rm{g}}(\mu)$.
In Fig. 2a, it is perhaps remarkable that $CE$ values tend to increase
monotonically until an abrupt cutoff. To elucidate this observation, we
monitor the effective pump detuning,
$\Delta_{\rm{p}}=\frac{\alpha}{2}-N_{\rm{p}}$, during our simulations and
present a sample result in Fig. 2c. When $CE$ is greatest, $\Delta_{\rm{p}}$
reaches a minimum value near zero. Physically, this condition implies that
$\omega_{\rm{p}}$ is nearly resonant; intuitively, this is necessary to
maximize the dropped power and, in turn, the nonlinear gain. The cutoff
results from Kerr bistability; i.e., when further increases in $\alpha$ are no
longer compensated by $N_{\rm{p}}$, the intraresonator field abruptly
transitions to the CW state. To characterize the universality of this feature,
we record $\Delta_{\rm{p}}$ (evaluated for the highest $CE$) versus $\delta$
for two different powers, as shown in Fig. 2d. Apparently, realizing high $CE$
requires $\Delta_{\rm{p}}\approx 0$.
As a first step towards transitioning from a TMA to a multi-mode LLE, we
briefly discuss the microresonator dispersion and its different
representations. In Fig. 3a, we present $\delta_{\rm{\mu}}$ and $D_{\rm{int}}$
spectra for the fundamental transverse electric (TE) mode family, labeled TE1,
of a SiN microring resonator (hereafter referred to as our test device) with
ring radius $RR=15$ $\mu$m, ring width $RW=800$ nm, and height $H=600$ nm. We
extract $\delta_{\rm{\mu}}$ and $D_{\rm{int}}$ from the mode spectrum,
$\omega_{\rm{\mu}}$, that we calculate from finite-element method
eigenfrequency simulations (performed using Comsol Multiphysics, which is
identified here to foster understanding, without implying recommendation or
endorsement by NIST). It is easy to show that
$\delta_{\rm{\mu}}=2\times\mathcal{S}(D_{\rm{int}}(\mu))$, where
$\mathcal{S}(D_{\rm{int}}(\mu))$ indicates the symmetric part of
$D_{\rm{int}}$; i.e., the even orders of $D_{\rm{int}}$ when expressed as a
Taylor expansion around $\mu=0$. We have assessed that analyzing
$\delta_{\rm{\mu}}$ is sufficient to understand the $\mu$OPO dynamics, which
are not sensitive to odd orders of $D_{\rm{int}}$ Sayson _et al._ (2019).
This is a notable departure from other nonlinear microresonator eigenstates,
e.g., dissipative Kerr solitons.
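In practice, $\delta_{\rm{\mu}}$ can be computed from a simulated mode spectrum as the doubled even part of $D_{\rm{int}}$; the sketch below does so for a toy polynomial spectrum whose dispersion coefficients are invented for illustration.

```python
import numpy as np

def mismatch_spectrum(w, mu, k0):
    """w[j]: resonance frequency (rad/s) of mode mu[j]; mu symmetric about 0."""
    i0 = np.where(mu == 0)[0][0]
    D1 = (w[i0 + 1] - w[i0 - 1])/2          # local FSR (rad/s) at the pump
    Dint = (w - (w[i0] + mu*D1))/k0         # normalized integrated dispersion
    return Dint + Dint[::-1]                # even part doubled = delta_mu

mu = np.arange(-60, 61)
w = 2*np.pi*(384e12 + 1.5e12*mu - 0.5e6*mu**2 + 250.0*mu**4)  # toy spectrum (Hz)
delta_mu = mismatch_spectrum(w, mu, k0=2*np.pi*400e6)
print("delta turns positive near |mu| =", mu[(mu > 0) & (delta_mu > 0)][0])
```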
There are two defining features of the $\delta_{\rm{\mu}}$ spectrum shown in
Fig. 3a. First, its negative curvature around $\mu=0$ indicates normal
dispersion, which is required to suppress comb formation. Second, to overcome
the normal dispersion and achieve frequency matching, the $D_{\rm{int}}$
expansion must contain higher order (even) terms such that $\delta_{\rm{\mu}}$
turns and becomes positive. Physically, this means that an anomalous-to-normal
dispersion transition is necessary. Indeed, the dashed lines connecting Figs.
3a and 3b illustrate how $n_{\rm{g}}(\mu)=n_{\rm{g}}(-\mu)$ corresponds to a
$\delta_{\rm{\mu}}$ turning point, where $n_{\rm{g}}(\mu)$ is the dispersive
group velocity refractive index.
Figure 4: Transition from a TMA to multi-mode LLE simulations. (a) Depiction
of the four wave mixing (FWM) processes studied in this work. Mode competition
(orange arrows) suppresses FWM to the target signal and idler mode pair (with
mode numbers $\pm\mu$) in favor of spectrally-adjacent mode pairs (with mode
numbers $\pm(\mu\pm 1)$), and modulation instability induced by cross-phase
modulation (XPM-MI, purple arrows) reroutes energy in the pump mode to its
spectrally-nearest neighbors. (b) $CE$ map for mode $\mu=46$; the resonator
parameters are $RR=15\mu$m, $RW=800$ nm, $H=600$ nm, and
$\omega_{\rm{p}}=2\pi\times 382$ THz. (c) Difference in $CE$ between
simulations based on a TMA and the $CE$ map in part (b).
Next, we preview the differences between $CE$ maps calculated from multi-mode
and TMA models. We observe two FWM processes, depicted in Fig. 4a, that
require the multi-mode model. In one case, we observe mode competition between
mode pairs with consecutive $\mu$ values. In the second case, we observe
modulation instability (MI) in the normally-dispersive spectral region around
$\omega_{\rm{p}}$, and we explain it as arising from XPM between the pump,
signal, and idler fields. We mostly constrain our multi-mode LLE simulations
to $P_{\rm{in}}\leq 25$ mW ($X\leq 8$); at higher values of $P_{\rm{in}}$, the
$\mu$OPO dynamics become more complex.
Figure 5: Mode competition and switching. (a) Illustration of mode competition
in a $\mu$OPO. Oscillations on the target signal and idler mode pair (with
mode numbers $\pm\mu$) are suppressed in favor of spectrally-adjacent mode
pairs (with mode numbers $\pm(\mu\pm 1)$). (b) $\delta_{\rm{\mu}}$ for the
test device with $\omega_{\rm{p}}=2\pi\times 382$ THz. The pale orange stripe
overlaps modes that are predicted to oscillate when $P_{\rm{in}}=20$ mW. (c)
$CE$ maps for modes $\mu=46$, $\mu=45$, and $\mu=44$. The maps indicate a mode
competition in which the mode pair with smallest $\delta$ is favored for
oscillation. (d) $CE$ versus $\alpha$ for modes $\mu=46$ (blue) and $\mu=45$
(green), where $P_{\rm{in}}=20$ mW and $\beta=5\times 10^{-4}$. The pale
stripes come from a TMA. (e) Optical spectra that correspond to different
values of $\alpha$ in (d). The blue, orange, green, and red spectra correspond
to $\alpha=1,1.5,2.5$, and $3.5$, respectively.
Figure 4b shows a $CE$ map for our test device pumped near
$\omega_{\rm{p}}=2\pi\times 382$ THz. To make a straightforward comparison
between these data and Fig. 2b, we calculate the difference in $CE$ between
the two $CE$ maps, as shown in Fig. 4c. In general, XPM-induced MI (XPM-MI)
explains $CE$ differences for small $\delta$, while mode competition is
responsible for the abrupt cutoff in $CE$ (marked by the sharp transition to
zero $CE$ in Fig. 4b or the bold yellow stripe in Fig. 4c) that indicates a
different mode pair is oscillating. In the following sections, we explore mode
competition and XPM-MI in detail.
## III Mode competition and switching
Mode competition is ubiquitous in laser systems with multi-mode resonators
Narducci _et al._ (1986); Gong _et al._ (2007). In general, mode competition
occurs when several resonator modes experience amplification simultaneously;
hence, modes compete for gain and become coupled. In this section, we present
the results of multi-mode LLE simulations in which several mode pairs are
simultaneously nearly frequency matched. We use our findings to establish
general principles for mode competition that will inform future microresonator
designs.
In Fig. 5, we present simulation results for our test device pumped near
$\omega_{\rm{p}}=2\pi\times 382$ THz. The FWM process related to mode
competition is illustrated in Fig. 5a. Modes that are spectrally adjacent to
the targeted signal and idler pair may be nearly frequency matched; hence,
light in these modes may become amplified through FWM. To explore this
phenomenon, we consider the $\delta_{\rm{\mu}}$ spectrum shown in Fig. 5b.
According to the TMA, any mode pairs with $\delta_{\rm{\mu}}>0$ may oscillate,
provided $P_{\rm{in}}$ is large enough. In Fig. 5b, the data points covered by
the pale gold stripe indicate mode pairs that would oscillate when
$P_{\rm{in}}=20$ mW, if they (along with the pump mode) were the only modes in
the system (i.e., in a TMA). Hence, we endeavour to understand how a mode pair
(or pairs) is chosen for oscillation over its spectral neighbors. To study
mode competition in our test device, we perform multi-mode LLE simulations,
from which we construct $CE$ maps for the modes corresponding to $\mu=46$,
$\mu=45$, and $\mu=44$, as shown in Fig. 5c. To vary $\delta_{\rm{\mu}}$
(which is fixed for a given geometry), we apply a quadratic dispersion to
$\omega_{\rm{\mu}}$, such that
$\omega_{\rm{\mu}}\rightarrow\omega_{\rm{\mu}}+\beta\kappa(0)\mu^{2}$, where
$\beta$ quantifies the added dispersion. We choose this approach in order to
maintain the overall shape of $\delta_{\rm{\mu}}$. Note that
$\delta_{\rm{45}}$ and $\delta_{\rm{44}}$ are both negative in Fig. 5b, but
they will become positive as $\beta$ is increased.
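Continuing the dispersion sketch of the previous section, this quadratic tuning amounts to one line: shifting every resonance by $\beta\kappa(0)\mu^{2}$ adds $2\beta\mu^{2}$ to $\delta_{\rm{\mu}}$.

```python
# Uses w, mu, and mismatch_spectrum from the earlier dispersion sketch.
beta = 5e-4                                  # value used in Fig. 5d
k0 = 2*np.pi*400e6
w_shifted = w + beta*k0*mu**2                # w_mu -> w_mu + beta*k0*mu^2
delta_shifted = mismatch_spectrum(w_shifted, mu, k0)   # = delta_mu + 2*beta*mu^2
```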
Figure 6: Characterization of XPM-MI in microresonators with normal
dispersion. (a) Illustration of the XPM-MI FWM process (purple arrows), in
which XPM between the pump, signal, and idler fields induces MI that
distributes pump energy into spectrally-nearby sidebands. (b)
$\delta_{\rm{\mu}}$ for $\beta=0.0625$ (green circles), $\beta=1.25$ (cyan
upward-pointing triangles), and $\beta=5$ (blue downward-pointing triangles).
One mode pair ($\mu=\pm 50$) is frequency shifted to be frequency matched to
the pump mode. (c) $CE$ versus $\alpha$ for the $\delta_{\rm{\mu}}$ spectra in
(b), with $P_{\rm{in}}=20$ mW and $\delta=3$. (d) Optical spectrum associated
with part (c), with $\alpha=4$ and $\beta=0.0625$. (e) $CE$ maps for
microresonators with the $\delta_{\rm{\mu}}$ spectra in (b), ordered from
least to greatest $\beta$.
The $CE$ maps in Fig. 5c present clear evidence for mode competition, and we
identify three notable features in them. First, the $CE$ map for mode $\mu=46$
resembles that of the TMA for small $\delta_{\rm{46}}$. Second, as $\beta$ is
increased, the maximum $CE$ for mode $\mu=46$ declines sharply, and this
coincides with $\delta_{\rm{45}}>0$ and oscillation on mode $\mu=45$. This
pattern repeats as $\beta$ is increased further – the oscillating mode changes
from $\mu=45$ to $\mu=44$, and so on. Hence, we assess that the mode pair with
smallest positive $\delta$ is favored for oscillation. Finally, there are
small regions of parameter space where multiple mode pairs seem to oscillate
simultaneously – we explore this phenomenon in Figs. 5d and 5e.
Figure 5d shows $CE$ for modes $\mu=46$ and $\mu=45$ during a
$\omega_{\rm{p}}$ scan (i.e., varying $\alpha$) with $\beta=5\times 10^{-4}$
($\delta_{\rm{46}}\approx 3$) and $P_{\rm{in}}=20$ mW. During the scan, mode
$\mu=46$ begins to oscillate first; its $CE$ closely follows predictions made
using a TMA (pale blue stripe) and reaches a maximum when $\alpha\approx 1.3$.
In this regime, the $\mu$OPO spectrum is tri-modal, as depicted by the blue
spectrum in Fig. 5e (the spectral bandwidth in Fig. 5e only spans
$\omega_{\rm{p}}$ and $\omega_{\rm{s}}$; we have confirmed the spectrum is
symmetric around $\omega_{\rm{p}}$). Beyond $\alpha\approx 1.3$, mode $\mu=45$
begins to oscillate, and $CE$ for mode $\mu=46$ deviates from its TMA
counterpart. Between $\alpha\approx 1.3$ and $\alpha\approx 2.2$, both modes
oscillate simultaneously, but their respective $CE$ values are not predicted
by a TMA. Moreover, in this region the $\mu$OPO spectrum is not tri-modal;
rather, it is strongly multi-moded with intensity peaks occurring near
$\omega_{\rm{p}}$, $\omega_{\rm{s}}$, and $\omega_{\rm{i}}$, as shown by the
orange spectrum in Fig. 5e. Beyond $\alpha\approx 2.2$, $CE$ for mode $\mu=46$
is zero, and $CE$ for mode $\mu=45$ follows its TMA counterpart. We term this
phenomenon, wherein $CE$ values for different mode pairs conform to their TMA
counterparts for different portions of a $\omega_{\rm{p}}$ scan, mode
switching. Finally, at $\alpha\geq 3$, the $\mu$OPO decays in favor of MI.
Remarkably, the MI state is supported in the normally-dispersive region around
$\mu=0$ and without strong signal or idler fields. The MI spectrum is shown by
the shaded red curve in Fig. 5e. In the next section, we explore MI and its
impact on $CE$.
## IV XPM-MI: Characterization and Theory
It is clear from Fig. 5 that mode competition is not the only process that
differentiates the multi-mode LLE and TMA. Specifically, Figs. 5d and 5e
establish that MI can suppress or extinguish the signal and idler fields, and
this occurs in regions of parameter space where a TMA predicts efficient
parametric oscillation. In this section, we isolate the MI process in our
simulations by considering a special dispersion profile that eliminates mode
competition, and we develop a theory for MI as arising from XPM between the
pump, signal, and idler fields. We term this process XPM-MI to link it to
previous investigations Agrawal (1987).
The XPM-MI FWM process is illustrated by the purple arrows in Fig. 6a. Energy
from the pump laser is distributed to sidebands in a narrow spectral bandwidth
around $\omega_{\rm{p}}$. If we use a multi-mode LLE to simulate our test
device, the effects of mode competition can obfuscate the impact of XPM-MI.
Therefore, we contrive the heuristically useful $\delta_{\rm{\mu}}$ spectra
depicted in Fig. 6b. Here, we define $\delta_{\rm{\mu}}=-2\beta\mu^{2}$ for
all $\mu\neq\pm 50$. The modes $\mu=\pm 50$ are designated for parametric
oscillation and assigned a frequency mismatch value $\delta$ that can be
manipulated apart from $\beta$.
Figure 7: Theory of XPM-MI. (a) MI gain parameter, $\lambda$, versus $\alpha$
for $\mu=1$. Orange and gold curves correspond to $P_{\rm{in}}=15$ mW,
$\delta=1.5$, $\beta=0.25$, and $P_{\rm{in}}=25$ mW, $\delta=2.5$,
$\beta=0.25$, respectively. The red curves all use $P_{\rm{in}}=20$ mW, with
bold, dotted, and dashed curves corresponding to $\delta=2.5$ and $\beta=1$,
$\delta=5$ and $\beta=0.25$, and $\delta=2$ and $\beta=0.25$, respectively.
(b) The value of $\alpha$ at which MI first appears, $\alpha_{\rm{ON}}$,
versus $\beta$ for $P_{\rm{in}}=20$ mW and $\delta=2.5$. Light blue circles
indicate theoretical values using Eq. 5, while dark blue diamonds mark values
predicted by LLE simulations. (c) $\lambda$ map for $\mu=1$ and $\beta=0.25$.
In Fig. 6c, we present LLE simulations of $CE$ versus $\alpha$ for
microresonators with the $\delta_{\rm{\mu}}$ spectra in Fig. 6b; these data
overlay the corresponding simulation using a TMA (pale gray stripe). They are
representative of the entire ($P_{\rm{in}},\delta)$ parameter space, and they
are noteworthy for two reasons. First, in all cases $CE$ follows the gray
stripe until $\alpha\geq\alpha_{\rm{ON}}$, where $\alpha_{\rm{ON}}$
corresponds to the onset of XPM-MI. An example XPM-MI spectrum is shown in
Fig. 6d. Unlike the XPM-MI spectrum shown in Fig. 5e, here the XPM-MI and
$\mu$OPO states co-exist, albeit with suppressed $CE$. Second,
$\alpha_{\rm{ON}}$ increases with increasing $\beta$. As a result, $CE$ fully
converges to its TMA counterpart in the high-$\beta$ limit. To investigate
this convergence, we construct the $CE$ maps for different values of $\beta$,
and we present the results in Fig. 6e. We observe steady convergence to the
TMA $CE$ map as $\beta$ is increased. Clearly, it is crucial to realize strong
normal dispersion near $\omega_{\rm{p}}$ to observe efficient parametric
oscillation when $X\gg 1$.
Next, we present an expression for the XPM-MI gain that we derive from a set
of coupled mode equations (see Supplemental Material for details), and we
analyze this expression to support our conclusion that XPM underlies the
observed MI states. As noted in Ref. Hansson _et al._ (2013), MI may occur in
normally-dispersive microresonators; however, it usually requires a hard
excitation, i.e., the MI sidebands are not amplified from vacuum fluctuations.
This fact explains how the MI state can sometimes persist after parametric
oscillations decay. Still, in our simulations we have never observed MI emerge
before parametric oscillations. Indeed, it was predicted in Ref. Agrawal
(1987) that XPM between two waves can make them modulationally unstable, even
when one or both waves propagates in normally-dispersive media. Motivated by
this study, we analyze a set of coupled mode equations that assume the pump,
signal, and idler modes are occupied, and we derive the MI gain,
$\lambda_{\rm{\mu}}$, as
$\lambda_{\rm{\mu}}=-1+\sqrt{I_{\rm{0}}^{2}+I_{\rm{s}}^{2}+2I_{\rm{0}}I_{\rm{s}}\cos(\Delta\phi)-k_{\rm{\mu}}^{2}},$
(5)
where $I_{\rm{0}}$ and $I_{\rm{s}}$ are the photon numbers for the pump and
signal modes, respectively,
$\Delta\phi=2\phi_{\rm{p}}-\phi_{\rm{s}}-\phi_{\rm{i}}$ is the relative phase
mismatch between the pump, signal, and idler fields, which are expressed as
$\tilde{a}_{\rm{p(s,i)}}=\sqrt{I_{\rm{p(s,i)}}}e^{-i\phi_{\rm{p(s,i)}}}$, and
$k_{\rm{\mu}}=2I_{\rm{0}}+4I_{\rm{s}}-\beta\mu^{2}-\alpha$. MI occurs when
$\lambda_{\rm{\mu}}>0$, for any $\mu$. Note from these definitions that
$\beta\mu^{2}$ is the frequency mismatch parameter for the MI sidebands, with
mode numbers $\pm\mu$, and not the frequency mismatch for the $\mu$OPO mode
pair, which we denote as $\delta$. In the limit $I_{\rm{s}}\rightarrow 0$, Eq.
5 is equivalent to the MI gain expression derived in Ref. Hansson _et al._
(2013). Moreover, it is clear that $\lambda_{\rm{\mu}}$ is maximized when the
pump, signal, and idler fields are in phase (i.e. when $\Delta\phi=0$), which
corresponds to the in-phase addition of two FWM processes: the degenerate FWM
process $2\omega_{\rm{p}}\rightarrow\omega_{\rm{\mu}}+\omega_{\rm{-\mu}}$ and
the non-degenerate process
$\omega_{\rm{s}}+\omega_{\rm{i}}\rightarrow\omega_{\rm{\mu}}+\omega_{\rm{-\mu}}$.
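Equation 5 is straightforward to scan numerically; in the sketch below, the intensities and relative phase are placeholder values that would normally be extracted from a TMA simulation (normalized units as in the text).

```python
import numpy as np

def xpm_mi_gain(mu, I0, Is, dphi, beta, alpha):
    k = 2*I0 + 4*Is - beta*mu**2 - alpha          # effective mismatch term
    rad = I0**2 + Is**2 + 2*I0*Is*np.cos(dphi) - k**2
    return -1 + np.sqrt(np.maximum(rad, 0.0))     # rad < 0 implies no growth

mu = np.arange(1, 30)
lam = xpm_mi_gain(mu, I0=2.0, Is=1.5, dphi=0.0, beta=0.25, alpha=8.0)
print("modulationally unstable sidebands:", mu[lam > 0])
```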
To analyze XPM-MI using Eq. 5, we perform LLE simulations, using a TMA, for
various values of $P_{\rm{in}}$ and $\delta$ and extract values for
$I_{\rm{0}},I_{\rm{s}}$, and $\Delta\phi$ that we use to calculate
$\lambda_{\rm{\mu}}$. When $\lambda_{\rm{\mu}}>0$, the mode pair with mode
numbers $\pm\mu$ will be amplified and steal energy from the $\mu$OPO. In
general, a large normal dispersion leads to the sideband pair with $\mu=\pm 1$
having the largest gain; therefore, in what follows we drop the $\mu$
subscript and define $\lambda$ as the MI gain for this pair. We have confirmed
that $\lambda_{\rm{\mu}}<0$ whenever $I_{\rm{s}}=0$ and $\beta>0$; physically,
this means that XPM between the pump, sigal, and idler fields is required to
initiate MI for microresonators with normal dispersion. Hence, we term this
process XPM-MI. Figure 7a presents $\lambda$ calculations for various values
of $P_{\rm{in}}$, $\delta$, and $\beta$. They indicate that $\lambda$ grows
with increasing $\alpha$, which is explained by the stronger XPM that results
from more powerful $\mu$OPO sidebands at large $\alpha$. To compare our XPM-MI
theory (i.e., Eq. 5) to multi-mode LLE simulations, we calculate
$\alpha_{\rm{ON}}$ in both cases, for different values of $\beta$. Overall, we
observe good agreement, but Eq. 5 generally predicts higher values of
$\alpha_{\rm{ON}}$ than the multi-mode LLE. A sample comparison of this type
is presented in Fig. 7b.
Figure 8: Strategies to achieve high $CE$ and increase power output from a
$\mu$OPO. (a) Depiction of a microring resonator and its loss rates. (b)
Simulated mode transmission vs $\omega_{\rm{p}}$ for
$\kappa_{\rm{i}}=2\pi\times 125$ MHz (red), $\kappa_{\rm{i}}=2\pi\times 200$
MHz (orange), and $\kappa_{\rm{i}}=2\pi\times 275$ MHz (gold). In each case,
$\kappa_{\rm{c}}=\kappa_{\rm{i}}$. (c) $CE$ maps corresponding to (b), for a
resonator with $RR=15~{}\mu$m, $RW=800$ nm, $H=600$ nm, and
$\omega_{\rm{p}}=2\pi\times 385$ THz. (d) Simulated mode transmissions for
$\kappa_{\rm{c}}=\kappa_{\rm{i}}$ (green) and
$\kappa_{\rm{c}}=10\times\kappa_{\rm{i}}$ (blue), where
$\kappa_{\rm{i}}=2\pi\times 200$ MHz. (e) Maximum $CE$ versus
$\kappa_{\rm{c}}/\kappa_{\rm{i}}$ and $\delta$, where $\kappa_{\rm{c}}$ is the
signal mode coupling rate. The pump and idler modes are critically coupled.
To more comprehensively compare our theory with multi-mode LLE simulations, we
calculate the maximum gain, $\lambda_{\rm{max}}$, for $\beta=0.25$ and
different values of $P_{\rm{in}}$ and $\delta$. We use these data to construct
the $\lambda_{\rm{max}}$ map presented in Fig. 7c. It accurately predicts the
threshold $P_{\rm{in}}$ values for XPM-MI, and it has the overall trend that
$\lambda_{\rm{max}}$ grows with increasing $P_{\rm{in}}$ and $\delta$. This
trend is consistent with the $CE$ maps presented in Fig. 6e, which deviate
from the TMA as $P_{\rm{in}}$ and $\delta$ are increased. Importantly, our
observations may explain prior experimental results, which consistently report
that increasing $P_{\rm{in}}$ leads to the formation of undesired sidebands Lu
_et al._ (2019); Tang _et al._ (2020); Fujii _et al._ (2019). Overall, our
theory supports the hypothesis that XPM drives MI in a $\mu$OPO. Moreover, we
can now form a concise description of parasitic FWM in $\mu$OPOs: Mode
competition dictates which mode pair oscillates, while XPM-MI constrains $CE$.
## V Towards efficient and high-power $\mu$OPO
Our results suggest several design rules for avoiding mode competition and
mitigating XPM-MI through dispersion engineering. For example, Fig. 6
demonstrates that larger normal dispersion around $\omega_{\rm{p}}$ will
suppress XPM-MI. Moreover, increasing $\frac{d\delta_{\rm{\mu}}}{d\mu}$ will
help prevent mode competitions. This can be accomplished by using smaller
resonators or in resonators with greater dispersion. Still, in practice it is
nontrivial to fabricate devices with ideal dispersion characteristics.
Therefore, in this section we describe two strategies to increase $CE$ by
tuning the microresonator loss rates, $\kappa_{\rm{c}}(\mu)$ and
$\kappa_{\rm{i}}$. We assume that users of real-world $\mu$OPO devices will
value high output power; therefore, we focus on ways to increase $CE$ for
fixed $P_{\rm{in}}$.
Figure 8a depicts the coupling and intrinsic loss rates that are quantified by
$\kappa_{\rm{c}}(\mu)$ and $\kappa_{\rm{i}}$, respectively. In practice,
$\kappa_{\rm{c}}(\mu)$ is controllable by various design parameters, including
the waveguide-resonator separation and waveguide-resonator coupling length,
for example, in a ‘pulley’ configuration Moille _et al._ (2019). Moreover,
$\kappa_{\rm{i}}$ can be reduced within some spectral bands by annealing
Spencer _et al._ (2014); Graziosi _et al._ (2018); hence, one gains some
control over $\kappa_{\rm{i}}$ by making a suitable choice for the annealing
time or temperature (also, increasing $\kappa_{\rm{i}}$ can be realized
through, for example, intentionally-introduced surface roughness).
We consider two ways to increase $CE$. First, by increasing $P_{\rm{th}}$, one
increases the $P_{\rm{in}}$ values for which XPM-MI occurs. To demonstrate
this approach, we perform multi-mode LLE simulations of our test device for
three different values of $\kappa$. In every simulation,
$\kappa_{\rm{c}}(\mu)=\kappa_{\rm{i}}$ and $\omega_{\rm{p}}=2\pi\times 385$
THz. The simulated modal lineshapes for these resonators are shown in Fig. 8b,
and the corresponding $CE$ maps are shown in Fig. 8c. In Fig. 8c, $\kappa$ is
increased from the left-most panel to the right-most panel. As $\kappa$ is
increased, the region of highest $CE$ is shifted towards higher values of
$P_{\rm{in}}$. Moreover, this region is broadened along both axes, indicating
that devices with larger $\kappa$ will have a greater tolerance for design
errors in $\delta$. While increases in $\kappa$ prevent high $CE$ at low
$P_{\rm{in}}$, the obtainable output power generated with high $P_{\rm{in}}$
has grown; for instance, with $P_{\rm{in}}=30$ mW, the maximum output power
increases from $P_{\rm{sig}}\approx 2.25$ mW at $\kappa/2\pi=250$ MHz (Fig.
8c, left-most panel) to $P_{\rm{sig}}\approx 3.75$ mW at $\kappa/2\pi=550$ MHz
(Fig. 8c, right-most panel).
Finally, we consider the relationship between the signal mode coupling rate,
$\kappa_{\rm{c}}$, and $CE$. This relationship was explored in Ref. Sayson
_et al._ (2019), with the result that $CE$ may be as large as $25$%
(overcoupling the pump mode may further increase $CE$). Here, we reiterate
this result and explore it within the $\mu$OPO parameter space. Figure 8d
shows simulated signal mode lineshapes for two values of $\kappa_{\rm{c}}$,
and Fig. 8e shows the $CE$ map for $P_{\rm{in}}=20$ mW in a parameter space
defined by the coupling ratio, $\kappa_{\rm{c}}/\kappa_{\rm{i}}$, and
$\delta$. In our simulations, we use a TMA and keep the pump and idler modes
critically coupled. We not only observe an increase in the highest obtainable
$CE$ to nearly $25$%, but we also observe that the region of highest $CE$ is
broadened for increasing $\kappa_{\rm{c}}/\kappa_{\rm{i}}$. Broadening occurs
until $\kappa_{\rm{c}}/\kappa_{\rm{i}}\approx 30$, at which point the
advantages of overcoupling are overcome by the corresponding increases in
$P_{\rm{th}}$.
## VI Discussion
In conclusion, we have established a foundation of simulation results for
$\mu$OPOs that moves beyond a TMA and will help guide experimental efforts to
realize the high conversion efficiencies predicted by the simplified theory.
We introduced a $CE$ map that encapsulates the $\mu$OPO solution space, and
through multi-mode LLE simulations we reveal nonlinear dynamics not
predictable from a TMA. In particular, we identified mode competition and
demonstrated how it determines the oscillating mode pair. Meanwhile, the range
of parameter space over which high $CE$ can be obtained is constrained by XPM-
MI. Mode competition and XPM-MI both depend on the microresonator dispersion,
$\delta_{\rm{\mu}}$, and suitable dispersion engineering may circumvent these
processes. Still, optimizing $\delta_{\rm{\mu}}$ may be nontrivial in
practice; therefore, we have proposed two strategies to increase $CE$ apart
from dispersion engineering. Ultimately, suitable control of both resonator
loss (including waveguide coupling) and dispersion makes it possible to tailor
microresonator geometries for high $CE$ at a targeted pump power, that is, to
produce a useful amount of output power. Such engineering will be crucial in
the development of compact, coherent light sources that take advantage of the
enormous wavelength flexibility inherent to $\chi^{(3)}$ OPOs.
###### Acknowledgements.
We thank Travis Briles and Edgar Perez for a careful reading of the paper.
This project is funded by the DARPA LUMOS program.
# Supplemental Material
## I Calculations of $\mu$OPO variables
In this section, we describe mathematical formulae that relate LLE variables
to the data (e.g. $CE$ and optical spectra) presented in the main text.
Simulations of the Lugiato-Lefever Equation (LLE) yield solutions for the
complex intraresonator field, $a$; we denote the intraresonator field spectrum
$\tilde{a}=\mathcal{F}(a)$, where $\mathcal{F}$ denotes the Fourier transform,
and $a$ is normalized such that $|\tilde{a}|^{2}_{\omega=\omega_{\mu}}$ gives
the number of intraresonator photons in the mode $\mu$.
Table 1 lists the important variables, along with their physical descriptions
and how they are calculated from $a$, $\tilde{a}$, and simulation parameters.
Notably, the expressions for $\phi_{\rm{p}}$ and $N_{\rm{p}}$ also apply to
the signal and idler modes when the expressions are evaluated at the
appropriate value of $\omega$. We follow Ref. Yu _et al._ (2021) in our
calculation of the nonlinear mode frequency shifts.
Variable | Description (units) | LLE Calculation
---|---|---
$P_{\rm{sig}}$ | Output signal power (W) | $\kappa_{\rm{c}}\hbar\omega_{\rm{s}}|\tilde{a}|^{2}_{\omega=\omega_{\rm{s}}}$
$\phi_{\rm{p}}$ | Pump phase (rad) | $\angle\tilde{a}_{\omega=\omega_{\rm{0}}}$
$N_{\rm{p}}$ | Pump mode nonlinear frequency shift ($\kappa(0)/2\pi$) | $\frac{1}{\kappa(0)}\left(\frac{\mathcal{F}(g_{\rm{0}}|a|^{2}a)}{\tilde{a}}\right)_{\omega=\omega_{\rm{0}}}$
$P_{\rm{out}}$ | Pump laser output power (W) | $\hbar\omega_{\rm{p}}|\sqrt{\kappa_{\rm{c}}(0)}\tilde{a}_{\omega=\omega_{\rm{0}}}-\sqrt{\frac{P_{\rm{in}}}{\hbar\omega_{\rm{p}}}}|^{2}$
Table 1: List and descriptions of important variables derived from the LLE.
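As an illustration, the first row of Table 1 can be evaluated directly from a simulated spectrum. The following minimal Python sketch (ours, not the simulation code) assumes a discrete spectrum `a_tilde` normalized so that $|\tilde{a}_{\mu}|^{2}$ is the photon number in mode $\mu$; the variable names and signal index are illustrative.

```python
import numpy as np

# Minimal sketch: evaluating Table 1 quantities from a simulated LLE
# spectrum. `a_tilde` is assumed normalized so that |a_tilde[mu]|**2 is
# the photon number in mode mu; `mu_s` marks the signal mode index.
hbar = 1.054571817e-34  # reduced Planck constant (J s)

def signal_power(a_tilde, mu_s, omega_s, kappa_c):
    """P_sig = kappa_c * hbar * omega_s * |a_tilde[mu_s]|^2 (Table 1, row 1)."""
    return kappa_c * hbar * omega_s * np.abs(a_tilde[mu_s]) ** 2

def conversion_efficiency(p_sig, p_in):
    """CE = P_sig / P_in."""
    return p_sig / p_in
```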
## II Derivation of highest maximum $CE$ contour
Next, we seek to derive the result, stated in the main text, that the
contour of highest maximum $CE$ in the TMA is given by $X=8\delta$. Our
starting point is the following set of coupled equations Sayson _et al._
(2019):
$\displaystyle\frac{dA_{\rm{p}}}{dt}$
$\displaystyle=-(1+i\alpha)A_{\rm{p}}+i(|A_{\rm{p}}|^{2}+2|A_{\rm{s}}|^{2}+2|A_{\rm{i}}|^{2})A_{\rm{p}}+2iA_{\rm{p}}^{*}A_{\rm{s}}A_{\rm{i}}+\sqrt{X}$
(6) $\displaystyle\frac{dA_{\rm{s}}}{dt}$
$\displaystyle=-(1+i\alpha+i\delta)A_{\rm{s}}+i(|A_{\rm{s}}|^{2}+2|A_{\rm{p}}|^{2}+2|A_{\rm{i}}|^{2})A_{\rm{s}}+iA_{\rm{i}}^{*}A_{\rm{p}}^{2}$
(7) $\displaystyle\frac{dA_{\rm{i}}}{dt}$
$\displaystyle=-(1+i\alpha+i\delta)A_{\rm{i}}+i(|A_{\rm{i}}|^{2}+2|A_{\rm{p}}|^{2}+2|A_{\rm{s}}|^{2})A_{\rm{i}}+iA_{\rm{s}}^{*}A_{\rm{p}}^{2},$
(8)
where $A_{\rm{p(s,i)}}=\sqrt{I_{\rm{p(s,i)}}}e^{-i\phi_{\rm{p(s,i)}}}$ denotes
the pump (signal, idler) field, and $I_{\rm{p(s,i)}}=|A_{\rm{p(s,i)}}|^{2}$ is
proportional to the pump (signal, idler) intraresonator photon number. If we
substitute the field expressions into the above equations, we obtain six
coupled equations for the real variables $I_{\rm{p(s,i)}}$ and
$\phi_{\rm{p(s,i)}}$,
$\displaystyle\frac{dI_{\rm{p}}}{dt}$
$\displaystyle=-2I_{\rm{p}}-4I_{\rm{p}}\sqrt{I_{\rm{s}}I_{\rm{i}}}\textrm{sin}(\Delta\phi)+2\sqrt{I_{\rm{p}}X}\textrm{cos}(\phi_{\rm{p}})$
(9) $\displaystyle\frac{dI_{\rm{s}}}{dt}$
$\displaystyle=-2I_{\rm{s}}+2I_{\rm{p}}\sqrt{I_{\rm{s}}I_{\rm{i}}}\textrm{sin}(\Delta\phi)$
(10) $\displaystyle\frac{dI_{\rm{i}}}{dt}$
$\displaystyle=-2I_{\rm{i}}+2I_{\rm{p}}\sqrt{I_{\rm{s}}I_{\rm{i}}}\textrm{sin}(\Delta\phi)$
(11) $\displaystyle\frac{d\phi_{\rm{p}}}{dt}$
$\displaystyle=I_{\rm{p}}+2I_{\rm{s}}+2I_{\rm{i}}+2\sqrt{I_{\rm{s}}I_{\rm{i}}}\textrm{cos}(\Delta\phi)-\alpha-\sqrt{\frac{X}{I_{\rm{p}}}}\textrm{sin}(\phi_{\rm{p}})$
(12) $\displaystyle\frac{d\phi_{\rm{s}}}{dt}$
$\displaystyle=I_{\rm{s}}+2I_{\rm{p}}+2I_{\rm{i}}+I_{\rm{p}}\sqrt{\frac{I_{\rm{i}}}{I_{\rm{s}}}}\textrm{cos}(\Delta\phi)-\alpha-\delta$
(13) $\displaystyle\frac{d\phi_{\rm{i}}}{dt}$
$\displaystyle=I_{\rm{i}}+2I_{\rm{p}}+2I_{\rm{s}}+I_{\rm{p}}\sqrt{\frac{I_{\rm{s}}}{I_{\rm{i}}}}\textrm{cos}(\Delta\phi)-\alpha-\delta,$
(14)
where $\Delta\phi=2\phi_{\rm{p}}-\phi_{\rm{s}}-\phi_{\rm{i}}$. We next reduce
this system to four equations by first noting that, from the symmetry of Eqs.
7 and 8, $I_{\rm{s}}=I_{\rm{i}}$. Therefore, we define
$M=I_{\rm{s}}=I_{\rm{i}}$. Moreover, momentum conservation implies that
$\frac{d}{dt}(\phi_{\rm{s}}+\phi_{\rm{i}})=0$. Hence, for steady state
conditions we obtain
$\displaystyle I_{\rm{p}}$
$\displaystyle=-2I_{\rm{p}}M\textrm{sin}(\Delta\phi)+\sqrt{I_{\rm{p}}X}\textrm{cos}(\phi_{\rm{p}})$
(15) $\displaystyle 1$ $\displaystyle=I_{\rm{p}}\textrm{sin}(\Delta\phi)$ (16)
$\displaystyle\sqrt{\frac{X}{I_{\rm{p}}}}\textrm{sin}(\phi_{\rm{p}})$
$\displaystyle=I_{\rm{p}}+4M+2M\textrm{cos}(\Delta\phi)-\alpha$ (17)
$\displaystyle I_{\rm{p}}\textrm{cos}(\Delta\phi)$
$\displaystyle=\alpha-3M-2I_{\rm{p}}+\delta.$ (18)
Equations 15-18 are still sufficiently general to study the stationary
solutions. However, we proceed to simplify them via two ansatzes associated
with a maximally efficient $\mu$OPO. Specifically, we set $\phi_{\rm{p}}=0$,
which corresponds to the pump laser being on resonance, and
$M=\frac{I_{\rm{p}}}{2}$. The validity of these assumptions is confirmed by
our numerical results, but they are also physically intuitive. For instance,
because the FWM process that drives $\mu$OPO
($2\omega_{\rm{p}}\rightarrow\omega_{\rm{s}}+\omega_{\rm{i}}$) is reversible,
one expects the intraresonator photons to be evenly distributed between the
pump mode and sideband pair.
After inserting our two ansatzes into Eqs. 15-18, we combine Eqs. 15 and 16 to
obtain $I_{\rm{p}}=X/4$. Then, Eqs. 17 and 18 are combined to obtain
$\frac{I_{\rm{p}}}{2}=\delta$. Insertion of the former into the latter yields
the desired result, $X=8\delta$.
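The two combinations used above are simple enough to verify symbolically. The following sympy sketch (ours, with illustrative symbol names) imposes the ansatzes $\phi_{\rm{p}}=0$ and $M=I_{\rm{p}}/2$ and recovers $I_{\rm{p}}=X/4$ and $I_{\rm{p}}=2\delta$, hence $X=8\delta$.

```python
import sympy as sp

# Sketch verifying the X = 8*delta contour from Eqs. 15-18 under the
# ansatzes phi_p = 0 and M = I_p/2 (symbol names are illustrative).
Ip, X, delta, alpha, c = sp.symbols('I_p X delta alpha c', positive=True)
M = Ip / 2
sin_dphi = 1 / Ip  # Eq. 16

# Eq. 15 with cos(phi_p) = 1 reduces to I_p = -I_p + sqrt(I_p*X):
print(sp.solve(sp.Eq(Ip, -2*Ip*M*sin_dphi + sp.sqrt(Ip*X)), Ip))  # -> [X/4]

# Eqs. 17 and 18 with sin(phi_p) = 0 and cos(Delta_phi) = c give I_p = 2*delta:
eq17 = sp.Eq(0, Ip + 4*M + 2*M*c - alpha)
eq18 = sp.Eq(Ip*c, alpha - 3*M - 2*Ip + delta)
print(sp.solve([eq17, eq18], [alpha, Ip]))  # -> I_p = 2*delta

# Combining I_p = X/4 with I_p = 2*delta yields X = 8*delta.
```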
## III Derivation of XPM-MI gain
Finally, in this section we derive Eq. 4 from the main text. We follow the
classical procedure of linearizing a set of coupled equations around a steady-
state solution and then allowing plane-wave perturbations to grow
exponentially. We start from the equations of motion for the fields
$A_{\rm{\mu}}$ and $A_{\rm{-\mu}}$ when the pump, signal, and idler modes are
occupied,
$\displaystyle\frac{dA_{\rm{\mu}}}{dt}$
$\displaystyle=i\beta\mu^{2}A_{\rm{\mu}}+i(2I_{\rm{p}}+2I_{\rm{s}}+2I_{\rm{i}}+2I_{\rm{-\mu}}+I_{\rm{\mu}})A_{\rm{\mu}}-(1+i\alpha)A_{\rm{\mu}}+iA_{\rm{\mu}}^{*}A_{\rm{p}}^{2}+iA_{\rm{\mu}}^{*}A_{\rm{s}}A_{\rm{i}}$
(19) $\displaystyle\frac{dA_{\rm{-\mu}}}{dt}$
$\displaystyle=i\beta\mu^{2}A_{\rm{-\mu}}+i(2I_{\rm{p}}+2I_{\rm{s}}+2I_{\rm{i}}+I_{\rm{-\mu}}+2I_{\rm{\mu}})A_{\rm{-\mu}}-(1+i\alpha)A_{\rm{-\mu}}+iA_{\rm{-\mu}}^{*}A_{\rm{p}}^{2}+iA_{\rm{-\mu}}^{*}A_{\rm{s}}A_{\rm{i}},$
(20)
where $\beta$ quantifies the dispersion. Next, we introduce the perturbation
$\delta f_{\rm{\mu}}(t)$, so that the equations of motion for the
perturbations, after simplifying, read
$\displaystyle\frac{d\delta f_{\rm{\mu}}}{dt}$
$\displaystyle=(ik_{\rm{\mu}}-1)\delta
f_{\rm{\mu}}+i(A_{\rm{p}}^{2}+A_{\rm{s}}A_{\rm{i}})\delta f_{\rm{\mu}}^{*}$
(21) $\displaystyle\frac{d\delta f_{\rm{-\mu}}^{*}}{dt}$
$\displaystyle=-(ik_{\rm{\mu}}+1)\delta
f_{\rm{-\mu}}^{*}-i(A_{\rm{p}}^{*2}+A_{\rm{s}}^{*}A_{\rm{i}}^{*})\delta
f_{\rm{\mu}},$ (22)
where $k_{\rm{\mu}}=\beta\mu^{2}+2I_{\rm{p}}+4I_{\rm{s}}-\alpha$, and we have
again assumed $I_{\rm{s}}=I_{\rm{i}}$. If we set $\delta
f_{\rm{\mu}}(t)=ae^{\lambda_{\rm{\mu}}t}$, then we obtain a set of linear
homogeneous equations with eigenvalue $\lambda_{\rm{\mu}}$. Solving the
eigenvalue problem yields
$\lambda_{\rm{\mu}}=-1\pm\sqrt{I_{\rm{p}}^{2}+I_{\rm{s}}^{2}+2I_{\rm{p}}I_{\rm{s}}\textrm{cos}(\Delta\phi)-k_{\rm{\mu}}^{2}},$
(23)
which is the desired result. Notably, the original set of coupled equations
are approximations in several ways. For instance, we only consider one
sideband pair, but the full XPM-MI dynamics involves the collective
interactions of multiple sideband pairs simultaneously. In addition, we
neglect FWM processes that do not involve both MI sidebands (e.g. the
interaction $A_{\rm{\mu}}^{*}A_{\rm{s}}^{*}A_{\rm{p}}A_{\rm{s+\mu}}$). These
approximations may explain the small discrepancies between our theory and LLE
simulations.
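For orientation, Eq. 23 is easy to scan numerically. The sketch below evaluates the growth rate across relative mode number $\mu$; all parameter values are illustrative placeholders, not matched to any device in the text.

```python
import numpy as np

# Sketch: scanning the XPM-MI growth rate of Eq. 23 over mode number mu.
# All parameter values below are illustrative placeholders.
beta, alpha = -0.02, 3.0            # normalized dispersion and detuning
I_p, I_s, dphi = 2.0, 1.0, np.pi/3  # intensities and Delta-phi

mu = np.arange(1, 200)
k_mu = beta * mu**2 + 2*I_p + 4*I_s - alpha
radicand = I_p**2 + I_s**2 + 2*I_p*I_s*np.cos(dphi) - k_mu**2
growth = -1 + np.sqrt(np.maximum(radicand, 0.0))  # Re(lambda_mu), '+' branch
print("MI-unstable modes:", mu[growth > 0])
```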
## References
* Liang and Bowers (2010) D. Liang and J. E. Bowers, Nature Photonics 4, 511 (2010).
* Agrell _et al._ (2016) E. Agrell, M. Karlsson, A. Chraplyvy, D. J. Richardson, P. M. Krummrich, P. Winzer, K. Roberts, J. K. Fischer, S. J. Savory, B. J. Eggleton, _et al._ , Journal of Optics 18, 063002 (2016).
* Sipahigil _et al._ (2016) A. Sipahigil, R. E. Evans, D. D. Sukachev, M. J. Burek, J. Borregaard, M. K. Bhaskar, C. T. Nguyen, J. L. Pacheco, H. A. Atikian, C. Meuwly, _et al._ , Science 354, 847 (2016).
* Elshaari _et al._ (2020) A. W. Elshaari, W. Pernice, K. Srinivasan, O. Benson, and V. Zwiller, Nature Photonics 14, 285 (2020).
* Jin _et al._ (2021) W. Jin, Q.-F. Yang, L. Chang, B. Shen, H. Wang, M. A. Leal, L. Wu, M. Gao, A. Feshali, M. Paniccia, _et al._ , Nature Photonics 15, 346 (2021).
* Vahala (2003) K. J. Vahala, Nature 424, 839 (2003).
* Yang _et al._ (2018) K. Y. Yang, D. Y. Oh, S. H. Lee, Q.-F. Yang, X. Yi, B. Shen, H. Wang, and K. Vahala, Nature Photonics 12, 297 (2018).
* Kippenberg _et al._ (2018) T. J. Kippenberg, A. L. Gaeta, M. Lipson, and M. L. Gorodetsky, Science 361 (2018).
* Liu _et al._ (2018) X. Liu, C. Sun, B. Xiong, L. Wang, J. Wang, Y. Han, Z. Hao, H. Li, Y. Luo, J. Yan, _et al._ , ACS Photonics 5, 1943 (2018).
* Loh _et al._ (2015) W. Loh, A. A. Green, F. N. Baynes, D. C. Cole, F. J. Quinlan, H. Lee, K. J. Vahala, S. B. Papp, and S. A. Diddams, Optica 2, 225 (2015).
* Bruch _et al._ (2019) A. W. Bruch, X. Liu, J. B. Surya, C.-L. Zou, and H. X. Tang, Optica 6, 1361 (2019).
* Lu _et al._ (2019) X. Lu, G. Moille, A. Singh, Q. Li, D. A. Westly, A. Rao, S.-P. Yu, T. C. Briles, S. B. Papp, and K. Srinivasan, Optica 6, 1535 (2019).
* Spencer _et al._ (2018) D. T. Spencer, T. Drake, T. C. Briles, J. Stone, L. C. Sinclair, C. Fredrick, Q. Li, D. Westly, B. R. Ilic, A. Bluestone, _et al._ , Nature 557, 81 (2018).
* Marin-Palomo _et al._ (2017) P. Marin-Palomo, J. N. Kemal, M. Karpov, A. Kordts, J. Pfeifle, M. H. Pfeiffer, P. Trocha, S. Wolf, V. Brasch, M. H. Anderson, _et al._ , Nature 546, 274 (2017).
* Lai _et al._ (2020) Y.-H. Lai, M.-G. Suh, Y.-K. Lu, B. Shen, Q.-F. Yang, H. Wang, J. Li, S. H. Lee, K. Y. Yang, and K. Vahala, Nature Photonics 14, 345 (2020).
* Newman _et al._ (2019) Z. L. Newman, V. Maurice, T. Drake, J. R. Stone, T. C. Briles, D. T. Spencer, C. Fredrick, Q. Li, D. Westly, B. R. Ilic, _et al._ , Optica 6, 680 (2019).
* Vodopyanov _et al._ (2000) K. Vodopyanov, F. Ganikhanov, J. Maffetone, I. Zwieback, and W. Ruderman, Optics Letters 25, 841 (2000).
* Mieth _et al._ (2014) S. Mieth, A. Henderson, and T. Halfmann, Optics Express 22, 11182 (2014).
* Boyd (2020) R. W. Boyd, _Nonlinear optics_ (Academic press, 2020).
* Lu and Srinivasan (2021) X. Lu and K. Srinivasan, Physical Review Applied 16, 014027 (2021).
* Sayson _et al._ (2019) N. L. B. Sayson, T. Bi, V. Ng, H. Pham, L. S. Trainor, H. G. Schwefel, S. Coen, M. Erkintalo, and S. G. Murdoch, Nature Photonics 13, 701 (2019).
* Tang _et al._ (2020) Y. Tang, Z. Gong, X. Liu, and H. X. Tang, Optics Letters 45, 1124 (2020).
* Lu _et al._ (2020) X. Lu, G. Moille, A. Rao, D. A. Westly, and K. Srinivasan, Optica 7, 1417 (2020).
* Domeneguetti _et al._ (2021) R. R. Domeneguetti, Y. Zhao, X. Ji, M. Martinelli, M. Lipson, A. L. Gaeta, and P. Nussenzveig, Optica 8, 316 (2021).
* Marty _et al._ (2021) G. Marty, S. Combrié, F. Raineri, and A. De Rossi, Nature Photonics 15, 53 (2021).
* Hansson _et al._ (2013) T. Hansson, D. Modotto, and S. Wabnitz, Physical Review A 88, 023819 (2013).
* Coen _et al._ (2013) S. Coen, H. G. Randle, T. Sylvestre, and M. Erkintalo, Optics Letters 38, 37 (2013).
* Chembo and Menyuk (2013) Y. K. Chembo and C. R. Menyuk, Physical Review A 87, 053852 (2013).
* Kippenberg _et al._ (2004) T. Kippenberg, S. Spillane, and K. Vahala, Physical Review Letters 93, 083904 (2004).
* Note (1) Finite element simulations performed using Comsol Multiphysics, which is identified here to foster understanding, without implying recommendation or endorsement by NIST.
* Narducci _et al._ (1986) L. Narducci, J. Tredicce, L. Lugiato, N. Abraham, and D. Bandy, Physical Review A 33, 1842 (1986).
* Gong _et al._ (2007) M. Gong, Y. Yuan, C. Li, P. Yan, H. Zhang, and S. Liao, Optics Express 15, 3236 (2007).
* Agrawal (1987) G. P. Agrawal, Physical Review Letters 59, 880 (1987).
* Fujii _et al._ (2019) S. Fujii, S. Tanaka, M. Fuchida, H. Amano, Y. Hayama, R. Suzuki, Y. Kakinuma, and T. Tanabe, Optics Letters 44, 3146 (2019).
* Moille _et al._ (2019) G. Moille, Q. Li, T. C. Briles, S.-P. Yu, T. Drake, X. Lu, A. Rao, D. Westly, S. B. Papp, and K. Srinivasan, Optics Letters 44, 4737 (2019).
* Spencer _et al._ (2014) D. T. Spencer, J. F. Bauters, M. J. Heck, and J. E. Bowers, Optica 1, 153 (2014).
* Graziosi _et al._ (2018) T. Graziosi, S. Mi, M. Kiss, and N. Quack, in _2018 International Conference on Optical MEMS and Nanophotonics (OMN)_ (IEEE, 2018) pp. 1–5.
* Yu _et al._ (2021) S.-P. Yu, D. C. Cole, H. Jung, G. T. Moille, K. Srinivasan, and S. B. Papp, Nature Photonics 15, 461 (2021).
|
# AutoProtoNet: Interpretability for Prototypical Networks
Pedro Sandoval-Segura
Department of Computer Science
University of Maryland
<EMAIL_ADDRESS>
Wallace Lawson
Navy Center for Applied Research in Artificial Intelligence
Naval Research Laboratory
<EMAIL_ADDRESS>
###### Abstract
In meta-learning approaches, it is difficult for a practitioner to make sense
of what kind of representations the model employs. Without this ability, it
can be difficult to both understand what the model knows as well as to make
meaningful corrections. To address these challenges, we introduce
AutoProtoNet, which builds interpretability into Prototypical Networks by
training an embedding space suitable for reconstructing inputs, while
remaining convenient for few-shot learning. We demonstrate how points in this
embedding space can be visualized and used to understand class
representations. We also devise a prototype refinement method, which allows a
human to debug inadequate classification parameters. We use this debugging
technique on a custom classification task and find that it leads to accuracy
improvements on a validation set consisting of in-the-wild images. We advocate
for interpretability in meta-learning approaches and show that there are
interactive ways for a human to enhance meta-learning algorithms.
## 1 Introduction
DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.
It is expensive and time-consuming to collect data to train current state-of-
the-art image classification systems [13]. When a classification algorithm is
deployed, new classes or labels cannot be easily added without incurring new
costs related to re-training the model [1][2]. Meta-learning approaches for
few-shot learning solve both these problems by training networks that learn
quickly from little data with computationally inexpensive fine-tuning
[23][20][15]. Although these methods perform well on benchmark few-shot
image classification tasks, they are not interpretable; a human may
have no way of knowing why a certain classification decision was made.
Additionally, the lack of interpretability limits any kind of debugging of
network representations. In this work, we take a step toward the development
of a meta-learning algorithm which can learn in a few-shot setting, can handle
new classes at test time, is interpretable enough for a human to understand
how the model makes decisions, and which can be debugged in a simple way.
We revisit Prototypical Networks (ProtoNets) [20] as the focus of our study.
ProtoNets are based on a simple idea: there exists an embedding space where
images cluster around a single “prototype” for each class. Given the
simplicity of this few-shot learning approach, it makes sense to ask: what
does a prototype look like? And, have we learned an adequate prototype
representation?
The outcomes of our study can be summarized as follows:
* We introduce AutoProtoNet, which merges ideas from autoencoders and Prototypical Networks, to perform few-shot image classification and prototype reconstruction.
* We use AutoProtoNet to visualize prototypes and find that they are comparable in quality to those of an autoencoder. AutoProtoNet also remains accurate on few-shot image classification benchmarks.
* We devise a prototype refinement method, which can be used to debug inadequate prototypes, and we validate the performance of the resulting model using a novel validation set of in-the-wild images.
Our goal in this work is to elucidate the benefits of learning embeddings that
can be visualized and interpreted by humans. To the best of our knowledge,
there is no meta-learning approach that allows for a human to play a role in
the fine-tuning of the base model.
## 2 Related Work
### 2.1 Meta-learning and Prototypical Networks
Before meta-learning, transfer learning was used to handle few-shot problems.
In transfer learning, a feature extractor is trained on a large dataset, then
fine-tuned for new tasks [2]. However, transfer learning has some drawbacks.
For example, adding a new class may require re-training the model and, in the
few-shot setting, overfitting few example images is possible.
Meta-learning algorithms aim to learn a “base” model that can be quickly fine-
tuned for a new task. The base model is trained using a set of training tasks
$\\{\mathcal{T}_{i}\\}$, sampled from some task distribution. Each task
consists of _support_ data, $\mathcal{T}_{i}^{s}$, and _query_ data,
$\mathcal{T}_{i}^{q}$. Support data is used to fine-tune the model, while
query data is used to evaluate the resulting model. Practically speaking, each
task is an image classification problem involving only a small number of
classes. The number of examples per class in the support set is called the
_shot_ , and the number of classes is called the _way_. For example, in 5-way
1-shot learning, we are given 1 example for each of the 5 classes to use for
fine-tuning.
Following the meta-learning framework presented in [8], Algorithm 1 can be
used as a general way to understand both metric-learning methods [23] [20] and
gradient-based methods like MAML [6].
Algorithm 1 The meta-learning framework
Input: Base model, $F_{\theta}$
Input: Fine-tuning algorithm, $A$
Input: Learning rate, $\gamma$
Input: Distribution over tasks, $p(\mathcal{T})$
1:Initialize $\theta$, the weights of $F$
2:while not done do
3: Sample batch of tasks $\\{\mathcal{T}_{i}\\}_{i=1}^{n}$, where
$\mathcal{T}_{i}\sim p(\mathcal{T})$ and
$\mathcal{T}_{i}=(\mathcal{T}_{i}^{s},\mathcal{T}_{i}^{q})$
4: for i=1,…,n do
5: $\theta_{i}\leftarrow A(\theta,\mathcal{T}_{i}^{s})$ $\triangleright$ Fine-
tune model on $\mathcal{T}_{i}^{s}$ (inner loop)
6:
$g_{i}\leftarrow\nabla_{\theta}\mathcal{L}(F_{\theta_{i}},\mathcal{T}_{i}^{q})$
7: end for
8: $\theta\leftarrow\theta-\frac{\gamma}{n}\sum_{i}g_{i}$ $\triangleright$
Update base model parameters (outer loop)
9:end while
For ProtoNets [20], the base model
$F_{\theta}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{M}$ is an embedding network
which takes an image $x\in\mathbb{R}^{D}$ as input and outputs an embedding
vector of dimension $M$. Suppose, for example, we have a $K$-way task
$\mathcal{T}_{i}=(\mathcal{T}_{i}^{s},\mathcal{T}_{i}^{q})$ where
$\mathcal{T}_{i}^{s}=\\{(x_{i,1},y_{i,1}),(x_{i,2},y_{i,2}),...,(x_{i,N},y_{i,N})\\}$,
and where $y_{i,j}\in\\{1,...,K\\}$. Additionally, let
$S_{k}\subset\mathcal{T}_{i}^{s}$ denote the set of support examples of class
$k$. Then, a prototypical network computes a prototype $p_{k}$ for each class
$k$ by computing a class-wise mean of embedded support examples:
$p_{k}=\frac{1}{|S_{k}|}\sum_{(x,y)\in S_{k}}F_{\theta}(x)$ (1)
Thus, in the case of ProtoNets, the fine-tuning algorithm $A$ does not update
model parameters $\theta$, but instead it computes a set of prototypes which
the base model will use to classify query data. We can think of $A$ as a
function taking both embedding network parameters $\theta$ and support data
$\mathcal{T}_{i}^{s}$ and returning a tuple $\theta_{i}$ consisting of a set
of prototypes and an unchanged set of model parameters; i.e.,
$A(\theta,\mathcal{T}_{i}^{s})=(\\{p_{k}\\}_{k=1}^{K},\theta)=\theta_{i}$. In
this way, $F_{\theta_{i}}$ in Algorithm 1 refers to using the base model
parameters $\theta$ and the set of prototypes $\\{p_{k}\\}_{k=1}^{K}$ during
inference. Given a distance function
$d:\mathbb{R}^{M}\times\mathbb{R}^{M}\rightarrow[0,\infty)$ and a query point
$x$, a ProtoNet produces a distribution over classes based on a softmax over
distances to the prototypes in embedding space:
$p_{\theta}(y=k|x)=\frac{\exp(-d(F_{\theta}(x),p_{k}))}{\sum_{k^{\prime}}\exp(-d(F_{\theta}(x),p_{k^{\prime}}))}$
(2)
Training proceeds by minimizing the negative log-likelihood
$\mathcal{L}(\theta)=-\log p_{\theta}(y=k|x)$ of the true class $k$ using SGD.
Unfortunately, ProtoNet does not provide a way to understand the embedding
space or visualize $p_{k}$ – a problem we directly address in this work.
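To make Eqs. 1 and 2 concrete, here is a minimal PyTorch sketch of the prototype computation and classification rule (ours, not the authors' code); `embed` stands for $F_{\theta}$ and is assumed to return one $M$-dimensional embedding per input.

```python
import torch

# Minimal sketch of the ProtoNet rules. `embed` plays the role of F_theta
# and is assumed to return an (N, M) tensor of embeddings.

def compute_prototypes(embed, support_x, support_y, num_classes):
    """Class-wise mean of embedded support examples (Eq. 1)."""
    z = embed(support_x)
    return torch.stack([z[support_y == k].mean(dim=0)
                        for k in range(num_classes)])

def classify(embed, prototypes, query_x):
    """Distribution over classes: softmax over negative squared Euclidean
    distances to the prototypes (Eq. 2)."""
    d = torch.cdist(embed(query_x), prototypes) ** 2
    return torch.softmax(-d, dim=1)
```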
### 2.2 Understanding Meta-learning Approaches
Investigating the ability of meta-learning methods to adapt to new tasks has
been the subject of numerous studies. The success of meta-learning approaches
certainly seems to suggest that the representations learned by meta-learning
must be different than those learned through standard training [9]. Goldblum
et al. [9] find that meta-learned feature extractors outperform classically
trained models of the same architecture and suggest that meta-learned features
are qualitatively different from conventional features. While work has been
done to understand how the meta-learning networks train [10][7], there has
been little to no focus on developing tools to interpret the meta-learned
models.
### 2.3 Interpretability in Convolutional Models
In safety or security-critical applications, understanding why a
classification system made a certain prediction is important. Just because a
classification system is highly accurate does not mean the network has
learned the right kinds of features [11]. We believe that a system that can
demonstrate its logic semantically or visually is more likely to be trusted
and used. Being that a ProtoNet is primarily a convolutional neural network,
it is appropriate to understand progress on interpretability of convolutional
neural networks (CNN).
There are many research branches within the umbrella of CNN interpretability
including visualizations of intermediate network layers [25][16][19][21],
diagnosis of CNN representations [27][26], and building explainable models
[28]. In contrast to works which focus their attention on CNN layers and
activations, we take a more specific approach in visualizing embedding space
for ProtoNets.
Zhang et al. [28] propose a compelling method of modifying convolutional
layers so that each filter learns to represent a particular object part, thus
allowing for each filter to correspond to a semantically meaningful image
feature. We believe there could be interesting work incorporating this
technique into meta-learning approaches, but it is not appropriate for a shallow
embedding network like the one we employ for ProtoNets.
### 2.4 Generative Models
Work on Variational Prototyping Encoder (VPE) [12] is most similar to ours in
that a meta-task is used to learn an embedding space suitable for both few-
shot learning and unseen data representation. In contrast, we do not focus on
the image translation task from real images to prototypes and instead focus
our attention on visualizing prototypes for interpretability and refinement.
There are also a number of works which investigate connections between
autoencoder architectures and meta-learning, but which are not directly
applicable for interpretability of few-shot image classification. For example,
Wu et al. [24] propose the Meta-Learning Autoencoder (MeLA) framework which
learns a recognition and generative model to transform a single-task model
into one that can quickly adapt to new tasks using few examples. However,
their framework is meant for the more general understanding of _tasks_ like
physical state estimation and video prediction, as opposed to the image
classification tasks which we focus on. Similarly, Epstein et al. [5] develop
a meta-learning framework consisting of joint autoencoders for the purpose of
learning multiple tasks simultaneously, but this approach is tailored more for
the field of multi-task learning.
## 3 Algorithm
Our interpretability algorithm takes advantage of the simplicity of the
ProtoNet classification method. In particular, a ProtoNet classifies query
data according to the class of the prototype which the query data’s embedding
is nearest to, typically in Euclidean space. This classification method raises
an obvious question: what does a prototype look like? To answer this question,
we extend ProtoNets with a decoder to reconstruct images from embeddings.
### 3.1 Data
The CIFAR-FS dataset [3] is a recent few-shot image classification benchmark
consisting of all $100$ classes from CIFAR-100 [14]. Classes are randomly
split into 64, 16, and 20 for meta-training, meta-validation, and meta-testing
respectively. Every class contains $600$ images of size $32\times 32$.
The _mini_ ImageNet dataset [23] is another standard benchmark for few-shot
image classification. It consists of 100 randomly chosen classes from ILSVRC
2012 [4], which are split into 64, 16, and 20 classes for meta-training, meta-
validation, and meta-testing respectively. For every class, there are $600$
images of size $84\times 84$. We adopt the commonly-used Ravi and Larochelle
split proposed in [18].
### 3.2 Architecture
AutoProtoNet consists of an encoder-decoder architecture which compresses the
input to produce an embedding which must be reconstructed by the decoder.
There are 4 sequential convolution blocks in the encoder and 4 sequential
transpose convolution blocks in the decoder. The details of these blocks can
be found in Table 2 of Appendix B. A forward pass through the model is shown
in Figure 1.
Output padding is used in the second transpose convolution block of the
decoder to ensure that the output size of the final transpose convolution
block matches the input $84\times 84$ dimensions of _mini_ ImageNet images,
but no output padding modifications are necessary for CIFAR-FS images.
Our architectural design choices imply that an $84\times 84$ _mini_ ImageNet
image is embedded as a $1600$-dimensional vector, while a $32\times 32$ CIFAR-FS
image is embedded as a $256$-dimensional vector.
Figure 1: Visualization of the forward pass through AutoProtoNet.
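A minimal PyTorch sketch of these blocks is given below. The activation and pooling details are our assumptions (Table 2 of Appendix B lists the layer types), chosen so that an $84\times 84$ input produces the $64\times 5\times 5=1600$-dimensional embedding described above.

```python
import torch.nn as nn

# Sketch of the encoder/decoder of Section 3.2 (assumptions: ReLU
# activations and 2x2 max pooling, chosen to reproduce the stated
# 1600-dimensional embedding for 84x84 inputs).
def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),  # 84 -> 42 -> 21 -> 10 -> 5
    )

def deconv_block(in_ch, out_ch, output_padding=0):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2,
                           output_padding=output_padding),
        nn.BatchNorm2d(out_ch),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

encoder = nn.Sequential(conv_block(3, 64), conv_block(64, 64),
                        conv_block(64, 64), conv_block(64, 64))
# Output padding in the second transpose block restores 84x84, as in the text.
decoder = nn.Sequential(deconv_block(64, 64),
                        deconv_block(64, 64, output_padding=1),
                        deconv_block(64, 64), deconv_block(64, 3))
```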
### 3.3 Training
Algorithm 2 AutoProtoNet Meta-Learning
Input: Encoder and decoder networks, $F_{\theta}$ and $G_{\phi}$, where
$\psi=[\theta;\phi]$
Input: Fine-tuning algorithm, $A$
Input: Reconstruction loss weight, $\lambda$
Input: Learning rate, $\gamma$
Input: Distribution over tasks, $p(\mathcal{T})$
1:Initialize $\theta,\phi$, the weights of encoder and decoder
2:while not done do
3: Sample batch of tasks $\\{\mathcal{T}_{i}\\}_{i=1}^{n}$, where
$\mathcal{T}_{i}\sim p(\mathcal{T})$ and
$\mathcal{T}_{i}=(\mathcal{T}_{i}^{s},\mathcal{T}_{i}^{q})$
4: for i=1,…,n do
5: $\hat{\mathcal{T}_{i}}\leftarrow G_{\phi}(F_{\theta}(\mathcal{T}_{i}))$
$\triangleright$ Reconstruct task data
6:
$\mathcal{L}_{R}\leftarrow\mathrm{MSE}(\mathcal{T}_{i},\hat{\mathcal{T}_{i}})$
$\triangleright$ Compute reconstruction loss
7: $\theta_{i}\leftarrow A(\theta,\mathcal{T}_{i}^{s})$ $\triangleright$
Compute prototypes (inner loop)
8: $\mathcal{L}_{C}\leftarrow\mathrm{NLL}(F_{\theta_{i}},\mathcal{T}_{i}^{q})$
$\triangleright$ Compute classification loss
9: $\mathcal{L}\leftarrow\mathcal{L}_{C}+\lambda\mathcal{L}_{R}$
10: $g_{i}\leftarrow\nabla_{\psi}\mathcal{L}$
11: end for
12: $\psi\leftarrow\psi-\frac{\gamma}{n}\sum_{i}g_{i}$ $\triangleright$ Update
base model parameters (outer loop)
13:end while
Training AutoProtoNet is not much different from training a ProtoNet. The main
difference is that we augment the meta-training loop with a reconstruction
loss to regularize the embedding space and make it suitable for image
reconstruction. We display the forward pass through AutoProtoNet in Figure 1
and adapt the meta-learning framework from Section 2.1 to describe the meta-
training of AutoProtoNet in Algorithm 2.
Our “base” model now consists of parameters $\psi$ which is a concatenation of
encoder network parameters $\theta$ and decoder network parameters $\phi$. In
Line 5 of Algorithm 2, we pass both support and query data from the current
task $\mathcal{T}_{i}$ through the encoder and decoder to produce a
reconstruction $\hat{\mathcal{T}_{i}}$. This reconstruction is then compared
to the original data using mean squared error (MSE) loss. The fine-tuning
algorithm in Line 7 of Algorithm 2 is identical to the description in Section
2.1, where $\theta_{i}=(\\{p_{k}\\}_{k=1}^{K},\theta)$ is a tuple consisting
of a set of prototypes for every class and the encoder network’s model
parameters. Both of these are used to compute the likelihood of the true
labels of our query data as in Equation 2, which is maximized by minimizing
the negative log-likelihood (NLL). Finally, the classification loss
$\mathcal{L}_{C}$ and the reconstruction loss $\mathcal{L}_{R}$ are summed so
they can be jointly optimized.
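Reusing the sketches above, one inner-loop step of Algorithm 2 might be written as follows; `lam` is the reconstruction weight $\lambda$, and flattening the encoder's feature maps before the prototype computation is our assumption.

```python
import torch
import torch.nn.functional as F

# Sketch of Lines 5-9 of Algorithm 2, reusing compute_prototypes/classify
# and encoder/decoder from the sketches above; names are illustrative.
def autoprotonet_loss(encoder, decoder, support_x, support_y,
                      query_x, query_y, num_classes, lam=1.0):
    task_x = torch.cat([support_x, query_x])
    loss_r = F.mse_loss(decoder(encoder(task_x)), task_x)   # Lines 5-6

    flat = lambda x: encoder(x).flatten(start_dim=1)        # embeddings as vectors
    protos = compute_prototypes(flat, support_x, support_y, num_classes)
    probs = classify(flat, protos, query_x)                 # Line 7
    # For numerical stability one would use log_softmax directly.
    loss_c = F.nll_loss(torch.log(probs), query_y)          # Line 8

    return loss_c + lam * loss_r                            # Line 9
```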
We meta-train ProtoNet and AutoProtoNet on both _mini_ ImageNet and CIFAR-FS.
To create a prototype reconstruction baseline, we also train two models which
make use of ILSVRC 2012 [4], which we refer to as ImageNet Autoencoder and
ImageNet AutoProtoNet. Note that because _mini_ ImageNet is a subset of ILSVRC
2012, the ImageNet models also provide insight into whether more data during
pretraining offers any benefit for meta-learning or prototype reconstructions.
All training was performed on a single NVIDIA Quadro P6000 from our internal
cluster. Training details for each model used in this work are described
below.
#### ProtoNet
Using Algorithm 1, we meta-train a standard ProtoNet for 30 epochs using SGD.
Our SGD optimizer uses Nesterov momentum of $0.9$, weight decay of $5\times
10^{-4}$, and a learning rate of $0.1$, which we decrease to $0.06$ after $20$
epochs.
#### AutoProtoNet
Using Algorithm 2, we meta-train an AutoProtoNet for 30 epochs using SGD. We
use the same SGD settings as in ProtoNet training. We use a reconstruction
loss weight $\lambda=1$. Following [20], both ProtoNet and AutoProtoNet models
were trained using 20-way 5-shot episodes, where each class contains $15$
query points per episode, for $30$ epochs.
#### ImageNet Autoencoder
We train an autoencoder of the same architecture as AutoProtoNet using only
mean squared error (MSE) loss on ILSVRC 2012 [4] for 20 epochs. We use the SGD
optimizer with Nesterov momentum of $0.9$, weight decay of $5\times 10^{-4}$,
and a learning rate of $0.1$, which we decrease by a factor of $10$ every $5$
epochs. To evaluate this model’s performance on benchmark few-shot image
classification datasets, we use only the encoder to produce embeddings
and obtain classification labels using the standard ProtoNet
classification rule.
#### ImageNet AutoProtoNet
We use the encoder and decoder weights from the ImageNet Autoencoder as a
starting point for the weights of an AutoProtoNet. All other training details
are identical to that of AutoProtoNet, which we meta-train using Algorithm 2.
The 5-way 5-shot test set accuracies of all models used in this work are shown
in Table 1. AutoProtoNet is able to maintain the same level of few-shot image
classification accuracy on benchmark datasets as a standard ProtoNet. While we
expected AutoProtoNet to have an advantage due to having to incorporate
features useful for reconstruction into embeddings, our results suggest that
these reconstruction features are not always useful. Given the additional
ILSVRC 2012 [4] data during pretraining, we also expected that ImageNet
AutoProtoNet would outperform all other models, but our test results
demonstrate that representations learned for image reconstruction are not too
helpful for few-shot image classification. Test set accuracies for ImageNet
Autoencoder underscore the point that an embedding space trained for only
reconstruction is by no means competitive for few-shot classification, though
it does achieve better than chance accuracy.
## 4 Experiments
### 4.1 Prototype Visualization
While a standard ProtoNet employs an intuitive nearest-neighbor classification
rule for query points, there is no intuitive way for a user to understand what
a prototype embedding represents. Prototypical embeddings are crucial to
understanding the decision boundaries of ProtoNets. The idea is that a
ProtoNet embeds similar images nearby in embedding space, but without a way to
visualize these embeddings, we argue that a human practitioner would be unable
to debug or improve their deployed model. AutoProtoNet addresses this issue by
learning an embedding space that is suitable for image reconstruction.
Figure 2 displays prototype visualizations given a validation support set from
_mini_ ImageNet and CIFAR-FS. The ImageNet Autoencoder (IA) and ImageNet
AutoProtoNet (IAP) were both pretrained on all of ILSVRC 2012 [4], and so
classes present in this validation support set are not novel classes because
_mini_ ImageNet is a subset of ILSVRC 2012. However, in the case of the
AutoProtoNet (AP), the classes in this validation support set are novel and
the synthesized prototype images remain qualitatively on-par with the models
trained with more data (such as ImageNet Autoencoder), suggesting that meta-
tasks during training were sufficient to regularize an embedding space
suitable for image synthesis. Analyzing the prototype reconstructions from
CIFAR-FS in Figure 2(b), we see that prototype visualizations are generally
too blurry to help a human determine whether the model has learned a
sufficient representation of a class. We believe part of the problem is the
low resolution and size of CIFAR-FS images.
(a) _mini_ ImageNet
(b) CIFAR-FS
Figure 2: Support sets for a 5-way 5-shot validation task of _mini_ ImageNet (a) and CIFAR-FS (b). The embeddings of every image within a class are averaged to form a prototype embedding which is then synthesized as an image by using the decoder of an ImageNet Autoencoder (IA), an ImageNet AutoProtoNet (IAP), and an AutoProtoNet (AP).
Table 1: 5-way 5-shot test set accuracies with 95% confidence intervals.
Model | _mini_ ImageNet | CIFAR-FS
---|---|---
ImageNet Autoencoder | $36.83\pm 0.48$% | $46.08\pm 0.58$%
ImageNet AutoProtoNet | $70.76\pm 0.51$% | $79.65\pm 0.52$%
ProtoNet | $70.20\pm 0.52$% | $80.31\pm 0.51$%
AutoProtoNet | $70.61\pm 0.52$% | $80.16\pm 0.52$%
### 4.2 Human-guided Prototype Refinement
(a) Support set and prototype visualizations
(b) New image and corresponding embedding
(c) Interpolating 10 steps from initial prototype to new image embedding
(d) New set of prototypes
Figure 3: Steps for human-guided prototype selection in a 5-way 1-shot task.
Step (a): a human chooses an initial prototype to refine. Step (b): a human
captures one additional image to guide prototype refinement. Step (c):
Interpolations between the initial prototype and the new image embedding
(index 9) are shown to the human and a new prototype selection is made. Step
(d): A new set of prototypes is established, with class 2 having been refined.
To highlight the benefits of an embedding space suitable for image
reconstruction, we designed an experiment to demonstrate how a human can
guide prototype selection at test-time using AutoProtoNet. Assuming the user
knows the kinds of images the model will encounter at inference time and given
the ability to capture one more image, could we refine an initial prototype to
achieve higher accuracy on the validation set?
#### Data Collection
Based on objects we had around the house, we chose to formulate a 5-way 1-shot
classification problem between “door knob”, “frying pan”, “light switch”,
“orange”, and “water bottle”. Note that “orange” and “frying pan” are classes
in the _mini_ ImageNet training split, but all other classes are novel.
Because we sought to demonstrate how one might use an AutoProtoNet in a real-
world setting, all $55$ images in this task are novel, in-the-wild images,
captured using an iPhone 12. Our support set consists of $5$ images ($1$ image
per class). Our validation set consists of $50$ images ($10$ images per class)
and can be found in Figure 4 of Appendix A.
#### Prototype Refinement
Prototype refinement is a debugging technique meant for cases in which a human
believes a prototype visualization may not be representative of the class. To
exaggerate the idea of prototype refinement, we purposefully choose the back-
side of a frying pan as a support image for class 1 (“frying pan”) so that the
prototype visualization has undesirable image features. Generally, a prototype
for an arbitrary object of a novel class is likely to be visually ambiguous if
the embedding network did not train on a suitable dataset, so this setup is
conceivable in the real-world.
For our classification model, we make use of the AutoProtoNet described in
Section 3.3. To apply AutoProtoNet to this new classification task, we “fine-
tune” AutoProtoNet by providing the support set shown in Figure 3(a). After
meta-learning, an AutoProtoNet’s only changeable parameters are its prototypes
which, by design, can be reconstructed into images using the decoder. By
visually understanding an AutoProtoNet’s embedding space, a user can choose to
change image features of a prototype reconstruction, thus changing the
prototype itself. In contrast, a standard ProtoNet performs inference using
its support data, which is visually inaccessible and uninterpretable.
Using a newly captured image $x\in\mathbb{R}^{d}$, we use the encoder
$F_{\theta}$ to generate an embedding $p=F_{\theta}(x)$. Given an initial
prototype $p_{k}$ for class $k$, we use the decoder $G_{\phi}$ to synthesize
images $\hat{x}_{i}\in\mathbb{R}^{d}$ for interpolations between $p_{k}$ and
$p$ as follows:
$\hat{x}_{i}=G_{\phi}((1-\alpha)p_{k}+\alpha p)\qquad\alpha\in[0,1]$ (3)
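A short sketch of this refinement step (ours), reusing the encoder/decoder from Section 3: decode a handful of interpolations and let the user pick the new prototype.

```python
import torch

# Sketch of Eq. 3: decode interpolations between the current prototype p_k
# and the embedding of a newly captured image; a human inspects the decoded
# frames and selects the new prototype embedding.
@torch.no_grad()
def interpolation_frames(encoder, decoder, p_k, new_image, steps=10):
    p = encoder(new_image.unsqueeze(0))[0]
    alphas = torch.linspace(0, 1, steps)
    frames = [decoder(((1 - a) * p_k + a * p).unsqueeze(0))[0] for a in alphas]
    return alphas, frames
```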
#### Results
Using the initial prototypes from Figure 3(a), AutoProtoNet achieves $80\%$
accuracy on the validation set consisting of $50$ images from all $5$ classes.
The $10$ misclassified images are all of the “frying pan” class. After
debugging the “frying pan” prototype by capturing an additional image of a
correctly-oriented frying pan and choosing an interpolation, the resulting
embedding is used as the new support as shown in Figure 3(d). Under the new
human-guided prototypes, AutoProtoNet achieves an accuracy of $98\%$ on the
validation set, where the single misclassified image is of the “door knob”
class.
The novelty of our method lies in the ability for a human to fine-tune the
model in an interactive way, leading to a performance increase in validation
set accuracy. In this example, AutoProtoNet’s decoder allowed for the
visualization of the prototype embedding, which we found to be visually
incorrect. Thus, we captured an additional, more representative image to
designate the direction in which to move the initial prototype to fit a human-
designated criteria.
## 5 Conclusion
With AutoProtoNet, we present a step toward meta-learning approaches capable
of giving some insight into their learned parameters. We argue that if meta-
learning approaches are to be useful in practice, there should be ways for a
human to glean some insight into why a classification might have been made.
Through prototype visualizations and a prototype refinement method, we
highlight the benefits of AutoProtoNet and take steps to improve a simple few-
shot classification algorithm by making it more interpretable while
maintaining the same degree of accuracy as a standard ProtoNet.
Our proposed method could likely be extended to Relation Networks [22],
MetaOptNet [15], or R2D2 [3], with a decoder network to visualize embeddings.
It may also be possible to meta-train a variational autoencoder to learn a
latent space more suitable for detailed image synthesis. We believe generative
models can play a larger role in interpretability of meta-learning algorithms.
To confirm the effectiveness of our interpretability results, we intend to
perform a human subjects study where a human determines whether prototype
visualizations help in understanding classification results. We also recognize
the limits of using a small dataset to evaluate the performance of our
prototype refinement method. We leave the creation of a larger, more diverse
validation set to future work.
## Acknowledgments and Disclosure of Funding
This work was supported by the Office of Naval Research (WL). The views and
conclusions contained in this document are those of the authors and should not
be interpreted as necessarily representing the official policies, either
expressed or implied, of the U.S. Navy.
## References
* Altae-Tran et al. [2016] H. Altae-Tran, B. Ramsundar, A. S. Pappu, and V. S. Pande. Low data drug discovery with one-shot learning. _CoRR_ , abs/1611.03199, 2016. URL http://arxiv.org/abs/1611.03199.
* Bengio [2012] Y. Bengio. Deep learning of representations for unsupervised and transfer learning. In _Proceedings of ICML workshop on unsupervised and transfer learning_ , pages 17–36. JMLR Workshop and Conference Proceedings, 2012.
* Bertinetto et al. [2018] L. Bertinetto, J. F. Henriques, P. H. S. Torr, and A. Vedaldi. Meta-learning with differentiable closed-form solvers. _CoRR_ , abs/1805.08136, 2018. URL http://arxiv.org/abs/1805.08136.
* Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_ , pages 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.
* Epstein et al. [2019] B. Epstein, R. Meir, and T. Michaeli. Joint autoencoders: A flexible meta-learning framework. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, and G. Ifrim, editors, _Machine Learning and Knowledge Discovery in Databases_ , pages 494–509, Cham, 2019. Springer International Publishing. ISBN 978-3-030-10925-7.
* Finn et al. [2017] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. _CoRR_ , abs/1703.03400, 2017. URL http://arxiv.org/abs/1703.03400.
* Frosst et al. [2019] N. Frosst, N. Papernot, and G. Hinton. Analyzing and improving representations with the soft nearest neighbor loss, 2019.
* Goldblum et al. [2019] M. Goldblum, L. Fowl, and T. Goldstein. Robust few-shot learning with adversarially queried meta-learners. _CoRR_ , abs/1910.00982, 2019. URL http://arxiv.org/abs/1910.00982.
* Goldblum et al. [2020] M. Goldblum, S. Reich, L. Fowl, R. Ni, V. Cherepanova, and T. Goldstein. Unraveling meta-learning: Understanding feature representations for few-shot tasks. _CoRR_ , abs/2002.06753, 2020. URL https://arxiv.org/abs/2002.06753.
* Huang et al. [2019] W. R. Huang, Z. Emam, M. Goldblum, L. Fowl, J. K. Terry, F. Huang, and T. Goldstein. Understanding generalization through visualizations. _CoRR_ , abs/1906.03291, 2019. URL http://arxiv.org/abs/1906.03291.
* Ilyas et al. [2019] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry. Adversarial examples are not bugs, they are features, 2019.
* Kim et al. [2019] J. Kim, T. Oh, S. Lee, F. Pan, and I. S. Kweon. Variational prototyping-encoder: One-shot learning with prototypical images. _CoRR_ , abs/1904.08482, 2019. URL http://arxiv.org/abs/1904.08482.
* Kolesnikov et al. [2019] A. Kolesnikov, L. Beyer, X. Zhai, J. Puigcerver, J. Yung, S. Gelly, and N. Houlsby. Large scale learning of general visual representations for transfer. _CoRR_ , abs/1912.11370, 2019. URL http://arxiv.org/abs/1912.11370.
* Krizhevsky [2009] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
* Lee et al. [2019] K. Lee, S. Maji, A. Ravichandran, and S. Soatto. Meta-learning with differentiable convex optimization. _CoRR_ , abs/1904.03758, 2019. URL http://arxiv.org/abs/1904.03758.
* Mahendran and Vedaldi [2014] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. _CoRR_ , abs/1412.0035, 2014. URL http://arxiv.org/abs/1412.0035.
* Paszke et al. [2019] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
* Ravi and Larochelle [2017] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In _ICLR_ , 2017.
* Simonyan et al. [2014] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps, 2014.
* Snell et al. [2017] J. Snell, K. Swersky, and R. S. Zemel. Prototypical networks for few-shot learning. _CoRR_ , abs/1703.05175, 2017. URL http://arxiv.org/abs/1703.05175.
* Springenberg et al. [2015] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net, 2015.
* Sung et al. [2017] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. _CoRR_ , abs/1711.06025, 2017. URL http://arxiv.org/abs/1711.06025.
* Vinyals et al. [2016] O. Vinyals, C. Blundell, T. P. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. _CoRR_ , abs/1606.04080, 2016. URL http://arxiv.org/abs/1606.04080.
* Wu et al. [2018] T. Wu, J. Peurifoy, I. L. Chuang, and M. Tegmark. Meta-learning autoencoders for few-shot prediction, 2018.
* Zeiler and Fergus [2013] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. _CoRR_ , abs/1311.2901, 2013. URL http://arxiv.org/abs/1311.2901.
* Zhang et al. [2017a] Q. Zhang, R. Cao, F. Shi, Y. N. Wu, and S.-C. Zhu. Interpreting cnn knowledge via an explanatory graph, 2017a.
* Zhang et al. [2017b] Q. Zhang, R. Cao, Y. N. Wu, and S.-C. Zhu. Growing interpretable part graphs on convnets via multi-shot learning, 2017b.
* Zhang et al. [2018] Q. Zhang, Y. N. Wu, and S.-C. Zhu. Interpretable convolutional neural networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 8827–8836, 2018.
## Appendix A Validation Set for Custom Classification Task
In Figure 4, we display the $50$ images of our custom $5$-way validation set.
The images from the “light switch” and “door knob” classes are diverse in
terms of shape, pose, and lighting condition.
Figure 4: Validation set for experiment described in Section 4.2
## Appendix B Architecture Details
In our description of the AutoProtoNet architecture in Table 2, we display
output sizes for the first Conv Block of the encoder and the first Conv
Transpose Block of the decoder, assuming an $84\times 84$ _mini_ ImageNet
image is used as input.
Table 2: AutoProtoNet Architecture Components
Conv Block | | Conv Transpose Block
---|---|---
Layer | Parameters | Output Size | | Layer | Parameters | Output Size
Conv | $3\times 3$, $64$ | $64\times 84\times 84$ | | Conv Transpose | $2\times 2$, $*2$ | $64\times 10\times 10$
Batch Norm | | | | Batch Norm | |
Max Pool | $3\times 3$, /2 | $64\times 42\times 42$ | | Conv | $3\times 3$, $64$ | $64\times 10\times 10$
## Appendix C Implementation Details
We use PyTorch [17] and work on a fork of code used for [8], which uses the
MIT License. Our fork can be used to reproduce experiments and is available
here: https://github.com/psandovalsegura/AdversarialQuerying.
|
# ${\bf{\rm NK}_{1}}$ of Bak’s unitary group over Graded Rings
Rabeya Basu Indian Institute of Science Education and Research (IISER) Pune,
India<EMAIL_ADDRESS><EMAIL_ADDRESS>and Kuntal Chakraborty
Indian Institute of Science Education and Research (IISER) Pune, India
<EMAIL_ADDRESS>
Research by the first author was supported by SERB-MATRICS grant for the
financial year 2020–2021. And, research by the second author was supported by
IISER (Pune) post-doctoral research grant.
Corresponding Author<EMAIL_ADDRESS><EMAIL_ADDRESS>
Abstract: For an associative ring $R$ with identity, we study the absence of
$k$-torsion in ${\rm NK_{1}GQ}(R)$; Bass nil-groups for the general quadratic
or Bak’s unitary groups. By using a graded version of Quillen–Suslin theory we
deduce an analog for the graded rings.
2020 Mathematics Subject Classification: 19-XX, 15-XX, 16-XX, 20-XX
Key words: General linear groups, Elementary subgroups, Quadratic forms,
Higman linearisation, $k$-torsion, Whitehead group - $\rm K_{1}$.
## 1\. Introduction
Let $R$ be an associative ring with identity element $1$. When $R$ is
commutative, we define ${\rm SK_{1}}(R)$ as the kernel of the determinant map
from the Whitehead group ${\rm K_{1}}(R)$ to the group of units of $R$. The
Bass nil-group ${\rm NK_{1}}(R)=\textnormal{ker}({\rm
K_{1}}(R[X])\rightarrow{\rm K_{1}}(R))$; $X=0$. i.e., the subgroup consisting
of elements $[\alpha(X)]\in{\rm K_{1}}(R[X])$ such that $[\alpha(0)]=[{\rm
I}]$. Hence ${\rm K_{1}}(R[X])\cong{\rm NK}_{1}(R)\oplus{\rm K_{1}}(R)$. The
aim of this paper is to study some properties of Bass nil-groups ${\rm
NK_{1}}$ for the general quadratic groups or Bak’s unitary groups.
It is well-known that for many rings, e.g. if $R$ is regular Noetherian,
Dedekind domain, or any ring with finite global dimension, the group ${\rm
NK}_{1}(R)$ is trivial. On the other hand, if it is non-trivial, then it is
not finitely generated as a group. e.g. if $G$ is a non-trivial finite group,
the group ring $\mathbb{Z}G$ is not regular. In many such cases, it is
difficult to compute ${\rm NK}_{1}(\mathbb{Z}[G])$. In [13], D.R. Harmon
proved the triviality of this group when $G$ is a finite group of square-free
order. C. Weibel, in [20], has shown the non-triviality of this group for $G$
= $\mathbb{Z}/2\oplus\mathbb{Z}/2$, $\mathbb{Z}/4$, and $D_{4}$. Some more
results are known for finite abelian groups from the work of R.D. Martin;
cf.[16]. It is also known (cf.[12]) that for a general finite group $G$, ${\rm
NK}_{1}(R[G])$ is a torsion group for the group ring $R[G]$. In fact, for
trivial ${\rm NK}_{1}(R)$, the order of every element of finite order of ${\rm
NK}_{1}(R[G])$ divides some power of the cardinality of $G$. For $R=\mathbb{Z}$,
this is a result of Weibel. In particular, if $G$ is a finite $p$-group ($p$ a
prime), then every element of ${\rm NK}_{1}(\mathbb{Z}[G])$ has $p$-primary
order. In [17], J. Stienstra showed that ${\rm NK_{1}}(R)$ is a ${\rm
W}(R)$-module, where ${\rm W}(R)$ is the ring of big Witt vectors (cf.[11] and
[19]). Consequently, in ([18], §3), C. Weibel observed that if $k$ is a unit
in $R$, then ${\rm SK_{1}}(R[X])$ has no $k$-torsion, when $R$ is a
commutative local ring. Note that if $R$ is a commutative local ring then
${\rm SK_{1}}(R[X])$ coincides with ${\rm NK_{1}}(R)$; indeed, if $R$ is a
local ring then ${\rm SL}_{n}(R)={\rm E}_{n}(R)$ for all $n>0$. Therefore, we
may replace $\alpha(X)$ by $\alpha(X)\alpha(0)^{-1}$ and assume that
$[\alpha(0)]=[{\rm I}]$. In [7], the first author extended Weibel’s result for
arbitrary associative rings. In this paper we prove the analog result for
$\lambda$-unitary Bass nil-groups, viz. ${\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$,
where $(R,\Lambda)$ is the form ring as introduced by A. Bak in [1]. The main
ingredient for our proof is an analog of Higman linearisation (for a subclass
of Bak’s unitary group) due to V. Kopeiko; cf.[15]. For the general linear
groups, Higman linearisation (cf.[6]) allows us to show that every element of
${\rm NK_{1}}(R)$ has a unipotent representative. The same result is not true
in general for the unitary nil-groups. Kopeiko’s results in [15] give a
complete description of the elements of ${\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$
that have (unitary) unipotent representatives. The following are the main
results of this article.
###### Theorem 1.1.
Let $[\alpha(X)]=\big{[}\begin{pmatrix}A(X)&B(X)\\\
C(X)&D(X)\end{pmatrix}\big{]}\in{\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$ with
$A(X)\in{\rm GL}_{r}(R[X])$ for some $r\in\mathbb{N}$. Then $[\alpha(X)]$ has
no $k$-torsion if $kR=R$.
And, an analog for the graded rings:
###### Theorem 1.2.
Let $R=R_{0}\oplus R_{1}\oplus\dots$ be a graded ring. Let $k$ be a unit in
$R_{0}$. Let $N=N_{0}+N_{1}+\dots+N_{r}\in{\rm M}_{r}(R)$ be a nilpotent
matrix, and ${\rm I}$ denote the identity matrix. If $[({\rm I}+N)]^{k}=[{\rm
I}]$ in ${\rm K_{1}GQ}^{\lambda}(R,\Lambda)$, then $[{\rm I}+N]=[{\rm
I}+N_{0}]$.
In the proof of Theorem 1.2, we have used a graded version of Quillen–Suslin’s
local-global principle for Bak’s unitary group over graded rings. This unifies
and generalizes the results proved in [5], [7], [9], and [10].
###### Theorem 1.3.
(Graded local-global principle) Let $R=R_{0}\oplus R_{1}\oplus
R_{2}\oplus\cdots$ be a graded ring with identity $1$. Let $\alpha\in{\rm
GQ}(2n,R,\Lambda)$ be such that $\alpha\equiv{\rm I}_{2n}\pmod{R_{+}}$. If
$\alpha_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$, for every maximal ideal
$\mathfrak{m}\in\rm Max(C(R_{0}))$, then $\alpha\in{\rm EQ}(2n,R,\Lambda).$
## 2\. Preliminaries
Let $R$ be an associative ring with identity element $1$. Let ${\rm M}(n,R)$
denote the additive group of $n\times n$ matrices, and ${\rm GL}(n,R)$ denote
the multiplicative group of $n\times n$ invertible matrices. Let $e_{ij}$ be
the matrix with $1$ in the $ij$-th position and $0$’s elsewhere. The
elementary subgroup of ${\rm GL}(n,R)$ plays a key role in classical algebraic
K-theory. We recall,
###### Definition 2.1.
Elementary Group ${\rm E}(n,R)$: The subgroup of ${\rm GL}(n,R)$ generated by
$\\{{\rm E}_{ij}(\lambda):\lambda\in R,i\neq j\\}$, where ${\rm
E}_{ij}(\lambda)={\rm I}_{n}+\lambda e_{ij}$.
###### Definition 2.2.
For $\alpha\in{\rm M}(r,R)$ and $\beta\in{\rm M}(s,R)$, the matrix
$\alpha\perp\beta$ denotes its embedding in ${\rm M}(r+s,R)$ (here $r$ and $s$
are even integers in the non-linear cases), given by
$\alpha\perp\beta=\left(\begin{array}[]{cc}\alpha&0\\\
0&\beta\end{array}\right).$
There is an infinite counterpart: Identifying each matrix $\alpha\in{\rm
GL}(n,R)$ with the larger matrix $(\alpha\perp 1)$ gives an embedding of
${\rm GL}(n,R)$ into ${\rm GL}(n+1,R)$. Let ${\rm
GL}(R)=\underset{n=1}{\overset{\infty}{\cup}}{\rm GL}(n,R)$, and ${\rm
E}(R)=\underset{n=1}{\overset{\infty}{\cup}}{\rm E}(n,R)$ be the corresponding
infinite linear groups.
As a consequence of classical Whitehead Lemma (cf.[3]) due to A. Suslin, one
gets
$[{\rm GL}(R),{\rm GL}(R)]={\rm E}(R).$
###### Definition 2.3.
The quotient group
${\rm K_{1}}(R)=\frac{{\rm GL}(R)}{[{\rm GL}(R),{\rm GL}(R)]}=\frac{{\rm
GL}(R)}{{\rm E}(R)}$
is called the Whitehead group of the ring $R$. For $\alpha\in{\rm GL}(n,R)$,
let $[\alpha]$ denote its equivalence class in ${\rm K_{1}}(R)$.
In the similar manner we define ${\rm K_{1}}$ group for the other types of
classical groups; viz., the symplectic Whitehead group ${\rm K_{1}}{\rm
Sp}(R)$ and the orthogonal Whitehead group ${\rm K_{1}}{\rm O}(R)$.
This paper explores a uniform framework for classical type groups over graded
structures. Let us begin by recalling the concept of form rings and form
parameter as introduced by A. Bak in [1]. This allows us to give a uniform
definition for classical type groups.
###### Definition 2.4.
(Form rings): Let $R$ be an associative ring with identity, and with an
involution $-:R\rightarrow R$, $a\mapsto\overline{a}$. Let $\lambda\in C(R)$ =
the center of $R$, with the property $\lambda\overline{\lambda}=1$. We define
two additive subgroups of $R$, viz.
$\Lambda_{\rm max}=\\{a\in R\mid
a=-\lambda\overline{a}\\}~{}\textit{and}~{}\Lambda_{\rm
min}=\\{a-\lambda\overline{a}\mid a\in R\\}.$
One checks that for any $x\in R$, $\Lambda_{\rm max}$ and $\Lambda_{\rm min}$
are closed under the conjugation operation $a\mapsto\overline{x}ax$.
A $\lambda$-form parameter on $R$ is an additive subgroup $\Lambda$ of $R$
such that $\Lambda_{\rm min}\subseteq\Lambda\subseteq\Lambda_{\rm max}$, and
$\overline{x}\Lambda x\subseteq\Lambda$ for all $x\in R$. i.e., a subgroup
between two additive groups which is also closed under the conjugation
operation. A pair $(R,\Lambda)$ is called a form ring.
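For orientation, consider a standard example (added here for illustration, not taken from [1]): let $R=\mathbb{Z}$ with the trivial involution $\overline{a}=a$. For $\lambda=1$ one finds $\Lambda_{\rm min}=\\{a-\overline{a}\mid a\in\mathbb{Z}\\}=0$ and $\Lambda_{\rm max}=\\{a\in\mathbb{Z}\mid a=-a\\}=0$, so the only form parameter is $\Lambda=0$, which corresponds to the orthogonal case. For $\lambda=-1$ one finds $\Lambda_{\rm min}=2\mathbb{Z}$ and $\Lambda_{\rm max}=\mathbb{Z}$, so the possible form parameters are $2\mathbb{Z}$ and $\mathbb{Z}$; the choice $\Lambda=\mathbb{Z}$ corresponds to the symplectic case.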
To define Bak’s unitary group or the general quadratic group, we fix a central
element $\lambda\in R$ with $\lambda\overline{\lambda}=1$, and then consider
the form
$\psi_{n}=\begin{pmatrix}0&{\rm I}_{n}\\\ \lambda{{\rm
I}}_{n}&0\end{pmatrix}.$
For more details, see [7], and [8].
Bak’s Unitary or General Quadratic Groups ${\rm GQ}$:
${\rm GQ}(2n,R,\Lambda)~{}=~{}\\{\sigma\in{\rm
GL}(2n,R)\,|\,\overline{\sigma}\psi_{n}\sigma=\psi_{n}\\}.$
### Elementary Quadratic Matrices :
Let $\rho$ be the permutation defined by $\rho(i)=n+i$ for $i=1,\dots,n$. For
$a\in R$, and $1\leq i,j\leq n$, we define
$q\varepsilon_{ij}(a)={\rm I}_{2n}+ae_{ij}-\overline{a}e_{\rho(j)\rho(i)}$ for
$i\neq j$,
$qr_{ij}(a)=\left\\{\begin{array}[]{ll}{\rm
I}_{2n}+ae_{i\rho(j)}-\lambda\overline{a}e_{j\rho(i)}&\text{for}~{}i\neq j\\\
{\rm I}_{2n}+ae_{\rho(i)j}&\text{for}~{}i=j\end{array}\right.$
$ql_{ij}(a)=\left\\{\begin{array}[]{ll}{\rm
I}_{2n}+ae_{\rho(i)j}-\overline{\lambda}\overline{a}e_{\rho(j)i}&\text{for}~{}i\neq
j\\\ {\rm I}_{2n}+ae_{\rho(i)j}&\text{for}~{}i=j\end{array}\right.$
(Note that for the second and third type of elementary matrices, if $i=j$,
then we get $a=-\lambda\overline{a}$, and hence it forces that
$a\in\Lambda_{\rm max}(R)$. One checks that these above matrices belong to
$\rm GQ(2n,R,\Lambda)$; cf.[1].
$n$-th Elementary Quadratic Group ${\rm EQ}(2n,R,\Lambda)$:
The subgroup generated by $q\varepsilon_{ij}(a)$, $qr_{ij}(a)$, and $ql_{ij}(a)$, for $a\in R$ and $1\leq i,j\leq n$. For uniformity we denote the elementary generators of ${\rm EQ}(2n,R,\Lambda)$ by $\eta_{ij}(*)$.
Stabilization map: There are standard embeddings:
${\rm GQ}(2n,R,\Lambda)\longrightarrow{\rm GQ}(2n+2,R,\Lambda)$
given by
$\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}a&0&b&0\\\
0&1&0&0\\\ c&0&d&0\\\ 0&0&0&1\end{pmatrix}.$
Hence we have
${\rm GQ}(R,\Lambda)=\underset{\longrightarrow}{\lim}\,\,{\rm
GQ}(2n,R,\Lambda)$.
It is clear that the stabilization map takes generators of ${\rm
EQ}(2n,R,\Lambda)$ to the generators of ${\rm EQ}(2(n+1),R,\Lambda)$. Hence we
have
${\rm EQ}(R,\Lambda)=\underset{\longrightarrow}{\lim}\,\,{\rm EQ}(2n,R,\Lambda).$
There are standard formulas for the commutators between quadratic elementary matrices; for details, we refer to [1] (Lemma 3.16). Those relations are used repeatedly in later sections. The analogue of the Whitehead Lemma for the general quadratic groups (cf. [1]), due to Bak, allows us to write:
$[{\rm GQ}(R,\Lambda),{\rm GQ}(R,\Lambda)]=[{\rm EQ}(R,\Lambda),{\rm
EQ}(R,\Lambda)]={\rm EQ}(R,\Lambda).$
Hence we define the Whitehead group of the general quadratic group
${\rm K_{1}}{\rm GQ}=\frac{{\rm GQ}(R,\Lambda)}{{\rm EQ}(R,\Lambda)}.$
and the Whitehead group at level $m$ by
${\rm K}_{1,m}{\rm GQ}=\frac{{\rm GQ}_{m}(R,\Lambda)}{{\rm
EQ}_{m}(R,\Lambda)},$
where $m=2n$ in the non-linear cases.
Let $(R,\Lambda)$ be a form ring. We extend the involution of $R$ to the ring
$R[X]$ of polynomials by setting $\overline{X}=X$. As a result we obtain a
form ring $(R[X],\Lambda[X])$.
###### Definition 2.5.
The kernel of the group homomorphism
${\rm K_{1}GQ}(R[X],\Lambda[X])\rightarrow{\rm K_{1}GQ}(R,\Lambda)$
induced from the form ring homomorphism
$(R[X],\Lambda[X])\rightarrow(R,\Lambda):X\mapsto 0$ is denoted by ${\rm NK_{1}GQ}(R,\Lambda)$. We often refer to it as the Bass nilpotent unitary ${\rm K_{1}}$-group of $R$, or simply the unitary nil-group.
From the definition it follows that
${\rm K_{1}GQ}(R[X],\Lambda[X])={\rm K_{1}GQ}(R,\Lambda)\oplus{\rm
NK_{1}GQ}(R,\Lambda).$
In this context, we will use the following two types of localizations, mainly over a graded ring $R=R_{0}\oplus R_{1}\oplus R_{2}\oplus\cdots$:
1. (1)
Principal localization: for a non-nilpotent, non-zero-divisor $s$ in $R_{0}$ with $\overline{s}=s$, we consider the multiplicative set $S=\{1,s,s^{2},\dots\}$, and denote the localized form ring by $(R_{s},\Lambda_{s})$.
2. (2)
Maximal localization: for a maximal ideal $\mathfrak{m}\in\rm Max(R_{0})$, we take the multiplicative set $S=R_{0}-\mathfrak{m}$, and denote the localized form ring by $(R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$.
Blanket assumption: We always assume that $2n\geq 6$.
Next, we recall the well-known “Swan–Weibel’s homotopy trick”, which is the
main ingredient to handle the graded case. Let $R=R_{0}\oplus R_{1}\oplus
R_{2}\oplus\cdots$ be a graded ring. An element $a\in R$ will be denoted by
$a=a_{0}+a_{1}+a_{2}+\cdots$, where $a_{i}\in R_{i}$ for each $i$, and all but
finitely many $a_{i}$ are zero. Let $R_{+}=R_{1}\oplus R_{2}\oplus\cdots$.
The graded structure of $R$ induces a graded structure on ${\rm M}_{n}(R)$ (the ring of $n\times n$ matrices).
###### Definition 2.6.
Let $a\in R_{0}$ be a fixed element. We fix an element $b=b_{0}+b_{1}+\cdots$
in $R$ and define a ring homomorphism $\epsilon:R\rightarrow R[X]$ given by
$\epsilon(b)=\epsilon(b_{0}+b_{1}+\cdots)\;=\;b_{0}+b_{1}X+b_{2}X^{2}+\cdots+b_{i}X^{i}+\cdots.$
Then we evaluate the polynomial $\epsilon(b)(X)$ at $X=a$ and denote the image by $b^{+}(a)$, i.e., $b^{+}(a)=\epsilon(b)(a)$. Note that $\big{(}b^{+}(x)\big{)}^{+}(y)=b^{+}(xy)$. Observe that $b_{0}=b^{+}(0)$. We shall use this fact frequently.
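To illustrate the homotopy trick with a worked instance (our own example, following Definition 2.6): for $b=b_{0}+b_{1}+b_{2}$ with $b_{i}\in R_{i}$ and commuting $x,y\in R_{0}$,
$b^{+}(x)=b_{0}+b_{1}x+b_{2}x^{2},\qquad\big{(}b^{+}(x)\big{)}^{+}(y)=b_{0}+b_{1}xy+b_{2}(xy)^{2}=b^{+}(xy),$
since the $i$-th homogeneous component of $b^{+}(x)$ is $b_{i}x^{i}\in R_{i}$, and applying $\epsilon$ again rescales it by $y^{i}$. In particular $b^{+}(1)=b$ and $b^{+}(0)=b_{0}$.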
The above ring homomorphism $\epsilon$ induces a group homomorphism at the ${\rm GL}(2n,R)$ level for every $n\geq 1$, i.e., for $\alpha\in{\rm GL}(2n,R)$ we get a map
$\epsilon:{\rm GL}(2n,R)\rightarrow{\rm GL}(2n,R[X])\text{ defined by}$
$\alpha=\alpha_{0}+\alpha_{1}+\alpha_{2}+\cdots\mapsto\alpha_{0}+\alpha_{1}X+\alpha_{2}X^{2}+\cdots,$
where $\alpha_{i}\in{\rm M}(2n,R_{i})$. As above, for $a\in R_{0}$ we define $\alpha^{+}(a)$ as
$\alpha^{+}(a)=\epsilon(\alpha)(a).$
###### Notation 2.7.
By ${\rm GQ}(2n,R[X],\Lambda[X],(X))$ we shall mean the group of all quadratic
matrices over $R[X]$, which are ${\rm I}_{2n}$ modulo $(X)$. Also if $R$ is a
graded ring, then by ${\rm GQ}(2n,R,\Lambda,(R_{+}))$ we shall mean the group
of all quadratic matrices over $R$ which are ${\rm I}_{2n}$ modulo $R_{+}$.
The following lemma highlights a crucial fact which we use repeatedly in the proof of the “Dilation Lemma”.
###### Lemma 2.8.
Let $R$ be a Noetherian ring and $s\in R$. Then there exists a natural number
$k$ such that the homomorphism ${\rm GQ}(2n,R,\Lambda,s^{k}R)\rightarrow{\rm
GQ}(2n,R_{s},\Lambda_{s})$ $($induced by localization homomorphism
$R\rightarrow R_{s})$ is injective. Moreover, it follows that the induced map
${\rm EQ}(2n,R,\Lambda,s^{k}R)\rightarrow{\rm EQ}(2n,R_{s},\Lambda_{s})$ is
injective.
For the proof of the above lemma we refer to [14], Lemma 5.1. Recall that any module finite ring $R$ is a direct limit of its finitely generated subrings. Also, ${\rm G}(R,\Lambda)=\underset{\longrightarrow}{\lim}\,{\rm G}(R_{i},\Lambda_{i})$, where the limit is taken over all finitely generated subrings of $R$. Thus, one may assume that $C(R)$ is Noetherian, and hence one may consider module finite (form) rings $(R,\Lambda)$ with identity.
Now we recall a few technical definitions and useful lemmas.
###### Definition 2.9.
A row $(a_{1},a_{2},\dots,a_{n})\in R^{n}$ is said to be unimodular if there
exists $(b_{1},b_{2},\dots,b_{n})\in R^{n}$ such that
$\sum_{i=1}^{n}a_{i}b_{i}=1$. The set of all unimodular rows of length $n$ is
denoted by ${\rm Um}_{n}(R)$.
For any column vector $v\in(R^{2n})^{t}$ we define the row vector
$\widetilde{v}=\overline{v}^{t}\psi_{n}$.
###### Definition 2.10.
We define the map $M:(R^{2n})^{t}\times(R^{2n})^{t}\rightarrow{\rm M}(2n,R)$ and the inner product $\langle\,,\,\rangle$ as follows:
$M(v,w)=v\cdot\widetilde{w}-\overline{\lambda}\,\overline{w}\cdot\widetilde{v},\qquad\langle v,w\rangle=\widetilde{v}\cdot w.$
Note that the elementary generators of the groups ${\rm EQ}(2n,R,\Lambda)$ are
of the form ${\rm I}_{2n}+M(*_{1},*_{2})$ for suitably chosen standard basis
vectors.
###### Lemma 2.11.
$($cf. [1]$)$ The group ${\rm EQ}(2n,R,\Lambda)$ is perfect for $n\geq 3$, i.e.,
$[{\rm EQ}(2n,R,\Lambda),{\rm EQ}(2n,R,\Lambda)]={\rm EQ}(2n,R,\Lambda).$
###### Lemma 2.12.
For the elementary generators of ${\rm EQ}(2n,R,\Lambda)$ we have the following splitting property: for all $x,y\in R$,
$\eta_{ij}(x+y)=\eta_{ij}(x)\eta_{ij}(y).$
${{\bf Proof:}}$ See pg. 43-44, Lemma 3.16, [1].
###### Lemma 2.13.
Let $G$ be a group, and $a_{i},b_{i}\in G$ for $i=1,2,\ldots,n$. Then for $r_{i}=\Pi_{j=1}^{i}a_{j}$, we have
$\Pi_{i=1}^{n}a_{i}b_{i}=\left(\Pi_{i=1}^{n}r_{i}b_{i}r_{i}^{-1}\right)\Pi_{i=1}^{n}a_{i}.$
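For instance, with $n=2$ (so $r_{1}=a_{1}$, $r_{2}=a_{1}a_{2}$) the identity can be checked directly:
$(r_{1}b_{1}r_{1}^{-1})(r_{2}b_{2}r_{2}^{-1})\,a_{1}a_{2}=(a_{1}b_{1}a_{1}^{-1})(a_{1}a_{2}b_{2}a_{2}^{-1}a_{1}^{-1})\,a_{1}a_{2}=a_{1}b_{1}a_{2}b_{2};$
the general case follows by induction, and this telescoping is exactly how the lemma is applied in the proof of Lemma 2.14 below.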
###### Lemma 2.14.
The group ${\rm GQ}(2n,R,\Lambda,R_{+})\cap{\rm EQ}(2n,R,\Lambda)$ is generated by the elements of the type $\varepsilon\eta_{ij}(*)\varepsilon^{-1}$, where $\varepsilon\in{\rm EQ}(2n,R,\Lambda)$ and $*\in R_{+}$.
${{\bf Proof:}}$ Let $\alpha\in{\rm GQ}(2n,R,\Lambda,R_{+})\cap{\rm
EQ}(2n,R,\Lambda)$. Then we can write
$\alpha=\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}(a_{k})$
for some elements $a_{k}\in R$, $k=1,\dots,r$. We can write $a_{k}$ as $a_{k}=(a_{0})_{k}+(a_{+})_{k}$ for some $(a_{0})_{k}\in R_{0}$ and $(a_{+})_{k}\in R_{+}$. Using Lemma 2.12, we can write $\alpha$ as
$\alpha=\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}((a_{0})_{k})\,\eta_{i_{k}j_{k}}((a_{+})_{k}).$
Let $\epsilon_{t}=\Pi_{k=1}^{t}\eta_{i_{k}j_{k}}((a_{0})_{k})$ for $1\leq t\leq r$. By Lemma 2.13, we have
$\alpha=\left(\Pi_{k=1}^{r}\epsilon_{k}\eta_{i_{k}j_{k}}((a_{+})_{k})\epsilon_{k}^{-1}\right)\left(\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}((a_{0})_{k})\right).$
Let us write
$A=\Pi_{k=1}^{r}\epsilon_{k}\eta_{i_{k}j_{k}}((a_{+})_{k})\epsilon_{k}^{-1}$
and $B=\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}((a_{0})_{k})$. Hence $\alpha=AB$. Let ‘overline’ denote reduction modulo $R_{+}$. Going modulo $R_{+}$, we have $\overline{\alpha}=\overline{AB}=\bar{A}\bar{B}=\overline{\rm I}_{2n}\bar{B}$, and $\overline{\alpha}=\overline{\rm I}_{2n}$ since $\alpha\in{\rm GQ}(2n,R,\Lambda,R_{+})$. Hence $\overline{B}=\overline{{\rm I}}_{2n}$. Since the entries of $B$ are in $R_{0}$, it follows that $B={\rm I}_{2n}$. Therefore it follows that
$\alpha=\Pi_{k=1}^{r}\epsilon_{k}\eta_{i_{k}j_{k}}((a_{+})_{k})\epsilon_{k}^{-1}.$
$\Box$
## 3\. Quillen–Suslin Theory for Bak’s Group over Graded Rings
### 3.1. Local–Global Principle
###### Lemma 3.1.
Let $(R,\Lambda)$ be a form ring and $v\in{\rm EQ}(2n,R,\Lambda)e_{1}$. Let
$w\in R^{2n}$ be a column vector such that $\langle v,w\rangle=0$. Then ${\rm
I}_{2n}+M(v,w)\in{\rm EQ}(2n,R,\Lambda)$.
${{\bf Proof:}}$ Let $v=\varepsilon e_{1}$. Then we have ${\rm
I}_{2n}+M(v,w)=\varepsilon({\rm I}_{2n}+M(e_{1},w_{1}))\varepsilon^{-1}$,
where $w_{1}=\varepsilon^{-1}w$. Since $\langle e_{1},w_{1}\rangle=\langle v,w\rangle=0$, we have $w_{1}^{T}=(w_{11},\dots,w_{1\,n-1},0,\dots,w_{1\,2n})$. Therefore, since $\lambda\bar{\lambda}=1$, we have
${\rm I}_{2n}+M(v,w)=\prod_{\begin{subarray}{c}1\leq j\leq n\\ 1\leq i\leq n-1\end{subarray}}\varepsilon\,ql_{in}(-\bar{\lambda}\overline{w}_{1\,n+i})\,q\varepsilon_{jn}(-\bar{\lambda}\overline{w}_{1j})\,ql_{nn}^{-1}(*)\,\varepsilon^{-1}\in{\rm EQ}(2n,R,\Lambda).$ $\Box$
###### Lemma 3.2.
Let $R$ be a graded ring. Let $\alpha\in{\rm EQ}(2n,R,\Lambda)$. Then for
every $a\in R_{0}$ one gets $\alpha^{+}(a)\in{\rm EQ}(2n,R,\Lambda)$.
${{\bf Proof:}}$ Let $\alpha=\Pi_{k=1}^{t}({\rm I}_{2n}+a_{k}M(e_{i_{k}},e_{j_{k}}))$, where $a_{k}\in R$ and $t\geq 1$. Hence for $b\in R_{0}$, we have $\alpha^{+}(b)=\Pi_{k=1}^{t}({\rm I}_{2n}+a_{k}^{+}(b)M(e_{i_{k}},e_{j_{k}}))$. Now taking $v=e_{i_{k}}$ and $w=a_{k}^{+}(b)e_{j_{k}}$ we have $\langle v,w\rangle=0$ and ${\rm I}_{2n}+M(v,w)={\rm I}_{2n}+a_{k}^{+}(b)M(e_{i_{k}},e_{j_{k}})$, which belongs to ${\rm EQ}(2n,R,\Lambda)$ by Lemma 3.1. Hence we have $\alpha^{+}(b)\in{\rm EQ}(2n,R,\Lambda)$ for $b\in R_{0}$. $\Box$
###### Lemma 3.3.
(Graded Dilation Lemma) Let $\alpha\in{\rm GQ}(2n,R,\Lambda)$ with
$\alpha^{+}(0)={\rm I}_{2n}$ and $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$
for some non-zero-divisor $s\in R_{0}$. Then there exists $\beta\in{\rm
EQ}(2n,R,\Lambda)$ such that
$\beta_{s}=\alpha_{s}^{+}(b)$
for $b=s^{l}$ and $l\gg 0$.
${{\bf Proof:}}$ Since $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$ with $(\alpha_{0})_{s}={\rm I}_{2n}$, we can write $\alpha_{s}=\gamma$, where $\gamma_{ii}=1+g_{ii}$ with $g_{ii}\in(R_{+})_{s}$, and $\gamma_{ij}=g_{ij}$ with $g_{ij}\in(R_{+})_{s}$ for $i\neq j$. Choose $l$ large enough such that $s^{l}$ clears every denominator of $g_{ij}$ for all $i,j$. Then by Lemma 3.2, we have $\alpha_{s}^{+}(s^{l})\in{\rm EQ}(2n,R_{s},\Lambda_{s})$. As all denominators are cleared, $\alpha_{s}^{+}(s^{l})$ admits a natural pullback, and hence we have $\alpha^{+}(s^{l})\in{\rm EQ}(2n,R,\Lambda)$. Call this pullback $\beta$. $\Box$
###### Lemma 3.4.
Let $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$ with $\alpha_{s}^{+}(0)={\rm I}_{2n}$. Then one gets
$\alpha_{s}^{+}(b+d)\alpha_{s}^{+}(d)^{-1}\in{\rm EQ}(2n,R,\Lambda)$
for any $d\in R_{0}$ and $b=s^{l}$, $l\gg 0$.
${{\bf Proof:}}$ Since $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$, we have $\alpha_{s}^{+}(X)\in{\rm EQ}(2n,R_{s}[X],\Lambda_{s}[X])$. Let
$\beta^{+}(X)=\alpha^{+}(X+d)\alpha^{+}(d)^{-1},$
where $d\in R_{0}$. Then we have
$\beta^{+}_{s}(X)\in{\rm EQ}(2n,R_{s}[X],\Lambda_{s}[X])$
and $\beta^{+}(0)={\rm I}_{2n}$. Hence by Lemma 3.3, there exists $b=s^{l}$, $l\gg 0$, such that $\beta^{+}(bX)\in{\rm EQ}(2n,R[X],\Lambda[X])$. Putting $X=1$, we get the desired result. $\Box$
Proof of Theorem 1.3 – Graded Local-Global Principle:
Since $\alpha_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$ for all $\mathfrak{m}\in{\rm
Max}(C(R_{0}))$, for each $\mathfrak{m}$ there exists $s\in
C(R_{0})\setminus\mathfrak{m}$ such that $\alpha_{s}\in{\rm
EQ}(2n,R_{s},\Lambda_{s})$. Using the Noetherian property we can consider a finite cover of $C(R_{0})$, say $s_{1}+\dots+s_{r}=1$. From Lemma 3.3, we have
$\alpha^{+}(b_{i})\in{\rm EQ}(2n,R,\Lambda)$ for some $b_{i}=s_{i}^{l_{i}}$
with $b_{1}+\dots+b_{r}=1$. Now consider $\alpha_{s_{1}s_{2}\dots s_{r}}$,
which is the image of $\alpha$ in $R_{s_{1}s_{2}\dots s_{r}}$. By Lemma 2.8,
$\alpha\mapsto\alpha_{s_{1}s_{2}\dots s_{r}}$ is injective. Hence we can
perform our calculation in $R_{s_{1}s_{2}\dots s_{r}}$ and then pull it back
to $R$.
$\begin{split}\alpha_{s_{1}s_{2}\dots s_{r}}&=\alpha_{s_{1}s_{2}\dots s_{r}}^{+}(b_{1}+b_{2}+\dots+b_{r})\\ &=((\alpha_{s_{1}})_{s_{2}s_{3}\dots s_{r}})^{+}(b_{1}+\dots+b_{r})\,((\alpha_{s_{1}})_{s_{2}s_{3}\dots s_{r}})^{+}(b_{2}+\dots+b_{r})^{-1}\cdots\\ &\quad\,((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots s_{r}})^{+}(b_{i}+\dots+b_{r})\,((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots s_{r}})^{+}(b_{i+1}+\dots+b_{r})^{-1}\cdots\\ &\quad\,((\alpha_{s_{r}})_{s_{1}s_{2}\dots s_{r-1}})^{+}(b_{r})\,((\alpha_{s_{r}})_{s_{1}s_{2}\dots s_{r-1}})^{+}(0)^{-1}.\end{split}$
Observe that $((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots
s_{r}})^{+}(b_{i}+\dots+b_{r})((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots
s_{r}})^{+}(b_{i+1}+\dots+b_{r})^{-1}\in{\rm EQ}(2n,R,\Lambda)$ due to Lemma
3.4 (here $\hat{s_{i}}$ means we omit $s_{i}$ in the product
$s_{1}\dots\hat{s_{i}}\dots s_{r}$), and hence $\alpha_{s_{1}s_{2}\dots
s_{r}}\in{\rm EQ}(2n,R_{s_{1}\dots s_{r}},\Lambda_{s_{1}\dots s_{r}})$. This
proves $\alpha\in{\rm EQ}(2n,R,\Lambda)$. $\Box$
### 3.2. Normality and Local–Global
Next we show that if $K$ is a commutative ring with identity and $R$ is an associative $K$-algebra which is finite as a left $K$-module, then the normality of the elementary subgroup is equivalent to the local-global principle for the quadratic group. (One can also consider $R$ as a right $K$-algebra.)
###### Lemma 3.5.
$($Bass; cf. [4]$)$ Let $B$ be a commutative local ring with identity, and let $A$ be an associative $B$-algebra which is finite as a left $B$-module. Then $A$ is semilocal.
###### Theorem 3.6.
$($cf. [7]$)$ Let $A$ be a semilocal ring $($not necessarily commutative$)$ with involution, and let $v\in{\rm Um}_{2n}(A)$. Then $v\in e_{1}{\rm EQ}(2n,A)$. In other words, the group ${\rm EQ}(2n,A)$ acts transitively on ${\rm Um}_{2n}(A)$.
Before proving the next theorem we need to recall a theorem from [7]:
###### Theorem 3.7.
$($Local-Global Principle$)$ Let $A$ be an associative $B$-algebra such that $A$ is finite as a left $B$-module, and let $B$ be a commutative ring with identity. If $\alpha(X)\in{\rm GQ}(2n,A[X],\Lambda[X])$, $\alpha(0)={\rm I}_{2n}$, and $\alpha_{\mathfrak{m}}(X)\in{\rm EQ}(2n,A_{\mathfrak{m}}[X],\Lambda_{\mathfrak{m}}[X])$ for every maximal ideal $\mathfrak{m}\in{\rm Max}(B)$, then $\alpha(X)\in{\rm EQ}(2n,A[X],\Lambda[X])$.
###### Theorem 3.8.
Let $K$ be a commutative ring with unity and $R=\oplus_{i=0}^{\infty}R_{i}$ be
a graded $K$-algebra such that $R_{0}$ is finite as a left $K$-module. Then
for $n\geq 3$ the following are equivalent:
$(1)$ ${\rm EQ}(2n,R,\Lambda)$ is a normal subgroup of ${\rm
GQ}(2n,R,\Lambda)$.
$(2)$ If $\alpha\in{\rm GQ}(2n,R,\Lambda)$ with $\alpha^{+}(0)={\rm I}_{2n}$
and $\alpha_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$ for every maximal ideal
$\mathfrak{m}\in{\rm Max}(K)$, then $\alpha\in{\rm EQ}(2n,R,\Lambda)$.
${{\bf Proof:}}$ $(1)\Rightarrow(2)$: We have proved Lemma 3.1 for an arbitrary form ring with identity and shown that the local-global principle is a consequence of Lemma 3.1. So the result holds, in particular, when ${\rm EQ}(2n,R,\Lambda)$ is a normal subgroup of ${\rm GQ}(2n,R,\Lambda)$.
$(2)\Rightarrow(1)$: Since polynomial rings are a special case of graded rings, we may use Theorem 3.7. Let $\alpha\in{\rm EQ}(2n,R,\Lambda)$ and $\beta\in{\rm GQ}(2n,R,\Lambda)$. Then $\alpha$ can be written as a product of matrices of the form ${\rm I}_{2n}+M(*_{1},*_{2})$, with $\langle*_{1},*_{2}\rangle=0$, where $*_{1}$ and $*_{2}$ are suitably chosen basis vectors. Let $v=\beta*_{1}$. Then we can write $\beta\alpha\beta^{-1}$ as a product of matrices of the form ${\rm I}_{2n}+M(v,w)$ for some $w\in R^{2n}$. We must show that each ${\rm I}_{2n}+M(v,w)\in{\rm EQ}(2n,R,\Lambda)$.
Consider $\gamma={\rm I}_{2n}+M(v,w)$. Then $\gamma^{+}(0)={\rm I}_{2n}$. By
Lemma 3.5, the ring $S^{-1}R$ is semilocal, where $S=K\setminus\mathfrak{m}$ for $\mathfrak{m}\in{\rm Max}(K)$. Since $v\in{\rm
Um}_{2n}(R)$, then by Theorem 3.6, we have $v\in{\rm
EQ}(2n,S^{-1}R,S^{-1}\Lambda)e_{1}$. Therefore by applying Lemma 3.1 to the
ring $(S^{-1}R,S^{-1}\Lambda)$, we have $\gamma_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$ for every maximal ideal
$\mathfrak{m}\in{\rm Max}(K)$. Hence by hypothesis we have $\gamma\in{\rm
EQ}(2n,R,\Lambda)$. This completes the proof. $\Box$
###### Remark 3.9.
We conclude that the local-global principle for the elementary subgroups and
their normality properties are equivalent.
## 4\. Bass Nil Group ${\rm NK_{1}GQ}(R)$
In this section we recall some basic definitions and properties of the representatives of ${\rm NK_{1}GQ}(R)$. We represent any element of
${\rm M}_{2n}(R)$ as $\begin{pmatrix}a&b\\\ c&d\end{pmatrix},$ where
$a,b,c,d\in{\rm M}_{n}(R)$. For $\alpha=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}$ we call $\begin{pmatrix}a&b\end{pmatrix}$ the upper half of
$\alpha$. Let $(R,\lambda,\Lambda)$ be a form ring. By setting
$\bar{\Lambda}=\{\bar{a}:a\in\Lambda\}$ we get another form ring
$(R,\bar{\lambda},\bar{\Lambda})$. We can extend the involution of $R$ to
${\rm M}_{n}(R)$ by setting $(a_{ij})^{*}=(\overline{a}_{ji})$.
###### Definition 4.1.
Let $(R,\lambda,\Lambda)$ be a form ring. A matrix $\alpha=(a_{ij})\in{\rm
M}_{n}(R)$ is said to be $\Lambda$-Hermitian if $\alpha=-\lambda\alpha^{*}$
and all the diagonal entries of $\alpha$ are contained in $\Lambda$. A matrix
$\beta\in{\rm M}_{n}(R)$ is said to be $\bar{\Lambda}$-Hermitian if
$\beta=-\bar{\lambda}\beta^{*}$ and all the diagonal entries of $\beta$ are
contained in $\bar{\Lambda}$.
###### Remark 4.2.
A matrix $\alpha\in{\rm M}_{n}(R)$ is $\Lambda$-Hermitian if and only if
$\alpha^{*}$ is $\bar{\Lambda}$-Hermitian. The set of all $\Lambda$-Hermitian matrices forms a group under matrix addition.
###### Lemma 4.3.
[15, Example 2] Let $\beta\in{\rm GL}_{n}(R)$ be a $\Lambda$-Hermitian matrix.
Then the matrix $\alpha^{*}\beta\alpha$ is $\Lambda$-Hermitian for every
$\alpha\in{\rm GL}_{n}(R)$.
###### Definition 4.4.
Let $\alpha=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in{\rm M}_{2n}(R)$ be a
matrix. Then $\alpha$ is said to be a $\Lambda$-quadratic matrix if one of the
following equivalent conditions holds:
1. (1)
$\alpha\in{\rm GQ}(2n,R,\Lambda)$ and the diagonal entries of the matrices
$a^{*}c,b^{*}d$ are in $\Lambda$,
2. (2)
$a^{*}d+\lambda c^{*}b={\rm I}_{n}$ and the matrices $a^{*}c,b^{*}d$ are $\Lambda$-Hermitian,
3. (3)
$\alpha\in{\rm GQ}(2n,R,\Lambda)$ and the diagonal entries of the matrices
$ab^{*},cd^{*}$ are in $\Lambda$,
4. (4)
$ad^{*}+\lambda bc^{*}={\rm I}_{n}$ and the matrices $ab^{*},cd^{*}$ are
$\Lambda$-Hermitian.
###### Remark 4.5.
The set of all $\Lambda$-quadratic matrices of order $2n$ forms a group called
$\Lambda$-quadratic group. We denote this group by ${\rm
GQ}^{\lambda}(2n,R,\Lambda)$. If $2\in R^{*}$, then we have $\Lambda_{\rm min}=\Lambda_{\rm max}$. In this case the notions of quadratic groups and of $\Lambda$-quadratic groups coincide; this also happens when $\Lambda=\Lambda_{\rm max}$. Hence quadratic groups are special cases of $\Lambda$-quadratic groups. Other classical groups appear as $\Lambda$-quadratic groups in the following way. Let $R$ be a commutative ring with trivial involution. Then
${\rm GQ}^{\lambda}(2n,R,\Lambda)=\begin{cases}{\rm Sp}_{2n}(R),&\text{if
}\lambda=-1\text{ and }\Lambda=\Lambda_{\rm max}=R\\\ {\rm
O}_{2n}(R),&\text{if }\lambda=1\text{ and }\Lambda=\Lambda_{\rm
min}=0\end{cases}$
And for the general linear group ${\rm GL}_{n}(R)$, we have ${\rm GL}_{n}(R)={\rm GQ}^{1}(2n,\mathbb{H}(R),\Lambda=\Lambda_{\rm max})$, where $\mathbb{H}(R)$ denotes the ring $R\oplus R^{op}$, with $R^{op}$ the opposite ring of $R$, and the involution on $\mathbb{H}(R)$ is defined by $\overline{(x,y)}=(y,x)$. Thus the study of $\Lambda$-quadratic matrices unifies the study of quadratic matrices.
We recall the following results from [15].
###### Lemma 4.6.
Let $\alpha=\begin{pmatrix}a&0\\\ 0&d\end{pmatrix}\in{\rm M}_{2n}(R)$. Then
$\alpha\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ if and only if $a\in{\rm
GL}_{n}(R)$ and $d=(a^{*})^{-1}$.
${{\bf Proof:}}$ Let $\alpha\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$. In view of $(2)$ of Definition 4.4, we have $a^{*}d={\rm I}_{n}$. Hence $a$ is invertible and $d=(a^{*})^{-1}$. The converse holds by $(2)$ of Definition 4.4.
$\Box$
###### Definition 4.7.
Let $\alpha\in{\rm GL}_{n}(R)$ be a matrix. A matrix of the form
$\begin{pmatrix}\alpha&0\\\ 0&(\alpha^{*})^{-1}\end{pmatrix}$ is denoted by
$\mathbb{H}(\alpha)$ and is said to be hyperbolic.
###### Remark 4.8.
In a similar way one can show that a matrix of the form $T_{12}(\beta):=\begin{pmatrix}{\rm I}_{n}&\beta\\ 0&{\rm I}_{n}\end{pmatrix}$ is a $\Lambda$-quadratic matrix if and only if $\beta$ is $\bar{\Lambda}$-Hermitian, and a matrix of the form $T_{21}(\gamma):=\begin{pmatrix}{\rm I}_{n}&0\\ \gamma&{\rm I}_{n}\end{pmatrix}$ is a $\Lambda$-quadratic matrix if and only if $\gamma$ is $\Lambda$-Hermitian.
As in the quadratic case, we can define the notion of $\Lambda$-elementary quadratic groups in the following way:
###### Definition 4.9.
The $\Lambda$-elementary quadratic group, denoted by ${\rm EQ}^{\lambda}(2n,R,\Lambda)$, is defined as the group generated by the $2n\times 2n$ matrices of the form $\mathbb{H}(\alpha)$ with $\alpha\in{\rm E}_{n}(R)$, $T_{12}(\beta)$ with $\beta$ $\bar{\Lambda}$-Hermitian, and $T_{21}(\gamma)$ with $\gamma$ $\Lambda$-Hermitian.
###### Lemma 4.10.
Let $A=\begin{pmatrix}\alpha&\beta\\\ 0&\delta\end{pmatrix}\in{\rm
M}_{2n}(R)$. Then $A\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ if and only if
$\alpha\in{\rm GL}_{n}(R)$, $\delta=(\alpha^{*})^{-1}$ and $\alpha^{-1}\beta$
is $\bar{\Lambda}$-Hermitian. In this case
$A\equiv\mathbb{H}(\alpha)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$.
${{\bf Proof:}}$ Let $A\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$. Then by $(4)$ of
Definition 4.4, we have $\alpha\delta^{*}={\rm I}_{n}$ and $\alpha\beta^{*}$
is $\Lambda$-Hermitian. Hence $\alpha$ is invertible and
$\delta=(\alpha^{*})^{-1}$. For $\alpha^{-1}\beta$, we get
$(\alpha^{-1}\beta)^{*}=\beta^{*}(\alpha^{-1})^{*}=\alpha^{-1}(\alpha\beta^{*})(\alpha^{-1})^{*},$
which is $\Lambda$-Hermitian by Lemma 4.3. Hence $\alpha^{-1}\beta$ is
$\bar{\Lambda}$-Hermitian. Conversely, the condition on $A$ will fulfill the
condition $(4)$ of Definition 4.4. Hence $A$ is $\Lambda$-quadratic. Since
$\alpha^{-1}\beta$ is $\bar{\Lambda}$-Hermitian,
$T_{12}(-\alpha^{-1}\beta)\in{\rm EQ}^{\lambda}(2n,R,\Lambda)$ and $AT_{12}(-\alpha^{-1}\beta)=\mathbb{H}(\alpha)$. Thus $A\equiv\mathbb{H}(\alpha)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$. $\Box$
A similar argument proves the following:
###### Lemma 4.11.
Let $B=\begin{pmatrix}\alpha&0\\\ \gamma&\delta\end{pmatrix}\in{\rm
M}_{2n}(R)$. Then $B\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ if and only if
$\alpha\in{\rm GL}_{n}(R)$, $\delta=(\alpha^{*})^{-1}$ and $\gamma$ is
$\Lambda$-Hermitian. In this case
$B\equiv\mathbb{H}(\alpha)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}.$
###### Lemma 4.12.
Let $\alpha=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in{\rm
GQ}^{\lambda}(2n,R,\Lambda)$. Then
$\alpha\equiv\mathbb{H}(a)\pmod{{\rm EQ}^{\lambda}(4n,R,\Lambda)}$
if $a\in{\rm GL}_{n}(R).$ Moreover, if $a\in{\rm E}_{n}(R)$, then
$\alpha\equiv\mathbb{H}(a)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$.
${{\bf Proof:}}$ By the same argument as in Lemma 4.10, we have that $a^{-1}b$ is $\bar{\Lambda}$-Hermitian. Hence $T_{12}(-a^{-1}b)\in{\rm EQ}^{\lambda}(2n,R,\Lambda)$, and consequently $\alpha T_{12}(-a^{-1}b)=\begin{pmatrix}a&0\\ c&d^{\prime}\end{pmatrix}\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ for some $d^{\prime}\in{\rm GL}_{n}(R)$. Hence by Lemma 4.11, we get
$\alpha T_{12}(-a^{-1}b)\equiv\mathbb{H}(a)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}.$
Hence $\alpha\equiv\mathbb{H}(a)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$. $\Box$
###### Definition 4.13.
Let $\alpha=\begin{pmatrix}a_{1}&b_{1}\\\ c_{1}&d_{1}\end{pmatrix}\in{\rm
M}_{2r}(R)$, $\beta=\begin{pmatrix}a_{2}&b_{2}\\\
c_{2}&d_{2}\end{pmatrix}\in{\rm M}_{2s}(R)$. As before, we define
$\alpha\perp\beta$, and consider an embedding
${\rm GQ}^{\lambda}(2n,R,\Lambda)\rightarrow{\rm GQ}^{\lambda}(2n+2,R,\Lambda),\quad\alpha\mapsto\alpha\perp{\rm I}_{2}.$
We denote ${\rm
GQ}^{\lambda}(R,\Lambda)=\underset{n=1}{\overset{\infty}{\cup}}{\rm
GQ}^{\lambda}(2n,R,\Lambda)$ and ${\rm
EQ}^{\lambda}(R,\Lambda)=\underset{n=1}{\overset{\infty}{\cup}}{\rm
EQ}^{\lambda}(2n,R,\Lambda)$.
In view of the quadratic analogue of the Whitehead Lemma, the group ${\rm EQ}^{\lambda}(R,\Lambda)$ coincides with the commutator subgroup of ${\rm GQ}^{\lambda}(R,\Lambda)$. Therefore the group
${\rm K_{1}}{\rm GQ}^{\lambda}(R,\Lambda):=\frac{{\rm GQ}^{\lambda}(R,\Lambda)}{{\rm EQ}^{\lambda}(R,\Lambda)}$
is well-defined. The class of a matrix $\alpha\in{\rm GQ}^{\lambda}(R,\Lambda)$ in the group ${\rm K_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ is denoted by $[\alpha]$. In this way we obtain a ${\rm K_{1}}$-functor ${\rm K_{1}}{\rm GQ}^{\lambda}$ from the category of form rings to the category of abelian groups.
###### Remark 4.14.
As in the quadratic case, the kernel of the group homomorphism
${\rm K_{1}GQ}^{\lambda}(R[X],\Lambda[X])\rightarrow{\rm K_{1}GQ}^{\lambda}(R,\Lambda)$
induced from the form ring homomorphism $(R[X],\Lambda[X])\rightarrow(R,\Lambda);X\mapsto 0$ is denoted by ${\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$. Since the $\Lambda$-quadratic groups are a subclass of the quadratic groups, the local-global principle holds for $\Lambda$-quadratic groups. We use this fact throughout the next section.
## 5\. Absence of torsion in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$
In this section we give the proofs of Theorem 1.1 and Theorem 1.2. In [6], the proof of the theorem for the linear case is based on two key results, viz. the Higman linearisation and a lemma on a polynomial identity in truncated polynomial rings. Here we recall the lemma with its proof to highlight its connection with the big Witt vectors. Recently, in [15], V. Kopeiko deduced an analogue of the Higman linearisation process for a subclass of the general quadratic groups.
###### Definition 5.1.
For an associative ring $R$ with unity we consider the truncated polynomial ring
$R_{t}=\frac{R[X]}{(X^{t+1})}.$
###### Lemma 5.2.
$($cf.[6], Lemma 4.1$)$ Let $P(X)\in R[X]$ be any polynomial. Then the
following identity holds in the ring $R_{t}:$
$(1+X^{r}P(X))=(1+X^{r}P(0))(1+X^{r+1}Q(X)),$
where $r>0$ and $Q(X)\in R[X]$, with $\deg(Q(X))<t-r$.
${{\bf Proof:}}$ Let us write $P(X)=a_{0}+a_{1}X+\cdots+a_{t}X^{t}$. Then we
can write $P(X)=P(0)+XP^{\prime}(X)$ for some $P^{\prime}(X)\in R[X]$. Now, in
$R_{t}$
$\begin{split}(1+X^{r}P(X))(1+X^{r}P(0))^{-1}&=(1+X^{r}P(0)+X^{r+1}P^{\prime}(X))(1+X^{r}P(0))^{-1}\\ &=1+X^{r+1}P^{\prime}(X)(1-X^{r}P(0)+X^{2r}(P(0))^{2}-\cdots)\\ &=1+X^{r+1}Q(X),\end{split}$
where $Q(X)\in R[X]$ with $\deg(Q(X))<t-r$. Hence the lemma follows. $\Box$
Remark. Iterating the above process, for any polynomial $P(X)\in R[X]$ we can write
$(1+XP(X))=\Pi_{i=1}^{t}(1+a_{i}X^{i})$
in $R_{t}$, for some $a_{i}\in R$. By ascending induction it follows that the $a_{i}$’s are uniquely determined. In fact, if $R$ is commutative then the $a_{i}$’s are the $i$-th components of the ghost vector corresponding to the big Witt vector of $(1+XP(X))\in{\rm W}(R)=(1+XR[[X]])^{\times}$. For details see [11, §I].
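As a small worked instance (our own computation, not from [6]): in $R_{3}=R[X]/(X^{4})$, writing $1+c_{1}X+c_{2}X^{2}+c_{3}X^{3}=(1+a_{1}X)(1+a_{2}X^{2})(1+a_{3}X^{3})$ and expanding modulo $X^{4}$ gives
$1+a_{1}X+a_{2}X^{2}+(a_{1}a_{2}+a_{3})X^{3},$
so $a_{1}=c_{1}$, $a_{2}=c_{2}$, and $a_{3}=c_{3}-c_{1}c_{2}$, illustrating how each $a_{i}$ is determined by ascending induction.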
###### Lemma 5.3.
Let $R$ be a ring with $1/k\in R$ and $P(X)\in R[X]$. Assume $P(0)$ lies in the center of $R$. Then, in the ring $R_{t}$,
$(1+X^{r}P(X))^{k^{r}}=1\Rightarrow(1+X^{r}P(X))=(1+X^{r+1}Q(X))$
where $r>0$ and $Q(X)\in R[X]$ with $\deg(Q(X))<t-r$.
The following result is due to V. Kopeiko; cf. [15].
###### Proposition 5.4.
(Higman linearisation) Let $(R,\Lambda)$ be a form ring. Then, every element
of the group ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ has a representative
of the form
$[a;b,c]_{n}=\begin{pmatrix}{\rm I}_{r}-aX&bX\\\ -cX^{n}&{\rm
I}_{r}+a^{*}X+\cdots+(a^{*})^{n}X^{n}\end{pmatrix}\in{\rm
GQ}^{\lambda}(2r,R[X],\Lambda[X])$
for some positive integers $r$ and $n$, where $a,b,c\in{\rm M}_{r}(R)$ satisfy
the following conditions:
1. (1)
the matrices $b$ and $ab$ are Hermitian and also $ab=ba^{*}$,
2. (2)
the matrices $c$ and $ca$ are Hermitian and also $ca=a^{*}c$,
3. (3)
$bc=a^{n+1}$ and $cb=(a^{*})^{n+1}$.
###### Corollary 5.5.
Let $[\alpha]\in{\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ have the representation $[a;b,c]_{n}$ for some $a,b,c\in{\rm M}_{r}(R)$ according to Proposition 5.4. Then
$[\alpha]=[\mathbb{H}({\rm I}_{r}-aX)]$
in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ if $({\rm I}_{r}-aX)\in{\rm GL}_{r}(R[X])$.
${{\bf Proof:}}$ By Lemma 4.12 we have $[a;b,c]_{n}\equiv\mathbb{H}({\rm
I}_{r}-aX)\pmod{{\rm EQ}^{\lambda}(2r,R[X],\Lambda[X])}$. Hence we have
$[\alpha]=[\mathbb{H}({\rm I}_{r}-aX)]$ in ${\rm NK_{1}}{\rm
GQ}^{\lambda}(R,\Lambda)$. $\Box$
Proof of Theorem 1.1:
By Proposition 5.4, we have $[\alpha]=[[a;b,c]_{n}]$ for some $a,b,c\in{\rm M}_{s}(R)$ and some natural numbers $n$ and $s$. Note that in Step $1$ of Proposition 5.4, the invertibility of the first corner of the matrix $\alpha$ is not affected by the linearisation process, and it is likewise preserved in the remaining steps of Proposition 5.4. Therefore, since the first corner matrix $A(X)\in{\rm GL}_{r}(R[X])$, we have $({\rm I}_{s}-aX)\in{\rm GL}_{s}(R[X])$. By Corollary 5.5, we have $[\alpha]=[\mathbb{H}({\rm I}_{s}-aX)]$. Now let $[\alpha]$ be $k$-torsion. Then $[\mathbb{H}({\rm I}_{s}-aX)]$ is $k$-torsion. Since $({\rm I}_{s}-aX)$ is invertible, it follows that $a$ is nilpotent. Let $a^{t+1}=0$. Since $[({\rm I}_{s}-aX)]^{k}=[{\rm I}]$ in ${\rm K_{1}}{\rm GQ}^{\lambda}(R[X],\Lambda[X])$, by arguing as in [7] we get $[{\rm I}_{s}-aX]=[{\rm I}]$ in ${\rm K_{1}}{\rm GQ}^{\lambda}(R[X],\Lambda[X])$. This completes the proof. $\Box$
Proof of Theorem 1.2 – (Graded Version):
Consider the ring homomorphism $f:R\rightarrow R[X]$ defined by
$f(a_{0}+a_{1}+\dots)=a_{0}+a_{1}X+\dots.$
Then
$[({\rm I}+N)]^{k}=[{\rm I}]\;\Rightarrow\;f([{\rm I}+N]^{k})=[f({\rm I}+N)]^{k}=[{\rm I}]\;\Rightarrow\;[({\rm I}+N_{0}+N_{1}X+\dots+N_{r}X^{r})]^{k}=[{\rm I}].$
Let $\mathfrak{m}$ be a maximal ideal in $R_{0}$. By Theorem 1.1, we have
$[({\rm I}+N_{0}+N_{1}X+\dots+N_{r}X^{r})]=[{\rm I}]$
in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$. Hence by using the
local-global principle we conclude
$[({\rm I}+N)]=[{\rm I}+N_{0}]$
in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$, as required. $\Box$
Acknowledgment: We thank Sergey Sinchuk and V. Kopeiko for many useful
discussions.
## References
* [1] A. Bak; ${\rm K}$-Theory of forms. Annals of Mathematics Studies, 98. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo (1981).
* [2] A. Bak, R. Basu, R.A. Rao; Local-global principle for transvection groups. Proceedings of The American Mathematical Society 138 (2010), no. 4, 1191–1204.
* [3] H. Bass; Algebraic K-Theory, Benjamin, New York-Amsterdam (1968).
* [4] H. Bass; K-Theory and stable algebra, Publ. Math. I.H.E.S. No. 22 (1964), 5–60.
* [5] R. Basu, R.A. Rao, Reema Khanna; On Quillen’s local global principle. Commutative algebra and algebraic geometry, Contemp. Math., 390, Amer. Math. Soc., Providence, RI (2005), 17–30.
* [6] R. Basu; Absence of torsion for $NK_{1}(R)$ over associative rings, J. Algebra Appl. 10(4) (2011), 793–799.
* [7] R. Basu; Local-global principle for general quadratic and general Hermitian groups and the nilpotence of ${\rm KH}_{1}$. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 452 (2016), Voprosy Teorii Predstavleniĭ Algebr i Grupp. 30, 5–31; translation in J. Math. Sci. (N.Y.) 232 (2018), no. 5, 591–609.
* [8] R. Basu; On Transvection Subgroups of General Quadratic Modules. Journal of Algebra and Its Application. Vol. 17, No. 11, 1850217 (2018).
* [9] R. Basu, Ravi A. Rao, Reema Khanna; Pillars of relative Quillen–Suslin Theory. “Leavitt Path Algebras and K-theory”, ISI Series, Springer (2020) 211–223.
* [10] R. Basu, Manish Kumar Singh; On Quillen–Suslin Theory for Classical Groups; Revisited over Graded Rings. Contemp. Math. Amer. Math. Soc., Vol. 751, (2020), 5–18.
* [11] S. Bloch; Algebraic ${\rm K}$-Theory and crystalline cohomology, Publ. Math. I.H.E.S. 47 (1977), 187–268.
* [12] I. Hambleton, W. Lück; Induction and computation of Bass Nil groups for finite groups. Pure Appl. Math. Q. 8 (2012), no. 1, 199–219.
* [13] D.R. Harmon; ${\rm NK}_{1}$ of finite groups. Proc. Amer. Math. Soc. 100(2) (1987), 229–232.
* [14] R. Hazrat, N. Vavilov; ${\rm K_{1}}$ of Chevalley groups are nilpotent. Journal of Pure and Applied Algebra 179 (2003), no. 1-2, 99–116.
* [15] V.I. Kopeiko; Bass nilpotent unitary ${\rm K_{1}}$ group of unitary ring. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 460 (2017), Voprosy Teorii Predstavleniĭ Algebr i Grupp. 33, 111–119. translation in J. Math. Sci. (N.Y.) 243 (2019), no. 4, 577–582.
* [16] R.D. Martin; Nilgroups of finite abelian groups. Ph.D. Thesis, Columbia University, ProQuest LLC, Ann Arbor, MI, 1976.
* [17] J. Stienstra; Operation in the linear ${\rm K}$-theory of endomorphisms, Current Trends in Algebraic Topology, Conf. Proc. Can. Math. Soc. 2 (1982).
* [18] C. Weibel; Mayer–Vietoris sequences and module structures on ${\rm NK}_{*}$, Springer Lecture Notes in Mathematics 854 (1981), 466–498.
* [19] C. Weibel; Module Structures on the ${\rm K}$-theory of Graded Rings, Journal of Algebra 105 (1987), 465–483.
* [20] C. Weibel; ${\rm NK}_{0}$ and ${\rm NK}_{1}$ of the groups $C_{4}$ and $D_{4}$. Addendum to “Lower algebraic K-theory of hyperbolic 3-simplex reflection groups” by J.-F. Lafont and I. J. Ortiz [MR2495796]. Comment. Math. Helv. 84 (2009), no. 2, 339–349.
# Resilience through Scene Context in Visual Referring Expression Generation
Simeon Junker Sina Zarrieß
Bielefeld University
<EMAIL_ADDRESS>
###### Abstract
Scene context is well known to facilitate humans’ perception of visible
objects. In this paper, we investigate the role of context in Referring
Expression Generation (REG) for objects in images, where existing research has
often focused on distractor contexts that exert pressure on the generator. We
take a new perspective on scene context in REG and hypothesize that contextual
information can be conceived of as a resource that makes REG models more
resilient and facilitates the generation of object descriptions, and object
types in particular. We train and test Transformer-based REG models with
target representations that have been artificially obscured with noise to
varying degrees. We evaluate how properties of the models’ visual context
affect their processing and performance. Our results show that even simple
scene contexts make models surprisingly resilient to perturbations, to the
extent that they can identify referent types even when visual information
about the target is completely missing.
## 1 Introduction
Objects do not appear randomly in the world that surrounds us, but they occur
in predictable spatial, semantic, or functional configurations and relations
to their environment. Research on human perception shows that we “see the
world in scenes” (Bar, 2004), and that prior experience and knowledge of the
world helps us to efficiently process visual stimuli. Even with an extremely
short glimpse at an image, humans remember essential semantic aspects of the
scene and object arrangement (Oliva and Torralba, 2006). This rapid scene
understanding allows us to handle the complexity of the visual world and to
recognize objects in context, e.g., when they are not fully visible (Võ,
2021).
In this paper, we take a new perspective on how visual context facilitates the
automatic generation of descriptions for visual objects, a task well-known as
Referring Expression Generation (REG).
Model | Generated expression (judgement)
---|---
TRFv-0.0 | couch on right (A)
TRFv-0.5 | right brown chair (F)
TRFv-1.0 | right couch (A)
TRFt-0.0 | right couch (A)
TRFt-0.5 | right couch (A)
TRFt-1.0 | right elephant (F)
TRFs-0.0 | couch on right (A)
TRFs-0.5 | right couch (A)
TRFs-1.0 | right couch (A)
Figure 1: Example from RefCOCO testB (displayed with noise level $0.5$) with generated expressions and human judgements. Scene context enables target identification even with full occlusion (TRFv-1.0, TRFs-1.0).
In past years, datasets have become available that provide referring
expressions for objects in images, with objects appearing in relatively
complex real-world contexts (Kazemzadeh et al., 2014; Mao et al., 2016). Yet,
recent work in this area has largely followed the traditional REG paradigm by
Dale and Reiter (1995), where (visual) context is mainly considered in terms of
so-called distractor objects, that are similar to the target and must
therefore be excluded by naming differences (Krahmer and van Deemter, 2012).
These distractors do not facilitate the description task, but even exert
“contextual pressure”, as the speaker needs to reason about which attributes
and words make the expression unambiguous (Cohn-Gordon et al., 2018; Schüz and
Zarrieß, 2021). The main goal of this paper is to widen this commonly accepted
view on the role of visual context in visual REG and investigate how
contextual information can be conceived as a resource that makes the
generation of descriptions easier rather than harder.
In visual REG from images, scene and object information is not available a
priori: Whereas classical REG algorithms mostly rely on symbolic scene
representations, neural generation models in visual REG have to extract object
properties from low-level visual representations of the target and its context
(Schüz et al., 2023). This even applies to properties as fundamental as the
type of an object, i.e. how it is named in the expression. Under ideal
conditions, determining a referent’s type and properties can be regarded as a
relatively simple task, but it becomes non-trivial in the presence of
imperfect visual information, occlusion or noise. At the same time, global
visual scene context can be expected to be of great support in this task, in
light of previous findings on human scene understanding (cf. Section 2).
However, to date, little is known as to how processes of scene understanding
and object type identification interact in REG.
In this work, we hypothesize that visual scene context makes REG models more
resilient, i.e., it allows them to recalibrate predictions that were based on
imperfect target representations. To test this, we adopt a novel experimental
setup for REG: we train and test different model architectures with target
representations that have been artificially obscured with varying degrees of
noise (cf. Figure 1). We provide the models with different context
representations and compare their performance concerning common quality
metrics and a focused human evaluation of their ability to determine referent
types. We test how certain properties of the visual context affect the
processing and performance of REG models, and verify our results with
experiments using further datasets that are substantially different from the
ones commonly used in existing REG research. Our results show that context
makes models surprisingly resilient to perturbations in target
representations, to the extent that they can identify referent types even when
information about the objects themselves is completely missing. We believe
that these results offer new perspectives on the role of scene context in
visual REG.
## 2 Background
#### Human scene understanding
Research on human vision and perception emphasizes the fact that scenes are
not mere collections of objects (Võ, 2021). When humans view a scene, they do
not simply recognize the objects in it, but they understand it is a coherent
whole. Oliva and Torralba (2006) observe that humans perceive the so-called
gist of a scene rapidly and even when local information is missing (e.g.
blurred). Experiments in this field indicate that contextual information can
facilitate the recognition of visible objects across different tasks (Oliva
and Torralba, 2007; Divvala et al., 2009; Galleguillos and Belongie, 2010;
Parikh et al., 2012), and that, on the other hand, incongruent context can
also be misleading (Zhang et al., 2020; Gupta et al., 2022). This means that
human vision exploits learned knowledge about these regularities of the visual world for visual processing (Biederman, 1972; Bar, 2004; Greene, 2013;
Pereira and Castelhano, 2014; Sadeghi et al., 2015). To model these
regularities, Võ (2021) proposed the notion of a “scene grammar” that can
account for the interaction of global and local visual perception and
understanding in humans.
#### Scenes, objects, and image captioning
Much research on V&L is currently concerned with modeling the generation and
understanding of image descriptions, e.g. in tasks like image captioning or
retrieval. Yet, many captioning tasks focus on rather object-centric
descriptions that mention objects and their spatial relationships (Cafagna et
al., 2021). A common representation of scene context in image captioning is
scene graphs, cf. (Yang et al., 2023), which are usually modeled via spatial
relations between bounding boxes of objects. Cafagna et al. 2023 propose a new
task and dataset that foregrounds scene-level instead of object-centric
descriptions. Another perspective on scene knowledge in captioning models is
coming from work that focuses on probing them with perturbed or systematically
varied images: Yin and Ordonez (2017) found that captioning with extremely
reduced inputs of labeled object layouts performs surprisingly well. Related
to this, Nikolaus et al. (2019) find that image captioning models often rely
on regularities in object occurrences, to the extent that they fail to
generalize to new combinations of objects. Their solution is to generate
unseen combinations and challenge models on these. Our goal in this work is
complementary: we aim to understand how exactly generation models may be able
to leverage regular scene knowledge and patterns of object co-occurrence, and
how this may facilitate the handling of imperfect visual information.
#### REG and scene context
REG is concerned with the generation of descriptions that distinguish a
particular object in a given visual context, cf. Krahmer and van Deemter 2012.
Recent visual REG models usually build on image captioning models but are
adapted to generate more pragmatically informative expressions, using e.g.
training objectives (Mao et al., 2016), comprehension modules (Luo and
Shakhnarovich, 2017), reinforcement agents (Yu et al., 2017) or decoding
strategies (Schüz and Zarrieß, 2021). REG models usually process different
forms of context information. Whereas some models encode differences in
appearance between targets and surrounding objects (Yu et al., 2016, 2017;
Tanaka et al., 2019; Kim et al., 2020; Liu et al., 2020), others use
representations of the global image (Mao et al., 2016; Luo and Shakhnarovich,
2017; Zarrieß and Schlangen, 2018; Panagiaris et al., 2020, 2021). Visual
context is often supplemented with the relative position and size of the
target in the image (Mao et al., 2016; Yu et al., 2017; Luo and Shakhnarovich,
2017; Li and Jiang, 2018; Tanaka et al., 2019; Kim et al., 2020; Panagiaris et
al., 2020; Liu et al., 2020).
#### Research gap
Little is known about how visual REG models internally exploit their context
representations and in what way context exactly enhances the generation of
expressions. Here, the implicit assumption often is that models exploit
context in a similar way as symbolic REG models, e.g. the Incremental
Algorithm of Reiter and Dale (2000). However, a key difference to symbolic
REG is that in visual REG failures in the scene and object understanding can
arise from model hallucination or imperfect visual input, cf. (Schüz et al.,
2023). This is especially evident for object types: this attribute had a privileged role in early work (Dale and Reiter, 1995), as types are essential as the heads of referential noun phrases. In visual REG, referents must first
be correctly identified to name them appropriately (Zarrieß and Schlangen,
2017; Silberer et al., 2020a, b), which is challenging in cases of deficient
input, e.g. small or partially occluded objects (Yao and Fei-Fei, 2010). In
this paper, we aim to close this gap and investigate how visual context
information helps REG models to be more resilient to deficits in their target
inputs.
## 3 Experimental Set-Up
### 3.1 Outline and Research Hypotheses
The main idea of this work is to train and test standard REG models on visual
target representations occluded with varying amounts of noise, to investigate
how different combinations of target and context can compensate for this
perturbation. For this, we draw on existing model architectures, and evaluate
the trained models using both out-of-the-box quality metrics and more fine-
grained human evaluation capturing the validity of assigned referent type
labels. The evaluation results are also supported by supplementary analyses as
well as further experiments with an additional data set.
Generally, we expect that automatic metrics and human evaluation scores will
drop for increasing amounts of target noise. However, we also hypothesize that
visual context makes models more resilient, i.e., for the same amount of
noise, models supplied with context outperform variants with only target
information. While we expect this general effect across all conditions, it
should be more pronounced as the amount of occlusion increases.
### 3.2 Models
We set up two transformer-based REG models: TRF and CC. TRF is a transformer
trained from scratch on REG data, CC builds upon a pre-trained language model.
We define variants of both models using a) different combinations of target
and context representations, as the respective model inputs, and b) the amount
of target noise during training and inference. Implementation and training
details for our models can be found in appendix B.
Target representations include the visual contents of the target bounding box
($V_{t}$), its location, and size relative to the global image ($Loc_{t}$). As
context representations, we use the embedding of the global image with the
target masked out ($V_{c}$). For TRF, which is our better performing model, we
also experiment with scene-level information (or scene summaries) about what
kinds of objects are present in the surrounding scene ($S_{c}$), which are
derived from panoptic segmentations (Kirillov et al. 2018, see Section 3.3).
Models processing only target information are indicated with the subscript
$t$, whereas models processing $V_{c}$ and $S_{c}$ context information are
indexed with $v$ and $s$, respectively.
To test our systems for perturbed target representations, we randomly replaced
a fixed proportion of the pixels in the bounding box contents with random
noise during both training and inference. All systems are trained and tested
with three noise settings: $0.0$ as our baseline setting, where no pixels are
perturbed; $0.5$, where $50\%$ of the pixels are replaced with noise; and
$1.0$, where the entire content of the target bounding box is occluded, i.e.
no visual information about the target is available. Noise levels for training
and evaluation are shown in the index of the model identifiers.
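A minimal sketch of the perturbation step described above (the function name and tensor layout are our assumptions, not the paper's exact code):

```python
import torch

def perturb_target(pixels: torch.Tensor, noise_level: float) -> torch.Tensor:
    """Replace a fixed proportion of target-box pixels with random noise.

    pixels: (C, H, W) tensor holding the bounding-box contents in [0, 1].
    noise_level: 0.0 (unchanged), 0.5 (half the pixels), 1.0 (fully occluded).
    """
    if noise_level <= 0.0:
        return pixels
    _, h, w = pixels.shape
    # Sample a spatial mask: True means this pixel position is replaced.
    mask = torch.rand(h, w) < noise_level
    noise = torch.rand_like(pixels)
    return torch.where(mask.unsqueeze(0), noise, pixels)
```

The same routine is applied at both training and inference time, matching the three noise settings above.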
#### Vanilla Transformer (TRF)
We use the model from Schüz and Zarrieß (2023), which is based on an existing
implementation for image captioning (https://github.com/saahiluppal/catr). The
model builds on ResNet (He et al., 2015) encodings for targets and context,
which are passed on to an encoder/decoder transformer in the style of Vaswani
et al. (2017). The model is largely comparable to the system in Panagiaris et
al. (2021), but without self-critical sequence training and layer-wise
connections between encoder and decoder. Unlike e.g. Mao et al. (2016), we
train the model using Cross Entropy Loss.
We compare three variants of this model, which take as input concatenated
feature vectors comprised of the representations described above. TRFt
receives only target information, i.e. the input vector is composed as
$[V_{t};Loc_{t}]$. In addition to this, TRFv receives representations of the
global image, with the input vector structure $[V_{t};Loc_{t};V_{c}]$.
Finally, TRFs takes scene-level representations about the relative area
occupied by different object classes in the visual context, i.e.
$[V_{t};Loc_{t};S_{c}]$.
For both $V_{t}$ and $V_{c}$, the respective parts of the image are scaled to
$224\times 224$ resolution (keeping the original ratio and masking out the
padding) and encoded with ResNet-152 (He et al., 2015), resulting in
representations with 196 features ($14\times 14$) and hidden size $512$ for
both target and context. $Loc_{t}$ is a vector of length 5 with the corner
coordinates of the target bounding box and its area relative to the whole
image, projected to the model’s hidden size. The scene summary input for TRFs
consists of 134 features, which represent the relative area all of the 134
object or stuff types in COCO occupy in the visual context. To use this
information in our model, we embed each of the object and stuff types in an
additional embedding layer with hidden size $512$, which is jointly trained with the
model. In the model’s forward pass, we weight each of the individual category
embeddings with the relative area it occupies in the respective input image,
and form $S_{c}$ by concatenating the embeddings.
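A sketch of how the scene-summary input $S_{c}$ could be assembled (module and variable names are ours; the paper specifies 134 COCO thing/stuff categories and hidden size 512, and we interpret the concatenated embeddings as a sequence of 134 tokens):

```python
import torch
import torch.nn as nn

NUM_CATEGORIES = 134  # COCO thing + stuff classes
HIDDEN = 512

class SceneSummary(nn.Module):
    """One learned embedding per category, scaled by its relative area."""

    def __init__(self):
        super().__init__()
        self.cat_emb = nn.Embedding(NUM_CATEGORIES, HIDDEN)

    def forward(self, areas: torch.Tensor) -> torch.Tensor:
        # areas: (B, 134), relative area each category covers in the context.
        emb = self.cat_emb.weight              # (134, HIDDEN)
        # Weight each category embedding by its area; the resulting 134
        # vectors are kept as a sequence (i.e. concatenated along dim 1).
        return areas.unsqueeze(-1) * emb       # (B, 134, HIDDEN)
```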
#### Fine-tuned GPT-2 (CC)
We adapt the ClipCap model in Mokady et al. (2021) to the REG task. The
authors use a simple MLP-based mapping network to construct fixed-size
prefixes for GPT-2 (Radford et al., 2019) from CLIP encodings (Radford et al.,
2021), and fine-tune both the mapping network and the language model for the
image captioning task. To the best of our knowledge, this is the first model
tested for REG which utilizes a pre-trained language model.
As for the TRF model, we compare different variants of this base architecture.
First, in CCt, the GPT-2 prefix is constructed as $[V_{t};Loc_{t}]$, where
$V_{t}$ is computed like the CLIP prefix in the original paper (but for the
contents of the target bounding box) and $Loc_{t}$ is the location features
described above, projected into a single prefix token. For CCv, we again add
the global image (minus the target) as context: $V_{c}$ is computed in the
same way as $V_{t}$, but with a separate mapping network and with the global
image as the visual input. Here, the final prefix is $[V_{t};V_{c};Loc_{t}]$.
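Schematically, the CCv prefix could be assembled as follows (shapes and names are our assumptions; cf. Mokady et al., 2021):

```python
import torch

def build_prefix(v_t: torch.Tensor, v_c: torch.Tensor,
                 loc_t: torch.Tensor) -> torch.Tensor:
    """Concatenate target, context, and location tokens into a GPT-2 prefix.

    v_t:   (B, K, D) target prefix from the target mapping network.
    v_c:   (B, K, D) context prefix from a separate mapping network.
    loc_t: (B, D)    location features projected to a single prefix token.
    """
    return torch.cat([v_t, v_c, loc_t.unsqueeze(1)], dim=1)  # (B, 2K+1, D)
```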
### 3.3 Data
We use RefCOCO (Kazemzadeh et al., 2014) for training and evaluation, which
contains bounding boxes and expressions for MSCOCO images (Lin et al., 2014).
The dataset contains separate testA and testB splits (1.9k and 1.8k items),
where testA only contains humans as referential targets and testB all other
object classes (but not humans).
We also conduct experiments on the detection dataset PACO-EGO4D (Ramanathan et
al., 2023), which contains annotations for objects and object parts in first-
person video frames (Grauman et al., 2022). In comparison to RefCOCO, PACO is
larger (75k items in test split), data is less standardized and objects are
often harder to recognize.
To construct the scene summaries for our TRFs model and analyze attention
allocation patterns, we rely on annotations for panoptic segmentation
(Kirillov et al., 2018), i.e. dense pixel-level segmentation masks for both
thing and stuff classes in MSCOCO images (Caesar et al., 2016).
### 3.4 Evaluation
#### Generation Quality / N-Gram Metrics
To estimate the general generation capabilities of our models we rely on BLEU
(Papineni et al., 2002) and CIDEr (Vedantam et al., 2014) as established
metrics for automatic evaluation.
#### Referent Type Assignment / Human Evaluation
To test whether our models succeed in assigning valid types to referents, we
collect human judgements for generated expressions for a subset of 200 items
from the RefCOCO testB split. This split contains only non-human referents,
and was chosen due to preliminary tests indicating that judgments about the
validity of type labels for human referents are often difficult, e.g. due to
ambiguity regarding the gender of depicted individuals. The annotators were
instructed to rate only those parts of the expressions that refer to the type
of the referential target. For example, “the black dog” should be rated as
correct if the target is of the type dog, but is actually white. All items
should be assigned exactly one of the following categories:
* •
Adequate / A: The generated expression contains a valid type description for
the referent.
* •
Misaligned / M: Type designators do not apply to the intended target, but to
other objects (partially) captured by the bounding box.
* •
Omission / O: Omission of the target type, e.g. description via non-type
attributes, pronominalization or general nouns such as “thing”.
* •
False / F: Type designations that do not apply to the intended target or other
objects captured by the bounding box.
Previous research has shown considerable variation in object naming (Silberer
et al. 2020a, b, among others). Therefore, for the A category, type
descriptions do not have to match the ground truth annotations, but different
labels can be considered adequate if they represent valid descriptions of the
target type. For example, dog, pet and animal would be considered equally
correct for depicted dogs. Subsequent to the human evaluation, we investigate
correlations between the evaluation results and further properties of the
visual context.
#### Referent Type Assignment / Classification Accuracy
We complement the human evaluation with RefCOCO with further experiments using
PACO-EGO4D. While PACO does not provide referring expressions, we treat the
object and object-part annotations similarly, i.e. our models generate the
category strings (instead of predicting the respective category in a
multiclass classification scheme). We evaluate the identification of object
and part types by measuring the accuracy of the models in exactly reproducing
the respective category strings (for entire objects) or the strings in the
object-part tuples (for object parts).
#### Attention Allocation
We also examine how our TRFv model allocates attention over different parts of
the input as a result of different noise levels during training. First, we
follow Schüz and Zarrieß (2023) in measuring the ratio between target and
context partitions, i.e. the summed attention weights directed to the target
and its context in both the encoder and decoder multi-head attention. For
this, we compute $\alpha_{t}$, $\alpha_{l}$ and $\alpha_{c}$ as the cumulative
attention weights directed to $V_{t}$, $Loc_{t}$ and $V_{c}$, respectively, by
calculating the sum of the attention weights assigned to each input partition,
normalized such that $\alpha_{t}+\alpha_{l}+\alpha_{c}=1$. We also quantify
the attention difference between $\alpha_{t}$ and $\alpha_{c}$ as
$\Delta_{t,c}$, by excluding $\alpha_{l}$ and normalizing the target and
context scores such that $\alpha_{t}+\alpha_{c}=1$. Then, we calculate
$\Delta_{t,c}=\alpha_{t}-\alpha_{c}$, i.e. $0<\Delta_{t,c}\leq 1$ if there is
relative focus on the target, $-1\leq\Delta_{t,c}<0$ if there is relative
focus on the context, and $\Delta_{t,c}=0$ when both parts are weighted
equally.
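A minimal sketch of this computation, assuming a one-dimensional vector of attention weights over input positions and boolean masks marking which positions belong to $V_{t}$, $Loc_{t}$ and $V_{c}$ (all names are illustrative):

```python
import numpy as np

def partition_attention(attn, target_mask, loc_mask, context_mask):
    # Cumulative attention per input partition, normalized to sum to 1.
    a_t = attn[target_mask].sum()
    a_l = attn[loc_mask].sum()
    a_c = attn[context_mask].sum()
    total = a_t + a_l + a_c
    a_t, a_l, a_c = a_t / total, a_l / total, a_c / total

    # Target-context difference with the location partition excluded,
    # renormalized so that a_t + a_c = 1 before taking the difference.
    delta_tc = (a_t - a_c) / (a_t + a_c)
    return a_t, a_l, a_c, delta_tc
```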
Second, we measure the model attention allocated to different classes of
objects in the visual context, using the panoptic segmentation data described
in Section 3.3. Here, we first interpolate the model attention map to fit the
original dimensions of the image, and retrieve the segmentation masks for the
respective image. For each category $x\in X$, we then compute the cumulative
attention weight $\alpha_{x}$ by computing the sum of pixels attributed to
this category, weighted by the model attention scores over the image and
normalized such that $\sum_{x\in X}\alpha_{x}=1$. We report $\alpha_{x=tgt}$,
i.e. attention allocated to areas of the visual context assigned the same
category as the referential target. As the covered area varies between object
categories, we get different scores even if the model attention is perfectly
balanced over the image. To address this, we also report scores that are
normalized by the area covered by the category. Scores $>1$ indicate that the
category receives more attention than would be expected from its coverage area.
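A sketch of the per-category computation, assuming the attention map has already been interpolated to image resolution and that one boolean segmentation mask per category is available (illustrative names):

```python
import numpy as np

def category_attention(attn_map, masks):
    # attn_map: (H, W) attention scores over the image;
    # masks: dict mapping category -> (H, W) boolean panoptic mask.
    raw = {c: float((attn_map * m).sum()) for c, m in masks.items()}
    total = sum(raw.values())
    alpha = {c: v / total for c, v in raw.items()}   # sums to 1
    area = {c: m.mean() for c, m in masks.items()}   # coverage ratio
    # Scores > 1: category attended more than expected from its area alone.
    normalized = {c: alpha[c] / area[c] for c in masks if area[c] > 0}
    return alpha, normalized
```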
## 4 Results on RefCOCO
### 4.1 Automatic Quality Metrics
system | Bl1 (testA) | Bl2 (testA) | CDr (testA) | Bl1 (testB) | Bl2 (testB) | CDr (testB)
---|---|---|---|---|---|---
TRFt-0.0 | 0.55 | 0.35 | 0.86 | 0.57 | 0.35 | 1.28
TRFt-0.5 | 0.49 | 0.32 | 0.73 | 0.51 | 0.32 | 1.04
TRFt-1.0 | 0.35 | 0.17 | 0.34 | 0.30 | 0.14 | 0.20
TRFv-0.0 | 0.58 | 0.39 | 0.93 | 0.61 | 0.39 | 1.36
TRFv-0.5 | 0.54 | 0.35 | 0.81 | 0.56 | 0.36 | 1.24
TRFv-1.0 | 0.46 | 0.29 | 0.60 | 0.55 | 0.36 | 1.14
TRFs-0.0 | 0.54 | 0.34 | 0.84 | 0.57 | 0.35 | 1.27
TRFs-0.5 | 0.52 | 0.35 | 0.81 | 0.56 | 0.35 | 1.28
TRFs-1.0 | 0.42 | 0.24 | 0.51 | 0.53 | 0.33 | 1.12
CCt-0.0 | 0.48 | 0.30 | 0.70 | 0.47 | 0.28 | 0.88
CCt-0.5 | 0.38 | 0.22 | 0.48 | 0.36 | 0.20 | 0.52
CCt-1.0 | 0.35 | 0.16 | 0.37 | 0.29 | 0.12 | 0.16
CCv-0.0 | 0.57 | 0.38 | 0.92 | 0.58 | 0.37 | 1.25
CCv-0.5 | 0.51 | 0.32 | 0.77 | 0.49 | 0.31 | 0.97
CCv-1.0 | 0.40 | 0.23 | 0.46 | 0.38 | 0.21 | 0.46
Table 1: BLEU1, BLEU2 and CIDEr scores on RefCOCO testA and testB for all TRF
and CC variants. Systems indicated with t can only access target information,
v and s models are supplied with visual context and scene summaries,
respectively. Target noise proportions ($0.0$, $0.5$, $1.0$) are denoted in
the indices. Generally, context information leads to improved results,
especially for high noise settings.
Table 1 shows the results of the automatic evaluation of our systems on
RefCOCO testA and testB. Interestingly, throughout all conditions, the simpler
TRF model outperforms CC, although the latter builds on pre-trained CLIP and
GPT-2 which are known to be effective for image captioning (Mokady et al.,
2021). It is possible that CC cannot fully benefit from CLIP pre-training due
to the structural differences between bounding box contents and full images,
or that performance drops result from higher compression when constructing the
GPT prefixes. Also, TRF achieves a considerably larger performance gain than
CC when adding scene context, indicating that this model is more effective at
exploiting contextual information.
For both TRF and CC, scores consistently drop with increasing target noise.
However, this is mitigated if context is available: For both model types,
variants incorporating visual context are substantially more robust against
target noise, even in cases where target representations are entirely occluded
by noise ($1.0$ in the subscripts). A striking example is RefCOCO testB, where
CIDEr drops to $0.20$ for TRFt-1.0 and $0.16$ for CCt-1.0, but TRFv-1.0
achieves scores as high as $1.14$. Here, CCv-1.0 drastically underperforms
with CIDEr $0.46$, but still outperforms its no-context counterpart.
Interestingly, we see considerable differences between testA and testB. Both
TRFt-1.0 and CCt-1.0 achieve better results on testA, but the scores are
generally higher on testB. Importantly, testA is restricted to human
referents, while testB encompasses all non-human object classes. Therefore,
models without any access to meaningful visual input could often guess right
on the frequent human classes, but struggle with the higher variation in
testB. This is supported by the inverse pattern that visual context
particularly improves the testB results. Here, differences between $t$ and $v$
variants are much higher, suggesting that context is more informative for non-
human objects, i.e. there are stronger associations between certain types of
objects and the contexts in which they occur.
Another striking result is that the same patterns emerge if we exchange visual
context representations with more abstract scene summaries: TRFs-1.0 achieves
CIDEr $1.12$ for entirely occluded targets in testB, comparable to TRFv-1.0.
Interestingly, between TRFs-0.0 and TRFs-0.5 the scene model slightly improves
in CIDEr scores, i.e. it can fully compensate for partial target occlusion.
### 4.2 Target Identification
Human judgements were collected from 5 expert annotators, including the first
author. Every system was evaluated independently by three annotators, with a
Fleiss’ Kappa of 0.85, indicating almost perfect agreement (Landis and Koch,
1977). The final judgments are determined by majority vote.
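The agreement statistic can be reproduced along the following lines; the exact tooling is not specified here, so the use of statsmodels is an assumption:

```python
# Sketch: Fleiss' kappa over categorical judgments (0=A, 1=M, 2=O, 3=F).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: (n_items, n_raters) matrix of category labels (invented data).
ratings = np.array([[0, 0, 0], [0, 3, 3], [2, 2, 2]])
table, _ = aggregate_raters(ratings)   # -> (n_items, n_categories) counts
print(fleiss_kappa(table, method="fleiss"))
```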
The human evaluation results for the 200-item subset of RefCOCO testB are
shown in Table 2 and illustrated in Figure 2. Generally, the results mirror
the pattern in the BLEU and CIDEr scores discussed previously: Across all
conditions, A scores drop if noise ratios increase, while F scores increase at
the same time. For M and O the results are less clear, but higher noise
settings generally lead to higher rates than the baseline setting for both
categories. This holds for TRF and CC, but TRF again performs better
throughout all conditions. Again, visual context clearly allows the models to
compensate for deficient target representations: While CCv-1.0 assigns
adequate types in almost $20\%$ of all cases (as compared to $0.5\%$ without
context information), TRFv-1.0 scores an impressive $66\%$, only $15.5\%$ less
than without any target noise.
Interestingly, scene summaries allow the model to compensate for deficient
target representations even more effectively. Across all noise settings, TRFs
achieves the highest A scores and the lowest F and O rates, even without any
target noise, unlike for BLEU and CIDEr (cf. Section 4.1).
system | % A | % F | % O | % M
---|---|---|---|---
TRFt-0.0 | 84.0 | 10.5 | 5.0 | 0.5
TRFt-0.5 | 66.0 | 27.5 | 4.0 | 2.5
TRFt-1.0 | 1.5 | 75.5 | 19.5 | 3.5
TRFv-0.0 | 81.5 | 12.0 | 5.5 | 1.0
TRFv-0.5 | 70.5 | 18.5 | 7.0 | 4.0
TRFv-1.0 | 66.0 | 26.5 | 4.0 | 3.5
TRFs-0.0 | 89.0 | 7.0 | 3.5 | 0.5
TRFs-0.5 | 81.0 | 14.5 | 2.5 | 2.0
TRFs-1.0 | 68.0 | 22.0 | 1.5 | 8.5
CCt-0.0 | 45.5 | 46.5 | 7.0 | 1.0
CCt-0.5 | 23.0 | 61.0 | 13.0 | 3.0
CCt-1.0 | 0.5 | 84.5 | 11.0 | 4.0
CCv-0.0 | 76.0 | 21.0 | 3.0 | 0.0
CCv-0.5 | 55.0 | 36.0 | 6.5 | 2.5
CCv-1.0 | 19.5 | 68.5 | 9.0 | 3.0
$human$ | 91.0 | 2.0 | 6.5 | 0.5
Table 2: Results of the human evaluation on 200 items from RefCOCO-testB.
Generally, contextual information leads to more adequate type descriptions,
even if target representations are entirely occluded.
### 4.3 How do models exploit scene context?
So far, our results indicate that the scene context of referential targets
greatly improves the resilience of REG, to the extent that correct predictions
are possible to a surprising rate even if target information is missing. Here,
we aim to analyze how exactly contextual information is exploited by the
models. As discussed in Section 2, previous research indicates that
regularities of object co-occurrence and scene properties facilitate e.g.
object recognition in context. However, qualitative inspection of our data
indicates that for high noise, our systems often copy from context, i.e.
correctly predict referent types that are also present in the surrounding
scene, given that many classes of objects tend to appear in groups. We
investigate this in more detail and (a) perform statistical tests to check
whether similar objects in context support identification performance and (b)
analyze the attention distribution for TRFv to see whether the model learns to
attend to the respective objects in context.
#### Statistical analysis: Target categories in context
We hypothesize that recalibration through context should be more effective
when the target class is also present in the scene. To test this, we conduct a
correlation analysis between identification accuracy and the relative coverage
of the target class in the context. For this, we again rely on panoptic
segmentation annotations (cf. Section 3.3) to compute the proportion of pixels
of the same class as the referential target, normalized by the total size of
the context. We binarize the human evaluation scores (True if rated as A, else
False) and compute the Point-biserial correlation coefficient between the
relative coverage of the target class in context and the identification
accuracy.
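The test itself reduces to a few lines; the sketch below uses scipy's pointbiserialr (an assumption; Appendix C mentions scikit-learn for the statistical analysis) with invented data:

```python
import numpy as np
from scipy.stats import pointbiserialr

# adequate: per-item binary judgment (1 if rated A); coverage: relative pixel
# coverage of the target class in the context (illustrative values).
adequate = np.array([1, 0, 1, 1, 0])
coverage = np.array([0.30, 0.02, 0.18, 0.25, 0.00])

corr, p = pointbiserialr(adequate, coverage)
print(f"corr: {corr:.3f}, p = {p:.3f}")
```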
For both TRFv-1.0 (corr: 0.321, p < 0.001) and TRFs-1.0 (corr: 0.277, p <
0.001) we found that a higher prevalence of the target class in the visual
context leads to significantly higher scores in human evaluation, i.e. systems
can more easily compensate for a lack of visual target information if the context
contains similar objects. For CCv-1.0, we see the same trend, but without
statistical significance (corr: 0.136, p = 0.055). For cases where neither
system can identify the target class, we see a strong inverse correlation
(corr: -0.267, p < 0.001).
#### Model attention to target category in context
system | $\alpha_{x=tgt}$ (enc) | norm. (enc) | $\alpha_{t}$ (enc) | $\alpha_{l}$ (enc) | $\alpha_{c}$ (enc) | $\Delta_{t,c}$ (enc) | $\alpha_{x=tgt}$ (dec) | norm. (dec) | $\alpha_{t}$ (dec) | $\alpha_{l}$ (dec) | $\alpha_{c}$ (dec) | $\Delta_{t,c}$ (dec)
---|---|---|---|---|---|---|---|---|---|---|---|---
TRFv-0.0 | 36.70 | 1.77 | 44.49 | 9.20 | 46.31 | -0.02 | 26.94 | 1.20 | 52.65 | 9.69 | 37.67 | 0.16
TRFv-0.5 | 35.27 | 1.64 | 18.90 | 16.06 | 65.04 | -0.55 | 40.56 | 2.05 | 32.65 | 14.41 | 52.94 | -0.24
TRFv-1.0 | 35.63 | 1.70 | 41.05 | 0.67 | 58.28 | -0.17 | 43.66 | 2.26 | 43.75 | 0.48 | 55.78 | -0.12
Table 3: Attention allocation scores for TRFv, averaged over all RefCOCO testB
items. $\alpha$ scores are reported in %.
To see whether the TRFv model has indeed learned the hypothesized copying
strategy, we compute the distribution of attention mass directed to target,
location and context partitions as well as to objects sharing the target
category in the visual context, as described in Section 3.4.
In Table 3, we report the analysis results, averaged for all items in the
RefCOCO testB split. The $\Delta_{t,c}$ scores show that the context partition
receives more attention if the target is occluded with noise during the
training, in line with our previous results. However, surprisingly, more
attention is allocated to the context in the $0.5$ noise setting than if no
target information is accessible. The $\alpha$ scores also indicate that
location features are especially focused in this case, suggesting that this
source of information is especially helpful if visual target information is
reduced, but not entirely missing.
As shown by the $\alpha_{x=tgt}$ scores in Table 3, target noise during model
training does not seem to have a consistent effect on encoder attention to
context objects sharing the target category. For the decoder, however, we see
a notable increase: Whereas the baseline model assigns an average of 26.94%
of its attention mass to context objects sharing the target class, this is
significantly increased for higher noise settings (40.56% and 43.66%). The
normalized results exhibit the same patterns, i.e. as a result of target
noise, context objects sharing the target class receive more than double the
attention mass that would be expected based on their size in the image.
These results suggest that the TRF model learns to exploit the occurrence of
similar objects in target and context as a common property of scenes in
RefCOCO. However, due to the prevalence of frequent object classes and the
reliance on published photos, it is unclear how representative these results
are. In the next section, we examine whether these patterns can be replicated
for the PACO dataset.
## 5 Results on PACO
In our experiments on the EGO4D portion of the PACO dataset (Section 3.3), we
treat the category strings in the detection dataset as expressions and train
TRFt and TRFv to generate those strings given the contents of the target
bounding box and (for the latter variant) the visual context (see Section
3.4). We report accuracy scores for the test split in Table 4. Here, notably,
the TRFt variant achieves higher accuracy scores than TRFv, unless the entire
visual target representation is covered with random noise. This suggests that
visual context is less informative or more difficult to process in PACO than
RefCOCO. However, the (comparably small) gain of TRFv-1.0 over TRFt-1.0
indicates that the model can leverage the visual context to a certain degree.
While some of the differences to RefCOCO may result from different
experimental settings (e.g. class strings instead of expressions), the PACO
results also hint towards general problems with datasets relying on scraped
images such as RefCOCO, in that they may not be sufficiently
representative of the visual complexity in everyday scenes.
| $acc_{obj}$ | $acc_{obj-part}$
---|---|---
TRFt-0.0 | 60.93 | 34.03
TRFt-0.5 | 45.30 | 25.57
TRFt-1.0 | 16.47 | 7.55
TRFv-0.0 | 56.97 | 33.22
TRFv-0.5 | 34.92 | 20.31
TRFv-1.0 | 22.34 | 11.11
Table 4: Results for TRFt and TRFv on PACO-EGO4D. $acc_{obj}$ describes the
accuracy for reproducing object category strings, $acc_{obj-part}$ for
reproducing (object, part) tuples for annotated object parts.
## 6 Discussion and Conclusion
Our findings show that contextual information about the surroundings of
referential targets makes REG models more resilient against perturbations in
visual target representations. Even for conditions where no target information
is present at all, REG models maintain good results in automatic quality
metrics and identify referent types with high accuracy, as shown in the human
evaluation results. This holds for different kinds of context: While
especially the TRFv model is able to leverage ResNet encodings of image
contents outside the target bounding box, the same applies to scene-level
representations of depicted objects, as included in the TRFs model.
Interestingly, our subsequent analyses suggest that our context models
implicitly learned to copy from the visual context, i.e. assign labels to
referents which also apply to visible context objects. While the weaker
context effects in our PACO results suggest that this strategy is not
universally applicable, it appears to be highly effective for the more regular
RefCOCO data. This is in stark contrast to basic assumptions of the REG
paradigm, where context information is considered important mainly to ensure
that references can be resolved without ambiguity. Here, we show that it is also
a valuable source for further communicative goals, i.e. the truthfulness of
generated expressions.
Overall, our results indicate that the influence of visual context in REG is
more multifaceted than reflected in previous studies. Importantly, however,
this study only provides an initial spotlight, as research in related fields
suggests that there are other and more complex ways in which visual scene
context may facilitate reference production. With this in mind, we strongly
advocate further research into scene context at the interface of perceptual
psychology, computer vision and language generation.
## 7 Limitations
We identify the following limitations in our study:
First, in both training and evaluation, we do not consider pragmatic
informativeness as a core criterion for the REG task. We train our models
using Cross Entropy Loss and do not test whether the generated expressions
unambiguously describe the referential target, instead focusing on semantic
adequacy as an important prerequisite for the generation of successful
referential expressions. However, we acknowledge that a comprehensive view
would require the consideration of both semantic and pragmatic aspects.
Also, we do not consider recent developments such as multimodal LLMs, although
the high diversity of their training data would contribute an interesting
aspect to this study. Here, we selected our models with a focus on both
modifiability and transparent processing.
Finally, additional vision and language datasets such as VisualGenome (Krishna
et al., 2016) would have made the results more representative. However, due to
time and space constraints, we leave this for future research.
## References
* Bar (2004) Moshe Bar. 2004. Visual objects in context. _Nature Reviews Neuroscience_ , 5(8):617–629.
* Biederman (1972) Irving Biederman. 1972. Perceiving real-world scenes. _Science_ , 177(4043):77–80.
* Caesar et al. (2016) Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. 2016. Coco-stuff: Thing and stuff classes in context.
* Cafagna et al. (2021) Michele Cafagna, Kees van Deemter, and Albert Gatt. 2021. What vision-language models ‘see’ when they see scenes.
* Cafagna et al. (2023) Michele Cafagna, Kees van Deemter, and Albert Gatt. 2023. HL dataset: Visually-grounded description of scenes, actions and rationales. In _Proceedings of the 16th International Natural Language Generation Conference_ , pages 293–312, Prague, Czechia. Association for Computational Linguistics.
* Cohn-Gordon et al. (2018) Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference.
* Dale and Reiter (1995) Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. _Cognitive Science_ , 19(2):233–263.
* Divvala et al. (2009) Santosh K. Divvala, Derek Hoiem, James H. Hays, Alexei A. Efros, and Martial Hebert. 2009. An empirical study of context in object detection. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE.
* Galleguillos and Belongie (2010) Carolina Galleguillos and Serge Belongie. 2010. Context based object categorization: A critical survey. _Computer Vision and Image Understanding_ , 114(6):712–722.
* Grauman et al. (2022) Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, and Jitendra Malik. 2022. Ego4d: Around the world in 3,000 hours of egocentric video. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_. IEEE.
* Greene (2013) Michelle R. Greene. 2013. Statistics of high-level scene context. _Frontiers in Psychology_ , 4.
* Gupta et al. (2022) Vipul Gupta, Zhuowan Li, Adam Kortylewski, Chenyu Zhang, Yingwei Li, and Alan Yuille. 2022. Swapmix: Diagnosing and regularizing the over-reliance on visual context in visual question answering. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_. IEEE.
* He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition.
* Kazemzadeh et al. (2014) Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 787–798, Doha, Qatar. Association for Computational Linguistics.
* Kim et al. (2020) Jungjun Kim, Hanbin Ko, and Jialin Wu. 2020. CoNAN: A complementary neighboring-based attention network for referring expression generation. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 1952–1962, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Kirillov et al. (2018) Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, and Piotr Dollár. 2018. Panoptic segmentation.
* Krahmer and van Deemter (2012) Emiel Krahmer and Kees van Deemter. 2012. Computational generation of referring expressions: A survey. _Computational Linguistics_ , 38(1):173–218.
* Krishna et al. (2016) Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations.
* Landis and Koch (1977) J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. _Biometrics_ , 33(1):159.
* Li and Jiang (2018) Xiangyang Li and Shuqiang Jiang. 2018. Bundled object context for referring expressions. _IEEE Transactions on Multimedia_ , 20(10):2749–2760.
* Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In _Computer Vision – ECCV 2014_ , pages 740–755, Cham. Springer International Publishing.
* Liu et al. (2020) Jingyu Liu, Wei Wang, Liang Wang, and Ming-Hsuan Yang. 2020. Attribute-guided attention for referring expression generation and comprehension. _IEEE Transactions on Image Processing_ , 29:5244–5258.
* Luo and Shakhnarovich (2017) R. Luo and Gregory Shakhnarovich. 2017. Comprehension-guided referring expressions. _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 3125–3134.
* Mao et al. (2016) Junhua Mao, J. Huang, A. Toshev, Oana-Maria Camburu, A. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 11–20.
* Mokady et al. (2021) Ron Mokady, Amir Hertz, and Amit H. Bermano. 2021. Clipcap: Clip prefix for image captioning.
* Nikolaus et al. (2019) Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Aralikatte, and Desmond Elliott. 2019. Compositional generalization in image captioning. In _Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)_ , pages 87–98, Hong Kong, China. Association for Computational Linguistics.
* Oliva and Torralba (2006) Aude Oliva and Antonio Torralba. 2006. Chapter 2 building the gist of a scene: the role of global image features in recognition. In _Progress in Brain Research_ , pages 23–36. Elsevier.
* Oliva and Torralba (2007) Aude Oliva and Antonio Torralba. 2007. The role of context in object recognition. _Trends in Cognitive Sciences_ , 11(12):520–527.
* Panagiaris et al. (2020) Nikolaos Panagiaris, Emma Hart, and Dimitra Gkatzia. 2020. Improving the naturalness and diversity of referring expression generation models using minimum risk training. In _Proceedings of the 13th International Conference on Natural Language Generation_ , pages 41–51, Dublin, Ireland. Association for Computational Linguistics.
* Panagiaris et al. (2021) Nikolaos Panagiaris, Emma Hart, and Dimitra Gkatzia. 2021. Generating unambiguous and diverse referring expressions. _Computer Speech & Language_, 68:101184.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
* Parikh et al. (2012) Devi Parikh, C. Lawrence Zitnick, and Tsuhan Chen. 2012. Exploring tiny images: The roles of appearance and contextual information for machine and human object recognition. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 34(10):1978–1991.
* Pedregosa et al. (2011) Fabian Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830.
* Pereira and Castelhano (2014) Effie J. Pereira and Monica S. Castelhano. 2014. Peripheral guidance in scenes: The interaction of scene context and object content. _Journal of Experimental Psychology: Human Perception and Performance_ , 40(5):2056–2072.
* Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In _Proceedings of the 38th International Conference on Machine Learning_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 8748–8763. PMLR.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9.
* Ramanathan et al. (2023) Vignesh Ramanathan, Anmol Kalia, Vladan Petrovic, Yi Wen, Baixue Zheng, Baishan Guo, Rui Wang, Aaron Marquez, Rama Kovvuri, Abhishek Kadian, Amir Mousavi, Yiwen Song, Abhimanyu Dubey, and Dhruv Mahajan. 2023. Paco: Parts and attributes of common objects. In _2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_. IEEE.
* Reiter and Dale (2000) Ehud Reiter and Robert Dale. 2000. _Building natural language generation systems_. Cambridge University Press, Cambridge, U.K. New York.
* Sadeghi et al. (2015) Zahra Sadeghi, James L. McClelland, and Paul Hoffman. 2015. You shall know an object by the company it keeps: An investigation of semantic representations derived from object co-occurrence in visual scenes. _Neuropsychologia_ , 76:52–61.
* Schüz and Zarrieß (2021) Simeon Schüz and Sina Zarrieß. 2021. Decoupling pragmatics: Discriminative decoding for referring expression generation. In _Proceedings of the Reasoning and Interaction Conference (ReInAct 2021)_ , pages 47–52, Gothenburg, Sweden. Association for Computational Linguistics.
* Schüz and Zarrieß (2023) Simeon Schüz and Sina Zarrieß. 2023. Keeping an eye on context: Attention allocation over input partitions in referring expression generation. In _Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023)_ , pages 20–27, Prague, Czech Republic. Association for Computational Linguistics.
* Schüz et al. (2023) Simeon Schüz, Albert Gatt, and Sina Zarrieß. 2023. Rethinking symbolic and visual context in referring expression generation. _Frontiers in Artificial Intelligence_ , 6.
* Silberer et al. (2020a) Carina Silberer, Sina Zarrieß, and Gemma Boleda. 2020a. Object naming in language and vision: A survey and a new dataset. In _Proceedings of the Twelfth Language Resources and Evaluation Conference_ , pages 5792–5801, Marseille, France. European Language Resources Association.
* Silberer et al. (2020b) Carina Silberer, Sina Zarrieß, Matthijs Westera, and Gemma Boleda. 2020b. Humans meet models on object naming: A new dataset and analysis. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 1893–1905, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Tanaka et al. (2019) M. Tanaka, Takayuki Itamochi, K. Narioka, Ikuro Sato, Y. Ushiku, and T. Harada. 2019. Generating easy-to-understand referring expressions for target identifications. _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_ , pages 5793–5802.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
* Vedantam et al. (2014) Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2014. Cider: Consensus-based image description evaluation.
* Võ (2021) Melissa Le-Hoa Võ. 2021. The meaning and structure of scenes. _Vision Research_ , 181:10–20.
* Yang et al. (2023) Xu Yang, Jiawei Peng, Zihua Wang, Haiyang Xu, Qinghao Ye, Chenliang Li, Songfang Huang, Fei Huang, Zhangzikang Li, and Yu Zhang. 2023. Transforming visual scene graphs to image captions. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 12427–12440, Toronto, Canada. Association for Computational Linguistics.
* Yao and Fei-Fei (2010) Bangpeng Yao and Li Fei-Fei. 2010. Modeling mutual context of object and human pose in human-object interaction activities. In _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_. IEEE.
* Yin and Ordonez (2017) Xuwang Yin and Vicente Ordonez. 2017. Obj2Text: Generating visually descriptive language from object layouts. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 177–187, Copenhagen, Denmark. Association for Computational Linguistics.
* Yu et al. (2016) Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. 2016. Modeling context in referring expressions. In _Computer Vision – ECCV 2016_ , pages 69–85, Cham. Springer International Publishing.
* Yu et al. (2017) Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017. A joint speaker-listener-reinforcer model for referring expressions. In _Computer Vision and Pattern Recognition (CVPR)_ , volume 2.
* Zarrieß and Schlangen (2017) Sina Zarrieß and David Schlangen. 2017. Obtaining referential word meanings from visual and distributional information: Experiments on object naming. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 243–254, Vancouver, Canada. Association for Computational Linguistics.
* Zarrieß and Schlangen (2018) Sina Zarrieß and David Schlangen. 2018. Decoding strategies for neural referring expression generation. In _Proceedings of the 11th International Conference on Natural Language Generation_ , pages 503–512, Tilburg University, The Netherlands. Association for Computational Linguistics.
* Zhang et al. (2020) Mengmi Zhang, Claire Tseng, and Gabriel Kreiman. 2020. Putting visual object recognition in context. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 12982–12991.
## Appendix A Risks and Ethical Considerations
We do not believe that there are significant risks associated with this work,
as we consider the generation of general expressions for generic objects in
freely available datasets with limited scale. When selecting samples for human
evaluation, we refrain from descriptions of people (that could potentially be
perceived as hurtful). No ethics review was required. Our data (published upon
acceptance) does not contain any protected information and will be fully
anonymized prior to publication.
## Appendix B Model implementation and training
For the hyperparameters of our models, we largely followed Panagiaris et al.
(2021) (TRF) and Mokady et al. (2021) (CC). During inference, we relied on
(deterministic) greedy decoding.
The TRF model has 3 encoder and 3 decoder layers with 8 attention heads,
hidden dimension and feedforward dimension of 512, and was trained with an
initial learning rate of 0.0001 for the transformer encoder and decoder, and
0.00001 for the pre-trained ResNet-152 backbone. Our TRF models have
approximately 103,000,000 parameters.
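In PyTorch terms, the reported configuration corresponds roughly to the sketch below (an approximation; the optimizer type is not stated above and Adam is assumed, while the actual implementation follows Panagiaris et al. 2021):

```python
import torch
import torch.nn as nn
import torchvision

# 3+3 layers, 8 heads, model and feedforward width 512, as reported above.
transformer = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=3, num_decoder_layers=3,
    dim_feedforward=512,
)
resnet = torchvision.models.resnet152(weights="IMAGENET1K_V1")

# Separate learning rates for the transformer and the pre-trained backbone.
optimizer = torch.optim.Adam([
    {"params": transformer.parameters(), "lr": 1e-4},
    {"params": resnet.parameters(), "lr": 1e-5},
])
```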
For our CC model, we kept the settings defined by Mokady et al. (2021). From
the two models proposed in this work, we used the variant where a simple MLP
is used as a mapping network and the GPT-2 language model is fine-tuned during
training. However, we have different prefix sizes than in the original paper:
For CCt, we have a prefix size of 11, i.e. 10 for the visual target
representation and 1 for the target location information. For CCv, our prefix
size is 21, with additional 10 tokens for the visual context representation.
The model was trained using a learning rate of 0.00001. CCv has approximately
338,000,000 and CCt has 307,000,000 parameters.
We trained our models on an Nvidia RTX A40. RefCOCO contains 42k and PACO-
EGO4D contains 116k items for training. The number of training epochs per
system and the final CIDEr scores over the validation sets are displayed in
Table 5. We trained all our models for a maximum of 15 epochs, with early
stopping if no new maximum for CIDEr over the validation set has been achieved
for three consecutive epochs. Per epoch, the compute time was approximately
2.30 h for TRF and CC on RefCOCO and 4.30 h for TRF on PACO.
| dataset | epochs | CIDEr (val)
---|---|---|---
TRF${}_{t}-0.0$ | RefCOCO | 8 | 1.074
TRF${}_{t}-0.5$ | RefCOCO | 11 | 0.936
TRF${}_{t}-1.0$ | RefCOCO | 5 | 0.302
TRF${}_{v}-0.0$ | RefCOCO | 6 | 1.156
TRF${}_{v}-0.5$ | RefCOCO | 9 | 1.035
TRF${}_{v}-1.0$ | RefCOCO | 6 | 0.869
TRF${}_{s}-0.0$ | RefCOCO | 8 | 1.075
TRF${}_{s}-0.5$ | RefCOCO | 14 | 1.032
TRF${}_{s}-1.0$ | RefCOCO | 12 | 0.818
CC${}_{t}-0.0$ | RefCOCO | 7 | 0.824
CC${}_{t}-0.5$ | RefCOCO | 8 | 0.554
CC${}_{t}-1.0$ | RefCOCO | 2 | 0.294
CC${}_{v}-0.0$ | RefCOCO | 4 | 1.103
CC${}_{v}-0.5$ | RefCOCO | 10 | 0.894
CC${}_{v}-1.0$ | RefCOCO | 7 | 0.526
TRF${}_{t}-0.0$ | PACO | 3 | 3.236
TRF${}_{t}-0.5$ | PACO | 8 | 2.662
TRF${}_{t}-1.0$ | PACO | 7 | 0.814
TRF${}_{v}-0.0$ | PACO | 3 | 3.554
TRF${}_{v}-0.5$ | PACO | 14 | 3.047
TRF${}_{v}-1.0$ | PACO | 5 | 2.15
Table 5: Training information for all TRF and CC variants.
## Appendix C Scientific Artifacts
In our work, we mainly used scientific artifacts in the form of existing model
implementations, all of which are cited or referenced in Section 3. The model
implementations were published under permissive licences, i.e. MIT (TRF) and
Apache 2.0 (CC). Upon acceptance, we will publish our modifications to the
model implementations using the same licences, and our other code and data
using permissive licences. Apart from this, we relied on scikit-learn (version
1.2.0, Pedregosa et al. 2011) for our statistical analysis and the RefCOCO API
(Kazemzadeh et al., 2014; Yu et al., 2016;
https://github.com/lichengunc/refer) for computing BLEU and CIDEr scores.
## Appendix D Human Evaluation
We conducted a human evaluation in which the adequacy of assigned referent
types in English referring expressions was assessed. The annotation guidelines
are published as supplementary material. Our annotators were undergrad student
assistants from linguistics and computational linguistics, who were paid by
the hour according to the applicable pay scale. The annotators were informed
about the intended use of their produced data. Along with our code, upon
acceptance, we will publish the fully anonymized raw and aggregated results of
the human evaluation.
Figure 2 illustrates the results of the human evaluation described in Section
4.2.
Figure 2: Visualization of human evaluation results for all tested systems and
a sample of human annotations in RefCOCO testB.
# _Ab initio_ design of charge-mismatched ferroelectric superlattices
Claudio Cazorla, Institut de Ciència de Materials de Barcelona (ICMAB-CSIC), 08193 Bellaterra, Spain
Massimiliano Stengel, Institut de Ciència de Materials de Barcelona (ICMAB-CSIC), 08193 Bellaterra, Spain, and ICREA - Institució Catalana de Recerca i Estudis Avançats, 08010 Barcelona, Spain
<EMAIL_ADDRESS>
###### Abstract
We present a systematic approach to modeling the electrical and structural
properties of charge-mismatched superlattices from first principles. Our
strategy is based on bulk calculations of the parent compounds, which we
perform as a function of in-plane strain and out-of-plane electric
displacement field. The resulting two-dimensional phase diagrams allow us to
accurately predict, without performing further calculations, the behavior of a
layered heterostructure where the aforementioned building blocks are
electrostatically and elastically coupled, with an arbitrary choice of the
interface charge (originated from the polar discontinuity) and volume ratio.
By using the [PbTiO3]m/[BiFeO3]n system as a test case, we demonstrate that
interface polarity has a dramatic impact on the ferroelectric behavior of the
superlattice, leading to the stabilization of otherwise inaccessible bulk
phases.
###### pacs:
71.15.-m, 77.22.Ej, 77.55.+f, 77.84.Dy
## I Introduction
When layers of perovskite oxides are epitaxially stacked to form a
periodically repeated heterostructure, new intriguing functionalities can
emerge in the resulting superlattice [ghosez08, ; junquera11, ]. These are
further tunable via applied electric fields and thermodynamic conditions, and
thus attractive for nanoelectronics and energy applications. An excellent
example is the [PbTiO3]m/[SrTiO3]n system, where the polarization,
tetragonality, piezoelectric response, and ferroelectric transition
temperature strongly change with the volume ratio of the parent compounds
[dawber05, ; dawber07, ; dawber12, ]. Such a remarkable tunability is usually
rationalized in terms of epitaxial strains [dawber05b, ], electrostatic
coupling (see Fig. 1a) [zubko12, ; wu12, ], and local interface effects
[junquera12, ; bousquet08, ].
While perovskite titanates with ATiO3 formula (A=Sr, Pb, Ba or Ca) have
traditionally been the most popular choice as the basic building blocks, a
much wider range of materials (e.g., BiFeO3) is currently receiving increasing
attention by the community. The motivation for such an interest is clear: a
superlattice configuration provides the unique opportunity of enhancing
materials properties via “strain engineering”, and a multifunctional compound
such as BiFeO3 appears to be a natural candidate in this context. (For
example, strain has been predicted to enhance the magnetoelectric response of
BiFeO3 by several orders of magnitude compared to bulk samples [wojdel09;
wojdel10].) Also, a superlattice geometry can alleviate the leakage issues
of pure BiFeO3 films [ranjith07; ranjith08].
Combining a III–III perovskite like BiFeO3 (or I–V, like KNbO3) with a II–IV
titanate appears, however, problematic from the conceptual point of view. In
fact, the charge-family mismatch inevitably leads to polar (and hence
electrostatically unstable) interfaces between layers [murray09]. This is not
necessarily a drawback, though: recent research has demonstrated that polar
interfaces can be, rather than a nuisance to be avoided, a rich playground to
be exploited for exploring exciting new phenomena. The prototypical example is
the LaAlO3/SrTiO3 system, where a metallic two-dimensional electron gas
appears at the heterojunction to avoid a “polar catastrophe” [nakagawa06;
ohtomo04]. Remarkably, first-principles calculations have shown that
interfaces in oxide superlattices can remain insulating provided that the
layers are thin enough, and produce rather dramatic effects on the respective
polarization of the individual components [bristowe09; murray09]. This means
that, in a superlattice, polar discontinuities need not be compensated by
electronic or ionic reconstructions; they can, instead, be used as an
additional, powerful materials-design tool to control the behavior of the
polar degrees of freedom therein. Such a control may be realized, for
instance, by altering the stoichiometry at the interfaces (see Fig. 1b). To
fully explore the potential that this additional degree of freedom (the
interface built-in polarity) provides, and guide the experimental search for
the most promising materials combinations, one clearly needs to establish a
general theoretical framework where the “compositional charge” [murray09] is
adequately taken into account.
Figure 1: (Color online) (a) Description of the electrostatic coupling in a
ferroelectric (orange)/paraelectric (blue) bilayer; $P$, $\mathcal{E}$, and
$D$ represent the component of the polarization, electric field and electric
displacement vectors along the stacking direction, and $\sigma_{\rm int}$ is
the interface charge density. (b) Intermixed AO-type interfaces in a
[BiFeO3]m/[PbTiO3]n superlattice and the resulting interface charge densities.
(c) Illustration of the $20$-atom simulation cell used in our calculations;
red, blue and black spheres represent O, B, and A atoms in the ABO3
perovskite.
In this Letter, we present a general first-principles approach to predict the
behavior of charge-mismatched perovskite oxide superlattices based exclusively
on the properties of their individual bulk constituents. Our formalism
combines the constrained-$D$ strategies of Wu et al. [wu08], which are key to
decomposing the total energy of the system into the contributions of the
individual layers, with the rigorous description of the interface polarity
proposed in Ref. [stengel11]. As a result, we are able to exactly describe the
electrostatic coupling and mechanical boundary conditions, enabling a clear
separation between genuine interface and bulk effects. Crucially, the present
method allows one to quantify, in a straightforward way, the impact that
interface polarity has on the equilibrium (and metastable) phases of the
superlattice. As a proof of concept we apply our formalism to the study of
[PbTiO3]m/[BiFeO3]n (PTO/BFO) heterostructures. We find that (i) our _bulk_
model accurately matches earlier first-principles predictions obtained for
ultrashort-period superlattices (i.e., $m=n=3$) by using explicit _supercell_
simulations [stengel12], and (ii) by assuming interface terminations with
different nominal charge, we obtain a radical change in the overall
ferroelectric properties of the superlattice, which demonstrates the crucial
role played by the polar mismatch.
Figure 2: (Color online) Energy of PTO/BFO superlattices with $a=3.81$ Å
expressed as a function of $D$, for selected values of $\lambda$ and
$\sigma_{\rm int}$. Equilibrium and metastable superlattice states are
represented with solid and empty dots. Red (green) vertical lines indicate
phase transitions occurring in bulk BFO (PTO) under different $D$ conditions.
(a) and (b) represent the cases of neutral and polar interfaces, respectively.
We start by expressing the total energy of a monodomain two-color superlattice
(i.e., composed of species A and B) as,
$U_{\rm tot}(D,\lambda,a)=\lambda\cdot U_{\rm A}(D,a)+\left(1-\lambda\right)\cdot U_{\rm B}(D,a).$ (1)
Here $U_{\rm A}$ and $U_{\rm B}$ are the internal energies of the individual
constituents, $D$ is the electric displacement along the out-of-plane stacking
direction (i.e., $D\equiv{\cal E}+4\pi P$ where ${\cal E}$ is the electric
field and $P$ is the _effective_ polarization, relative to the centrosymmetric
reference configuration), $\lambda$ is the relative volume ratio of material A
(i.e., $\lambda\equiv m/(n+m)$ where $m$ and $n$ are the thicknesses of layers
A and B, respectively), and $a$ is the in-plane lattice parameter (we assume
heterostructures that are coherently strained to the substrate). Note that
short-range interface effects have been neglected. (While it is certainly
possible to incorporate the latter in the model, e.g. along the guidelines
described in Ref. wu08 , we believe these would have been an unnecessary
complication in the context of the present study.) By construction, Eq. (1)
implicitly enforces the continuity of $D$ along the out-of-plane stacking
direction (which we label as $z$ henceforth), which is appropriate for
superlattices where the interfaces are nominally uncharged [ghosez08;
junquera11].
In the presence of a polar mismatch, one has a net “external” interface charge
of compositional origin [murray09], $\sigma_{\rm int}$ (see Fig. 1a), which is
localized at the interlayer junctions. In such a case, Eq. (1) needs to be
revised as follows,
$U_{\rm tot}(D,\sigma_{\rm int},\lambda,a)=\lambda\cdot U_{\rm A}(D,a)+\left(1-\lambda\right)\cdot U_{\rm B}(D-\sigma_{\rm int},a),$ (2)
i.e. the $U_{\rm B}$ curve is shifted in $D$-space to account for the jump in
$D$ produced by $\sigma_{\rm int}$. (Recall the macroscopic Maxwell equation,
$\nabla\cdot{\bf D}=\rho_{\rm ext}$, where $\rho_{\rm ext}$, the “external”
charge, encompasses all contributions of neither dielectric nor ferroelectric
origin.) Once the functions $U_{\rm A}$ and $U_{\rm B}$ are computed and
stored (e.g. by using the methodology of Ref. [stengel09b]), one can predict
the ground-state of a hypothetical A/B superlattice by simply finding the
global minimum of $U_{\rm tot}$ with respect to $D$ at fixed values of
$\sigma_{\rm int}$, $\lambda$ and $a$. The advantage of this procedure is
that, for a given choice of A and B, the aforementioned four-dimensional
parameter space can be explored very efficiently, as no further _ab initio_
calculations are needed.
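A minimal numerical sketch of this search, assuming the constrained-$D$ energies of both constituents have been tabulated on a common $D$ grid at fixed $a$ (names, grids and the cubic-spline interpolation are assumptions, not the actual implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def superlattice_minimum(D_grid, U_A, U_B, sigma_int, lam):
    # Locate the minimum of Eq. (2) at fixed sigma_int, lambda and a.
    UA, UB = CubicSpline(D_grid, U_A), CubicSpline(D_grid, U_B)
    # U_B enters shifted in D-space by the compositional interface charge.
    lo = max(D_grid[0], D_grid[0] + sigma_int)
    hi = min(D_grid[-1], D_grid[-1] + sigma_int)
    D = np.linspace(lo, hi, 2001)
    U_tot = lam * UA(D) + (1.0 - lam) * UB(D - sigma_int)
    i = int(np.argmin(U_tot))
    return D[i], U_tot[i]
```

Scanning $\sigma_{\rm int}$, $\lambda$ and $a$ over the stored curves then maps out the full four-dimensional parameter space without further _ab initio_ runs.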
It is useful, before going any further, to specify the physical origin of
$\sigma_{\rm int}$ in the context of this work. Consider, for example, a
periodic BiFeO3/PbTiO3 superlattice, which we assume (i) to be stoichiometric
(and therefore charge-neutral) as a whole, (ii) to have an ideal AO-BO2-AO-BO2
stacking along the (001) direction, and (iii) to form (say) AO-type interfaces
(see Fig. 1b). (The same arguments can be equally well applied to the case of
BO2-type interfaces.) Depending on the growth conditions, one can have a
certain degree of intermixing in the boundary AO layers, which will adopt an
intermediate composition Bix Pb(1-x)O. As a pure BiO layer is formally charged
$+1$ and PbO is neutral, we can readily write $\sigma_{\rm
int}=\pm\left(x-\frac{1}{2}\right)$ (expressed in units of $e/S$ with $S$
being the surface of the corresponding 5-atom perovskite cell), where the
choice of plus or minus depends on the arbitrary assignment of BiFeO3 and
PbTiO3 as the A or B component in Eq. (2) [see Fig. 1b]. In the following we
shall illustrate the crucial role played by $\sigma_{\rm int}$ (and hence, by
the interface stoichiometry) on the ferroelectric properties of a BFO/PTO
superlattice, by combining Eq. (2) with the bulk $U_{\rm BFO}(D,a)$ and
$U_{\rm PTO}(D,a)$ curves that we calculate from first principles.
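For concreteness, the two limiting cases studied below follow directly from this expression: a fully intermixed boundary layer, $x=0.5$, yields $\sigma_{\rm int}=0$ (nominally neutral junctions), whereas ideal, non-intermixed terminations, $x=1$ (pure BiO) or $x=0$ (pure PbO), yield $\sigma_{\rm int}=\pm 0.5$ in units of $e/S$.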
Our calculations are performed with the “in-house” LAUTREC code within the
local spin density approximation to density-functional theory. (We
additionally apply a Hubbard $U=3.8$ eV to Fe ions [kornev07; yang12].) We use
the $20$-atom simulation cell depicted in Fig. 1c for both BFO and PTO, which
allows us to describe the ferroelectric and anti-ferrodistortive (AFD) modes
of interest (i.e. in-phase AFDzi and out-of-phase AFDzo and AFDxy, see Ref.
[bousquet08, ]). Atomic and cell relaxations are performed by constraining the
out-of-plane component of $D$ stengel09b and the in-plane lattice constant
$a$ to a given value. [Calculations are repeated many times in order to span
the physically relevant two-dimensional $(D,a)$ parameter space.]
We start by illustrating the results obtained at fixed strain, $a=3.81$ Å (see
Fig. 2), by assuming $\sigma_{\rm int}=0$, which corresponds to fully
intermixed junctions ($x=0.5$), and we vary the BFO volume ratio, $\lambda$.
At the extreme values of $\lambda$, the results are consistent with the
expectations: the equilibrium configuration of BFO (i.e., the minimum of
$U_{tot}$ with $\lambda=1$) at this value of $a$ is the well-known R-type
$Cc$-I phase [alison10], derived from the bulk ground state via the application
of epitaxial compression; PTO ($\lambda=0$), on the other hand, is in a
tetragonal $P4mm$ phase with the polarization vector oriented out of plane.
Intermediate values of $\lambda$ yield a linear combination of the two single-
component $U(D)$ curves, where the spontaneous $P_{z}$ at equilibrium
gradually moves from the pure PTO to the pure BFO value.
Unfortunately, the possible equilibrium states that can be attained by solely
varying $\lambda$ (at this value of $a$ and $\sigma_{\rm int}$) lie far from
any physically “interesting” region of the phase diagram. For example, note
the kink at $|D|\sim$0.3 C/m2 in the pure BFO case, which corresponds to a
first-order transition to an orthorhombic $Pna2_{1}$ phase (a close relative
of the higher-symmetry $Pnma$ phase, occurring at $D=0$). A huge piezoelectric
and dielectric response is expected in BFO in the vicinity of the transition
[cazorla14], raising the question of whether one could approach this region by
playing with $\sigma_{\rm int}$, in addition to $\lambda$.
The answer is yes: when oxide superlattices with $\sigma_{\rm int}=0.5$ are
considered [corresponding to “ideal” (BiO)+/TiO2 and (FeO2)-/PbO interfaces],
the stable minimum of the system favors a smaller spontaneous polarization in
the BFO layers, approaching the aforementioned ($Cc{\rm-I}\to Pna2_{1}$) phase
boundary in the limit of small $\lambda$. Interestingly, the $U_{\rm tot}(D)$
curve becomes asymmetric (the interfacial charge breaks inversion symmetry),
and a secondary, metastable minimum appears. Overall, the resulting phase
diagram turns out to be much richer, with new combinations of phases emerging
(e.g. in region II’, where BFO exists in the orthorhombic $Pna2_{1}$ phase and
PTO in the tetragonal $P4mm$ phase), and highly non-trivial changes in the
electrical properties occurring as a function of $\lambda$.
Figure 3: Total energy (a) and out-of-plane electric displacement $D$ (b) of
the equilibrium (solid symbols) and metastable (empty symbols) states of
PTO/BFO superlattices with $\lambda=\frac{1}{2}$ and $\sigma_{\rm int}=0.5$,
expressed as a function of the in-plane lattice parameter. Regions in which
PTO and BFO exist in different phases are delimited with vertical dashed
lines; the corresponding space groups and AFD distortion patterns in Glazer’s
notation are shown in (a), and the components of the ferroelectric
polarization in (b).
In order to further illustrate the power of our approach, we shall now fix the
volume ratio to $\lambda=0.5$ (corresponding to alternating BFO and PTO layers
of equal thickness) and vary the in-plane lattice parameter in the range
$3.6\leq a\leq 4.2$ Å. We shall first consider the case of charged interfaces
with $\sigma_{\rm int}=0.5$, as this choice allows for a direct comparison
with the results of Yang _et al._ (obtained via standard supercell
simulations) [stengel12]. In Fig. 3 we show the energy and spontaneous
electric displacement of the equilibrium and metastable states as a function
of $a$. Four regions can be identified in the diagrams depending on the phases
adopted by BFO and PTO at each value of the in-plane strain. (Their crystal
space groups, AFD pattern and in-plane / out-of-plane ferroelectric
polarization, respectively $P_{xy}$ and $P_{z}$, are specified in compact form
in the figure.) In region I’ both PTO and BFO adopt a monoclinic $Pc$ phase
characterized by large in-phase AFDz distortions and non-zero $P_{xy}$ and
$P_{z}$. Such a monoclinic $Pc$ phase is closely related to the orthorhombic
$Pmc2_{1}$ structure which has been recently predicted in PTO and BFO at large
tensile strains [yang12]. In region II’ PTO adopts an orthorhombic $Ima2$
phase, characterized by vanishing AFD distortions and a large in-plane ${\bf
P}$ (we neglect the small out-of-plane $P_{z}$), while BFO is in its well-
known $Cc$-I state. In region III, BFO remains $Cc$-I, while PTO adopts a
$P4mm$ phase, both with _opposite_ out-of-plane polarization with respect to
region II’. These structures switch back to a positively oriented $P_{z}$ in
region IV’, respectively transforming into a monoclinic $Cc$-II and a
tetragonal $I4cm$ phase. The $I4cm$ phase is characterized by anti-phase AFDz
distortions and an out-of-plane ${\bf P}$, while the $Cc$-II corresponds to
the “supertetragonal” T-type phase of BFO [zeches09]. Note that, as observed
already while discussing Fig. 2, the net interface charge leads to an
asymmetric double-well potential, and consequently to an energy difference
(typically of $\sim 20$ meV/f.u. or less, see Fig. 3a) between the two
oppositely polarized states. (Only one minimum survives at large tensile
strains, where the superlattice is no longer ferroelectric.) At the phase
boundaries such energy difference vanishes; the obvious kinks in the $U_{\rm
tot}$ curve shown in Fig. 3(a) indicate that the transitions (at $a=3.71$,
$3.87$, and $4.05$ Å) are all of first-order type.
The above results are in remarkable agreement with those of Yang _et al._
[stengel12]. The only apparent discrepancy concerns the ordering of the
stable/metastable states in region III, which anyway involves a very subtle
energy difference (and is therefore sensitive to short-range interface
effects, not considered here). Obtaining such an accurate description of
superlattices where the individual layers are as thin as three perovskite
units [stengel12] provides a stringent benchmark for our method, and validates
it as a reliable modeling tool. From the physical point of view this
comparison suggests that, even in the ultrathin limit, PTO/BFO superlattices
can be well understood in terms of macroscopic bulk effects, i.e., short-range
interface-specific phenomena appear to play a relatively minor role.
Figure 4: Same as in Fig. 3, but considering neutral interfaces. The out-of-
plane polarization is the same in PTO and BFO layers.
Having gained confidence in our method, we can use it to predict the behavior
of a hypothetical superlattice with $\sigma_{\rm int}=0$, corresponding to a
centrosymmetric reference structure with fully intermixed Pb0.5Bi0.5O
interface layers (see Fig. 4). Note the symmetry of the two opposite
polarization states, and the common value of the spontaneous electric
displacement adopted by BFO and PTO. The resulting phase diagram consists,
again, of four regions, with one first-order and two second-order phase
transitions occurring at $a=4.07$, $3.88$ and $3.73$ Å, respectively (see
Fig. 4a). In three of these regions, the individual layers display structures
which are different from those obtained in the $\sigma_{\rm int}=0.5$ case: in
region I both PTO and BFO stabilize in an orthorhombic $Pmc2_{1}$ phase
[yang12], characterized by a vanishing $P_{z}$; in region II PTO adopts a
monoclinic $Cm$ phase with the polarization roughly oriented along (111)
($P_{z}\neq P_{xy}\neq 0$) and no AFD, while BFO stabilizes in the already
discussed $Cc$-I phase; finally, in region IV, PTO is tetragonal $P4mm$ and
BFO is monoclinic $Cc$-II. These findings quantitatively demonstrate that the
interface charge mismatch can have a tremendous impact on the physical
properties of oxide superlattices. Our simple and general method allows one to
model and quantify accurately these effects, and most importantly to
rationalize them in terms of intuitive physical concepts.
In summary, we have discussed a general theoretical framework to predict the
behavior of charge-mismatched superlattices. We have shown that the effect of
the interface stoichiometry, which we describe via the “compositional”
interface charge $\sigma_{\rm int}$, is quite dramatic, and needs to be
properly accounted for in the models. More generally, we argue that
$\sigma_{\rm int}$ can be regarded, in addition to $\lambda$ and $a$, as a
further degree of freedom in designing oxide heterostructures with tailored
functionalities, opening exciting new avenues for future research.
###### Acknowledgements.
This work was supported by MICINN-Spain [Grants No. MAT2010-18113 and No.
CSD2007-00041], and the CSIC JAE-DOC program (C.C.). We thankfully acknowledge
the computer resources, technical expertise and assistance provided by RES and
CESGA.
## References
* (1) P. Ghosez and J. Junquera, J. Comp. Theor. Nanosci. 5, 2071 (2008).
* (2) C. Lichtensteiger et al., in Oxides Ultrathin Films: Science and Technology, edited by G. Pacchioni and S. Valeri, Ch. 12, 265 (Wiley-VCH, Germany, 2011).
* (3) M. Dawber et al., Phys. Rev. Lett. 95, 177601 (2005).
* (4) M. Dawber et al., Adv. Mater. 19, 4153 (2007).
* (5) J. Sinsheimer et al., Phys. Rev. Lett. 109, 167601 (2012).
* (6) M. Dawber, K. M. Rabe, and J. F. Scott, Rev. Mod. Phys. 77, 1083 (2005).
* (7) P. Zubko et al., Nano Letters 12, 2846 (2012).
* (8) C. W. Swartz and X. Wu, Phys. Rev. B 85, 054102 (2012).
* (9) P. Aguado-Puente, P. García-Fernández, and J. Junquera, Phys. Rev. Lett. 107, 217601 (2011).
* (10) E. Bousquet, M. Dawber, N. Stucki, C. Lichtensteiger, P. Hermet, S. Gariglio, J.-M. Triscone, and P. Ghosez, Nature (London) 452, 732 (2008).
* (11) J. C. Wojdeł and J. Íñiguez, Phys. Rev. Lett. 103, 267205 (2009).
* (12) J. C. Wojdeł and J. Íñiguez, Phys. Rev. Lett. 105, 037208 (2010).
* (13) R. Ranjith, B. Kundys, and W. Prellier, Appl. Phys. Lett. 91, 222904 (2007).
* (14) R. Ranjith et al., Appl. Phys. Lett. 92, 232905 (2008).
* (15) E. D. Murray and D. Vanderbilt, Phys. Rev. B 79, 100102 (2009).
* (16) N. Nakagawa, H. Y. Hwang, and D. A. Muller, Nature Mater. 5, 204 (2006).
* (17) A. Ohtomo and H. Y. Hwang, Nature (London) 427, 423 (2004).
* (18) N. C. Bristowe, E. Artacho, and P. B. Littlewood, Phys. Rev. B 80, 045425 (2009).
* (19) X. Wu, M. Stengel, K. M. Rabe, and D. Vanderbilt, Phys. Rev. Lett. 101, 087601 (2008).
* (20) M. Stengel and D. Vanderbilt, Phys. Rev. B 80, 241103(R) (2009).
* (21) Y. Yang, M. Stengel, W. Ren, X. H. Yan, and L. Bellaiche, Phys. Rev. B 86, 144114 (2012).
* (22) M. Stengel, N. A. Spaldin, and D. Vanderbilt, Nature Physics 5, 304 (2009).
* (23) I. A. Kornev, S. Lisenkov, R. Haumont, B. Dkhil, and L. Bellaiche, Phys. Rev. Lett. 99, 227602 (2007).
* (24) Y. Yang, W. Ren, M. Stengel, X. H. Yan, and L. Bellaiche, Phys. Rev. Lett. 109, 057602 (2012).
* (25) A. J. Hatt, N. A. Spaldin, and C. Ederer, Phys. Rev. B 81, 054109 (2010).
* (26) C. Cazorla and M. Stengel, to be published.
* (27) R. J. Zeches et al., Science 326, 977 (2009).
# Fuzzy Logic-based Robust Failure Handling Mechanism for Fog Computing
Ranesh Kumar Naha, Saurabh Garg, Muhammad Bilal Amin
School of Technology, Environments and Design
University of Tasmania
Hobart, Australia
<EMAIL_ADDRESS>
Rajiv Ranjan
School of Computing
Newcastle University
Newcastle, United Kingdom
<EMAIL_ADDRESS>
###### Abstract
Fog computing is an emerging computing paradigm which is mainly suitable for
time-sensitive and real-time Internet of Things (IoT) applications. Academia
and industries are focusing on the exploration of various aspects of Fog
computing for market adoption. The key idea of the Fog computing paradigm is
to use idle computation resources of various handheld, mobile, stationary and
network devices around us, to serve the application requests in the Fog-IoT
environment. The devices in the Fog environment are autonomous and not
exclusively dedicated to Fog application processing. Due to that, the
probability of device failure in the Fog environment is high compared with
other distributed computing paradigms. Solving failure issues in Fog is
crucial because successful application execution can only be ensured if
failure can be handled carefully. To handle failure, several techniques are available in the literature, such as checkpointing and task migration, each of which works well in cloud-based enterprise applications that mostly deal with static or transactional data. These failure handling methods are not applicable to the highly dynamic Fog environment. In contrast,
this work focuses on solving the problem of managing application failure in
the Fog environment by proposing a composite solution (combining fuzzy logic-
based task checkpointing and task migration techniques with task replication)
for failure handling and generating a robust schedule. We evaluated the
proposed methods using real failure traces in terms of application execution
time, delay and cost. Average delay and total processing time improved by 56% and 48% respectively, on average, for the proposed solution compared with the existing failure handling approaches.
###### Index Terms:
Fog Computing, Application Failure, Dynamic Resource, Fault Tolerance,
Robustness
## I Introduction
Fog computing is a distributed computing paradigm in which any device that has
computation capability can contribute to application processing. These devices
include mobile, handheld and network devices [1]. Cloud computing has latency issues that prevent the execution of time-sensitive real-time applications. Hence, Fog computing has emerged; it can process user application requests with minimum latency, since Fog devices are located close to the users. Applications in a smart environment must be able to respond instantly, without delay. Some examples of these kinds of applications are
smart cars, augmented reality applications, online multiplayer games and
emergency response applications.
Fog is a highly distributed environment in which numerous autonomous devices
contribute to processing application requests; these are known as Fog devices
[1]. The contributing devices can produce a financial benefit by allowing the
Fog platform to use their resources for Fog application processing [2]. Unlike cloud resources, Fog computing resources are not a managed service; hence, they have a high probability of failure. Furthermore, the devices in the Fog environment are not completely dedicated to Fog application processing [3]. Hence, there is no guarantee that devices are always available. A device can even fail
after starting the processing of the Fog application. In such a scenario, it
is important to make the scheduling of the application robust for successful
execution in the Fog environment, despite any failure of the Fog devices at
this stage. This will reduce the impact of any application failure in the Fog
environment.
Without ensuring proper failure handling mechanisms, it is not possible to run
time-sensitive real-time applications in the Fog environment. This is because the failure of a Fog application or device might contribute to high rates of delay, which lead to unsuccessful application execution.
Thus, this is an important and challenging area for research. The need for the
successful execution of Fog applications in the dynamic Fog environment
motivated us to explore the reasons for failure and to investigate possible
solutions.
In order to meet the time sensitivity of the applications, handling failure in
the Fog environment is an important and challenging task [4]. For application
processing, the cloud computing environment mostly depends on the cloud data
centre [5] in which the rate of failure is not that high compared with the Fog
environment. Fog devices are controlled by decentralised entities while cloud
data centres are managed by some central entities. Hence, predicting
application failure in the cloud environment is less complicated compared with
such predictions in Fog computing. In the Fog environment, it is difficult to
predict the failure of the computation resources due to the unstable
characteristics of the available resources in the Fog devices. Thus, a robust
scheduling algorithm needs to be developed. On the other hand, to prove the
correctness of the robust scheduling, an evaluation with real failure traces is required. Robust scheduling is a mechanism by which an application is executed in such a way that there is no opportunity for application failure; even if a failure occurs, a robust adaptation mechanism can bring the system back to a stable state. To ensure this, applications might need to execute in different places at the same time, in order to avoid the risk of failure.
Two methods are available for handling failures in service-oriented computing;
these are proactive and reactive failure handling mechanisms [6] [7].
Proactive failure handling needs to be considered in a highly distributed environment so that countermeasures take place before a failure occurs. However, proactive failure handling will not always be suitable
because some failures might occur beyond our prediction. This is because of
the possibilities of device malfunction, user interruption and uncertain
changes in resource availability. Therefore, we need to consider reactive
failure handling which usually takes place after the failure has occurred. Due
to the unstable nature of Fog devices, applying either of the methods is not
always useful for ensuring the successful execution of the application in a
time-sensitive manner. Hence, we used task replication methods, along with
proactive and reactive failure handling methods.
In this paper, we propose a composite solution by using proactive and reactive
failure handling methods with replication. The key contributions of this paper
are as follows:
1. 1.
We propose a fuzzy logic-based method to deal with unpredicted and predicted
failures.
2. 2.
Our failure handling method considers time sensitivity of the application, as
well as dynamic changes in the available resources in the devices.
3. 3.
We evaluate the proposed failure handling method using real failure traces.
The rest of the paper is organised as follows: Section II presents related
literature of failure handling mechanisms in the P2P system, a cluster, a
grid, the cloud and the Fog computing paradigm. Section III discusses the
application scenario for the proposed solution. Section IV discusses the
definition of the problem. A brief description of various resources failures
and their solutions is presented in Section V. Section VI gives a detailed
description of the proposed Fuzzy logic-based solution. Section VII discusses
the experimental setup and evaluation technique. Section VIII presents
experimental results and discussion. Finally, Section IX concludes the paper.
## II Related Work
This section describes some related work on failure handling mechanisms in
different distributed computing paradigms. We chose to survey failure handling
methods in order to verify the uniqueness of our research. We reviewed some
methods that are being used for the P2P system, as well as cluster, grid,
cloud and Fog computing paradigms, in order to make the system fault-tolerant.
### II-A Failure handling in P2P System
Samant and Bhattacharyya [8] examined the impact of node failure on the
accessibility of resources on P2P networks. Their work examined how search
efforts, the topology of the network and redundant resources can influence
accessibility when various level node failures take place. Vanthournout and
Deconinck [9] proposed three strategies to realise the use of self-
organisation mechanisms for failure handling and failure detection.
Lin et al. [10] presented an efficient fault-tolerant approach for the super-
peers of peer-to-peer (P2P) file-sharing systems. Mawlood-Yunis et al. [11]
identified the disconnection failure problem, due to temporary semantic
mapping faults, and proposed a game theory based Fault Tolerant Adaptive Query
Routing (FTAQR) algorithm to resolve it.
### II-B Failure handling in Cluster
Li and Lan [12] proposed FT-Pro, an adaptive fault management mechanism that
optimally chooses migration, checkpointing or no action to reduce the
application execution time in the presence of failures based on the failure
prediction. Various methods have been used in cluster computing to predict
failure event. These methods include genetic algorithms [13], rule-based data
mining [14, 15, 16] and Markov chain [17]. Many other works focus on fault
management techniques which are based on prediction. Leangsuksun et al. [18]
proposed a predictive failure handling mechanism which scheduled deliberate
job checkpointing and migration. In another work, Castelli et al. [19]
employed a different approach to failure prediction. In their approach, they
first predicted the resource exhaustion failure proactively and then conducted
software rejuvenation. To maximise system throughput, Oliner et al. [20] used
the coordinative checkpointing strategy that optimistically skips a checkpoint
when no failure prediction exists in the near future. Chakravorty et al. [21]
proposed software-based prediction of failure which basically migrates a task
before the failure actually occurs.
### II-C Failure handling in Grid
Hwang and Kesselman [22] proposed a flexible failure handling framework for
the grid which is comprised of two phases: failure detection and recovery
phases. In the failure detection phase, an event notification mechanism
reports failures. A failure handler deals with the failures at two levels: the
task level and the workflow level. Task level failures are handled by
retrying, checkpointing and replication. At the workflow level, they are
managed by alternative task and redundancy. Jin et al. [23] proposed the Fault
Tolerant Grid Platform (FTGP) approach from the perspective of grid users,
taking the nature of grid faults into account.
Lee et al. [24] proposed a resource manager for optimal resource selection.
The proposed resource manager automatically selects a set of optimal resources
from candidate resources which achieve optimal performance using a genetic
algorithm with a fault tolerance service that satisfies QoS requirements. Lee
et al. [24] implemented a fault detector and a fault manager which will handle
failure by job migration, using a checkpoint. Kandaswamy et al. [25] proposed
a fault-tolerance technique using over-provisioning and migration for
computational grids. Khoo and Veeravalli [26] proposed a failure handling
mechanism based on pro-active failure handling strategies for grid
environments.
### II-D Failure handling in cloud computing
Much research on handling failures in the cloud environment has been undertaken to provide a failure-resilient environment. Two review works [6, 7], in
which all kinds of failures were categorised into reactive and proactive
failure methods, extensively evaluated failure handling mechanisms in the
cloud. Reactive failure mechanisms were further divided into three sub-
categories: checkpointing, replication and logging. While Virtual Machine (VM)
migration was considered as proactive failure management, Gill and Buyya [6]
suggested continuous monitoring of resource allocation to manage failures in
the cloud environment during operation. Sharma et al. [7] point out that
predicting resource behaviour is critical in the cloud environment.
Sharma et al. [27] proposed a failure-aware VM consolidation technique based
on exponential smoothing. They employed checkpointing and migration of VM in
their proposed method. Luo et al. [28] proposed a Markov-based model to
examine failures in large-scale cloud computing systems. They employed
reliability-aware resource scheduling to improve fault tolerance. Although
cloud computing is a mature technology, it still lacks service reliability.
Hence, Buyya et al. [29] suggested investigating failure-aware provisioning
and reliability-aware service management mechanisms for the cloud computing
environment.
### II-E Failure handling in Fog computing
Existing failure handling methods in P2P, distributed and cloud computing
mainly scale infrastructure to utilise extra resources to cover failure but in
Fog computing, fault tolerance is challenging due to some unfavourable
factors, such as resource constraints and multiple procedures [30]. Most of
those methods considered only one failure handling method (for example,
checkpointing or replication or resubmission) for fault tolerance [31]. Also,
they did not consider any time sensitivity of the user request [31]. Hence,
some researchers proposed new methods for failure handling in the Fog
computing environment [31, 32].
A Fault-Tolerant Scheduling Method (FTSM) was proposed by Alarifi et al. [31]
for the Fog-Cloud environment. In their approach, the system submits time-
tolerant requests to the cloud and time-sensitive requests to the edge
devices. FTSM finds the checkpoint interval based on the operation time
between failures for the devices. However, Alarifi et al. [31] did not
consider any prediction of the failure for devices based on the fluctuating
availability of computation resources in the devices. Tajiki et al. [32]
proposed the Heuristic Fast Failure Recovery (HFFR) algorithm for software-
defined service function chaining for Fog computing with failure
consideration. The main idea of their proposed method is to find failure
probability based on the predefined threshold. Similar to FTSM, HFFR did not
consider the dynamic changes in the available resources. In addition, neither
of the works considered real failure traces for evaluating their proposed
methods. Battula et al. [33] proposed an efficient resource monitoring service
for Fog computing which suggested failure handling is essential for efficient
resource management in the Fog environment.
In summary, existing failure handling methods in Fog computing did not fully take into account the dynamic availability of Fog resources. In this paper,
we propose a combined approach of proactive and reactive failure handling with
task replication to tackle highly dynamic behaviour of Fog resources. Thus,
this research was carried out to propose a composite solution of utilising
proactive, reactive and replication failure handling methods with dynamic
changes of the resources in the Fog devices. The ability of fuzzy logic to go beyond bivalent propositions motivated us to employ this approach for failure handling.
## III Failure Issues and Scenario
In this section we describe the application scenario and the research problem.
We also discuss the reasons for resource failure and some possible solutions
for handling failures in the Fog environment.
### III-A Application Scenario
To demonstrate the problem solved in this paper, an application scenario is
presented in this section. Let us assume that an emergency vehicle is using a
smart transportation application and moving from point A to point B. The
vehicle has to choose the shortest route to the destination. To fulfil this
requirement the system needs to process data generated or stored in a dash
cam, surveillance camera and sensors. Based on the traffic conditions, the
following actions need to be taken: (i) inform other vehicles ahead that an
emergency vehicle is approaching; (ii) override signals if there are multiple
road junctions along the way; (iii) do the relevant processing in the Fog
devices, and (iv) take action following the processing. The overall scenario
and system architecture is presented in Figure 1.
Figure 1: Application scenario and system architecture.
Other incidents might also occur while the emergency vehicle is enroute. The
system should act promptly to minimise the delay in reaching the destination.
Here, the system needs to process data from sensors, as well as video data
from dashcams and surveillance cameras. All of the processing for the above
application scenarios is done in Fog devices to comply with the need for time
sensitivity. Therefore, the utilisation of processing power and on-time
processing are important. It is possible to ensure the time sensitivity of the application by distributing the application workload among Fog devices. But the issue is: what happens if a Fog node fails? We need to ensure that the outcome of the application meets time-sensitive requirements, for which the robustness of the scheduling approach must be assured.
Robustness is a feature of the scheduling process in which application
execution will be successful by ensuring time sensitivity, even if the
resource has failed, any errors have occurred in the system components or any
erroneous input has taken place. In our application scenario, the application
always requests the completion of the processing by defining a deadline.
However, our concern is how to deal with the failure of the resources during
operation. We are specifically focused on minimising the impact of the failure
on the applications, due to resource failure, by handling the situations in
which Fog device resources have failed.
### III-B Problem Definition
This research was carried out to solve the following problem: How to meet user
requirements for applications in the Fog environment, with consideration of
device failure, in order to satisfy any time-sensitive requirements of the
application, while available resources in the devices are changing
dynamically?
Scheduling all related tasks to Fog devices is not such a complicated task if
we can assume that all devices are up and running, and there are no chances of
their failure. But, in reality, the chance of failure in the Fog environment
is very high since the devices are not dedicated to running Fog applications.
On the other hand, most of the devices in the Fog depend on wireless
connectivity. Also, the devices are mobile and are moving frequently from one
cluster to another. Next, most of the Fog devices are not stationary and therefore run on limited battery power. Furthermore, the application
might be interrupted by the owner of the devices (for example, the owner turns
off the device for some reason; the owner does not want to participate at that
moment or the owner wants to run another application which requires some
resources to be freed up). Due to all of the above reasons, the chances of
failure of the computation resources are very much higher than in any other
distributed system. To ensure the robustness of the scheduling algorithm, we need to deal with resource failures in such a way that the application user is not affected.
### III-C Resource Failure and Counter-measures
The resources could fail in the Fog environment for many reasons. The reasons
for failure can be categorised, such as the termination of the application to
run the native application, network failure, the device being moved to another
cluster, power outage, human interruption, software and hardware failure, and
network attacks. Due to the mobility and dynamicity of the available resources
in the devices, we can categorise all types of failure into two basic types:
(i) unpredicted/immediate failure and (ii) predicted failure.
We can handle failures in two different ways. Firstly, we can manage the
resource failure after it took place; this is referred to as reactive failure.
Secondly, it is possible to have countermeasures before the occurrence of the
resource failure; this is known as proactive failure handling. Both types of
failure handling mechanism have different approaches to manage resource
failures.
In a reactive failure handling mechanism, we can employ checkpointing and
replication. In application checkpointing the state of the application is
stored in reliable storage and, if the application has failed, it does not
need to rerun the application from the beginning. It will start the
application from the point where the latest state has been saved. There are
two types of checkpointing: i) coordinated or periodic checkpointing and ii)
uncoordinated or random checkpointing. In coordinated checkpointing, the
checkpoint should be consistent for the processes. In uncoordinated
checkpointing, each process checkpoints its own state. The other type of reactive failure handling mechanism is replication, which always runs replicas of the running processes in different devices.
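To make the checkpointing idea concrete, the following minimal Python sketch illustrates uncoordinated checkpointing; the file name, state layout and work loop are hypothetical:

```python
import os
import pickle

CKPT_FILE = "task_42.ckpt"  # hypothetical per-task checkpoint file

def load_state():
    # Resume from the latest checkpoint if one exists, otherwise start fresh.
    if os.path.exists(CKPT_FILE):
        with open(CKPT_FILE, "rb") as f:
            return pickle.load(f)
    return {"progress": 0}

def save_state(state):
    with open(CKPT_FILE, "wb") as f:
        pickle.dump(state, f)

state = load_state()
while state["progress"] < 100:
    state["progress"] += 10   # one unit of application work
    save_state(state)         # uncoordinated checkpoint after each unit
```

On restart after a failure, the task resumes from the last saved progress instead of re-running from the beginning.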
The basic way to solve immediate failure is re-running the whole application
but this is not the optimum way to solve the problem. For example, if a
certain percentage of processing is completed, there is no point in processing
the same portion of the application. Hence, the only solution for immediate
failure is checkpointing. Some researchers have argued that checkpointing is
not a good solution for the Fog environment because the Fog is a highly
dynamic environment [3, 34]. Yi et al. [3] suggested that replications are
more suitable for the Fog but multiple Fog nodes would need to be working
together. Madsen et al. [35] suggested using checkpointing for the Fog which
would save computation time for the faulty tasks. Some researchers used
checkpointing in the Fog as a fault-tolerant technique [36, 37]. We needed to ascertain whether there was any way to accommodate checkpointing in the Fog environment. To do that, we needed to evaluate our solution in simulation and also in a real environment. We evaluated our proposed method in a simulated environment with real failure traces.
In a proactive failure handling process, we can employ migration before the resource fails. Since the Fog serves time-sensitive applications, we were required to migrate the application without disconnecting devices. Hence, we needed to employ live migration for this process. Two basic types of migration are i) pre-copy migration and ii) post-copy migration [7]. In post-copy migration, migration is initiated by suspending the application at the source, which increases downtime. To minimise downtime, pre-copy migration should be employed. Hence, to resolve predicted failures, we employed pre-copy live migration. Once we could predict that an application was going to fail, we could migrate the application to
another Fog device. But again, the question is raised: how to decide when and
where to migrate? However, this research only dealt with when to migrate, not
where to migrate to. To ensure the robustness of the scheduling algorithm, we
needed to handle both predicted and unpredicted failures which would minimise
their impact.
## IV Fuzzy logic-based failure handling mechanism
To handle predicted and unpredicted failure we employed the fuzzy logic-based
solution. Classical logic usually has a bivalent proposition, which may be
true or false in principle. On the other hand, fuzzy logic can represent
actual membership of both true and false for a function. Some propositions
might be true and false to a certain degree, rather than being true or false
only. For example, for a Fog device, mobility, response time and power
availability might cause the failure of a device. However, the chances of
failure completely depend on the membership of each parameter (for example,
mobility, response time and power availability). To represent the exact degree
of membership of each parameter, a fuzzy logic-based approach was undertaken.
If the unpredicted failure for a Fog device were high then the Fog device
would be flagged as unreliable. To handle failure for unreliable Fog devices,
replication was used to ensure the robustness of the scheduling algorithm.
A predicted failure handling mechanism basically acts before the resource
failure takes place. However, due to the decentralised management of the Fog devices, an application might fail in a way that is beyond our prediction. Thus, an unpredicted failure handling mechanism allows seamless application processing. Frequent unpredicted failures caused by a Fog device will trigger
replication to ensure successful application execution. Therefore, to ensure a
reliable application processing environment, all three approaches (predicted
failure, unpredicted failure and replication) need to be considered. Figure 2
shows what action will be taken after calculating the failure score.
Figure 2: Proposed failure handling mechanism.
Over-utilisation of resources is a common cause of failure. Suleiman and Venugopal [38] modelled elasticity rules for the cloud in which resources are scaled when utilisation reaches either 75% or 80%. This indicated that the chances of failure were high when utilisation was more than 80%. Hence, we assumed that 80% to 100% utilisation was unsafe and service migration needed to be triggered. In another work, Liu et al. [39] mentioned that the chance of a server crash was high when utilisation was more than 60%; therefore, they chose a workload threshold of 50% to 70%. Al-Haidari et al. [40] revealed that the upper threshold for cloud resource utilisation should be 70% to 90%. Based on these studies [38, 39, 40], we assumed that less than 50% utilisation was safe and that checkpointing was necessary, in case of failure, if utilisation was between 50% and 80%. A user could change these thresholds through the proposed algorithm when implementing it in a real environment.
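These thresholds map directly onto a simple zone classifier; the Python sketch below encodes the boundaries assumed above (they remain user-tunable):

```python
def utilisation_zone(utilisation):
    # Thresholds adopted from [38, 39, 40]; intended to be user-tunable.
    if utilisation < 50:
        return "safe"         # no action required
    if utilisation <= 80:
        return "checkpoint"   # failure possible: checkpoint the application
    return "migrate"          # 80-100%: unsafe, trigger service migration
```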
### IV-A MRP score calculation for unpredicted failure
To find an unpredicted failure, the system always calculates the degree of
failure by calculating membership of the following parameters: (i) device
mobility, (ii) device response time and (iii) device power availability.
Based on the degree of failure, the system will decide how frequently
checkpointing needs to be undertaken. Based on the percentage of the device
movement, we defined how readily the device could be completely moved to
another network. Device mobility can be represented as $D_{m}$ which could be
0% to 100%. Device response time always maps with required response time to
meet the application time sensitivity. For example, to complete an application
request, the device response time should be 2 ms but the device is responding in 1 ms; therefore, the degree of failure is within the group of 0%. On the
other hand, if the device response time suddenly changed to 2.5ms then the
degree of failure is within the group of 100% since it is not meeting the
application time-sensitive requirements. Device response time can be
represented as $D_{r}$ which could be 0% to 100%. Similarly, the power
available score can be calculated based on the power that is required to
complete the submitted application. All the parameters of device
characteristics are transformed into a normalised range [0 to 1] during
fuzzification. Fuzzy logic usually includes three phases: fuzzification, fuzzy
inference and defuzzification. The fuzzy sets for the above parameters are as
follows:
* •
Device mobility: $D_{m}\in\{Low,Normal,High\}$
* •
Device response time: $D_{r}\in\{Fast,Normal,Slow\}$
* •
Device power: $D_{p}\in\{Rich,Standard,Poor\}$
Using Equation 1, the value can be normalised to fall in the interval [0 to
1].
$\overline{D_{x}}=\frac{D_{x}-\alpha_{x}}{\beta_{x}-\alpha_{x}}$ (1)
In the Equation 1, $D_{x}$ is the numerical value of $x$ where $x$ is either
mobility, response time or power. The value of $x$ is within the range of
$\alpha_{x}$ to $\beta_{x}$. The normalised value of the parameters’ mobility,
response time and power were calculated for further operation. The degree of
membership of each parameter is shown in Figure 3.
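For illustration, Equation 1 can be realised as in the following minimal Python sketch; the parameter range in the example is hypothetical:

```python
def normalise(d_x, alpha_x, beta_x):
    """Map a raw parameter value D_x from [alpha_x, beta_x] to [0, 1] (Equation 1)."""
    return (d_x - alpha_x) / (beta_x - alpha_x)

# Hypothetical example: mobility measured as 40 within an assumed 0-100 range.
mobility_norm = normalise(40.0, 0.0, 100.0)  # -> 0.4
```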
(a) Mobility
(b) Response time
(c) Power availability
Figure 3: Membership of different parameters for unpredicted failure.
Figure 4: Membership for mrp score.
The mobility parameter from 0% to 50% was considered low mobility; 30% to 90% was normal mobility and 70% to 100% was considered high mobility. Up to 30% mobility membership, we considered that the system was in the safe zone. From the point of 30%, low mobility membership decreased while normal mobility membership increased. At the point of 50%, low mobility membership was 0 and normal mobility membership was 1. From the point of 70%, normal mobility membership decreased and became 0 at 90%. On the other hand, high mobility membership started to increase at 70% and reached 1 at 90%, meaning that the device was about to fail. A similar approach was employed for response time and membership of the power
availability parameters. Based on the membership of each parameter,
fuzzification was completed in the Fuzzy Inference System (FIS). To develop the FIS we used the jFuzzyLogic toolbox [41]. The membership function for low,
normal and high mobility is shown in Equations 2, 3 and 4. A similar equation
is used for response time and power parameters.
$\mu_{mL}(x)=\begin{cases}0,&x>d\\ \frac{d-x}{d-c},&c\leq x\leq d\\ 1,&x<c\end{cases}$ (2)
$\mu_{mN}(x)=\begin{cases}0,&(x<a)\ \text{or}\ (x>d)\\ \frac{x-a}{b-a},&a\leq x\leq b\\ 1,&b\leq x\leq c\\ \frac{d-x}{d-c},&c\leq x\leq d\end{cases}$ (3)
$\mu_{mH}(x)=\begin{cases}0,&x<a\\ \frac{x-a}{b-a},&a\leq x\leq b\\ 1,&x>b\end{cases}$ (4)
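A minimal Python sketch of Equations 2 to 4 follows; the default breakpoints $a,b,c,d$ are taken from the mobility description above (30%, 50%, 70%, 90%) and are only one possible assignment:

```python
def mu_low(x, c=30.0, d=50.0):
    # Equation 2: full membership below c, linear fall-off on [c, d], zero above d.
    if x < c:
        return 1.0
    if x <= d:
        return (d - x) / (d - c)
    return 0.0

def mu_normal(x, a=30.0, b=50.0, c=70.0, d=90.0):
    # Equation 3: trapezoid rising on [a, b], flat on [b, c], falling on [c, d].
    if x < a or x > d:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def mu_high(x, a=70.0, b=90.0):
    # Equation 4: zero below a, linear rise on [a, b], full membership above b.
    if x < a:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return 1.0
```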
We used the max function as an accumulation method, by which the fuzzy outcome of a particular application is represented as $X_{i}$.
Fuzzy rules: Based on the behaviour of the system, fuzzy rules were generated.
If any of the parameters were high, the system would not have been capable of
running any application. More clearly, if a system were highly mobile, there
was a high chance of resource failure in that device. In the same way, if
response time or power membership were high, then resource failure in that
particular device was also high. For this particular instance the rule should
be as follows:
* •
If $D_{m}$ is high or $D_{r}$ is slow or $D_{p}$ is poor then $UF_{{mrp}_{m}}$
is high
In the above rule, $UF_{{mrp}_{m}}$ represents an unpredicted failure score for application $m$. In order to consider some devices as being in a safe zone,
all scores of all parameters should have safe zone scores with a 0% to 50%
variation. For this particular instance, the rule is as follows:
* •
If $D_{m}$ is low and $D_{r}$ is fast and $D_{p}$ is rich then
$UF_{{mrp}_{m}}$ is low
To define device membership in the checkpoint zone, mobility should be low or
normal, response time should be fast or normal, and power availability should
be rich or standard. The mobility membership should not be high; response time
should not be slow and power should not be poor, to be in the checkpointing
zone. In addition, mobility should not be low, response time should not be
fast and power should not be rich at the same time. To represent the
situations described above, we defined seven different rules. An example of
such a rule is given as follows:
* •
If $D_{m}$ is low and $D_{r}$ is fast and $D_{p}$ is standard then
$UF_{{mrp}_{m}}$ is normal
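One conventional (Mamdani-style) way to evaluate such rules is to take max over OR connectives and min over AND connectives. The Python sketch below does so, reusing the mu_low/mu_normal/mu_high helpers from the previous sketch; the assumption that the Fast/Rich sets share the Low shape (and Slow/Poor the High shape) is ours, since the paper states the breakpoints explicitly only for mobility:

```python
def uf_mrp_rule_strengths(mob, rt, power):
    # Firing strength of three example rules for the unpredicted failure score.
    # Assumption: Slow/Poor reuse the "high" membership shape and Fast/Rich the
    # "low" shape; OR -> max and AND -> min (conventional Mamdani inference).
    return {
        "high":   max(mu_high(mob), mu_high(rt), mu_high(power)),
        "low":    min(mu_low(mob), mu_low(rt), mu_low(power)),
        # Example "normal" rule: D_m low AND D_r fast AND D_p standard.
        "normal": min(mu_low(mob), mu_low(rt), mu_normal(power)),
    }
```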
Fuzzy inference and defuzzification: To generate an mrp score, we used 0% to 50% as a low score, 50% to 80% as a normal score and 80% to 100% as a high score (see Figure 4). A Centre of Gravity (CoG) defuzzification method was used for calculating the mrp score. The equation for CoG is shown in Equation 5.
$UF_{{mrp}_{x}}=\frac{\sum_{i=1}^{n}X_{i}\times\mu_{i}}{\sum_{i=1}^{n}X_{i}}$
(5)
In the above equation, $n$ is the number of rules that were triggered and $\mu_{i}$ is the singleton value, which refers to the maximum score for a particular parameter. The defuzzification value for an application (the $mrp$ score) was used to make decisions about application failure handling.
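A minimal Python rendering of Equation 5 follows; the zero-denominator guard is an addition of ours for robustness:

```python
def cog_score(rule_strengths, singletons):
    # Equation 5: X_i are the outcomes of the n triggered rules and mu_i the
    # singleton (maximum) score of the output set each rule fires into.
    num = sum(x * mu for x, mu in zip(rule_strengths, singletons))
    den = sum(rule_strengths)
    return num / den if den else 0.0  # zero-guard added here, not in the paper

# Hypothetical example: three triggered rules firing into the low/normal/high
# output sets, whose singleton scores are taken as 50, 80 and 100.
score = cog_score([0.2, 0.7, 0.1], [50.0, 80.0, 100.0])
```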
### IV-B CPMNR score calculation for predicted failure
Some failures can be predicted based on the following criteria:
* •
Effect on processing based on the current CPU utilisation
* •
Effect on processing based on available power in the device
* •
Effect on processing based on device mobility
* •
Effect on processing based on network communication (Device is capable of
completing the request but network communication might be the cause of not
meeting time-sensitive requirements)
* •
Effect on processing based on device response time
All device behaviour parameters were transformed into a normalised range [0 to
1] during fuzzification. The fuzzy sets for the above parameters are as
follows:
* •
CPU utilisation: $D_{mc}\in\{Low,Normal,High\}$
* •
Device power: $D_{mp}\in\{Rich,Standard,Poor\}$
* •
Device mobility: $D_{mm}\in\{Low,Normal,High\}$
* •
Network communication: $D_{mn}\in\{Fast,Medium,Slow\}$
* •
Response time: $D_{mr}\in\{Fast,Normal,Slow\}$
Using Equation 6, the value can be normalised to fall into the interval [0 to
1].
$\overline{D_{mx}}=\frac{D_{mx}-\alpha_{mx}}{\beta_{mx}-\alpha_{mx}}$ (6)
In the Equation 6, $D_{mx}$ is the numerical value of $mx$ where $mx$ is
either CPU utilisation, power, mobility, network communication or response
time. The value of $mx$ was within the range of $\alpha_{mx}$ to $\beta_{mx}$.
The normalised value of the parameters’ CPU utilisation, power, mobility,
network communication and response time was calculated for further operation.
The degree of membership of each parameter is shown in Figure 5.
(a) CPU utilisation
(b) Power availability
(c) Mobility
(d) Network communication
(e) Response time
(f) cpmnr score
Figure 5: Membership of different parameters for predicted failure and cpmnr
score.
The CPU utilisation parameter from 0% to 50% was considered low CPU utilisation; 30% to 90% was normal CPU utilisation and 70% to 100% was considered high CPU utilisation. Up to 30% CPU utilisation membership, we considered that the system was in the safe zone. From the point of 30%, low CPU utilisation membership decreased and normal CPU utilisation membership increased. At the point of 50%, low CPU utilisation membership was 0 and normal CPU utilisation membership was 1. From the point of 70%, normal CPU utilisation membership started to decrease and became 0 at 90%. On the other hand, high CPU utilisation membership started to increase at 70% and reached 1 at 90%, meaning that the device was about to fail due to the over-utilisation of the CPU. A similar approach was employed for power availability, mobility,
network communication and response time parameters. Based on the membership of
each parameter, fuzzification was undertaken in the FIS. Similar to the
calculation of the MRP score, we used Equations 2, 3 and 4 for the membership
function of low, normal and high for CPU utilisation, power availability,
mobility, network communication and response time parameters.
Similar to the MRP score calculation we used the max function as an
accumulation method by which the fuzzy outcome of a particular application is
represented as $X_{i}$.
Fuzzy rules: Based on the behaviour of the system, fuzzy rules have been
generated. If any of the parameters are high the system will not be capable of
running any application. More clearly, if a system has high CPU utilisation,
there is a high chance of application failure in that device. In the same way,
if power, mobility, network communication and response time membership are
high, then application failure in that particular device will be high as well.
For this particular instance the rule should be as follows:
* •
If $D_{mc}$ is high or $D_{mp}$ is poor or $D_{mm}$ is high or $D_{mn}$ is
slow or $D_{mr}$ is slow then $PF_{{cpmnr}_{m}}$ is high
In the above rule, $PF_{{cpmnr}_{m}}$ represents the predicted failure score
for application $m$. In order to consider some devices as being in a safe
zone, all scores of all parameters should have safe zone scores which are
within 0% to 50% variation. For this particular instance the rule is as
follows:
* •
If $D_{mc}$ is low and $D_{mp}$ is rich and $D_{mm}$ is low and $D_{mn}$ is
fast and $D_{mr}$ is fast then $PF_{{cpmnr}_{m}}$ is low
To define device membership in a checkpoint zone, CPU utilisation should be
low or normal, power availability should be rich or standard, mobility should
be low or normal, network communication should be fast or medium and response
time should be fast or normal. The CPU utilisation membership should not be
high, power should not be poor, mobility membership should not be high,
network communication membership should not be slow and response time should
not be slow to arrive in the checkpoint zone. Also, CPU utilisation should not
be low, power should not be rich, mobility should not be low, network
communication should not be fast, and response time should not be fast at the
same time. To represent the situations described above, we defined 31 different rules. An example of such a rule is given as follows:
* •
If $D_{mc}$ is low and $D_{mp}$ is rich and $D_{mm}$ is low and $D_{mn}$ is
fast and $D_{mr}$ is normal then $PF_{{cpmnr}_{m}}$ is normal
Fuzzy inference and defuzzification: To generate the cpmnr score, we used 0% to 50% as a low score, 50% to 80% as a normal score and 80% to 100% as a high score (see Figure 5(f)). The Centre of Gravity (CoG) defuzzification method
was used for calculating the cpmnr score. The equation for CoG is shown in
Equation 7.
$PF_{{cpmnr}_{x}}=\frac{\sum_{i=1}^{n}X_{i}\times\mu_{i}}{\sum_{i=1}^{n}X_{i}}$
(7)
In the above equation, $n$ is the number of rules that were triggered and $\mu_{i}$ is the singleton value, which refers to the maximum score for a particular parameter. The defuzzification value for an application (the cpmnr score) was used for making decisions about predicted application failure handling.
### IV-C Replication
Replication of the application only applies if the rate of unpredicted
(immediate) failure is high. The failure rate cannot be calculated from only a few application executions. Therefore, we considered at least 10 application executions before deciding whether replication was required or not. The overall failure handling process is presented in Figure 6.
Figure 6: Failure handling process in the proposed method.
Algorithm 1 Fuzzy-logic-based failure handling (FLBFH).
Input: $App[id,D_{m},D_{r},D_{p},D_{mc},D_{mp},D_{mm},D_{mn},D_{mr},SD_{ft},A_{c}]$
Output: $Ac_{tr}[App_{id},Actions]$
for all $App[id]$ do
    Calculate degree of changes in mobility
    Calculate degree of response time changes
    Calculate degree of power profile changes
    Calculate degree of CPU utilisation changes
    Calculate degree of changes in network communication
    Calculate degree of failure ($d_{f}$)
    if $d_{f}\geq 50$ and $d_{f}\leq 80$ then
        $Ac_{tr}.INSERT[App_{id},Checkpoint]$
    else if $d_{f}\geq 80$ and $d_{f}<100$ then
        $Ac_{tr}.INSERT[App_{id},Migrate]$
    else if $d_{f}\geq 100$ then
        $Ac_{tr}.INSERT[App_{id},CheckpointRecover]$
    end if
    $ASD_{f}$ = ($SD_{ft}$ + $d_{f}$)/$A_{c}$
    if $ASD_{f}\geq 50$ and $A_{c}>10$ then
        $Ac_{tr}.INSERT[App_{id},Replicate]$
    end if
end for
return $Ac_{tr}[]$
### IV-D Mapping
If the mrp score for unpredicted failure is in the unsafe zone, then the
system will migrate the application to other available Fog devices. It is
obvious that the cpmnr score will also be in the unsafe zone, if the mrp score
is in the unsafe zone. However, the cpmnr score also considers CPU utilisation
and network communication for making more accurate decisions about
checkpointing and migration. If either of the two scores is in the
checkpointing zone, then the application checkpointing will be triggered.
However, if any of these two scores are in the failed zone, then the system
will see if any replicated application is running or not. If so, the system
will interact with one of the replicated applications. In the case of there
being no running replicated application, the system will check if there are
any checkpoints there or not. If there are any checkpoints, then the
application will recover from that checkpoint. In the case of no checkpoint
and no replicated application, $n$, then the system will rerun the whole
application which is a worse case scenario. A proposed Fuzzy-Logic-based
Failure Handling (FLBFH) algorithm is presented in Algorithm 1.
In Algorithm 1, $d_{f}$ is the score for the degree of failure, $A_{c}$ is the app count (the total number of times a task for an application has run), $SD_{ft}$ is the total score for the degree of failure and $ASD_{f}$ is the average score for the degree of failure.
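For illustration, Algorithm 1 can be rendered in Python as below; the per-application dictionary layout is hypothetical and degree_of_failure stands in for the fuzzy mrp/cpmnr pipeline of the previous subsections:

```python
def flbfh(apps, degree_of_failure):
    # Minimal rendering of Algorithm 1. Each app is a dict with hypothetical
    # keys: 'id', 'sd_ft' (total failure score so far) and 'a_c' (app count).
    ac_tr = []
    for app in apps:
        d_f = degree_of_failure(app)
        if 50 <= d_f <= 80:
            ac_tr.append((app["id"], "Checkpoint"))
        elif 80 <= d_f < 100:   # an exact 80 is caught by the branch above
            ac_tr.append((app["id"], "Migrate"))
        elif d_f >= 100:
            ac_tr.append((app["id"], "CheckpointRecover"))
        asd_f = (app["sd_ft"] + d_f) / app["a_c"]  # average degree of failure
        if asd_f >= 50 and app["a_c"] > 10:        # replication after >10 runs
            ac_tr.append((app["id"], "Replicate"))
    return ac_tr
```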
## V Experimental Setup and Evaluation Technique
### V-A Failure Modelling
Since no failure traces are available for the Fog, we used failure traces from
the Failure Trace Archive (FTA) [42]. There are 27 real failure traces
available in FTA. Most of those traces have two events: failed or not failed
(available). Among them, only Los Alamos National Laboratory (LANL) [43] has
failure traces with reasons such as CPU failure, power failure or network
failure. Therefore, we selected LANL failure traces to model failure in the
Fog environment. LANL has failure traces for nine years (1996 to 2005) which
consist of 4750 nodes that form 22 High-Performance Computing (HPC) systems
[43]. This trace records every failure that took place within the system and needed administrator attention. From the LANL failure traces, we selected those devices which had comparatively high failure rates. The selected devices did not have failure traces for the years 1996 and 2005; due to that, we used failure traces from 1997 to 2004. The Fog environment consists of numerous nodes, each HPC node being considered as a single Fog node. The LANL failure traces only have the information as to
whether the resource failed or not. By keeping the Fog device characteristics
intact, we utilised the failure information of the Fog node during simulation-
based experiments. Hence, it is logical to use LANL failure traces in our
experimental scenario.
### V-B Experimental Setup
In order to have control over the experimental environment, we chose simulation to
evaluate the proposed method. We adopted a simulation environment and
performance parameters from our previous works [44] [45]. In addition, we
modelled a realistic Fog environment by extending the CloudSim [46] toolkit,
similar to our previous work. All submitted tasks followed deadlines which
varied dynamically from 10% to 80%, similar to our previous work [45].
Successful execution of an application within its deadline indicated successful processing.
### V-C Performance Metrics
All the performance metrics were adopted from our previous works [44, 45].
Delay: We considered the delay between the user to the Fog resources. Delay is
the time between task submission and the starting of task execution. It can be
calculated as follows:
$d_{t}^{x}=E_{st}^{x}-U_{S}$ (8)
In Equation 8, $d_{t}^{x}$ denotes the delay for Fog device $x$, which is involved in the task execution, $E_{st}^{x}$ is the task start time on device $x$ and $U_{S}$ is the time when the user requested the execution of the task.
Processing time: Processing time is the time required to process a task. It is the time between the task processing start time $p_{st}$ and the task processing end time $p_{en}$, which can be calculated using Equation 9.
$Pt_{t}^{x}=p_{en}^{x}-p_{st}^{x}$ (9)
In the above equation, $x$ is the Fog device which is involved in task
execution, and $Pt_{t}$ is the processing time for task $t$.
Processing Cost: We considered connectivity and messaging costs as the processing costs. These costs are based on the AWS IoT pricing model: messaging costs range from $1 to $1.65 per million messages and connectivity costs range from $0.08 to $0.132 per million minutes, depending on the region. We considered the prices allocated for the Sydney region. Processing cost can be calculated as follows:
$Pc_{t}=\sum_{k=a}^{n}(M_{c}+C_{c})$ (10)
In the above equation, $M_{c}$ is the messaging cost, $C_{c}$ is the connectivity cost and $Pc_{t}$ is the total processing cost. We calculated the cost over Fog devices $a$ to $n$.
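As a simple illustration, the three metrics can be computed as follows; the default rates in the cost function are the upper bounds quoted above and are illustrative only:

```python
def delay(task_start_time, user_submit_time):
    # Equation 8: time between task submission and the start of execution.
    return task_start_time - user_submit_time

def processing_time(proc_start, proc_end):
    # Equation 9: time the Fog device spends processing the task.
    return proc_end - proc_start

def processing_cost(n_messages, connection_minutes,
                    msg_rate=1.65 / 1e6, conn_rate=0.132 / 1e6):
    # Equation 10: messaging plus connectivity cost (AWS IoT style pricing);
    # the default per-message/per-minute rates are illustrative assumptions.
    return n_messages * msg_rate + connection_minutes * conn_rate
```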
### V-D Evaluation Technique
We compared the proposed FLBFH with two recent works HFFR [32] and FTSM [31].
Since those two works were implemented in a different simulation environment
and did not consider real failure traces, we adopted the key idea of both
proposed methods to fit with our simulation environment and failure traces. We
compared both methods with our proposed method in the results and discussion
section to show the improvement of the FLBFH failure handling method over
previously proposed methods.
## VI Results and Discussion
We took eight years of failure traces and simulated each year’s failure traces separately for HFFR, FTSM and the proposed FLBFH
methods. Performance comparison of each metric is presented below in different
sub-sections.
### VI-A Delay
We measured average, maximum and minimum delays for each task, as shown in
Figures 7, 8 and 9. The average delay for the proposed FLBFH method was improved by around 52% and 58% compared with HFFR and FTSM respectively, on average over all failure traces (Figure 7).
Figure 7: Average delay for different failure traces.
For the failure traces of the years 2000, 2001 and 2003, the improvement over HFFR was around 51% for the proposed method. The maximum improvement, 54%, was found for the 2004 failure traces. The delay improvement for the rest of the failure traces was between 52% and 53%. On the other hand, compared with FTSM, the maximum improvement of the proposed method was 60%, for the 2003 failure traces. The minimum delay improvement compared with FTSM, 55%, was found for the 2001 failure traces. The improvement was 59% for the 1998, 2000 and 2002 failure traces, and 56%, 57% and 58% respectively for the 1997, 1999 and 2004 failure traces. The delay improvements differed because of the different failure handling techniques. Overall, our proposed FLBFH method performed better than both the HFFR and FTSM methods.
The maximum delay for the proposed FLBFH method was improved by around 50% and 56% compared with HFFR and FTSM respectively (Figure 8).
Figure 8: Maximum delay for different failure traces.
For all failure traces, the maximum delay was improved by 49% to 50% in the proposed method compared with the HFFR method. On the other hand, the improvement was between 55% and 56% in the proposed method compared with the FTSM method. The largest improvement for average delay was found over the FTSM method, and the same trend was found for maximum delay.
The minimum delay was higher in FLBFH in most cases, compared with HFFR and FTSM (Figure 9). Compared with HFFR it was 19% higher, while it was 13% higher than FTSM, on average over all failure traces. However, for the 1997, 1998 and 2001 failure traces, the minimum delay improved compared with FTSM, the improvement being 17% to 20%. Since the average delay was improved for the proposed algorithm, the minimum delay will not have much effect on application processing.
Figure 9: Minimum delay for different failure traces.
### VI-B Processing time
There is no significant difference in the average processing time for HFFR and
FTSM, compared with the proposed FLBFH method (Figure 10). However, the number
of failed tasks was less in the proposed FLBFH method. Since the proposed
method used a Fuzzy-logic based approach for failure handling and prediction,
it was able to handle failure more efficiently, with a resulting improvement
in the total processing time.
Figure 10: Average processing time for different failure traces.
On average, the total processing time improved by 51% and 45% for the FLBFH method compared with HFFR and FTSM respectively, as shown in Figure 11. Compared with HFFR, the improvement was around 50% to 51%, and compared with FTSM it was around 44% to 46%. The total processing time improved in the proposed method because the number of failed tasks was lower, reflecting better failure handling and robust scheduling.
Figure 11: Improvement in processing time for different failure traces with FLBFH.
Figure 12: Processing cost for different failure traces.
### VI-C Cost
The total processing cost was less for the proposed FLBFH method, compared
with HFFR and FTSM, as shown in Figure 12. HFFR had around 77% higher cost on
average for all failure traces. On the other hand, FTSM had around 44% higher
cost compared with FLBFH. This indicates that the number of failed tasks was
higher in HFFR and FTSM, compared with the proposed FLBFH method.
## VII Conclusion
The Fog computing environment is highly dynamic in terms of available
resources in the devices and the chances of failure are very high. This
research contributes to minimising the total number of application failures
due to the failure of the resources; it helps to improve delay and processing
time by proposing a Fuzzy-logic-based failure handling method. The proposed
failure handling method was evaluated using real failure traces from LANL.
Compared with the existing failure handling approaches, we found improvements in average delay and total processing time of 56% and 48% respectively, on average. For future work, we will consider the implementation
and evaluation of the proposed method in the real Fog computing environment,
as well as power-aware resource allocation. The proposed method can be
improved further by selecting more appropriate Fog devices which have less
chance of failure.
## References
* [1] R. K. Naha, S. Garg, D. Georgakopoulos, P. P. Jayaraman, L. Gao, Y. Xiang, and R. Ranjan, “Fog computing: survey of trends, architectures, requirements, and research directions,” _IEEE Access_, vol. 6, pp. 47,980–48,009, 2018.
* [2] S. K. Battula, S. Garg, R. K. Naha, P. Thulasiraman, and R. Thulasiram, “A micro-level compensation-based cost model for resource allocation in a fog environment,” _Sensors_ , vol. 19, no. 13, p. 2954, 2019.
* [3] S. Yi, C. Li, and Q. Li, “A survey of fog computing: concepts, applications and issues,” in _Proceedings of the 2015 workshop on mobile big data_. ACM, 2015, pp. 37–42.
* [4] D. Bermbach, F. Pallas, D. G. Pérez, P. Plebani, M. Anderson, R. Kat, and S. Tai, “A research perspective on fog computing,” in _International Conference on Service-Oriented Computing_. Springer, 2017, pp. 198–210.
* [5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging it platforms: Vision, hype, and reality for delivering computing as the 5th utility,” _Future Generation computer systems_ , vol. 25, no. 6, pp. 599–616, 2009.
* [6] S. S. Gill and R. Buyya, “Failure management for reliable cloud computing: A taxonomy, model and future directions,” _Computing in Science & Engineering_, pp. 1–10, 2018.
* [7] Y. Sharma, B. Javadi, W. Si, and D. Sun, “Reliability and energy efficiency in cloud computing systems: Survey and taxonomy,” _Journal of Network and Computer Applications_ , vol. 74, pp. 66–85, 2016.
* [8] K. Samant and S. Bhattacharyya, “Topology, search, and fault tolerance in unstructured p2p networks,” in _37th Annual Hawaii International Conference on System Sciences, 2004. Proceedings of the_. IEEE, 2004, pp. 1–6.
* [9] K. Vanthournout, G. Deconinck, and R. Belmans, “Building dependable peer-to-peer systems,” in _DSN 2004 Workshop on Architecting Dependable Systems_ , 2004, pp. 297–301.
* [10] J.-W. Lin, M.-F. Yang, and J. Tsai, “Fault tolerance for super-peers of p2p systems,” in _13th Pacific Rim International Symposium on Dependable Computing (PRDC 2007)_. IEEE, 2007, pp. 107–114.
* [11] A.-R. Mawlood-Yunis, M. Weiss, and N. Santoro, “From p2p to reliable semantic p2p systems,” _Peer-to-peer networking and applications_ , vol. 3, no. 4, pp. 363–381, 2010.
* [12] Y. Li and Z. Lan, “Exploit failure prediction for adaptive fault-tolerance in cluster computing,” in _Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID’06)_ , vol. 1. IEEE, 2006, pp. 1–8.
* [13] G. M. Weiss and H. Hirsh, “Learning to predict rare events in event sequences,” in _KDD_ , 1998, pp. 359–363.
* [14] R. Vilalta and S. Ma, “Predicting rare events in temporal domains,” in _2002 IEEE International Conference on Data Mining, 2002. Proceedings._ IEEE, 2002, pp. 474–481.
* [15] R. K. Sahoo, A. J. Oliner, I. Rish, M. Gupta, J. E. Moreira, S. Ma, R. Vilalta, and A. Sivasubramaniam, “Critical event prediction for proactive management in large-scale computer clusters,” in _Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining_. ACM, 2003, pp. 426–435.
* [16] R. Vilalta and S. Ma, “Predicting rare events in temporal domains using associative classification rules,” _Technical Report_, pp. 426–435, 2002.
* [17] G. A. Hoffmann, F. Salfner, and M. Malek, _Advanced Failure Prediction in Complex Software Systems_. Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, Institut für Informatik, 2011.
* [18] C. Leangsuksun, T. Liu, T. Rao, S. Scott, and R. Libby, “A failure predictive and policy-based high availability strategy for linux high performance computing cluster,” in _The 5th LCI International Conference on Linux Clusters: The HPC Revolution_. Citeseer, 2004, pp. 18–20.
* [19] V. Castelli, R. E. Harper, P. Heidelberger, S. W. Hunter, K. S. Trivedi, K. Vaidyanathan, and W. P. Zeggert, “Proactive management of software aging,” _IBM Journal of Research and Development_ , vol. 45, no. 2, pp. 311–332, 2001.
* [20] A. J. Oliner, L. Rudolph, R. K. Sahoo, J. E. Moreira, and M. Gupta, “Probabilistic qos guarantees for supercomputing systems,” in _2005 International Conference on Dependable Systems and Networks (DSN’05)_. IEEE, 2005, pp. 634–643.
* [21] S. Chakravorty, C. Mendes, and L. Kalé, “Proactive fault tolerance in large systems,” in _HPCRI Workshop in conjunction with HPCA_ , 2005, pp. 1–7.
* [22] S. Hwang and C. Kesselman, “Grid workflow: a flexible failure handling framework for the grid,” in _High Performance Distributed Computing, 2003\. Proceedings. 12th IEEE International Symposium on_. IEEE, 2003, pp. 126–137.
* [23] H. Jin, D. Zou, H. Chen, J. Sun, and S. Wu, “Fault-tolerant grid architecture and practice,” _Journal of Computer Science and Technology_ , vol. 18, no. 4, p. 423, 2003.
* [24] H. Lee, K. Chung, S. Chin, J. Lee, D. Lee, S. Park, and H. Yu, “A resource management and fault tolerance services in grid computing,” _Journal of Parallel and Distributed Computing_ , vol. 65, no. 11, pp. 1305–1317, 2005.
* [25] G. Kandaswamy, A. Mandal, and D. A. Reed, “Fault tolerance and recovery of scientific workflows on computational grids,” in _2008 Eighth IEEE International Symposium on Cluster Computing and the Grid (CCGRID)_. IEEE, 2008, pp. 777–782.
* [26] B. B. Khoo and B. Veeravalli, “Pro-active failure handling mechanisms for scheduling in grid computing environments,” _Journal of Parallel and Distributed Computing_ , vol. 70, no. 3, pp. 189–200, 2010.
* [27] Y. Sharma, W. Si, D. Sun, and B. Javadi, “Failure-aware energy-efficient vm consolidation in cloud computing systems,” _Future Generation Computer Systems_ , vol. 94, pp. 620–633, 2019.
* [28] L. Luo, S. Meng, X. Qiu, and Y. Dai, “Improving failure tolerance in large-scale cloud computing systems,” _IEEE Transactions on Reliability_ , vol. 68, no. 2, pp. 620–632, 2019.
* [29] R. Buyya, S. N. Srirama, G. Casale, R. Calheiros, Y. Simmhan, B. Varghese, E. Gelenbe, B. Javadi, L. M. Vaquero, M. A. Netto _et al._ , “A manifesto for future generation cloud computing: research directions for the next decade,” _ACM computing surveys (CSUR)_ , vol. 51, no. 5, p. 105, 2019\.
* [30] Y. Liu, J. E. Fieldsend, and G. Min, “A framework of fog computing: Architecture, challenges, and optimization,” _IEEE Access_ , vol. 5, pp. 25 445–25 454, 2017.
* [31] A. Alarifi, F. Abdelsamie, and M. Amoon, “A fault-tolerant aware scheduling method for fog-cloud environments,” _PLOS ONE_ , vol. 14, no. 10, pp. 1–24, 2019.
* [32] M. M. Tajiki, M. Shojafar, B. Akbari, S. Salsano, and M. Conti, “Software defined service function chaining with failure consideration for fog computing,” _Concurrency and Computation: Practice and Experience_ , vol. 31, no. 8, pp. 1–14, 2019.
* [33] S. K. Battula, S. Garg, J. Montgomery, and B. H. Kang, “An efficient resource monitoring service for fog computing environments,” _IEEE Transactions on Services Computing_ , 2019.
* [34] K. Kai, W. Cong, and L. Tao, “Fog computing for vehicular ad-hoc networks: paradigms, scenarios, and issues,” _the journal of China Universities of Posts and Telecommunications_ , vol. 23, no. 2, pp. 56–96, 2016.
* [35] H. Madsen, B. Burtschy, G. Albeanu, and F. Popentiu-Vladicescu, “Reliability in the utility computing era: Towards reliable fog computing,” in _2013 20th International Conference on Systems, Signals and Image Processing (IWSSIP)_. IEEE, 2013, pp. 43–46.
* [36] J. P. de Araujo Neto, D. M. Pianto, and C. G. Ralha, “A fault-tolerant agent-based architecture for transient servers in fog computing,” in _2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)_. IEEE, 2018, pp. 282–289.
* [37] C. Puliafito, E. Mingozzi, C. Vallati, F. Longo, and G. Merlino, “Companion fog computing: Supporting things mobility through container migration at the edge,” in _2018 IEEE International Conference on Smart Computing (SMARTCOMP)_. IEEE, 2018, pp. 97–105.
* [38] B. Suleiman and S. Venugopal, “Modeling performance of elasticity rules for cloud-based applications,” in _2013 17th IEEE International Enterprise Distributed Object Computing Conference_. IEEE, 2013, pp. 201–206.
* [39] C. Liu, K. Li, J. Liang, and K. Li, “Service reliability in an hc: Considering from the perspective of scheduling with load-dependent machine reliability,” _IEEE Transactions on Reliability_ , vol. 68, no. 2, pp. 476–495, 2019.
* [40] F. Al-Haidari, M. Sqalli, and K. Salah, “Impact of cpu utilization thresholds and scaling size on autoscaling cloud resources,” in _2013 IEEE 5th International Conference on Cloud Computing Technology and Science_ , vol. 2. IEEE, 2013, pp. 256–261.
* [41] P. Cingolani and J. Alcalá-Fdez, “jfuzzylogic: a java library to design fuzzy logic controllers according to the standard for fuzzy control programming,” _International Journal of Computational Intelligence Systems_ , vol. 6, no. sup1, pp. 61–75, 2013.
* [42] B. Javadi, D. Kondo, A. Iosup, and D. Epema, “The failure trace archive: Enabling the comparison of failure measurements and models of distributed systems,” _Journal of Parallel and Distributed Computing_ , vol. 73, no. 8, pp. 1208–1223, 2013.
* [43] B. Schroeder and G. Gibson, “A large-scale study of failures in high-performance computing systems,” _IEEE transactions on Dependable and Secure Computing_ , vol. 7, no. 4, pp. 337–350, 2009.
* [44] R. K. Naha, S. Garg, A. Chan, and S. K. Battula, “Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment,” _Future Generation Computer Systems_ , vol. 104, pp. 131–141, 2020.
* [45] R. K. Naha and S. Garg, “Multi-criteria-based dynamic user behaviour aware resource allocation in fog computing,” _arXiv preprint_ , 2019.
* [46] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya, “Cloudsim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms,” _Software: Practice and experience_ , vol. 41, no. 1, pp. 23–50, 2011.
Ranesh Kumar Naha completed his Ph.D. studies on reliable resource allocation and scheduling in the Fog computing environment at the University of Tasmania, Australia. He was awarded a Tasmania Graduate Research Scholarship (TGRS) to support his studies. He received his Master of Science (M.Sc.) degree from Universiti Putra Malaysia in 2015, supported by a prestigious Commonwealth Scholarship provided by the Ministry of Higher Education, Malaysia. His research interests include wired and wireless networks, parallel and distributed computing, security, Blockchain, Cloud computing, Internet of Things (IoT), and Fog/Edge computing.
Dr. Saurabh Garg is currently a Lecturer with the University of Tasmania, Australia. He is one of the few Ph.D. students to have completed their degree in less than three years at the University of Melbourne. He has authored over 40 papers in highly cited journals and conferences, and received various special scholarships during his Ph.D. candidature. His research interests include resource management, scheduling, utility and grid computing, Cloud computing, green computing, wireless networks, and ad hoc networks.
Dr. Muhammad Bilal Amin received the M.S. degree from DePaul University, Chicago, IL, USA, in 2006, and the Ph.D. degree from Kyung Hee University, South Korea, in 2015. He is currently a Korea Research Fellow and serves as a Lecturer with the Department of ICT, University of Tasmania, Australia. He has more than ten years of experience in the software industry, working for Fortune 500 companies in the USA. He is an author of more than 50 publications. His research interests include blockchain, distributed systems, software engineering and architecture, and performance-based cloud applications.
Prof. Dr. Rajiv Ranjan (Senior Member, IEEE) is currently a chair and professor of computing science and Internet of Things with Newcastle University (since August 2018), United Kingdom. He is an internationally established scientist with more than 300 scientific publications and expertise in cloud computing, big data, and the Internet of Things. He has secured more than $24 million AUD in competitive research grants from both public and private agencies. He is an innovator with strong and sustained academic and industrial impact, and a globally recognized R&D leader with a proven track record. His work has been extensively cited (17K+ Google Scholar; 9K+ Scopus; 6K+ Web of Science). He serves on the editorial boards of top-quality international journals including the IEEE Transactions on Computers, IEEE Transactions on Cloud Computing, IEEE Cloud Computing, and Future Generation Computer Systems.
# Density functional theory plus dynamical mean field theory within the
framework of linear combination of numerical atomic orbitals: Formulation and
benchmarks
Xin Qu (Rocket Force University of Engineering, Xi’an, Shaanxi 710025, China; CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, Anhui 230026, China), Peng Xu (Rocket Force University of Engineering, Xi’an, Shaanxi 710025, China), Rusong Li (College of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China), Gang Li <EMAIL_ADDRESS> (School of Physical Science and Technology, ShanghaiTech University, Shanghai 201210, China; ShanghaiTech Laboratory for Topological Physics, ShanghaiTech University, Shanghai 201210, China), Lixin He <EMAIL_ADDRESS> (CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, Anhui 230026, China), Xinguo Ren <EMAIL_ADDRESS> (Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; Songshan Lake Materials Laboratory, Dongguan 523808, Guangdong, China)
###### Abstract
The combination of density functional theory with dynamical mean-field theory
(DFT+DMFT) has become a powerful first-principles approach to tackle strongly
correlated materials in condensed matter physics. The wide use of this
approach relies on robust and easy-to-use implementations, and its
implementation in various numerical frameworks will increase its applicability
on the one hand and help crosscheck the validity of the obtained results on
the other. In this work, we develop a formalism within the linear combination
of numerical atomic orbital (NAO) basis set framework, which allows for
merging NAO-based DFT codes with DMFT quantum impurity solvers. The formalism
is implemented by interfacing two NAO-based DFT codes with three DMFT impurity
solvers, and its validity is verified by benchmark calculations for a wide
range of strongly correlated materials, including 3d transition metal
compounds, lanthanides, and actinides. Our work not only enables DFT+DMFT
calculations using popular and rapidly developing NAO-based DFT code packages,
but also facilitates the combination of more advanced beyond-DFT methodologies
available in these codes with the DMFT machinery.
## I INTRODUCTION
The understanding of strong correlations among electrons in realistic
materials is of great importance for both fundamental science and
technological applications. The strong electron correlations trigger a variety
of exotic phenomena, which provide a fruitful avenue for designing novel
materials [1, 2, 3]. Kohn-Sham (KS) density functional theory (DFT) [4, 5]
under local-density approximation (LDA) and generalized gradient approximation
(GGA) has achieved remarkable success in describing a very wide range of
materials. However, these approximations are found to be inadequate for
correctly describing strongly correlated materials, e.g., 3d transition
metal compounds, lanthanides, actinides, quantitatively or even qualitatively
[6, 7, 8, 9]. To overcome this limitation, the so-called DFT+DMFT method that
merges DFT with the many-body dynamical mean-field theory (DMFT)
has been developed [10, 11, 12, 13, 7, 14, 15, 8, 9]. Various studies have
shown the prominent strength of DFT+DMFT in describing electronic structures
of strongly correlated materials. DFT+DMFT has thus become a promising method of choice for studying realistic strongly correlated materials with certain predictive power [16, 17, 18, 19, 8, 9].
Compared to DFT+U [6, 20, 21], a method that shares a similar spirit with DFT+DMFT and is available in nearly all popular DFT code packages, DFT+DMFT is not only computationally more expensive, but also relatively more difficult to access for non-expert users. Implementing DFT+DMFT with a well-defined local orbital basis, and in a way that does not require much expertise from users, is thus of practical importance for promoting the methodology in studying realistic
correlated materials. One of the key issues in the implementation of the DFT+DMFT scheme is the definition of the local correlated orbitals, in terms of which the
DMFT impurity problem is defined and solved. Many-body corrections to the KS
Hamiltonian exist only in this subspace. As a result, the choice of local
correlated orbitals has a noticeable influence on the obtained DFT+DMFT
results at the quantitative level.
In early days, DFT+DMFT was implemented within the linear muffin-tin orbital
(LMTO) basis set framework [14, 15, 16, 22, 23], where the local LMTOs were
chosen as the basis orbitals hosting strongly correlated d or f electrons.
Later, different Wannier-type orbitals, such as projective Wannier functions
[24] and maximally localized Wannier functions, were used to define the local
orbitals [25, 26]. As for plane-wave based DFT, Amadon et al. [27, 28] used
the all-electron atomic partial waves within the projector augmented wave
(PAW) framework or pseudoatomic wave functions as the local correlated
orbitals. Within the linearized augmented plane-wave (LAPW) method, Aichhorn
et al. [29] and Haule et al. [30] independently developed projector schemes that convert the KS orbital space to the local correlated subspace. We note that, in recent years, the underlying embedding idea of DMFT has been used in a broader sense, whereby the embedded cluster is solved in an ab initio way using beyond-DFT approaches such as $GW$ or the coupled-cluster method,
whereas the environment is treated using less advanced approaches [31, 32,
33].
In recent years, numerical atomic orbitals (NAOs) have been employed as the
basis set choice in several first-principles software packages [34, 35, 36,
37, 38]. Unlike most other basis sets, the linear combination of NAOs
represents a versatile framework that can be used in both all-electron and
pseudopotential-based DFT calculations. Past experience indicates that NAOs
are an efficient basis set choice not only for conventional ground-state
calculations, but also for functionalities that go beyond conventional DFT
calculations whereby explicit two-electron Coulomb integrals and/or unoccupied
KS states are needed [39, 40, 41, 42]. In this regard, it is highly desirable
to develop computational schemes within the NAO basis set framework that enable a convenient merging of first-principles and model-Hamiltonian based approaches. In the case of DFT+$U$, several NAO-based implementations have been reported previously [43, 44, 45]. However, in the case of DFT+DMFT, only one recent work within the pseudoatomic orbital basis sets (i.e., NAOs tailored for pseudopotential-based calculations) has been reported, by Sim and Han [46]. In that work, the authors proposed to use the so-called natural atomic orbitals – the eigen-orbitals of the local density matrix – to define the local correlated orbitals. In the present work, we develop a projection scheme that allows one to conveniently interface NAO-based DFT codes with DMFT impurity solvers, and thus enables NAO-based DFT+DMFT calculations. We test the
efficacy of such a scheme for the all-electron FHI-aims code [37] and the
norm-conserving pseudopotential based ABACUS code [38], interfaced with three
different DMFT impurity solver packages. Consistent results are obtained for
transition metal compounds, including both correlated metals (SrVO3) and Mott
insulators (NiO and MnO), rare-earth systems (Ce metal and Ce2O3), and
actinides (Pu2O3 and PuO2). Our work thus extends the reach of the NAO-based
numerical framework to treat strongly correlated materials.
This paper is organized as follows. Sec. II focuses on the DFT+DMFT formalism.
After introducing the general DFT+DMFT formulation, we present our definition
of local correlated orbitals within the NAO framework. This is followed by a
discussion of the self-consistency scheme of our DFT+DMFT implementation. In
Sec. III, we benchmark the validity of our DFT+DMFT formalism and
implementation over a wide range of strongly correlated materials, i.e.,
3d-systems SrVO3, MnO and NiO, 4f-systems Ce metal and Ce2O3, and 5f-systems
Pu2O3 and PuO2. Sec. IV concludes this paper.
## II THEORETICAL FORMULATION
### II.1 General DFT+DMFT formalism
Combining DFT with DMFT is not straightforward conceptually as the two
theories are formulated in different forms and in different Hilbert spaces. In
KS-DFT, one needs to self-consistently solve an effective single-particle
problem, whereby (for periodic systems) a KS Hamiltonian is constructed and
solved separately at each individual ${\bf k}$ point in the Brillouin zone
(BZ). On the other hand, DMFT is a real-space approach developed to solve the
lattice Hamiltonian models in which the on-site two-electron interactions are
explicitly included. Furthermore, as a first-principles approach, DFT accounts
for all the chemical details in the system and includes all (at least valence)
electrons in the calculations. In contrast, only the strongly correlated
electrons sitting energetically in the vicinity of the Fermi level, relevant
for the low-energy physics, are included in the model Hamiltonian and treated
explicitly in DMFT. As such, key quantities like charge density, density
matrix, and Green functions all have different representations in DFT and
DMFT, impeding a straightforward combination of the two approaches. One
solution to address this challenge, as detailed below, is to define a suitable
projector to convert the key quantities back and forth between the two
representations. As a side note, in the following we only focus on the general
formalism of DFT+DMFT, without going into a detailed account of DMFT itself in
this paper. Interested readers are referred to Refs. [10, 12, 7, 8, 9, 47] for
a detailed discussion of this theory.
Briefly, in the DFT+DMFT formalism, the Green functions defined in the
correlated subspace are the central objects gluing the two theories. There are
two kinds of interacting Green functions defined in the correlated subspace: One is the impurity Green function $G^{imp}$, determined by solving the impurity problem of the Anderson impurity model, which describes a single site with on-site Coulomb interactions coupled to a mean-field electron bath. The other one, denoted as $G^{loc}$, is the on-site term of the lattice Green function, which is essentially the Green function of the correlated site described by the Hubbard model and is obtained by projecting the Green
function in the full Hilbert space into the correlated subspace. The
fundamental requirement in DMFT is that the “on-site” Green function of the
lattice problem and that of the Anderson impurity model are equal,
$G^{loc}=G^{imp}\,,$ (1)
which is achieved via the DMFT self-consistency cycle.
In the DFT+DMFT scheme, the single-particle effective Hamiltonian in the full
Hilbert space can be expressed as
$\hat{H}=\hat{H}_{\mathrm{KS}}+\hat{\Sigma}-\hat{H}_{\mathrm{dc}}\,,$ (2)
where $\hat{H}_{\mathrm{KS}}$ is the non-interacting KS Hamiltonian,
$\hat{\Sigma}$ the self-energy, which encodes all the many-body complexities arising from strong electron correlations and is nonzero only for correlated orbitals. Finally, $\hat{H}_{\mathrm{dc}}$ is the double-counting term, which is introduced to discount the interactions among correlated electrons that have already been included at a static mean-field level in $\hat{H}_{\mathrm{KS}}$.
Starting from the Hamiltonian Eq. (2), the key issue in the combination of DFT
and DMFT is the downfolding (projecting) of the physical quantities from the
full KS space to the local correlated space, and the upfolding (embedding) of
these quantities from the local space to the full space. Mathematically, the
local interacting Green function $G^{loc}$ can be obtained through a
projection procedure. In the literature, two different projection procedures
have been used. The first one can be seen as a “Hamiltonian projection”,
whereby a tight-binding Hamiltonian is first obtained from the KS Hamiltonian
by projecting the latter into the correlated subspace,
$H_{mm^{\prime}}^{TB}({\bf k})=\langle\phi_{m{\bf k}}|\hat{H}_{\mathrm{KS}}|\phi_{m^{\prime}{\bf k}}\rangle\,.$ (3)
Here $\phi_{m{\bf k}}$ are the Bloch-summed local orbitals, with $m$ denoting an
orbital index in the correlated subspace. In the second step, the local
interacting Green function can be directly calculated from this tight-binding
Hamiltonian as
$G_{mm^{\prime}}^{loc}(i\omega_{n})=\sum_{{\bf k}}\frac{1}{N_{\bf k}}\left[(i\omega_{n}+\mu)I-H^{TB}({\bf k})-\Sigma(i\omega_{n})+H_{\mathrm{dc}}\right]_{mm^{\prime}}^{-1}\,,$ (4)
where $N_{\bf k}$ is the number of ${\bf k}$ points in the first
Brillouin zone, equivalent to the number of unit cells in the Born-von Kármán
supercell. In Eq. (4), $\mu$ is the chemical potential, $I$ an identity
matrix, and $[\cdots]^{-1}$ denotes the matrix inversion which is taken within
the correlated subspace. Here and below, we employ the Matsubara Green
function formalism where $\omega_{n}=(2n+1)\pi/\beta$ are the discrete
frequency points along the imaginary frequency axis, with $\beta=1/k_{B}T$
being the inverse temperature. The spectroscopy results on the real frequency
axis are obtained via analytical continuations. The above procedure to compute
the local interacting Green function is the so-called tight-binding
Hamiltonian method.
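To make the procedure concrete, the following is a minimal NumPy sketch of Eq. (4). The function name, array layouts, and the toy two-orbital band are our own illustrative assumptions, not taken from any of the code packages discussed in this paper.

```python
import numpy as np

def local_green_tb(H_tb, Sigma, H_dc, mu, beta, n_freq=64):
    """Local Green function of Eq. (4) from a tight-binding Hamiltonian.

    H_tb  : (Nk, M, M) tight-binding Hamiltonian H^TB(k) in the correlated subspace
    Sigma : (n_freq, M, M) local self-energy on the Matsubara mesh
    H_dc  : (M, M) double-counting matrix
    """
    Nk, M, _ = H_tb.shape
    iw = 1j * (2 * np.arange(n_freq) + 1) * np.pi / beta   # i*omega_n
    I = np.eye(M)
    G_loc = np.zeros((n_freq, M, M), dtype=complex)
    for n in range(n_freq):
        for k in range(Nk):
            A = (iw[n] + mu) * I - H_tb[k] - Sigma[n] + H_dc
            G_loc[n] += np.linalg.inv(A)   # matrix inversion in the M x M subspace
    return G_loc / Nk

# toy check: two cosine bands on a 1D k-mesh in the non-interacting limit
Nk, M, beta = 32, 2, 40.0
kmesh = 2 * np.pi * np.arange(Nk) / Nk
H_tb = np.array([np.diag([-2 * np.cos(k), -2 * np.cos(k) + 0.5]) for k in kmesh])
Sigma = np.zeros((64, M, M), dtype=complex)
G_loc = local_green_tb(H_tb, Sigma, np.zeros((M, M)), mu=0.0, beta=beta)
```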
In the second scheme, one first constructs the interacting Green function in
the full Hilbert space as
$G_{ij}({\bf k},i\omega_{n})=\left[(i\omega_{n}+\mu)I-H({\bf k})\right]_{ij}^{-1}\,,$ (5)
where the Hamiltonian matrix $H_{ij}({\bf k})=\langle\chi_{i,{\bf
k}}\mid\hat{H}\mid\chi_{j,{\bf k}}\rangle$ is the representation of the
interacting Hamiltonian operator introduced in Eq. (2) within an arbitrary
orthonormal basis set $\mid\chi_{j,{\bf k}}\rangle$ spanning the full Hilbert
space. While the matrix elements of the KS Hamiltonian $\hat{H}_{\mathrm{KS}}$
within such a basis set are given straightforwardly, those of $\hat{\Sigma}$
and $\hat{H}_{\mathrm{dc}}$ will be discussed later in Sec. II.3. Once the
full-space lattice Green function is obtained, the local interacting Green
function can be obtained through a projection procedure
$G_{mm^{\prime}}^{loc}(i\omega_{n})=\sum_{{\bf k}}\frac{1}{N_{\bf k}}\sum_{ij}P_{ij}({\bf k},mm^{\prime})G_{ji}({\bf k},i\omega_{n})~{},$ (6)
where the projector above is given by
$P_{ij}({\bf k},mm^{\prime})=\langle\phi_{m}|\chi_{i{\bf k}}\rangle\langle\chi_{j{\bf k}}|\phi_{m^{\prime}}\rangle\,.$ (7)
The construction of the local Green function via Eqs. (5)-(7) is known as the
projector method.
The two methods discussed above to construct the local Green function differ
in that the tight-binding Hamiltonian method projects the Hamiltonian matrix
while the projector method projects the Green function. In the projector
method, the matrix inversion is carried out in the full Hilbert space, as
indicated in Eq. (5), and thus the interactions between the correlated electrons and the remaining electrons are retained to some extent. On the other hand, within the tight-binding method, if one wants to describe the interaction between the correlated electrons and the remaining ones, e.g., to study the charge transfer between the correlated orbitals and the bath, one has to enlarge the Hilbert space of the tight-binding Hamiltonian to encompass the extra itinerant electrons, which inevitably increases the computational
complexity. In this work, we choose the projector method in our DFT+DMFT
implementation.
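A corresponding sketch of the projector route, Eqs. (5)-(7), is given below. Since the KS states diagonalize $\hat{H}_{\mathrm{KS}}$, the full-space matrix to be inverted is the diagonal of KS eigenvalues plus the upfolded self-energy; the projection then contracts the full Green function with the stored overlaps. All names and shapes are again illustrative assumptions.

```python
import numpy as np

def local_green_projector(eps_k, Sigma_bar, P, mu, beta, n_freq=64):
    """Local Green function via Eqs. (5)-(7).

    eps_k     : (Nk, Nb) KS eigenvalues in the window
    Sigma_bar : (n_freq, Nk, Nb, Nb) upfolded self-energy minus double counting
    P         : (Nk, Nb, M) overlaps <Psi_ik|W_m>, the separable part of Eq. (7)
    """
    Nk, Nb = eps_k.shape
    M = P.shape[2]
    iw = 1j * (2 * np.arange(n_freq) + 1) * np.pi / beta
    G_loc = np.zeros((n_freq, M, M), dtype=complex)
    for n in range(n_freq):
        for k in range(Nk):
            A = (iw[n] + mu) * np.eye(Nb) - np.diag(eps_k[k]) - Sigma_bar[n, k]
            G_full = np.linalg.inv(A)            # Eq. (5) in the full space
            # downfold with the <W|G|W> convention, cf. Eqs. (6)-(7)
            G_loc[n] += P[k].conj().T @ G_full @ P[k]
    return G_loc / Nk
```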
The other key quantity in DFT+DMFT calculations is the interacting impurity
Green function $G^{imp}$ in Eq. (1), corresponding to the local propagator of
the effective single-impurity Anderson model, which describes a single site
coupled to a bath that mimics the lattice environment at a mean-field level.
Formally $G^{imp}$ satisfies the following relationship
$G^{imp}_{mm^{\prime}}\left(i\omega_{n}\right)=\left[(i\omega_{n}+\mu)I-{\cal E}^{imp}-\Delta\left(i\omega_{n}\right)-\Sigma^{imp}\left(i\omega_{n}\right)\right]_{mm^{\prime}}^{-1}\,,$ (8)
where ${\cal E}^{imp}$ is the impurity energy level,
$\Delta\left(i\omega_{n}\right)$ the so-called hybridization function
characterizing the influence of the environment on the embedded impurity, and
$\Sigma^{imp}\left(i\omega_{n}\right)$ the impurity self-energy. It should be
noted that ${\cal E}^{imp}$ and $\Delta\left(i\omega_{n}\right)$ are matrices
when there are multiple orbitals in the correlated subspace, which is typical
in DFT+DMFT calculations. In this context, it is also customary to define a
so-called Weiss Green function
$\mathcal{G}_{mm^{\prime}}^{-1}(i\omega_{n})=G_{mm^{\prime}}^{-1}(i\omega_{n})+\Sigma_{mm^{\prime}}(i\omega_{n})=\left[(i\omega_{n}+\mu)I-{\cal E}^{imp}-\Delta\left(i\omega_{n}\right)\right]_{mm^{\prime}}~{},$ (9)
which acts as the dynamical (energy-dependent) mean field that the impurity
electrons experience, and encodes essentially the same information as the
hybridization function $\Delta\left(i\omega_{n}\right)$. When the self-
consistency in the DMFT loop is reached, the local Green function
$G_{mm^{\prime}}^{loc}(i\omega_{n})$ and the local self-energy
$\Sigma_{mm^{\prime}}(i\omega_{n})$ in Eqs. (4) and (6) become equal to the
impurity Green function $G^{imp}_{mm^{\prime}}(i\omega_{n})$ and the impurity
self-energy $\Sigma^{imp}_{mm^{\prime}}(i\omega_{n})$, respectively.
The Weiss Green function together with the local Coulomb interactions defines
the Anderson impurity model, which can be expressed in terms of an action [7]
$S=\int_{0}^{\beta}d\tau\sum_{mm^{\prime}}c_{m}^{\dagger}(\tau)\mathcal{G}_{mm^{\prime}}^{-1}\left(\tau\right)c_{m^{\prime}}\left(\tau\right)-\sum_{lmno}U_{lmno}\int_{0}^{\beta}d\tau\,c_{l}^{\dagger}(\tau)c_{n}(\tau)c_{m}^{\dagger}(\tau)c_{o}(\tau)~{},$ (10)
where $c_{l}^{\dagger}(\tau)$, $c_{n}(\tau)$, etc., should be understood as
the Grassmann variables, and $U_{lmno}$ is the on-site Coulomb interaction
expressed within a set of local orbitals (labelled by $l,m,n,o$) spanning the
correlated subspace. The action $S$ is essentially the integration of the
Lagrangian over the imaginary time. For given $S$, the impurity Green function
can be calculated as
$G^{imp}_{mm^{\prime}}=-\frac{1}{\mathcal{Z}}\int\mathcal{D}\prod_{i}\left[c^{\dagger},c\right]c_{m}c_{m^{\prime}}^{\dagger}e^{-S}~{},$ (11)
where $i$ runs over all $m$ indices, and $\mathcal{Z}$ is the partition
function
$\mathcal{Z}=\int\mathcal{D}\prod_{i}\left[c^{\dagger},c\right]e^{-S}\,.$ (12)
The interacting impurity Green function defined via Eqs. (10)-(12) can then be obtained through a variety of numerical approaches – usually termed impurity solvers. To date, several types of impurity solvers have been
developed, including the quantum Monte Carlo (QMC) [13, 48, 49, 50], non-
crossing approximation (NCA) [51, 52, 53], one-crossing approximation (OCA)
[54, 55], exact diagonalization (ED) [56, 57], numerical renormalization group
(NRG) [58, 59], etc. Among all these impurity solvers, continuous-time quantum
Monte Carlo (CTQMC) [60, 61, 62, 63] provides access to both high and low
energy scales and is effective for a wide class of realistic material
calculations. Nowadays, the CTQMC, especially the hybridization expansion
based CTQMC (CT-HYB), is the most popular impurity solver employed in DFT+DMFT
calculations.
Once the interacting impurity Green function is determined, the impurity self-
energy $\Sigma^{imp}(i\omega_{n})$ can be obtained via the Dyson equation
$\Sigma^{imp}(i\omega_{n})=\left[\mathcal{G}(i\omega_{n})\right]^{-1}-\left[G^{imp}(i\omega_{n})\right]^{-1}$,
which is usually done within the impurity solvers. This impurity self-energy
will be taken as the updated local self-energy and fed into Eqs. (3) and (4), or Eqs. (2) and (5), from which a new local Green function and consequently a new Weiss Green function can be obtained. This is where a new iteration starts. The self-consistency loop continues until the self-energy reaches convergence, or the local and impurity interacting Green functions satisfy the self-consistency condition, i.e., Eq. (1).
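The per-iteration bookkeeping around Eq. (9) and the Dyson equation fits in a few lines; the linear mixing used here is our own stabilization choice, not something prescribed above.

```python
import numpy as np

def dmft_self_energy_update(G_loc, Sigma_old, G_imp, mix=0.7):
    """One DMFT update: Weiss field from Eq. (9), then the Dyson equation.

    All arrays have shape (n_freq, M, M) on the Matsubara mesh.
    """
    Sigma_new = np.empty_like(Sigma_old)
    for n in range(G_loc.shape[0]):
        G0_inv = np.linalg.inv(G_loc[n]) + Sigma_old[n]   # Weiss field, Eq. (9)
        Sigma_new[n] = G0_inv - np.linalg.inv(G_imp[n])   # Dyson equation
    # simple linear mixing to stabilize the self-consistency loop
    return mix * Sigma_new + (1.0 - mix) * Sigma_old
```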
### II.2 Construction of the projector
Within the NAO basis set framework, it is natural to take the $d$- or $f$-type NAOs that contribute most to the electronic states around the Fermi level as the local correlated orbitals. These NAOs are by construction localized
and atom-centered, and thus satisfy the usual requirement of correlated
orbitals. In early DFT+DMFT implementations, analogous atomic-like orbitals
like LMTOs were used. LMTOs are minimal basis sets in the sense that for each
angular momentum channel there is only one radial basis function. In contrast
with LMTOs, the NAO basis sets are of multi-$\zeta$ character meaning that
there are more than one radial functions per angular momentum, thus offering a
more accurate description of the electronic structure. In the past, the DFT+U
method has been successfully implemented within NAO-based DFT codes [43, 44,
45], whereby it turns out to be a good practice to choose the most localized
$d$ or $f$ basis functions as the correlated orbitals to apply the Hubbard $U$
correction. Thus the most localized $d$ or $f$ orbitals seem to span the correlated subspace rather well. In practice, since NAOs centered on neighboring atoms are non-orthogonal to each other, a certain orthogonalization procedure is needed to generate an orthonormal local basis set, which is
convenient for DMFT calculations.
Below we shall discuss our procedure to construct the projector and local
correlated orbitals to facilitate DFT+DMFT calculations within the NAO basis
set framework. In analogy to the transformation between Bloch orbitals and
Wannier orbitals, we define the following Bloch-summed atomic orbitals as
$\Phi_{I,m}^{\bf{k}}({\bf r})=\frac{1}{\sqrt{N_{\bf k}}}\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\phi_{I,m}\left({\bf r}-{\bm{\tau}}_{I}-\bf{R}\right)\,,$ (13)
where $\phi_{I,m}\left({\bf r}-{\bm{\tau}}_{I}-\bf{R}\right)$ is a NAO located
at atomic site $I$ in cell ${\bf R}$. Here the magnetic quantum number $m$
labels the different orbitals in the correlated angular moment channel, such
as the five $d$ orbitals or seven $f$ orbitals.
Since NAOs on neighboring atomic sites are non-orthogonal to each other, it is
obvious that the $\Phi_{I,m}^{{\bf k}}({\bf r})$’s defined in Eq. (13) with
different $m$ are also non-orthogonal. The next key step is to apply the Löwdin orthonormalization procedure to $\Phi_{I,m}^{\mathbf{k}}$, i.e.,
$\mid\tilde{\Phi}_{I,m}^{\mathbf{k}}\rangle=\sum_{I^{\prime}m^{\prime}}O^{-\frac{1}{2}}_{Im,I^{\prime}m^{\prime}}\left(\mathbf{k}\right)\mid\Phi_{I^{\prime},m^{\prime}}^{\mathbf{k}}\rangle\,,$ (14)
where
$O_{Im,I^{\prime}m^{\prime}}\left(\mathbf{k}\right)=\langle\Phi_{I,m}^{\mathbf{k}}\mid\Phi_{I^{\prime},m^{\prime}}^{\mathbf{k}}\rangle$ (15)
is the overlap matrix. The newly obtained $\tilde{\Phi}_{I,m}^{\mathbf{k}}$
orbitals are then orthonormal by construction. Afterwards, a Fourier transform
is applied to get the correlated orbital in real space, i.e.,
$\mid W_{I,m}^{{\bf R}}\rangle=\frac{1}{\sqrt{N_{\bf k}}}\sum_{\mathbf{k}}e^{-i{\bf k}\cdot\mathbf{R}}\mid\tilde{\Phi}_{I,m}^{\mathbf{k}}\rangle~{}.$ (16)
The orthonormality of $W_{I,m}^{{\bf R}}$ is also guaranteed by construction.
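Eqs. (14)-(15) amount to a symmetric (Löwdin) orthonormalization, for which $O^{-1/2}$ is obtained from an eigendecomposition of the overlap matrix. A self-contained sketch with a toy $2\times 2$ overlap:

```python
import numpy as np

def lowdin_inv_sqrt(O):
    """Return O^{-1/2} for a Hermitian, positive-definite overlap matrix O,
    as needed in the Loewdin orthonormalization of Eq. (14)."""
    w, V = np.linalg.eigh(O)               # O = V diag(w) V^dagger
    if np.any(w < 1e-10):
        raise ValueError("overlap matrix is (near-)singular")
    return V @ np.diag(w ** -0.5) @ V.conj().T

# toy overlap between two non-orthogonal orbitals (illustrative values)
O = np.array([[1.0, 0.3], [0.3, 1.0]])
S = lowdin_inv_sqrt(O)
# the transformed orbitals are orthonormal: S O S^dagger = identity
assert np.allclose(S @ O @ S.conj().T, np.eye(2))
```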
We choose the KS states $|\Psi_{i{\bf k}}\rangle$ as the basis sets
$|\chi_{i{\bf k}}\rangle$ (cf. Eq. (7)) to span the full Hilbert space, and
then Eq. (6) becomes
$G_{I,mm^{\prime}}^{loc}(i\omega_{n})=\sum_{\mathbf{k}}\frac{1}{N_{\bf k}}\sum_{ij}P^{I}_{ij}(\mathbf{k},mm^{\prime})\left[\frac{1}{i\omega_{n}+\mu-\epsilon({\mathbf{k}})-\bar{\Sigma}(\mathbf{k},i\omega_{n})}\right]_{ji},$ (17)
where
$\epsilon_{ji}({\mathbf{k}})=\langle\Psi_{j\mathbf{k}}|\hat{H}_{\textrm{KS}}|\Psi_{i\mathbf{k}}\rangle=\epsilon_{i{\bf
k}}\delta_{ij}$ and
$\bar{\Sigma}_{ji}(\mathbf{k},i\omega_{n})=\langle\Psi_{j\mathbf{k}}|\hat{\Sigma}(i\omega_{n})-\hat{H}_{\mathrm{dc}}|\Psi_{i\mathbf{k}}\rangle$,
with $\epsilon_{i{\bf k}}$ being the KS eigenvalues. The projector, Eq. (7),
then becomes
$P_{ij}^{I}\left({\bf k},mm^{\prime}\right)=\langle\Psi_{i\mathbf{k}}\mid W_{I,m}^{0}\rangle\langle W_{I,m^{\prime}}^{0}\mid\Psi_{j\mathbf{k}}\rangle\,,$ (18)
where the superscript $0$ denotes the central unit cell. Since in DMFT calculations only the “on-site” Green function is needed, for which the $m$ and $m^{\prime}$ orbitals are located in the same unit cell, the projector is designed to project the full Green function onto the central unit cell, without loss of generality. Formally, the projector for a given correlated atom
$I$ and a wave vector ${\bf k}$ is a fourth-order tensor, but since it is
separable and symmetric, only a second-order tensor, i.e., the overlap matrix
$\langle\Psi_{i\mathbf{k}}\mid W_{I,m}^{0}\rangle$ needs to be computed and
stored in practical implementations.
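In practice this means that, per ${\bf k}$ point and correlated atom, one only stores the rectangular overlap block between the KS states and the local orbitals; the fourth-order projector of Eq. (18) is then assembled on the fly. A sketch under assumed array conventions (the NAO overlap matrix enters because the basis is non-orthogonal):

```python
import numpy as np

def projector_overlaps(C_ks, S_k, W_coeff):
    """Overlaps <Psi_ik|W_m^0> defining the separable projector of Eq. (18).

    C_ks    : (Nb, Nbasis) KS eigenvector coefficients in the NAO basis at one k
    S_k     : (Nbasis, Nbasis) NAO overlap matrix at the same k
    W_coeff : (Nbasis, M) expansion of the orthonormalized local orbitals
    """
    # in a non-orthogonal basis: <Psi_i|W_m> = sum_{ab} conj(C_ia) S_ab W_bm
    p = C_ks.conj() @ S_k @ W_coeff        # shape (Nb, M)
    # Eq. (18) is then recovered as P[i, j, m, m'] = p[i, m].conj() * p[j, m']
    return p
```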
The whole DFT+DMFT scheme requires the orthonormality of local orbitals
representing the correlated subspace. In the language of the projector, it
requires the projector to satisfy the following orthonormal condition
$\sum_{i}P_{ii}^{I}\left({\bf k},mm^{\prime}\right)=\sum_{i}\langle W_{I,m^{\prime}}^{0}\mid\Psi_{i\mathbf{k}}\rangle\langle\Psi_{i\mathbf{k}}\mid W_{I,m}^{0}\rangle=\delta_{mm^{\prime}}\,.$ (19)
In principle, this condition is automatically satisfied if the summation over
$i$ goes over all the KS bands. In practical DFT+DMFT implementations,
however, one truncates the full KS states into a small subset around the Fermi
level, which means $i$ just runs over bands that are located in a chosen
energy window around the Fermi level (in the following, these subsets of bands
are denoted as $\mathcal{C}$). This truncation destroys the completeness of
$|\Psi_{ik}\rangle$ and thus the orthonormality of the projector. To deal with
this issue, one can introduce an extra transformation
$\tilde{P}_{ij}^{I}\left(\mathbf{k},mm^{\prime}\right)=\sum_{m^{\prime\prime}}\langle\Psi_{i\mathbf{k}}\mid W_{I,m^{\prime\prime}}^{0}\rangle\tilde{O}_{m^{\prime\prime}m}^{-\frac{1}{2}}(\mathbf{k})\sum_{m^{\prime\prime\prime}}\tilde{O}_{m^{\prime}m^{\prime\prime\prime}}^{-\frac{1}{2}}(\mathbf{k})\langle W_{I,m^{\prime\prime\prime}}^{0}\mid\Psi_{j\mathbf{k}}\rangle$ (20)
to orthonormalize the projector. The transformation matrix in the above
equation is given by
$\tilde{O}_{mm^{\prime}}(\mathbf{k})=\sum_{i\in\mathcal{C}}P_{ii}^{I}\left(\mathbf{k},mm^{\prime}\right)=\sum_{i\in\mathcal{C}}\langle W_{I,m}^{0}|\Psi_{i\mathbf{k}}\rangle\langle\Psi_{i\mathbf{k}}|W_{I,m^{\prime}}^{0}\rangle\,,$ (21)
which is nothing but the overlap between the projections of the
orthonormalized local orbitals $|W_{I,m}^{0}\rangle$’s within the subspace
$\mathcal{C}$. Mathematically, the local correlated orbitals we use above to
construct the projector can be explicitly expressed as
$\mid\tilde{W}_{I,m}^{{\bf R}}\rangle=\frac{1}{\sqrt{N_{\mathbf{k}}}}\sum_{\mathbf{k}}e^{-i\mathbf{k}\cdot{\bf R}}\sum_{m^{\prime}}\tilde{O}_{mm^{\prime}}^{-\frac{1}{2}}(\mathbf{k})\sum_{i\in\mathcal{C}}\langle\Psi_{i\mathbf{k}}|\tilde{\Phi}_{I,m^{\prime}}^{\mathbf{k}}\rangle|\Psi_{i\mathbf{k}}\rangle.$ (22)
In this form, our scheme is similar in spirit to the projective Wannier-
orbital scheme proposed by Anisimov et al. [24] in the context of LDA+DMFT
calculations. In our case, the most localized $d$ or $f$ NAO plays the role of
the LMTO in the work of Ref. [24].
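Numerically, the renormalization of Eqs. (20)-(21) reduces to a Löwdin step applied to the window-truncated overlaps, as in the following sketch (shapes are illustrative):

```python
import numpy as np

def renormalize_projector(p):
    """Restore orthonormality after truncation to the band window C,
    following Eqs. (20)-(21).

    p : (Nb_window, M) overlaps <Psi_ik|W_m^0> restricted to the bands in C.
    """
    O_tilde = p.conj().T @ p                           # Eq. (21)
    w, V = np.linalg.eigh(O_tilde)
    O_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
    p_tilde = p @ O_inv_sqrt                           # renormalized overlaps
    # Eq. (19) now holds within the window
    assert np.allclose(p_tilde.conj().T @ p_tilde, np.eye(p.shape[1]))
    return p_tilde

# illustrative input: 6 bands in the window, 3 correlated orbitals
rng = np.random.default_rng(0)
p = rng.normal(size=(6, 3)) + 1j * rng.normal(size=(6, 3))
p_tilde = renormalize_projector(p)
```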
There are a few advantages of using NAOs to define the local correlated space.
Firstly, this choice is physically intuitive and technically straightforward within the NAO basis set framework. We do not need to spend extra effort generating a set of local orbitals and making sure they are physically reasonable atomic-like orbitals. Secondly, from both the theoretical and technical perspectives, the choice of NAOs to define the local correlated space and the resulting projection scheme are suitable for all NAO-based packages. In particular, the key quantities required in this formalism, e.g., the KS wave functions and the overlap matrix of the basis functions, are naturally available in NAO-based DFT code packages, and hence no additional effort is required to compute them. Thirdly, due to its high flexibility, our DFT+DMFT infrastructure can be interfaced with a new NAO-based DFT code without much effort. We hope it can serve as a platform enabling NAO-based DFT codes to perform DFT+DMFT research on strongly correlated
materials. In this work, we implement the DFT+DMFT interface and test it with
two NAO-based DFT codes using different techniques, i.e., the pseudopotential-
based ABACUS code [38] and full-potential all-electron FHI-aims code [37].
### II.3 DFT+DMFT self-consistency scheme
In this section, we explain our DFT+DMFT calculation procedure step by step, following the workflow depicted in Fig. 1.
Step 1. The DFT+DMFT calculation starts from well-converged DFT band
structures. To get high-quality KS orbitals $|\Psi_{i\bf{k}}\rangle$, a dense
${\bf k}$-point mesh is usually needed.
Step 2. With $|\Psi_{i\bf{k}}\rangle$, the projector defined in Eq. (20) can
be straightforwardly constructed. To this end, the most localized $d$ or $f$
orbital of the correlated atoms in the NAO basis set is used to construct
$|W_{I,m}^{0}\rangle$.
Step 3. In the DFT+DMFT iteration loop, the frequency-dependent self-energy $\Sigma(i\omega_{n})$ is determined by the impurity solver at each iteration step. To start with, the initial self-energy is set equal to the double-counting term, i.e., $\bar{\Sigma}_{ij}(\mathbf{k},i\omega_{n})=0$. The following choice of the double-counting term is used:
$H_{\mathrm{dc},mm^{\prime}}^{I,\sigma}=\left[U(n_{I}-1/2)-\frac{J}{2}(n_{I}-1)\right]\delta_{mm^{\prime}}\,.$ (23)
Here $n_{I}$ is the total number of correlated electrons associated
with the correlated atom $I$ and is fixed during the DMFT cycles. In the
spirit of reducing the necessity of introducing additional empirical
parameters, $n_{I}$ is given by projecting the KS orbitals in the subset
$\mathcal{C}$ to the local subspace as
$n_{I}=\sum_{m}\sum_{\mathbf{k}}\frac{1}{N_{\mathbf{k}}}\sum_{i\in\mathcal{C}}f_{i\mathbf{k}}\tilde{P}_{ii}^{I}\left(\mathbf{k},mm\right)\,,$ (24)
where $f_{i\mathbf{k}}$ is the occupation number of KS orbital
$\Psi_{i\mathbf{k}}$. This double-counting scheme is similar to the so-called
fixed double-counting scheme [22, 64], which is considered to improve the stability of the DFT+DMFT self-consistency loop [64] by fixing the
value of $n_{I}$. The difference is that the nominal number of strongly
correlated electrons is specified manually in the fixed double-counting
scheme, whereas in our case this number is determined using Eq. (24).
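Both Eq. (23) and Eq. (24) are direct to evaluate once the renormalized projector is at hand; a minimal sketch (array names assumed):

```python
import numpy as np

def double_counting(U, J, n_I, M):
    """Double-counting matrix of Eq. (23) for one correlated atom."""
    return (U * (n_I - 0.5) - 0.5 * J * (n_I - 1.0)) * np.eye(M)

def n_correlated(P_tilde, f_occ):
    """Correlated occupation n_I of Eq. (24).

    P_tilde : (Nk, Nb, M) renormalized overlaps <Psi_ik|W_m>
    f_occ   : (Nk, Nb) KS occupation numbers f_ik
    """
    Nk = P_tilde.shape[0]
    # sum over k, bands i in C, and orbitals m of f_ik |<Psi_ik|W_m>|^2
    return np.einsum('ki,kim,kim->', f_occ, P_tilde.conj(), P_tilde).real / Nk
```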
Step 4. Using the projector constructed in Step 2, we embed (upfold) the
self-energy back to the selected KS space subset $\mathcal{C}$, which is
expressed as
$\bar{\Sigma}_{ij}(\mathbf{k},i\omega_{n})=\sum_{I}\sum_{mm^{\prime}}\tilde{P}_{ij}^{I}\left(\mathbf{k},mm^{\prime}\right)\left(\Sigma_{mm^{\prime}}^{I}(i\omega_{n})-H_{\mathrm{dc},mm^{\prime}}^{I}\right).$ (25)
Step 5. During the DFT+DMFT self-consistency iteration, the electronic
chemical potential needs to be adjusted according to the newly obtained self-
energy at each iteration, to keep the number of electrons hosted by KS bands
in $\mathcal{C}$
$N_{\mathcal{C}}^{\mathrm{KS}}=\sum_{\mathbf{k},i\in\mathcal{C}}\frac{1}{N_{\mathbf{k}}}f_{i\mathbf{k}}$ (26)
conserved. Within the DMFT cycle, this condition means that
$N_{\mathcal{C}}^{\mathrm{KS}}=\frac{1}{\beta}\sum_{\omega_{n}}\sum_{\mathbf{k},i\in\mathcal{C}}\frac{1}{N_{\mathbf{k}}}\left[\frac{1}{i\omega_{n}+\mu-\epsilon_{\mathbf{k}}-\bar{\Sigma}(\mathbf{k},i\omega_{n})}\right]_{ii}\,,$ (27)
where $\beta$ is again the inverse temperature $1/{k_{B}T}$. The summation of
imaginary frequency $\omega_{n}$ should run from negative infinity to positive
infinity. However, in realistic calculations, to save computational cost, the
explicit summation over Matsubara frequency points is only carried out within a
frequency window $[-\omega_{N},\omega_{N}]$, where the contribution from the
frequency points outside the window is treated approximately. This is enabled
by making use of the asymptotic behavior of the self-energy, i.e.,
$\lim_{n\to\infty}\bar{\Sigma}_{ii}(\mathbf{k},i\omega_{n})=\bar{\Sigma}_{ii}(\mathbf{k},\infty)$,
where $\bar{\Sigma}_{ii}(\mathbf{k},\infty)$ is a real value. Then Eq. (27) is approximated by
$N_{\mathcal{C}}^{\mathrm{KS}}=\sum_{\mathbf{k},i\in\mathcal{C}}\frac{1}{N_{\mathbf{k}}}\left\{\frac{1}{\beta}\sum_{\omega_{n}=-\omega_{N}}^{\omega_{N}}\left(\left[\frac{1}{i\omega_{n}+\mu-\epsilon_{\mathbf{k}}-\bar{\Sigma}(\mathbf{k},i\omega_{n})}\right]_{ii}-\left[\frac{1}{i\omega_{n}+\mu-\epsilon_{\mathbf{k}}-\bar{\Sigma}(\mathbf{k},\infty)}\right]_{ii}\right)+\frac{1}{1+e^{\beta(\epsilon_{i\mathbf{k}}+\bar{\Sigma}_{ii}(\mathbf{k},\infty)-\mu)}}\right\}.$ (28)
When the chosen cutoff Matsubara frequency $\omega_{N}$ is high enough that the self-energy has reached its asymptotic behavior, this approximation is of good accuracy.
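Determining $\mu$ from Eq. (28) is then a one-dimensional root-finding problem on top of a tail-corrected Matsubara sum. The sketch below treats $\bar{\Sigma}$ as diagonal for brevity and stores only positive frequencies, using $G(-i\omega_{n})=G^{*}(i\omega_{n})$; all names and the bracketing values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def n_electrons(mu, eps, Sigma_diag, Sigma_inf, beta):
    """Electron count of Eq. (28) with the analytic Fermi-function tail.

    eps        : (Nk, Nb) KS eigenvalues inside the window C
    Sigma_diag : (n_freq, Nk, Nb) diagonal upfolded self-energy, positive freqs
    Sigma_inf  : (Nk, Nb) real high-frequency limit of the self-energy
    """
    n_freq = Sigma_diag.shape[0]
    iw = 1j * (2 * np.arange(n_freq) + 1) * np.pi / beta
    g  = 1.0 / (iw[:, None, None] + mu - eps - Sigma_diag)
    g0 = 1.0 / (iw[:, None, None] + mu - eps - Sigma_inf)
    # negative frequencies contribute the complex conjugate -> factor 2 Re
    matsubara = 2.0 * (g - g0).real.sum() / beta
    fermi = expit(-beta * (eps + Sigma_inf - mu))   # overflow-safe 1/(1+e^x)
    return (matsubara + fermi.sum()) / eps.shape[0]

# mu is then located by bracketing the target electron number, e.g.
# mu = brentq(lambda m: n_electrons(m, eps, S, S_inf, beta) - N_target, -10.0, 10.0)
```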
Step 6. With all the required information in place, the local interacting Green function is constructed from Eq. (17), now using the renormalized projector of Eq. (20). Under the DFT+DMFT self-consistency condition, Eqs. (1) and (8), the impurity level and hybridization function matrices are determined by
${\cal E}_{I,mm^{\prime}}^{imp}=-H_{\mathrm{dc},mm^{\prime}}^{I}+\sum_{\mathbf{k},i\in\mathcal{C}}\frac{1}{N_{\mathbf{k}}}\tilde{P}_{ii}^{I}\left(\mathbf{k},mm^{\prime}\right)\epsilon_{i\mathbf{k}}$ (29)
and
$\Delta_{mm^{\prime}}^{I}\left(i\omega_{n}\right)=(i\omega_{n}+\mu)\delta_{mm^{\prime}}-{\cal E}^{imp}_{I,mm^{\prime}}-\Sigma_{mm^{\prime}}^{I}\left(i\omega_{n}\right)-\left[G^{loc}(i\omega_{n})\right]^{-1}_{I,mm^{\prime}}.$ (30)
For some CT-HYB impurity solvers, the imaginary-time hybridization function is needed.
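With the renormalized projector at hand, Eqs. (29) and (30) translate almost line by line into code; a sketch with assumed shapes:

```python
import numpy as np

def impurity_level_and_hybridization(P_tilde, eps, G_loc, Sigma, H_dc, mu, beta):
    """Impurity level of Eq. (29) and hybridization function of Eq. (30).

    P_tilde      : (Nk, Nb, M) renormalized projector overlaps
    eps          : (Nk, Nb) KS eigenvalues in the window C
    G_loc, Sigma : (n_freq, M, M) local Green function and self-energy
    H_dc         : (M, M) double-counting matrix
    """
    Nk, _, M = P_tilde.shape
    E_imp = -H_dc.astype(complex)                       # Eq. (29)
    for k in range(Nk):
        E_imp += P_tilde[k].conj().T @ np.diag(eps[k]) @ P_tilde[k] / Nk
    n_freq = G_loc.shape[0]
    iw = 1j * (2 * np.arange(n_freq) + 1) * np.pi / beta
    Delta = np.empty_like(G_loc)
    for n in range(n_freq):                             # Eq. (30)
        Delta[n] = ((iw[n] + mu) * np.eye(M) - E_imp
                    - Sigma[n] - np.linalg.inv(G_loc[n]))
    return E_imp, Delta
```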
Figure 1: Flow diagram of the major steps in our DFT+DMFT implementation.
Step 7. Solve the impurity problem with the determined $\mu$, ${\cal E}_{I}^{imp}$, $\Delta^{I}\left(i\omega_{n}\right)$, and the given on-site Coulomb interaction using the impurity solver, to obtain the new self-energy and impurity Green function. In this paper, we use the Kanamori form [65, 66] of the Coulomb interaction in which only the density-density terms are included, so there is no sign problem in our CTQMC calculations.
Step 8. Check whether self-consistency is reached. If so, stop the DFT+DMFT calculation; otherwise, return to Step 4.
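To make the control flow of Steps 3-8 explicit, the toy loop below iterates a single-orbital lattice problem to self-consistency. A static Hartree-like shift deliberately stands in for the CT-HYB impurity solver of Step 7; the snippet only illustrates the iteration structure and is in no way a substitute for PACS, iQIST, or the Rutgers solver.

```python
import numpy as np

beta, U, mu, Nk = 20.0, 2.0, 1.0, 64
eps = -2.0 * np.cos(2 * np.pi * np.arange(Nk) / Nk)    # toy 1D band

def occupation(sigma_static):
    # for a static, real self-energy the Matsubara sum of Eq. (28)
    # collapses exactly to Fermi functions
    return np.mean(1.0 / (1.0 + np.exp(beta * (eps + sigma_static - mu))))

sigma, mix = 0.0, 0.5
for it in range(200):                       # Steps 4-8 in miniature
    n = occupation(sigma)                   # density from the lattice problem (Steps 5-6)
    sigma_new = U * (n - 0.5)               # stand-in "impurity solver" (Step 7)
    if abs(sigma_new - sigma) < 1e-10:      # convergence check (Step 8)
        break
    sigma = mix * sigma_new + (1 - mix) * sigma

print(f"converged in {it} iterations: n = {n:.4f}, Sigma = {sigma:.4f}")
```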
## III Results and discussion
### III.1 d-electron systems
We first benchmark our DFT+DMFT implementation on three prototypical strongly
correlated d-electron systems – SrVO3, NiO, and MnO. For the DFT part, we
carried out GGA calculations using two NAO-based code packages – FHI-aims [37]
and ABACUS [38]. In FHI-aims, the default tight tier 1 basis set is used for
V, Ni, Mn, O, and Sr atoms, and the corresponding cutoff radii of the basis
functions are 6.0 Å, 6.0 Å, 6.0 Å, 6.0 Å, and 8.0 Å, respectively. In ABACUS we use
the SG15 optimized norm-conserving Vanderbilt (ONCV) multi-projector pseudo-
potentials [67, 68, 69] and the corresponding optimized double-$\zeta$ plus
polarization (DZP) atomic basis sets, which comprise $4s2p2d1f$ basis
functions with a cutoff radius of 9.0 Bohr for transition metal atoms, and $2s2p1d$ basis functions with cutoff radii of 7.0 Bohr for O atoms and 10.0 Bohr for Sr atoms. In all
DFT calculations, the Perdew-Burke-Ernzerhof (PBE) GGA exchange-correlation
functional was used [70]. As for the DMFT part, we employed three different
CT-HYB impurity solvers to solve the single-site impurity problem. These are PACS, developed by one of the present authors (Gang Li); iQIST, developed by Huang et al. [72, 73, 74]; and the solver used in eDMFT [30], developed by Haule at Rutgers University [62, 75], which is referred to as “Rutgers” below in this paper. (The segment implementation [60] of the CT-QMC impurity solver is part of the PACS@sdf package, Package for Analyzing Correlated Systems with Spatial and Dynamical fluctuations. PACS@sdf aims at providing an integrated framework for the study of strongly correlated models and materials beyond the local approximation of DMFT [7]; it takes DMFT as the zeroth-order approximation and systematically provides non-local corrections to it [Li2015].)
#### III.1.1 SrVO3
SrVO3 has a simple cubic perovskite structure without magnetism. DFT fails to
reproduce the upper and lower Hubbard bands observed in experiments, and this
calls for advanced theoretical and computational techniques [8]. In the past,
this material has been extensively studied both theoretically and
experimentally [25, 76, 77, 78, 79, 80, 81, 82], which provides abundant
reference results. Therefore, SrVO3 is an ideal test example for DFT+DMFT
implementations [24, 26, 27, 66, 46]. The V4+ cation with only one 3d electron
is located at the center of the octahedron formed by its six surrounding O2- ligand anions. In the presence of an octahedral crystal field, the five
degenerate d orbitals split into two subsets: Three-fold degenerate t2g
orbitals (i.e., $d_{xy}$, $d_{yz}$ and $d_{xz}$), and two-fold degenerate eg
orbitals (i.e., $d_{z^{2}}$ and $d_{x^{2}-y^{2}}$). The single 3d electron of V4+ occupies the lower-energy t2g orbitals, leaving the higher-energy eg orbitals empty. As is common practice, we only consider the three degenerate t2g orbitals in our DFT+DMFT calculations.
In the DFT calculations, we use an 11$\times$11$\times$11 ${\bf k}$-point mesh generated by the Monkhorst-Pack method [83]. The Hubbard U and Hund J parameters are set to 4.0 eV and 0.65 eV, respectively, following the choice in the literature [26, 27, 46]. The DMFT calculation is carried out at a temperature of 300 K. DFT calculations give a group of bands around the Fermi level with substantial 3d character, which are well separated from the other bands [26, 27]. We enclose the six (twelve if the spin degree of freedom is taken into account) bands crossing the Fermi level in the subset of KS bands $\mathcal{C}$.
Figure 2: The single-particle spectral function of SrVO3 3d electrons obtained from direct analytical continuation of the impurity Green function. The
results presented in the six panels are obtained by six computational schemes
combining two DFT codes – FHI-aims (upper panels) and ABACUS (lower panels)
and three different impurity solvers – PACS (left panels), iQIST (middle
panels), and Rutgers (right panels).
One of the great strengths of the Green-function-based DFT+DMFT approach is that it can deliver physically meaningful single-particle excitation spectral functions, as given by the imaginary part of the Green function. Such spectral
functions can be directly measured by photoemission and/or inverse
photoemission experiments. In Fig. 2, the spectral functions of 3d electrons
of SrVO3 calculated by combining two NAO-based DFT packages – FHI-aims and
ABACUS, and three impurity solvers – PACS, iQIST and Rutgers are presented.
Despite the considerable differences underlying the implementations of the DFT
codes, as well as the DMFT impurity solvers, the calculated spectral functions
are remarkably similar. Most importantly, they all successfully reproduce the
typical three-peak structure: A quasi-particle band located at the Fermi level
with the lower and upper Hubbard bands on the two sides, arising from the
strong electron correlations which traditional LDA and GGA fail to capture.
The main features of the spectral functions given by our DFT+DMFT calculations
are in good agreement with previous theoretical results except for some small
details such as the exact peak positions and intensities [78, 25, 79, 24, 76,
26, 27, 66, 46]. Our results also correctly describe the strong particle-hole asymmetry of the lower and upper Hubbard bands, as the intensity of the occupied lower Hubbard band is much smaller than that of the empty upper Hubbard band. This is a consequence of the fact that V4+, with only one 3d electron, is far away from half-filling.
Similar to previous DFT+DMFT studies [25, 79, 24, 76, 26, 27, 66, 46], our
spectral function results are in excellent agreement with the experimental
photoemission spectrum [78]. The first main feature lies in the reproduction
of the lower Hubbard band at around -2.0 eV, which is the manifestation of the strong correlation among the 3d electrons. The second feature, in quantitative agreement with experiment, is that the lower Hubbard band vanishes at around -1.0 eV, where the quasiparticle band starts to rise sharply.
Figure 3: The spectral function of MnO 3d electrons obtained from direct
analytical continuation of the impurity Green function. Similar to Fig. 2,
results presented in the six panels are obtained by two different DFT codes
combined with three different impurity solvers. Each panel is labelled by the
specific combination.
#### III.1.2 MnO and NiO
Next we test our DFT+DMFT scheme on the late transition metal oxides MnO and
NiO. Both MnO and NiO crystallize in face-centered cubic (FCC) NaCl-type
structures, where the 3d orbitals are split into threefold degenerate $t_{2g}$ and twofold degenerate $e_{g}$ orbitals, as in the SrVO3 case. For MnO, we adopt the parameters U = 5.0 eV, J = 1.0 eV, and temperature T = 300 K, and the KS bands within the energy window of [-3.0 eV, 3.0 eV] are included in the subset $\mathcal{C}$. For NiO, another set of parameters, with a temperature of 1160 K, U of 8.0 eV, and J of 1.0 eV [6, 84, 85, 46], is used. The energy window enclosing the subset of KS bands $\mathcal{C}$ is chosen to be [-2.0 eV, 1.0 eV].
Figure 4: The single-particle spectral function of NiO 3d electrons calculated from direct analytical continuation of the impurity Green function. The meaning of the six panels is the same as in Figs. 2 and 3.
Figure 5: The ${\bf k}$-resolved spectral function $A(\mathbf{k},\omega)$ of
MnO, obtained by (a) FHI-aims+Rutgers and (b) ABACUS+Rutgers, respectively.
Figure 6: The ${\bf k}$-resolved spectral function $A(\mathbf{k},\omega)$ of NiO,
obtained by (a) FHI-aims+Rutgers and (b) ABACUS+Rutgers, respectively.
MnO and NiO are classical examples that clearly demonstrate the failure of
traditional band theory, e.g., LDA and GGA. LDA and GGA predict MnO and NiO to
be metallic, while experimentally they are wide-gap insulators. Even though a
gap appears if the spin-symmetry-broken antiferromagnetic state is
considered, the calculated gap is still one order of magnitude smaller
than the experimental values. By merging the Hubbard model with ab-initio DFT,
static DFT+U [6, 20, 21] and the more advanced dynamical DFT+DMFT [84, 85]
successfully open the gaps in those prototypical transitional metal oxides.
The obtained band gaps are in quantitative agreement with experiments,
provided proper interaction parameters are used.
We then apply our DFT+DMFT implementation to both MnO and NiO, and the
obtained spectral functions are presented in Fig. 3 and Fig. 4, respectively.
Again in each figure, we present six panels containing results obtained by
combining two NAO-based DFT codes and three DMFT impurity solvers. The most
prominent feature in the DFT+DMFT spectra of both MnO and NiO is that a
sizeable gap is opened up, arising apparently from the strong Coulomb
interactions whose effect is properly captured by DMFT. One can further see
from Fig. 3 that the intensities of the lower and upper Hubbard bands of the
Mn 3d electrons are nearly the same. This result is consistent with the
nominally half-filled 3d orbitals of Mn expected from chemical analysis, and
is confirmed by the fact that the number of Mn 3d electrons given by
$n=-\sum_{m}G^{imp}_{mm}(\beta)$ is 5. Similar to the SrVO3 case, the results
presented in all six panels are very close, except that the peaks given by the
Rutgers impurity solver are slightly more pronounced.
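As a minimal illustration of this occupancy check (a sketch with toy inputs, not part of our production code), the occupation can be read off from the $\tau\to\beta^{-}$ limit of the imaginary-time impurity Green's function:

```python
import numpy as np

# Toy imaginary-time impurity Green's function G_mm'(tau) on a uniform grid
# in [0, beta]; shapes and values are illustrative only.
beta, n_tau, n_orb = 40.0, 1001, 5

# Diagonal G(tau) of a half-filled level at eps = 0:
# G(tau) = -exp(-eps*tau)/(1 + exp(-beta*eps)) = -0.5 for all tau.
G_tau = np.zeros((n_tau, n_orb, n_orb))
for m in range(n_orb):
    G_tau[:, m, m] = -0.5

# Occupation from the tau -> beta^- limit: n = -sum_m G_mm(beta).
n_imp = -np.trace(G_tau[-1])
print(f"impurity occupation n = {n_imp:.2f}")
# 2.5 for this half-filled 5-orbital toy; ~5 for the Mn 3d shell of MnO.
```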
For NiO, the number of nominal $3d$ electrons is approximately 8, and DFT+DMFT
calculations lead to a splitting of the $3d$ bands, with fully occupied
$t_{2g}$ orbitals and half-filled $e_{g}$ orbitals. Our results in general
agree well with previous DFT+DMFT results reported in the literature [84, 85].
However, Fig. 4 also reveals that both the DFT codes and the impurity solvers
have a certain influence on the obtained DFT+DMFT energy spectra. Although there
are no qualitative differences, the shape and width of the left peak, and the
depth of the dip between the first and middle peaks show noticeable
quantitative differences. It is not entirely clear to us yet what factors
caused such differences. We note that, however, NiO is a prototypical charge
transfer insulator [86], and the competition between the strong Coulomb
interactions among the Ni $3d$ electrons and the hybridization between the Ni
$3d$ and O $2p$ electrons governs its underlying physics. Thus a complete
DFT+DMFT treatment of NiO should also include the O $2p$ electrons.
With the O $2p$ excluded, the DFT+DMFT calculations are probably more
sensitive to the details in the band structures and numerical techniques
behind the impurity solvers.
Figures 3 and 4 only contain the spectral information of the 3d electrons. Our
DFT+DMFT implementation also allows for calculating the ${\bf k}$-resolved
spectral function in the KS orbital space, as determined by
$$A(\mathbf{k},\omega)=-\frac{1}{\pi}\sum_{i\in\mathcal{C}}\operatorname{Im}\left[(\omega+\mu-\epsilon({\mathbf{k}}))I-\bar{\Sigma}(\mathbf{k},\omega)\right]^{-1}_{ii},\quad(31)$$
where the real-frequency self-energy is evaluated by analytical continuation
of the imaginary-frequency self-energy through the maximum entropy formalism
of Ref. [87]. The corresponding results for MnO and NiO are presented in Figs.
5 and 6, respectively. For simplicity, only results obtained using the two
DFT codes combined with the Rutgers impurity solver are presented. The
agreement between our theoretical gaps and the experimental gaps [88, 89, 90]
is rather satisfactory. The two ${\bf k}$-resolved spectral functions of MnO
are nearly identical, whereas noticeable differences, in particular regarding
the spectral weights of certain bands, again exist for NiO. Nevertheless, the
main features, e.g., the energy positions and dispersion of the bands, are
reasonably similar in Figs. 6(a) and 6(b).
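For readers implementing Eq. (31), the following Python sketch (with a toy two-band Hamiltonian and a static self-energy; all inputs are illustrative) evaluates $A(\mathbf{k},\omega)$ at one $\mathbf{k}$ point by direct matrix inversion, treating $\epsilon(\mathbf{k})$ as the KS Hamiltonian block within the subset $\mathcal{C}$:

```python
import numpy as np

def spectral_function(omega, mu, eps_k, sigma_k, eta=0.01):
    """Eq. (31): A(k,w) = -1/pi sum_i Im [(w + mu - eps(k))I - Sigma(k,w)]^{-1}_ii."""
    n = eps_k.shape[0]
    # Retarded lattice Green's function with a small broadening eta.
    G = np.linalg.inv((omega + 1j * eta + mu) * np.eye(n) - eps_k - sigma_k)
    return -np.trace(G).imag / np.pi

# Toy two-band example with a static (frequency-independent) self-energy.
eps_k = np.diag([-1.0, 1.0])
sigma = np.diag([0.2 - 0.05j, 0.2 - 0.05j])
grid = np.linspace(-3.0, 3.0, 601)
A = np.array([spectral_function(w, 0.0, eps_k, sigma) for w in grid])
print(f"integrated spectral weight ~ {np.trapz(A, grid):.2f}")  # ~ 2 bands
```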
NiO has been extensively studied experimentally, and reliable experimental
photoemission spectra are available. In Fig. 7, we directly compare our
DFT+DMFT spectrum of NiO with the experimental spectrum of Sawatzky and Allen
[89] measured by x-ray-photoemission (XPS) and bremsstrahlung-isochromat-
spectroscopy (BIS) techniques. Again, the two sets of theoretical spectra are
obtained using FHI-aims and ABACUS codes interfaced with the Rutgers impurity
solver, respectively. For the spectrum below the Fermi level, corresponding to
the (negative of) energy cost for removing a particle from the occupied
levels, we reproduce the sharp peak around $-2.0\text{\,}\mathrm{eV}$ and a
local minimum at $-3.0\text{\,}\mathrm{eV}$ given by XPS. For the spectrum
above the Fermi level, corresponding to the (negative of) energy cost for
adding a particle to the system, our curves match perfectly with the BIS
result. Although small deviations at the shoulder peak around
$-3.5\text{\,}\mathrm{eV}$ are visible, our DFT+DMFT spectrum, in general,
agrees well with previous theoretical work [84] and the experimental data [89].
Figure 7: Comparison of theoretical and experimental spectral functions of
NiO. The theoretical results are given by the imaginary part of the impurity
Green function on the real frequency axis, analytically continued from the
Matsubara impurity Green function. The experimental data are obtained from
XPS+BIS measurements, taken from Ref. [89].
In ABACUS, the pseudopotential method is used and only the valence electrons
are explicitly included. Furthermore, as noted at the beginning of this
section, the spatial cutoffs of the NAOs used in ABACUS are much smaller than
those used in FHI-aims. By contrast, FHI-aims is a full-potential code in which
all electrons are included in the DFT calculations. Although these two NAO
schemes are rather different, we obtain nearly identical DFT+DMFT results for
SrVO3 and MnO, and reasonably similar results for NiO. This demonstrates the
efficacy and robustness of our DFT+DMFT formalism and implementation. We expect that
the DFT+DMFT formalism presented in this paper should work for other NAO-based
DFT codes as well.
### III.2 f-electron systems
The f-electron systems, including 4f lanthanides and 5f actinides, are an
important class of strongly correlated materials characterized by partially
filled f-type orbitals. These systems exhibit a variety of exotic phenomena,
such as the heavy-fermion behavior, metal-insulator transition, Kondo physics,
volume collapses accompanying phase transitions, etc. It has been generally
accepted that traditional DFT calculations based on static mean-field-type
approximations such as LDA and GGAs do not provide adequate accuracy for
describing these physical scenarios. DFT+DMFT has proved to be a powerful
approach to tackle these systems and achieved remarkable successes in the last
two decades. In this section we test our DFT+DMFT implementation on several
typical f-electron systems, including 4f systems like $\alpha$ and $\gamma$
cerium (Ce) metal and Ce2O3, and 5f systems like PuO2 and Pu2O3.
DFT calculations are carried out using FHI-aims with the default tight tier1
basis sets. For these basis sets, the cutoff radii of Ce, Pu, and O elements
are $7\text{\,}\mathrm{\SIUnitSymbolAngstrom}$,
$6.0\text{\,}\mathrm{\SIUnitSymbolAngstrom}$, and
$6.0\text{\,}\mathrm{\SIUnitSymbolAngstrom}$, respectively. Analogous to the
case of the $3d$-electron systems, the GGA-PBE functional [70] is used in the DFT
calculations, and an 11$\times$11$\times$11 Monkhorst-Pack ${\bf k}$-point mesh
[83] is used for Brillouin-zone integration, which is deemed dense enough to
obtain accurate DFT band structures. Unlike the previous sections, where
results obtained using three DMFT impurity solvers are shown for comparison,
here for simplicity only the results obtained using the Rutgers solver
are presented.
#### III.2.1 Ce metal
Despite its simple FCC structure and the fact that only one 4f valence
electron is present per Ce atom, Ce exhibits spectacular physical properties
that have attracted considerable research interest over the past several
decades [91, 92, 93, 94, 95, 96, 97, 66, 98, 99]. At low temperature and ambient
pressure, Ce crystallizes in the $\alpha$ phase (smaller volume), where the
system is paramagnetic and shows Pauli-like magnetic susceptibility without
forming local magnetic moments. At high temperature, Ce transforms into the
$\gamma$ phase (larger volume), which instead carries local magnetic moments
and exhibits Curie-Weiss behavior of magnetic susceptibility. When increasing
pressure or decreasing temperature, Ce undergoes the famous isostructural
$\gamma$-$\alpha$ phase transition accompanied by a 14% volume collapse and a
drastic change of magnetic properties [93, 94, 95, 96, 97, 66, 98, 99]. A
theoretical understanding of this behavior poses a great challenge to
condensed matter physics and has motivated numerous experimental and
theoretical investigations. From the perspective of first-principles
computations, standard DFT calculations are unable to give a proper
description of this phase transition, and approaches going beyond
conventional DFT are required.
In this paper, the crystal structures of the $\alpha$ and $\gamma$ Ce phases are
distinguished by their different volumes, i.e.,
$29\text{\,}\mathrm{\SIUnitSymbolAngstrom}^{3}$ for the $\alpha$ phase and
$34\text{\,}\mathrm{\SIUnitSymbolAngstrom}^{3}$ for the $\gamma$ phase,
which cover the volume range of the $\alpha$-$\gamma$ transition [94].
The chosen energy window for KS subset $\mathcal{C}$ is
[$-5.0\text{\,}\mathrm{eV}$, $5.0\text{\,}\mathrm{eV}$]. The Coulomb
interaction parameter U given by constrained DFT is $6.0\text{\,}\mathrm{eV}$
[66, 98]; we adopt this value together with a Hund parameter J of
$0.5\text{\,}\mathrm{eV}$ [96, 98]. The DFT+DMFT calculations are
performed at a temperature of $800\text{\,}\mathrm{K}$.
Figure 8: Comparison of theoretical and experimental spectral functions of
$\alpha$ and $\gamma$ Ce. The DFT+DMFT calculation is carried out by FHI-aims
interfaced with the Rutgers impurity solver and the corresponding spectral
results are given by direct analytical continuation of the impurity Green
function. The resonant inverse photoemission spectroscopies (RIPES) are taken
from Ref. [91], and the photoemission (PE) spectra are taken from Ref. [92].
Our DFT+DMFT spectra for Ce are shown in Fig. 8. First of all, the energy
spectra of both phases show the characteristic three-peak structures of
strongly correlated metals. One can also see that, going from the $\alpha$
phase to the $\gamma$ phase, the central quasiparticle peak gets substantially
suppressed, and its spectral weight transfers to the lower and upper Hubbard
bands. Such results are in agreement with experimental observations [91, 92]
and previous DFT+DMFT calculations for Ce [94, 95, 96, 98]. The relative
strengths of the quasi-particle peak and the upper Hubbard peak of both
$\alpha$ and $\gamma$ phases are also captured qualitatively. However, a
direct comparison of the calculated spectra with the experimental spectra
reveals that the energy separation between the quasi-particle peak and upper
Hubbard band is underestimated in our calculations, while that between the
lower Hubbard band and quasi-particle peak is overestimated. Compared to
previous DFT+DMFT calculations, the location of the lower Hubbard peak in our
results is close to that reported in the literature [94, 98], where the maximum
of the lower Hubbard peak lies at around $-4.0\sim-3.0\text{\,}\mathrm{eV}$.
Concerning the energy positions of the upper Hubbard band, we found that the
results reported in the literature also vary quite a bit, with those reported
in Ref. [98] agreeing best with experiment. Our results are close to those
reported in Ref. [96], where the upper Hubbard peaks are located at around
$2.5\text{\,}\mathrm{eV}$. In general, our DFT+DMFT spectra are in reasonable
agreement with experimental and previous theoretical results. The remaining
discrepancies indicate that the details of the DFT calculations, the definition
of the projector, as well as the numerical implementation of the impurity
solvers still have an appreciable influence on the outcomes of DFT+DMFT
calculations. Further investigations along these lines are needed.
#### III.2.2 Ce2O3
Ce2O3 is a Mott insulator with a gap of about $2.4\text{\,}\mathrm{eV}$ [100,
22] arising from the strong correlation among the 4f electrons. This system has
been studied using a variety of approaches such as DFT+U, GW [101], and
DFT+DMFT [22, 28]. Ce2O3 crystallizes in a hexagonal lattice with space group
$P\bar{3}m1$. The experimental lattice parameters
$a=b=3.891\text{\,}\mathrm{\SIUnitSymbolAngstrom}$ and
$c=6.059\text{\,}\mathrm{\SIUnitSymbolAngstrom}$ [102] are used in our
calculations. The energy window for KS subset $\mathcal{C}$ is
[$-3.0\text{\,}\mathrm{eV}$, $2.0\text{\,}\mathrm{eV}$]. Constrained DFT
calculations predict the parameter U to vary from 5.5 to
$8.0\text{\,}\mathrm{eV}$; we adopt $U=6.5\text{\,}\mathrm{eV}$ and
$J=0.5\text{\,}\mathrm{eV}$. The temperature is set to
$300\text{\,}\mathrm{K}$.
Figure 9: DOS of Ce2O3. The DFT+DMFT calculation is carried out by combining
FHI-aims and the Rutgers impurity solver.
Here we calculate the total density of states (TDOS) of Ce2O3 through
$$\rho(\omega)=\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}A(\mathbf{k},\omega),\quad(32)$$
where $A(\mathbf{k},\omega)$ is defined in Eq. (31). The result is shown in
Fig. 9; our DFT+DMFT calculation gives a band gap of about
$2.5\text{\,}\mathrm{eV}$, which agrees well with the experimental and
previous theoretical results [100, 22, 101, 28]. For example, all predict that
the lower Hubbard peak is sharp and narrow whereas the upper Hubbard peak
is strong and wide. An interesting feature of the upper Hubbard part is
that it emerges as a low plateau between 1 and 3 eV above the Fermi level,
followed by a pronounced peak between 3 and 4 eV. Note that the peaks mainly
composed of O p bands sitting below the lower Hubbard peak do not show up
here because those O p characteristic bands are not included in
the subset $\mathcal{C}$ in our DFT+DMFT calculations, whereas they are
retained in Refs. [22, 101, 28].
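As a sketch of how Eq. (32) is evaluated in practice (the arrays stand in for the actual $A(\mathbf{k},\omega)$ of Eq. (31); the toy gap width is chosen to mimic the $\sim 2.5\text{\,}\mathrm{eV}$ result quoted above):

```python
import numpy as np

# Stand-in for A(k, w): n_k Brillouin-zone points on an n_w frequency grid.
n_k, n_w = 1331, 601
omega = np.linspace(-6.0, 6.0, n_w)
A_kw = np.random.rand(n_k, n_w)          # placeholder for Eq. (31) output
A_kw[:, np.abs(omega) < 1.25] = 0.0      # toy ~2.5 eV gap around E_F

# Eq. (32): TDOS as the Brillouin-zone average of A(k, w).
tdos = A_kw.mean(axis=0)

# Read off the gap as the window of vanishing TDOS around omega = 0.
in_gap = tdos < 1e-8
gap = omega[in_gap].max() - omega[in_gap].min()
print(f"estimated band gap ~ {gap:.1f} eV")
```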
#### III.2.3 Pu2O3 and PuO2
Plutonium (Pu)-based oxides such as Pu2O3 and PuO2 are essential for the fuel
components in current nuclear reactors, as well as transmutation of the minor
actinides from spent nuclear fuels [103, 104]. A clear understanding of the
physico-chemical properties of plutonium-based oxides is of key importance for
the safe operation and development of nuclear reactor systems and nuclear
waste reprocessing [105, 106], and the correct description of the oxidation
and reduction processes [107, 108]. The physical, chemical, and
thermodynamical properties of Pu-based oxides such as the chemical bonding and
electronic structure are intimately related to the states of strongly
correlated 5f electrons. Conversely, the relative tendency of delocalization
versus localization of strongly correlated 5f electrons is extremely sensitive
to the physical and chemical environment of the Pu atom [109]. The description
of complicated behaviors of 5f electrons, e.g., whether they are settled in
the delocalized or localized states, or in the intermediate regime, is out of
the reach of standard DFT. In contrast, DFT+DMFT is emerging as a promising
approach that facilitates an in-depth understanding of this class of materials.
In this paper, we study two plutonium oxides, i.e., Pu2O3 in its $\beta$
phase [110] (simply called Pu2O3 below) and PuO2 [111], with our DFT+DMFT
implementation. We use the Hubbard parameter U of $4.0\text{\,}\mathrm{eV}$
and the Hund parameter J of $0.5\text{\,}\mathrm{eV}$ for Pu2O3. As for PuO2,
U and J are chosen to be $5.0\text{\,}\mathrm{eV}$ and
$0.6\text{\,}\mathrm{eV}$. The electronic temperatures for both systems are
set at $300\text{\,}\mathrm{K}$. The chosen energy windows for the KS subsets
$\mathcal{C}$ are [$-3.0\text{\,}\mathrm{eV}$, $2.0\text{\,}\mathrm{eV}$] for
Pu2O3, and [$-3.0\text{\,}\mathrm{eV}$, $3.0\text{\,}\mathrm{eV}$] for PuO2.
We calculate the DFT+DMFT TDOS of Pu2O3 with the method introduced in the case
of Ce2O3. The TDOS contains contributions from both the correlated
orbitals and the remaining (here $spd$) orbitals. The projected density of states
(PDOS) of the correlated ($5f$) orbitals is evaluated by
$$\rho_{f}(\omega)=\sum_{m}\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}\left(-\frac{1}{\pi}\right)\operatorname{Im}\left\{\tilde{P}^{I}_{ij}(\mathbf{k},mm)\left[(\omega+\mu-\epsilon({\mathbf{k}}))I-\bar{\Sigma}(\mathbf{k},\omega)\right]^{-1}_{ij}\right\},\quad(33)$$
whereas the PDOS for the $spd$ orbitals is obtained by taking the difference
between the TDOS and $\rho_{f}(\omega)$. The results are depicted in Fig. 10.
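A compact sketch of Eq. (33) follows. The array names and shapes are assumptions made for illustration; in an actual calculation the projector $\tilde{P}^{I}$ and the upfolded self-energy would come from the implementation described above:

```python
import numpy as np

def pdos_f(omega_grid, mu, eps_k, sigma_kw, proj, eta=0.05):
    """PDOS of the correlated orbitals per Eq. (33); inputs are illustrative.

    eps_k    : (n_k, n_b, n_b)       KS Hamiltonian blocks in subset C
    sigma_kw : (n_k, n_w, n_b, n_b)  upfolded real-frequency self-energy
    proj     : (n_k, n_b, n_b, n_m)  projector P~^I_ij(k, mm) onto the f shell
    """
    n_k, n_b = eps_k.shape[0], eps_k.shape[1]
    rho = np.zeros(len(omega_grid))
    eye = np.eye(n_b)
    for ik in range(n_k):
        for iw, w in enumerate(omega_grid):
            # lattice Green's function [(w + mu - eps(k))I - Sigma(k, w)]^{-1}
            G = np.linalg.inv((w + 1j * eta + mu) * eye
                              - eps_k[ik] - sigma_kw[ik, iw])
            # contract band indices i, j and orbital index m as in Eq. (33)
            rho[iw] += -np.einsum('ijm,ij->', proj[ik], G).imag / np.pi
    return rho / n_k

# Toy call: 3 bands, of which the first 2 carry the correlated character.
n_k, n_w, n_b, n_m = 4, 11, 3, 2
w_grid = np.linspace(-2.0, 2.0, n_w)
eps_k = np.stack([np.diag([-0.5, 0.0, 0.5])] * n_k)
sigma_kw = np.zeros((n_k, n_w, n_b, n_b), dtype=complex)
proj = np.zeros((n_k, n_b, n_b, n_m))
for m in range(n_m):
    proj[:, m, m, m] = 1.0
rho_f = pdos_f(w_grid, mu=0.0, eps_k=eps_k, sigma_kw=sigma_kw, proj=proj)
# The spd PDOS then follows by subtraction: rho_spd = tdos - rho_f.
print(rho_f.round(3))
```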
Figure 10 indicates that the band gap of Pu2O3 as determined by DFT+DMFT is
about $1.7\text{\,}\mathrm{eV}$, which is in good agreement with the
experimental band gap of $1.8\text{\,}\mathrm{eV}$ [112, 28]. Additionally,
the occupation analysis via $n=-\sum_{m}G^{imp}_{mm}(\beta)$ yields an average
occupation number $n_{f}=5.0$ for the Pu 5f electrons, which is consistent
with the chemical environment of Pu3+ in Pu2O3.
Figure 10: TDOS and PDOS of Pu2O3. The TDOS is given by Eq. (32). The PDOS of
Pu 5f electrons is evaluated through Eq. (33). The PDOS for spd electrons is
obtained by subtracting the PDOS of Pu 5f electrons from the TDOS. The
DFT+DMFT calculation is carried out by combining FHI-aims and the Rutgers
impurity solver.
In Fig. 11, the DFT+DMFT TDOS of PuO2 is plotted. The band gap is predicted
to be about $2.5\text{\,}\mathrm{eV}$, which is in good agreement with
previous theoretical work [113] and the experimental band gap of
$2.8\text{\,}\mathrm{eV}$ [114]. In PuO2, the peak of the occupied Pu 5f
electrons is noticeably sharper than that of Pu2O3, which indicates that the
Pu 5f electrons in PuO2 are likely more localized than in Pu2O3. This
localization picture is consistent with the larger gap in PuO2. The
occupation analysis gives an average occupation number $n_{f}=4.0$ for the Pu
5f electrons, which agrees with the chemical state of Pu4+ in PuO2.
Figure 11: DFT+DMFT TDOS of PuO2. The DFT+DMFT calculation is carried out by
combining FHI-aims and the Rutgers impurity solver.
## IV Summary
In summary, we developed and implemented a formalism that allows us to carry
out DFT+DMFT calculations within the NAO basis set framework. For transition
metal compounds and $f$-electron systems, the most localized $d$ or $f$-type
NAO is used to define the local correlated subspace. Such a choice is
physically intuitive and implementationally convenient for NAO basis sets.
Following common practice in the literature, only a subset $\mathcal{C}$ of
the KS bands around the Fermi energy, which hosts the majority of the strongly
correlated electrons, is enclosed in the
definition of the projector. Our projector scheme is mathematically equivalent
to the projective-Wannier-function approach adopted in Ref. [24]. We
implemented such a DFT+DMFT formalism by interfacing two NAO-based DFT codes,
i.e., FHI-aims [37] and ABACUS [38] with three DMFT quantum impurity solvers,
i.e., PACS, iQIST [72, 73, 74], and the Rutgers impurity solver [30, 62, 75].
In particular, interfacing with the all-electron FHI-aims code allows one to
study all types of strongly correlated materials over the periodic table.
Our DFT+DMFT formalism and implementation are tested on three typical
series of strongly correlated materials, namely the 3d transition metal
compounds SrVO3, MnO and NiO, the 4f materials Ce metal and Ce2O3,
as well as the 5f actinides Pu2O3 and PuO2. For SrVO3 and MnO, our calculated
one-electron removal and addition spectra are in good agreement with
previously reported DFT+DMFT results and experimental data. Furthermore, the
calculated results are rather robust against the use of different DFT codes or
different impurity solvers. For NiO, the obtained DFT+DMFT results show
noticeable dependence on the chosen DFT codes and/or impurity solvers,
although the key spectroscopic features are captured by all calculations. For
$f$-electron systems, the characteristic three-peak structures are obtained
for the Ce metal, whereas for the correlated insulators, the obtained DFT+DMFT
band gaps are in good agreement with experiments. However, there remain issues
calling for further investigations, like the energy separation of the
quasiparticle peak and the upper Hubbard band for the Ce metal.
Admittedly, our scheme is still in its infancy, and at the quantitative level
there are still issues to be sorted out. Further improvements will be necessary
for more reliable descriptions of charge-transfer-type Mott insulators and of
the intricate lanthanides and actinides. However, we consider our attempt
at developing an infrastructure that merges NAO-based DFT codes and DMFT-based
techniques to be rather rewarding. This will not only enable standard DFT+DMFT
calculations within the NAO basis set framework, but also pave the way for
developing more advanced schemes by combining beyond-DFT approaches like
hybrid functionals [40, 115, 41] or $GW$ [42], recently available in NAO-based
DFT codes, with the DMFT machinery.
###### Acknowledgements.
This work is supported by National Natural Science Foundation of China (Grant
Nos. 12134012, 11874335, 11874263), the Strategic Priority Research Program of
Chinese Academy of Sciences (Grant No. XDPB25), and the Max Planck Partner
Group for Advanced Electronic Structure Methods. We thank Li Huang for valuable
help with the impurity solver iQIST [72, 73, 74], E. Gull and S. Iskakov for
help with the impurity solver ALPS-CTHYB-SEGMENTS [116, 117, 118], H. Shinaoka
for help with the impurity solver ALPS-CTHYB [119, 120], and M. J. Han and
J. H. Sim for help with the DFT+DMFT package DMFT-pack [46, 121].
## References
* [1] Paul R. C. Kent and Gabriel Kotliar. Toward a predictive theory of correlated materials. Science, 361(6400):348–354, jul 2018.
* [2] Arpita Paul and Turan Birol. Applications of DFT+DMFT in Materials Science. Annu. Rev. Mater. Res., 49(1):31–52, jul 2019.
* [3] Ran Adler, Chang-Jong Kang, Chuck-Hou Yee, and Gabriel Kotliar. Correlated materials design: prospects and challenges. Reports Prog. Phys., 82(1):012504, jan 2019.
* [4] P. Hohenberg and W. Kohn. Inhomogeneous electron gas. Phys. Rev., 136:B864–B871, 1964.
* [5] W. Kohn and L. J. Sham. Self-Consistent Equations Including Exchange and Correlation Effects. Phys. Rev., 140(4A):A1133–A1138, 1965.
* [6] V. I. Anisimov, J. Zaanen, and O. K. Andersen. Band theory and Mott insulators: Hubbard U instead of Stoner I. Phys. Rev. B, 44(3):943–954, 1991.
* [7] Antoine Georges, Gabriel Kotliar, Werner Krauth, and Marcelo J. Rozenberg. Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions. Rev. Mod. Phys., 68(1):13–125, 1996.
* [8] G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti. Electronic structure calculations with dynamical mean-field theory. Rev. Mod. Phys., 78(3):865–951, 2006.
* [9] K. Held. Electronic structure calculations using dynamical mean field theory. Adv. Phys., 56(6):829–926, 2007.
* [10] Walter Metzner and Dieter Vollhardt. Correlated Lattice Fermions in $d$=$\infty$ Dimensions. Phys. Rev. Lett., 62(3):324–327, 1989.
* [11] Fusayoshi J. Ohkawa. Electron Correlation in the Hubbard Model in $d$=$\infty$ Dimension. J. Phys. Soc. Jpn., 60(10):3218–3221, 1991.
* [12] Antoine Georges and Gabriel Kotliar. Hubbard model in infinite dimensions. Phys. Rev. B, 45(12):6479–6483, mar 1992.
* [13] M. Jarrell. Hubbard model in infinite dimensions: A quantum Monte Carlo study. Phys. Rev. Lett., 69(1):168–171, 1992.
* [14] Vladimir I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein. First-principles calculations of the electronic structure and spectra of strongly correlated systems: the LDA + U method. J. Phys. Condens. Matter, 9(4):767–808, 1997.
* [15] A. I. Lichtenstein and M. I. Katsnelson. Ab initio calculations of quasiparticle band structure in correlated systems: LDA++ approach. Phys. Rev. B, 57(12):6884–6895, 1998.
* [16] S. Y. Savrasov, G. Kotliar, and E. Abrahams. Correlated electrons in $\delta$-plutonium within a dynamical mean-field picture. Nature, 410(6830):793–795, 2001.
* [17] X. Dai, S. Y. Savrasov, G. Kotliar, A. Migliori, H. Ledbetter, and E. Abrahams. Calculated phonon spectra of plutonium at high temperatures. Science, 300(5621):953–955, 2003.
* [18] J. H. Shim, K. Haule, and G. Kotliar. Fluctuating valence in a correlated solid and the anomalous properties of $\delta$-plutonium. Nature, 446(7135):513–516, 2007.
* [19] J. H. Shim, K. Haule, and G. Kotliar. Modeling the Localized-to-Itinerant Electronic Transition in the Heavy Fermion System CeIrIn5. Science, 318(5856):1615–1617, dec 2007.
* [20] V. I. Anisimov, I. V. Solovyev, M. A. Korotin, M. T. Czyżyk, and G. A. Sawatzky. Density-functional theory and NiO photoemission spectra. Phys. Rev. B, 48(23):16929–16934, 1993.
* [21] Vladimir I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein. First-principles calculations of the electronic structure and spectra of strongly correlated systems: the LDA+U method. J. Phys. Condens. Matter, 9(4):767–808, 1997.
* [22] L. V. Pourovskii, B. Amadon, S. Biermann, and A. Georges. Self-consistency over the charge density in dynamical mean-field theory: A linear muffin-tin implementation and some physical implications. Phys. Rev. B, 76(23):235101, dec 2007.
* [23] O. Grånäs, I. Di Marco, P. Thunström, L. Nordström, O. Eriksson, T. Björkman, and J.M. Wills. Charge self-consistent dynamical mean-field theory based on the full-potential linear muffin-tin orbital method: Methodology and applications. Comput. Mater. Sci., 55:295–302, apr 2012.
* [24] V. I. Anisimov, D. E. Kondakov, A. V. Kozhevnikov, I. A. Nekrasov, Z. V. Pchelkina, J. W. Allen, S.-K. Mo, H.-D. Kim, P. Metcalf, S. Suga, A. Sekiyama, G. Keller, I. Leonov, X. Ren, and D. Vollhardt. Full orbital calculation scheme for materials with strongly correlated electrons. Phys. Rev. B, 71(12):125119, mar 2005.
* [25] E. Pavarini, S. Biermann, A. Poteryaev, A. I. Lichtenstein, A. Georges, and O. K. Andersen. Mott transition and suppression of orbital fluctuations in orthorhombic $3d^{1}$ perovskites. Phys. Rev. Lett., 92(17):176403, 2004.
* [26] F. Lechermann, A. Georges, A. Poteryaev, S. Biermann, M. Posternak, A. Yamasaki, and O. K. Andersen. Dynamical mean-field theory using Wannier functions: A flexible route to electronic structure calculations of strongly correlated materials. Phys. Rev. B, 74(12):125120, 2006.
* [27] B. Amadon, F. Lechermann, A. Georges, F. Jollet, T. O. Wehling, and A. I. Lichtenstein. Plane-wave based electronic structure calculations for correlated materials using dynamical mean-field theory and projected local orbitals. Phys. Rev. B, 77(20), 2008.
* [28] B. Amadon. A self-consistent DFT+DMFT scheme in the projector augmented wave method: applications to cerium, Ce2O3 and Pu2O3 with the Hubbard I solver and comparison to DFT+U. J. Phys. Condens. Matter, 24(7):075604, 2012.
* [29] Markus Aichhorn, Leonid Pourovskii, Veronica Vildosola, Michel Ferrero, Olivier Parcollet, Takashi Miyake, Antoine Georges, and Silke Biermann. Dynamical mean-field theory within an augmented plane-wave framework: Assessing electronic correlations in the iron pnictide LaFeAsO. Phys. Rev. B, 80(8):085101, aug 2009.
* [30] Kristjan Haule, Chuck-Hou Yee, and Kyoo Kim. Dynamical mean-field theory within the full-potential methods: Electronic structure of CeIrIn5, CeCoIn5, and CeRhIn5. Phys. Rev. B, 81(19):195107, may 2010.
* [31] Dominika Zgid and Garnet Kin-Lic Chan. Dynamical mean-field theory from a quantum chemical perspective. J. Chem. Phys., 134(9):094115, mar 2011.
* [32] Wael Chibani, Xinguo Ren, Matthias Scheffler, and Patrick Rinke. Self-consistent Green’s function embedding for advanced electronic structure methods based on a dynamical mean-field concept. Phys. Rev. B, 93(16):165106, apr 2016.
* [33] Tianyu Zhu and Garnet Kin-Lic Chan. Ab Initio Full Cell GW+DMFT for Correlated Materials. Phys. Rev. X, 11(2):021006, apr 2021.
* [34] José M. Soler, Emilio Artacho, Julian D. Gale, Alberto García, Javier Junquera, Pablo Ordejón, and Daniel Sánchez-Portal. The SIESTA method for ab initio order-$N$ materials simulation. J. Phys. Condens. Matter, 14(11):2745–2779, mar 2002.
* [35] T. Ozaki, H. Kino, J. Yu, M. Han, N. Kobayashi, M. Ohfuti, F. Ishii, and T. Ohwaki. User’s manual of OpenMX, http://www.openmx-square.org, 2008.
* [36] B. Delley. From molecules to solids with the DMol3 approach. J. Chem. Phys., 113(18):7756–7764, nov 2000.
* [37] Volker Blum, Ralf Gehrke, Felix Hanke, Paula Havu, Ville Havu, Xinguo Ren, Karsten Reuter, and Matthias Scheffler. Ab initio molecular simulations with numeric atom-centered orbitals. Comput. Phys. Commun., 180(11):2175–2196, nov 2009.
* [38] Pengfei Li, Xiaohui Liu, Mohan Chen, Peize Lin, Xinguo Ren, Lin Lin, Chao Yang, and Lixin He. Large-scale ab initio simulations based on systematically improvable atomic basis. Comput. Mater. Sci., 112:503–517, 2016.
* [39] Xinguo Ren, Patrick Rinke, Volker Blum, Jürgen Wieferink, Alexandre Tkatchenko, Andrea Sanfilippo, Karsten Reuter, and Matthias Scheffler. Resolution-of-identity approach to Hartree–Fock, hybrid density functionals, RPA, MP2 and GW with numeric atom-centered orbital basis functions. New J. Phys., 14(5):053020, may 2012.
* [40] Sergey V. Levchenko, Xinguo Ren, Jürgen Wieferink, Rainer Johanni, Patrick Rinke, Volker Blum, and Matthias Scheffler. Hybrid functionals for large periodic systems in an all-electron, numeric atom-centered basis framework. Comput. Phys. Commun., 192:60–69, 2015.
* [41] P. Lin, X. Ren, and L. He. Efficient hybrid density functional calculations for large periodic systems using numerical atomic orbitals. J. Chem. Theory Comput., 17(1):222–239, 2021.
* [42] Xinguo Ren, Florian Merz, Hong Jiang, Yi Yao, Markus Rampp, Hermann Lederer, Volker Blum, and Matthias Scheffler. All-electron periodic $g_{0}w_{0}$ implementation with numerical atomic orbital basis functions: Algorithm and benchmarks. Phys. Rev. Mater., 5(1):013807, 2021.
* [43] Myung Joon Han, Taisuke Ozaki, and Jaejun Yu. O$\left(N\right)$ LDA+U electronic structure calculation method based on the nonorthogonal pseudoatomic orbital basis. Phys. Rev. B, 73(4):045110, 2006.
* [44] M. Kick, K. Reuter, and H. Oberhofer. Intricacies of DFT+U, Not Only in a Numeric Atom Centered Orbital Framework. J. Chem. Theory Comput., 15(3):1705–1718, 2019.
* [45] Xin Qu, Peng Xu, Hong Jiang, Lixin He, and Xinguo Ren. DFT+U within the framework of linear combination of numerical atomic orbitals. arXiv e-prints, page arXiv:2202.05409, 2022.
* [46] J. H. Sim and M. J. Han. Density functional theory plus dynamical mean-field theory with natural atomic orbital projectors. Phys. Rev. B, 100(11):115151, 2019.
* [47] Dieter Vollhardt. Dynamical mean-field theory of electronic correlations in models and materials. AIP Conf. Proc., 1297:339–403, 2010.
* [48] J. E. Hirsch and R. M. Fye. Monte carlo method for magnetic impurities in metals. Phys. Rev. Lett., 56(23):2521–2524, 1986.
* [49] M. J. Rozenberg, X. Y. Zhang, and G. Kotliar. Mott-hubbard transition in infinite dimensions. Phys. Rev. Lett., 69(8):1236–1239, 1992.
* [50] A. Georges and W. Krauth. Numerical solution of the $d$=$\infty$ Hubbard model: evidence for a Mott transition. Phys. Rev. Lett., 69(8):1240–1243, 1992.
* [51] Th Pruschke, D. L. Cox, and M. Jarrell. Transport Properties of the Infinite-Dimensional Hubbard Model. Europhys. Lett., 21(5):593–598, feb 1993.
* [52] Th. Pruschke, D. L. Cox, and M Jarrell. Hubbard model at infinite dimensions: Thermodynamic and transport properties. Phys. Rev. B, 47(7):3553–3565, feb 1993.
* [53] M. Jarrell and T. Pruschke. Anomalous properties of the Hubbard model in infinite dimensions. Phys. Rev. B, 49(2):1458–1461, 1994.
* [54] K. Haule, S. Kirchner, J. Kroha, and P. Wölfle. Anderson impurity model at finite Coulomb interaction U : Generalized noncrossing approximation. Phys. Rev. B, 64(15):155111, sep 2001.
* [55] Kristjan Haule, Viktor Oudovenko, Sergej Y. Savrasov, and Gabriel Kotliar. The $\alpha\rightarrow\gamma$ Transition in Ce: A Theoretical View from Optical Spectroscopy. Phys. Rev. Lett., 94(3):036401, jan 2005.
* [56] Q. Si, M. J. Rozenberg, G. Kotliar, and A. E. Ruckenstein. Correlation induced insulator to metal transitions. Phys. Rev. Lett., 72(17):2761–2764, 1994.
* [57] M. Caffarel and W. Krauth. Exact diagonalization approach to correlated fermions in infinite dimensions: Mott transition and superconductivity. Phys. Rev. Lett., 72(10):1545–1548, 1994.
* [58] O. Sakai and Y. Kuramoto. Application of the numerical renormalization group method to the hubbard model in infinite dimensions. Solid State Commun., 89(4):307–311, jan 1994.
* [59] R. Bulla, A. C. Hewson, and Th Pruschke. Numerical renormalization group calculations for the self-energy of the impurity anderson model. J. Phys. Condens. Matter, 10(37):8365–8380, 1998.
* [60] P. Werner, A. Comanac, L. De’ Medici, M. Troyer, and A. J. Millis. Continuous-time solver for quantum impurity models. Phys. Rev. Lett., 97(7):076405, 2006.
* [61] Emanuel Gull, Philipp Werner, Andrew Millis, and Matthias Troyer. Performance analysis of continuous-time solvers for quantum impurity models. Phys. Rev. B, 76(23), 2007.
* [62] K. Haule. Quantum Monte Carlo impurity solver for cluster dynamical mean-field theory and electronic structure calculations with adjustable cluster base. Phys. Rev. B, 75(15):155113, 2007.
* [63] Emanuel Gull, Andrew J. Millis, Alexander I. Lichtenstein, Alexey N. Rubtsov, Matthias Troyer, and Philipp Werner. Continuous-time monte carlo methods for quantum impurity models. Rev. Mod. Phys., 83(2):349–404, 2011.
* [64] Kristjan Haule, Turan Birol, and Gabriel Kotliar. Covalency in transition-metal oxides within all-electron dynamical mean-field theory. Phys. Rev. B, 90(7):075136, aug 2014.
* [65] Junjiro Kanamori. Electron correlation and ferromagnetism of transition metals. Prog. Theor. Phys., 30(3):275–289, 1963.
* [66] V. I. Anisimov and A. V. Lukoyanov. Investigation of real materials with strong electronic correlations by the LDA+DMFT method. Acta Crystallogr. Sect. C Struct. Chem., 70(2):137–159, feb 2014.
* [67] D. R. Hamann. Optimized norm-conserving Vanderbilt pseudopotentials. Phys. Rev. B, 88(8):085117, 2013.
* [68] Martin Schlipf and François Gygi. Optimization algorithm for the generation of ONCV pseudopotentials. Comput. Phys. Commun., 196:36–44, 2015.
* [69] Peter Scherpelz, Marco Govoni, Ikutaro Hamada, and Giulia Galli. Implementation and Validation of Fully Relativistic GW Calculations: Spin–Orbit Coupling in Molecules, Nanocrystals, and Solids. J. Chem. Theory Comput., 12(8):3523–3544, 2016.
* [70] John P. Perdew, Kieron Burke, and Matthias Ernzerhof. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett., 77(18):3865–3868, 1996.
* [71] The segment implementation [60] of the ct-qmc impurity solver is a part of the PACS@sdf package (Package for Analyzing Correlated Systems with Spatial and Dynamical fluctuations). PACS@sdf aims at providing an integrated framework for the study of strongly correlated models and materials beyond the local approximation of the DMFT [7]. It takes DMFT as the zero-order approximation and systematically provides non-local corrections to it [Li2015].
* [72] L. Huang, Y. L. Wang, Z. Y. Meng, L. Du, P. Werner, and X. Dai. iqist: An open source continuous-time quantum monte carlo impurity solver toolkit. Comput. Phys. Commun., 195:140–160, 2015.
* [73] L. Huang. iqist v0.7: An open source continuous-time quantum monte carlo impurity solver toolkit. Comput. Phys. Commun., 221:423–424, 2017.
* [74] https://github.com/huangli712/iQIST.
* [75] http://hauleweb.rutgers.edu/tutorials/Tutorial0.html.
* [76] I. A. Nekrasov, K. Held, G. Keller, D. E. Kondakov, Th. Pruschke, M. Kollar, O. K. Andersen, V. I. Anisimov, and D. Vollhardt. Momentum-resolved spectral functions of SrVO3 calculated by LDA+DMFT. Phys. Rev. B, 73(15):155112, apr 2006.
* [77] A. Liebsch. Surface versus Bulk Coulomb Correlations in Photoemission Spectra of SrVO3 and CaVO3. Phys. Rev. Lett., 90(9):096401, mar 2003.
* [78] A. Sekiyama, H. Fujiwara, S. Imada, S. Suga, H. Eisaki, S. I. Uchida, K. Takegahara, H. Harima, Y. Saitoh, I. A. Nekrasov, G. Keller, D. E. Kondakov, A. V. Kozhevnikov, Th. Pruschke, K. Held, D. Vollhardt, and V. I. Anisimov. Mutual Experimental and Theoretical Validation of Bulk Photoemission Spectra of Sr1-xCaxVO3. Phys. Rev. Lett., 93(15):156402, oct 2004.
* [79] I. A. Nekrasov, G. Keller, D. E. Kondakov, A. V. Kozhevnikov, Th. Pruschke, K. Held, D. Vollhardt, and V. I. Anisimov. Comparative study of correlation effects in CaVO3 and SrVO3. Phys. Rev. B, 72(15):155106, oct 2005.
* [80] T. Yoshida, K. Tanaka, H. Yagi, A. Ino, H. Eisaki, A. Fujimori, and Z.-X. Shen. Direct Observation of the Mass Renormalization in SrVO3 by Angle Resolved Photoemission Spectroscopy. Phys. Rev. Lett., 95(14):146404, sep 2005.
* [81] R. Eguchi, T. Kiss, S. Tsuda, T. Shimojima, T. Mizokami, T. Yokoya, A. Chainani, S. Shin, I. H. Inoue, T. Togashi, S. Watanabe, C. Q. Zhang, C. T. Chen, M. Arita, K. Shimada, H. Namatame, and M. Taniguchi. Bulk- and Surface-Sensitive High-Resolution Photoemission Study of Two Mott-Hubbard Systems: SrVO3 and CaVO3. Phys. Rev. Lett., 96(7):076402, feb 2006.
* [82] K Maiti, D. D Sarma, M. J Rozenberg, I. H Inoue, H Makino, O Goto, M Pedio, and R Cimino. Electronic structure of Ca1-xSrxVO3 : A tale of two energy scales. Europhys. Lett., 55(2):246–252, jul 2001.
* [83] Hendrik J. Monkhorst and James D. Pack. Special points for Brillouin-zone integrations. Phys. Rev. B, 13(12):5188–5192, jun 1976.
* [84] X. Ren, I. Leonov, G. Keller, M. Kollar, I. Nekrasov, and D. Vollhardt. LDA+DMFT computation of the electronic spectrum of NiO. Phys. Rev. B, 74(19):195114, nov 2006.
* [85] J. Kuneš, V. I. Anisimov, A. V. Lukoyanov, and D. Vollhardt. Local correlations and hole doping in NiO: A dynamical mean-field study. Phys. Rev. B, 75(16):165115, apr 2007.
* [86] J. Zaanen, G. A. Sawatzky, and J. W. Allen. Band gaps and electronic structure of transition-metal compounds. Phys. Rev. Lett., 55(4):418–421, jul 1985.
* [87] Gernot J. Kraberger, Robert Triebl, Manuel Zingl, and Markus Aichhorn. Maximum entropy formalism for the analytic continuation of matrix-valued Green’s functions. Phys. Rev. B, 96(15):155128, oct 2017.
* [88] L. Messick, W. C. Walker, and R. Glosser. Direct and Temperature-Modulated Reflectance Spectra of MnO, CoO, and NiO. Phys. Rev. B, 6(10):3941–3949, 1972.
* [89] G. A. Sawatzky and J. W. Allen. Magnitude and Origin of the Band Gap in NiO. Phys. Rev. Lett., 53(24):2339–2342, 1984.
* [90] S. Hüfner, J. Osterwalder, T. Riesterer, and F. Hulliger. Photoemission and inverse photoemission spectroscopy of NiO. Solid State Commun., 52(9):793–796, 1984.
* [91] M. Grioni, P. Weibel, D. Malterre, Y. Baer, and L. Duò. Resonant inverse photoemission in cerium-based materials. Phys. Rev. B, 55(4):2056–2067, jan 1997.
* [92] E. Weschke, C. Laubschat, T. Simmons, M. Domke, O. Strebel, and G. Kaindl. Surface and bulk electronic structure of Ce metal studied by high-resolution resonant photoemission. Phys. Rev. B, 44(15):8304–8307, oct 1991.
* [93] A. K. McMahan, C. Huscroft, R. T. Scalettar, and E. L. Pollock. Volume-collapse transitions in the rare earth metals. J. Comput. Aided Mater. Des., 5(2):131–162, 1998.
* [94] K. Held, A. K. McMahan, and R. T. Scalettar. Cerium Volume Collapse: Results from the Merger of Dynamical Mean-Field Theory and Local Density Approximation. Phys. Rev. Lett., 87(27):276404, dec 2001.
* [95] A. K. McMahan, K. Held, and R. T. Scalettar. Thermodynamic and spectral properties of compressed Ce calculated using a combined local-density approximation and dynamical mean-field theory. Phys. Rev. B, 67(7):075108, feb 2003.
* [96] B. Amadon, S. Biermann, A. Georges, and F. Aryasetiawan. The $\alpha$-$\gamma$ Transition of Cerium Is Entropy Driven. Phys. Rev. Lett., 96(6):066402, feb 2006.
* [97] M. J. Lipp, A. P. Sorini, J. Bradley, B. Maddox, K. T. Moore, H. Cynn, T. P. Devereaux, Y. Xiao, P. Chow, and W. J. Evans. X-ray Emission Spectroscopy of Cerium Across the $\gamma$-$\alpha$ Volume Collapse Transition. Phys. Rev. Lett., 109(19):195705, nov 2012.
* [98] J. Bieder and B. Amadon. Thermodynamics of the $\alpha$-$\gamma$ transition in cerium from first principles. Phys. Rev. B, 89(19):195132, 2014.
* [99] Marco Casadei, Xinguo Ren, Patrick Rinke, Angel Rubio, and Matthias Scheffler. Density functional theory study of the $\alpha$-$\gamma$ phase transition in cerium: Role of electron correlation and f-orbital localization. Phys. Rev. B, 93(7):075153, feb 2016.
* [100] A.V. Prokofiev, A.I. Shelykh, and B.T. Melekh. Periodicity in the band gap variation of Ln2X3 (X = O, S, Se) in the lanthanide series. J. Alloys Compd., 242(1-2):41–44, sep 1996.
* [101] Hong Jiang, Ricardo I. Gomez-Abal, Patrick Rinke, and Matthias Scheffler. Localized and Itinerant States in Lanthanide Oxides United by GW@LDA+U. Phys. Rev. Lett., 102(12):126403, mar 2009.
* [102] H. Bärnighausen and G. Schiller. The crystal structure of A-Ce2O3. J. Less Common Met., 110(1-2):385–390, aug 1985.
* [103] Juan J Carbajo, Gradyon L Yoder, Sergey G Popov, and Victor K Ivanov. A review of the thermophysical properties of MOX and UO2 fuels. J. Nucl. Mater., 299(3):181–198, dec 2001.
* [104] Laurent Claparede, Mireille Guigue, Gauthier Jouan, Nassima Nadah, Nicolas Dacheux, and Philippe Moisy. Long-term behavior of refractory thorium-plutonium dioxide solid solutions. J. Nucl. Mater., 483:158–166, jan 2017.
* [105] Tatsumi Arima, Sho Yamasaki, Yaohiro Inagaki, and Kazuya Idemitsu. Evaluation of thermal properties of UO2 and PuO2 by equilibrium molecular dynamics simulations from 300 to 2000K. J. Alloys Compd., 400(1-2):43–50, sep 2005.
* [106] L. Petit, A. Svane, Z. Szotek, W. M. Temmerman, and G. M. Stocks. Electronic structure and ionicity of actinide oxides from first principles. Phys. Rev. B, 81(4):045108, jan 2010.
* [107] Emily Moore, Christine Guéneau, and Jean-Paul Crocombette. Oxygen diffusion model of the mixed (U,Pu)O2±x: Assessment and application. J. Nucl. Mater., 485:216–230, mar 2017.
* [108] Ru-song Li, Zheng Xie, Ling-Yun Kong, Su-xia Hou, Ji-jun Luo, and Du-qiang Xin. Intermediate occupation numbers for 5f electrons in a Pu and U mixed oxide. Phys. Chem. Chem. Phys., 23(27):14725–14736, 2021.
* [109] Kevin T. Moore and Gerrit van der Laan. Nature of the 5f states in actinide metals. Rev. Mod. Phys., 81(1):235–298, feb 2009.
* [110] M. Wulff and G. H. Lander. Magnetic structure and Pu ground state in $\beta$‐Pu2O3. J. Chem. Phys., 89(5):3295–3299, sep 1988.
* [111] Renaud C. Belin, Philippe J. Valenza, Muriel A. Reynaud, and Philippe E. Raison. New hermetic sample holder for radioactive materials fitting to Siemens D5000 and Bruker D8 X-ray diffractometers: application to the Rietveld analysis of plutonium dioxide. J. Appl. Crystallogr., 37(6):1034–1037, dec 2004.
* [112] C.E. McNeilly. The electrical properties of plutonium oxides. J. Nucl. Mater., 11(1):53–58, jan 1964.
* [113] Jindřich Kolorenč, Alexander B. Shick, and Alexander I. Lichtenstein. Electronic structure and core-level spectra of light actinide dioxides in the dynamical mean-field theory. Phys. Rev. B, 92(8):085125, aug 2015.
* [114] T. Mark McCleskey, Eve Bauer, Quanxi Jia, Anthony K. Burrell, Brian L. Scott, Steven D. Conradson, Alex Mueller, Lindsay Roy, Xiaodong Wen, Gustavo E. Scuseria, and Richard L. Martin. Optical band gap of NpO2 and PuO2 from optical absorbance of epitaxial films. J. Appl. Phys., 113(1):013515, jan 2013.
* [115] P. Lin, X. Ren, and L. He. Accuracy of localized resolution of the identity in periodic hybrid functional calculations with numerical atomic orbitals. J. Phys. Chem. Lett., 11(8):3082–3088, 2020.
* [116] E. Gull, P. Werner, S. Fuchs, B. Surer, T. Pruschke, and M. Troyer. Continuous-time quantum monte carlo impurity solvers. Comput. Phys. Commun., 182(4):1078–1082, 2011.
* [117] Hartmut Hafermann, Philipp Werner, and Emanuel Gull. Efficient implementation of the continuous-time hybridization expansion quantum impurity solver. Comput. Phys. Commun., 184(4):1280–1286, apr 2013.
* [118] https://github.com/ALPSCore/CT-HYB-SEGMEN.
* [119] H. Shinaoka, E. Gull, and P. Werner. Continuous-time hybridization expansion quantum impurity solver for multi-orbital systems with complex hybridizations. Comput. Phys. Commun., 215:128–136, 2017.
* [120] https://github.com/ALPSCore/CT-HYB.
* [121] https://kaist-elst.github.io/DMFTpack/.
1. CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, People's Republic of China (e-mail: <EMAIL_ADDRESS>)
2. School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
3. National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
4. Astronomical Institute, Tohoku University, 6-3, Aramaki, Aoba-ku, Sendai, Miyagi 980-8578, Japan
5. Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583, Japan
6. Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, 80 Nandan Road, Shanghai 200030, People's Republic of China
# Existence of Tidal Tails for the Globular Cluster NGC 5824
Yong Yang$^{1,2}$, Jing-Kun Zhao$^{1}$, Miho N. Ishigaki$^{3,4,5}$, Masashi Chiba$^{4}$, Cheng-Qun Yang$^{6}$, Xiang-Xiang Xue$^{1}$, Xian-Hao Ye$^{1,2}$, and Gang Zhao$^{1,2}$
###### Abstract
Context. Several dynamically cold streams have been associated with certain
globular clusters (GCs) based on orbital energies and angular momenta. Some of
these streams are surprisingly far apart from their progenitors, and one such
pair is Triangulum and NGC 5824. Triangulum can be considered a piece of
NGC 5824's leading tail, since the cluster's future orbit matches the
stream's track well. The existence of a leading tail for NGC 5824 motivates
the search for its trailing tail.
Aims. Our goal is to confirm the connection between Triangulum and NGC 5824
and seek the trailing tail of the cluster.
Methods. The selection of member stars of Triangulum is made through various
cuts in metallicity, proper motions (PMs), radial velocity and color-magnitude
diagram (CMD). The selected members are compared in phase space to a mock
stream which models the disruption of NGC 5824. We then try to detect the
trailing tail of the cluster based on a modified matched-filter technique.
Stars are assigned weights using their color differences from the cluster's
locus in the CMD. These weights are further scaled based on the stars'
departures from the expected PMs of the model stream.
Results. A total of 26 member stars of Triangulum are obtained, 16 of which
are newly identified. These members are consistent with the mock stream in
phase space, and their metallicity and position on the CMD are in good
agreement with NGC 5824. By applying the matched filter, a tenuous trailing
tail of the cluster is detected, spanning $\sim$ 50$\degr$ on the sky. The
signature matches the mock stream's trajectory well.
Conclusions. Our results support the interpretation that the Triangulum stream
is part of the leading tail of NGC 5824. On the trailing side, we detect a
50$\degr$ tail extending from the cluster. The existence of both leading and
trailing tails for the GC NGC 5824 is thus verified.
###### Key Words.:
globular clusters: individual: NGC 5824 – Galaxy: structure – Galaxy:
kinematics and dynamics – Galaxy: halo
## 1 Introduction
Increasing amounts of data from various revolutionary surveys are revealing
the mysteries of stellar streams in the Milky Way and providing unprecedented
details of the Galactic halo (e.g., Bell et al. 2008; Zhao et al. 2009; Law &
Majewski 2010; Bowden et al. 2015; Bernard et al. 2016; Liang et al. 2017;
Zhao et al. 2018; Malhan et al. 2018; Yang et al. 2019a, b; Zhao et al. 2020;
Yang et al. 2021; Ye et al. 2021; Zhao & Chen 2021). Tidal streams extending
from extant globular clusters (GCs) are usually thin and dynamically cold
(e.g., Odenkirchen et al. 2003; Grillmair & Johnson 2006; Palau & Miralda-
Escudé 2019; Grillmair 2019). Some narrow streams without explicit cores are
generally also attributed to GC origins (e.g., Grillmair 2009; Koposov et al.
2010; Bonaca et al. 2012; Koposov et al. 2014; Shipp et al. 2018; Malhan et
al. 2018). The progenitors of most of those streams are still unknown but
several streams have been recently associated with extant GCs (Ibata et al.
2021).
The connections between $\omega$ Centauri and Fimbulthul (Ibata et al. 2019),
NGC 3201 and Gjöll (Palau & Miralda-Escudé 2021), and NGC 4590 and Fjörm
(Palau & Miralda-Escudé 2019) have been reported, which suggests that
associations between a stream and a GC that is not directly connected to it
are present in the Milky Way. By exploring the orbits, Bonaca
et al. (2021) further attributed 5 more streams to extant GCs (Table 1
therein), and one pair is Triangulum and NGC 5824. Triangulum stream was first
detected by Bonaca et al. (2012) with a matched-filter technique (Rockosi et
al. 2002). Thereafter, Martin et al. (2013) kinematically discovered a part of
the stream and provided 11 possible member stars. The stream lies in the
direction of the M31 and M33 galaxies, far from NGC 5824. However, the
cluster’s future orbit passes through the stream well, implying a connection
between them (Fig. 4 in Bonaca et al. 2021). Li et al. (2022) further
confirmed this connection by comparing a model stream of NGC 5824 in phase
space to the Triangulum member stars of Martin et al. (2013). Therefore, the
Triangulum stream can be treated as a piece of NGC 5824's leading tail.
Based on the picture that tidal tails develop symmetrically around GCs
(Küpper et al. 2010), the existence of a leading tail for NGC 5824 motivates us
to search for its trailing tail. In this work, we provide a confirmation of
the connection between Triangulum and NGC 5824, which is similar to that of Li
et al. (2022) but with member stars that span a wider sky extent ($\sim$
16$\degr$). We further apply a modified matched-filter method (Grillmair 2019)
to look for the trailing tail of NGC 5824. The paper is organized as follows.
In Sect. 2, we introduce the data. In Sect. 3, we show the selection of
Triangulum member stars and compare them to a model stream of NGC 5824. The
detection of the cluster’s trailing tail is given in Sect. 4. We present a
discussion in Sect. 5 and draw our conclusion in Sect. 6.
## 2 Data
We base our search on high-quality astrometric and photometric data provided
by the $Gaia$ EDR3 (Gaia Collaboration et al. 2021; Lindegren et al. 2021;
Riello et al. 2021), along with the spectroscopic data from the Sloan
Extension for Galactic Understanding and Exploration (SEGUE; Yanny et al.
2009) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope
(LAMOST; Cui et al. 2012; Zhao et al. 2006, 2012; Liu et al. 2015) surveys.
To obtain the individual members of Triangulum, we retrieve stars from the
$Gaia$ EDR3 gaia_source catalog overlapping with the stream region on the
celestial sphere. The stream region is determined by limiting 22$\degr$ $<$
$\delta$ $<$ 41$\degr$ and shifting $\delta=-4.4\alpha+128.5$ by $\pm 1\degr$
along the $\alpha$ direction (green area in Fig. 1), where the equation was
defined in Bonaca et al. (2012) to describe the stream coordinates. Note that
Bonaca et al. (2012) traced Triangulum to $\delta\simeq 23\degr-35\degr$, and
Martin et al. (2014) extended the stream further north to $\delta\simeq
40\degr$. Our choice of the $\delta$ extent is based on both studies. The zero-
point correction in the parallax is implemented using the code provided by
Lindegren et al. (2021), which requires astrometric_params_solved $>$ 3. The
corrections of G-band magnitude and BP/RP excess factor are applied as
instructed in Riello et al. (2021). In order to ensure good astrometric and
photometric solutions, only stars with ruwe $<$ 1.4 and absolute corrected
BP/RP excess factor smaller than 3 times the associated uncertainty (see Sect.
9.4 in Riello et al. 2021) are retained. Given that both estimated
distances of the stream, in Bonaca et al. (2012) and Martin et al. (2013), are
larger than 20 kpc, we remove foreground stars that satisfy the criterion
$\varpi-3\sigma_{\varpi}>0.05$ mas. The remaining stars are cross-matched with
SDSS/SEGUE DR16 (Ahn et al. 2012) and LAMOST DR8, from which the metallicity
and heliocentric radial velocity are obtained. For stars common to both
datasets, we adopt the SEGUE measurements because the signal-to-noise ratios
of SEGUE spectra are mostly higher than those of LAMOST.
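A schematic version of these selection cuts is sketched below. The column names follow the $Gaia$ EDR3 gaia_source schema where possible; corrected_excess and excess_sigma are placeholder names for the corrected BP/RP excess factor and its expected scatter, and the parallaxes are assumed to have already been zero-point corrected with the code of Lindegren et al. (2021):

```python
import numpy as np
import pandas as pd

def select_stream_field(stars: pd.DataFrame) -> pd.DataFrame:
    """Sky-region, astrometric-quality and parallax cuts of Sect. 2 (sketch)."""
    ra, dec = stars['ra'], stars['dec']

    # Stream region: 22 deg < dec < 41 deg, within +/- 1 deg of the track
    # dec = -4.4 * ra + 128.5 defined by Bonaca et al. (2012).
    on_track = np.abs(dec - (-4.4 * ra + 128.5)) < 1.0
    in_band = (dec > 22.0) & (dec < 41.0)

    # Quality cuts: good astrometric solution (ruwe < 1.4) and a corrected
    # BP/RP excess factor within 3 sigma of zero (Riello et al. 2021).
    good_astrom = stars['ruwe'] < 1.4
    good_phot = np.abs(stars['corrected_excess']) < 3.0 * stars['excess_sigma']

    # Foreground removal: drop stars confidently closer than ~20 kpc,
    # i.e. those with parallax - 3*sigma_parallax > 0.05 mas.
    background = ~(stars['parallax'] - 3.0 * stars['parallax_error'] > 0.05)

    return stars[on_track & in_band & good_astrom & good_phot & background]

demo = pd.DataFrame({
    'ra': [24.0, 100.0], 'dec': [23.0, 25.0], 'ruwe': [1.0, 1.0],
    'corrected_excess': [0.0, 0.0], 'excess_sigma': [0.01, 0.01],
    'parallax': [0.02, 0.02], 'parallax_error': [0.01, 0.01],
})
print(select_stream_field(demo))  # keeps only the star on the stream track
```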
The data for detecting the trailing tail of NGC 5824 are also obtained from
$Gaia$ EDR3. Stars within the sky box of 210$\degr$ $<$ $\alpha$ $<$ 250$\degr$
and -40$\degr$ $<$ $\delta$ $<$ 30$\degr$ are retrieved (orange area in Fig. 1)
and reduced with the same procedures as above, including the foreground star
removal (removing foreground stars within 20 kpc will not affect the results
since, if the cluster's trailing tail exists, it would be farther than 30 kpc
from the Sun; see Fig. 6). Since the spectroscopic surveys are unavailable
in this sky region, only $Gaia$ data are used.
In Fig. 1, we show projections of the data (green and orange areas), along
with a mock stream (red dots) which will be described in Sect. 3.2. The black
line represents the Galactic plane, and the blue (inverted) triangle
represents the direction of the Galactic (anti-)center. It should be noted that
the NGC 5824 field is designed precisely on the basis of the mock stream.
Figure 1: Sky projections of the data (green and orange areas) and a mock
stream (red dots). The black line represents the Galactic plane, and the blue
(inverted) triangle represents the direction of the Galactic (anti-)center. The
black circle denotes the GC NGC 5824.
## 3 Connection between Triangulum and NGC 5824
### 3.1 Triangulum Member Stars
Cross-matching between $Gaia$ sources and spectroscopic data yields 1,968
stars. Bonaca et al. (2012) estimated Triangulum's [Fe/H] to be
$\sim$ -1.0 dex, while Martin et al. (2013) argued for a lower metallicity of
$\simeq$ -2.2 dex for the stream. In order to obtain as many member stars as
possible, we adopt [Fe/H] $<$ -1.0 dex as the selection criterion and are left
with 451 candidates. After this cut, an overdensity can be seen clearly in
proper motion (PM) space (top panel of Fig. 2, only the local region around
the overdensity is shown). We overplot the member candidates provided by
Martin et al. (2013) (cross-matched with $Gaia$ EDR3) and verify that this
overdensity corresponds exactly to the Triangulum stream. To pick out stream
stars, we define a dispersion ellipse whose center and semi-axes are
determined based on the known candidates from Martin et al. (2013). The center
(1.014, 0.012) mas yr-1 is the mean PM of the members in $\alpha$ and
$\delta$, and the semi-axes (0.777, 1.116) mas yr-1 are three times the PM
dispersions in the respective directions. The 47 stars enclosed within the
ellipse are selected.
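In code, this dispersion-ellipse cut amounts to a normalized quadratic form (a sketch using the numbers quoted above):

```python
import numpy as np

# Center: mean PM of the Martin et al. (2013) members; semi-axes: three times
# the PM dispersions in the respective directions (values from Sect. 3.1).
PM_CENTER = np.array([1.014, 0.012])   # (mu_alpha*, mu_delta) in mas/yr
SEMI_AXES = np.array([0.777, 1.116])   # in mas/yr

def in_dispersion_ellipse(pmra, pmdec):
    """True for stars inside the PM dispersion ellipse (sketch)."""
    d = (np.column_stack([pmra, pmdec]) - PM_CENTER) / SEMI_AXES
    return (d ** 2).sum(axis=1) <= 1.0

print(in_dispersion_ellipse(np.array([1.0, 3.0]), np.array([0.0, 0.0])))
# -> [ True False ]: only the first toy star lies inside the ellipse.
```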
These stars are then plotted in the $\delta$–$V_{r}$ plane (middle panel of
Fig. 2), where a dominant monotonic sequence is clearly discernible. Generally,
the radial velocities of a halo stream are expected to change monotonically
along the coordinates as long as no orbital turning point (such as the
apogalacticon) is contained, as seen for Pal 5 (Ishigaki et al. 2016), GD-1
(Bovy et al. 2016), NGC 5466 (Yang et al. 2022), and the Hríd and Gjöll
streams (Ibata et al. 2021). Hence we consider that this dominant sequence
corresponds to the Triangulum stream. We fit a straight line to the sequence,
with weights determined by the uncertainties of $V_{r}$. The relation can be
described by the equation $V_{r}=-4.6\delta+86.5$. The 31 stars with $V_{r}$
consistent with the fit within the 3$\sigma$ range are retained.
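A sketch of this weighted fit and $3\sigma$ member retention follows (synthetic inputs; the actual fit uses the 47 ellipse-selected stars):

```python
import numpy as np

def fit_vr_sequence(dec, vr, vr_err, n_sigma=3.0):
    """Weighted straight-line fit V_r(delta) with n-sigma retention (sketch)."""
    # Inverse-uncertainty weights for the least-squares fit, as in Sect. 3.1.
    slope, intercept = np.polyfit(dec, vr, deg=1, w=1.0 / vr_err)
    resid = vr - (slope * dec + intercept)
    keep = np.abs(resid) < n_sigma * resid.std()
    return slope, intercept, keep

# Synthetic demonstration around the relation found in the paper.
rng = np.random.default_rng(1)
dec = rng.uniform(22.0, 41.0, 40)
vr = -4.6 * dec + 86.5 + rng.normal(0.0, 5.0, 40)
slope, intercept, keep = fit_vr_sequence(dec, vr, np.full(40, 5.0))
print(f"V_r = {slope:.1f} * delta + {intercept:.1f}; kept {keep.sum()} stars")
```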
Finally, we reject 4 more outliers on the basis of the color-magnitude diagram
(CMD), and the remaining 27 member stars follow a typical GC isochrone (bottom
panel of Fig. 2). All sources here have been extinction-corrected using the
Schlegel et al. (1998) maps as re-calibrated by Schlafly & Finkbeiner (2011)
with $R_{V}$ = 3.1, assuming $A_{G}/A_{V}=0.83627$, $A_{BP}/A_{V}=1.08337$,
and $A_{RP}/A_{V}=0.63439$ (these extinction ratios are listed on the Padova
model site http://stev.oapd.inaf.it/cgi-bin/cmd). The detailed information of
the 27 member stars is summarized in Table 1.
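A minimal sketch of the correction, assuming the dustmaps package; the 0.86 recalibration factor of Schlafly & Finkbeiner (2011) is stated here as an assumption about the adopted convention.

```python
# A minimal sketch of the extinction correction with the ratios quoted above.
# The 0.86 recalibration of the SFD reddening follows Schlafly & Finkbeiner
# (2011) and is an assumption about the convention used in the text.
from astropy.coordinates import SkyCoord
from dustmaps.sfd import SFDQuery  # requires dustmaps.sfd.fetch() once
import astropy.units as u

RATIOS = {"G": 0.83627, "BP": 1.08337, "RP": 0.63439}  # A_X / A_V
R_V = 3.1

def corrected_mag(mag, band, ra_deg, dec_deg):
    coord = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
    ebv = 0.86 * SFDQuery()(coord)        # re-calibrated SFD reddening
    return mag - RATIOS[band] * R_V * ebv
```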
Table 1: Triangulum stream member stars.
No. | $\alpha_{J2000}$ | $\delta_{J2000}$ | $\mu_{\alpha}^{*}$ | $\sigma_{\mu_{\alpha}^{*}}$ | $\mu_{\delta}$ | $\sigma_{\mu_{\delta}}$ | $V_{r}$ | $\sigma_{V_{r}}$ | [Fe/H] | $\sigma_{\rm[Fe/H]}$ | $G$ | $G_{bp}$ | $G_{rp}$ | Survey
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
| (°) | (°) | (mas yr-1) | (mas yr-1) | (mas yr-1) | (mas yr-1) | (km s-1) | (km s-1) | (dex) | (dex) | (mag) | (mag) | (mag) |
1⋆ | 23.8285 | 22.8031 | 1.0098 | 0.0986 | -0.1037 | 0.0784 | -16.84 | 3.11 | -1.913 | 0.068 | 16.759 | 17.222 | 16.106 | SEGUE
2${}^{\star}\times$ | 24.1829 | 22.9364 | 1.0127 | 0.1831 | -0.9748 | 0.1339 | 2.78 | 7.84 | -2.545 | 0.048 | 17.617 | 18.049 | 16.997 | SEGUE
3⋆ | 24.2157 | 22.9598 | 0.8790 | 0.3456 | 0.2562 | 0.2288 | -17.60 | 8.73 | -1.867 | 0.104 | 18.530 | 19.005 | 17.998 | SEGUE
4⋆ (BHB) | 23.2433 | 23.1934 | 0.6858 | 0.1844 | -0.2179 | 0.1546 | -32.59 | 6.76 | -2.121 | 0.078 | 17.967 | 18.031 | 17.815 | SEGUE
5⋆ | 24.1515 | 23.3639 | 1.2849 | 0.1128 | 0.2147 | 0.0938 | -23.76 | 4.55 | -2.348 | 0.114 | 17.335 | 17.803 | 16.718 | SEGUE
6⋆ | 23.9200 | 23.3903 | 0.5511 | 0.2549 | 0.4586 | 0.1641 | -26.84 | 7.61 | -2.128 | 0.114 | 18.251 | 18.672 | 17.667 | SEGUE
7⋆ | 23.8055 | 23.4783 | 1.1128 | 0.2314 | 0.0493 | 0.1889 | -16.15 | 8.19 | -2.371 | 0.056 | 18.457 | 18.891 | 17.927 | SEGUE
8 (BHB) | 24.5968 | 23.6575 | 1.1237 | 0.1755 | 0.1393 | 0.1465 | -28.03 | 4.77 | -2.278 | 0.152 | 17.834 | 17.903 | 17.692 | SEGUE
9⋆ | 23.7817 | 23.8800 | 1.0403 | 0.2248 | 0.2268 | 0.1843 | -36.43 | 5.51 | -2.383 | 0.095 | 18.256 | 18.669 | 17.716 | SEGUE
10⋆ | 23.9193 | 24.1185 | 0.8721 | 0.2242 | -0.1813 | 0.1614 | -31.52 | 5.94 | -1.953 | 0.128 | 18.220 | 18.705 | 17.689 | SEGUE
11 | 24.0593 | 24.1634 | 1.0918 | 0.0460 | 0.1548 | 0.0351 | -27.93 | 8.18 | -2.249 | 0.061 | 15.279 | 15.832 | 14.573 | LAMOST
12⋆ | 23.4145 | 24.3451 | 1.5329 | 0.3380 | 0.0838 | 0.1762 | -22.60 | 9.99 | -2.072 | 0.072 | 18.648 | 18.981 | 18.075 | SEGUE
13⋆ | 23.5792 | 24.3911 | 1.1712 | 0.2725 | 0.3187 | 0.1921 | -13.69 | 9.70 | -2.653 | 0.147 | 18.648 | 19.030 | 18.073 | SEGUE
14 | 23.5445 | 24.5692 | 1.0955 | 0.0584 | 0.1400 | 0.0363 | -32.73 | 9.18 | -1.962 | 0.135 | 16.025 | 16.537 | 15.347 | LAMOST
15 | 23.8850 | 24.7038 | 0.9478 | 0.1592 | 0.0680 | 0.1383 | -24.41 | 6.23 | -1.830 | 0.140 | 17.839 | 18.239 | 17.246 | SEGUE
16 | 22.3385 | 28.4684 | 0.9975 | 0.0801 | 0.3481 | 0.0569 | -41.53 | 12.97 | -2.311 | 0.111 | 16.581 | 17.045 | 15.961 | LAMOST
17 | 22.4622 | 30.0863 | 0.9254 | 0.1081 | 0.0873 | 0.0825 | -34.82 | 9.93 | -1.716 | 0.272 | 17.156 | 17.596 | 16.563 | LAMOST
18 | 22.3181 | 31.2075 | 0.9126 | 0.0968 | 0.0614 | 0.0716 | -57.01 | 16.48 | -2.143 | 0.241 | 17.152 | 17.616 | 16.521 | LAMOST
19 | 21.4457 | 34.2315 | 0.8154 | 0.0552 | 0.2357 | 0.0419 | -68.02 | 13.15 | -2.439 | 0.161 | 15.964 | 16.453 | 15.313 | LAMOST
20 | 20.8085 | 34.9178 | 0.7522 | 0.0638 | 0.4043 | 0.0443 | -75.26 | 12.41 | -2.056 | 0.136 | 16.213 | 16.680 | 15.569 | LAMOST
21 | 21.2461 | 35.1423 | 0.9246 | 0.1032 | 0.4356 | 0.0884 | -74.81 | 12.45 | -2.322 | 0.204 | 17.493 | 17.885 | 16.919 | LAMOST
22 | 21.3470 | 35.2527 | 0.8679 | 0.0563 | 0.2653 | 0.0435 | -74.42 | 11.61 | -2.141 | 0.211 | 16.273 | 16.759 | 15.613 | LAMOST
23 | 20.6703 | 37.3033 | 0.5967 | 0.2600 | 0.3732 | 0.2044 | -76.24 | 12.45 | -2.145 | 0.038 | 19.007 | 19.514 | 18.485 | SEGUE
24 (BHB) | 20.4665 | 37.9600 | 0.7622 | 0.1674 | 0.3160 | 0.1455 | -90.22 | 5.21 | -1.740 | 0.055 | 18.101 | 18.157 | 18.010 | SEGUE
25 | 20.6305 | 38.4133 | 0.9838 | 0.1966 | 0.2130 | 0.1741 | -87.51 | 7.20 | -2.042 | 0.089 | 18.524 | 18.966 | 17.931 | SEGUE
26 (BHB) | 20.7368 | 38.4931 | 1.0037 | 0.1538 | 0.2964 | 0.1236 | -105.28 | 7.22 | -1.378 | 0.040 | 17.908 | 18.044 | 17.637 | SEGUE
27 | 20.1517 | 38.6458 | 0.8931 | 0.1369 | 0.2047 | 0.0976 | -101.34 | 5.60 | -2.036 | 0.041 | 17.676 | 18.130 | 17.124 | SEGUE
Notes. Identified member stars of the Triangulum stream, sorted by $\delta$.
Common stars with Martin et al. (2013) are marked with “$\star$”. An outlier
identified in the $\mu_{\delta}$ panel of Fig. 3 is further labeled with
“$\times$”. Cols. 12-14 are $Gaia$ magnitudes which have been
extinction-corrected (see text). The last column indicates which survey the
radial velocity and metallicity come from.
Figure 2: The selections of Triangulum member stars. The gray dots represent
rejected stars and the red ones represent the stars selected in each step.
The member candidates identified by Martin et al. (2013) are marked as green
points. The top panel shows the local region of the overdensity in PM space,
where the ellipse is defined to select member candidates in this step. The
middle panel shows stars in the $\delta$ - $V_{r}$ plane, where the error
bars represent three times the uncertainties of $V_{r}$ and the red line is a
linear fit to the stream sequence. The bottom panel shows those candidates in
the CMD.
### 3.2 NGC 5824 Model Stream
Li et al. (2022) have modeled the disruption of NGC 5824 in a static Milky Way
potential plus a moving Large Magellanic Cloud (LMC). As the authors pointed,
the model stream matched with observations of Triangulum well. Motivated by
this, we also generate our own mock stream to make a similar comparison
between the model and data, using the identified member stars above which span
a wider sky extent. The model body is nearly identical to that of Li et al.
(2022), but specific configurations are different, such as the Milky Way
potential, the adopted mass and radius of LMC, the velocity dispersion and
integration time (see details below).
We use the Python package GALA (Price-Whelan 2017), which is designed for
performing common tasks needed in Galactic Dynamics, to model the disruption
of NGC 5824. The procedure closely follows that of Yang et al. (2022) as
applied to NGC 5466. The adopted Milky Way potential consists of a Plummer
bulge (Plummer 1911), $\Phi_{\rm bulge}$, two Miyamoto-Nagai disks (Miyamoto &
Nagai 1975), $\Phi_{\rm thin}$ and $\Phi_{\rm thick}$, and a spherical NFW
halo (Navarro et al. 1996), $\Phi_{\rm halo}$:
$\Phi_{\rm bulge}(r)=\frac{-GM_{\rm bulge}}{\sqrt{r^{2}+b_{\rm bulge}^{2}}}$ (1)
$\Phi_{\rm thin/thick}(R,z)=\frac{-GM_{\rm thin/thick}}{\sqrt{R^{2}+\left(a_{\rm thin/thick}+\sqrt{z^{2}+b_{\rm thin/thick}^{2}}\right)^{2}}}$ (2)
$\Phi_{\rm halo}(r)=-\frac{4\pi G\rho_{s}r_{s}^{3}}{r}\ln\left(1+\frac{r}{r_{s}}\right)$ (3)
where $r$ is the Galactocentric radius, $R$ is the cylindrical radius and $z$
is the vertical height. For the bulge and disks, we adopt the parameters from
Pouliasis et al. (2017, Model I). The virial mass $M_{\rm virial}$ and
concentration $c$ used to initialize the NFW halo are from McMillan (2017).
Those chosen parameters are summarized in Table 2.
Table 2: Adopted parameters for the Galactic potential.
Parameter | Value
---|---
$M_{\rm bulge}$ | $1.0672\times 10^{10}M_{\sun}$
$b_{\rm bulge}$ | 0.3 kpc
$M_{\rm thin}$ | $3.944\times 10^{10}M_{\sun}$
$a_{\rm thin}$ | 5.3 kpc
$b_{\rm thin}$ | 0.25 kpc
$M_{\rm thick}$ | $3.944\times 10^{10}M_{\sun}$
$a_{\rm thick}$ | 2.6 kpc
$b_{\rm thick}$ | 0.8 kpc
$M_{\rm virial}$ | $1.37\times 10^{12}M_{\sun}$
$c$ | 15.4
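For illustration, this potential can be assembled in GALA as sketched below with the Table 2 parameters; the NFW scale mass and radius are hypothetical placeholders standing in for values derived from ($M_{\rm virial}$, $c$), which depend on the adopted virial convention.

```python
# A minimal sketch of the Galactic potential of Eqs. (1)-(3) in GALA.
# The NFW scale mass and radius below are hypothetical placeholders for
# values derived from (M_virial, c) under the adopted virial convention.
import astropy.units as u
import gala.potential as gp
from gala.units import galactic

pot = gp.CCompositePotential()
pot["bulge"] = gp.PlummerPotential(m=1.0672e10 * u.Msun, b=0.3 * u.kpc,
                                   units=galactic)
pot["thin"] = gp.MiyamotoNagaiPotential(m=3.944e10 * u.Msun, a=5.3 * u.kpc,
                                        b=0.25 * u.kpc, units=galactic)
pot["thick"] = gp.MiyamotoNagaiPotential(m=3.944e10 * u.Msun, a=2.6 * u.kpc,
                                         b=0.8 * u.kpc, units=galactic)
pot["halo"] = gp.NFWPotential(m=5.4e11 * u.Msun,  # placeholder scale mass
                              r_s=16.0 * u.kpc,   # placeholder r_s ~ r_vir/c
                              units=galactic)
```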
Following El-Falou & Webb (2022), we take a Hernquist potential (Hernquist
1990) as the internal potential of the LMC:
$\Phi_{\rm LMC}(r^{\prime})=\frac{-GM_{\rm LMC}}{r^{\prime}+a_{\rm LMC}}$ (4)
where $r^{\prime}$ is the distance to the LMC center, and $M_{\rm LMC}$ and
$a_{\rm LMC}$ are set to $10^{11}M_{\sun}$ and 10.2 kpc, respectively,
following the same work. The position and velocity of the LMC are taken from
Gaia Collaboration et al. (2018).
As for the internal gravity of the GC NGC 5824, we choose a Plummer potential:
$\Phi_{\rm GC}(r^{\prime\prime})=\frac{-GM_{\rm GC}}{\sqrt{r^{\prime\prime 2}+b_{\rm GC}^{2}}}$ (5)
with a $M_{\rm GC}$ of $7.6\times 10^{5}M_{\sun}$ and a $b_{\rm GC}$ of 6.51
pc (half-mass radius) (Baumgardt & Hilker 2018). Here $r^{\prime\prime}$
denotes the distance to the cluster’s center. The position and velocity of NGC
5824 come from Vasiliev & Baumgardt (2021) and Harris (1996, 2010 edition).
The solar distance to the Galactic center, the circular velocity at the Sun,
and the solar velocities relative to the Local Standard of Rest are set to
8 kpc, 220 km s-1 (Bovy et al. 2012) and (11.1, 12.24, 7.25) km s-1
(Schönrich et al. 2010), respectively. In the static Milky Way potential
accompanied by a moving LMC, the cluster is initialized 2 Gyr ago (this
integration time is chosen such that the generated mock tidal tail is long
enough to completely cover the data) and integrated forward from then on,
releasing two particles (in the leading and trailing directions, respectively)
at the Lagrange points (Gibbons et al. 2014) every 0.05 Myr, for a total of
40,000 steps. The velocity dispersion is set to 11.9 km s-1 (Baumgardt &
Hilker 2018) and the cluster mass is held fixed during this process. By doing
so, a mock stream for NGC 5824 is obtained, as illustrated by the red dots in
Fig. 1. We note that the observed Triangulum (green area) deviates a little
from the locus of the mock stream, as also seen in Bonaca et al. (2021, Fig. 4
therein) and Li et al. (2022, Fig. 8 therein). We consider that this deviation
between observation and simulation might be common.
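A minimal sketch of the stream generation with GALA's mock-stream machinery, reusing `pot`, `gp` and `galactic` from the previous snippet. `FardalStreamDF` is used here as a stand-in for the Lagrange-point release of Gibbons et al. (2014), and the progenitor phase-space coordinates are illustrative placeholders rather than the adopted Vasiliev & Baumgardt (2021) values.

```python
# A minimal sketch of mock-stream generation with GALA; exact APIs vary a
# little between gala versions, so treat this as illustrative only.
import astropy.units as u
import gala.dynamics as gd
from gala.dynamics import mockstream as ms

H = gp.Hamiltonian(pot)
prog_w0 = gd.PhaseSpacePosition(
    pos=[-10., 15., 20.] * u.kpc,          # placeholder position
    vel=[100., -50., 150.] * u.km / u.s)   # placeholder velocity
prog_pot = gp.PlummerPotential(m=7.6e5 * u.Msun, b=6.51 * u.pc,
                               units=galactic)

df = ms.FardalStreamDF(gala_modified=True)  # stand-in for Lagrange release
gen = ms.MockStreamGenerator(df, H, progenitor_potential=prog_pot)
# Negative dt integrates the progenitor back 2 Gyr and generates the stream:
stream, prog = gen.run(prog_w0, prog_mass=7.6e5 * u.Msun,
                       dt=-0.05 * u.Myr, n_steps=40000)
```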
### 3.3 Phase Space
We compare the Triangulum member stars to the model stream of NGC 5824 in
phase space. In Fig. 3, right ascension $\alpha$, PMs $\mu^{*}_{\alpha}$ and
$\mu_{\delta}$, and radial velocity $V_{r}$ as a function of declination
$\delta$ are presented from top to bottom. The gray dots represent the stream
particles within the same sky area as Triangulum. The member stars are shown
as red and green points.
It can be seen that, even though the selection process of member stars in
Sect. 3.1 is completely independent of the model, the stream particles show
good consistency with the observations in phase space. We note an outlier that
falls too far from the others in the $\mu_{\delta}$ panel. This star was
selected by Martin et al. (2013) based on sky position, radial velocity,
metallicity and CMD, when PM measurements were unavailable. We mark it with
“$\times$” in Table 1 and remove it from the subsequent analysis. Furthermore,
we do not show the distance panel here because of some tension between the
data and the model, which we discuss in Sect. 5.
Figure 3: Right ascension $\alpha$, PMs $\mu^{*}_{\alpha}$ and $\mu_{\delta}$,
and radial velocity $V_{r}$ as a function of declination $\delta$ are
presented from top to bottom. The gray dots represent the stream particles
within the same sky area as Triangulum. The green and red points represent the
stream member stars.
### 3.4 Metallicity and CMD
To further examine whether Triangulum is stripped from GC NGC 5824, we compare
them on the basis of metallicity and CMD.
The metallicity distribution of Triangulum members is presented in Fig. 4.
There are 4 blue horizontal branch (BHB) stars and 22 red giant branch (RGB)
stars. For the whole sample, the mean value $\langle\rm[Fe/H]\rangle$ = -2.10
and standard deviation $\sigma_{\rm[Fe/H]}$ = 0.26 dex are consistent with
those of Martin et al. (2013) ($\langle\rm[Fe/H]\rangle$ = -2.2,
$\sigma_{\rm[Fe/H]}$ = 0.3 dex). We consider the RGB stars separately to
enable a comparison with chemical studies of NGC 5824. Mucciarelli et al.
(2018) analyzed 87 RGB stars of the cluster and obtained a metallicity
distribution peaked at [Fe/H] = -2.11 dex, very similar to the
$\langle\rm[Fe/H]\rangle$ = -2.14 dex found here. The observed scatter
$\sigma_{\rm[Fe/H]}$ = 0.22 dex is probably caused by observational
uncertainties in the low-resolution spectra ($R\sim$1800).
Figure 4: The metallicity distribution of Triangulum member stars. The red
bars represent the whole sample and the green bars correspond to only RGB
stars.
To compare Triangulum with GC NGC 5824 in the CMD, we need to know the
stream's distance. Xue et al. (2011) estimated distances of $\sim$ 5000 BHB
stars by matching them in ($u-g$, $g-r$) space to theoretical colors for BHB
stars with a series of absolute magnitudes. The individual distances of the 4
BHB stars in our sample can be obtained from this catalog: 28.8, 26.9, 30.6
and 26.0 kpc for stars No. 4, 8, 24 and 26 in Table 1, respectively. This
yields a median distance of 27.85 kpc, close to the 26 kpc proposed by Bonaca
et al. (2012). In addition, we also estimate distances to all 26 stars (see
Fig. 10) using the method of Carlin et al. (2015), a Bayesian approach whose
likelihood is estimated by comparing spectroscopically derived atmospheric
parameters to a grid of stellar isochrones, and which returns a posterior
probability density function for each star's absolute magnitude. This yields a
median value of 33 kpc, similar to the 35 kpc estimated by Martin et al.
(2013). We adopt a distance to the Triangulum stream of $\sim$ 30 kpc, roughly
midway between the BHB distance and our estimate.
In the CMD, we move the member stars from 30 to 32.1 kpc, where GC NGC 5824 is
located (Harris 1996, 2010 edition), and find that they match well, as shown
in Fig. 5. The cluster stars, marked as orange dots, are obtained through sky
and PM selections as instructed by Kundu et al. (2021). Specifically, we
retrieve stars within the tidal radius $r_{t}$ = 5.73′ of NGC 5824 (Harris
1996, 2010 edition) and clean the data following the procedures described in
Sect. 2. A 2D Gaussian mixture model consisting of two Gaussians is then
fitted in PM space to separate the cluster and field stars. For the cluster
component, we get the center ($\mu^{*}_{\alpha}$, $\mu_{\delta}$) =
(-1.193, -2.235) with the intrinsic dispersion ($\sigma^{\rm
in}_{\mu^{*}_{\alpha}}$, $\sigma^{\rm in}_{\mu_{\delta}}$) = (0.424, 0.360)
mas yr-1, where the center is very close to the (-1.189, -2.234) mas yr-1
measured by Vasiliev & Baumgardt (2021). The cluster stars are selected as
those whose PMs, within uncertainties, match the PMs and dispersion of NGC
5824: $\{\mu^{*}_{\alpha}\pm\sigma_{\mu^{*}_{\alpha}},\mu_{\delta}\pm\sigma_{\mu_{\delta}}\}^{\rm star}$
$\leq$ $\{\mu^{*}_{\alpha}\pm\sigma^{\rm in}_{\mu^{*}_{\alpha}},\mu_{\delta}\pm\sigma^{\rm in}_{\mu_{\delta}}\}^{\rm cluster}$.
The black line denotes the RGB locus obtained by fitting the RGB stars
directly with a third-order polynomial, which is used in Sect. 4.1 to assign
weights in the CMD.
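A minimal sketch of the two-component decomposition with scikit-learn, assuming `pmra` and `pmdec` arrays for the stars within the tidal radius.

```python
# A minimal sketch of the 2D Gaussian mixture decomposition in PM space.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.column_stack([pmra, pmdec])
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(X)

# Identify the cluster component as the one closest to the known PM of
# NGC 5824 (Vasiliev & Baumgardt 2021).
ngc5824_pm = np.array([-1.189, -2.234])
k = np.argmin(np.linalg.norm(gmm.means_ - ngc5824_pm, axis=1))
center = gmm.means_[k]                     # fitted cluster PM center
dispersion = np.sqrt(gmm.covariances_[k])  # intrinsic PM dispersion (diag)
```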
Figure 5: The orange dots represent GC NGC 5824 stars. The red and green dots
represent Triangulum members. The black line denotes the RGB locus obtained by
directly fitting the RGB stars with a third-order polynomial.
The connection between the stream and the cluster, based on the three aspects
above, confirms that Triangulum was disrupted from GC NGC 5824. In other
words, the stream can be treated as a part of the cluster’s leading tail.
## 4 Detecting the Trailing Tail
Motivated by the existence of the leading tail of NGC 5824, in this section we
aim to search for its trailing tail.
### 4.1 A Modified Matched Filter Method
Combining PMs and CMD together to search for extra-tidal structures of GCs has
proved to be an effective way (e.g., Kundu et al. 2019a, b, 2021). Here we
adopt the method from Grillmair (2019) who applied a modified matched filter
technique and successfully detected a 50$\degr$ long tidal tail for GC M5.
Stars fetched in Sect. 2 are assigned weights based on their locations in CMD
and PM space. In CMD, individual stars in the NGC 5824 field are assigned
weights according to their color differences from the cluster locus, assuming
a Gaussian error distribution:
$w_{\rm CMD}=\frac{1}{\sqrt{2\pi}\sigma_{color}}\exp\left[-\frac{1}{2}\left(\frac{color-color_{0}}{\sigma_{color}}\right)^{2}\right].$ (6)
Here $color$ and $\sigma_{color}$ denote $G_{BP}-G_{RP}$ and its corresponding
error. Color errors are simply calculated as
$\sqrt{\sigma^{2}_{G_{BP}}+\sigma^{2}_{G_{RP}}}$, where $\sigma_{G_{BP}}$ and
$\sigma_{G_{RP}}$ are obtained via propagation of the flux errors (see the CDS
website https://vizier.u-strasbg.fr/viz-bin/VizieR-n?-source=METAnot&catid=1350&notid=63&-out=text).
$color_{0}$ is determined by the cluster RGB locus (the black line in Fig. 5)
at the given $G$ magnitude of a star. When assigning weights, we do not
include $\sigma_{G}$, since uncertainties in the $G$ band are much smaller
than those in $G_{BP}$ and $G_{RP}$ (on the order of $\sim$ 0.1) for $Gaia$
photometry. Stars from $G$ = 15 mag (the tip of the cluster's RGB) to the
$Gaia$ limit $G\simeq$ 21 mag are investigated.
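A minimal sketch of Eq. (6), assuming per-star magnitude and error arrays and `rgb_locus` holding the coefficients of the third-order polynomial fitted to the cluster RGB (color as a function of $G$).

```python
# A minimal sketch of the CMD weight of Eq. (6).
import numpy as np

def w_cmd(g, bp, rp, bp_err, rp_err, rgb_locus):
    color = bp - rp
    sigma_color = np.sqrt(bp_err**2 + rp_err**2)
    color0 = np.polyval(rgb_locus, g)  # locus color at each star's G mag
    return (1.0 / (np.sqrt(2 * np.pi) * sigma_color)
            * np.exp(-0.5 * ((color - color0) / sigma_color) ** 2))
```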
The PMs of the model stream generated in Sect. 3.2 are further employed to
weight stars. Fig. 6 shows the stream particles within the NGC 5824 field in
phase space, which serve as an estimate of the real stream. In PM space,
weights are computed as:
$w_{\rm PMs}=\frac{1}{2\pi n^{2}\sigma_{\mu^{*}_{\alpha}}\sigma_{\mu_{\delta}}}\exp\left\{-\frac{1}{2}\left[\left(\frac{\mu^{*}_{\alpha}-\mu^{*}_{\alpha,0}}{n\sigma_{\mu^{*}_{\alpha}}}\right)^{2}+\left(\frac{\mu_{\delta}-\mu_{\delta,0}}{n\sigma_{\mu_{\delta}}}\right)^{2}\right]\right\}.$ (7)
$\mu^{*}_{\alpha}$, $\mu_{\delta}$, $\sigma_{\mu^{*}_{\alpha}}$ and
$\sigma_{\mu_{\delta}}$ are the measured PMs and their corresponding errors.
$\mu^{*}_{\alpha,0}$ and $\mu_{\delta,0}$ are the components of the PMs
predicted at each star's $\delta$ based on the model stream's locus (blue
lines in the PM panels of Fig. 6). The locus is obtained by dividing the
particles into $\delta$ bins (bin width = 1$\degr$) and calculating the
medians of the PMs in each bin. It is worth noting that the PM errors are
multiplied by $n$, and we choose a moderate $n$ = 2 here, which is designed to
allow some deviation between the model and the observations. This can be
illustrated with a one-dimensional example (see Fig. 7). Assume that we are
going to assign a weight to a stream star (if it exists) with $\mu_{\delta}$ =
$x$ and $\sigma_{\mu_{\delta}}$ = 0.4 mas yr-1, and that the $\mu_{\delta,0}$
predicted by the model stream at the star's $\delta$ is 2 mas yr-1. The star's
weight will be determined by a Gaussian with mean = 2 and sigma = 0.4 ($n$ = 1,
red line) or 0.8 ($n$ = 2, green line). If the model predicts the stream very
well, that is, if $x$ is very close to 2 mas yr-1, the red line ($n$ = 1) will
clearly give a higher weight to the star. However, the model stream is only an
approximation to the real one, and small deviations between them are likely,
which might cause $x$ to fall outside the blue dashed lines. When this
happens, the green line ($n$ = 2) gives a higher weight. We have compared
results using different $n$ values and verified that $n$ = 2 is the most
favorable.
Figure 6: The planes of $\alpha$, heliocentric distance, proper motions in
$\alpha$ and $\delta$, and radial velocity as a function of $\delta$, shown
from top to bottom. The pink dots represent the model stream particles within
the NGC 5824 field. The red circle represents GC NGC 5824. The blue lines
denote the medians of the y-axis values in each $\delta$ bin, with a bin width
of 1$\degr$.
Figure 7: Illustration for using $n$ = 2 in Eq. (7). The red and green lines
represent Gaussians centered at 2 with sigma = 0.4 and 0.8 mas yr-1,
respectively.
Finally, star weights are obtained by multiplying $w_{\rm CMD}$ and $w_{\rm
PMs}$, and are then summed in $0.2\degr\times 0.2\degr$ sky pixels to expose
structures.
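A minimal sketch of Eq. (7) and the weighted sky map; `pm_locus_ra` and `pm_locus_dec` are hypothetical helpers returning the model-stream PM locus at a given $\delta$, and `w_cmd_values`/`w_pms_values` stand for the weights computed as above.

```python
# A minimal sketch of the PM weight of Eq. (7) and the weighted sky map.
import numpy as np

def w_pms(pmra, pmdec, pmra_err, pmdec_err, dec, n=2.0):
    # pm_locus_ra / pm_locus_dec are hypothetical interpolators of the
    # model-stream PM medians as a function of declination.
    mu0_a, mu0_d = pm_locus_ra(dec), pm_locus_dec(dec)
    norm = 1.0 / (2 * np.pi * n**2 * pmra_err * pmdec_err)
    z = ((pmra - mu0_a) / (n * pmra_err)) ** 2 \
        + ((pmdec - mu0_d) / (n * pmdec_err)) ** 2
    return norm * np.exp(-0.5 * z)

# Combine weights and sum in 0.2 deg x 0.2 deg pixels:
w = w_cmd_values * w_pms_values
ra_bins = np.arange(210, 250.2, 0.2)
dec_bins = np.arange(-40, 30.2, 0.2)
sky_map, _, _ = np.histogram2d(ra, dec, bins=[ra_bins, dec_bins], weights=w)
```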
### 4.2 Results
A weighted sky map, obtained by applying the above method to the data in the
cluster field, is shown in the left panel of Fig. 8. To make the stream more
prominent, pixels with summed weights $>$ 80 or $<$ 2 are masked, so that
overly strong noise and the weak background are not shown. The map is then
smoothed with a Gaussian kernel of $\sigma$ = 0.5$\degr$. The stretch is
logarithmic, with brighter areas corresponding to regions of higher weight.
The blue circle at the bottom marks the location of NGC 5824. The white
bottom-right corner lies close to the Galactic disk and is further masked in
the middle and right panels.
Figure 8: Log stretch of the matched filter map in the NGC 5824 field. The sky
pixel width is 0.2$\degr$ and the map is smoothed with a Gaussian kernel of
$\sigma$ = 0.5$\degr$. The three panels present the same map. The white arrows
in the left panel point to the stream features. The locus of the model stream
is overplotted as small red dots in the middle panel. The right panel
illustrates the construction of the stream's lateral profile (see text). The
bottom-right region close to the Galactic disk is masked in the middle and
right panels.
Due to the photometric depth of $Gaia$, the cluster's main sequence stars are
not observable, and only RGB stars, which are far fewer, can be used to trace
the underlying trailing tail. However, some stream-like signals are still
detected. In the left panel, it is clear that there are several structures
(marked with arrows) with higher weights between $\delta\simeq$ -21$\degr$ and
-4$\degr$ that could be connected smoothly and likely extend from NGC 5824. In
the middle panel, we overplot the trajectory of the model stream (small red
dots) and find that it passes through the structures well. An additional
segment at $\delta\simeq$ 6$\degr$ $-$ 16$\degr$ is a farther extension of the
stream. There is a gap in the middle at $\delta\simeq$ -4$\degr$ $-$ 6$\degr$
corresponding to the most distant range of the model stream (see the distance
panel in Fig. 6), where many RGB stars might be fainter than 21 mag. The
detected signature traces the cluster's trailing tail over $\sim 50\degr$; its
path can be roughly fitted by
$\alpha=4.07\times 10^{-5}\delta^{3}+6.68\times 10^{-3}\delta^{2}+0.37\delta+232.45$ (8)
where -33$\degr$ $<$ $\delta$ $<$ 16$\degr$.
In the right panel of Fig. 8, the stars enclosed by the red lines are selected
to calculate the statistical significance of the stream. The $\delta$ range is
$-22\degr$ to $-3\degr$. The central dashed line represents a more precise
description of the stream in this region, given by
$\alpha=7.15\times 10^{-3}\delta^{2}+0.38\delta+232.58+{\rm offset}$ (9)
with offset = 0$\degr$. The left and right boundaries correspond to offset =
-4 and 4$\degr$, respectively. A bin width of 0.2$\degr$ is used: at offset =
-4, -3.8, -3.6, …, the weights of stars within $\pm 0.1\degr$ of Eq. (9) are
integrated to create a lateral profile of the stream, as displayed in Fig. 9.
The central peak at offset = 0$\degr$ represents the stream feature. The
larger random counts on the positive side are caused by the higher stellar
density near the disk. The significance is defined as
$S=(w_{stream}-w_{background})/\sigma_{background}$, where $w_{stream}$ is the
stream signal and $w_{background}$ and $\sigma_{background}$ are the mean and
standard deviation of weights in the off-stream regions 0.5$\degr$ $<$
|offset| $<$ 4$\degr$. We get S = 7.5 and 3.6 for the negative and positive
sides, respectively, and S = 4.3 if both are considered. It can be inferred
from Fig. 9 that the stream's width is $\lesssim 0.2\degr$, because the signal
drops back to the background level when |offset| $>$ 0.1$\degr$, meaning there
is little stream signal beyond this range. If we adopt $d$ = 39 kpc for this
segment based on the model, the physical width is $\lesssim$ 136 pc.
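A minimal sketch of the significance estimate, assuming `profile` holds the integrated weights at offsets -4, -3.8, …, 4$\degr$; for simplicity the sketch pools both sides of the profile, whereas the text also quotes per-side values.

```python
# A minimal sketch of the lateral-profile significance of Sect. 4.2.
import numpy as np

offsets = np.round(np.arange(-4.0, 4.2, 0.2), 1)  # profile bins (deg)
on = np.isclose(offsets, 0.0)                     # the central stream bin
off = (np.abs(offsets) > 0.5) & (np.abs(offsets) < 4.0)  # background bins

w_stream = profile[on][0]
w_bg, sigma_bg = profile[off].mean(), profile[off].std()
S = (w_stream - w_bg) / sigma_bg  # the text quotes S = 4.3 using both sides
```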
Figure 9: The stream one-dimensional profile. The offset coordinate is defined
as deviation from the stream along $\alpha$ direction (see Eq. (9)).
### 4.3 A Part of Cetus?
Bonaca et al. (2021) pointed out that GC NGC 5824 and Cetus (Newberg et al.
2009), a stellar stream with a dwarf galaxy origin, have very close orbital
energies and angular momenta. Similar orbital trajectories between them are
also demonstrated in Chang et al. (2020). This raises a question: do the
features on the trailing side of the cluster belong to the Cetus stream?
Combining our results with previous research on Cetus, we present four reasons
why the detected features are indeed related to the trailing tail of NGC 5824.
1. The width of the features in Fig. 8 is only $\lesssim 0.2\degr$, which is
thin compared to a stream produced by a dwarf galaxy.
2. Cetus stars should have a relatively spread-out distribution in the CMD.
However, the stream features indicated by arrows in Fig. 8 disappear if the
RGB locus used to weight stars in the CMD is shifted either blueward or
redward by 0.1 mag, which means that they correspond exactly to NGC 5824.
3. Chang et al. (2020) pointed out that GC NGC 5824 should not be the core of
Cetus, implying that there is no direct connection between the cluster and the
Cetus stream. Furthermore, Yuan et al. (2021), who searched for Cetus members
using data covering the cluster, did not detect any densely populated
structure around NGC 5824. Hence the features should not be a part of Cetus.
4. Triangulum, as a piece of the leading tail, also provides weak evidence for
the existence of the trailing tail.
## 5 Discussion
When comparing Triangulum to the model stream in distance, we find some
incompatibility, shown in Fig. 10. Bonaca et al. (2012) estimated a distance
to Triangulum of 26 $\pm$ 4 kpc (the lower black error bar), while Martin et
al. (2013) proposed 35 $\pm$ 3 kpc (the upper black error bar) for the stream.
As mentioned above, we adopt a distance of 30 kpc (the green solid line) and
find that the member stars match GC NGC 5824 well in the CMD. However, under a
static Milky Way potential plus a moving LMC, the resulting model stream
predicts that Triangulum's distance should be about 20 - 25 kpc (gray dots),
both in this work and in Li et al. (2022) (the second panel of Fig. 8
therein). This raises a puzzle: why is there such a difference?
Figure 10: Heliocentric distance as a function of $\delta$. The gray dots
represent the stream particles. The blue points represent the 4 BHB stars in
our sample, whose distances come from Xue et al. (2011). The red dots and
error bars show the distances and corresponding errors for all member stars
estimated using the method of Carlin et al. (2015). The red dashed line
corresponds to their median value of 33 kpc. The green solid line marks the
adopted distance of 30 kpc. The two black error bars represent 26 $\pm$ 4 kpc
(Bonaca et al. 2012) and 35 $\pm$ 3 kpc (Martin et al. 2013).
Sheffield et al. (2014) presented an analysis of TriAnd1 ($d\sim$ 20 kpc) and
TriAnd2 ($d\sim$ 28 kpc) (Martin et al. 2007), two other stellar substructures
in the direction of M31 and M33. They show that even though the two structures
are separated by more than 5 kpc in distance, they are indistinguishable in
radial velocity and PMs. We note that this kinematic feature is very similar
to that of Triangulum when compared to the model stream: the real and mock
streams are also separated by more than 5 kpc, yet their trends in phase space
remain in concordance. Considering that the stream and those structures lie in
exactly the same region, it is very likely that Triangulum has been affected
by the same mechanism that produced TriAnd1 and TriAnd2. Specifically, some
process related either to a dwarf galaxy (Sheffield et al. 2014) or to the
Galactic disk (Xu et al. 2015) that created TriAnd1 and TriAnd2 might have
pushed Triangulum farther away (30 kpc) from where it would otherwise be
(20 - 25 kpc). We anticipate that this prediction could be tested by future
simulations of the formation of the TriAnd overdensities.
It is also worth noting that there is another stream segment named Turbio
(Shipp et al. 2018) between Triangulum and GC NGC 5824, which was considered
to be disrupted from the cluster based on their similar dynamics in Bonaca et
al. (2021) and Li et al. (2022). We do not inspect this stream due to the lack
of spectroscopic data. It is expected that upcoming observations will provide
more details on the connection between Turbio and the cluster, and even
opportunities to search for other stream segments on the leading side. If
these can be confirmed, the NGC 5824 tidal tails would be the longest cold
stream ever discovered in the Milky Way.
## 6 Conclusions
We first validate the connection between the Triangulum stream and NGC 5824. A
total of 26 stream member stars are selected, 16 of which are newly
identified. We model the cluster's disruption under a static Milky Way
potential accompanied by a moving LMC. The real stream is compared to the
mock one in phase space and consistent trends are found. In metallicity and
the CMD, the member stars and the cluster are also in good agreement. These
results support the previous statement that Triangulum originates from GC NGC
5824 (Bonaca et al. 2021; Li et al. 2022).
Given that Triangulum can be considered a segment of the cluster's leading
tail, we examine the existence of its trailing tail. Using a matched-filter
method that combines the CMD and PMs to weight stars, we find a $\sim$
50$\degr$ trailing tail for GC NGC 5824. The features match the model stream
well. Although the signals are tenuous and discrete, a peak of $>$ 3$\sigma$
over the background noise can still be discerned in the lateral stream
profile, from which we estimate a width of $\lesssim$ 0.2$\degr$. We expect
that follow-up observations will provide more details about the NGC 5824
stream.
###### Acknowledgements.
We thank the anonymous referee, whose comments greatly improved this
publication. This study was supported by the National Natural Science
Foundation of China under grant nos 11988101, 11973048, 11927804, 11890694 and
11873052, and the National Key R&D Program of China, grant no. 2019YFA0405500.
This work (MNI) is also supported by JSPS KAKENHI Grant Number 20H05855 and
the GHfund A (202202018107). We acknowledge the support from the 2m Chinese
Space Station Telescope project: CMS-CSST-2021-B05. Guoshoujing Telescope (the
Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a
National Major Scientific Project built by the Chinese Academy of Sciences.
Funding for the project has been provided by the National Development and
Reform Commission. LAMOST is operated and managed by the National Astronomical
Observatories, Chinese Academy of Sciences. This work presents results from
the European Space Agency (ESA) space mission $Gaia$. $Gaia$ data are being
processed by the $Gaia$ Data Processing and Analysis Consortium (DPAC).
Funding for the DPAC is provided by national institutions, in particular the
institutions participating in the $Gaia$ MultiLateral Agreement (MLA). The
$Gaia$ mission website is https://www.cosmos.esa.int/gaia. The $Gaia$ archive
website is https://archives.esac.esa.int/gaia. Funding for the Sloan Digital
Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S.
Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High
Performance Computing at the University of Utah. The SDSS website is
www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for
the Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science, Carnegie
Mellon University, Center for Astrophysics — Harvard & Smithsonian, the
Chilean Participation Group, the French Participation Group, Instituto de
Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the
Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the
Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz
Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie
(MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-
Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical
Observatories of China, New Mexico State University, New York University,
University of Notre Dame, Observatário Nacional / MCTI, The Ohio State
University, Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group, Universidad Nacional Autónoma de México,
University of Arizona, University of Colorado Boulder, University of Oxford,
University of Portsmouth, University of Utah, University of Virginia,
University of Washington, University of Wisconsin, Vanderbilt University, and
Yale University.
## References
* Ahn et al. (2012) Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, ApJS, 203, 21
* Baumgardt & Hilker (2018) Baumgardt, H. & Hilker, M. 2018, MNRAS, 478, 1520
* Bell et al. (2008) Bell, E. F., Zucker, D. B., Belokurov, V., et al. 2008, ApJ, 680, 295
* Bernard et al. (2016) Bernard, E. J., Ferguson, A. M. N., Schlafly, E. F., et al. 2016, MNRAS, 463, 1759
* Bonaca et al. (2012) Bonaca, A., Geha, M., & Kallivayalil, N. 2012, ApJ, 760, L6
* Bonaca et al. (2021) Bonaca, A., Naidu, R. P., Conroy, C., et al. 2021, ApJ, 909, L26
* Bovy et al. (2012) Bovy, J., Allende Prieto, C., Beers, T. C., et al. 2012, ApJ, 759, 131
* Bovy et al. (2016) Bovy, J., Bahmanyar, A., Fritz, T. K., & Kallivayalil, N. 2016, ApJ, 833, 31
* Bowden et al. (2015) Bowden, A., Belokurov, V., & Evans, N. W. 2015, MNRAS, 449, 1391
* Carlin et al. (2015) Carlin, J. L., Liu, C., Newberg, H. J., et al. 2015, AJ, 150, 4
* Chang et al. (2020) Chang, J., Yuan, Z., Xue, X.-X., et al. 2020, ApJ, 905, 100
* Cui et al. (2012) Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, RAA, 12, 1197
* El-Falou & Webb (2022) El-Falou, N. & Webb, J. J. 2022, MNRAS, 510, 2437
* Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1
* Gaia Collaboration et al. (2018) Gaia Collaboration, Helmi, A., van Leeuwen, F., et al. 2018, A&A, 616, A12
* Gibbons et al. (2014) Gibbons, S. L. J., Belokurov, V., & Evans, N. W. 2014, MNRAS, 445, 3788
* Grillmair (2009) Grillmair, C. J. 2009, ApJ, 693, 1118
* Grillmair (2019) Grillmair, C. J. 2019, ApJ, 884, 174
* Grillmair & Johnson (2006) Grillmair, C. J. & Johnson, R. 2006, ApJ, 639, L17
* Harris (1996) Harris, W. E. 1996, AJ, 112, 1487
* Hernquist (1990) Hernquist, L. 1990, ApJ, 356, 359
* Ibata et al. (2021) Ibata, R., Malhan, K., Martin, N., et al. 2021, ApJ, 914, 123
* Ibata et al. (2019) Ibata, R. A., Bellazzini, M., Malhan, K., Martin, N., & Bianchini, P. 2019, Nature Astronomy, 3, 667
* Ishigaki et al. (2016) Ishigaki, M. N., Hwang, N., Chiba, M., & Aoki, W. 2016, ApJ, 823, 157
* Koposov et al. (2014) Koposov, S. E., Irwin, M., Belokurov, V., et al. 2014, MNRAS, 442, L85
* Koposov et al. (2010) Koposov, S. E., Rix, H.-W., & Hogg, D. W. 2010, ApJ, 712, 260
* Kundu et al. (2019a) Kundu, R., Fernández-Trincado, J. G., Minniti, D., et al. 2019a, MNRAS, 489, 4565
* Kundu et al. (2019b) Kundu, R., Minniti, D., & Singh, H. P. 2019b, MNRAS, 483, 1737
* Kundu et al. (2021) Kundu, R., Navarrete, C., Fernández-Trincado, J. G., et al. 2021, A&A, 645, A116
* Küpper et al. (2010) Küpper, A. H. W., Kroupa, P., Baumgardt, H., & Heggie, D. C. 2010, MNRAS, 401, 105
* Law & Majewski (2010) Law, D. R. & Majewski, S. R. 2010, ApJ, 714, 229
* Li et al. (2022) Li, T. S., Ji, A. P., Pace, A. B., et al. 2022, ApJ, 928, 30
* Liang et al. (2017) Liang, X. L., Zhao, J. K., Oswalt, T. D., et al. 2017, ApJ, 844, 152
* Lindegren et al. (2021) Lindegren, L., Bastian, U., Biermann, M., et al. 2021, A&A, 649, A4
* Liu et al. (2015) Liu, X.-W., Zhao, G., & Hou, J.-L. 2015, Research in Astronomy and Astrophysics, 15, 1089
* Malhan et al. (2018) Malhan, K., Ibata, R. A., & Martin, N. F. 2018, MNRAS, 481, 3442
* Martin et al. (2013) Martin, C., Carlin, J. L., Newberg, H. J., & Grillmair, C. 2013, ApJ, 765, L39
* Martin et al. (2007) Martin, N. F., Ibata, R. A., & Irwin, M. 2007, ApJ, 668, L123
* Martin et al. (2014) Martin, N. F., Ibata, R. A., Rich, R. M., et al. 2014, ApJ, 787, 19
* McMillan (2017) McMillan, P. J. 2017, MNRAS, 465, 76
* Miyamoto & Nagai (1975) Miyamoto, M. & Nagai, R. 1975, PASJ, 27, 533
* Mucciarelli et al. (2018) Mucciarelli, A., Lapenna, E., Ferraro, F. R., & Lanzoni, B. 2018, ApJ, 859, 75
* Navarro et al. (1996) Navarro, J. F., Frenk, C. S., & White, S. D. M. 1996, ApJ, 462, 563
* Newberg et al. (2009) Newberg, H. J., Yanny, B., & Willett, B. A. 2009, ApJ, 700, L61
* Odenkirchen et al. (2003) Odenkirchen, M., Grebel, E. K., Dehnen, W., et al. 2003, AJ, 126, 2385
* Palau & Miralda-Escudé (2019) Palau, C. G. & Miralda-Escudé, J. 2019, MNRAS, 488, 1535
* Palau & Miralda-Escudé (2021) Palau, C. G. & Miralda-Escudé, J. 2021, MNRAS, 504, 2727
* Plummer (1911) Plummer, H. C. 1911, MNRAS, 71, 460
* Pouliasis et al. (2017) Pouliasis, E., Di Matteo, P., & Haywood, M. 2017, A&A, 598, A66
* Price-Whelan (2017) Price-Whelan, A. M. 2017, The Journal of Open Source Software, 2, 388
* Riello et al. (2021) Riello, M., De Angeli, F., Evans, D. W., et al. 2021, A&A, 649, A3
* Rockosi et al. (2002) Rockosi, C. M., Odenkirchen, M., Grebel, E. K., et al. 2002, AJ, 124, 349
* Schlafly & Finkbeiner (2011) Schlafly, E. F. & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Schönrich et al. (2010) Schönrich, R., Binney, J., & Dehnen, W. 2010, MNRAS, 403, 1829
* Sheffield et al. (2014) Sheffield, A. A., Johnston, K. V., Majewski, S. R., et al. 2014, ApJ, 793, 62
* Shipp et al. (2018) Shipp, N., Drlica-Wagner, A., Balbinot, E., et al. 2018, ApJ, 862, 114
* Vasiliev & Baumgardt (2021) Vasiliev, E. & Baumgardt, H. 2021, MNRAS, 505, 5978
* Xu et al. (2015) Xu, Y., Newberg, H. J., Carlin, J. L., et al. 2015, ApJ, 801, 105
* Xue et al. (2011) Xue, X.-X., Rix, H.-W., Yanny, B., et al. 2011, ApJ, 738, 79
* Yang et al. (2019a) Yang, C., Xue, X.-X., Li, J., et al. 2019a, ApJ, 886, 154
* Yang et al. (2019b) Yang, C., Xue, X.-X., Li, J., et al. 2019b, ApJ, 880, 65
* Yang et al. (2021) Yang, Y., Zhao, J., Zhang, J., Ye, X., & Zhao, G. 2021, ApJ, 922, 105
* Yang et al. (2022) Yang, Y., Zhao, J.-K., Ishigaki, M. N., et al. 2022, MNRAS, 513, 853
* Yanny et al. (2009) Yanny, B., Rockosi, C., Newberg, H. J., et al. 2009, AJ, 137, 4377
* Ye et al. (2021) Ye, X., Zhao, J., Zhang, J., Yang, Y., & Zhao, G. 2021, AJ, 162, 171
* Yuan et al. (2021) Yuan, Z., Malhan, K., Sestito, F., et al. 2021, arXiv e-prints, arXiv:2112.05775
* Zhao & Chen (2021) Zhao, G. & Chen, Y. 2021, Science China Physics, Mechanics, and Astronomy, 64, 239562
* Zhao et al. (2006) Zhao, G., Chen, Y.-Q., Shi, J.-R., et al. 2006, Chinese J. Astron. Astrophys., 6, 265
* Zhao et al. (2012) Zhao, G., Zhao, Y.-H., Chu, Y.-Q., Jing, Y.-P., & Deng, L.-C. 2012, RAA, 12, 723
* Zhao et al. (2009) Zhao, J., Zhao, G., & Chen, Y. 2009, ApJ, 692, L113
* Zhao et al. (2020) Zhao, J. K., Ye, X. H., Wu, H., et al. 2020, ApJ, 904, 61
* Zhao et al. (2018) Zhao, J. K., Zhao, G., Aoki, W., et al. 2018, ApJ, 868, 105
# TypeScript’s Evolution: An Analysis of Feature Adoption Over Time
Thanks: This research was supported by an Australian Government Research Training Program Scholarship.
Joshua D. Scarsbrook, Mark Utting, Ryan K. L. Ko School of Information
Technology and Electrical Engineering
The University of Queensland
Brisbane, Australia
<EMAIL_ADDRESS>
###### Abstract
TypeScript is a quickly evolving superset of JavaScript with active
development of new features. Our paper seeks to understand how quickly these
features are adopted by the developer community. Existing work in JavaScript
shows that the adoption of dynamic language features can be a major hindrance
to static analysis. As TypeScript evolves, the addition of features makes the
underlying standard more and more difficult to keep up with. In our work we
present an analysis of 454 open source TypeScript repositories and study the
adoption of 13 language features over the past three years. We show that while
new versions of the TypeScript compiler are aggressively adopted by the
community, the same cannot be said for language features. While some
experience strong growth, others are rarely adopted by projects. Our work
serves as a starting point for future study of the adoption of features in
TypeScript. We also release our analysis and data gathering software as open
source in the hope it helps the programming languages community.
###### Index Terms:
TypeScript, JavaScript, Data Mining
## I Introduction
TypeScript[1] is a fast-evolving superset of JavaScript implementing static
type checking. From 2020 to 2022, there were ten releases, each bringing
additional features and most adding new syntax to the language. With this
rapid pace of evolution, the question becomes: how quickly are these features
being picked up by the developer community? Are some features more popular
than others?
This question has already been asked about other programming languages such as
JavaScript, Java, and Python. In JavaScript, the work by Richards et al. [2]
explores the use of dynamic language features and concludes that production
applications often use dynamic features, making static analysis challenging.
Similar work has been done in Java, where Parnin et al. [3] discovered that
most uses of generics were covered by a small number of classes but that usage
varied between developers.
TypeScript has an evolving standard without a formal specification. Our paper
seeks to understand how quickly new features in TypeScript are adopted, to
determine how important it is for tools to stay up to date with the latest
release. We hypothesize that it is unnecessary for program analysis tools to
support the entire language and a smaller subset is sufficient for most
applications.
In this paper, we focus specifically on syntactic features (features
implemented in the Abstract Syntax Tree without modifying language semantics)
introduced by TypeScript versions between 4.0 and 4.9 in popular TypeScript
libraries and applications. TypeScript also sees regular improvements to type
inference and language features that are expressed through the type checker.
These features are not a focus for our study.
In this paper, we aimed to answer three research questions about the adoption
of TypeScript features:
* (RQ1) What are the most popular features recently introduced in TypeScript?
* (RQ2) How quickly are new TypeScript features adopted by projects that use TypeScript?
* (RQ3) How quickly are new TypeScript language versions adopted by projects that use TypeScript?
Results are presented in Section III. In this paper, we contribute:
* A dataset of current popular TypeScript repositories collected from GitHub[4].
* An open source framework for TypeScript feature/version adoption studies.
* The first study of the rate of language feature/version adoption for TypeScript.
* Recommendations for how important it is for tools to adopt new language features in TypeScript.
Our paper is organized into four further sections. We start with our
methodology for analysis (Section II) before presenting our results (Section
III). We then make a brief review of related work (Section IV) before
concluding with a discussion including some future research directions
(Section V).
## II Methodology
We ran our study on the top-starred repositories containing TypeScript code on
GitHub. We extracted all commits between 2020 and 2022 (inclusive) and derived
a series of boolean flags indicating the usage of each language feature. The
analysis code and data sets used for this analysis are available in our
repository (see https://github.com/Vbitz/jsdata_msr).
### II-A Dataset
We started by downloading a list of the top $500$ TypeScript repositories from
GitHub, sorted by the number of users that have starred each repository. These
repositories include code in languages besides TypeScript, but only TypeScript
is considered in our paper. We collected the list of repositories on January
4, 2023 and include it as part of our dataset. Our analysis covers all commits
attached to a given repository. These projects often use feature branches,
which may include a feature well before it is released on the main branch. For
all calculations we used the date the feature first appeared in the repository
rather than the date it was included in a release.
Of those $500$ repositories, $23$ had no commits extracted and an additional
$23$ recorded no versions of TypeScript. Therefore there are $454$
repositories with at least one version of TypeScript recorded.
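As a point of reference, the collection step can be sketched against the GitHub search API (the actual pipeline is written in Go, so this Python version is illustrative only); authentication tokens and rate-limit handling are omitted.

```python
# A minimal sketch of collecting the top-starred TypeScript repositories.
# Unauthenticated requests are heavily rate-limited, and the search API
# caps results at 1000 items per query.
import requests

repos = []
for page in range(1, 6):  # 5 pages x 100 = top 500
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": "language:typescript", "sort": "stars",
                "order": "desc", "per_page": 100, "page": page},
        headers={"Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()
    repos.extend(r["full_name"] for r in resp.json()["items"])
```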
### II-B Analysis
Our pipeline consists of an open source program written in Go[5] that extracts
every unique TypeScript file from every commit in each repository, including
all branches and all tags. We only consider commits made between January 1st
2020 and December 31st 2022 inclusive. We filtered by date and only selected
TypeScript features released between 2020 and 2022, because including commits
outside this time span would not yield useful results.
In our dataset of $454$ repositories, $87\%$ contain fewer than $1000$
TypeScript files. We consider $1,325,810$ total commits in our analysis. Git
commits contain multiple dates, such as when the commit was authored versus
when it was committed. For our analysis we choose the latest possible date
included in the commit.
Extracted TypeScript files are parsed by TypeScript and the usage of different
language features is detected according to their presence in the Abstract
Syntax Tree (AST). With extensive caching and duplicate detection, the entire
analysis takes approximately one hour.
### II-C Version Detection
We parse the package.json file in the root of the repository to detect the
TypeScript version from the installed dependencies.
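A minimal sketch of this step (the real pipeline is in Go); the semver-prefix stripping is a simplification, since version ranges can be more complex than a `^`/`~` prefix.

```python
# A minimal sketch of version detection from package.json at the repo root.
import json
from pathlib import Path

def detect_typescript_version(repo_root: str) -> str | None:
    pkg_path = Path(repo_root) / "package.json"
    if not pkg_path.is_file():
        return None
    pkg = json.loads(pkg_path.read_text(encoding="utf-8"))
    for section in ("devDependencies", "dependencies"):
        version = pkg.get(section, {}).get("typescript")
        if version:
            return version.lstrip("^~")  # strip common semver range prefixes
    return None
```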
### II-D Feature List
We focused on syntactic features which are exposed in the AST exported by
TypeScript. We chose to focus on features released in the last three years
(between 2020 and 2022).
TypeScript versions are released as a Beta and a Release Candidate before they
are formally released. In our paper, we consider the full release to be Day
Zero, as listed below. Projects adopting betas will show up as adopting
features or versions before they were formally released (a negative number of
days relative to Day Zero). TypeScript 4.8 and 4.6 made no syntactic changes
to the language and only included semantic and inference changes.
TABLE I: A list of the 8 TypeScript versions and 13 TypeScript features studied in our paper.
Version | Release Date | | Name
---|---|---|---
4.9 | 2022-11-15 | $f_{0}$ | satisfies operator
| | $f_{1}$ | accessor property
4.7 | 2022-05-24 | $f_{2}$ | extends constraint on infer
| | $f_{3}$ | Variance Annotations in and out
4.5 | 2021-11-17 | $f_{4}$ | type import modifier
| | $f_{5}$ | Import assertions
4.4 | 2021-08-26 | $f_{6}$ | static blocks in classes
4.3 | 2021-05-26 | $f_{7}$ | override modifier on methods
4.2 | 2021-02-23 | $f_{8}$ | Abstract construct signatures
4.1 | 2020-11-19 | $f_{9}$ | Template literal types
| | $f_{10}$ | Key Remapping in Mapped Types
4.0 | 2020-08-20 | $f_{11}$ | Labeled Tuple Elements
| | $f_{12}$ | Short-Circuiting Assignment
Table I lists the $8$ versions and $13$ features in our study. This is not an
exhaustive list of the features introduced, since we excluded features
requiring type inference or type checking to identify.
Our dataset includes a few special repositories that have different
characteristics from other projects:
* TypeScript[1]: The source code of TypeScript is included as part of this analysis.
* Babel[6]: Babel is a compiler for JavaScript. It includes both ECMAScript[7] features and some TypeScript features since it has support for TypeScript syntax.
## III Results
To address our research questions, we start with the adoption of TypeScript
features before moving on to the adoption of TypeScript versions.
### III-A RQ1: Feature Adoption Rating
We can categorize the adoption of features into two major groups based on
their adoption slopes.
Group one contains $f_{4}$, $f_{9}$, $f_{11}$, $f_{7}$, $f_{12}$ and $f_{10}$,
and includes features adopted by more than $20$ repositories within one year
after release. The most popular feature in this dataset is $f_{4}$ (type
modifiers on import) and the second most popular is $f_{9}$ (Template Literal
Types).
Both of these features filled gaps in the TypeScript language. Type modifiers
ensure imports are used only for type definitions and are erased when the code
is compiled. This allows including other libraries and files without
introducing runtime dependencies. It can also help to break import loops in
cases where a module needs a type from a module that imports it. Template
Literal Types similarly give more flexibility in how types are described and
open up new avenues of metaprogramming.
Group two contains $f_{2}$, $f_{1}$, $f_{8}$, $f_{3}$, $f_{0}$, $f_{5}$ and
$f_{6}$, and includes features adopted by fewer than $20$ repositories within
one year after release. $f_{6}$ (static blocks in classes) has the lowest
adoption rate in our dataset, with only four repositories adopting it within a
year.
Unlike $f_{4}$ and $f_{9}$, static blocks have equivalents in existing code,
so they are only used in niche circumstances.
### III-B RQ2: Feature Adoption
Figure 1: How quickly each TypeScript feature is adopted relative to the
others. Note the release date of each feature, as some features have not been
available for all $800$ days.
Figure 1 shows the adoption curve of each of the TypeScript features we looked
at. Compared with Figure 2, we can immediately see two major differences.
First, different features have significantly different adoption rates, with
some reaching high levels of adoption and some barely being adopted at all.
Secondly, all features have mostly linear adoption curves.
Features were detected in any file ending with .ts that can successfully be
parsed as TypeScript. This means features that are only used in unit tests are
also included here. In addition, we include every branch of each repository,
so some features are adopted first in a feature branch before being included
in the main branch.
TABLE II: The number of days before/after release at which features were introduced into TypeScript and Babel.
| $f_{0}$ | $f_{1}$ | $f_{2}$ | $f_{3}$ | $f_{4}$ | $f_{5}$ | $f_{6}$ | $f_{7}$ | $f_{8}$ | $f_{9}$ | $f_{10}$ | $f_{11}$ | $f_{12}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
TypeScript | $-73$ | $-80$ | $-80$ | $-62$ | $-50$ | $-57$ | $-61$ | $-60$ | $-46$ | $-70$ | $-70$ | $-92$ | $-96$
Babel | $37$ | $-19$ | $-6$ | $-6$ | $-515$ | N/A | $-114$ | $-27$ | $-1$ | $-322$ | $-35$ | $-21$ | $211$
Both Babel and TypeScript are major outliers in the feature adoption rates.
Table II focuses on these two repositories. Babel adopted some features well
before TypeScript introduced them, and TypeScript adopted all features before
they were released. The behavior of TypeScript is easy to explain: its high
unit test coverage means TypeScript starts adopting features as soon as they
are implemented in the repository. Some features are part of the ECMAScript
standard rather than TypeScript, so Babel may include these features before
TypeScript adds support for them. That explains why Babel adopts some features
well before TypeScript.
### III-C RQ3: TypeScript Versions
Figure 2: The adoption curves of different versions. Version 4.9 was released
50 days before the data collection ended (31st December 2022), so the data
stops there. Adoption rates asymptote to 180 projects, which is around $40\%$
of projects. Other projects jump versions, rather than adopting every version.
Figure 2 shows the adoption curve of each of the TypeScript versions we pulled
features from. We can see that all versions follow a similar adoption curve,
with an initial slow adoption of pre-release versions, then rapid adoption in
the three months after release, followed by slower late adoption by a small
number of repositories. Roughly $1/3$ of projects ($160$ out of $454$) adopt
the latest release within the first three months after release (except for
TypeScript 4.9, which was released less than 50 days before data collection
ended). These fast adoption curves are not surprising, since
JavaScript/TypeScript projects regularly update dependencies to the latest
revision and TypeScript releases do not introduce significant breaking
changes.
Most adoption happens in the first three months after release (roughly $35\%$
of projects in our dataset), with a small tail at the end for projects that
update after a newer version is already released. At the time of writing,
TypeScript releases new versions every three months, so some projects may not
have adopted a version before the next one is released.
These results are aggregated over $454$ different repositories, and not all
repositories are accounted for here, for two major reasons. First, while some
repositories (Visual Studio Code, for example) adopt new versions within a few
days of release, others take a few months to adopt new versions or do not
adopt them at all; the adoption nevertheless averages out to the same curve.
Second, we detected the TypeScript version using the package.json file. The
maximum adoption for any version is $185$ out of $454$ ($46$ of the original
$500$ repositories have no TypeScript version recorded). Not all repositories
adopt every version of TypeScript, and most skip versions as they do not
regularly update.
A few repositories adopted versions before they were formally released.
TypeScript depends on itself but overrides that with the local version; it
therefore adopts new versions before they are released.
## IV Related Work
Some existing work has already investigated adoption of language features in
JavaScript [2, 8, 9], Java [3, 10, 11], and Python [12, 13].
JavaScript allows for self-modifying code and code generated and evaluated at
runtime. These features make tracking the control flow over a program's
execution difficult, so some previous works exclude them from analysis. The
work by Richards et al. [2] questions this approach by looking at the
prevalence of these features in production code. Due to the nature of those
features, most analysis there is based on dynamic analysis rather than the
static analysis we use in our work.
A large amount of work in this area has been done in Java [3, 10, 11].
Firstly, the work by Parnin et al. [3] discovered that most uses of generics
were covered by a small number of classes but that usage varied between
developers. The work by Dyer et al. [10] broadened this by looking at $31,432$
Java projects on SourceForge [14] and studying the adoption of $18$ language
features introduced across three versions of Java.
The work by Peng et al. [12] performs a study similar to ours, focusing on
Python projects instead of TypeScript projects. They perform a smaller study
of $35$ different projects across a range of sectors. They make the
interesting observation that larger projects tend to use less involved
language features like safety checks rather than more advanced features like
diamond inheritance. This lines up with our outcome, since the most popular
features we observed increase safety and the least popular feature (static
blocks in classes) can make control flow more difficult to read. Another work,
by Yang et al. [15], follows a similar direction to the work by Richards et
al. [2], looking at the impact of dynamic features on static analysis of
Python code.
The work by Cristiani and Thiemann [16] includes a brief analysis of feature
usage in DefinitelyTyped[17]. That work is limited to types in type
declaration files, whereas our study looks at TypeScript source code.
The static analysis field can leverage studies like ours to inform the
language features it implements support for. For instance, the work by Rastogi
et al. [18] seeks to improve the safety of TypeScript programs and uses a
smaller subset of TypeScript called “Safe TypeScript”. This work was done
prior to the release of TypeScript 1.1 (October 6, 2014) and lacks many of the
features introduced afterwards. In addition, the work by Feldthaus and Møller
[19] uses a version of the TypeScript language to detect faults in JavaScript
interfaces. Like the work by Cristiani and Thiemann [16], it focuses on
declaration files rather than TypeScript source code.
Overall, the related work covers two kinds of study. Some work [2] uses
dynamic analysis to study the prevalence of dynamic features. The other group
of studies [12] looks at the usage of features across different types of
project. A further field [16, 19, 18] uses static analysis to perform code
analysis on TypeScript language features. Our work extends the second group by
looking at a series of different versions.
## V Discussion & Concluding Remarks
The answer to RQ1 is that the most popular new language features are type
modifiers on imports and template literal types. While type modifiers solve an
existing issue of unintended side effects from imported modules, template
literal types give additional flexibility in how types are constructed.
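As a minimal sketch of these two features (the module path "./config" is a placeholder, not from the paper):

```typescript
// The `type` modifier marks an import as type-only. It is erased during
// compilation, so "./config" is never executed at runtime and any side
// effects in that module cannot fire.
import type { Config } from "./config";

let current: Config | undefined; // the type is used, the module is not loaded

// Template literal types construct new string literal types from unions.
type Axis = "x" | "y" | "z";
type AxisEvent = `${Axis}Changed`; // "xChanged" | "yChanged" | "zChanged"

const listeners: Record<AxisEvent, () => void> = {
  xChanged: () => console.log("x changed"),
  yChanged: () => console.log("y changed"),
  zChanged: () => console.log("z changed"),
};
```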
The answer to RQ2 is more involved. Different features are adopted at different rates, which is an expected outcome; some features are very niche and are used by only a small number of libraries. The unexpected outcome is that adoption rates are static over time and no feature sees a large initial peak as developers race to adopt it. Our interpretation is that very few projects need a new feature immediately, so features are adopted as developers learn about them and gradually use them in new code and in code rewrites.
Finally, the answer to RQ3 is straightforward. Most projects adopt new versions of TypeScript quickly, with an expected long tail as the remaining projects update to new versions.
### V-A Conclusions
We observed a simple adoption curve for language versions, with most adoption happening shortly after release and ${1}/{3}$ of repositories updating before the next TypeScript version is released. The adoption of new language features into repositories, however, is much more gradual. A project can update to a new version of TypeScript without changing its code at all, and hence without adopting any new features; adopting a new language feature may require adopting a new TypeScript version, but not vice versa. We can therefore conclude that even when a feature is available to a project, the project may not adopt it until much later.
Returning to our overall goal of specifying a useful subset of TypeScript for program analysis tools, we can see that although new language versions are adopted quickly by the ecosystem (${1}/{3}$ within $3$ months), the adoption of new features is far more variable, with some features never being adopted outside a few projects. This shows that it is important for tools to keep up to date with language versions, but less important to support all language features (e.g., Group 2 features are used by only a few projects).
### V-B Future Work
Our analysis currently focuses on syntactic changes to TypeScript, which misses improvements made to type inference and to the developer experience. Future research could usefully expand the list of features and look at semantic changes.
Our paper focuses on the features introduced in the 4.x versions of TypeScript
to make timely analysis possible. Future work could look at additional
TypeScript versions.
Additionally, it would be interesting to run our analysis on a wider body of repositories to see how the results change with less popular projects.
## References
* [1] “TypeScript.” [Online]. Available: https://www.typescriptlang.org/
* [2] G. Richards, S. Lebresne, B. Burg, and J. Vitek, “An Analysis of the Dynamic Behavior of JavaScript Programs,” in _Proceedings of the 31st ACM SIGPLAN Conference on Programming Language Design and Implementation_, ser. PLDI ’10. New York, NY, USA: Association for Computing Machinery, 2010, pp. 1–12. [Online]. Available: https://doi.org/10.1145/1806596.1806598
* [3] C. Parnin, C. Bird, and E. Murphy-Hill, “Java Generics Adoption: How New Features Are Introduced, Championed, or Ignored,” in _Proceedings of the 8th Working Conference on Mining Software Repositories_, ser. MSR ’11. New York, NY, USA: Association for Computing Machinery, 2011, pp. 3–12. [Online]. Available: https://doi.org/10.1145/1985441.1985446
* [4] “GitHub.” [Online]. Available: https://github.com/
* [5] “Golang.” [Online]. Available: https://go.dev/
* [6] “Babel.” [Online]. Available: https://babeljs.io/
* [7] ECMA TC39, “ECMAScript.” [Online]. Available: https://www.ecma-international.org/publications-and-standards/standards/ecma-262/
* [8] S. Wei, F. Xhakaj, and B. G. Ryder, “Empirical study of the dynamic behavior of JavaScript objects,” _Software: Practice and Experience_, vol. 46, no. 7, pp. 867–889, 2016. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2334
* [9] G. Richards, C. Hammer, B. Burg, and J. Vitek, “The Eval That Men Do,” in _ECOOP 2011 – Object-Oriented Programming_, M. Mezini, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 52–78.
* [10] R. Dyer, H. Rajan, H. A. Nguyen, and T. N. Nguyen, “Mining Billions of AST Nodes to Study Actual and Potential Usage of Java Language Features,” in _Proceedings of the 36th International Conference on Software Engineering_, ser. ICSE 2014. New York, NY, USA: Association for Computing Machinery, 2014, pp. 779–790. [Online]. Available: https://doi.org/10.1145/2568225.2568295
* [11] E. Tempero, J. Noble, and H. Melton, “How Do Java Programs Use Inheritance? An Empirical Study of Inheritance in Java Software,” in _ECOOP 2008 – Object-Oriented Programming_, J. Vitek, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, pp. 667–691.
* [12] Y. Peng, Y. Zhang, and M. Hu, “An Empirical Study for Common Language Features Used in Python Projects,” in _2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)_, 2021, pp. 24–35.
* [13] B. Åkerblom, J. Stendahl, M. Tumlin, and T. Wrigstad, “Tracing Dynamic Features in Python Programs,” in _Proceedings of the 11th Working Conference on Mining Software Repositories_, ser. MSR 2014. New York, NY, USA: Association for Computing Machinery, 2014, pp. 292–295. [Online]. Available: https://doi.org/10.1145/2597073.2597103
* [14] “SourceForge.” [Online]. Available: https://sourceforge.net/
* [15] Y. Yang, A. Milanova, and M. Hirzel, “Complex Python Features in the Wild,” in _Proceedings of the 19th International Conference on Mining Software Repositories_, ser. MSR ’22. New York, NY, USA: Association for Computing Machinery, 2022, pp. 282–293. [Online]. Available: https://doi.org/10.1145/3524842.3528467
* [16] F. Cristiani and P. Thiemann, “Generation of TypeScript declaration files from JavaScript code,” in _Proceedings of the 18th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes_, ser. MPLR 2021. New York, NY, USA: Association for Computing Machinery, 2021, pp. 97–112. [Online]. Available: https://doi.org/10.1145/3475738.3480941
* [17] “DefinitelyTyped.” [Online]. Available: http://definitelytyped.org/
* [18] A. Rastogi, N. Swamy, C. Fournet, G. Bierman, and P. Vekris, “Safe & efficient gradual typing for TypeScript,” in _Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages_, ser. POPL ’15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 167–180. [Online]. Available: https://doi.org/10.1145/2676726.2676971
* [19] A. Feldthaus and A. Møller, “Checking correctness of TypeScript interfaces for JavaScript libraries,” _SIGPLAN Not._, vol. 49, no. 10, pp. 1–16, Oct. 2014. [Online]. Available: https://doi.org/10.1145/2714064.2660215
# Walking for short distances and turning in lower-limb amputees: a study in
low-cost prosthesis users
Nidhi Seethapathi1,3, Anil Kumar Jain2 and Manoj Srinivasan1
1 Mechanical and Aerospace Engineering,
The Ohio State University, Columbus OH 43210, USA
2Santokba Durlabhji Memorial Hospital, Jaipur, Rajasthan 302015, India
3 Department of Bioengineering, University of Pennsylvania,
Philadelphia PA 19104, USA
###### Abstract
Preferred walking speed is a widely-used performance measure for people with
mobility issues. Often these speeds are measured walking in a straight line
and walking over short distances. However, daily walking involves walking for
bouts of different distances and walking with turning. Here, we observe
walking for short distances and walking in circles in subjects with unilateral
lower-limb amputation using a Jaipur Foot prosthetic leg. We find that the
preferred walking speeds are lower for unilateral amputees for short
distances, but the distance-dependence of preferred walking speed is less
steep than for non-amputees. In circle walking, unilateral amputees slow down
when walking in circles of smaller radii and above-knee amputees in particular
walk faster when the prosthetic leg is outside the circle. Using simple
mathematical models, we show that the observed preferred walking speeds are
predicted by minimal energy cost with increased costs for walking, changing
speeds, and turning for amputees compared to non-amputees. These results
predict that the cost of changing speeds and turning may be substantial in
amputees but still not as high as the constant-speed cost. These findings will
inform prosthesis design and rehabilitation therapy to better assist changing
speeds and turning tasks in amputee walking.
## 1 Introduction
Preferred overground walking speed is commonly used to quantify a human
subject’s mobility improvement after being fit with a new prosthetic leg or
after undergoing physical therapy or rehabilitation from stroke, other injury,
or movement disorder [1, 2]. Such walking speeds are estimated using a variety
of tests in the lab, most commonly using the 6 minute walk test [3] or the 10
m walk test [4], but also by having subjects walk other short distances such
as 3 m [5], 4 m [6], 5 m [7], and 15 m [8, 9]. In healthy adults with no
movement disorders, the preferred walking speed for walking in a straight line
is distance-dependent [10]: the speed is systematically lower for shorter
distances and this distance-dependence can be explained by the larger
energetic cost of speeding up and slowing down for shorter distances [10].
Here, we propose this distance dependence of walking speed as a more complete
measure of preferred walking speed (than measuring the speed at just one
distance) and characterize it in unilateral amputees. This distance dependence
of walking speed can also help shed light on daily walking behavior in humans,
in which a considerable percentage of walking occurs in short bouts [11],
especially in amputees [12].
Another important aspect of daily walking behavior is that people often need
to walk along curves or with some turning. Indeed, for subjects in one
previous study, between 8% and 50% of all walking steps in daily life involved
turning [13]. So effective mobility requires the ability to walk with turning. As a
a way of quantifying turning ability and as an additional measure of the
effectiveness of a prosthetic leg or a physio-therapeutic intervention, here
we propose the measurement of preferred speeds while walking in circles of
different radii and characterize such walking speeds in unilateral amputees.
Although such curved walking interventions have been used for other populations [14, 15, 16], there is less literature on the subject for amputee walking. In healthy non-amputee adults, the tangential speed of walking
depends on the curvature of the circle walked: slower walking for smaller
circles or higher curvature, explained by the increased energetic cost of
walking with turning [17, 18]. Here, we test the hypothesis that walking
speeds depend on circle radius in unilateral amputees and whether the
prosthesis foot being the outer or inner foot affects circle-walking speeds.
Metabolic energy optimality has been used to make predictions for a number of aspects of overground and treadmill walking behaviors in healthy non-amputee adults [19, 20, 21]. Indeed, as noted above, the human preference for slower walking speeds over shorter distances and for slower walking speeds at smaller radii (higher curvatures) has been attributed to optimization of the corresponding energy costs [10, 17, 18]. However, there is less work making such energy-optimality-based predictions for other populations with amputations or other movement disorders [22, 23, 24]. Here, we examine this energy
minimization hypothesis in the context of amputee walking for short distances
and in curves.
Amputees walking with passive prosthetic legs usually have a higher steady
state metabolic cost [25] compared to non-amputees (but see recent work that finds no significant cost difference in some amputee sub-populations [26]).
However, it is not known whether amputees also have a higher cost of speeding
up and slowing down when walking. The interactions between the walking cost
and the changing speed costs may decide how amputees walk over short
distances. The higher the cost of speeding up and slowing down, the more it will affect the optimal walking speed at short distances. Similarly, while it
is known that walking along curves costs more energy than walking in a
straight line in non-amputee adults [17, 18], the energy cost of turning has
not been characterized in amputees. Thus, by viewing preferred walking
behavior in amputees through the lens of energy minimization, these short
distance and circle walking tasks provide insight into how their cost of
changing speeds and the cost of turning may compare to that of non-amputee
individuals.
Here, we studied a small population of unilateral amputees, both above- and
below- knee, using the Jaipur foot prosthesis, a low cost prosthesis used
widely in the developing world [27]. This prosthesis, developed by P. K. Sethi
and co-workers [27], was designed for facilitating practices common in India,
such as bare foot or shod walking over unpaved uneven terrain and squatting or
cross-legged sitting on the floor. The Jaipur foot is used in over 22
countries and by hundreds of thousands of amputees, and is the second most widely used prosthetic foot after the SACH foot [28]. Thus, this study adds to the small number of
biomechanical studies (e.g., [29, 30]) on the Jaipur foot, which is under-
studied despite being so widely used.
## 2 Methods
### Subject population.
The experimental protocol was approved by the Ohio State University Institutional Review Board and all subjects participated with informed verbal consent.
consent. All subjects ($N=12$ with 11 males, 1 female, $65.75\pm 12.6$ kg with
prosthesis and shoes, height $1.67\pm 0.09$ meters and age $39.67\pm 15.31$
years, mean $\pm$ s.d.) were unilateral amputees, out of which 7 subjects were
above-knee amputees and 5 were below-knee amputees. All subjects had a Jaipur
Foot prosthesis [27], manufactured and fit in the SDMH hospital in Jaipur; all
walking trials were also conducted at this location. Subjects walked
independently without using canes, crutches, hand rails, or other assistive
devices. The subjects did not carry any additional instrumentation. The
subjects performed two kinds of walking trials: (i) walking for short
distances and (ii) walking in circles, as described below (Figure 1).
### Walking for short distances.
Subjects were instructed to walk in a straight-line for five different short
distances: 4 m, 6 m, 8 m, 10 m and 23 m (Figure 1a). There were four trials
for each distance and trial order was randomized. Subjects were asked to “walk
the way they usually walk” and they had to start and end each trial standing
still, so they had to speed up from and slow down to rest. Average walking
speeds were estimated by measuring the time duration for each trial.
Figure 1: Overground walking experiment setup. We measured the preferred walking speed of unilateral amputees wearing a passive prosthetic leg in two conditions: a) walking a range of short distances, starting and
stopping each bout at rest and b) walking in circles of different radii, both
clockwise and anti-clockwise.
### Walking in circles.
Subjects were asked to walk in circles of three different radii: 1 m, 2 m and
3 m (Figure 1b), completing 5, 4 and 3 laps, respectively, for these radii.
For each radius, subjects performed two trials, once with the prosthetic leg
inside the perimeter and once with the prosthetic leg outside the perimeter of
the circle. Trial order was randomized over the circle radii and walking
directions. The average speed was obtained by measuring the total walking
duration and averaging over all laps for each trial. Subjects walked with the
circle between their two feet, maintaining a non-zero step width, rather than stepping on the circle with both feet.
### Mathematical model: walking for short distances.
For short distance walking, we compare the experimentally observed preferred
walking speed results to the walking speed predicted by minimizing the total
metabolic cost of the walking bout. For simplicity, we assume that people
walking a distance $D$ start from rest (specified in experiment), then
instantaneously speed up to some speed $v$, continue at that speed for the
whole distance and then instantaneously come to rest again. Thus, the total
cost of walking the distance includes the cost of accelerating from rest to
speed $v$ at the start, walking at constant speed $v$ and then decelerating to
rest at the end of the walking bout, given by the following equation
(analogous to the approach in [10]):
$E_{\mathrm{met}}=m(a_{0}+a_{1}v+a_{2}v^{2})\frac{D}{v}+\lambda\left(\frac{1}{\eta_{\mathrm{pos}}}+\frac{1}{\eta_{\mathrm{neg}}}\right)\left(\frac{1}{2}mv^{2}\right).$
(1)
Here, $\dot{E}=a_{0}+a_{1}v+a_{2}v^{2}$, in W kg${}^{-1}$ with $v$ in m s${}^{-1}$, models the per-unit-mass metabolic rate of walking at a constant speed $v$ (so that $m\dot{E}$ is the total metabolic rate for a subject of mass $m$) for both amputees and non-amputees, with $a_{0}=4.97$, $a_{1}=-5.78$, and $a_{2}=5.62$ for above-knee amputees [31], $a_{0}=3.24$, $a_{1}=-2.19$, and $a_{2}=2.89$ for below-knee amputees [32], and $a_{0}=2.22$, $a_{1}=0$, and $a_{2}=1.155$ for non-amputees [31]. All of these relations for $\dot{E}$ result in a classical U-shaped relationship between energy cost per unit distance and walking speed. The quantity $\lambda$ is a scaling factor between the kinetic energy $mv^{2}/2$ and the metabolic cost required to achieve it, where $\eta_{\mathrm{pos}}=0.25$ and $\eta_{\mathrm{neg}}=1.2$ are traditional muscle efficiencies for performing positive and negative mechanical work, respectively [33]. Because we did not directly estimate the cost of changing speeds in amputees here (as was done for non-amputees in [10]), we tested different $\lambda$ values to best explain the observed walking behavior.
The speed that minimizes the short-distance cost of walking $E_{\mathrm{met}}$ is obtained by setting $\mathrm{d}E_{\mathrm{met}}/\mathrm{d}v=0$ (the mass $m$ cancels), which gives the implicit equation $\lambda v^{3}\left({\eta_{\mathrm{pos}}}^{-1}+{\eta_{\mathrm{neg}}}^{-1}\right)/(a_{0}-a_{2}v^{2})=D$. This relation implies that shorter-distance bouts should have lower optimal speeds.
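Since this implicit equation has no closed-form solution for $v$, the optimal speed can be found numerically. The sketch below is our illustration, not the authors' code, and the $\lambda$ value is arbitrary; it uses bisection on $(0,\sqrt{a_{0}/a_{2}})$, over which the left-hand side increases monotonically from $0$ toward infinity:

```typescript
// Solve lambda * c * v^3 / (a0 - a2 * v^2) = D for v by bisection.
const etaPos = 0.25, etaNeg = 1.2;
const c = 1 / etaPos + 1 / etaNeg; // efficiency factor for +/- work

function optimalSpeed(D: number, a0: number, a2: number, lambda: number): number {
  let lo = 1e-9;
  let hi = Math.sqrt(a0 / a2) - 1e-9; // the optimum lies strictly below sqrt(a0/a2)
  for (let i = 0; i < 80; i++) {
    const v = 0.5 * (lo + hi);
    const lhs = (lambda * c * v ** 3) / (a0 - a2 * v ** 2);
    if (lhs < D) lo = v; else hi = v; // lhs grows with v on this interval
  }
  return 0.5 * (lo + hi);
}

// Above-knee coefficients from the text; lambda = 1 is illustrative only.
for (const D of [4, 6, 8, 10, 23]) {
  console.log(`${D} m: ${optimalSpeed(D, 4.97, 5.62, 1.0).toFixed(3)} m/s`);
}
```

Consistent with the model, the computed optimal speed increases with $D$ and approaches the long-distance optimum $\sqrt{a_{0}/a_{2}}$.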
### Mathematical model: Walking in circles.
Following [17, 18], who directly measured the cost for walking in circles for
non-amputee subjects, we analogously propose that the metabolic rate of
walking in a circle for amputees is
$\dot{E}=a_{0}+a_{1}v+a_{2}v^{2}+b_{2}(v/R)^{2}$, where $R$ is the circle
radius, with $a_{0,1,2}$ values as above. The cost per distance is given by
$\dot{E}/v=a_{0}/v+a_{1}+a_{2}v+b_{2}v/R^{2}$, which is minimized by the speed
$v=\sqrt{a_{0}/(a_{2}+b_{2}/R^{2})}$. Thus, the prediction is that the optimal
speed is smaller for smaller radius $R$. In the following, we chose $b_{2}$ to
best explain the speed reduction exhibited by our amputee subjects.
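The circle-walking minimizer, by contrast, is closed-form; a small sketch follows (the $b_{2}$ value below is an arbitrary placeholder, not the fitted coefficient):

```typescript
// Optimal tangential speed for circle walking: v = sqrt(a0 / (a2 + b2 / R^2)).
const circleSpeed = (R: number, a0: number, a2: number, b2: number): number =>
  Math.sqrt(a0 / (a2 + b2 / (R * R)));

// Above-knee coefficients from the text; b2 = 1.2 is illustrative only.
for (const R of [1, 2, 3]) {
  console.log(`R = ${R} m: ${circleSpeed(R, 4.97, 5.62, 1.2).toFixed(3)} m/s`);
}
```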
For both short-distance walking and circle walking, we also determine the speeds that are within 1% of the minimum energy cost, because the energy landscape near the minimum is typically flat, so a small change in speed results in an even smaller change in energy cost.
## 3 Results
### Preferred walking speed for unilateral amputees depends on distance
walked.
Both above- and below-knee amputees showed, on average, a decrease in
preferred walking speed for short distances. Pooled across all above- and below-knee subjects, the preferred walking speed for each of the short distances (4
m, 6 m, 8 m and 10 m) was significantly lower than the preferred walking speed
for the long-distance 23 m trial ($p<0.0025$ for each distance, left-tailed
paired $t$-test). The percentage decreases in the amputees’ preferred walking
speeds, compared to the long distance 23 m trial, are shown in Figure 2a. The
percentage decrease is higher for shorter distances: in above knee amputees,
the percentage decreases ranged from 7% for the 10 m walk to 13% for the 4 m
walk, on average. In below knee amputees, the percentage decreases ranged from
7% for the 10 m walk to 18% for the 4 m walk.
### The rate of speed decrease with distance is lower for amputees compared to
non-amputees.
Although we find that the preferred walking speed for amputees decreases with
distance walked, the rate of decrease is flatter compared to that for non-
amputee individuals (data repeated from [10]). As seen in Figure 2b, the best-
fit slope of the distance dependence of speed for non-amputee subjects is
about three times steeper than that for the above-knee amputee subjects and
about 1.5 times that for the below-knee amputees. Thus, the above-knee
amputees slowed down by a much smaller percentage for short-distance walking
trials, compared to both below-knee amputees and non-amputee subjects.
Figure 2: Decrease in preferred walking speed with distance walked for amputees. a) Amputees showed a decrease in preferred walking speed for short distances relative to their long-distance (23 m) walking speed. b) The rate of change in preferred walking speed with distance is steeper for able-bodied individuals than for the unilateral amputees.
### Increased cost of walking and changing speeds is consistent with flatter
speed-distance relationships.
Our simple optimization-based model of short distance walking predicts a
flatter distance-dependence of optimal walking speed for both above- and below-knee amputees (Figures 2, 3), when we take into account the increased constant-speed metabolic cost of walking previously measured in experiments [31, 32] and an increased cost of changing speeds compared to able-bodied subjects [10]. Compared to non-amputees, the scaling factor $\lambda$ for the cost of changing speeds was increased by a factor of 1.87 for below-knee amputees and 1.90 for above-knee amputees. Mechanistically, our model suggests
that the amputees do not slow down as much for short distances because the
cost of moving at a reduced speed for the whole distance outweighs the cost of
changing speeds from rest. As seen in Figure 3, the mean amputee preferred
walking speeds plus one standard error are within 1% of the optimal energy cost from this model.
Figure 3: Minimization of total metabolic cost captures slower short-distance walking speeds. The total cost of walking a short distance includes a constant-speed cost term and a changing-speed cost term. We find that minimizing this total cost predicts the observed trends in preferred walking speed with distance for both amputees and non-amputees. The error bars for the human data represent standard errors, and the filled bands represent the set of all speeds within 1% of the optimal energy cost.
### Preferred walking speeds for unilateral amputees walking in circles depend on the circle radii.
We find that, irrespective of the type of amputation, all the unilateral amputees that we observed slowed down when walking along circles of smaller radii (Figure 5), compared to walking a longer distance in a straight line. The slope of the decrease in preferred walking speed with the radius of the circle walked is quite similar for the able-bodied (0.1 s${}^{-1}$) and amputee populations (0.08 s${}^{-1}$). Using our optimization-based model to fit this decreasing speed trend, the best-fit scaling coefficient $b_{2}$ for the cost of turning was about six times the non-amputee value for below-knee amputees and about eight times the non-amputee value for above-knee amputees.
Figure 4: Preferred walking speeds for circle walking. a) The preferred walking speed for all the unilateral amputees decreased with the radius of the circle walked. b) Amputees, when pooled together, did not show a significant difference in preferred walking speed when walking with the prosthesis leg inside versus outside the circle. c) Above-knee amputees show a greater walking speed on average when the prosthesis leg is outside the circle.
Figure 5: Optimal walking speeds for circle walking. Minimizing the energy cost of walking in a circle predicts slower walking for smaller circles. Error bars shown for the data correspond to one standard error about the mean, and these are generally within the set of all speeds within 1% of the optimal energy costs (the shaded bands shown, as in Figure 3). The non-amputee data and model used for comparison are from [17, 18].
### Preferred walking speeds for above-knee amputees walking in circles are dependent on turning direction.
We had subjects walk both clockwise and anti-clockwise along circles drawn on the ground, so as to check for any effects due to having the prosthesis leg, as opposed to the intact leg, as the pivot. Considering all amputees together, we did not find significant differences between the two conditions ($p=0.096$). Considering just the above-knee amputees, with trials for all radii pooled, we found that they walked slightly faster when the prosthetic leg was outside the circle ($p=0.0008$, mean difference about 0.03 m/s, median 0.04 m/s).
### Preferred walking speeds decrease with degree of amputation.
For all the cases above, that is, straight-line walking for different distances and circle-walking at different radii, we found that the preferred walking speeds for the below-knee amputees were higher than those for the above-knee amputees and lower than the non-amputee preferred speeds (Figures 2-3). For straight-line walking over long distances, this result is known from past work [34]; the corresponding result for short-distance and circle-walking is new, even if unsurprising.
### Preferred gait initiation swing is usually with the affected limb.
In addition to measuring the preferred walking speeds, we noted whether the
subjects stepped forward with their affected or unaffected limb for their very
first step. Stepping forward with the affected limb corresponds to the first
swing phase being with the affected limb and the first stance phase being with
the unaffected limb. We found that 9 out of 12 subjects had over 80% of their
first steps be their prosthetic foot; the other three subjects had 69%, 36%,
and 0% of their steps start with swinging the affected limb. These leading
limb preferences are similar to those found in [35].
## 4 Discussion
Preferred walking speed is often used as a measure of progress in walking
rehabilitation for various populations, for instance persons with
neuromuscular disorders and amputees. An implicit assumption made when relying on such measures is that higher speed means more improvement. However, past
theoretical and experimental work on able-bodied subjects shows that the speed
at which people choose to move depends on the constraints of the motion itself
like the distance walked [10] and the curvature of the motion [17, 18]. So,
depending on the situation, people may sometimes move at a lower speed than
physically possible to satisfy some other objective, like minimizing energy.
Here, we find that these observations extend to amputee populations as well
and may have significant implications for the usage of preferred walking
speeds as a measure of performance. These implications are detailed in the
paragraphs below.
We find that above-knee and below-knee amputees slow down when walking short distances, as do non-amputees. This implies that the choice of distance over which the preferred walking speed is measured and interpreted during rehabilitation may systematically overestimate or underestimate the progress that the patient has made. Many studies that measure progress in people with walking disorders after rehabilitation do so using the preferred walking speed over very short distances. To circumvent these distance effects, we suggest measuring the speed over a few distances, not just one or two. Also, when comparing the preferred speed values for amputees to able-bodied values, we suggest comparing values for the same distance walked, because the difference in walking speed between able-bodied individuals and above-knee amputees over short distances is smaller than the corresponding difference over longer distances.
An alternative to in-lab measurements of preferred speeds is to track
subjects’ speeds and movements all day using body-worn sensors such as
pedometers, IMUs, and GPS [36, 37, 38]. Ultimately, it is these speeds during daily living that are of relevance to quantifying mobility. Such ambulatory
measurements provide an opportunity to independently corroborate the results
in this study, by characterizing the speeds over bouts of different lengths
and walks with turns that naturally occur during daily life.
Everyday walking consists of not just straight line walking but also turning.
Here, we find that unilateral amputees also slow down when taking sharp turns
(circles of smaller radii) similar to non-amputees [17, 18]. We also find that
the corresponding walking speeds are lower for the amputees for all radii,
compared to those for able-bodied individuals. For all these reasons, we
propose that circle-walking lends itself as an additional useful measure of
walking performance during rehabilitation. Unlike the straight-line walking trials, we were not interested in capturing the effects of speeding up from and slowing down to rest for circle walking; we believe that the multiple laps used make such distance-dependence effects negligible.
In the past, researchers have predicted aspects of walking behavior such as walking speed, step width, and step frequency using energy minimization as a hypothesis. However, most of these studies look at constant-speed
straight line walking on treadmills. Moreover, a majority of these studies
attempt to predict non-amputee walking behavior. Here, we provide evidence
that energy-minimization can predict aspects of non-steady overground walking
behavior in a disabled population. Specifically, we have found that minimizing
a combined cost of constant-speed walking and a cost for changing speeds
captures the short-distance walking speed trends in amputees; similarly,
adding a cost for turning captures the circle-walking speed trends.
We have studied the preferred walking speeds of unilateral amputees wearing a
passive prosthetic leg (Jaipur Foot) used in a number of developing countries.
Using preferred walking speeds as a performance measure may be especially relevant in resource-limited settings, where access to other measures of performance, such as a gait lab with motion capture and force plates, may be restricted. Thus, we feel that our conclusions regarding the use of preferred walking speeds as a performance measure are most relevant where walking-speed-based mobility measures are used more exclusively.
Fitting the short-distance walking model and the circle-walking model to the amputee walking speed behavior, we found that the scaling factors for the cost of changing speeds and for the cost of turning ($\lambda$ and $b_{2}$) are much higher than for non-amputees. To test whether these increased cost estimates are accurate, one could directly measure the metabolic cost of changing speeds and the cost of turning, as in [10, 17, 18]. Stability considerations may be an alternative to metabolic cost as the determinant of reduced speeds, especially for turning in a circle. One could examine this alternative hypothesis by having subjects walk at different speeds and estimating simple measures of stability [39, 40].
For short-distance walking, we measured the average speed over the whole bout
of the amputees. This average speed includes the acceleration and deceleration
periods. So the reduction in average speed is partly due to a greater portion
of the bout being spent in acceleration-deceleration and partly due to reduced
walking speed. This was the case in the earlier non-amputee study as well
[10]. In that study, walking slower for shorter distances was a real choice, as the subjects could certainly cover the distance in a shorter time. While we did not test this here by asking the amputees to walk faster than they preferred, we suggest, based on prior studies with amputee
populations, that they can typically walk much faster than their preferred
speeds (e.g., [41, 31]). Specifically, we do not know of any demonstrated
instance in which someone’s preferred speed is also their maximum possible
speed, although it may be possible.
One limitation of this study is the relatively small sample size of subjects.
The experiments were conducted in a hospital in India and not all the patients
had the time or resources to participate in the study. Small sample sizes are
common with such amputee populations. Despite the small sample size, the
qualitative results for the amputees are robust. A follow-up study will be
conducted to investigate the population with a larger sample size. Another
potential limitation is that we did not limit our amputee sample by years
since amputation and age. However, we believe that this limitation is offset
by the fact that we focus here on the change in preferred speeds of each
subject under different conditions relative to his/her own long-distance
straight-line walking speed. So, the individual differences in terms of years
of being accustomed to wearing the prosthesis will not affect our results. We
compare the non-amputee walking speeds of a diverse group of college or
graduate student-age subjects [10, 17, 18] in the USA to those of Indian
amputees. It would perhaps be more appropriate to compare with age and size-
matched Indian non-amputee adults. However, the variations in walking speed due to location or country reported in past studies [42] are much smaller than the differences between non-amputee and amputee populations that we observe here. So, our fundamental findings should not change even with a comparison to the more appropriate Indian able-bodied population.
It would also be useful to repeat these experiments in other subject
populations, including amputees wearing other prostheses with different
mechanical properties. The Jaipur foot is a unique prosthesis, designed for
Indian life and has features that make it mechanically different compared to
other prostheses. Specifically, it has more mobility in the ankle and subtalar
joint to allow squatting, sitting cross legged, and walking over uneven
unpaved surfaces. It has three main pieces: a heel block and fore-foot-toe
block made out of micro-cellular rubber bonded to a wooden ankle block. Tread
rubber is used on the undersurface to provide traction and the entire assembly
is covered in a skin colored compound to provide cosmesis and water proofing.
We speculate that these construction features give more compliance, allowing
its users to perform more three dimensional tasks; future work should consider
more detailed biomechanical analyses for such 3D tasks.
Finally, given the simplicity of these tasks, we propose that these distance-
dependence of walking speeds and radius-dependence of walking speeds be used
as routine measures of mobility not just in amputees, but also other subject
populations such as the elderly and those with or recovering from other
movement disorders.
### Acknowledgments.
The authors thank Dr. Harlal Singh Mali for hosting N.S. in his lab briefly
during her visit to Jaipur and for initially facilitating interactions.
### Ethics Statement
All subjects participated with informed consent and the experimental protocol was approved by the Ohio State University Institutional Review Board.
### Funding Statement.
For this study, N.S. received funding from Schlumberger Foundation Faculty for
the Future Fellowship, The Ohio State University Global Gateway Grant and the
Alumni Grant for Graduate Research and Scholarship. M.S. was supported in part by an NSF CMMI grant.
### Data Accessibility.
All data and code will be made available upon acceptance.
### Competing Interests.
We have no competing interests.
### Authors’ Contributions.
N.S. conceived the study, collected the data, analyzed the results, performed
the mathematical analyses, and wrote the paper. A.J. provided guidance for
data collection and edited the draft. M.S. provided ideas for designing the
study, suggested additional analyses, and edited the draft.
## References
* [1] Richard W Bohannon. Comfortable and maximum walking speed of adults aged 20–79 years: reference values and determinants. Age and ageing, 26(1):15–19, 1997.
* [2] AM Boonstra, V Fidler, and WH Eisma. Walking speed of normal subjects and amputees: aspects of validity of gait analysis. Prosthetics and Orthotics International, 17(2):78–82, 1993.
* [3] Nancy D Harada, Vicki Chiu, and Anita L Stewart. Mobility-related function in older adults: assessment with a 6-minute walk test. Archives of physical medicine and rehabilitation, 80(7):837–841, 1999.
* [4] S Amatachaya, S Naewla, K Srisim, P Arrayawichanon, and W Siritaratiwat. Concurrent validity of the 10-meter walk test as compared with the 6-minute walk test in patients with spinal cord injury at various levels of ability. Spinal Cord, 52(4):333, 2014.
* [5] James E Graham, Glenn V Ostir, Steven R Fisher, and Kenneth J Ottenbacher. Assessing walking speed in clinical research: a systematic review. Journal of evaluation in clinical practice, 14(4):552–562, 2008.
* [6] Denise M Peters, Stacy L Fritz, and Debra E Krotish. Assessing the reliability and validity of a shorter walk test compared with the 10-meter walk test for measurements of gait speed in healthy, older adults. Journal of geriatric physical therapy, 36(1):24–30, 2013.
* [7] Christopher M Wilson, Stephanie R Kostsuca, and Judith A Boura. Utilization of a 5-meter walk test in evaluating self-selected gait speed during preoperative screening of patients scheduled for cardiac surgery. Cardiopulmonary physical therapy journal, 24(3):36, 2013.
* [8] Bruce H Dobkin. Short-distance walking speed and timed walking distance: redundant measures for clinical trials? Neurology, 66(4):584–586, 2006.
* [9] Ruth Dickstein. Rehabilitation of gait speed after stroke: a critical review of intervention approaches. Neurorehabilitation and neural repair, 22(6):649–660, 2008.
* [10] Nidhi Seethapathi and Manoj Srinivasan. The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates. Biology letters, 11(9):20150486, 2015.
* [11] Michael S Orendurff, Jason A Schoen, Greta C Bernatz, Ava D Segal, and Glenn K Klute. How humans walk: bout duration, steps per bout, and rest duration. Journal of Rehabilitation Research & Development, 45(7), 2008.
* [12] Glenn K Klute, Jocelyn S Berge, Michael S Orendurff, Rhonda M Williams, and Joseph M Czerniecki. Prosthetic intervention effects on activity of lower-extremity amputees. Archives of physical medicine and rehabilitation, 87(5):717–722, 2006.
* [13] Brian C Glaister, Greta C Bernatz, Glenn K Klute, and Michael S Orendurff. Video task analysis of turning during activities of daily living. Gait & posture, 25(2):289–294, 2007.
* [14] Kristin A Lowry, Jennifer S Brach, Robert D Nebes, Stephanie A Studenski, and Jessie M VanSwearingen. Contributions of cognitive function to straight-and curved-path walking in older adults. Archives of physical medicine and rehabilitation, 93(5):802–807, 2012.
* [15] Marco Godi, Antonio Nardone, and Marco Schieppati. Curved walking in hemiparetic patients. Journal of rehabilitation medicine, 42(9):858–865, 2010.
* [16] Rebecca J Hess, Jennifer S Brach, Sara R Piva, and Jessie M VanSwearingen. Walking skill can be assessed in older adults: validity of the figure-of-8 walk test. Physical therapy, 90(1):89–99, 2010.
* [17] G. L. Brown. Nonlinear Locomotion: Mechanics, energetics, and optimality of walking in circles and other curved paths. PhD thesis, Ohio State University, 2012.
* [18] G. L. Brown and M. Srinivasan. Walking with turning: Metabolic energy, energy optimality, and preferred speeds while walking in circles, walking on complex paths, and path planning. submitted, in revision, 2019.
* [19] MY Zarrugh, FN Todd, and HJ Ralston. Optimization of energy expenditure during level walking. European journal of applied physiology and occupational physiology, 33(4):293–306, 1974.
* [20] Manoj Srinivasan. Optimal speeds for walking and running, and walking on a moving walkway. Chaos: An Interdisciplinary Journal of Nonlinear Science, 19(2):026112, 2009.
* [21] J Maxwell Donelan, Rodger Kram, et al. Mechanical and metabolic determinants of the preferred step width in human walking. Proceedings of the Royal Society of London B: Biological Sciences, 268(1480):1985–1992, 2001.
* [22] James M Finley, Amy J Bastian, and Jinger S Gottschall. Learning to be economical: the energy cost of walking tracks motor adaptation. The Journal of physiology, 591(4):1081–1095, 2013.
* [23] Nicholas P Fey, Glenn K Klute, and Richard R Neptune. Optimization of prosthetic foot stiffness to reduce metabolic cost and intact knee loading during below-knee amputee walking: a theoretical study. Journal of biomechanical engineering, 134(11):111005, 2012.
* [24] Matthew L Handford and Manoj Srinivasan. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs. Scientific reports, 6:19983, 2016.
* [25] RL Waters, Jacquelin Perry, DANIEL Antonelli, and Helen Hislop. Energy cost of walking of amputees: the influence of level of amputation. J Bone Joint Surg Am, 58(1):42–46, 1976.
* [26] Elizabeth Russell Esposito, Kelly M Rodriguez, Christopher A Ràbago, and Jason M Wilken. Does unilateral transtibial amputation lead to greater metabolic demand during walking. J Rehabil Res Dev, 51(8):1287–96, 2014.
* [27] PK Sethi, MP Udawat, SC Kasliwal, and R Chandra. Vulcanized rubber foot for lower limb amputees. Prosthetics and orthotics international, 2(3):125–136, 1978.
* [28] Peter Howitt, Ara Darzi, Guang-Zhong Yang, Hutan Ashrafian, Rifat Atun, James Barlow, Alex Blakemore, Anthony MJ Bull, Josip Car, Lesong Conteh, et al. Technologies for global health. The Lancet, 380(9840):507–535, 2012.
* [29] AP Arya, A Lees, HC Nerula, and L Klenerman. A biomechanical comparison of the sach, seattle and jaipur feet using ground reaction forces. Prosthetics and Orthotics International, 19(1):37–45, 1995.
* [30] P Lenka and R Kumar. Gait comparisons of trans tibial amputees with six different prosthetic feet in developing countries. IPJMR, pages 8–14, 2010.
* [31] S. M. H. J. Jaegers, L. D. W. Vos, P. Rispens, and A. L. Hof. The relationship between comfortable and most metabolically efficient walking speed in persons with unilateral above-knee amputation. Arch Phys Med Rehabil, 74:521–525, 1993.
* [32] J. J. Genin, G. J. Bastien, B. Franck, C. Detrembleur, and P. A. Willems. Effect of speed on the energy cost of walking in unilateral traumatic lower limb amputees. Eur J Appl Physiol, 103:655–663, 2008.
* [33] Manoj Srinivasan. Fifteen observations on the structure of energy-minimizing gaits in many simple biped models. Journal of The Royal Society Interface, 8(54):74–98, 2010.
* [34] Sarah J Mattes, Philip E Martin, and Todd D Royer. Walking symmetry and energy cost in persons with unilateral transtibial amputations: matching prosthetic and intact limb inertial properties. Archives of physical medicine and rehabilitation, 81(5):561–568, 2000.
* [35] Aline H Vrieling, HG Van Keeken, T Schoppen, E Otten, JPK Halbertsma, AL Hof, and K Postema. Gait initiation in lower limb amputees. Gait & posture, 27(3):423–430, 2008.
* [36] Nancy L Dudek, Omar D Khan, Edward D Lemaire, Meridith B Marks, and Leyana Saville. Ambulation monitoring of transtibial amputation subjects with patient activity monitor versus pedometer. Journal of Rehabilitation Research & Development, 45(4), 2008.
* [37] John R Rebula, Lauro V Ojeda, Peter G Adamczyk, and Arthur D Kuo. Measurement of foot placement and its variability with inertial sensors. Gait & posture, 38(4):974–980, 2013.
* [38] Brenton Hordacre, Christopher Barr, and Maria Crotty. Use of an activity monitor and gps device to assess community activity and participation in transtibial amputees. Sensors, 14(4):5845–5859, 2014.
* [39] Yang Wang and Manoj Srinivasan. Stepping in the direction of the fall: the next foot placement can be predicted from current upper body state in steady-state walking. Biology letters, 10(9):20140405, 2014.
* [40] Nidhi Seethapathi and Manoj Srinivasan. Step-to-step variations in human running reveal how humans run without falling. eLife, 8:e38371, 2019.
* [41] Henry J Ralston. Energy-speed relation and optimal speed during level walking. Internationale Zeitschrift für Angewandte Physiologie Einschliesslich Arbeitsphysiologie, 17(4):277–283, 1958.
* [42] Marc H Bornstein and Helen G Bornstein. The pace of life. Nature, 259(5544):557, 1976.